S4U extensions in krb5 1.8

September 14th, 2009 by ghudson

I recently merged Luke Howard’s work on the Microsoft S4U extensions into the krb5 trunk.  This feature is primarily of interest to application developers.  The S4U extensions are a combination of two services: S4U2Self, which allows a service to get a ticket to itself on behalf of a user, and S4U2Proxy, which allows a trusted service to get credentials to another service on behalf of a user.

To understand how these services are useful, consider the case of a webmail gateway.  Let’s say your organization has an IMAP email service with krb5 support (via GSSAPI and SASL), and you want to provide a webmail interface to that service.  Some power users might be able to authenticate to the webmail interface using HTTP Negotiate with krb5, but most will likely use certificates or passwords over TLS.

Fundamentally, a gateway service of this type must be trusted with users’ email, since there is no way to transmit commands end-to-end between the user and the IMAP service.  So, how does the webmail service authenticate to the IMAP service?  One possibility is for the webmail service to support authentication only by password, and use the password to obtain credentials for the user.  This is how it is generally done today.  This approach gives up the potential for single sign-on, teaches users to type their passwords into web pages which request them, and can’t work for user principals which require PKINIT or hardware preauth systems.

With the S4U extensions, the webmail service can operate in a more flexible manner.

  • First, using S4U2Self, the webmail service can authenticate a user by whatever means, and then request a ticket from the user to itself, just as if the user had used HTTP Negotiate with krb5.  Such a ticket allows the webmail service to examine any authorization data for the user.  Most MIT krb5 deployments do not make use of KDC authorization data at this time, so that part isn’t especially interesting, but this ticket is also useful for the next step.
  • Second, using either the ticket obtained from S4U2Self or from HTTP Negotiate, the webmail service can use S4U2Proxy to obtain credentials on behalf of the user to the IMAP service.  Traditionally, this was only possible if the client used Kerberos authentication and forwarded a TGT to the webmail service.

To perform the first step, the webmail service would use the new API gss_acquire_cred_impersonate_name, passing its service credentials as the impersonator_cred_handle and the user’s name as the desired_name.  The underlying GSSAPI code will perform an S4U2Self request to the KDC.  Despite the word “impersonate,” this part of the authentication is pretty innocuous and requires no special privileges on the part of the service, since it only allows a service to impersonate users to itself.  (In theory, a service can mint arbitrary service tickets to itself without the help of the KDC, although that wouldn’t give it access to any authorization data.)
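Here is a minimal sketch of this first step (error handling trimmed; the helper name and user principal string are hypothetical, but gss_acquire_cred_impersonate_name is the new API described above, declared in MIT krb5’s <gssapi/gssapi_ext.h>):

    #include <string.h>
    #include <gssapi/gssapi.h>
    #include <gssapi/gssapi_ext.h>
    #include <gssapi/gssapi_krb5.h>

    /* Hypothetical helper: given the service's own credentials (from
     * gss_acquire_cred) and a user principal name such as
     * "user@EXAMPLE.COM", acquire S4U2Self credentials for that user. */
    static OM_uint32
    acquire_user_creds(gss_cred_id_t service_creds, const char *princ,
                       gss_cred_id_t *user_creds_out)
    {
        OM_uint32 major, minor;
        gss_name_t user = GSS_C_NO_NAME;
        gss_buffer_desc buf;

        buf.value = (void *)princ;
        buf.length = strlen(princ);
        major = gss_import_name(&minor, &buf,
                                (gss_OID)GSS_KRB5_NT_PRINCIPAL_NAME, &user);
        if (GSS_ERROR(major))
            return major;

        /* S4U2Self: ask the KDC for a ticket from the user to ourselves. */
        major = gss_acquire_cred_impersonate_name(&minor, service_creds,
                                                  user, GSS_C_INDEFINITE,
                                                  GSS_C_NO_OID_SET,
                                                  GSS_C_INITIATE,
                                                  user_creds_out, NULL, NULL);
        (void)gss_release_name(&minor, &user);
        return major;
    }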

To perform the second step, the webmail service would use the existing gss_init_sec_context API.  The claimant_cred_handle argument to this call would be either the credentials obtained with gss_acquire_cred_impersonate_name or, if the user authenticated with HTTP Negotiate, the delegated_cred_handle returned by gss_accept_sec_context.  Previously, delegated_cred_handle was only filled in if the user forwarded a TGT during authentication, but in 1.8 it is filled in whenever the application requests it.  The GSSAPI code will perform an S4U2Proxy request to obtain credentials on behalf of the user.  If the KDC is an MIT krb5 1.8 KDC, this request will only succeed if the webmail service principal has the new “ok_to_auth_as_delegate” flag set.
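And a sketch of the second step, under the same caveats: user_creds is the handle from the first step (or the delegated_cred_handle from gss_accept_sec_context), and imap_name is assumed to be a gss_name_t imported from something like “imap/mail.example.com”.  The caller initializes *ctx to GSS_C_NO_CONTEXT and loops while the call returns GSS_S_CONTINUE_NEEDED, exchanging tokens with the IMAP server (e.g. inside a SASL GSSAPI exchange):

    /* Hypothetical helper: one pass of the context establishment loop. */
    static OM_uint32
    start_imap_context(gss_cred_id_t user_creds, gss_name_t imap_name,
                       gss_ctx_id_t *ctx, gss_buffer_t out_tok)
    {
        OM_uint32 minor;

        /* With user_creds as claimant_cred_handle, the underlying krb5
         * mechanism issues an S4U2Proxy request to the KDC. */
        return gss_init_sec_context(&minor, user_creds, ctx, imap_name,
                                    (gss_OID)gss_mech_krb5,
                                    GSS_C_MUTUAL_FLAG, GSS_C_INDEFINITE,
                                    GSS_C_NO_CHANNEL_BINDINGS,
                                    GSS_C_NO_BUFFER, /* first call: no input */
                                    NULL, out_tok, NULL, NULL);
    }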

We hope this new feature encourages the development of better service intermediaries, as well as allowing better interoperability with Microsoft.  For further information, see the project page for the S4U work or the Microsoft documentation of the S4U protocol extensions.  krb5 1.8 is planned to be released around March 2010, plus or minus three months.

Key Accomplishments - Part 2

September 11th, 2009 by sbuckley

2. Knowing the customer

One of the problems with MIT Kerberos in particular is that our license is so permissive that organizations can download and use our code without our ever knowing who they are or how they are using it.  Some might see this as a “feature”, but I see it as a bug.  I want to know who our customers are!  Otherwise, how can we ask them what they need in terms of new features or functionality in order to make our product more useful?

Of course, we knew many of the major players already.  We have had close working relationships with Apple, Debian, Red Hat and Sun for many years.  What was surprising was how many other interesting uses people were finding for MIT Kerberos.

When the buzz about the launch of the Consortium got into the press, lots of organizations started identifying themselves as using our code in their products.  Centrify and TeamF1 were two of the first organizations to step forward and join the Consortium.  Centrify uses MIT Kerberos to integrate most platforms around Active Directory.  TeamF1 puts Kerberos on embedded devices for OEMs.

Many of our other customers were discovered by what I’ll call “benevolent stalking”.  I always thought it was interesting that many of the same people were answering very sophisticated questions on the krbdev list using nondescript email accounts, and also contributing patches to our RT system.  A few friendly introductory emails to some of these individuals yielded replies from four very large banks where Kerberos was the primary authentication mechanism.  This was great news, because we were now in touch with some of our very large organizational customers for the first time.

So we organized several conference calls with these companies last year as we were planning the feature set for our 1.7 release.  To our surprise, one of the banks was using a fairly old version of our code.  This was odd, and several members of our team worried that they had not upgraded due to concerns about the security of our more recent releases.  So we asked why they had not upgraded.  The answer was simple: they had a modified version of MIT Kerberos with features they depended on, and there wasn’t much in the more recent releases to make an upgrade worth the considerable cost!

This was information we needed to hear.  The conversation immediately turned to which features would make an upgrade worth the cost.  We identified two features that the bank depended on and put them on the work schedule for our 1.7 release.  We also asked many other organizations what features they would like to see, and soon patterns began emerging around a specific set of features that customers wanted.  So when we delivered Kerberos 1.7 this past June, for the very first time the features and improvements that went into it were based on what a large number of our customers actually said they wanted.

The 1.8 release features are being driven by the same process of asking customers what features they need most.  For example, we heard loud and clear that there was a need for a password lockout feature in MIT Kerberos.  We think the 1.8 release will be well received when it is released in Spring 2010.

Next installments:

3. Documentation
4. Database support
5. Better coding practices
6. A good test suite
7. Kerberos for Mobile
8. Release Kerberos 1.7
9. Simpler revenue model
10. Community Building

A Trusted Ticket System for Kerberos

September 3rd, 2009 by thardjono

In looking for solutions to deploying Kerberos in different environments, I’m always surprised to find interesting work people have done using Kerberos.  A case in point is the Diploma project of Andreas Leicher (at Fraunhofer SIT) on a Kerberos Trusted Ticket System.  The proposal put forward in this work is to use Trusted Platform Module (TPM) hardware to increase the security of the ticket request/issuance process in Kerberos.

For those who don’t follow the efforts of the Trusted Computing Group (TCG), the TPM is a piece of tamper-resistant hardware that is now present in most mid-level to high-end PCs from all the major OEMs (HP, Dell, Lenovo, etc.).  The major OEMs have in fact been shipping machines with TPMs for several years now, and the number of TPMs shipped is today well over 100 million.  It’s not clear whether Apple/Mac OS X machines have TPM hardware, since Apple has not made any formal statement.  However, bearing in mind that Apple now uses Intel-based hardware, it should not be surprising if the MacBook Pros also have TPMs.

There are a number of ways that a Kerberos deployment could make use of the TPM on a Client machine or on a KDC machine.  The most obvious is to seal the client-side keying material (when not in use) using the TPM, such as the keytab and the credentials cache.  Note that currently these are located on the client machine’s hard drive, and are thus subject to various attacks.  In this sealing scenario the TPM is used in its most basic usage mode, namely as a key storage device.  The Kerberos client could simply command the TPM to seal its keying material using a TPM-generated internal key.  The resulting (encrypted) blob is returned by the TPM/TSS and simply placed on the hard drive or other storage location (e.g. flash).
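As a rough illustration of this basic sealing mode, here is approximately what the flow might look like with the TrouSerS TSS stack on Linux.  This is a sketch only: error checking is omitted, keytab_bytes/keytab_len are assumed inputs, and it assumes the SRK uses the well-known all-zeros secret, which is common but not universal:

    #include <trousers/tss.h>

    /* Seal an in-memory blob (e.g. keytab bytes) under the Storage Root
     * Key; the sealed blob can then be written to disk or flash. */
    TSS_HCONTEXT ctx;
    TSS_HKEY srk;
    TSS_HPOLICY srk_policy;
    TSS_HENCDATA enc;
    TSS_UUID srk_uuid = TSS_UUID_SRK;
    BYTE wks[20] = { 0 };            /* TSS well-known SRK secret */
    BYTE *blob;
    UINT32 blob_len;

    Tspi_Context_Create(&ctx);
    Tspi_Context_Connect(ctx, NULL); /* local TPM */
    Tspi_Context_LoadKeyByUUID(ctx, TSS_PS_TYPE_SYSTEM, srk_uuid, &srk);
    Tspi_GetPolicyObject(srk, TSS_POLICY_USAGE, &srk_policy);
    Tspi_Policy_SetSecret(srk_policy, TSS_SECRET_MODE_SHA1,
                          sizeof(wks), wks);
    Tspi_Context_CreateObject(ctx, TSS_OBJECT_TYPE_ENCDATA,
                              TSS_ENCDATA_SEAL, &enc);
    Tspi_Data_Seal(enc, srk, keytab_len, keytab_bytes, 0 /* no PCRs */);
    Tspi_GetAttribData(enc, TSS_TSPATTRIB_ENCDATA_BLOB,
                       TSS_TSPATTRIB_ENCDATABLOB_BLOB, &blob_len, &blob);
    /* ...write blob/blob_len to the hard drive or flash... */
    Tspi_Context_FreeMemory(ctx, blob);
    Tspi_Context_Close(ctx);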

A more interesting use-case is for the TPM to perform some crypto operations pertaining to Kerberos, such as the encryption/decryption of the relevant parts of a ticket.  In this case the TPM could hold the long-term key of the client (and of the KDC), as well as the session keys.  More interestingly, the TPM could be asked to generate the symmetric keys used in Kerberos itself.  TPMv1.2 allows keys (upon generation) to be designated as non-migratable, meaning that the key is resident in and bound to the TPM hardware.  Keys can also be designated as certified migratable, meaning that they can be transferred from one TPM to another TPM using a secure migration protocol.  The TCG has published a specification for such a key migration protocol, and a couple of vendors have actually implemented it.

Another use-case is for Kerberos pre-authentication protocols (e.g. PKINIT) to use a TPM Certified Signing Key (CSK) to perform the public-key operations related to the pre-auth protocol.  Since a CSK is by definition TPM-resident and provably bound to a given TPM, the AS/KDC gains certainty that it is speaking with the same TPM.

The solution proposed by Andreas Leicher in his Thesis goes beyond the above ideas and uses the full potential of TPMv1.2 (see Chapter 6 of the Thesis).  Among other things, it proposes the use of some of the fundamental building blocks of trusted computing:

  • Measurements: Client platform integrity measurements are performed, and the results are reported to the KDC within the service-ticket request to the TGS.
  • TPM Quote: the TPM Quote function is called, reporting the status of the Platform Configuration Registers (PCRs) on the Client to the KDC.
  • Signing using an AIK: the TPM’s digital signing capabilities are exercised, using the Attestation Identity Keys (AIKs) to sign the measured system state.  This signing is part of the TPM’s reporting behavior and provides integrity protection on the reported measurements as they are delivered from the Client to the KDC.
  • Exercising the Certified Signing Key (CSK): the TGS uses the TPM CSK to sign relevant portions of the ticket, thereby ensuring that only the intended TPM can verify them (since only that TPM holds the matching RSA private key, which is bound to the TPM hardware).

All in all, this is a great Thesis project by Andreas.  It’s one of those projects that one wishes one had time to do oneself :-)

Integer overflows

September 1st, 2009 by ghudson

Most C programmers are familiar with buffer overflows, and know how to avoid them: delegate.  In krb5, for instance, our coding practices recommend using asprintf() for simple string concatenation, and the k5buf module for more complicated constructions.  If you delegate your string construction this way, you know you can’t overflow an output buffer unless you screw up really badly.

Integer overflows are comparatively shrouded in mystery.  Programmers may be vaguely aware that adding large numbers will produce something other than the sum of the operands, but don’t really know what to watch out for or how to avoid attacks.  You can’t delegate your integer operations without rendering your code unreadable (and slow).  The web is surprisingly unhelpful.  So, here’s what I’ve learned about reducing the risk of integer overflow attacks:

  • Every time you write code which parses a number out of a string, or otherwise gets it from another trust domain, imagine that you’re getting the largest number possible.
  • Subtract, don’t add.  If you find yourself writing “if (current + len > available)”, instead write “if (len > available - current)”.  If your code doesn’t allow current to grow larger than available, the subtraction will always yield a non-negative value and you can’t have overflow problems.  (However, you do still have to protect against the possibility of len being negative, if it’s a signed type and you constructed it naively.)
  • Use unsigned types for lengths when appropriate.  (But avoid comparing signed and unsigned types; the semantics of such comparisons are more complicated than you’d expect.)
  • If you do find yourself having to add unbounded values, you can generally test for overflow by checking if (a + b < a), assuming a and b are of the same unsigned type.  If you’re adding three values in one operation, or multiplying values, that technique doesn’t work.  (A short sketch of these checks follows this list.)
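Here is a minimal, self-contained sketch of the two checks above (the function names are just for illustration):

    #include <stddef.h>
    #include <stdio.h>

    /* "Subtract, don't add": room check for appending len bytes at
     * offset current in a buffer of size available.  Assumes the caller
     * never lets current exceed available, so the subtraction can't wrap. */
    static int
    fits(size_t current, size_t len, size_t available)
    {
        return len <= available - current;
    }

    /* Wraparound test for adding two values of the same unsigned type:
     * if the sum wrapped, it is smaller than either operand. */
    static int
    add_wraps(size_t a, size_t b)
    {
        return a + b < a;
    }

    int main(void)
    {
        printf("%d\n", fits(10, 5, 16));           /* 1: fits */
        printf("%d\n", fits(10, (size_t)-1, 16));  /* 0: huge len rejected */
        printf("%d\n", add_wraps((size_t)-1, 2));  /* 1: wraps */
        return 0;
    }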

Testing

September 1st, 2009 by ghudson

Here are some thoughts about krb5’s regression test suite.  This is based mostly on my experience with krb5, Subversion, and other projects, and I’m sure these ideas could be greatly refined through research in the field.

For the most part, we have two different kinds of tests in the krb5 tree: unit tests and system tests.  The unit tests are typically in the form of C source files beginning with “t_”, which are compiled and (usually) executed when you run “make check”.  Sometimes the test program is self-contained; sometimes it produces output which is compared to an expected output file.  In a few cases, test programs are not executed (sometimes because they are merely tools to facilitate manual testing), or are executed but produce output which is not verified.
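For illustration, a minimal self-contained test of this flavor might look like the following (t_example.c and some_function() are hypothetical stand-ins, not real krb5 sources); the program is simply compiled and run, and a nonzero exit status marks a failure:

    /* t_example.c - hypothetical self-contained "t_" unit test */
    #include <assert.h>
    #include <string.h>

    /* Stand-in for the library function under test. */
    static const char *some_function(void) { return "expected"; }

    int main(void)
    {
        /* assert() aborts with a nonzero exit status on failure,
         * which is what "make check" looks for. */
        assert(strcmp(some_function(), "expected") == 0);
        return 0;
    }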

Partly because the framework for unit testing is so ad hoc, these unit tests are easy to write, and are popular among krb5 developers for that reason.  The primary challenges for unit testing in krb5 are isolation, coverage, and organization:

  • By isolation, I mean the difficulty of testing components which talk to the network, to a database, or something else much more complicated than the component itself.  In the Java web programming world, it is common to use “inversion of control” to facilitate unit testing.  Instead of referring to lower-level modules directly, classes are constructed with references to the network object or the database object or whatever.  During unit testing, the classes can be constructed with dummy versions of those dependencies which are rigged to produce the desired results, or even to yield fake errors to exercise failure paths.  That’s a bit harder to do in C, unfortunately, so in krb5 a lot of code is bypassed in the unit tests and tested only by system tests.
  • By coverage, I partly mean the amount of code not covered by unit tests, and partly mean the difficulty in measuring what is covered.  I made some progress on measuring coverage by bringing back partial support for static linking, which allows the use of gcov.
  • By organization, I mean that the ad hoc nature of our unit tests makes them inflexible.  There’s no easy way to run unit tests without system tests, or to run all the unit tests and produce a report of which succeeded and which failed (instead, “make check” simply aborts on the first unit test failure), and no way to identify a particular unit test.  I’m not sure how important this problem really is.

Unit testing is great where it exists, because it allows code to be improved with confidence.  It’s no fun to be staring at a grotty function using outdated infrastructure and idioms, and knowing that if you bring it up to date you might introduce some subtle bug because the code has no tests.

Whereas unit tests exercise small isolated pieces of code, system tests exercise complete programs.  Most of our system tests are implemented in tcl and run in the dejagnu framework.  expect and dejagnu do not receive a lot of love and are sometimes buggy on any given machine, and there aren’t very many developers who are excited to learn more about tcl in order to write more krb5 system tests.  When I think about replacement infrastructure for dejagnu, I think about the following challenges:

  • Ease of setup and teardown: krb5 programs operate in an environment consisting of a KDC, a client, and (in many cases) a server, and in more complicated tests there may be multiple KDCs.  Test cases need to be able to construct environments to run programs in with minimal boilerplate.  This basically means that the testing infrastructure needs to be extensible with library functions, as dejagnu is with tcl.
  • Program interaction: expect automates the footwork of testing programs which interact with the user via the tty.  Either our replacement infrastructure needs to duplicate this functionality, or we need to structure our programs to avoid the need for tty interaction in test cases.  (That’s probably easier now that we are unbundling the rlogin/telnet/ftp applications and their system tests.)
  • Output usefulness: our current dejagnu test suites output files named krb.log and dbg.log.  I have not been blown away by the accessibility of this information.  Hopefully any replacement infrastructure would be able to produce tidier and more useful debugging output.
  • Debuggability of test failures: when a system test fails, what does a developer have to do in order to execute the relevant code inside a debugger?  For our current dejagnu tests, the answer varies from slightly annoying for the tests/dejagnu tests (add “spawn_shell” to the test case, figure out the exact command being executed, and execute it by hand under gdb in the spawned shell) to downright aggravating for the kadmin tests (add a sentinel loop to the appropriate part of the test case, attach gdb to the tcl interpreter in which the test code runs via bindings, set a breakpoint, touch a file to deactivate the sentinel loop, and continue the interpreter).  Any replacement infrastructure should have a decent answer to this question.
  • Performance: because of the amount of setup and teardown involved with each test case, system testing can be expensive.  In our case, because our software was originally designed to run on Vaxes, the actual setup and teardown costs are minimal, but the test suite can be slow because of sleep() statements peppered around the test suite, the delays from which are multiplied by multiple test passes.  We need to avoid these.
  • Barriers to entry: any reasonable system testing infrastructure necessarily involves a lot of locally built infrastructure to handle all the problems mentioned above.  How hard will it be for developers to come up to speed on all this machinery in order to write new tests?  The answer depends mostly on the quality of internal documentation.

I don’t have ready solutions in mind to these challenges.  Our preferred scripting language at this time appears to be Python, so future developments for the test suite infrastructure will probably lean in that direction.

Encryption type configuration in krb5 1.8

August 31st, 2009 by ghudson

Here’s a note about a little project I finished a few weeks ago.

We have three variables configuring what types of encryption can be used in krb5:  default_tgs_enctypes, default_tkt_enctypes, and permitted_enctypes.  In krb5 1.7 and prior, the syntax of these variables is just a list of enctype names.  That’s fine if you know exactly what you want, but not so helpful if you just want to add or remove some enctypes from the default list.  For example, if you want to disable DES and triple DES support, you could list all of the remaining enctypes, but then your krb5 installation wouldn’t support any future enctypes we add support for in later versions of krb5.

In krb5 1.7, we added the allow_weak_crypto variable, which globally disables enctypes we consider to be weak (chiefly single DES) if set to false.  That’s a step forward, but isn’t very flexible.

In krb5 1.8, you will be able to use a more flexible syntax for enctype configuration.  There are three additions:

  1. The word DEFAULT expands to the default list of enctypes.
  2. There are four defined “families” of enctypes based on the underlying cipher: des, des3, aes, and rc4.
  3. You can put a ‘-’ before an enctype or family name to remove it from the list.

So if you want to disable a specific enctype like AES256, you could write “DEFAULT -aes256-cts”.  If you want to disable whole families, you can do so succinctly with something like “DEFAULT -des -des3”.  If you want to use only specific families of enctypes, you can also do that succinctly by naming the families, like “aes des3”.  If you want to prefer a specific enctype, you can move it to the front of the list by writing something like “aes128-cts DEFAULT”.
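For instance, a hypothetical krb5.conf fragment using the new syntax to disable the DES and triple-DES families for all three variables might look like this:

    [libdefaults]
        # Start from the built-in default list, then drop the DES and
        # triple-DES families.
        default_tgs_enctypes = DEFAULT -des -des3
        default_tkt_enctypes = DEFAULT -des -des3
        permitted_enctypes = DEFAULT -des -des3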

There is a fourth variable of importance to enctype configuration.  It is called “supported_enctypes”, and determines the default combination of key/salt-type pairs used when a principal is created or its password is changed.  Because of the added factor of salt types, the syntax of this variable is unchanged for krb5 1.8; you have to explicitly list all of the key/salt types you want to use.  We looked into eliminating the concept of “salt type” for krb5 1.8 so that this variable could work just like the other three, but there turn out to be a few complications.

krb5 1.8 is planned to be released around March 2010, plus or minus three months.

Keeping House

August 20th, 2009 by ghudson

With any non-trivial software project, there are a variety of housekeeping tasks oriented around keeping development going and maintaining quality.  Housekeeping work needs to be kept in balance: too much focus on it and the project makes no progress; too little and the product becomes difficult to maintain and loses quality.  I signed onto the MIT Kerberos Consortium last October with the understanding that I would focus on these housekeeping issues, although I spend a fair amount of time on feature work.  Here are some of the areas I think about, without trying to draw too much focus away from user-visible improvements:

  • Code style consistency: MIT Kerberos has been in development for decades, and historically there has not been a strong focus on consistency of formatting and idioms.  We have agreed on a set of rules for new work; the challenge now is converting our large body of existing code to follow these principles.  For this release cycle we have decided to do a “great reindent”: basically, to eliminate the use of tabs in our C sources and ensure that all code uses four-space indents.  That’s only a small part of the style principles, but it’s one of the most noticeable parts.
  • Function size: Ideally, most functions in MIT krb5 would fit within a screenful of text and be understandable at a glance.  Unfortunately, many of our functions have accreted over time to huge lengths, such as the 615-line krb5_get_init_creds().  It is hard to change functions like these with confidence since they are so complicated.  At some point, I would like to spend some time decomposing these monster functions into smaller parts.  Going forward, we can learn from this problem–when adding to the work a function performs, use a helper function rather than open-coding a complicated series of steps into an already moderate-sized function.
  • Naming: At times, krb5 developers have been lazy about choosing intuitive names.  When a TGS request arrives at the KDC (i.e. when a client requests service tickets using its ticket-granting ticket), the KDC code invokes dispatch(), which invokes process_tgs_req(), which invokes kdc_process_tgs_req() and then does a whole lot of other stuff.  The latter two function names are essentially equivalent, making it unclear how labor should be divided between them.  A related problem is consistency of abbreviations; for example, in the kadmin library, we have filenames like lib/kadm5/srv/svr_policy.c and lib/kadm5/srv/server_dict.c.  It is hard to even type one of these filenames when the word “server” has to be mentally translated into the keystrokes svr, srv, or server at different times.  I would love to spend some time improving this situation at some point.  Going forward, we should pay attention to quality of naming when reviewing contributions and new work.
  • Code documentation: the biggest problem here is that we don’t have krb5 API documentation.  We have plans to use doxygen for this but we haven’t yet done the work.  We also don’t have a culture of documenting our internal function contracts; you generally just have to look at the function name and arguments and guess what it’s supposed to do, or read the code to find out what it actually does.  Changing that in a code base of this size is a pretty massive undertaking, which I hope to make a dent in at some point.  For new code, we should insist on at least a brief comment before each function explaining its purpose.
  • Regression testing: I could talk about testing for a long time, and I will do so in a later blog post.  The good news is that we have a fairly large test suite in krb5.  The bad news is that it isn’t comprehensive, much of it relies on technology which has fallen out of favor (dejagnu, and therefore tcl and expect), it isn’t always easy to add tests for new code, and it isn’t always easy to debug test failures.
  • Portability testing: Thanks to Ken Raeburn, we have some infrastructure for regularly running regression tests on a variety of platforms.  Unfortunately, we have not been spending enough time debugging the resulting test failures.  Some of them are amusingly obscure (for instance, a recent one was that ftpd was sending replies to the client in the wrong order if the server-side user doesn’t have a valid home directory, as was true of one of the test accounts), but we aren’t taking advantage of the testing infrastructure if we aren’t getting to a clean baseline.
  • Interoperability testing: We don’t do enough of this.  Events like the Interoperability Testing Workshop are a great opportunity to test new features against other implementations, but we should also be testing basic functionality against other implementations on a regular basis.  To accomplish this, we probably want to set up static infrastructure based on other implementations (such as a read-only Heimdal or Active Directory KDC) and create a separate suite of functional tests which invokes our code against that infrastructure.  We could also make more use of gssMonger.
  • Static analysis: We make use of Coverity, a commercial product, to scan our code for potential defects.  It tends to find a lot of unimportant problems in old, well-tested code, but is pretty good at uncovering real problems in new code.  Unfortunately, I have had limited time to clean up the defects it finds in the current krb5 code base (although I have mostly purged libkrb5 of defects since I signed on).
  • Build system: Like most open source C projects, we use autoconf for portability (but not automake or libtool) and make for the build itself.  I think autoconf is approaching the end of its useful life.  It’s good at ensuring portability to a lot of Unix systems few people care about, and bad about ensuring portability to Windows.  Autoconf tests are written in an obscure macro language (m4) on top of least-common-denominator Bourne shell; it’s rare to find a developer with the skill set to work with this, and rarer to find a developer who wants to.  Given infinite time, I would love to rework the krb5 build system using scons, which I think is the most promising successor to autotools and make.  (Update 8/25: After more reading, I think we would need to consider more carefully which cross-platform build tool to use, if we were to do this.)
  • Version control: We use Subversion, which is an industry standard for open source projects and mostly does a fine job of maintaining our project history.  However, it’s not very good at merging or distributed development.  At some point it might be worth considering a transition to git, which has probably been gaining the most traction among DVCS tools in the open source world.  So far, the pain of using Subversion hasn’t reached the point where this transition is a priority.
  • Bug tracking: Lest I sound too negative about everything, I think we have a perfectly satisfying bug tracking infrastructure using RT.  I wish we could keep up with the tickets better, but that’s a universal constant in open source work and probably in software work in general.
  • Release management: The accepted practice when I signed on was to have a major release (1.x) every 18 to 24 months, with a three-month testing period, and patch releases (1.x.y) as necessary.  Such a long release schedule makes it difficult to avoid the temptation to cram new features into the release during the testing period or in patch releases.  For the future, we have agreed to use a 6 to 12 month release schedule with a shorter testing period.
  • Project management: We use a pretty informal project management discipline defined here.  We made some modifications to it recently to hold more of the discussion of projects on the mailing list and less on the wiki.  By and large I like how it works, and don’t plan to make any changes.

As you can tell, it would be easy to spend all of my time on these issues alone, and probably all of several other people’s time as well.  For better or for worse, we don’t have that luxury.  If you have opinions about how we should be prioritizing the time we spend on code quality or on approaches we should take, your comments are welcome.

Key Accomplishments - Part 1

July 28th, 2009 by sbuckley

The 2009 MIT fiscal year ended on June 30th.  The end of one year’s budget and the beginning of a new one is always a good time to take stock of how much progress has been made.  FY 2009 was our first full year of operations since the Kerberos Consortium was founded in October 2007.

We started out FY 2009 with ten basic things we wanted to make substantial progress on by the end of the year.  I guess my assessment is that we didn’t get an “A”, but probably at least a “B+”.  Here’s a recap, broken up into ten parts.

1. An organization

This might seem like an odd goal, given that there has been a Kerberos development group at MIT for over 15 years.  However, that group was funded by MIT specifically to support MIT’s deployment of Kerberos.  MIT still uses and needs Kerberos, but as the technology matured, demands on the group were reduced, head count was cut, and releases became less frequent.  With the creation of the Kerberos Consortium, we needed a new type of organization.  We needed an organization that was externally focused, customer-centric, and execution-oriented.  We also needed an organization that could do more than just development work on the MIT implementation of Kerberos.  We needed to provide for the interoperability testing requirements of our sponsors, and provide expert advice at all levels.  Most importantly, we needed to provide the intellectual leadership that our sponsors expected of MIT, and that was required to move Kerberos into new areas, such as the web and mobile devices.

This change required an enormous cultural shift, and it took its toll.  But I’m pleased to say that we have given the core MIT team an infusion of fresh blood.  Tom Yu, who has been working on Kerberos for 15 years, was promoted to Development Team Leader.  We scooped up Zhanna Tsitkova from Novell, who has 15 years of experience in commercial IT security.  We also convinced Greg Hudson to join our team from another area at MIT.  Greg is an amazingly productive and clear-thinking engineer with lots of open source development experience.  Thomas Hardjono joined us as Strategic Advisor in December, after years at Verisign and as a long-time chair at the TCG.  Thomas is leading the evolution of Kerberos to the web.  Lastly, we also started hiring MIT undergraduate computer science students to work with us part-time.  Many of the most experienced Kerberos engineers in the workplace today got their first experience as students at MIT.  We now have two students working with us, so this important pipeline of new talent is being refilled.

Next installments:

2. Knowing the customer
3. Documentation
4. Database support
5. Better coding practices
6. A good test suite
7. Kerberos for Mobile
8. Release Kerberos 1.7
9. Simpler revenue model
10. Community Building

Welcome to the MIT Kerberos Consortium blog

July 28th, 2009 by thardjono

Welcome to the MIT Kerberos Consortium blog.  The MIT Kerberos Consortium was created to promote and establish Kerberos as a universal authentication platform for the Internet.

Kerberos, originally developed for MIT’s Project Athena, has grown to become the most widely deployed system for authentication and authorization in modern computer networks. Kerberos is currently shipped with all major computer operating systems and is uniquely positioned to become a universal solution to the distributed authentication and authorization problem of permitting universal “single sign-on” within and between federated enterprises and peer-to-peer communities.

The MIT Kerberos Consortium is intended to provide a mechanism by which the numerous organizations that have adopted Kerberos in the last two decades may participate in the continuation of what was previously funded as an internal MIT project. By opening participation in the ongoing Kerberos effort, it will be possible to expand the scope of the work currently performed to encompass numerous important improvements in the Kerberos system, and to engage in much needed evangelism among potential adopters.

Building upon the existing Kerberos protocol suite, we will develop interoperable technologies (specifications, software, documentation and tools) to enable organizations and federated realms of organizations to use Kerberos as the single sign-on solution for access to all applications and services.  We will also promote the adoption of these technologies so that ultimately all operating systems, applications, embedded devices, and Internet-based services can utilize Kerberos for authentication and authorization.