Five projects for Open Source for America, and other reports from the Open Source convention

By Andy Oram
July 24, 2009

A group of companies, advocacy groups, and open source projects announced Open Source for America at the O'Reilly Open Source convention on Wednesday. Trying to draw as many collaborators as possible into their coalition, they aim to establish a more effective voice supporting the use of free and open source software in the U.S. federal government. Their three overarching goals include two for the government itself--changing its policies and raising awareness--plus one that could help educate and change the organizations that want to inject open source software and practices into government machinery.

The creators and advisors I talked to said they had fashioned a broad opening announcement without trying to raise specific proposals or discuss highly technical issues. In fact, they are still working out issues of governance, with the hope of ensuring the organization reflects grass-roots concerns. Therefore, the work of fashioning an agenda for the organization is just starting. In that spirit I propose five projects:

  • To encourage all information to be stored in truly open formats. The touchstone of success is whether all the data stored can be retrieved by alternative means.
  • To instill government procedures that are friendly to projects that don't have well-organized corporate backers who can register with the government, and meet standards for liability, etc.
  • To train companies as well as independent projects on how to go through the steps required to be adopted by governments.
  • To encourage the government to develop its own Software as a Service offerings, requiring vendors to use an appropriate free software license, rather than depend on commercial services that were developed for other markets. (See a recent article of mine.) Update on October 30, 2009: Justin Seiferth, who signed up along with me for the acquisitions working group at OSA, suggested a change I support to this project: "To encourage the Government to examine all alternatives- commercial, in-house, existing open source, from domestic and foreign so long as they are open-source to fulfill its requirements." He, along with others, points out that we shouldn't discourage private companies and other organizations from offering services.
  • To join efforts at defining a maturity model for open source software, so governments can evaluate its quality, reliability, and security. This topic, incidentally, was covered by the O'Reilly book Open Source for the Enterprise. The Qualipso model being touted by some participants at OSCon looks complex, but it may well be the level of comprehensiveness needed, and that's why it calls for high-level involvement by large organizations like governments.

One of the OSA founders I talked to was Bill Vass, who was CTO of U.S. Army personnel systems for much of the 1990s, then part of the Secretary of Defense's Office of the CIO. Now he is president of Sun Microsystems Federal, the Sun division where he pushes government agencies to adopt more open source, constantly maneuvering around resistance from the Business Software Alliance and other anti-open-source lobbyists.

He pointed out that the National Institute of Standards and Technology developed a National Vulnerability Database, which documented as recently as 2007 that open source products were much more secure than proprietary ones, by a ratio of about seven to one.

Everything in the OSA documents has been familiar for a decade or more to free software activists. We've made impressive headway in government despite all the barriers thrown up, deliberately or inadvertently. With an administration in place that seems to "get it" when it comes to technology and free access, we have a good chance to turn the tide.

Some activists complain because the OSA stresses the term "open source" instead of "free." The only terminology that bothers me a bit is the other part of the title--America. Open source is by nature a universal good, contributed to from many continents and equally open to all users across the globe (as well as on other astronomical bodies). But the name serves to remind us that laws and policies are defined on a national or local basis, and that we have a lot of work to do on those levels.

IDEs for web developers?

I chatted with Steve Souders, whose performance monitor YSlow set up a kind of industry-wide competition and cottage industry for tools showing what happens when a web page loads. Steve introduced YSlow during work on his first book, High Performance Web Sites. By the time he released his second book, Even Faster Web Sites, all the big guys were in the game, trying to create cooler and cooler utilities that showed off more and more statistics in easy-to-view graphics.

For instance, I saw a new tool called MFast developed at MySpace by two engineers. Whereas YSlow reports on how a page's components are put together and served, MFast hooks right into the browser (IE, to be exact) and therefore can show lots of interesting statistics that YSlow cannot, such as CPU usage by the browser at various points in the download and display of files.

Listening to Steve and the MySpace developers talk about the ways they manipulate the various files that make up a web page, I got the idea that web developers would benefit from a dedicated IDE that provided special tools for making sure the right include files were put in the code and that supported the various optimization options developers have found.

Banging and breaking locks

Thanks to the popularity of embedded consumer devices, several developers at OSCon included material in their talks about reverse engineering devices, installing new components under the hood, and generally forcing systems to do things they were built to do but are prevented from doing by manufacturers.

Jesse Vincent gave a five-minute lightning talk that managed to squeeze in a list of interesting software that Amazon delivers with the Kindle, ways he exploited and added to that software, and tools he created for his Kindle. The conciseness of the talk is all the more impressive in that he cut it down from an earlier presentation he gave in Japan, that one all of ten minutes in length.

I discussed with him afterward why Amazon would include all sorts of software on the Kindle that they didn't use in the product but that made it easier for open source developers to turn the system to their own ends.

First, of course, came the decision to base the device on free software. The Kindle is based on Linux. As I described in another blog, Apple's use of BSD behind the iPhone (along with the symbolic information stored by Objective-C) facilitated the creation of the first SDK for the device by a community of free software hackers.

Jesse said the choice of free software is obvious because a manufacturer can save $6 per device in licensing fees. But I think the choice also involves quality (especially making good use of limited system resources) and familiarity. Amazon developers are obviously familiar with Linux, and Apple has used a version of BSD on the desktop for years (as did NextSTEP).

The next interesting question concerns why Amazon stripped out certain parts of the standard Linux packages (such as the telnet daemon in BusyBox) but left in so much else. It didn't seem to be laziness or ignorance. My guess is that the Amazon developers thought they themselves might need those files in future versions of the device, and that these developers, unlike perhaps some product managers and lawyers, don't give a darn whether their customers use the files in the meantime.

But reverse engineering is not just a clever way to thumb your nose at a manufacturer or do things they don't want you to do. It's a basic fact of life for developers. For instance, at a presentation on a simple language for writing iPhone tests called Cucumber, Ian Dees pointed out that someone had to unpack log messages and make guesses over many iterations about what the words meant in order to provide an interface to the events produced by the iPhone user interface. This was a basic, if time-consuming, step in creating tools for developers who want to simulate user input during tests.

These sorts of late-night and weekend projects can be useful to keep hacker skills honed. It will be exciting to see what these entrepreneurial folks can do with their skills after open devices become widespread and the programmers can get much more out of their time.

Miscellaneous observations

Jesse Vincent also gave a talk about his newest bug tracker, called SD. This bug tracker follows in the footsteps of distributed version control tools (Arch, Git, Mercurial, etc.) in allowing people to keep separate bug databases that they resync wherever convenient.

In fact, SD seems to take distributed management a step further than the version control tools, because you don't have to explicitly sync with each repository that's tracking the same bugs. So long as you sync regularly with someone on the distributed graph of repositories, changes will propagate through the whole network and everybody will eventually have the same information.
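
Here is a minimal sketch of that propagation model (my own illustration in Python; SD itself is written in Perl and its real data structures differ). The Replica class and the change records are invented for the example:

    # Illustrative only: each replica keeps a set of change records, and a
    # pairwise sync gives both sides the union of what they have seen.
    class Replica:
        def __init__(self, name):
            self.name = name
            self.changes = {}                 # change_id -> description

        def record(self, change_id, description):
            self.changes[change_id] = description

        def sync(self, other):
            merged = {**self.changes, **other.changes}
            self.changes, other.changes = dict(merged), dict(merged)

    alice, bob, carol = Replica("alice"), Replica("bob"), Replica("carol")
    alice.record("c1", "bug 17: status set to closed")
    alice.sync(bob)    # bob picks up c1
    bob.sync(carol)    # carol gets c1 without ever syncing with alice
    assert "c1" in carol.changes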

How does the system deal with conflicting changes made by different people before they sync up? It uses some simple heuristics that almost always work. For instance, the majority is usually right, so if enough people accept a change, it's made the official one. And if there is a lot of disagreement, the network of repositories can notice that and notify the administrators.
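
As a rough sketch of the majority-wins idea (again my own Python, not SD's code; the 60 percent threshold is an invented parameter), imagine collecting the value each replica holds for a disputed field:

    from collections import Counter

    def resolve(votes, threshold=0.6):
        """Resolve a conflicted field from the values different replicas hold.
        Returns (winning_value, needs_admin_review)."""
        counts = Counter(votes)
        value, count = counts.most_common(1)[0]
        if count / len(votes) >= threshold:
            return value, False    # a clear majority becomes the official value
        return None, True          # too much disagreement: notify the admins

    # Three replicas say the bug is closed, one says reopened: "closed" wins.
    print(resolve(["closed", "closed", "reopened", "closed"]))
    # An even split is flagged for the administrators instead.
    print(resolve(["closed", "reopened"]))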

I heard ThoughtWorks engineer Neal Ford speak about how modern dynamic and scripting languages make it easier to implement design patterns. Not only do modern languages provide constructs that implement some patterns, but introspection, hooks, and the interception of methods make the implementation of some other patterns quick work. These techniques also overlap with the design of domain-specific languages, of which Ford pointed to the aforementioned Cucumber as a fine example.
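
As one small example of the kind of shortcut Ford was describing (my own sketch in Python, not code from his talk), the Proxy pattern can be reduced to a few lines by intercepting attribute lookups, where a static language would need boilerplate for every forwarded method:

    class LoggingProxy:
        """Wrap any object and log each method call before forwarding it."""
        def __init__(self, target):
            self._target = target

        def __getattr__(self, name):
            # Called only for attributes not found on the proxy itself,
            # so every lookup is forwarded to the wrapped object.
            attr = getattr(self._target, name)
            if callable(attr):
                def intercepted(*args, **kwargs):
                    print("calling %s%r" % (name, args))
                    return attr(*args, **kwargs)
                return intercepted
            return attr

    numbers = LoggingProxy([3, 1, 2])
    numbers.append(7)   # prints "calling append(7,)" and forwards to the list
    numbers.sort()      # prints "calling sort()" and forwards to the list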

Finally, I attended Larry Wall's annual State of the Onion talk about Perl. A half-hour discussion of error messages would be hard for any presenter to keep interesting, and I can't say Larry totally succeeded in that regard. But he did show that Perl has kept its unique sense of acting in a context and "just doing what you would expect" as error messages for Perl 6 evolve.

At run-time, Perl 6 tries to soften the impact of errors in order to let applications degrade gracefully. In a world of multiprocessors and systems where components fail, one doesn't want an exception thrown by one thread to halt a whole application. Wall lets programs define various types of success and failure.

Compile-time errors are treated more strictly, but messages do their best to carry out a little mind-reading. The syntax itself often allows the compiler to determine what should be at the place where the error occurred and to suggest a specific fix. The parser is also aware of Perl 5 syntax and explains the problem to the developer when it comes across something that looks like obsolete syntax.

I attended a Birds of a Feather session about mobile devices, led by Stefano Maffulli of Funambol. We ranged over a number of interesting topics, including the opportunities for moving from GSM and CDMA to protocols such as LTE, and the proclivity of cell providers for tracking user behavior. Every HTC phone has a chip whose purpose is secret, but hackers know at least that it has access to all the contact information on the phone.

OSCon was a long event for me this year, particularly following the Community Leadership Summit, which I've covered separately. I reported on other events in this year's OSCon in another blog. The new venue in San Jose worked out fine. Conference chair Allison Randal pointed out that open source works independently of economic trends and continues to strengthen even as industries totter. OSCon itself seems to offer more and more each year, renewing itself on each occasion with new issues and topics.

