Running time: 00:30:34
Greg Kroah-Hartman is a longtime developer of the Linux kernel, known for his work maintaining USB drivers as well as for packaging the SUSE kernel at Novell. O'Reilly Media recently interviewed Greg about his claim that the Linux kernel now supports more devices than any other operating system ever has, his argument that binary-only drivers are illegal, and how the kernel development process works.
I've seen quotes from you and I've even heard from you in person that you believe Linux, the kernel, supports more devices than any other operating system ever.
I can back that up; it's true, and it's been independently verified by somebody from Microsoft.
What makes Linux capable of doing this? Is it the development process; is it the ease of writing drivers; is it sheer stubbornness on the part of people like you? What is it?
I think it's all of those. The ease of writing drivers; Linux drivers are normally about one-third smaller than Windows drivers or other operating systems' drivers. We have all the examples there, so it's trivial to write a new one if you have new hardware, usually because you can copy the code and go. We maintain them forever, so the old ones don't disappear, and we run on every single processor out there. I mean, Linux runs on 80% of the world's top 500 supercomputers right now and we're also the number one embedded operating system today. We've got both ends of the market, which is pretty amazing. I don't know why, but we're doing something right.
A lot of people listening to this or reading this are going to say "Now wait a second! I just bought a new computer and plugged in this peripheral and it didn't work. How can you say that you support all this great hardware when I just bought a new piece of hardware and it doesn't work?"
Yeah. The thing about drivers is that the vast majority, the sheer number, doesn't matter. You only care about what you have, so it becomes personal, and what you have is a very small number of devices. To be fair, I originally thought that too: we don't have a lot of devices that are supported, so let's work on that. So I started the Linux Driver Project, saying hey, we will write any driver for anybody and maintain it for free--for companies. I got an underwhelming response from companies. I got a huge response from developers. I have over 300 people willing to help out with this.
I heard it was 100; 300 is just amazing.
Yeah; we have 300 now--it was 100 in the first week, I think, or something. That's crazy. I went and asked every single hardware manufacturer, the big guys that ship the boxes--Dell, IBM, HP--what do you ship that isn't supported by Linux? They came back with nothing. Everything is supported by Linux. If you have a device that isn't supported by Linux that's being shipped today, let me know.
To be fair, there are two major classes of devices that we did find that have limited support. One is the video camera/webcam type of thing, and with the latest kernel we've done a lot of work there and there is a lot more device support. I think the next kernel that's going to come out supports almost everything that we know about, so those developers have gotten it together and are making great strides. If it doesn't work, contact them; they'll get it to work. They're doing really well on that. After that there are wireless devices. About a year ago wireless wasn't doing so well. We got a bunch of people working on that and everything is supported now. Atheros now has open source drivers. Intel--everything is supported; Marvell is supported. The one hold-out is Broadcom, but even they have Linux drivers; they're just closed source. There is Linux support out there if you have hardware. I really don't know of any major device out there that we don't support.
Of course there's a difference between a major device and a minor device, but....
There is, and for the minor devices, I scour Fry's, or if I'm traveling in Japan, the weird USB devices that are funky. We have support for those. I had one of the buttons that you push....
Right; you showed that one to me after you came back from Japan.
You're right; yeah, I saw that one, so that works with Linux now, you know. There are a lot of weird minor devices like that that we don't have the best support for, and to be fair, it's happening. Just talk to us and we'll talk with the company, get the specs on how the device works, and get it done.
Are you having more success talking to companies now after you've started the project or is that still slow-going in some cases?
I'm having really good success nowadays. A lot of companies are coming and talking to us and they want to get support before their hardware is out. They're working with us to announce drivers at the time their hardware is released. We're working with some network card guys; they're doing really good stuff, and some video capture guys who are working on really high-speed devices want support when their hardware is available, so it's working out really well.
Are you seeing this more from companies in Taiwan who don't really care about keeping their "intellectual property" really close or are you seeing it from all over the world?
From the US--yes; there's a company here in Portland that does high-speed video capture devices using USB. We have Linux drivers for them now. There's another company that does network cards. I have a one gigabit driver in my tree right now and I have a 10 gigabit driver for hardware that they don't even ship yet--that's in my tree; you can use it if you want, you just don't have the hardware yet. Those are US-based companies, so there are a lot of US-based companies that are realizing their customers are asking for Linux support. There are also Taiwanese companies; I'm talking with some other manufacturers there. Via is a great example; I've been talking with them for a while, and now that Harald Welte is working for them we're continuing that, and it's going very well. But it's not just the Taiwanese guys; it's everybody, to be fair.
Did something change in the past year or so to improve the situation?
I don't know; the word gets around, just being persistent; I don't really know--to be fair, I don't know. Two years ago I wasn't the one saying "we'll do the support," so I didn't know about that.
Even one year ago I don't think I had ever seen you say that the situation was improving.
No; you're right. One year ago it wasn't, but now I do feel like it is. It's enough to keep a couple of us working on this full-time, so I feel very good about it. I'm even working on VMware drivers right now too.
Yeah; not through them. They released a lot of their code under the GPL, so we took their drivers and cleaned them up and are working on getting them in the kernel tree, and by doing that I've actually talked to them and we're working with those developers after the fact.
One of the benefits I've always seen people cite for getting code into the kernel tree--really any benefit of getting code in upstream--is that someone else will maintain it for you. One of the reasons I've seen you give for why you really, really want to do this is not just because the kernel changes so fast, but because the more drivers you have in the tree, the more you can find similarities between them and find better abstractions and better ways of doing things and sharing code between them. Is that argument compelling to a device manufacturer?
To individual people, no; it's not a compelling one, because they just care about making their device and running with it and they don't care about other companies. As an example, I have a USB data acquisition driver in the kernel that some company in Germany originally created and wanted to get in, and we realized that about four different companies' drivers all did the same thing, so we merged them all into one. Now none of those companies have to maintain it; they all get their support for free. Users benefit from the fact that there's one smaller code base, and the original German company is very happy about it; they're ecstatic. They don't have to worry about it anymore and it just works for them. Actually, there's another company here in Portland that uses that same driver and they're very happy about it too. It makes people realize over time that the Linux development model has benefits, but it's really hard to convince them until they've gone through it.
I've heard people say that there's a mentoring process you have to go through--not just to get their code to a quality level where it will be accepted into somebody's tree for eventual inclusion, but to understand that there is a different development model and that it works. Is that an easy process or a difficult process, or does it really depend on the company?
Different companies work in different ways. Different engineers work in different ways. We've done a really, really big job of trying to document what our process is, how it works, and how to get involved. There's a how-to document in the kernel that gives you pointers to all the different things, and Jonathan Corbet from Linux Weekly News just published, I think, a 20-page paper detailing every step of how to do it (How to Participate in the Linux Community), so it's documented extremely well. We have people--myself included, and the people that I work with--who are glad to hand-hold companies through the process. If you have any questions about it, let me know and I can put you in contact with the right people.
You also make me wish I were a company here, Greg.
Let's talk more about the process, because you brought that up. One of the few ways most people ever interact with the kernel--if they're programmers at all--is through syscalls, and most people aren't programmers, of course. They only interact with the kernel because there are lots of other pieces of software running on top of it. That requires them to get the kernel packaged from somewhere. What do you think about people's impressions of the kernel being formed based on "I get a kernel from Red Hat, I get a kernel from Novell, I get a kernel from CentOS," where they've compiled these things in and compiled these things out? Is it difficult for you to see software mispackaged sometimes or horribly behind the curve, or is that just a fact of life here?
Well, I also have another hat; I also work for Novell, packaging their kernels for them. So I hope I'm doing a good job there.
I haven't heard anybody complain about your particular kernel.
Right; I'm one of the people. There are many of us that do that job at Novell; we have different products, and different people do different things. But I think you've got two different questions there. There's packaging in general: I think the distros do a very good job of taking the kernel, setting everything up properly, and giving it to the customers. I think that works out really well. The kernel is very configurable, but in doing that we try to make it so that the defaults all just work properly. The main configuration choice is which drivers you want to build in or build as modules, and the distros just build them all because we don't know what hardware you have. Then we automatically load what you need based on your machine, what you have there at the time, so I think they're packaging it very well. But then there's--you alluded to whether they're behind the times, and I have a big beef with the fact that the enterprise distros and a lot of the embedded companies ship very old kernels.
Not just very old kernels but very old kernels and they have to backport new features and drivers sometimes.
Exactly; they backport new features and drivers and stuff, and it's a big, big pain. I think that model is wrong, and I say this as somebody who gets paid for doing this. I've said I think it's wrong for a number of years, because if the developers are doing their job right, you should be able to replace the kernel, and everything that was working fine for you today should work for you tomorrow. If not, we did our job wrong. We don't break the API between the kernel and user space; we don't do that, yet people are very hesitant to change.
What they're used to with the traditional model of operating systems is that their kernel--their operating system--changes every four to five years, just due to the fact that there was a very slow rate of change, a small number of engineers, and not many features going in. Then you look at how users require support from big vendors like SAP and Oracle--not to pick on those, but there are a lot of these big products--that certify their products on a specific kernel or product. I'm trying to get those companies to switch away from that model. I've talked to a lot of customers that have very big IT shops, and they said we'd be glad to run the kernel of the day, if only SAP or Oracle would say that they would support that. I'm working on that; we're working on trying to do that. There are some fun things we can do with virtualization there, and some things we're looking at doing, but it would be really good if you could always just replace the kernel with the latest drivers and the latest updates and your program.... I mean, Oracle won't break; we're not going to change anything there.
I've upgraded my kernel once or twice and suspend stopped working but that's a different question. I wouldn't want to run Oracle on my laptop for example.
Right, right; we treat regressions very seriously. We'll rip out features we added if we find out that we've caused a regression. Suspend is--I bet the suspend just didn't resume, right?
Well no; actually I've seen kernel panics on hibernate occasionally.
Oh okay; that's not good.
But again I don't know whether that's one particular driver I have loaded or I put my wireless card in power saving mode and it didn't like that. It could have been a lot of things.
Right; and we'd be glad to help work on that. Suspend and resume is a tough problem--a very tough problem--and the hardest part is that we don't have access to specs for the laptops.
There are some companies that do work with us and the kernel people and suspend and resume works wonderfully on those machines, and if you look at the kernel developer and you buy the machine that they have, that usually works.
That's a bad thing, but companies have realized they have to start giving hardware away to the kernel people because of that too.
Exactly. I think Jim Zemlin said--this was before the new version of Ubuntu came out--he said "I'm running the pre-released version because I just handed [a kernel hacker] my laptop and he fixes it for me."
You want to live near a kernel developer. I get that; I've done that.
You mentioned API support--you never break the API for userland--but I'm going to quote you again here; you've said it's nonsense to have a stable ABI, and you've convinced me, but I'm not sure you've convinced everybody.
I will point out that every operating system breaks its internal kernel API every single time. The only way it doesn't break its API is if the operating system is dead and not being developed anymore. It's just a different timeframe. The Windows timeframe for internal API changes is four, five years--six years sometimes; Solaris--three, four, five years; AIX--six, seven years--but they change. Every operating system changes; it's not a new thing. It's just that Linux develops so much faster that you see it more.
Ask anybody that has done Windows driver development. I used to do Windows driver development before I started working on Linux. It changes all the time; I'm on the Windows driver development mailing list, and it's the best mailing list in the world because I hear things and I have bunches of quotes--lots of people saying "Hey, what happened? What changed in the service pack?" and they'll detail what changed and the people go off and fix their drivers. It changes all the time, so it is not unique to Linux by any means, and I want to make that very clear. There's a marketing perception out there that for some reason the API doesn't change in other operating systems, and that's just flat-out wrong.
Is it a convenient scapegoat for something else then?
I think it's an excuse used by people who don't understand the problem. You know, the fact that our API changes very, very quickly is not a problem once you get your driver into the kernel, because as the changes happen, your driver will be fixed along with them.
But if you're trying to get the driver in the kernel and it may take six months for that process I can see that not being pleasant every day.
Right; and I will agree with that, and that is why we have things like the linux-next tree to test out API changes, and my staging tree, where you can stage a driver for as long as you want and we will fix up the API changes as it goes. But if you're spending six months to write a driver, that seems a little bit suspect to me.
It may not be six months to write the driver but you do need to meet coding standards to get it in the kernel and you have to get a subsystem maintainer's attention right?
Right, and hopefully--I mean, the coding standard is well documented. We have tools now that will pretty much do it for you. That's trivial; I say it's trivial, but I spend a lot of my time fixing up other companies' drivers to meet that standard, so it is a lot of work. But it's simple work to do. After that it's--yeah, getting attention, and if you aren't getting people's attention, let me know or let Andrew Morton know and we will get those people's attention. There should be no excuse there. I think there's only one or two subsystem maintainers who do not get paid for their job; that's pretty rare these days. They all should pay attention to it and they all usually do. I don't know of anybody that falls down on the job.
It sounds like the big work now is just convincing people to work within the Linux development model.
Rather than trying to shoehorn Linux into their existing development model.
Exactly, and that is part of an education process within the company itself when they change, and it's awesome. It's an evolutionary process that I've seen lots of companies go through. I've worked on it at IBM, Novell, and all these other companies I work with on a day-to-day basis. It's a change, and it's different from their old model.
I agree with you, especially in terms of migration pain. If I upgrade every couple of weeks, that's a much smaller upgrade; much smaller changes are possible than if I only upgraded every two or three years.
You have the migration and upgrade pain either way, but it's like stubbing your toe versus breaking your leg. Neither one is very fun, but my toe is not going to be throbbing in five minutes, whereas it's going to take me six weeks to grow another leg.
Right; and some distributions out there provide a constant rolling upgrade. Debian and Gentoo are two very good examples of that. Once you install a Debian or Gentoo box, you usually never do a major install again, ever. There's a constant flow.
Hopefully; it's slow upgrades over time and it works very well. The Gentoo model is very nice; I like that little model.
Most kernel developers run Gentoo, don't they?
They did; I'm also a Gentoo developer. Gentoo and Debian were used a lot; a lot of the testing we did for the enterprise distros was done on Debian and Gentoo because we get good feedback and turnaround time that way. So we do play in both, but I also do a lot with openSUSE, and I know the Fedora developers. Those are development systems that are kept up-to-date very quickly and are always good ones to use.
They put a lot of resources into doing that as well.
Let's talk a little bit more about the kernel development model. It changed a couple of years ago, I think in part because a lot of people were frustrated by this backporting scenario and the fact that it took--what, three years?--to get kernel 2.6 ready.
Now you're using a time-based release schedule, which I find really fascinating, especially looking at graphs that say okay, here's the merge window and here's the size of the next tree, and it drops down to nothing every time the merge window opens. Are you and other developers really comfortable with this time-based model?
Yeah; it's really good. We went through the 2.4 to 2.5 to 2.6 model and that was horrible. Distros were backporting stuff from the 2.5 kernel that they wanted--they needed new hardware support, they needed new schedulers--and development just took forever. Once we got 2.6 going, after the first year we kind of realized, hey, this time-based model is kind of fun; let's just keep doing it. We documented what we were doing and everybody was like "Oh, you changed your model!" and we're like "We've been doing this for a year; haven't you been paying attention?"
We've kept going and it's very predictable, and the distros like it because they can look at it and say, hey, we know a kernel is going to come out every two to three months, and we can plan what we want to get done when and run with it. It's working out really, really well. It's actually increased our rate of development, the number of companies participating, and the number of developers participating; they get better feedback, so I think it's working really, really well. There are some people who are worried about stability, but that's another question.
Yeah; that is my question. I've heard questions about stability and questions about quality. One of the things I noticed from the earlier model is that Linus could release a pre-release kernel and you wouldn't get much testing on it--I think he's actually said this multiple times: if you want to get someone to test your software, you release it, and then they come back to you and complain that it wasn't as stable as you said it was.
Right; so we're now doing that every three months which is good. We've got more people testing it out that way. Also people are very resistant to change. The traditional model is you don't change something that's not broken right?
The traditional software engineering model really, really flies in the face of the reality that the world changes, so we have to change in response. We're not making all these changes just for fun; we're making these changes because we have to, reacting to external stimuli. Linus's famous quote is that Linux is not intelligent design; it's evolution. We evolved the kernel. If we had set out to design an operating system that could support 48 different processors and all these devices, we couldn't have done it. But through all these little tiny changes along the way, it's turned out to be something that works really, really well. People were worried about quality, and to be fair, I see all the bugs for the kernel for two different distros plus the bugzilla.kernel.org ones, and the number of bugs reported has been flat, while our number of lines of code has gone up and I think our number of users has gone up.
I think you're probably right about the number of users going up too.
Right; so either we only have so many people who are willing to report bugs, which is fair, or the quality is getting better and we're actually getting fewer bugs reported for a larger code base. It's an interesting model, but people are happy with it, and we can only go by what our users and our distros tell us. I mean, Linux's installed base is still growing rapidly. Companies are betting the farm on Linux for their embedded devices. All these embedded devices are shipping with it--Garmin and TomTom, little embedded GPS devices; they're all Linux, so it is working.
Do you think this time-based release model is appropriate for other projects or is it more necessary for the kernel because of the pace of the development?
I think it's a good model for getting something out to people to test. Like you said, people will only test when a release comes out. They're not going to be willing to pull trees from somewhere and build it themselves; they want distros to package it up for them. So I think a time-based model works when you're dealing with a whole bunch of people who aren't being paid--as the USB maintainer, I have no way to tell anybody else who works on USB to go get this feature done by a certain period of time. I can't do that; it's all volunteer-based. If you say "here is when the timeframe is going to be," everybody knows that if they want to get something done, they have to get it in by then. Otherwise, they'll work on it at their own pace.
With a feature-based release you keep having to push the date out, because you can't get a promise that anyone will get anything done on a particular schedule.
Exactly; you're relying on a point in time when it cuts off. As long as you always have a product or something that builds and works--and we have the rule that every single individual patch that gets into the kernel should not break the build and should not break the usage model. All those 10,000 patches that got added in the last three months--every single individual one of those should not have broken something.
You can bisect back to an individual patch which did if that happened.
Right, and that's the reason why: we wanted to be able to say, hey, something broke, and we know this version worked and this version didn't. We have tools that will automatically step you through different kernels, and you can say this one was good, this one was bad, and it will narrow you down to the exact patch that caused your problem. It's a really, really powerful tool.
It's not just people management; you also have tools which support this development style.
Yes; we have the tools and also the process, so every patch that gets added to the kernel has been reviewed by at least one person--usually two--and they sign off on it. They say this is good; so if you bisect down to the patch that caused the problem, you can look at all the people that said the patch was okay, and they share the blame. You go hit them with an email saying something broke in this patch and they'll work to fix it. You have local ownership of each individual line of code. That's another thing unique to Linux: we have this giant code base and every single line of code can be attributed to all the people that have ever reviewed that line. It's pretty powerful.
Are you also tracking copyright concerns for those lines of code as well?
Implicitly you can, because each change carries its author's ownership. You can do that if you care about it.
Let me get back to the time-based releases. Have you seen Mark Shuttleworth's proposal that distributions should get on the same release schedule?
What do you think about that? I'm not asking you to speak for Novell but just as Greg.
To be honest--and I've done this; I've worked for three different distros over the years, including Gentoo as a volunteer developer--a distro's main job these days is taking these individual pieces that are created by different people and putting them together in a unified way. That's a very tough problem, and it's tough to get it working right; it's tough to get it working very well. Gentoo and Debian, since they're constantly rolling along, have real problems because they're trying to integrate as things go. The bigger ones like Fedora and openSUSE can say "here's the release now," and it works much better. They can spend the time and do that.
I think Mark is asking other people to do his job for him. He has a very small number of developers; he has a very limited development staff. He's starting to diverge more and more from his Debian base, given the differences between what Debian is shipping at the moment and what he wants to ship. Wanting to rely on other people to stabilize that integration--well, if other people can do your work for you, that's a wonderful thing to get them to do; I just don't think that's going to happen.
You think it's not realistic to expect or you think it's a bad idea in general?
I think it's a combination. He wants, say, the enterprise distros to ship long-term kernel support based on one specific kernel. But trying to line up Red Hat's and Novell's and Ubuntu's business models and internal business concerns with each other is not a fair thing to ask of companies that compete against each other.
We have a very weird environment in Linux, in that these different companies are competing at the management level and the marketing level. They're competitive companies. Red Hat and Novell go after the same customers, the same install base, and they're very, very competitive, yet they essentially share the same engineering staff. Red Hat is Novell's engineering staff and Novell is Red Hat's engineering staff. You can't do things that upset that. It's a very, very weird model, and trying to align their business models and their timeframes, and telling the open source community that it's going to line up with these businesses at a particular moment in time, is a very unfair thing to ask.
How about the projects themselves? I mean not just Red Hat but other projects, GCC, Samba, Ruby, Perl; could those move to a similar type of release schedule and avoid some of those problems?
A time-based release schedule you mean?
Right; what if they said "we're going to do two a year or three a year" or something like that?
Yeah; you might, but then again, say we all pick January 1st to release and then we try every three months after that to do a release for all these projects. Things happen. Things slip forwards and backwards. If you look, I can give you the average time period between kernel releases--it's two and three-quarters months--although sometimes we're faster and sometimes slower. We're dealing with individual people; we're dealing with problems. Would you want to bet on when Perl 6 is going to be out?
Actually we release a new version every month....
Right; that is--okay, sorry--.
Trapped you there!
To be fair there, things are going to start slipping and changing. You've got different development models, different communities, different people working in different stages. If somebody gets sick for two weeks, what happens then? What happens when the whole country of France goes on vacation in August and your main developer was there at the time?
Or everybody gets food poisoning at Akademy?
Exactly; or everybody just goes to Akademy. That could be another thing, so that's a big deal. Even if we were to say that, over time it just wouldn't work out. It might be nice the first time around but after that....
Fair enough. One of my favorite pieces of writing you've done is Binary Only Kernel Drivers are Illegal--not just because it's nice and controversial, which I like, but because I liked your reasoning. A lot of people don't. What do you say when people claim that you're just trying to prevent their hardware from working?
I said three things: they're illegal, they're unworkable, and they're immoral. Illegal in the fact that every single lawyer I've worked with has said there's no way you can create a Linux kernel driver these days that does not fall under the GPL. IBM has publicly stood up and said so at a conference; a very high vice president said to a very large group of developers that they believe it's not legal to create a non-GPL kernel driver, so it's not just me saying that.
Novell's public statement is something pretty close to the same thing; I can't remember the exact wording for it, so it's not just me saying that. That's the illegal part: the license of the kernel and the way derivative works function mean you can't ship a pre-built closed source Linux kernel driver.
It's not that the source code necessarily of the driver is the derivative work but the binary version almost certainly is. That was my understanding of Linus's argument.
Yeah, yeah exactly. You start getting into some parsing of the copyright law and the GPL. It all comes down to distribution: you can do whatever you want on your own machines, and we're not ever changing the usage model, only the distribution. Google has some very large closed source file systems, and they don't have to ship them because it's all internal to the company, and that's fine.
That covers the illegal part. How about immoral?
How about unworkable, first?
I think that's the most practical point. I want to move on to the last one first.
Okay, so immoral. The Linux kernel community has released a product that is very, very good. For some reason, you want to create a driver for it; either your customers are asking for it or you see some value in using Linux. We've released it under a license that says, "We don't care what you do with this, but you have to abide by our license and release the source code back for any changes you make to it." I say it is immoral for you to take our work and use it in a way that says you feel your tiny contribution is greater than the entire contribution of these thousands of other people who have helped you achieve some goal. You are not playing nice with others, basically.
You are not playing nice with a bunch of others.
A very large bunch of others. The other fun thing is that it is a very large bunch of others who have very big lawyers. If you look at the copyright holders of the kernel, it is IBM, Intel, Novell, Red Hat, AMD... these are companies with very big lawyers. Think about whose copyright you are going to violate and what you would want to do there.
Having a bunch of lawyers doesn't mean a lot unless you are willing to use them.
Yes, right. I'm not disagreeing with you there, [but] how do you know they haven't been used?
The reality is that almost nothing has been taken to court, [though] there have been some things taken to court recently. Most lawyers I've spoken to say that if something gets taken to court they've done their job wrong, so that's the very last resort.
Look at what's been happening in Germany for the past two years with GPL violations. Look at Skype. Skype fought it; they said, "We don't believe the GPL is valid here," and they got shot down so hard. We do have court results backing up our agreements.
But there are still proprietary drivers.
There are still proprietary binary drivers. I agree with you, they are out there. There are very well known companies trying to do that. They have their reasons for doing it, and if you look, it is usually they who are taking the risk on the distribution of the kernel.
It kind of makes sense from a business view. I talked to one of them; at the time they wanted to get a company I was working for to ship a closed source binary driver. We responded with the simple fact that if we got taken to court and we lost, we would lose the ability to ship Linux. We would lose the GPL.
If they were to lose the ability to ship Linux, they don't care, because they don't rely on it. They made a business decision to try to flout the license, and some of them have gotten away with it. Others haven't. There are a lot of companies that have switched.
I would think that you'd be able to get a copyright injunction saying "Cease distributing this infringing piece of software now," and they wouldn't be able to ship their drivers at all.
That has happened in the past. Oh, and there's a new fun rule, and this just got upheld in the Artistic license case....
The Jacobsen v. Katzer (PDF) case.
Right. A bunch of us were watching that. There's some fun US law that says that if you feel your copyright is being violated and the infringing product is being imported, you can tell the customs agents and they will ban it from being imported into the country. That comes from the fashion industry, where copyright violations like knockoff purses can be stopped from shipping instantly. Now we can use that ruling to stop people from importing products.
That's interesting, I wonder if we could stop shipment of video cards at the border.
Well no, and to be fair the company that does that, they don't ship Linux drivers with their product.
That's true. They are very clever about that.
They are very clever, and that's fine. There are legal ways you can get around it, and I will abide by the letter of the law of the GPL; I have no problem with that.
Which leads us to the impractical principle.
The impractical one is, say we were to accept closed source drivers. Somebody took that thought experiment all the way through, and it was Arjan van de Ven, who used to maintain Red Hat's kernel and who now works for Intel doing a lot of good stuff. We took that thought process through and saw what would happen to our community and how things would work out, and the whole process shuts down really, really fast. Linux ends up working and acting like the closed source operating system vendors: we do releases on a very long, four-to-five-year timeframe, very few people can get support for devices, and it just kills our entire development model and shuts the system down.
It just wouldn't work from a practical standpoint.
You have thousands of developers who don't want to go back to that, but also hundreds of companies who you've converted to use this new development model.
Exactly, but to be fair, a lot of these companies are only doing this because their customers want it, and that's fine. There's no problem with that. There isn't a network or SCSI card made today that doesn't have Linux support, because they know they need it.
The trick is pushing that towards all devices and laptops.
Right. If you look, Lenovo is offering some laptops with Linux on them, and Dell is offering some laptops with Linux on them. Those companies are working with the community and getting that stuff in. Also, those larger companies are pushing back on their hardware vendors: they buy the chips that go on the motherboards, and they are asking the vendors to provide open source drivers. That's where you can apply the strong pressure. That's where you can get the real change happening.
I look forward to that day.
I think we're there. I've actually bought a laptop with Linux on it for my last three laptops.
I have one as well. I bought it from System 76 a year or so ago.
You can do it and it can work, so I think it's practical. The ASUS Eee PC, that thing is Linux. That's a great model. All those small little tiny PCs are great. HP has a little tiny one they are trying to compete with that ships with Linux. And every server that comes out automatically works.
I talked to Arjan about that; he said he had his laptop booting in five seconds.
They've been doing a lot because Intel has that mobile platform. Some of the embedded guys have some really cool tricks: you make a suspend-to-RAM image and store it in flash. When you start up the machine you just load it out of flash and go. A lot of embedded guys do that; I think Sony does that. A lot of their cameras run Linux now.
With 64 MB of Flash, you can do a lot.
You can do an awful lot. Those are devices that are doing that. GPS units, too. As long as you know your hardware, you are usually really good. A lot of the slowdown is in the BIOS. Try booting a big server and see how long it takes to get around to loading the kernel.