How PowerTOP, LatencyTOP, and Five-Second Boot Improve Desktop Linux

By chromatic
September 25, 2008 | Comments: 7

You may also download this file. Running time: 00:33:23

Subscribe to this podcast series via iTunes. Or, visit the O'Reilly Media area at iTunes to find other podcasts from O'Reilly.

O'Reilly News recently interviewed Arjan van de Ven about his efforts to improve Linux performance and reduce power consumption. Arjan works for Intel in the Open Source Technology Center. This interview is approximately 30 minutes.

One of the projects you're probably most known for in the past couple of years is the PowerTOP utility, which I found very fascinating. Looking at some of the gains you've made over the past 18 months, it seems like Linux-based devices are saving a lot more power than they used to. What do you consider the big successes in the past year and a half?

To be honest, we effectively fixed the entire Linux desktop space. It's not that what we fixed with PowerTOP was individual pieces; we fixed everything. For me that was the success.

Is that everything in terms of not just desktop but servers as well?

Yeah; we fixed not just Evolution. We fixed Firefox; the thing with Firefox was that it wasn't one thing that was broken. Everything had problems and we had to fix all of it. So for me the success was how quickly everything got fixed; it was just amazing.


In this context you consider fixed--everything is no longer broken in the same way or--?

Everything is no longer keeping the CPU out of idle basically.

Do you have a reference machine? I guess I'm asking what's your benchmark for this, a particular software configuration stack or particular type of machine, or are you willing to say it's pretty much every Linux based machine out there?

I'm looking at several machines--my own laptop, but to be honest, what runs on my own laptop is what I care about most. At least that's where I got more battery life; this is where I see the changes. I tend to run a quite rich environment on my laptop, but I also look at servers. We look at all kinds of machines and we see the same trend everywhere: all the various pieces that were polling or keeping the CPU up got fixed.

In fixing this, is there a component of education, for example, saying "Instead of doing a busy wait on a select loop or continually polling you should set a kernel timer and wait for that to call you"?

That's part of it but the biggest thing is that you had no visibility. Just two days ago at IDF I spoke with a developer of the GNOME desktop and he said, yeah; when I saw it happen I fixed it in 10 minutes, but you don't know it's there until you see it from PowerTOP. Adding the visibility turns out to be enough for people to start fixing it. They know how to fix--how to not poll most of the time.
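The fix he's alluding to is usually mechanical: stop waking up on a timer to check for work, and block until work actually arrives. As a hypothetical illustration (the file descriptor stands in for any event source--a socket, pipe, inotify handle), a minimal C sketch of the before and after:

```c
/* Minimal sketch of the classic PowerTOP-era fix: don't wake on a
 * timer to check for work; block until work arrives. */
#include <poll.h>
#include <unistd.h>

void handle_events_badly(int fd)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };
    for (;;) {
        /* Anti-pattern: wake up every 100 ms "just to check",
         * keeping the CPU out of its deep idle states. */
        if (poll(&p, 1, 100) > 0 && (p.revents & POLLIN)) {
            char buf[256];
            read(fd, buf, sizeof(buf));   /* ...do the work... */
        }
    }
}

void handle_events_well(int fd)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };
    for (;;) {
        /* Fix: a timeout of -1 means "sleep until something
         * happens"; the process causes zero wakeups while idle. */
        if (poll(&p, 1, -1) > 0 && (p.revents & POLLIN)) {
            char buf[256];
            read(fd, buf, sizeof(buf));
        }
    }
}
```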

You can't fix something you can't measure.

If you don't see that it happens you don't know it happens and you can't fix it.

Are you getting the same sort of results from other projects you run into?

GNOME was there, but it's almost everybody goes "oh yeah; we should not have done that"; either they fix it themselves or a lot of people send them the fix, and in general it's "oh yeah; we shouldn't have done that." Unless you see what's happening you don't know what to fix, so the biggest thing that PowerTOP did was add visibility. We can see under the hood what's going on and then we can fix it. And quite often the fix is very simple.
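On kernels of that era, PowerTOP gets much of this visibility from /proc/timer_stats (available when the kernel is built with CONFIG_TIMER_STATS), which attributes timer-driven wakeups to the code that armed the timer. A rough sketch of reading that interface directly, assuming a kernel that provides it:

```c
/* Rough sketch of the instrumentation PowerTOP builds on:
 * /proc/timer_stats attributes timer wakeups to their origin.
 * Writing "1" starts collection, "0" stops it (requires root). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *ctl = fopen("/proc/timer_stats", "w");
    if (!ctl) { perror("timer_stats (root? CONFIG_TIMER_STATS?)"); return 1; }
    fputs("1\n", ctl);              /* start collecting */
    fclose(ctl);

    sleep(10);                      /* sample for ten seconds */

    FILE *in = fopen("/proc/timer_stats", "r");
    char line[256];
    while (in && fgets(line, sizeof(line), in))
        fputs(line, stdout);        /* one line per timer origin, with counts */
    if (in) fclose(in);

    ctl = fopen("/proc/timer_stats", "w");
    if (ctl) { fputs("0\n", ctl); fclose(ctl); }  /* stop collecting */
    return 0;
}
```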

It sounds, then, like just about everybody is happy to see this. Is that the case?

Yes; all the developers I've worked with--and that's quite a few--they all go "oh yeah. Thank you for the fix; we shouldn't have had this problem in the first place. We didn't know about this; it's fixed now." In the beginning, when PowerTOP was very new, I did most of the fixing, and nowadays people do it themselves. The developers learn this while they're developing, so it's at a point where I don't have to fix stuff anymore; everybody is using it and they fix their own technology.

Right; instead of you saying "here is a problem in software someone else wrote; here is my suggestion on how to fix it," that knowledge is now in the community, and people can fix their own stuff before they even release it.

Yeah; exactly. If I don't have to chase them they pay attention from the start, which is of course the right way.

Do you know if any proprietary software developers are doing the same thing?

I don't know; I don't see their software. I suspect some of them do and I know some of them don't. The components that are left that are known to be problematic are the proprietary ones.

As usual when talking about the Linux kernel right?

Not just kernels; all of it is user space, and I don't want to single people out here--that's not so nice--but the pieces that are problematic tend to be the proprietary ones, because I can't fix them, right? You can't fix them, so some of them got fixed and some of them didn't; does that make sense?

That definitely makes sense--absolutely.

But even those guys are sometimes looking at it.

Right; I was just kind of curious if you had traffic on the mailing list or patches coming in.

What I see is--how to put it--users go to their proprietary vendors and yell at them. The users are putting pressure on whoever gave them the software to fix it, or to give them fixed binaries.

Because it's visible now--oh this, this Oracle client is waking up my kernel 100 times a second and I don't like that.

Yeah.

I see.

Part of it is not just for developers; it's also for users and the users will pressure their software vendors.

Right; I use primarily GNU/Linux based desktop applications and I know you do as well. But a lot of the software we're talking about--GNOME, Firefox--these are cross-platform applications. I'm wondering, do these fixes affect other platforms positively, negatively, or not at all?

In general positively; most of the stuff you find is just bad programming, and fixing it usually also improves performance, because on all systems, all the time, you consume not just power but also CPU cycles, and by fixing it you stop wasting cycles, so they're available for something else--and that's independent of what OS you run. For Firefox that applies to everything, all OSes. Even if you don't care about power you still save CPU cycles; not always a lot, but it adds up, because you schedule, you do a context switch, you flush your TLBs, just to do nothing, and it does add up. I don't have the numbers for other operating systems, but conceptually it's an improvement.

Right; I was thinking about specific interfaces--kqueue and epoll, for example--differences between the BSDs and Linux.

Not those pieces of the problem so much; epoll and kqueue are tricks to deal with high-frequency events, like a web server dealing with requests. The polling I'm talking about is inherent to the application; it just keeps asking "is there anything yet--is there anything yet," so it's kind of orthogonal to those mechanisms. Any polling loop in a user application hurts no matter what actual polling mechanism is used; it's the fact that they look every 10 milliseconds to see if something happened that hurts--not what mechanism they use.
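To make that orthogonality concrete, here is a hypothetical epoll fragment; the very same call is a power bug or not depending only on its timeout argument:

```c
/* Sketch: the wakeup mechanism (select/poll/epoll/kqueue) is
 * orthogonal to the power problem; what matters is whether you
 * pass a periodic timeout or block indefinitely. */
#include <sys/epoll.h>

void wait_for_work(int epfd)
{
    struct epoll_event ev;

    /* Power bug: wakes the CPU 100 times a second even when
     * nothing is happening:
     *     epoll_wait(epfd, &ev, 1, 10);
     */

    /* Fine: sleeps until an event actually arrives. */
    epoll_wait(epfd, &ev, 1, -1);
}
```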

Did you get more response from users or from Linux distributions or application developers?

A lot of users were helping, because if you give people more battery life, that's something people care about a lot. Distributions also use it because they almost compete on battery life. They compete on usability, and battery life is just part of usability; one thing PowerTOP has done is put it more on the radar that software matters for battery life. If you don't fix it, you burn through battery life, and there's a difference between OSes on that. Fedora was sort of the front-runner in this; for a while Fedora used less power than the other guys, and the others at least don't want to fall behind.

In that case Fedora might be willing to apply a patch to Pidgin or to Firefox to consume fewer resources even if that patch isn't necessarily upstream yet.

Quite often the patches were upstream. It's just a matter of putting them--back-porting them to whatever they had.

Right; well I know I had certain frustrations with my laptop. In particular I didn't have the force HPET patch which I really, really wanted and I was waiting for upstream to get that out.

All the patches took a little bit longer, but in a three to six-month time window all the important pieces fell into place. Yes, force HPET was a painful one, because you had to go into the kernel, and it's using hardware in a way that's not really meant to be used, and we had to work around BIOS issues and such, so it took a while to stabilize. I have to say some of the distros were very good about staying on top of the latest pieces; others took a bit more time. But that's okay; with distros, thankfully, we have diversity in Linux, right?

Again, if users are pushing on this very little prevents a user from compiling a custom kernel or building his or her own version of Firefox.

Or pushing distros to do it anyway.

Or even contributing third-party packages to other users.

The good news is it's almost quiet now, because everybody knows; it's sort of last year's stuff. Most of the server-side people keep track so that it doesn't get worse. I run PowerTOP every other week or so just to see that it didn't go backwards. Every once in a while we add new things to PowerTOP, but on the software side we fixed most of this. It was very, very fast: within three to six months most of it was fixed.

Right; it seems like all the releases that came out this year, the distribution releases are so much better than last year's. It's just amazing. Do you consider this mostly a finished product then or a finished project?

We are looking at expanding it a little bit, but the software--the wakeups part of it--is I think finished, and now it's a matter of making sure we don't go back. The second thing we're trying to do with PowerTOP is make it a sort of check tool: a mode you can run on a system that says, okay, you have this feature, you have this feature; oops, you missed this part. This way you can look up what you want to do and then make it work. Make it very specific so that, say, a system integrator can check whether anything is missing, and if anything is missing it gives information on how to fix it. It almost becomes a QA tool rather than a developer tool.
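A toy illustration of that check-tool idea in C; the tunable paths and expected values below are examples of the kind of settings PowerTOP recommends on kernels of that era, not an authoritative list:

```c
/* Illustrative sketch of a "check tool": read a power tunable and
 * flag it if it isn't set the expected way.  The paths and values
 * are examples of PowerTOP-style suggestions, not a definitive list. */
#include <stdio.h>
#include <string.h>

static void check(const char *path, const char *want, const char *hint)
{
    char buf[64] = "";
    FILE *f = fopen(path, "r");
    if (!f) { printf("SKIP  %s (not present)\n", path); return; }
    fgets(buf, sizeof(buf), f);
    fclose(f);
    buf[strcspn(buf, "\n")] = '\0';
    if (strcmp(buf, want) == 0)
        printf("OK    %s = %s\n", path, buf);
    else
        printf("FIX   %s = %s; suggested: %s\n", path, buf, hint);
}

int main(void)
{
    check("/proc/sys/vm/laptop_mode", "5",
          "echo 5 > /proc/sys/vm/laptop_mode");
    check("/sys/module/snd_hda_intel/parameters/power_save", "1",
          "echo 1 > /sys/module/snd_hda_intel/parameters/power_save");
    return 0;
}
```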

It's just a way to profile your code along another dimension.

Yeah; it will make it more suitable for validation teams basically. We started with developers; now that everything is fixed, we have the QA teams run it. Think of it that way.

You can say okay, in this past week we've gone from one wake-up per second to ten wake-ups per second. Let's trace it back to whichever patch did that.

Yeah; and also "oh, this tunable we're no longer setting the right way," or "we have the old version of the driver on the disk; we should have had the new version." The focus is shifting a little bit, but I think that's good; it means that all of the stuff is done and we just have to sustain it and make it something that QA teams can use.

Then it's not the case that you've solved the easy parts and the hard parts are remaining?

Right. I think we solved most of the pieces.

Let's talk a little bit about LatencyTOP then, because I've seen that come out recently--for various values of recently--and it seems like that's a more interesting product. Latency is a little harder to understand because there are so many moving parts.

It's also a different focus: PowerTOP was very much focused on users as well, while LatencyTOP is very, very much aimed at the developer, which makes the audience a little smaller.

I've seen for years and years and years the Low Latency Patch Set aimed at people who are doing media development, audio development with the Linux kernel. I know that that's a concern but are there other projects or other focuses?

Well, there are different types of latency, and everybody knew how the Low Latency Patches worked. The problem with those patches is that they address latency for when the CPU is running, right: Process A is running, and the time between that and when Process B gets to run--that's the latency the Low Latency Patches touch. What LatencyTOP tries to attack is sort of a complement to that: the time that the process is not running but waiting for something. We didn't have tools to measure that part; we had tools to measure the other part, and the Low Latency guys--Ingo [Molnar]--went all the way: the Real Time Patch is basically the ultimate Low Latency Patch. But that's only for when the CPU is running, and that's only half the story. If I look at my desktop, all the stutters and what people think of as a latency problem are not when the CPU is busy, because I have a fast CPU in my laptop. It's when my disk is doing something, or I'm waiting for a lock, or I'm waiting for DNS or something.

You're not CPU bound in other words?

I'm not CPU bound at all and I'm still having stutters; I still have what you would call latency problems. The visibility into what's going on was missing. I have a pretty fancy laptop lately; I bought a new one a few months ago--fast CPU, lots of memory, nice screen--and once in a while it was just slow. I wanted to know why it's slow. I see my disk light going or something like that, but I don't see what is going on. That's what inspired me to make LatencyTOP: to see what's going on. Why are we waiting? What are we waiting for? Who is waiting? What is it doing while it's waiting? Those sorts of questions. LatencyTOP was designed for that question: what is my application doing that causes my system to be slow--to sort of stutter? That's the misconception a lot of people have about preemption and about real-time: it doesn't help with that, because the CPU isn't busy. In fact, Linux even without preemption and without the real-time patches is more than good enough for the desktop, because the latencies Linux has even without all those patches are so small it doesn't matter--on the CPU side. But in the non-CPU use case, that's where you hit the problems.

That's what you were not able to measure until LatencyTOP?

That's why--I wanted to see what's going on on my system: who is waiting, and what it's doing while it's waiting. And the next step then is: okay, can we fix that--for developers.

Were there specific patches for the kernel or development in the kernel that you needed to be able to trace these things like the tickless kernel for PowerTOP?

There's a very small patch to the kernel for LatencyTOP, but it's been in the mainline kernel since 2.6.25 I think, or maybe 2.6.24; it's been a while, but it's a very small patch. It basically exposes the information that the new CFS scheduler collects. The new CFS scheduler has the infrastructure to collect the data; it just needs to measure it and then expose it, and I wrote a patch for that and then I wrote the tool to use it.
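The interface that patch added is indeed small. Assuming the form it took when it was merged around 2.6.25 (CONFIG_LATENCYTOP): a sysctl switches collection on, and /proc/latency_stats accumulates, per kernel backtrace, how long tasks spent blocked there. A hedged sketch of reading it:

```c
/* Hedged sketch of the LatencyTOP kernel interface (CONFIG_LATENCYTOP,
 * merged around 2.6.25): a sysctl enables collection, and
 * /proc/latency_stats accumulates per-backtrace blocked time. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *ctl = fopen("/proc/sys/kernel/latencytop", "w");
    if (!ctl) { perror("latencytop sysctl"); return 1; }
    fputs("1\n", ctl);                 /* start collecting */
    fclose(ctl);

    sleep(10);                         /* let the workload run */

    FILE *in = fopen("/proc/latency_stats", "r");
    char line[512];
    while (in && fgets(line, sizeof(line), in))
        fputs(line, stdout);           /* count, accumulated and max latency,
                                        * plus the backtrace that blocked */
    if (in) fclose(in);
    return 0;
}
```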

You make it sound so easy. Have you started seeing results from that as well?

A little bit; people use it for diagnostics mostly. I'm going to be using it in the next six to twelve months to nail down some of the issues in the IO scheduler--or at least to show me that the IO scheduler needs a bit of work. CFQ, which Jens Axboe wrote, is very nice, but it's other pieces of the IO stack that are not working well yet. There's a glass jaw in there: everything works fine, then at some point you hit the wall and everything collapses.

I've noticed that as well; when I do a large parallel make, for example, up to about nine processes it's fine, but beyond that you're bogged down in disk IO.

Well, that's because you're hitting a limit of the hardware, and it degrades at some point; I'm seeing it on my laptop already. At some point, when the machine just starts writing data out, it stops for a second, so there is something there that I want to chase down.

Do you think that's mostly at the kernel level?

I think that one is a kernel level.

That sounds more difficult to me in some ways to change than teaching people how not to write bugs in their applications.

I'm mostly a kernel developer so I know how to fix those.

You probably have more access to fix it in one spot than if it were in hundreds of spots across hundreds of applications.

Yeah; I suspect it's only one or two places I have to fix up, so in a way it's easy compared to what we had to do with PowerTOP.

Do you think there are also latency problems in applications?

It's not so much that as it is that applications do things that make the kernel do expensive latency things.

Is that because the kernel is mostly tuned for a specific type of workload?

Well no; the kernel does what the application asks. If the application asks "send your entire cache to disk," the kernel will be happy to do that. The application will have to wait for my four gigabytes of data to hit my SATA disk; the application has stopped. So you might have to fix applications to not do that sort of thing.

Like the fsync bug in Firefox for example?

Exactly. LatencyTOP would point that out; actually, I thought that's how it happened with Firefox. The RPM program has the same problem: if you run LatencyTOP while you're doing your upgrade, it's going to be fsync. That's the kind of thing we can use LatencyTOP to nail down, and at least see what happens, because it is sometimes harder. The PowerTOP things were easy to fix; some of the LatencyTOP things are going to be harder to fix.
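A minimal sketch of the failure mode he's describing: the process that calls fsync() sleeps until the data (and whatever else the filesystem must flush along with it) actually reaches the disk, which is exactly the "blocked, not running" time that LatencyTOP attributes. The file name and sizes here are arbitrary:

```c
/* Minimal sketch of the fsync stall: the writer sleeps until the
 * data is on disk, which can be a long time if the filesystem has
 * other dirty data to flush.  This blocked time is what LatencyTOP
 * surfaces. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    char buf[4096];
    memset(buf, 'x', sizeof(buf));
    for (int i = 0; i < 25600; i++)     /* ~100 MB into the page cache */
        write(fd, buf, sizeof(buf));

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    fsync(fd);                          /* blocks until it's all on disk */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("fsync blocked for %.2f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    close(fd);
    return 0;
}
```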

Do you think then there's not one specific type of error pattern like you see with PowerTOP?

Yeah; for example, with the Firefox fsync problem, they really wanted fsync. But with PowerTOP, people didn't want to poll; they just wanted the easiest, lazy way to do it temporarily and fix it later.

Right; and no one ever said this is a problem so let's fix it.

They made a quick implementation and never went back to it. The Firefox people really wanted fsync, except the effect of it was bad; conceptually it's not a stupid thing to do, it just takes a while. So it's a harder type of work, because you have to re-architect your system to not need fsync.

Is that the type of bug where it's not a problem if your hard disk size is 30 or 40 megabytes but when your hard disk size is 80 or 100 or 250 gigabytes then it's more of a problem?

It deals more with the number of applications running--sort of the size of the desktop. Five or ten years ago there was only one application, and it was a little simpler. The modern desktop is getting a little bigger, which means there's more IO, more stuff in progress. And fsync is basically an enormous glass jaw: everything has to go through a little straw during it, and the straw is the same size as 10 years ago, except there's just so much more going on.

That's interesting, because it reminds me in some way of what I think I heard from Ted Ts'o a couple years ago, where he said fsck is dead because hard disks are too large.

The good news is that there are SSDs. Intel announced one a few days ago; they solve some of this because they're just an order of magnitude faster. Well, it's not solving it; it hides it for a bit longer, let's put it that way.

Just delaying the problem.

We can use LatencyTOP to see the problems and fix them. They will still be there with SSDs; you will just hit them again three years later.

Right; you cut it in half.

Instead of 100 milliseconds you go to five milliseconds. You don't perceive it as much, but it's still there, because it scales with the size of the application; three years from now, when a desktop is even bigger and more things are going on, you still hit the wall--I think. Looking into the future is hard.

That makes sense to me as well. That's what I would expect.

So fixing things now--fixing things like fsync. Again, some of it is kernel level, but some of the app stuff you just have to fix; Firefox got kind of fixed, RPM still needs fixing, and there are a few others like that. This is the kind of thing we can find with LatencyTOP.

Are you having as much success as you did with PowerTOP, or in some ways is it more difficult?

It's a lot more arduous and it's a lot harder to fix, so the success is a lot less; and if you measure success by the number of people using it, it's a little less, because the audience by definition is smaller. I never intended LatencyTOP to be as big as PowerTOP, because it's really for a few developers who need to nail down a complex problem, like the Firefox one. I don't think they used LatencyTOP; they should have, because then it would have been immediately obvious.

I think the person who found it did, if I remember the LWN discussion correctly.

Could have been; I don't remember exactly. But that's the kind of thing it's for; it's more a targeted tool.

Once you see you're having a problem, use this tool to figure out where it is.

Yeah; and everybody has a power problem.

Is this just something that Intel pays you to sit around and think up?

That's part of my job; I do a whole bunch of things.

Are you looking at other measurement profiling tools for other things, disk IO? I guess LatencyTOP kind of gets in there, but....

Yeah; one of the things I did recently was kerneloops.org. It's a measurement of the quality of the kernel--not the behavior of the kernel. Tracking kernel oopses, finding out which crashes are common, so that we can improve the quality of the kernel; that's one thing I've been doing the last few months. Going forward, I'm working on boot time right now.

Which I'd love to see improve.

At the Plumbers Conference in September I'll be showing, with some of my co-workers, a Netbook booting in five seconds to a full UI.

I might just catch that talk then.

I'm not going to tell you how I'm doing that yet [laughs]; that's for the Plumbers Conference. But I will be talking about a five-second boot and I'll show people. You have--

You have commodity hardware?

This is on the Asus Eee PC. We have it working; we boot in five seconds.

I think the OpenMoko folks would want to see that as well.

I'm already getting a lot of people asking me to give a sneak peek. You can wait a little while; come to Plumbers if you want to see it. That's when I give the presentation; that's what I'm doing now, and yes, you're right, I want to see it improve everywhere. If you read the abstract of my talk at Plumbers: I do not accept that it should take a minute to boot my laptop. I have a really fast laptop; it has lots of memory. It shouldn't take a minute. I'll accept five seconds; I do not want to accept more than five seconds.

Right; there is no reason you should have to go get a cup of coffee or tea every time you want to resume from hibernate or whatever.

Or just boot.

We do not accept that it takes more than five seconds, and then we started making that happen. The Netbooks are nice because they have a small SSD; that solves some of the problems but causes other ones. Five seconds, for me, was "this should not take more than five seconds"--and it doesn't, if you actually work on it. There's one downside to that kind of thing: my normal laptop now feels really slow. It has a fast CPU and four gigabytes of memory; it's about three months old or something like that, and it feels slow compared to the Netbook, because it boots in a minute.

Wow.

One of the things I'm going to do next is also port the technology to my laptop.

That will help it spread to more people.

I'm hoping to achieve with my Plumbers presentation that distribution vendors will start working with it and actually take it seriously.

I have some suspicions about things you can do, but mine would probably just cut the boot time in half, not get it down to five seconds.

That's the thing. A lot of people spend time on this and they make it go faster. Let's do parallel stuff or let's do this or that.

Or let's reduce the number of stats you have to do to initialize the system or something like that.

What we found is they then go from, like, 50 seconds to 45 seconds. Great; good progress. I want to be at five. It's a mindset thing almost; I want to be at five, and I am at five right now, so I'm happy about that. That's for me the next step in making Linux more usable: boot in five, maybe ten seconds. If you don't have an SSD it's going to take you a little bit longer, but it shouldn't be a minute; it shouldn't be 20 seconds. It should be five. It bugged me; let's put it that way.

I hate how my laptop takes a minute to boot.

I also think that if people's boot time gets cut even 20 percent from what it is right now, they'll start noticing.

Let's put it this way: I'll switch distros right away if someone else is at five seconds. For me that's a reason to switch to whatever distro has a five-second boot on my laptop--even more than power, almost.

Really?

Power is important; we fixed it last year, and everybody has done power now. For me, boot time is next. If one distro takes a minute and the other one takes five seconds, guess which one I'll pick, right?

That's a compelling reason for you to switch?

That is more than enough for me to switch. If it's seven versus five, that's a different story; from a minute to five seconds, I'll switch. It's one of those things that make the machine usable, because I don't want to have to go into the office 10 minutes before the meeting just so that I can call in on time.

Or fiddle with the projector for five minutes or--?

Yeah; there's a bunch of things. If I have a meeting at 8:00 a.m. I want to be in the office at 7:59, because those 10 minutes I can sleep longer right.

Exactly.

Or I can do other things--that sort of thing. If I have to wait 10 minutes, just sitting there waiting for my phone number to come up so that I can call into the meeting, that's irritating.

Or you're on the MAX or in an airplane and you have a great idea, and you pull open your laptop, and in 10 seconds you can be typing your idea instead of waiting a minute; yeah. That's a compelling idea.

Or say I want to look something up on the Internet, if that takes five seconds I can look it up right away. If it takes 10 minutes, five minutes, two minutes....

It's going to have to be really something you want to look up.

I wouldn't bother, because it's going to be minutes. Minutes? Never mind.

In terms of the desktop side, for people who aren't developers like me, or kernel developers like you, what do you consider the next steps in making the free desktop more usable for everyone?

Boot time is one. Suspend/resume--suspend to RAM--working very reliably, that's another one, and it's not working too badly. If you look at what Acer or Asus or those guys put into Netbooks, that is really slick, and anybody can use it. It's not an application usability problem. Five years ago people said there are no applications, it's hard to use; that's solved. If you look at Asus, Acer, all the Netbook guys, there's no problem there. They're not perfect, but for me things like boot time, like power, like suspend/resume--there's still a bunch of work to be done there. UI usability is moving so fast that if I mention something, a year from now people will laugh and say yeah, we solved it. There are a few changes happening; Netbooks are different from laptops.

That's a good point because we're talking about a new market that didn't exist two years ago.

Yeah; screens are smaller, and some of them may or may not have touch screens, if I look at the news announcements. Some of the usage is going to be different. Some Netbooks have touch screens, and the touch paradigm will change along with the applications. I'm not going to say iPhone, but if you look at a traditional phone versus the iPhone, touch changes the rules.

Especially multi-touch.

Multi-touch for sure--specifically, multi-touch is just doing touch right. That's changing the entire interaction paradigm, so I think in the next year or two you'll see development around the new interaction paradigm; and speech could be one of the other ones, one of the other interaction paradigms.

Now we're predicting the future we've been predicting for 20 years right?

I'm not someone who looks at tea leaves and says this will happen, but if you look at where the next frontier is, I suspect it's the interaction paradigm.

I guess that's a question for Keith [Packard] who is sitting right next to you right?

Oh, he sits one row over; but that's just half of it, only half. The other half is at the application level, right: how do you deal with not having a keyboard, or using a keyboard as little as you can, and getting the most out of touch? Infrastructure, but also applications--and I'm not sure I'm going to be involved in that as much, but that's where I think a lot of the next steps are.

The idea of taking away my keyboard scares me too.

I think there are always going to be keyboards.

We have to write software right?

Not just that; it's also the fastest way to get more than a few characters into the computer. Even my cell phone has a keyboard; every Blackberry has a keyboard. People who text on low-end consumer phones, where they hit a key three times to get an E--that doesn't work, so cell phones are getting keyboards. For some applications a keyboard is the right thing. If you enter a bunch of text--even text messages or emails--a keyboard is probably the right thing. For other things, like media playback, if I want to hit my media player and just start playing, I will. With web browsing, most of it is not typing--I want to be able to type if I have to, but not otherwise. We'll see. It's going to be a lot of research, a lot of experimentation, a lot of development; some of it will succeed, some of it will fail.

That's why we have an open development process.

It's evolutionary. The strong guys--the good stuff survives.

We hope anyway.

It mostly does. If it's really good it will survive. If it's only five-percent better maybe not but if it's five times better it will survive.

Yeah; five times better people notice.

Transcript provided by Shelley Chance t/a Pro.Docs



7 Comments

Is there any chance that the slides or something from the 5-sec presentation for Plumbers will be online somewhere?

I'd like to see it booting with Gnome or KDE in 5 seconds. A minimal desktop in five seconds is great work, but it's not very 'real life'.

Not sure if all the bugs are fixed yet, look at the Fedora metabug.

https://bugzilla.redhat.com/show_bug.cgi?id=204948

note the many open bugs not yet resolved.

I don't really see XFCE as being "minimal". It's reasonably full featured. It's not twm.

Of course the big desktops could certainly improve things as far as startup times are concerned. My KDE (3.whatever) session *does* take more than 5 seconds to fully start. Although, like MS Windows does, it presents the desktop before it's fully done, and it is usable before it has loaded all of its gadgets (same with Gnome). So at least it does its stuff more intelligently so far.

Still, a lot needs to be solved. On my Compaq v3000 laptop I get a "kernel bug found" message during boot. My GNOME works fairly well otherwise. I am thinking of compiling the kernel with the ck patch.


Smith,
http://www.samudhai.com

Re Nigel's comment about a "real desktop".
Sorry, but in real world deployments I find XFCE to be much more suitable than KDE (I happen to like KDE) and even more suitable than GNOME.

When consulting on a sitewide install for an organization with a large number of desktops (hundreds or thousands), it makes for a more stable user experience and is more usable across a much broader spread of hardware. Finally, it is so much more efficient (read "fast from a user perspective") that it almost completely destroys the hardware churn issue. Where, in a large corporate environment, IT will replace each PC every 3-5 years, now they never have to be replaced just because they are "too old and too slow for the new OS." IT is less happy with this feature, as it reduces their budget significantly and, to a lesser degree, their headcount: no one needs to do replacements as often.

The CEO is pleased despite that. :-)

Further they seem to be easier to lock down and keep stable with regard to user based configuration issues.

This is, of course, just one slice of experience. Yours may be different.

Finally: the claim of almost completely destroying the hardware churn cycle must be viewed from the perspective of: "Hmm, it's only been 5 years so far, and these units are still going strong. We haven't reached the point of needing to replace them yet, so we don't actually know what the length of the hardware cycle is yet!"
