Bryan O'Sullivan on the Power of Haskell

By chromatic
January 21, 2009

Bryan O'Sullivan is a widely-studied polyglot programmer and a co-author of Real World Haskell. He recently spoke to O'Reilly about changes in Haskell from its inception to now, and how the language and community have evolved as it's become more practical to write real world software in Haskell.

How did you get into Haskell in the first place? What drew you to the language?

Bryan: In 1992 or 1993, it was quite a new language at the time, and I had a background in some functional programming already. I don't actually remember specifically how I stumbled across it. I do remember reading a book by a guy named Martin Henson called Elements of Functional Languages back around then, and I thought, "Wow, that looks really interesting."

I was at Trinity College, Dublin at the time. I was an undergraduate student. Trinity has this enormous library; it's a copyright library, so it automatically gets a copy of basically every book that's published in the UK and Ireland. Wandering through the computer science shelves I came across this book called Implementing Functional Languages, which seemed like a natural place to go after Elements of Functional Languages, written by a guy named Simon Peyton Jones. I followed up on him a little bit and discovered that he had worked on this language called Haskell. Then I started playing around with Haskell and fairly quickly fell in love, even though at the time it was a tremendously impractical language for doing any real work in, not least because the IO subsystem was almost unusable and very tricky to deal with. It was still a fun language to do programming tasks that were just mental exercises.

I ended up working for Simon for a summer as an undergraduate, working on the Glasgow Haskell Compiler. That solidified my love of the language and the community of people around it. In spite of that, I chose not to pursue an academic career in functional programming, even though that was the direction I had been thinking of going. I let life carry me away on its wave. A few years ago, I decided it's time to pay attention to the muse that called me as a young man and reentered the Haskell community. Substantially all of the same people who had been involved in it 15 years before still were, and they were doing similarly magnetic stuff as far as I was concerned. It was very exciting and very interesting, and obviously had developed a very long way in all of that time.

Nevertheless, nobody had actually sat down and written a book to synthesize it all. There were introductory books that were good for undergraduates or the like, but they weren't books that would take you as a working programmer and show you why Haskell is interesting; how to actually use it and then apply it to realistic programming tasks. So I thought, "Oh, oh, we need to fix that." That's where the genesis of the book came from.

Was that SPJ book about the spineless, tagless G machine?

Bryan: That's the one. That book is actually available from Simon's website at Microsoft Research. It's downloadable in, I believe, PDF and maybe HTML form, and it's a wonderful book. It's still probably the most comprehensive book on how you go and implement a functional language. It's certainly 20 years behind the times compared to the current state of the art, but the current state of the art is pretty huge and complex these days.

Do you know if GHC 6.10 still resembles the G machine he describes there?

Bryan: The execution model behind GHC and what it compiles down to is still based around the spineless, tagless G machine. I don't remember if that is actually described in the Implementation of Functional Languages book, but it's the same core idea. Obviously, it's been substantially refined over time, and some of the design decisions that were made early on then are maybe not as optimal as they once were, but it still does yeoman's duty. It's still the basic thing that goes on.

I wish I'd invented something that 20 years later would still be relevant like that....

Bryan: Yeah. It would be a pretty nice position to be in. The interesting thing though is that if you look at dynamic languages, some of the techniques that people who were implementing Smalltalk 20 years ago were using are being rediscovered and reimplemented and put to new use in the context of new virtual machines and modern languages. It's nice to see the wheel of reincarnation getting applied.

A few years ago, you and I discussed function composition. I like to think I'm a pretty good programmer, but I've never had a book that explained that sufficiently that I could look at code and read it without seeing diagrams and drawing pictures. It's not a difficult concept -- I use it on the Unix command line all the time -- but there was no book that could teach a practical programmer like me why it's interesting.

Bryan: A benefit to me in writing this and also to my co-authors, John and Don, was that we had all been at various times immersed in lots and lots of work in all kinds of other languages. I've written a huge amount of code in C++, in Python, in Lisp, in Perl. John has a similarly diverse background. If you look for his name on Amazon, you'll find that he's written several Python books for example.

Don is probably the most academically inclined of us, but he works in languages like C and C++ every day as part of his job. We didn't come at this with any kind of an ivory tower background or perspective. We were all rooted in how do I do things that are interesting and useful rather than how do I express things that are algorithmically pure. We're certainly interested in that and find it to be a lot of fun, but I find being able to execute things and have them do stuff that's valuable to me to be as much a charge as any of the more esoteric pursuits.

Why should your average programmer care if something is algorithmically pure? What benefits are there to that?

Bryan: Why should an average programmer care? An average programmer is going to be concerned with a lot of different things simultaneously, right? If you're working on a large body of code, you're going to be concerned about modularity. Can I understand part of a program in isolation from the rest of it? Can I test part of a program in isolation from the rest of it? Can I read a particular part of a program and be able to make a fair estimate as to what it's really trying to do? Those kinds of concerns pop up all the time in day-to-day programming.

The idea of writing code that is very tight and has very little intertwining with the concerns of code nearby is not unique to Haskell; it's just that Haskell happens to carry it a particularly long distance. Several different things play into that. One is the idea that most code, by default, does not have any kind of externally visible side effects. Given the same inputs, it will always produce the same outputs.

Is this most code in Haskell or most code in any programming language?

Bryan: Oh, most code in Haskell. Most code in other languages, you can't make any such assumptions about. In Haskell, because all of your data is immutable by default, there is no real such thing as a global variable that could change its value between one call to a function and the next, thus changing the output of the function. That just doesn't arise. You've got a much stronger guarantee that a function is going to behave itself.
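
The guarantee Bryan describes can be sketched in a few lines. This is not code from the interview; it's a minimal illustration of referential transparency, and the function names are made up:

```haskell
-- Pure: given the same inputs, always the same output.
-- There is no global mutable state that could change its
-- behavior between one call and the next.
areaOfCircle :: Double -> Double
areaOfCircle r = pi * r * r

-- "Updating" data builds a new value; the argument list is
-- immutable, so no other caller can ever observe a change.
doubleAll :: [Int] -> [Int]
doubleAll = map (* 2)
```

Because nothing outside the arguments can influence either function, calling them twice with the same input is guaranteed to give the same result both times.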

This plays into comprehensibility, into readability, and it also plays into testing. If you don't need to set up some sort of elaborate mock framework or preload a database with data because most of your code is pure, then you can test your code in a very different way than you would even in a more traditional language. In traditional languages, most testing is built around the idea of, say, unit testing or load testing. In Haskell you certainly can do those kinds of things, but we also tend to emphasize a more generative way of testing, randomized testing. Because functions are only going to produce data based on their inputs, it's much more straightforward to say with some confidence, "Yes, I believe that I've actually tested this under pretty stringent conditions."
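
The generative style Bryan mentions is what Haskell's QuickCheck library automates: you state a property of a pure function, and the library checks it against randomized inputs. A dependency-free sketch of the idea follows; the helper names here are invented for illustration, and real code would simply hand `prop_reverseTwice` to QuickCheck:

```haskell
-- A property: reversing a list twice gives back the original.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- A tiny stand-in for a generative test runner: check the
-- property over many mechanically generated inputs. A real
-- tool would generate these inputs randomly.
checkOver :: [[Int]] -> Bool
checkOver = all prop_reverseTwice

inputs :: [[Int]]
inputs = [ take n [k ..] | n <- [0 .. 20], k <- [-3 .. 3] ]
```

Because `reverse` is pure, no fixtures or mocks are needed; the property either holds for an input or it doesn't.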

You can express boundary conditions and not worry about anything else.

Bryan: Yeah. Those factors all play into why Haskell or a language with these kinds of properties would be of direct interest to somebody who cares about the day-to-day practice of programming. When you come at it from a perspective of "I am going to construct most of my programs and most of my code in a way that is easy to understand, easy to test, easy to refactor and the language's implementation helps me in these ways", then a whole lot of mundane tasks become automatable or things that you can have your tools help you out with.

I can imagine a lot of programmers saying I understand those concepts, but Haskell is difficult to learn or it's difficult to find Haskell programmers or it doesn't have libraries available. To what degree do you believe this element of purity is achievable in a language such as C# or Java or Python or Perl or Ruby?

Bryan: It's doable to an extent. There's a guy named Walter Bright who's been working for several years on a language called D. He's put together D as a better successor language to C than C++. One of the things that he's been working on has been adding a functional subset to the D language. He's made an interesting amount of progress in that.

D is very much a traditional language in many respects. It certainly borrows some features from various different places and synthesizes them into a slightly novel form. For the most part, if you're a C or C++ or Java programmer, it's going to look fairly familiar. In conjunction with somebody whose name currently escapes me who did a lot of work on the STL, the C++ standard library, Walter went off and made a number of changes to the language to add immutable data and a notion of functional purity, in order that it be easier to do the kinds of STL-style programming that people now find pretty appealing in modern mainstream languages.

What's interesting about his approach is from my perspective as somebody who cares about the prettiness of a language as well as its usability, I think that the immutability that he's added to D looks kind of like a bunch of -- golly, I don't know quite how to put it in a clean way, but it looks like some extra planks that were hammered on long after the initial language design occurred.

Somebody made the infamous description of C++ as an octopus that was created by nailing planks onto a dog. I wouldn't go so far as to call D's immutability features anything like that, but they certainly have a little bit of an element of it. When you're doing language design, it seems to be pretty important to get those kinds of ideas integrated in as early as possible rather than to retrofit them. In the case of Haskell, that went in right at the very beginning -- and you still end up making compromises, right?

Haskell is not as straightforward a language to do mutable stuff in as a language like D is. There's not going to be a free lunch where you get both beautiful immutability and beautiful mutability, at least not anytime soon that I can see; there are a number of different concerns all vying for attention. Haskell has strengths in one area and D has strengths in a different area. I look forward to the time when we may have a language that combines the best features of all of these.

Do we need purely functional data structures in mainstream languages? Would that help?

Bryan: Having purely functional data structures often helps, certainly for the kinds of workhorse programming tasks that a lot of people need them for. For example, things like balanced trees or text types that perform well are beneficial both from a reasoning point-of-view and from a how can I deal with this effectively in a highly concurrent program point-of-view.

People who have very concurrent servers to deal with will very often hew towards immutable data structures simply because there is never any issue with concurrency-related bugs in structures that are only ever constructed and then left untouched afterwards. They're not as easy to deal with, of course, as structures that you can fiddle with at your leisure. Again, there's a tradeoff there. In a language that has first-class features like pattern matching, dealing with immutability tends to be a little bit more straightforward, but that's as much a syntactic matter as anything else.
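
The point about structures that are constructed and then left untouched is visible with the standard `Data.Map` from the containers package that ships with GHC; an update returns a new map and leaves the old one intact, so this is a small illustration rather than anything from the interview:

```haskell
import qualified Data.Map as Map

-- m0 is never modified. insert builds a new map that shares
-- most of its structure with the old one (a persistent
-- balanced tree), so concurrent readers of m0 can never
-- observe a half-finished update.
m0 :: Map.Map String Int
m0 = Map.fromList [("a", 1), ("b", 2)]

m1 :: Map.Map String Int
m1 = Map.insert "c" 3 m0
```

After the insert, `m1` has three entries while `m0` still has its original two; there is no moment at which any reader sees an inconsistent structure.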

Do you mean pattern matching as in Haskell-style list pattern matching or pattern matching as in regular expressions?

Bryan: I mean pattern matching in the form of what you get with a language like Haskell or ML or Erlang or Scala.

Please correct me if I'm mischaracterizing you, but it sounds like while many ideas from Haskell can apply to other languages, there's a real magic in Haskell and the language natively supports and encourages these features by default.

Bryan: Yeah. That's absolutely true. The influence of Haskell is being felt more widely, not least because of the influence of people like Erik Meijer at Microsoft, who has had a substantial hand in the evolution of the C# language. And, of course, we also have people like Don Syme: Microsoft's now turning F# into a first-class .NET language.

Of course, we can also see some Haskell influences in other places too like languages like Python. The ideas are applicable in a number of different contexts. It really helps when you sit down and consider them all simultaneously and take a principled stand that this is going to be our default, and then see what decisions flow from those choices.

A colleague says that it takes about a year to become good at Haskell; what's your thought there?

Bryan: That's difficult to quantify. It depends on your level of motivation obviously, and it depends on the amount of practice that you're willing to do. In the past, it depended on how good you were at doing research as well. Prior to Real World Haskell coming along, the way that you learned Haskell was you either went to graduate school or you spent a lot of time dragging a disparate pile of papers together and reading all of those papers and reading source code. There's now a central point of reference for you to get started with the language, but yes, it still definitely takes a lot of practice.

A year of hard work would probably be on the lower bound of what it would take to become a really good Haskell programmer. You could become a productive Haskell programmer in a matter of weeks I think. To go from productive to what I would think of as a really solid programmer, in any language that's going to take writing tens of thousands of lines of code and making lots of mistakes and throwing stuff away.

Haskell might take a little bit longer than a language like Python or something where you can take existing concepts that you might be familiar with from prior programming education. In the case of Haskell, what you have to do is you have to take your prior concepts and throw them all away because they don't apply anymore, which probably lengthens the curve a little bit.

Step away from Algol-style syntax and things look unfamiliar anywhere.

Bryan: Yes, they do.

Then you start dealing with the idea of pattern matching and recursion instead of iteration and you worry more about folds and maps and reduces.

Bryan: That's right. It's a lot of stuff to absorb. On the other hand, the nice thing is that most of those concepts are individually quite straightforward. The only issue is that there's a lot of them, and it really is beneficial to master all of them. From my own experience of originally learning Haskell, I wrote an awful lot of code that, it turned out, would've been really easy to express using a couple of higher-order combinators from the standard libraries. That's a repeated feature of newcomers to the language, because higher-order programming, using functions to operate on functions that then finally operate over data, is usually so foreign to people. It won't naturally be the first thing that they'll do. They'll tend to use pattern matching and explicit recursion where it would make more sense to use a fold.

That's just part of the learning tax. You'll still come out with useful skills from doing that. After a while, you'll eventually be able to spot things like, "Oh, this thing that I'm doing really wants to be a fold or an unfold or something along those lines." It all comes out in the wash, so to speak.
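
The mistake Bryan describes, explicit recursion where a standard combinator would do, looks something like this (an illustrative pair, not code from the book):

```haskell
-- A newcomer's explicit recursion over a list...
sumRec :: [Int] -> Int
sumRec []       = 0
sumRec (x : xs) = x + sumRec xs

-- ...is an instance of a standard combinator: a fold.
-- foldr replaces (:) with (+) and [] with 0.
sumFold :: [Int] -> Int
sumFold = foldr (+) 0
```

Both definitions compute the same result; the second simply names the recursion scheme instead of spelling it out, which is the habit the Prelude's combinators encourage.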

To what degree do you think learning Haskell is replacing unidiomatic Haskell code with functions from the Prelude?

Bryan: There's definitely a lot of that, and it's a very good way to learn. Writing straightforward function definitions is an easy way to get yourself familiar with the language in the first place. Then you're trying to write slightly larger programs, and you can go back and start looking through the standard Prelude.

The nice thing about Haskell being a strongly-typed language, which we haven't really touched on so far, is that once you've got a good notion of how the type system works, you really start to see it as this very carefully put together set of precisely machined sockets and bolts and so on that all goes together in a very straightforward way. You start being able to see, "Oh, when I do this kind of thing, I could use a function of this form because it's obviously going to fit into that particular context."

Learning that is just a matter of practice. It's like playing the piano; it's like riding a skateboard. You just do it over and over and over again and you learn from your mistakes. You show off your code to your friends, and they go, "Oh, but you could've done this." That's how you hone your craft. There's not going to be a shortcut to it. What you do find is that your use of abstraction grows and grows over time up to a point.

You could almost describe the prelude and these higher order functions -- combinators -- as a pattern language for Haskell.

Bryan: They kind of are. What would be a useful way to talk about that? Haskell is much less dominated by ideas of patterns than the OO languages are. That's not to say that it's a language without patterns. It's just that a lot of the patterns are literally expressed as, "Here's a function that does what you need to do." You don't talk about fold as a pattern or visitor as a pattern. You talk about, "Well, you just use fold on this list." Or, "You fold over a tree," or something like that. It just happens that the idea of folding is represented by a piece of code that you can actually execute in many of these instances.

I don't see a subtle difference between the two. Maybe the way we talk about them is different, but....

Bryan: I think that the difference between patterns and code is that most of the time when people are talking about patterns, they're talking in vagaries. You can't just write the visitor pattern in Java. You have to have something that you are visiting, and you have to have an API that is amenable to it. You have to have some sort of a collection that supports that idea. Whereas in Haskell, you have a type class that supports folding. Then any type that happens to implement that type class is going to support your fold function. You go from the abstract, this idea of here are some ideas about how you construct a piece of code, down to you call this thing. That's what I think is the distinction between patterns and code. Patterns are talking about how you would build code, and code is code.
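
In today's GHC, the type class Bryan alludes to is the Prelude's `Foldable`. A sketch with a hypothetical tree type: define one method, and the generic operations come along for free.

```haskell
-- A hypothetical binary tree type for illustration.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- One foldr definition (here, an in-order traversal) is the
-- whole "pattern": sum, length, maximum, elem, and friends
-- then work on any Tree without further code.
instance Foldable Tree where
  foldr _ z Leaf         = z
  foldr f z (Node l x r) = foldr f (f x (foldr f z r)) l

tree :: Tree Int
tree = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)
```

This is the contrast with a design pattern in prose: the fold is an executable value, and "supports folding" is a checkable fact about the type rather than a convention.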

You mentioned you were away from Haskell for a decade or so, and I know that the language's unofficial motto is "Avoid success at all costs." What kind of changes did you see in the language? You were away for the monadic revolution I assume.

Bryan: Yeah, I was. When I was working as an intern at Glasgow University, Phil Wadler was still writing papers, talking to everybody who would listen about how awesome monads were, and they had yet to make it into the Haskell standard library. I believe that happened in the Haskell 1.3 report which came out a couple of years later.

That was really the only significant change though. I have code that I wrote 15 years ago that compiles under GHC 6.10, which was released what? Six weeks ago? The language hasn't really changed very much during all that time. There were one or two changes made, one of which was subsequently reverted: the idea of list comprehensions was generalized to operate over monad comprehensions.

Also, not only were monads introduced into the language, but there was a special syntax for dealing with them so that they would be a little bit typographically easier for people to read. That's called do-notation, by the way, but those are the only significant changes. What's been much more significant has been the stuff that's actually happened over the past couple of years since I started engaging again. Those have been substantial changes on both the research directions and also in the practicality of the language.
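
The do-notation Bryan mentions is purely syntactic sugar over the monadic bind operator `(>>=)`. A tiny example in the Maybe monad, with invented names, shows both spellings of the same computation:

```haskell
-- Written with do-notation...
pairUp :: Maybe (Int, Int)
pairUp = do
  x <- Just 1
  y <- Just 2
  pure (x, y)

-- ...and the (>>=) chain the compiler desugars it to.
pairUpDesugared :: Maybe (Int, Int)
pairUpDesugared =
  Just 1 >>= \x ->
  Just 2 >>= \y ->
  pure (x, y)
```

The two definitions are equal by construction; the notation exists only to make long chains of binds typographically easier to read, exactly as described.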

One of the things that people in the Haskell community did a few years ago was look around at established languages like Perl and Python and say, "Well, what are the things that actually help to make these languages more successful?" In the case of Perl, a very obvious answer was CPAN, right? The Comprehensive Perl Archive Network, where you can go to get a library that has some probability at least of doing almost anything you can think of. That makes it very easy to build component-based software pretty rapidly. In the Haskell community over the past couple of years what we've done is built something quite similar to CPAN. It's really had a huge effect. Even though Haskell is known as a relatively small community, we now have a centralized library and tool database that you can upload open source libraries and software tools to. You can install a library and all of its dependencies with a single command on your machine. It'll get downloaded. All of its dependencies will get downloaded. They'll get built. The documentation will get installed with just a few keystrokes.

That really makes a substantial difference to how accessible the language is, both to people who are developing libraries and people who are consuming them. We now have a vibrant community of people who are developing libraries, updating them, supporting them, and, of course, using them, which has made a big difference to the accessibility of the language for day-to-day programming tasks. Five years ago, if you wanted to do XML processing or you wanted to talk to databases or you wanted to do a sort of tag-soup scraping of somebody's webpage that was written using mucky HTML, those were things that you would probably have to do by hand, or you would have to go to some random webpage and try and find something that might or might not work for you. What's changed now is that that's all turned into: I just go to Hackage, I find a library that looks like it'll do what I want, and I run a single command on my machine to install it, and a minute or two later I'm done.

Is there a real connection in the Haskell culture between the academic desire to share information and the open source nature required to build something like Hackage successfully?

Bryan: Yeah, actually. There's been a pretty direct connection between the two. For a long time Haskell has been very much an academic kind of affair for the most part. Over the past year, we've seen a change where now people who publish papers or technical reports will describe their work in their academic paper, but they'll also tend to upload it in library form to Hackage so that other people can use it. In many instances, somebody has given a presentation about their work and uploaded their library to Hackage on the same day. People have been able to start using it immediately. During the International Conference on Functional Programming this happened several times, and we were all quite cheered to see this.

What makes that a possibility is that the people who are developing infrastructure tools like Hackage and so on either are or have very close links to people who are in the academic community. There's this wonderful sort of virtuous back-and-forth between the people who are trying to make the language more approachable and the people who are trying to do new and interesting things with it where each one supports the efforts of the other.

As I say, this is all a relatively new thing, but it's been quite exciting to see. I'm not sure that there are any other language research communities that have this kind of rapid turnaround. The closest that I could think of would be resources like GenBank, where you're supposed to upload your sequence data if you're publishing, say, a whole-genome analysis of some sort of nematode or something like that. You're expected to share both your data and the fruits of your analysis at roughly the same time. That's not a common thing in academic computer science. Now we're seeing it occur in this way that's accessible to the entire community with Haskell.

Why is that?

Bryan: I think a lot of it has to do with the fact that almost everybody currently still knows almost everybody else, at least in the academic sphere of things. People are constantly either meeting face-to-face or chatting on IRC. Compared to any other academic community that I've observed, for its size and for its intellectual brilliance, the Haskell community is extremely supportive and close-knit. That means that people are readily inclined to collaborate with each other -- to build on each other's work and to share code to really a remarkable extent. It's one of the things that most attracts me to the community as distinct from the language; people are just nice above and beyond the call of duty, reason, or what you would normally expect. That brings it from a purely intellectual pleasure to having a social dimension that is hard to get in other rarified circles.

That's fascinating commentary on Haskell itself, especially the notion of this culture of sharing for something that seems academically-driven.

Bryan: Exactly. It comes from both the open source nature of the Haskell community and the academic nature. Each informs the other. The open source software tends to have a higher level of abstraction than you would see in other languages, and the academic stuff tends to be more easily applied to open source software development.

That's really pretty exciting and a whole lot of fun.

What was your goal in sitting down to write Real World Haskell? Besides saying there's no practical introduction to the language, what else did you hope to achieve?

Bryan: Obviously one of the things that I wanted was to help to grow the size of the Haskell community. It was not going to be easy for outsiders to learn Haskell and to join in and to contribute and to witness the sheer fun that we get to have if what they had to do was read 60 research papers and download a whole pile of libraries and figure things out for themselves. For all of that, we have an open community and we have a huge IRC channel that's very friendly and a busy mailing list that's also very friendly. It really helps to just have a book that you can look at and say, "Oh, okay. This is how I do stuff." That was one part of it.

To an extent, a part of it too was kind of an apology from my older self to my younger self for not following my muse back in the day. I made an explicit choice to not go and do a PhD and nevertheless, I really wanted to be able to contribute something to the Haskell community. That's part of why I didn't just write the book with Don and John; we made sure that it was going to be available under a Creative Commons license so that we could through our work do our part for the benefit of all of these people with whom we are friends and colleagues and who inspire us with their work.

I felt the same way about a couple of books I've written. One's easily the equivalent of a master's thesis. Another's the equivalent of a doctoral thesis.

Bryan: I guess that's what gets us through the night, right?

We don't get letters after our name, but we occasionally get royalty checks.

Bryan: Yep, and there's no shortage of long nights when you're in mid-write; that's for sure.

I mean one of the things that struck me about the book when I first saw the printed copy was the pages and pages of thank yous to contributors.

Bryan: Yeah. The aspect that was the most exciting about it was how we went about developing it in a sort of semi-open way. We kept the source code under lock and key, but we produced chapters on a fairly regular schedule, and we made them freely available for people to read. With a very small amount of web application hacking, I ended up with this website that took off in a completely unexpected and crazy way.

To give you an example of how the technical book writing process normally works and I know that you know all of these details, but other people won't.... It's usual for there to be two or three technical reviewers on any given title who will go through and read the entire book and offer comments. We had no paid technical reviewers in our case, but we had -- I don't know how many, but we certainly had at least 800 people by name offer comments. We had seven-and-a-half thousand comments that they provided as we developed the book. We were both inspired and driven to greater efforts by their enthusiasm. In addition, it made a substantial difference to the ultimate quality of the book too. We had people who had never heard of Haskell before stumbling across the manuscript. We had people who were day-to-day Haskell programmers, people who were learning it. We also had people who had PhDs and had been working in Haskell at the very highest echelon for almost 20 years all commenting on the book. We, in many instances, got to be sure that we had both the technical content correct and that we were presenting it in an accessible way. We had this constant flow of encouragement, of conversation back-and-forth between people who would argue over what was the best way to present a particular point.

It was really quite astounding. I haven't seen that kind of community form around the development of any other book. A few other people have written their books in the open. We originally got the idea from two sources. I had read Adrian Holovaty and Jacob Kaplan-Moss' draft of their Django book, and had seen how the Free Software Foundation had opened up the development of version 3 of the GNU General Public License to comment from the community. In neither of those cases did they get even 10% of the amount of input that we did. Part of that had to do with how we developed the software; part of it had to do with the nature of the Haskell community, and part of it I can't explain.

Magic happens here.

Bryan: Yeah. Exactly. But it was wonderful. There were many nights where I would rather have done almost anything else. I would've preferred to be peeling potatoes professionally than writing the book, but then somebody would show up with a word of encouragement and I'd go, "Oh, yeah. This is part of why I care about this stuff: people actually find it useful and they're enjoying it."

Many tech reviews miss that. For many of the books I've written, we always search out experts on the subject. We rarely are able to find a novice. For some books that's okay, but for a book like this that's invaluable. Someone who's written shell scripts for five years as a system administrator might say "I want to learn Haskell, but I have no idea what covariance, contravariance, and catamorphisms are!" You try to explain it, but even the explanation doesn't make sense to someone without the theoretical background. Tech reviews rarely get that reader.

Bryan: Exactly. We had plenty of input from people who had no prior knowledge of Haskell or functional programming. In addition, the fact that each of us has spent plenty of time out in the field grubbing around in the imperative and object-oriented worlds meant that we were maybe better able to phrase a lot of the presentation than somebody who got their PhD in Haskell 20 years ago and for whom this was the only world they knew. It's much easier to connect with people if you have a lot in common with them than if you're trying to figure out what that alien species needs for you to communicate by semaphore.

I noticed how you said "If you're used to Python list comprehensions, the syntax is a little bit different, but the same idea happens here."

Bryan: Yeah.
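That parallel is worth making concrete. As a small illustrative sketch (the function name here is my own choice, not from the book): a Python comprehension such as `[x * x for x in xs if x % 2 == 0]` translates almost directly into Haskell, with `|` in place of `for` and a comma-separated guard in place of `if`:

```haskell
-- Haskell analogue of the Python comprehension
--   [x * x for x in xs if x % 2 == 0]
-- The generator (x <- xs) and the guard (even x) play the same
-- roles as Python's "for" clause and "if" clause.
evenSquares :: [Int] -> [Int]
evenSquares xs = [x * x | x <- xs, even x]

main :: IO ()
main = print (evenSquares [1 .. 10])
```

The main syntactic differences are the `<-` for drawing elements from the list and the guard being just another comma-separated clause, but the idea carries over unchanged.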

What's next for Haskell?

Bryan: I think over the coming couple of years what's left of the performance gap between Haskell and languages like C and C++ is going to be substantially narrowed for single-threaded code. For things like concurrent, parallel, and distributed programs -- certainly for concurrent and parallel programs -- Haskell will be, by and large, in considerably better shape than almost all other languages.

The big thing coming in with GHC 6.10 is a technology preview of a technique called nested data parallelism. This is a way of writing numerically intensive code that operates over arrays, which has been the domain of languages like Fortran for almost half a century now. You write the code in a way that expresses it naturally, and the Haskell implementation, both the compiler and the runtime, will conspire on your behalf to have it run efficiently on a multicore or, perhaps in the not too distant future, a heterogeneous kind of system. By heterogeneous, I mean something like a multicore machine that might have, say, a programmable GPU, or a system somewhat similar to the Cell-style machines that have all kinds of peculiar processing elements with different amounts of bandwidth and memory between them. That is a very exciting area of research at the moment. It's not seeing an awful lot of progress in other languages, because Haskell's type system and various other aspects of the language all play together to make it practical here.

Distributed programming I'm less sure of, although I'd be happy to see it. The interesting thing about building distributed programs is that people have kind of given up on the single-system-image idea of distribution in general. People who are developing software systems that need to operate over a large number of machines, where the systems have a lot of cooperating but not necessarily tightly coupled parts, are switching from systems like CORBA or Java RMI to much more loosely coupled, language-agnostic technologies that give you far fewer capabilities: messaging systems such as AMQP or XMPP, and also simple pieces of glue like HTTP with REST. The nice thing about moving to those language-agnostic, fairly limited-capability systems is that they even out the playing field for all kinds of different languages. Haskell can play just as well in that style of distributed programming as any other language can, without having to have a huge amount of crazy scaffolding like Erlang's OTP system, or like Java RMI and all of the stuff around Enterprise JavaBeans and so on.

No one in the Haskell world needs to write an IDL compiler for CORBA.

Bryan: Yeah. I don't see that really arising very much. Probably things like IDL compilers will show up as Haskell slowly diffuses out into the commercial world and somebody at some big company says, "Hey, we're going to write a module of our control system in Haskell and it needs to be able to talk to this C++ code that we wrote in 1994." Those kinds of big frameworks are not dead. There will always be consulting companies and large computer vendors who will want to sell you that heavyweight technology, because that's how they milk you for money. But people who actually care about getting stuff done, rather than satisfying copious specifications and legal strictures, are going to choose the lighter-weight technologies that let them do a little bit in PHP, a little bit in Perl, a little bit in Python, a little bit in C++, a little bit in every other language under the sun, and build something that works with whatever tools happen to be appropriate at hand.

Nice interview: I've definitely got to get back to studying the online version of RWH.

One minor nit: you want Eric (C#/Haskell) *Meijer*, and not Eric (CSS) Meyer.

As a total newbie, who has literally just stumbled onto this business, I'm fascinated. For instance I'm fascinated that something like a = b + c can be an equation and not just an assignment statement. The only other language that I've even seen that allowed you to write defining equations which remained in force throughout a run was Knuth's Metafont. That was quite startling at the time, over twenty years ago. I'm looking forward to digging into this stuff.

Interesting stuff. But I think the man must have said "heterogeneous", not "hydrogenous", unless he's figured out a way to manufacture fuel cells as a by-product of coding (mind you, if that were possible in any language, it would certainly be Haskell).

Just bought my copy of the book.

I am doing/learning/coding simple stuff in Haskell nowadays and enjoying it. I look forward to both F# (done already) and Haskell (and maybe Erlang or Scala).

I wish you could give us more information about future applicability of Haskell to heterogeneous hardware stuff.

thanks for the interview, which I just stumbled over -- a year late, but still interesting.

small note: hydrogenous (in two places) should probably read

