Top Open Source SDN Projects to Keep Your Eyes On

By Sarah Sorensen
August 1, 2012

Interest and momentum around OpenFlow and software-defined networking (SDN) have certainly been accelerating. I think people are so excited about SDNs because, while we have seen a lot of innovation around the network - in the wireless space, the data center, and all the applications - there has been very little innovation in the network itself - the routers and switches - within the last decade. The prospect of completely re-architecting the network, by separating the control plane from the data plane, opens up a lot of new possibilities.

With SDNs, organizations aren't constrained by how the network is built. They are free to build a dynamic, fluid infrastructure that can support fluctuating demands, shorter implementation cycles (check out Stanford's Mininet), and completely new business models. But, as I have mentioned before, we are just at the beginning. While those of us watching this space have been impressed by the rapid pace of innovation within SDNs to date, it's hard to predict what's going to happen next. But that won't stop us from trying!

I spent the last few weeks checking in with some SDN pioneers to find out what's going on that's of interest in the SDN space these days. Among those experts whom I spoke with were Chris Small (CS), Network Researcher at Indiana University, Phil Porras (PP), Program Director at the Computer Science Lab of SRI, and Dan Talayco (DT), Member of the Technical Staff at Big Switch Networks. The following are some excerpts from my discussions:

What are the top projects in your mind going on right now around OpenFlow and SDNs?

DT: "It's hard for me to choose just a couple to talk about. Which is a great thing, isn't it? There are three very different parts of the ecosystem in SDN. First, there are the switches providing the infrastructure that moves packets. Then there are controllers. This is a layer of centralized software controlling the forwarding behavior of the infrastructure (most often through the OpenFlow protocol) and providing a platform for the third layer, which is all the SDN Applications. These are software programs that run on controllers. They are given visibility into the topology of the network and are notified of events in the network to which they respond.
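Talayco's three-layer picture - applications registering with a controller and being notified of network events - can be sketched in a highly simplified, entirely hypothetical form (these class names are illustrative, not any real controller's API):

```python
# Toy sketch of the controller/application split in an SDN stack.

class Controller:
    """Central software that owns the switch connections and fans out events."""
    def __init__(self):
        self.apps = []  # SDN applications registered with this controller

    def register(self, app):
        self.apps.append(app)

    def notify(self, event):
        # Every registered application sees the event and may return an action
        # for the controller to translate into OpenFlow messages.
        return [app.handle(event) for app in self.apps]

class HubApp:
    """A trivial SDN application: flood every packet it is told about."""
    def handle(self, event):
        return ("flood", event["in_port"])

controller = Controller()
controller.register(HubApp())
actions = controller.notify({"type": "packet_in", "in_port": 3})
```

A real controller such as Floodlight does far more (connection handling, OpenFlow message encoding, topology tracking), but the register-and-notify shape is the same.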

Here are four open source SDN projects I'd point to. I'm more familiar with the lower two layers (switches and controllers), so mine are from there:

Floodlight is an open source controller written in Java. It was introduced less than a year ago, I believe, but has been gaining rapid acceptance in the OpenFlow community. Currently it has more public forum discussion traffic than all other controllers combined.

Open vSwitch (OvS) is a multi-layer virtual switch released under the open source Apache 2.0 license. Its focus is primarily as a virtual switch, though it has been ported to various hardware platforms as well. Some of the originators of OpenFlow created OvS.

OFTest was developed at Stanford. It's a framework and set of tests, implemented in Python, that give people a way to validate the functionality of their OpenFlow switches. A simple software switch written in Python to validate OpenFlow version 1.1 is even distributed with OFTest.

Indigo is a project, also started at Stanford, providing an implementation of OpenFlow on hardware switches. It runs on several hardware platforms and has been used in a number of different environments. This project is currently being updated to describe a generic architecture for OpenFlow switches targeting hardware forwarding."

CS: "While the work that's being done with the controllers is very important, I think the most interesting pieces to look at are the actual applications. These help us make sense of what's possible. The first one that I think is interesting is one we are doing at Indiana University: FlowScale, an OpenFlow-based load balancer. We have deployed it in our campus network, in front of our IDS systems, and are taking all of our traffic through it (a 48-port, 10Gig switch). It does all the routing, failover, etc. you would want a load balancer to do, but cheaper than an off-the-shelf solution.
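A flow-sticky hash is the usual way a load balancer like this keeps every packet of a connection going to the same IDS sensor. A minimal sketch of that idea - an assumed design for illustration, not FlowScale's actual code:

```python
# Hash the flow's 5-tuple so all packets of one flow land on the same
# IDS sensor port (illustrative sketch; port numbers are hypothetical).
import zlib

SENSOR_PORTS = [1, 2, 3, 4]  # output ports, each feeding one IDS box

def pick_sensor(src_ip, dst_ip, src_port, dst_port, proto):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    # crc32 is stable across runs, unlike Python's salted built-in hash()
    return SENSOR_PORTS[zlib.crc32(key) % len(SENSOR_PORTS)]

a = pick_sensor("10.0.0.1", "10.0.0.2", 5555, 80, "tcp")
b = pick_sensor("10.0.0.1", "10.0.0.2", 5555, 80, "tcp")
```

In an OpenFlow deployment, the controller would install a flow rule for each such 5-tuple so subsequent packets are forwarded in hardware without revisiting the controller.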

The other key project I would look at is the work that CPqD is doing. They are basically a Brazilian Bell Labs, and they are working on RouteFlow, which runs a virtual topology with open source software and then replicates that virtual topology into the OpenFlow switches. This is how you can take a top-of-rack switch, convert it into a very capable router, and integrate a lot of the different capabilities needed for research, campus, and enterprise deployments."

PP: "I've been looking at this space with respect to security and think there are a few core strategies that researchers are exploring to see how best to develop security technology that can dynamically respond to either threats in the network or changes in the OpenFlow stack. The idea is to monitor threats and then have the security technologies interact with the security controllers to apply new, dynamic mediation policies.

There is FlowVisor, led by Ali Al-Shabibi out of Stanford and Rob Sherwood (who used to be at Stanford, but is now at Big Switch), which works to secure network operations by segmenting, or slicing, the network control into independent virtual machines. Each network slice (or domain) is governed by a self-contained application, architected to not interfere with the applications that govern other network slices. Most recently, they started considering whether the hypervisor layer could also be a compelling layer in which to integrate enterprise- or data center-wide policy enforcement.
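The slicing idea can be modeled very simply: each slice owns a piece of the flowspace, and a rule from a slice's application is admitted only if it stays inside that space. A toy sketch (not FlowVisor's real implementation, which slices on full OpenFlow match fields):

```python
# Toy model of FlowVisor-style slicing: each slice owns a flowspace
# (here reduced to a set of VLAN IDs), and a rule is rejected if it
# would touch traffic outside the slice's space.

SLICES = {
    "research": {"vlans": {10, 11}},
    "production": {"vlans": {20}},
}

def admit_rule(slice_name, rule):
    """Return True if the rule stays within the slice's flowspace."""
    allowed = SLICES[slice_name]["vlans"]
    return rule["vlan"] in allowed

ok = admit_rule("research", {"vlan": 10, "action": "forward:2"})
blocked = admit_rule("research", {"vlan": 20, "action": "forward:2"})
```

The isolation guarantee comes from the hypervisor sitting between controllers and switches: no slice's application ever sees, or can program, traffic belonging to another slice.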

We [at SRI] have been working on FortNOX, which is an effort to extend the OpenFlow security controller to become a security mediation service - one that can apply strong policy in a network slice to ensure there is compliance with a fixed policy. It's capable of instantiating a hierarchical trust model that includes network operations, security applications, and traditional OpenFlow applications. The controller reconciles all new flow rules against the existing set of rules and, if there's a conflict, the controller, using digital signatures to authenticate the rule source, resolves it based on which author has highest authority.
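The rule-reconciliation step Porras describes - detect a conflict between a new flow rule and the installed set, then resolve it by the author's authority - can be sketched as a toy model (FortNOX additionally authenticates rule sources with digital signatures and detects subtler indirect conflicts, both omitted here):

```python
# Toy model of authority-based flow-rule reconciliation, in the spirit of
# the FortNOX design described above (not its actual code).

AUTHORITY = {"operator": 3, "security_app": 2, "openflow_app": 1}

def conflicts(a, b):
    # Simplified: two rules conflict if they match the same flow
    # but prescribe different actions.
    return a["match"] == b["match"] and a["action"] != b["action"]

def reconcile(installed, new_rule):
    for i, old in enumerate(installed):
        if conflicts(old, new_rule):
            if AUTHORITY[new_rule["author"]] > AUTHORITY[old["author"]]:
                installed[i] = new_rule   # higher authority wins
            return installed              # lower/equal authority: rejected
    installed.append(new_rule)            # no conflict: accept as-is
    return installed

rules = [{"match": "dst:10.0.0.5", "action": "drop", "author": "security_app"}]
rules = reconcile(rules, {"match": "dst:10.0.0.5", "action": "forward",
                          "author": "openflow_app"})
```

Here an ordinary OpenFlow application cannot override a drop rule installed by a security application, which is exactly the hierarchical trust model the quote describes.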

The CloudPolice team, led by Ion Stoica from U.C. Berkeley in concert with folks from Princeton and Intel Labs Berkeley, is trying to use OpenFlow as a way to provide very customized security policy control for virtual OSs within the host. Here, the responsibility for network security is moved away from the network infrastructure and placed into the hypervisor of the host, which mediates the flows with custom policies per VM stack.

The University of Maryland, along with Georgia Tech and the National University of Sciences and Technology (Pakistan), is working on employing OpenFlow as a delivery mechanism for security logic, to more efficiently distribute security applications to last-hop network infrastructure. The premise is that an ISP or professional security group charged with managing network security could deploy OpenFlow applications into home routers - where most malware infections take place - to provide individual protection and pass better summary data up to the ISP layer (or other enforcement point), producing both higher-fidelity threat detection and highly targeted threat responses."

Why are these projects important?

DT: "Because controllers are the pivot between switching and SDN applications, it's a really important part of the system to develop right now. This is why I think Floodlight is so important. It's been exciting to see the growing public contributions to the basic functionality and interfaces that were originally defined. I think a full web interface was recently added.

What's important is changing, though, because of new projects and the rapidly growing ecosystem we are seeing. For instance, OFTest has started to get more attention again, partly because we've been adding lots of tests to it and partly because the broader ONF test group has been developing a formal test specification.

OpenFlow on hardware is still interesting to me because I think being able to control and manage the forwarding infrastructure via SDN will be important for the foreseeable future and maybe forever. This is why I continue to be active in Indigo."

CS: "FlowScale is a proof point of the flexibility of OpenFlow and its potential to enable innovation. If you have an application that you want to deploy, you don't have to wait for vendor implementations or for capable hardware; you can take existing hardware and a little bit of software and implement it very quickly. For example, we have been working with other researchers who are interested in new multicast algorithms or a PGP implementation; instead of having to wait for major vendors to decide it's okay to put something in their hardware, we can implement it very inexpensively, try it at line rate, and then deploy it more widely.

It's a little like the stuff that ONRC, the collaboration between Stanford and Berkeley, has been working on these past years. They are doing a lot of proof-of-concept applications with OpenFlow and continue to push new ideas out. They are taking new research and building implementations that can be used in the future for new products. These applications are further out, but they give you ideas about what can be expanded on and made into new products. They have worked on a number of research projects - such as load balancing as a network primitive (which we incorporated into FlowScale) and their recent Header Space Analysis, which can verify the correctness of the network to ensure the network's policy matches its actual physical deployment.

RouteFlow is important because it proves you can remove the complexity from the hardware and get the same capabilities; it puts all the features and complexity in the PCs rather than the switches. We have been working with them on a demonstration at the Internet2 Joint Techs Conference, where we are going to show RouteFlow operating in hardware switches as a virtualized service deployed on the Internet2 network. This is the first time we have seen anything like this on a national backbone network."

PP: "The security projects represent two branches of emphasis: one focused on using SDNs for more flexible integration of dynamic network security policies and the other for better diagnosis and mitigation. One branch is exploring how and where dynamic network security can be implemented in the OpenFlow network stack: the controller (control plane), the network hypervisor (flowvisor), or even the OS hypervisor. The other branch is attempting to demonstrate security applications that are either written as OpenFlow applications for more efficient distribution or are tuned to interact with the OpenFlow controller to conduct dynamic threat mitigation."

What are some of the hurdles?

DT: "The rapid change in the OpenFlow protocol specification has been a challenge we've all faced. It's probably a symptom of the desire to drive change into these projects as quickly as possible. OvS, for instance, has not been updated since OpenFlow 1.0, though it has a number of its own extensions.

The second challenge faced by those working on open source, especially at the protocol level, is that there are often conflicting requirements between code that serves as a reference to aid understanding and code that provides a basis for developing production-quality software.

The Indigo project has suffered from two other things: first are the high expectations that it should provide a complete managed switch implementation, which normally involves a large company to implement and support, and second because there is still a significant component that's only released as a binary. I think as the community goes forward, we are going to see additional work that's going to make it a lot easier to use all these tools and products in many environments."

CS: "Right now, OpenFlow projects on hardware switches are still immature. It's important to recognize it's a different technology, with different limitations, and there are some things that are simply not possible right now. But if you don't need that complete list of features, then it may make perfect sense to use some of these applications. Looking at the space, it's easy to recognize that things are moving along quite rapidly - with new vendors, specifications, hardware support, etc. every day - so things will catch up and we will be able to implement many things that are not possible right now."

PP: "The entire concept of SDN appears to be antithetical to our traditional notions of secure network operations. The fundamentals of security state that at any moment in time you know what's being enforced. This requires a well-defined security policy instantiated specifically for the target network topology, that can be vetted, tested and audited for compliance.

Software defined networks, on the other hand, embrace the notion that you can continually redefine your security policy. They embrace the notion that policies can be recomputed or derived just in time, by dynamically inserting and removing rules, as network flows or the topology changes. The trick is in reconciling these two seemingly divergent notions.

In addition, OpenFlow applications may compete, contradict, override one another, incorporate vulnerabilities, or even be written by adversaries. The possibility of multiple, custom and 3rd-party OpenFlow applications running on a network controller device introduces a unique policy enforcement challenge - what happens when different applications insert different control policies dynamically? How does the controller guarantee they are not in conflict with each other? How does it vet and decide which policy to enforce? These are all questions that need to be answered in one way or another.

I think it's best to have these conversations about how we envision securing OpenFlow and empowering new security applications now. Security has had a reputation of being the last to arrive at the party. I think this is a case where we could make a big positive impact on a technology that could, in turn, provide a big positive impact back to security."

What Does the Future Look Like for Open Source and SDNs?

DT: "I think we are going to see new architectures and reference implementations that will accelerate the deployment of SDNs in the very near future. People are often dismissive of 'one-off' projects, but the reality is that we face a host of problems, each of which requires a slightly different solution, while all of them can be addressed by SDN approaches. These projects are already coming out of the woodwork as more people better understand SDN. I've heard a few people start to say 'the long tail is the killer app for SDN.'"

CS: "I believe there will be bottom-up adoption, where more and more applications are implemented until there is critical mass and it makes more sense, from a time and cost perspective, to not have to manage two different networks - traditional and SDN-based. When that happens, I think we will see a switch to SDNs."

PP: "OpenFlow has some very exciting potential to drive new innovations in intelligent and dynamic network security defenses for future networks. Long term, I think OpenFlow could prove to be one of the more impactful technologies driving a variety of new solutions in network security. I can envision a future in which a secure OpenFlow network:

• incorporates logic at the control or infrastructure layer to mediate all incoming flow rules against an organization's network security policy, in a way that can't be circumvented and is complete.

• allows the full dynamism of OpenFlow applications to produce optimal flow-routing decisions, while leaving those applications free to remain unaware of the current security policy and not depended upon to preserve network security. Rather, operators will trust that security enforcement occurs at the control or infrastructure layer.

• enables InfoSec practitioners to develop powerful future OpenFlow-enabled security applications that can dynamically reprogram flow routing to mitigate threats to the network, remove or quarantine assets that violate security policy or fail to exhibit runtime integrity, and react to network-wide failure modes.

When we can achieve all three of these, we'll be able to provide some compelling reasons why OpenFlow has a distinct advantage over existing networking, while instilling the confidence we need to embrace all the other benefits of SDNs. I believe we can reconcile static and dynamic policy enforcement and create all new mitigation services that are much more intelligent and effective countermeasures to better defend our networks."
