Sunday, 1 September 2013

Software Defined Networking - Tutorial (In Making)

Hi Friends,

I am new to this world of Software Defined Networking (SDN), which many (even I) believe will revolutionize how we look at computer networking. I took a course on SDN on Coursera (https://www.coursera.org/) under Prof. Nick at Georgia Tech. I am trying to put down here what I learnt, my assignments, and so on. Your suggestions are welcome.

Thanks.

                                                SOFTWARE DEFINED NETWORKING (SDN)

INTRODUCTION:
SDN is an emerging network architecture where the control plane is decoupled from the data plane of the network equipment. As a result, the network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the applications. Please refer to [1].
OpenFlow protocol:
The OpenFlow protocol establishes communication between the control plane and the data plane of supported network devices.

SDN Controllers (POX)
POX is a Python-based SDN controller.
Features:
It supports only OpenFlow v1.0.
Advantages:
Widely used, maintained, and supported.
The code is relatively easy to read and write.
Disadvantage:
Low performance.

Tutorial on POX controller
TOPOLOGY (Fig 1):

[Figure 1: a single switch connected to n hosts]

To verify the working of the SDN controller, we use Mininet, a network emulator, to create the topology. The following section gives an insight into Mininet and the APIs useful for creating topologies.
Mininet is a network emulator. It runs a collection of end-hosts, switches, routers, and links on a single Linux kernel. It uses lightweight virtualization to make a single system look like a complete network, running the same kernel, system, and user code.
Working with Mininet:
a.      Creating the Topology
Important classes, methods, functions, and variables to code the topology in python

Topo: the base class for Mininet topologies. Its key methods are:
addSwitch(): adds a switch to a topology and returns the switch name
addHost(): adds a host to a topology and returns the host name
addLink(): adds a bidirectional link to a topology
Mininet: the main class to create and manage a network. Its key methods are:
start(): starts your network
pingAll(): tests connectivity by trying to have all nodes ping each other
stop(): stops your network
net.hosts: all the hosts in a network
dumpNodeConnections(): dumps connections to/from a set of nodes
setLogLevel('info' | 'debug' | 'output'): sets Mininet's default output level; 'info' is recommended as it provides useful information.
b.      Useful vi editor commands
"i" – enter insert mode (after opening the file with sudo vi filename.py)
":q" – quit without saving
":wq" – save and quit


MININET TOPOLOGY TEMPLATE:
Look for the comments, which show where to place the logic for the topology of your wish.
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.util import dumpNodeConnections
from mininet.log import setLogLevel

class Topo_of_wish(Topo):
    "Single switch connected to n hosts."
    def __init__(self, n=2, **opts):
        # Initialize topology and default options
        Topo.__init__(self, **opts)
        # Enter the logic for the topology you wish to build here

def simpleTest():
    "Create and test a simple network"
    topo = Topo_of_wish(n=4)
    net = Mininet(topo)
    net.start()
    print("Dumping host connections")
    dumpNodeConnections(net.hosts)
    print("Testing network connectivity")
    net.pingAll()
    net.stop()

if __name__ == '__main__':
    # Tell mininet to print useful information
    setLogLevel('info')
    simpleTest()

Example 1
The logic for the single-switch topology shown in Fig 1 is written in the class Topo_of_wish of the above template:
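The code from the course screenshot is not reproduced here, but the logic would look roughly like the sketch below. Since the real base class lives in Mininet, a minimal stand-in class (_TopoSketch, a name invented for this illustration) replaces mininet.topo.Topo so the wiring can be followed without a Mininet install; in the actual template, only the body of __init__() goes inside Topo_of_wish.

```python
class _TopoSketch(object):
    """Minimal stand-in for mininet.topo.Topo (illustration only)."""
    def __init__(self, **opts):
        self.links = []

    def addSwitch(self, name):
        return name

    def addHost(self, name):
        return name

    def addLink(self, a, b):
        # Mininet links are bidirectional; record one pair per link
        self.links.append((a, b))

class SingleSwitchTopo(_TopoSketch):
    """Single switch connected to n hosts, as in Fig 1."""
    def __init__(self, n=2, **opts):
        _TopoSketch.__init__(self, **opts)
        switch = self.addSwitch('s1')
        for h in range(n):
            # Hosts are named h1..hn; each gets one link to the switch
            host = self.addHost('h%d' % (h + 1))
            self.addLink(host, switch)
```

With Mininet installed, subclassing Topo instead of _TopoSketch (keeping the same __init__ body) gives a topology that pingAll() can verify.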


The following screenshot shows the start of topology construction, the network connectivity test, and the topology being stopped.




Example 2

Data Center Networks:
Data center networks generally have a tree-like topology. End-hosts connect to top-of-rack switches, which form the leaves (edges) of the tree; one or more core switches form the root; and one or more layers of aggregation switches form the middle of the tree. In a basic tree topology, each switch (except the core switch) has a single parent switch. Additional switches and links may be added to construct more complex tree topologies (e.g., fat tree) in an effort to improve fault tolerance or increase inter-rack bandwidth.



                                                Figure 2: Tree topology of a data center network

The following snapshot gives the logic to write the structure of the topology.
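The snapshot itself is not reproduced here; under the same stand-in assumption as Example 1 (a _TopoSketch class invented here to replace mininet.topo.Topo, so the sketch runs without Mininet), a depth-2 tree with one core switch, k edge switches, and k hosts per edge switch could be built like this:

```python
class _TopoSketch(object):
    """Minimal stand-in for mininet.topo.Topo (illustration only)."""
    def __init__(self, **opts):
        self.switches, self.hosts, self.links = [], [], []

    def addSwitch(self, name):
        self.switches.append(name)
        return name

    def addHost(self, name):
        self.hosts.append(name)
        return name

    def addLink(self, a, b):
        self.links.append((a, b))

class TreeTopo(_TopoSketch):
    """Depth-2 tree: one core switch, k edge switches, k hosts per edge."""
    def __init__(self, k=2, **opts):
        _TopoSketch.__init__(self, **opts)
        root = self.addSwitch('c1')
        for e in range(k):
            # Each edge (top-of-rack) switch has a single parent: the core
            edge = self.addSwitch('e%d' % (e + 1))
            self.addLink(root, edge)
            for h in range(k):
                # Hosts hang off the edge switches, the leaves of the tree
                host = self.addHost('h%d' % (e * k + h + 1))
                self.addLink(edge, host)
```

As with Example 1, subclassing the real Topo class with the same __init__ body yields a topology Mininet can emulate.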


                                    POX CONTROLLER
The POX controller is a Python-based controller. Here, the POX controller works on the single-switch topology shown in Example 1 (Figure 1).
a.      Useful POX APIs
1.      The connection.send( … ) function sends an OpenFlow message to a switch. When a connection to a switch starts, a ConnectionUp event is fired. The hub example below invokes a _handle_ConnectionUp() function that implements the particular application logic.
2.      ofp_action_output is an action class for use with ofp_packet_out and ofp_flow_mod. It specifies a switch port that you wish to send the packet out of. It can also take various "special" port numbers. For instance, to create an output action that would send packets out of all ports except the one on which the packet originally arrived:
out_action = of.ofp_action_output(port = of.OFPP_FLOOD)
3.      ofp_match – objects of this class describe packet header fields and an input port to match on. All fields are optional; items that are not specified are "wildcards" and will match anything.
Some notable fields of ofp_match objects are:
dl_src – The data link layer (MAC) source address
dl_dst – The data link layer (MAC) destination address
in_port – The packet input switch port
Example: Create a match that matches packets arriving on port 3:
match = of.ofp_match()
match.in_port = 3
4.      ofp_packet_out message instructs a switch to send a packet. The packet might be one constructed at the controller, or it might be one that the switch received, buffered, and forwarded to the controller. Notable fields are:
buffer_id – The buffer_id of a buffer you wish to send. Do not set if you are sending a constructed packet.
data – Raw bytes you wish the switch to send. Do not set if you are sending a buffered packet.
actions – A list of actions to apply
in_port – The port number this packet initially arrived on if you are sending by buffer_id; otherwise OFPP_NONE.
5.      ofp_flow_mod
This instructs a switch to install a flow table entry. Flow table entries match some fields of incoming packets and execute a list of actions on matching packets. The actions are the same as for ofp_packet_out, mentioned above. The match is described by an ofp_match object.
Notable fields are:
idle_timeout – Number of idle seconds before the flow entry is removed. Defaults to no idle timeout.
hard_timeout – Number of seconds before the flow entry is removed. Defaults to no timeout.
actions – A list of actions on matching packets (e.g., ofp_action_output)
priority – When using non-exact (wildcarded) matches, this specifies the priority for overlapping matches. Higher values are higher priority. Not important for exact or non-overlapping entries.
buffer_id – The buffer_id of a buffer to apply the actions to immediately. Leave unspecified for none.
in_port – If using a buffer_id, this is the associated input port.
match – An ofp_match object. By default, this matches everything, so you should probably set some of its fields!
Example: Create a flow_mod that sends packets from port 3 out of port 4.
fm = of.ofp_flow_mod()
fm.match.in_port = 3
fm.actions.append(of.ofp_action_output(port = 4))
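To make the wildcard and priority semantics above concrete, here is a small plain-Python model of flow-table lookup (this is not POX code; the function and field names are invented for illustration): unset fields act as wildcards, and among overlapping matches the highest-priority entry wins.

```python
def matches(match, packet):
    """A match field set to None is a wildcard and matches anything."""
    return all(value is None or packet.get(field) == value
               for field, value in match.items())

def lookup(flow_table, packet):
    """Return the actions of the highest-priority matching entry, or None."""
    hits = [entry for entry in flow_table if matches(entry['match'], packet)]
    if not hits:
        return None
    return max(hits, key=lambda entry: entry['priority'])['actions']

# A table with a specific entry (port 3 -> port 4) and a low-priority
# catch-all entry that floods everything else.
table = [
    {'match': {'in_port': 3, 'dl_dst': None}, 'priority': 10,
     'actions': ['output:4']},
    {'match': {'in_port': None, 'dl_dst': None}, 'priority': 1,
     'actions': ['flood']},
]
```

A packet arriving on port 3 hits both entries, and the priority-10 rule wins; a packet on any other port falls through to the flood rule, mirroring how overlapping wildcarded entries behave in a real switch.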

How to make the POX controller use the Mininet topology created?
If you do not specify a controller when creating the topology, Mininet uses its built-in default controller. To attach the topology to POX instead, start Mininet with a remote controller (for example, sudo mn --controller=remote) and run POX on the same machine.
The following shows a simple POX application (a hub) that such a controller runs:

from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.util import dpidToStr

log = core.getLogger()

def _handle_ConnectionUp(event):
    # Install a rule that floods every packet out of all ports (hub behavior)
    msg = of.ofp_flow_mod()
    msg.actions.append(of.ofp_action_output(port = of.OFPP_FLOOD))
    event.connection.send(msg)
    log.info("Hubifying %s", dpidToStr(event.dpid))

def launch():
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
    log.info("Hub running.")




