Design And Implementation Of Virtual Network Testbeds
For Routing Protocols
Julius A. Bankole
BSc., University of Ibadan, (Nigeria), 1998
MSc., Beijing University of Posts & Telecommunications, (China), 2003
Project Report Submitted In Partial Fulfillment
Of The Requirements For The Degree Of
Master Of Science
in
Mathematical, Computer, And Physical Sciences
(Computer Science)
The University Of Northern British Columbia
April 2009
©Julius A. Bankole, 2009
Abstract
In this project, we present the design and implementation of virtual network testbeds
for studying routing changes. A virtual network testbed is a computer network that
is created entirely in software, while routing changes directly impact the reliability and
the reachability information of the network. We used testbeds to emulate a small and
a large-scale network on a single Linux machine. These emulated networks allow the
study of network behavior and operations, which we examined using two routing
protocols: Routing Information Protocol (RIP) and Open Shortest Path First (OSPF).
We implemented a fifteen-node network to study RIP, and a model of the GEANT
network to examine OSPF, in virtual network testbeds. Each testbed represents an
autonomous system (AS) or an intra-domain environment. Therefore, these environments
provided us with the opportunity to evaluate routing changes in an AS. We used the
testbeds to compare the routing of the original network with the new routing after links
and routers were removed, to see what changes occur. The GEANT network is the
large-scale network used for investigations in this project. We then compared our
emulation results for this large-scale network with simulation work on the same network
topology, the GEANT network, and confirmed that our emulation studies also identified
important links and routers in the same network.
Acknowledgments
First of all, I give thanks to God, the Almighty. The one who was, who is, and who
is to come for preserving me throughout this academic period.
My sincere thanks and appreciation go to my supervisor, Dr. David Casperson, for his
mentoring and guidance. I really appreciate his doggedness and willingness to defy
all odds in supervising me to the completion of this Master's degree at UNBC. His patience,
dedication to duty and attention to detail have really helped me to improve my
writing style. I am deeply indebted to his "touch of class".
I wish to thank members of my supervisory committee: Dr. Charles Brown and
Dr. Matt Reid for reviewing my project report and their invaluable remarks. They
were always there to provide me guidance and support. Their useful guidance, prompt
and constructive feedback have been of tremendous help to my training at UNBC.
Many thanks to Dr. Reid for helping me with technical assistance regarding experimental processes and reporting.
Most importantly, I would like to thank my darling wife Abisola and our kids, Temmy
and Tolu. Their love, support and belief in me never waned. They provided a
pillar of strength that nurtured the environment for me to complete this M.Sc. In
addition, I wish to extend gratitude to my family, friends and mentors for their help,
encouragement and prayers.
Contents
Abstract                                                                    ii
Acknowledgments                                                             iii
Table of Contents                                                           iv
List of Figures                                                             vii
Glossary                                                                    ix
1 Introduction                                                              1
  1.1 Preamble                                                              1
  1.2 Motivations                                                           2
  1.3 Contributions                                                         4
  1.4 Overview of the project                                               5
2 Background and Literature Review                                          7
  2.1 Introduction                                                          7
  2.2 Virtualization technologies                                           8
  2.3 UML-based virtual networks                                            9
  2.4 Simulation versus emulation of networks                               11
  2.5 Routing in the Internet                                               12
    2.5.1 Intra-AS routing: RIP                                             13
    2.5.2 Intra-AS routing: OSPF                                            14
    2.5.3 Inter-AS routing: BGP                                             14
  2.6 Related works                                                         15
3 Modelling RIP Routing                                                     17
  3.1 Introduction                                                          17
  3.2 Experimental setup                                                    19
  3.3 Modelling of a fifteen-node virtual network                           19
    3.3.1 Topology of the virtual network                                   20
    3.3.2 Implementation and configuration with RIP                         21
    3.3.3 Validating the virtual network                                    23
  3.4 Experiments on a fifteen-node network testbed                         26
    3.4.1 Single-link failures with RIP                                     26
    3.4.2 Single-router failures with RIP                                   29
  3.5 Conclusions                                                           32
4 Modelling the Routing of an Autonomous System                             36
  4.1 Introduction                                                          37
  4.2 Modelling of the GEANT network                                        37
    4.2.1 Topology of the GEANT network                                     38
    4.2.2 Implementation and configuration with OSPF                        40
    4.2.3 Validating a model of the GEANT network                           42
      4.2.3.1 Testing the network reachability                              43
      4.2.3.2 Managing the routing information with OSPF                    43
      4.2.3.3 Tracing packets in the virtual GEANT network                  44
  4.3 Case studies in the virtual GEANT network                             46
    4.3.1 Single-link failures with OSPF                                    47
    4.3.2 Single-router failures with OSPF                                  50
  4.4 Comparison of emulation and simulation results for the GEANT network  54
  4.5 Conclusions                                                           56
5 Conclusions and Future Work                                               58
  5.1 Project Summary                                                       58
  5.2 Conclusions                                                           60
  5.3 Future work                                                           62
Bibliography                                                                63
A The XML file for a testbed with RIP                                       66
  A.1 A fifteen-node virtual network testbed                                66
B Configuration files for Zebra, RIP and vtysh                              74
  B.1 zebra.conf                                                            75
  B.2 ripd.conf                                                             76
  B.3 vtysh.conf                                                            76
C The XML file for the virtual GEANT network                                77
  C.1 A twenty-three node virtual network testbed                           78
D Configuration files for Zebra, Ospfd and Vtysh                            92
  D.1 zebra.conf                                                            93
  D.2 Ospfd.conf                                                            94
  D.3 vtysh.conf                                                            95
E Electronic version of my Project Report                                   96
List of Figures
2.1 The architecture of the UML [9]                                         10
3.1 A fifteen-node network topology                                         20
3.2 Commands for creating a virtual network for a fifteen-node topology     22
3.3 Commands for releasing a network scenario for a fifteen-node topology   23
3.4 Screen shot of a fifteen-node virtual network testbed                   23
3.5 Commands for starting and stopping RIP protocols                        23
3.6 XML code for starting and stopping zebra and ripd daemons               24
3.7 An example of a telnet session with the ripd daemon                     24
3.8 Pinging from R1 to Host F                                               25
3.9 Case 1: Routing table for router R2 with full links - 19 routing entries  27
3.10 Case 2: Routing table for R2 before the network restabilized - 12 routing entries  28
3.11 Single link removal analysis for RIP                                   29
3.12 Routing table for the router R5 with a fully functional network        31
3.13 Routing tables when the router R3 was missing                          32
3.14 Routing tables when the router R1 was missing                          33
3.15 Single router removal analysis for RIP                                 34
4.1 The GEANT backbone network: Courtesy of DANTE                           38
4.2 A twenty-five node network topology configured with OSPF                39
4.3 Commands for creating virtual GEANT network                             41
4.4 Commands for killing virtual GEANT network                              41
4.5 A screen shot of the virtual GEANT network testbed                      42
4.6 Commands for starting and stopping the OSPF daemons                     42
4.7 XML code for starting and stopping the zebra and ospfd daemons          42
4.8 Pinging from R5 to Host B                                               44
4.9 An example of OSPF routing table for R5 console                         45
4.10 Traceroute from R5 to Host B                                           46
4.11 Single link failure analysis for OSPF                                  49
4.12 Single router failure analysis for OSPF                                53
Glossary
Abbreviations, Acronyms
AS        Autonomous System
BGP       Border Gateway Protocol
C-BGP     a BGP routing solver for large-scale simulation of ASes
COW       Copy-On-Write
EGP       Exterior Gateway Protocol
GEANT     a pan-European computer network for research and education
IGP       Interior Gateway Protocol
ISP       Internet Service Provider
OPNET     Optimized Network Engineering Tools
ICMP      Internet Control Message Protocol
OSPF      Open Shortest Path First
LAN       Local Area Network
RIP       Routing Information Protocol
SSFNET    Scalable Simulation Framework Network Models
TCP/IP    Transmission Control Protocol/Internet Protocol
UML       User Mode Linux
VELNET    Virtual Environment for Learning Networking
VNUML     Virtual Network User Mode Linux
WAN       Wide Area Network
XML       The eXtensible Markup Language
Chapter 1
Introduction
This project uses emulation techniques to investigate the impact of link and router
failures on routing changes of networks. In this chapter, we explain our motivations
for the study, discuss our responses to these motivations and provide an outline for
this project report.
1.1
Preamble
The phenomenal growth of the Internet has led to the deployment of many network
applications such as Voice or Video over IP (VoIP), electronic mail, Web browsing,
e-voting, and e-shopping, to name a few. While the Internet has been designed for a
best-effort service, many of these new applications and services require a better guarantee of services. End users may sometimes find the Internet service to be unreliable.
There are many factors leading to this poor performance, such as link bandwidth, the
efficiency of the application software, and robustness of the routing protocol. Routing protocols are a critical component of the Internet and their aim is to ensure that
there is efficient traffic flow from source to destination. In this project, we use a
virtualization tool to create testbeds and examine the routing changes of networks on
these testbeds. This routing investigation enables network operators and researchers
to gain further understanding of the routing protocols' reactions to events such as
traffic re-distribution and routing instability in the network. Routing instability is
the rapid fluctuation of network reachability information: an important problem that
directly affects the service reliability of the Internet.
The Internet is a network of networks. It is made up of a collection of over 21,000
domains or Autonomous Systems (ASs). An AS can be an Internet Service Provider
(ISP), a university campus network, or a company network. An AS is made up of
a collection of routers that are interconnected. Previous research has focused on the
inter-connection of ASs and less attention has been paid to the intra-connection (i.e.,
intra-domain routing). In this project, we concentrate on intra-domain routing, and
use it to study routing changes in our testbeds. The complex nature of a physical
network often makes it difficult to carry out studies on how link and router changes
affect the distribution of traffic across the network. Hence, we use virtual networks
to emulate physical networks in this project.
This project investigates how to use virtual networks for studying routing changes in
complex network environments. We use an emulation method to model routing of ASs,
for a fifteen-node network and the GEANT network -
a pan-European backbone that
connects Europe's national research and education networks. We study and evaluate
the impact of link and router failures versus routing changes in these networks.
1.2
Motivations
In this section, we explain why the study of intra-domain routing is important, and
what motivated us to carry out this particular research. There are four principal
motivations: cost, networking administration training, student experimentation, and
the possibility to offer particular practical advice.
Firstly, sometimes there is a need to quickly test a network configuration, e.g., a
firewall rule set, but setting up the configuration on real equipment is too time consuming (e.g., physically wiring and installing multiple operating systems), and very
expensive (e.g., multiple hosts and switches). We need a cheaper and more convenient
testbed that can be used for this test. Therefore, we need to design virtual networks;
these networks can be used to carry out this test at little or no cost.
Secondly, students and network designers often need to obtain practical experience
by learning how to design, build and maintain computer networks. CISCO offers users
simulation software for this purpose, however, the experiences gained are restricted
to CISCO products only. This is insufficient for a thorough grasp of the expected
technical intricacies. In addition, network administration often involves activities
like network addressing, assignment of routing protocols and routing table configurations. We provide a network emulation environment for conducting and testing these
activities.
Thirdly, a number of situations frequently arise that require the use of more than one
computer. Faculty and researchers often want to have extra full-fledged machines to
aid their teaching and research work. In communities with limited funding, such as
universities, the possibility of having as many full-fledged computer systems as necessary to create real networks for experimentation purposes is less likely. Therefore,
creating effective virtual network testbeds will be a suitable alternative to assist faculty, researchers and other users with limited budgets, instead of investing in physical
equipment.
Finally, investigating the problem of intra-domain routing in any network is very
important. This is because many of the new applications and services on the Internet
often demand service reliability. We use an emulation method to model a real network
and evaluate the impact of changes to links and routers on the traffic distribution.
In doing this, our results identify which links or routers in this network model need
to be maintained. We also compare the results obtained from both simulation and
emulation models of our selected network.
In the next section, we give a summary of how we address these motivations.
1.3
Contributions
In response to our motivations and the need for examining networks' routing changes,
we develop two virtual networks. We use these virtual network testbeds to implement
two dynamic routing protocols: Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). We also provide sufficient documentation in this project report
to allow prospective students and network administrators to make use of the models.
In this project report, we aim to make the following contributions:
• The first contribution of this project is to develop virtual network testbeds that
can be used and re-configured by students and network administrators. These
testbeds will enhance learning and testing of network applications and services
without requiring a real network. The designed virtual network testbeds can
serve as working templates with which students can practise and modify for
specific network configurations.
• The second contribution of this project is to implement a realistic network
topology by emulation of the GEANT network and by viewing it as an AS. The
network topology of GEANT is taken from the work in [4, 21]. Next, we present the
techniques of how to configure routers and use the UML-utilities to implement the
switches and routers on the virtual network testbed in our specification scripts.
See Appendix A and Appendix C for these scripts.
• The third contribution of this project is a demonstration of a practical configuration of the routing protocol RIP. We create and use a virtual network testbed
to configure a RIP daemon from Quagga [11]. We use the RIP daemon to show
how to detect link failures, understand path selection using hop count, and
dynamically adjust the routes.
• The fourth contribution of this project is to use case studies for investigating
intra-domain routing using the OSPF daemon from [11]. We model our network
after the network used in similar work conducted in [20, 21]. The first case
study provides the measurements of link failures against the total routing cost
at the head nodes of the links while the second provides the measurements
of router failures against total routing cost in the GEANT network. These case
studies provide us with a better understanding of the links whose loss produce
higher routing cost and the routers whose loss yields the largest total routing
costs. This project report includes details of our configuration experiences,
networking administration, and virtual networking experiments.
1.4
Overview of the project
The rest of the project report is organized as follows. Chapter 2 examines a summary
of techniques, background information and literature review of related works that are
used in this project. Chapter 3 provides reports on modelling of a simple network
that is configured with the RIP routing protocol and discusses experimental results.
In Chapter 4, we model the GEANT network, conduct two case studies on this network,
and provide the experimental results of our findings. Lastly, Chapter 5 presents the
project report summary, our conclusions and a discussion of future work.
Chapter 2
Background and Literature Review
In this chapter, we provide background information, summary of techniques and literature review of related works that are necessary for this project. In Section 2.2, we
give an overview of virtualization technologies and briefly discuss how network virtualization techniques have been used successfully in the teaching context. Section 2.3
contains the overview of the principles of User Mode Linux (UML) for designing virtual
networks. Section 2.4 compares benefits and drawbacks of simulation and emulation
techniques. In Section 2.5, we discuss different types of routing protocols that are
connected to this project. Finally, Section 2.6 reviews previous research work that
has been done using network virtualization techniques.
2.1
Introduction
We need to understand how virtualization technologies can support our investigations
of link and router failures in the network. Virtualization techniques are often used
to combine hardware and software resources, and are used to model a network for
experimental purposes in this project. In addition to virtualization techniques, the
principles of UML enable us to model a complex network. We make a comparison of
emulation and simulation techniques, and present the major difference between the
two techniques. We limit the focus of our virtualization techniques to network virtualization, and use this concept to investigate the performance of emulation techniques.
The emulation techniques enable us to study routing changes when there are link and
router failures in any network.
2.2
Virtualization technologies
In this section, we briefly explain the concept of virtualization in the context of computing. We also provide some examples of previous work using network virtualization
techniques.
Virtualization is the term used to describe the abstraction of computer resources,
and is often defined as the technique for the mapping of virtual resources to real resources. The user of the virtual resources is partially, or sometimes totally, detached
from the real resources [32]. Virtualization technology hides the physical characteristics of the computing resources from the way that other systems, applications or end
users communicate with those resources. Examples of various types of virtualization
technologies include the following: virtual memory, redundant array of independent
disks, network virtualization and storage virtualization. More on virtualization techniques can be reviewed in [2] and [32]. In this project, we limit our discussion of
virtualization to network virtualization only.
Network virtualization is the technique of combining hardware and software network
resources and network functionality into a single, software-based administrative entity: this is sometimes referred to as a virtual network. Network virtualization often
includes platform virtualization, and occasionally combines with resource virtualization. Some previous research uses network virtualization as a tool for teaching
computer networks and system administration [13, 14]. In [13, 14], Kneale et al.
develop a tool called VELNET, which is a virtual environment for learning networking.
VELNET is made up of one or more host machines and operating systems, commercial
virtual machine software, virtual machines and their operating systems, and a virtual network connecting the virtual machines and remote desktop display software.
Yuichiro et al. in [26] design a system that offers students a learning environment
for LAN construction and troubleshooting. Their system reproduces virtual networks
that consist of about ten Linux servers, clients, routers and switching hubs on one
physical machine.
2.3
UML-based virtual networks
In this section, we explain principles and applications of UML in the context of networking. This UML technique is described as a port of a Linux kernel that allows
running one or more instances of a complete Linux environment [17]. These instances
are run as user-level processes on a physical host machine.
These user-level processes provide us with the virtualization of machines, routers and
other nodes on a network. Within the UML process, an instance or a process of that
UML communicates with the UML kernel which in turn talks with the host kernel in
the same way that any user or application would. This UML technique allows a Linux
kernel to be run in user space and possesses all of the features of a complete Linux
machine. With UML, additional virtual machines or nodes can be created using the
hardware of a single Linux machine. Therefore, it is possible to carry out multiple
tasks and experiments on these virtual machines using a single computer system.
Figure 2.1 shows how the process space is arranged under the UML approach.
Figure 2.1: The architecture of the UML [9] (virtual machines running as UML processes on the host)
A virtualization tool, Virtual Network User Mode Linux (VNUML) [8], allows us to
easily create simple and complex network emulation scenarios based on UML virtualization software. The Linux machines that run over the host using UML virtualization
software are called "virtual machines" or simply "UMLs".
In UML, a filesystem uses the copy-on-write (COW) technique to save disk space and to
share a single filesystem when a number of virtual machines are run. This technique,
COW, allows multiple UML processes/nodes to share a host file as a filesystem without
interfering with each other's read-write operations [5]. In this mechanism, COW, the
data objects are not copied until a write is made. When writing occurs, the data object
is copied and non-shared afterward. Each process stores changes to the filesystem
inside its own COW file. This technique allows the filesystem to be shared among all
processes or virtual machines; it is also possible to revert to the original filesystem
contents by simply deleting a COW file in case problems occur. Our virtualization
tool, VNUML, uses COW to perform write functions while it uses the host filesystem as
read-only. This COW mechanism is used in all UML-based networks in order to reduce
frequent access to host memory.
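To make the mechanism concrete, the sketch below shows the general form of the UML boot options involved; the kernel binary, filesystem image and COW file names are placeholders for illustration and are not the exact files used by our testbed scripts.

    # Boot two UML instances that share one root filesystem image, each with its own COW file
    ./linux umid=r1 ubd0=r1.cow,root_fs mem=128M
    ./linux umid=r2 ubd0=r2.cow,root_fs mem=128M
    # Reverting r1 to the original filesystem contents is simply a matter of deleting its COW file
    rm r1.cow

In our testbeds, VNUML applies this mechanism on our behalf, so the COW files do not have to be created by hand.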
With the aid of UML, virtual networks of different sizes can be created [5]. This UML
technique is used to design and test networks of complex topologies and different
configurations. Therefore, network designers can use the principles of UML to model
virtual networks, and implement new communication protocols on these virtual networks.
2.4
Simulation versus emulation of networks
In this section, we compare experimental techniques of simulation and emulation for
any network. We discuss benefits and drawbacks of these techniques.
Network designers often employ three experimental techniques in the design and validation of new and existing networking ideas. These techniques are: simulation,
emulation and live network testing. All of these techniques have their strengths and
weaknesses, and should not be viewed as competing methods.
Network simulation usually allows a repeatable and controlled environment for network experimentation. The simulation environments make it possible to predict outcomes of running a set of network devices on a complex network by using an internal
model that is specific to the simulator. The set of initial parameters assumed for
the simulators determines the model behavior of each simulation. Such environments
often include simulation tools such as OPNET [27], ns2 [6, 7] and SSFNet [3]. The
fundamental drawback with simulators is that simulated devices often have limited
functionalities, and the predicted behavior may not be close to that of the real system.
Network emulation reproduces features and the behavior of the real network devices.
The emulation environment is made up of the software and hardware platform that
provides the benefit of testing the same pieces of software that will be used on real
devices. In sharp contrast to simulation systems, emulators allow the network being
tested to undergo the same packet exchanges and state changes that would occur in
the real world. Simulators, on the other hand, are concerned with the abstract model
of the system being simulated and are often used to evaluate the performance of the
protocols and algorithms.
A fundamental difference between simulation and emulation is that while the former
runs in simulated time, the latter must run in real time, closely resembling the behavior
of real-world devices. The emulation environments closely reproduce the features
and behaviors of real world devices. In an emulation environment, the network that
is being tested often undergoes the same packet exchanges and state changes that
usually occur in real world.
2.5
Routing in the Internet
In Section 2.3, we discussed how we can use UML to create virtual networks. In this
section, we will briefly describe routing in the Internet and different types of routing
protocols to regulate packets' routes in a network.
Routing is the process of determining the paths or routes that packets take on their
trip from the source to the destination node. On the Internet, routing protocols are
used to select the end-to-end path taken by a datagram, or packet, between the source
and destination. In Chapter 1, we define the Internet as a collection of domains or
ASs. Each AS is a collection of routers that are under the same administrative and
technical control. An AS runs the same routing protocol among its multiple subnets.
A routing algorithm within an AS is called an Interior Gateway Protocol (IGP) while
an algorithm for routing between ASs is called an Exterior Gateway Protocol (EGP)
[10, 15, 19, 25, 30].
Routing protocols specify how routers communicate with each other and disseminate
information that allows them to select routes between any two nodes on a network.
Readers who are not familiar with routing protocols are encouraged to read [10, 15,
19, 25, 30].
2.5.1
Intra-AS routing: RIP
In this section, we explain one of the intra-AS protocols. The Routing Information
Protocol (RIP) is the earliest intra-AS routing protocol. This RIP protocol uses a
"hop" count as a cost metric, which is the term used to describe the number of
subnets traversed along the shortest path from the source router to the destination
subnet. The maximum cost of a path in RIP is fifteen. This number limits the use
of RIP to smaller ASs. This protocol, RIP, is a distance-vector protocol based on the
Bellman-Ford algorithm [10], and relies on a shortest-path computation. A distance-vector routing protocol requires that a router periodically inform its directly attached
neighbors of topology changes, and perform a routing calculation. The result of the
calculation is distributed back to the attached neighbors. The primary goal of this
protocol, like other intra-AS protocols, is to find the shortest path to the chosen
destination based on a selected metric.
Normally, each router has a RIP table often called a routing table. Routers use their
routing tables to decide the next hop to which they should forward a packet. The
routers configured with this protocol, RIP, exchange advertisements approximately
every thirty seconds. If a router fails to hear from its neighbor at least once every
180 seconds, that neighbor is considered to be no longer reachable; that is, either
the neighbor (router) has died or the link has gone down. When this occurs,
RIP modifies the local routing table and then propagates this information by sending
advertisements to neighboring routers that are still reachable.
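In practice, the behavior described above is driven by a small configuration file on each router. The fragment below is a minimal, hypothetical sketch in the syntax accepted by a RIP daemon such as ripd from the Quagga suite used later in this project; the router name, password and network statements are illustrative placeholders, and the configuration files actually used on our testbed routers are listed in Appendix B.

    ! ripd.conf (illustrative sketch only)
    hostname R1
    password zebra
    !
    router rip
     ! advertise this subnet and listen for RIP updates on it
     network 10.0.0.0/24
     ! alternatively, enable RIP on an entire interface
     network eth1
    !
    log stdout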
2.5.2
Intra-AS routing: OSPF
Similar to the previous section, here we discuss another intra-AS protocol: Open
Shortest Path First (OSPF). This protocol is the successor to RIP, and was developed to
handle limitations of RIP. This routing protocol, OSPF uses cost as the routing metric,
and uses link-state information that is based on the Dijkstra least-cost algorithm [10].
This algorithm computes the shortest path to all subnets based on cost and selects
the source node as the root node for cost computation. The network administrator
assigns a cost to each link. The OSPF protocol floods the network with link state
advertisements (LSAs), unlike RIP where a node only exchanges information with its
neighbors. At periodic intervals, OSPF protocols use a "HELLO" message to check
whether the routers are operational or not. This protocol is also a dynamic routing
protocol.
Each router periodically sends an LSA across the network. This message is sent to
provide information on a router's adjacencies or to update others when a router's
state changes. By comparing adjacencies to link states, failed routers can be detected
quickly, and the network's topology can be updated appropriately. From the topological database generated from LSAs, each router calculates a shortest-path tree, with
itself as root. The shortest-path tree, in turn, yields a routing table.
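As with RIP, the administrator-assigned costs and link-state behavior described above reduce to a few configuration lines per router. The fragment below is a hypothetical sketch in the syntax accepted by an OSPF daemon such as Quagga's ospfd; the addresses, area and cost values are placeholders rather than our testbed values, and our actual configuration files appear in Appendix D.

    ! ospfd.conf (illustrative sketch only)
    hostname R5
    password zebra
    !
    interface eth1
     ! administrator-assigned cost for this link
     ip ospf cost 10
    !
    router ospf
     ! run OSPF on this subnet in the backbone area
     network 10.0.1.0/24 area 0.0.0.0
    !
    log stdout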
2.5.3
Inter-AS routing: BGP
Here, we briefly explain an inter-AS protocol. Border Gateway Protocol (BGP) is the
routing protocol for interconnecting different ASs. This protocol, BGP, is a path vector
protocol, and does not use traditional IGP metrics. It makes routing decisions based
on path, network policies and/or rule sets. This BGP protocol maintains a table of
IP networks or "prefixes" which show network reachability among ASs. Because the
Internet is made up of a collection of ASs and it is used everywhere, BGP is critical to
the proper functioning of the Internet. This BGP protocol is the core routing protocol
of the Internet.
2.6
Related works
In this section, we present a review of some work on network virtualization. From
the literature [9, 16, 18, 22-24, 28], some of the previous research concentrates on
providing the concepts and implementation methods of virtualization, while some
focus on producing commercial software.
Liu et al. [16] and Ham et al. [28] discuss the concepts and implementation approaches
for designing a virtual network testbed. Both [16] and [28] only provide good insight
regarding the concepts and implementation methods for virtualization without providing necessary hands-on learning experiences.
Massimo uses an emulator called Netkit in his PhD thesis [22, 23] to study inter-domain routing policies on a network. Mottola uses a virtualization approach in his
PhD thesis [18] to study simulation of mobile ad hoc networks. This approach is
used for testing publish-subscribe middleware on mobile ad-hoc networks. Steffen et
al. also use a UML-based network to set up an environment for automated software
regression tests [24]. Software regression tests are carried out before the release of
new official software to eliminate bugs during software development, hence Steffen
et al. design an automated testing framework for the regression tests. Similarly,
Galan et al. use virtualization techniques to design and implement an IP multimedia
subsystem testbed [9]. This testbed is used for development and functional validation
of multimedia services for next generation networks.
Previous work on network routing protocols for some ISPs can be found in [21, 29]. In
[21], Quoitin et al. develop an open source routing solver, C-BGP. This is an efficient
solver for BGP and is used for exchanging routing information across domains in the
Internet. This solver can be used with large-scale topologies to predict the effect of
link and router failures in an AS. C-BGP is also used by ISP operators for conducting
case studies on the routing information collected in their network. C-BGP is used to
collect the BGP updates for the work on page 16 of [21] and in Section 2.5.9 of [20].
It is also used to study inter domain traffic engineering techniques and to model the
network of ISPs.
In [29], Watson et al. conduct an experimental study of an operational OSPF network
for a period of one year. This network is a mid-size regional ISP that is running
an intra-domain routing protocol, OSPF. The network is characterized by routing
instability, different traffic levels and changes in the routing updates. They find out
that the information from external routing protocols leads to significant levels of
instability within OSPF.
Clearly, there is a substantial interest in network virtualization for purposes that
range from testing new protocols, configuring networks, studying routing changes
and traffic distributions to experimenting with new network designs.
Chapter 3
Modelling RIP Routing
This chapter discusses the design, description of the implementation, and results of
our experimental studies with RIP on a fifteen-node virtual network testbed. In Section 3.1, we present an overview, some background information, and the problem.
Section 3.3 describes how to model a network, explains its implementation and configurations, and defines how to validate a virtual network. Section 3.4 consists of
our experimental studies and results. Section 3.5 contains the limitation of RIP and
conclusions drawn from our experimental studies.
3.1
Introduction
Network simulation and emulation have been indispensable tools for understanding
the performance of network systems. In this project, we focused on the use of emulation testbeds for studying various types of networking environments. We designed
a testbed to demonstrate that emulation techniques produce reasonable results in a
small network. In our experiments, we tested the reliability of RIP to dynamically
learn and fill the routing table with a route to all subnets in the network. This feature
of RIP routing protocol allowed us to examine routing changes in the network caused
by link and router failures.
Routing changes affect the network reachability information, and are an important problem that directly affects the service reliability of an AS. While much
research has been conducted on inter-domain routing, the study of intra-domain
routing has been quite limited. Inter-domain routing simply refers to the routing
of inter-connected networks, while intra-domain routing refers to the routing within
a network or an ISP. Most network operators do not have sufficient understanding of
this problem. Some network operators often complain that they do not know to what
extent an intra-domain protocol can cause changes in their networks. There is a lack of
understanding of the causes of these routing changes because they are difficult to detect
in their live networks.
In our efforts to investigate this problem, we use an intra-domain protocol, RIP, to
determine how routing is performed within a virtual network testbed. We explained
briefly the concepts of intra-AS routing in Section 2.5.1.
In this chapter, our first goal is to use an emulation technique to design a fifteen-node virtual network testbed. The second goal is to use a routing protocol, RIP, to
configure a small network and use this network to understand how a datagram or
packet efficiently travels from source to destination on the testbed. The third goal is
to use our virtual network testbed to investigate routing changes caused by link and
router failures.
Most especially, the purpose of this set of experiments is to determine how sensitive a
network is to link and router failures, which is critical to understanding the reliability
of a network.
3.2
Experimental setup
In this section, we describe hardware and software components of the computer system that was used to design the virtual network testbed for this project. All the
experiments were performed on testbeds that are built on a TOSHIBA Satellite Pro
P300 notebook. This notebook has an Intel® Core dual processor (P4250) running at
1.5GHz, with 32KB/32KB L1 cache and 3MB L2 cache. The RAM is 2GB of DDR2
running at 667MHz, and the disk is a 250GB (250.0 billion bytes) S-ATA drive.
The computer system is a dual-booting type with pre-installed Windows™Vista and
UBUNTU 7.10 operating systems. We modified the kernel of UBUNTU 7.10 by patching it with skas3 for better performance. Next, we installed VNUML 1.8 [8] on the
modified host kernel of UBUNTU 7.10 for easy creation, execution, and release of
virtual networking scenarios. The typical use of VNUML consists of: step 1 to create
the scenarios, step 2 to execute commands as many times as desired or needed, and
step 3 to release or destroy the scenarios. More information on the use of VNUML can
be obtained from [8].
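Concretely, these three steps correspond to three invocations of the VNUML parser, as sketched below; scenario.xml stands for a scenario specification such as those in Appendix A and Appendix C, and start is a command tag defined inside that file.

    # Step 1: create the scenario (build the virtual networks and boot the UML machines)
    vnumlparser.pl -t scenario.xml -v -u root
    # Step 2: execute a tagged sequence of commands inside the running scenario, as often as needed
    vnumlparser.pl -x start@scenario.xml
    # Step 3: release the scenario (shut down and remove the virtual machines)
    vnumlparser.pl -d scenario.xml -v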
3.3
Modelling of a fifteen-node virtual network
In this section, we explain necessary steps to model and test a virtual network. We
also discuss methodologies for implementing the topology of a virtual network testbed.
Finally, we validate this virtual network testbed by testing for network connectivity
on a RIP-configured testbed. This validation is done to verify whether or not RIP is
functioning properly on the testbed.
3.3.1
Topology of the virtual network
We describe the topology of our virtual network testbed in this section. In order to
build a network or a virtual network, we found it useful to produce a detailed map
representing the network before proceeding to write the configuration files.
Firstly, we designed the topology of our virtual network to consist of a total of fifteen
nodes that include nine virtual routers, six host machines, and eighteen links or subnetworks. Secondly, we assigned appropriate IP addresses to the routers and the
links, and subsequently proceeded to write configuration files for each router in the
network. We selected this network topology to demonstrate that emulation techniques
produce reasonable results in a small network where the expected results are known.
Figure 3.1 shows a detailed map of the implemented network topology.
Figure 3.1: A fifteen-node network topology
3.3.2
Implementation and configuration with RIP
Before we conducted our experiments for this project, there were two fundamental
tasks that we needed to carry out for preparing an enabling environment on a virtual
network testbed. The first task was to create the network, and the second task was to
configure each router in the network. In this section, we discuss an implementation
of the network topology described in Section 3.3.1 using VNUML, and describe how we
used VNUML to create networking scenarios. We also explain how we used Quagga to
configure each router in the networking scenarios with a dynamic routing protocol.
Quagga is open source, and is obtained from [11]. Finally, we set up and produced
a fifteen-node virtual network testbed that was configured with a dynamic routing
protocol, RIP. Below are the basic steps for the implementation and configuration of
the fifteen-node virtual network.
1. We installed the VNUML tool on the LINUX environment of the host machine. This
tool and the installation procedures can be downloaded from [8]. The VNUML
tool is designed to easily create simple and complex networking scenarios.
2. Next, we installed Quagga in the system-wide /etc/ directory of the host machine. Quagga is a routing software package that provides TCP/IP-based routing services and protocol daemons. A machine installed with Quagga served as
a dedicated router.
3. We encoded the network topology specified in Figure 3.1 in an XML file. The
purpose of this file was to include specifications for creating a fifteen-node virtual
network testbed. We ensured that the XML file specifications conformed to the
VNUML DTD that comes with the VNUML tool. Details of the XML specifications
are included in Appendix A.
4. We then created the VNUML session and individual machines by running the
commands in Figure 3.2. When we were finished with the networking scenario,
we killed the scenario processes by running the commands specified in Figure 3.3.
A screen shot of a fifteen-node virtual network testbed is shown in Figure 3.4.
Each of the windows or machines in Figure 3.4 represents a node on the virtual
network testbed.
5. The network created in step four had strictly local connectivity; that is, the network was not yet aware of the global network topology. This type of connectivity means that only adjacent routers could communicate with each other. To
globally connect the network, we then configured each router in the network
with RIP by creating these files: zebra.conf, ripd.conf and vtysh.conf in the
/etc/quagga directory. These three configuration files were created and designated for each router. A sample of each configuration file for router R1 is
included in Appendix B. See Appendix B for more details.
6. We then started and stopped the RIP daemon by running commands as shown
in Figure 3.5. A piece of code from the XML specifications for starting and stopping
the ripd daemon is shown in Figure 3.6. See the XML file in Appendix A for
more details. For the purpose of our experiments as described in Section 3.4, we
verified that each router could connect to the local host. We achieved this goal
by running the command, telnet localhost ripd, on each router to confirm
that a telnet session was possible from each router. The telnet session for the
router R1 console is shown in Figure 3.7.
vnumlparser.pl -t /usr/share/vnuml/RIP15nodes.xml -v -u root
Figure 3.2: Commands for creating a virtual network for a fifteen-node topology.
vnumlparser.pl -d /usr/share/vnuml/RIP15nodes.xml -v
Figure 3.3: Commands for releasing a network scenario for a fifteen-node topology.
Figure 3.4: Screen shot of a fifteen-node virtual network testbed
sudo vnumlparser.pl -x start@RIP15nodes.xml # Starting
sudo vnumlparser.pl -x stop@RIP15nodes.xml  # Stopping
Figure 3.5: Commands for starting and stopping RIP protocols
3.3.3
Validating the virtual network
The validation test is used to confirm that there is network connectivity in the virtual network. Without a routing protocol, a router knows only the neighbors or routes that
are directly connected to it. When we configured the testbed with the RIP routing
R1 console
/usr/lib/quagga/zebra -d
/usr/lib/quagga/ripd -d
killall zebra
killall ripd
Figure 3.6: XML code for starting and stopping zebra and ripd daemons
R1:~# telnet localhost ripd
Trying 127.0.0.1 ...
Connected to localhost.
Escape character is '^]'.
Hello, this is Quagga (version 0.99.7).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
User Access Verification
Password:(zebra)
ripd>
Figure 3.7: An example of a telnet session with the ripd daemon.
protocol, each router could obtain the routing information of distant neighbors. Each
router contains a RIP table known as a routing table. The routing table has three
main columns among others: the first is the destination subnet, the second is the
gateway or identity of the next router along the shortest path to the destination subnet and the third indicates the "metric" or the number of hops to the destination.
An example of a routing table is discussed and shown in Section 3.4.1.
We used the ping command to test whether or not a particular host was reachable
across the virtual network. A computer network tool, ping, is used to test whether
a particular host is reachable across an IP network and to self-test the network
interface card of the router. The ping command sends an ICMP echo request to the
stated destination address and the TCP/IP software at the destination then replies to
the ping echo request packet with a similar packet, called an ICMP echo reply. If the
network is connected and functional, it reports the number of packets transmitted, the
percentage of packet loss, and the round-trip time. If the network is not connected,
the ping command replies that the "Network is unreachable". The ping command
estimates the round-trip time in milliseconds, records packet loss, and displays a
statistical summary when it is finished.
When we tested router R1 in our virtual network with the ping command, we obtained
the results as shown in Figure 3.8. The results from this test displayed information
about the network and confirmed that the ping command was working. In the first
case, the nodes that were not immediate neighbors were not reachable when the RIP
protocol was not running in the network. In the second case, there was global network
connectivity once the network was configured with the RIP protocol. An example of a
session testing for connectivity from the router R1 console to a distant Host F is shown
in Figure 3.8.
R1 console
R1:~# ping 10.0.14.5 -c1    # Host F without RIP
connect: Network is unreachable
R1:~# ping 10.0.14.5 -c1    # Host F with RIP
PING 10.0.14.5 (10.0.14.5) 56(84) bytes of data.
64 bytes from 10.0.14.5: icmp_seq=1 ttl=61 time=0.589 ms
--- 10.0.14.5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms
Figure 3.8: Pinging from R1 to Host F
In this section, we carried out the fundamental steps necessary to confirm that we
had successfully created a network testbed before conducting further experiments.
Most especially, XML code has to be specified in such a way that routers can be started
and stopped easily without affecting the networking scenario in the VNUML session. At
this point, our virtual network was validated, and we could proceed.
3.4
Experiments on a fifteen-node network testbed
In this section, we describe two experiments on the fifteen-node virtual network
testbed. The goal of the first experiment was to use an emulation technique to understand routing changes in a small network for single-link failures. The goal of the
second experiment was to use the same emulation technique to study routing changes
for single-router failures in the same network. We will use the information obtained
from these experiments as a background preparation towards our project's primary
goal of making a comparison for a large-scale network using emulation and simulation
technologies in Chapter 4.
3.4.1
Single-link failures with RIP
In this experiment, we investigated the effects of single-link failures on the virtual
network testbed. We observed routing changes in the routing tables when single-link
failures occurred.
We emulated all the single-link failures in the network, and observed the effect of the
failures on each router configured with RIP in the network. We removed each link in
the network sequentially, and recorded routing changes at the head node of a link in
the network. Several tests were conducted by disabling each link in the network for
this experiment and routing changes were recorded. This experiment was performed
by specifying a command in each router as follows: R1:~# ifconfig eth1 down.
Interfaces on the links for each router could be eth0, eth1, eth2 and so on. The
removal of each interface has a corresponding effect on the updates of the routing
table for each router in the network.
We recorded the number of routing entries generated in these networking scenarios.
This recording was carried out by running a network command route directly on
each router to collect routing entries in the routing table. This command displays
all the RIP routes in the network. We collected results from the routing tables for
each router in the network. Examples of the results obtained from one of the experiments when the network was fully functional, and when interface eth3 of link
R2-R4 was removed (before the network restabilized) are shown in Figure 3.9 and
Figure 3.10. We then compared the results of routing changes in both cases obtained
from their respective routing tables -
the fully functioning network, and the missing
links ne twork scenarios before restabilizations -
to see what changes occur.
R2 console
Case 1: Full links, with 19 routing entries in the network.
R2:~# route
Kernel IP routing table
Destination     Gateway        Genmask          Flags Metric Ref Use Iface
192.168.0.8     *              255.255.255.252  U     0      0   0   eth0
10.0.4.0        *              255.255.255.0    U     0      0   0   eth2
10.0.5.0        *              255.255.255.0    U     0      0   0   eth3
10.0.6.0        10.0.7.3       255.255.255.0    UG    3      0   0   eth4
10.0.7.0        *              255.255.255.0    U     0      0   0   eth4
10.0.16.0       10.0.5.5       255.255.255.0    UG    3      0   0   eth3
10.0.0.0        10.0.2.3       255.255.255.0    UG    2      0   0   eth1
10.0.17.0       10.0.5.5       255.255.255.0    UG    3      0   0   eth3
10.0.1.0        10.0.2.3       255.255.255.0    UG    2      0   0   eth1
10.0.2.0        *              255.255.255.0    U     0      0   0   eth1
10.0.3.0        10.0.7.3       255.255.255.0    UG    2      0   0   eth4
10.0.12.0       10.0.7.3       255.255.255.0    UG    3      0   0   eth4
10.0.13.0       10.0.7.3       255.255.255.0    UG    4      0   0   eth4
10.0.15.0       10.0.7.3       255.255.255.0    UG    4      0   0   eth4
10.0.8.0        10.0.5.5       255.255.255.0    UG    2      0   0   eth3
10.0.9.0        10.0.5.5       255.255.255.0    UG    2      0   0   eth3
10.0.10.0       10.0.5.5       255.255.255.0    UG    3      0   0   eth3
10.0.11.0       10.0.7.3       255.255.255.0    UG    3      0   0   eth4
Figure 3.9: Case 1: Routing table for router R2 with full links - 19 routing entries
R2 console
Case 2: Link removal before the network restabilized.
R2:~# ifconfig eth3 down    # R2-R4 link removed.
R2:~# route
Kernel IP routing table
Destination     Gateway        Genmask          Flags Metric Ref Use Iface
192.168.0.8     *              255.255.255.252  U     0      0   0   eth0
10.0.4.0        *              255.255.255.0    U     0      0   0   eth2
10.0.6.0        10.0.7.3       255.255.255.0    UG    3      0   0   eth4
10.0.7.0        *              255.255.255.0    U     0      0   0   eth4
10.0.0.0        10.0.2.3       255.255.255.0    UG    2      0   0   eth1
10.0.1.0        10.0.2.3       255.255.255.0    UG    2      0   0   eth1
10.0.2.0        *              255.255.255.0    U     0      0   0   eth1
10.0.3.0        10.0.7.3       255.255.255.0    UG    2      0   0   eth4
10.0.12.0       10.0.7.3       255.255.255.0    UG    3      0   0   eth4
10.0.14.0       10.0.7.3       255.255.255.0    UG    4      0   0   eth4
10.0.11.0       10.0.7.3       255.255.255.0    UG    3      0   0   eth4
Figure 3.10: Case 2: Routing table for R2 before the network restabilized - 12 routing entries
For explanation purposes, we selected one of the links, link R2-R4, to illustrate
the outcome of our single-link failure analysis. For further analysis, we can select any
other link in the network to explain effects of single-link failures.
When the virtual network was fully functional in case one, the expected outcome was to
obtain nineteen routing entries at the head node of the link. However, when we experimented with the removal of the selected link R2-R4, the number
of routing entries changed from nineteen to eleven. This result clearly shows that the
removal of link R2-R4 leads to a 37% decrease in the number of routing entries. The
summarized results of these experiments are shown graphically in Figure 3.11. In
Figure 3.11, the links of the virtual network are shown on the x-axis while the head
node routing changes in the network are shown on the y-axis.
In most instances, it is observed that a single-link failure caused many changes in the
routing tables. About 50% of the links in this fifteen-node network will cause about
a 42% decrease in the number of routing changes.
Figure 3.11: Single link removal analysis for RIP (bar chart; legend: full links versus less one link)
These observations demonstrate
that when the link connectivity of the network goes down, RIP will modify the local
routing table and record the routing entries for the routers that are still connected
in the network. This result conforms to the concepts explained regarding intra-AS
routing and matches our hypothesis in Section 2.5.1.
3.4.2
Single-router failures with RIP
In this experiment, we investigated the impact of single-router failures in the virtual
network. We recorded routing changes in the network as shown in the routing tables
for single-router failures.
The concepts and implementations of RIP routing protocol explain that if a router
does not hear from its neighbor at least once in every 180 seconds, that neighbor
is considered to be no longer reachable [15]. This outcome means the neighbor is
dead or the connectivity is lost. Therefore, RIP modifies the local routing table and
propagates this information by sending advertisements to neighboring routers that are
still reachable.
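The thirty-second advertisement interval and the 180-second timeout mentioned above are the protocol defaults. In Quagga's ripd they can be inspected, and adjusted if desired, from the vtysh shell; the session sketched below simply restates the default values (update, timeout and garbage-collection timers, in seconds) and is shown for illustration only, not as a step of our experiments.

    R1:~# vtysh
    R1# configure terminal
    R1(config)# router rip
    R1(config-router)# timers basic 30 180 120
    R1(config-router)# end
    R1# show ip rip status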
Firstly, we recorded the number of routing entries when all the routers were in place
in the network and the network was fully functional. Secondly, we removed each
router in the network, sequentially, then recorded the resulting routing changes for
each router. When a particular router was missing, we collected and recorded the
routing entries from each of the routers that were still active in the network. We
compared the routing entries collected from the routing tables of the original network,
with routing entries of the modified network, and reported changes observed for each
missing router.
For our original and unmodified network, each router had nineteen routing entries in
its routing table. When we modified the network by removing each router sequentially, each remaining router had eighteen or nineteen routing entries, depending on
whether the removed router was a cut point. We now provide more detail.
Firstly, when routers R3, R4, or R7 were removed, there was no difference in the routing
tables both before and after their removal. This outcome occurred because each
router considers its neighbor dead after 180 seconds and adjusts its routes based on
the next available router. As an example, we show the results for R5 in Figure 3.12
for when the virtual network is fully functioning and in Figure 3.13 for when router
R3 was missing from the network.
Secondly, we observed only a slight difference, of one missing routing entry, when these
routers were removed: R1, R2, R5, R6, R8, and R9. This difference occurred because these routers
were cut points of the network topology shown in Figure 3.1. As an example, we show
the results for R5 in Figure 3.12 for when the virtual network is fully functioning and
in Figure 3.14 for when router R1 was missing from the network. The loss of one
routing entry was a result of network non-reachability due to a cut point in the
network topology. Otherwise, the behaviors of all these routers were the same,
reflecting that the effects of the missing routers were otherwise not important to the RIP routers.
R5 console
R5:~# route
Kernel IP routing table
Destination     Gateway        Genmask          Flags Metric Ref Use Iface
192.168.0.24    *              255.255.255.252  U     0      0   0   eth0
10.0.4.0        10.0.12.5      255.255.255.0    UG    5      0   0   eth4
10.0.5.0        10.0.12.5      255.255.255.0    UG    4      0   0   eth4
10.0.6.0        *              255.255.255.0    U     0      0   0   eth2
10.0.7.0        10.0.3.3       255.255.255.0    UG    2      0   0   eth1
10.0.16.0       10.0.12.5      255.255.255.0    UG    2      0   0   eth4
10.0.0.0        10.0.3.3       255.255.255.0    UG    3      0   0   eth1
10.0.17.0       10.0.12.5      255.255.255.0    UG    3      0   0   eth4
10.0.1.0        10.0.3.3       255.255.255.0    UG    2      0   0   eth1
10.0.2.0        10.0.3.3       255.255.255.0    UG    3      0   0   eth1
10.0.3.0        *              255.255.255.0    U     0      0   0   eth1
10.0.12.0       *              255.255.255.0    U     0      0   0   eth4
10.0.13.0       10.0.11.5      255.255.255.0    UG    2      0   0   eth3
10.0.14.0       10.0.11.5      255.255.255.0    UG    2      0   0   eth3
10.0.15.0       10.0.12.5      255.255.255.0    UG    2      0   0   eth4
10.0.8.0        10.0.12.5      255.255.255.0    UG    4      0   0   eth4
10.0.9.0        10.0.12.5      255.255.255.0    UG    3      0   0   eth4
10.0.10.0       10.0.12.5      255.255.255.0    UG    4      0   0   eth4
10.0.11.0       *              255.255.255.0    U     0      0   0   eth3
Figure 3.12: Routing table for the router R5 with a fully functional network
In summary, we obtained results for the removal of each router. These results
are summarized graphically in Figure 3.15. In Figure 3.15, the routers of the virtual
network are shown on the x-axis while the number of routing changes in the network
is shown on the y-axis. From the results obtained for the single-router failure analysis,
we observed that a network configured with RIP always produced the same number
of routing table entries at each router when a particular router was removed. The
R5 console
R5:~# route    (missing router R3)
Kernel IP routing table
Destination     Gateway        Genmask          Flags Metric Ref Use Iface
192.168.0.20    *              255.255.255.252  U     0      0   0   eth0
10.0.4.0        10.0.12.5      255.255.255.0    UG    5      0   0   eth4
10.0.5.0        10.0.12.5      255.255.255.0    UG    4      0   0   eth4
10.0.6.0        *              255.255.255.0    U     0      0   0   eth2
10.0.7.0        10.0.12.5      255.255.255.0    UG    5      0   0   eth4
10.0.16.0       10.0.12.5      255.255.255.0    UG    2      0   0   eth4
10.0.0.0        10.0.12.5      255.255.255.0    UG    6      0   0   eth4
10.0.17.0       10.0.12.5      255.255.255.0    UG    3      0   0   eth4
10.0.1.0        10.0.12.5      255.255.255.0    UG    6      0   0   eth4
10.0.2.0        10.0.12.5      255.255.255.0    UG    5      0   0   eth4
10.0.3.0        *              255.255.255.0    U     0      0   0   eth1
10.0.12.0       *              255.255.255.0    U     0      0   0   eth4
10.0.13.0       10.0.12.5      255.255.255.0    UG    2      0   0   eth4
10.0.14.0       10.0.11.5      255.255.255.0    UG    2      0   0   eth3
10.0.15.0       10.0.12.5      255.255.255.0    UG    2      0   0   eth4
10.0.8.0        10.0.12.5      255.255.255.0    UG    4      0   0   eth4
10.0.9.0        10.0.12.5      255.255.255.0    UG    3      0   0   eth4
10.0.10.0       10.0.12.5      255.255.255.0    UG    4      0   0   eth4
10.0.11.0       *              255.255.255.0    U     0      0   0   eth3
Figure 3.13: Routing tables when the router R3 was missing
The slight difference in the routing table entries occurred because that particular router was a cut point of the network graph. This observation conforms to the concepts explained regarding dead neighbors or routers, and matches our hypothesis in Section 2.5.1.

3.5 Conclusions
In this section, we explain our conclusions for the experiments conducted in the small
network with RIP configurations. We use the two experiments that we performed on
single-link and single-router failures in Section 3.4.
The first experiment enabled us to study the routing changes caused by the removal of links in a virtual network configured with the RIP routing protocol.
R5 console
R5:~# route (missing router R1)
Kernel IP routing table (eighteen entries; columns: Destination, Gateway, Genmask, Flags, Metric, Ref, Use, Iface)

Figure 3.14: Routing tables when the router-R1 was missing
When we experimented with the removal of links as shown in Figure 3.11, we found a reduction of an average of 32%, with a range of 11% to 53%, in the number of routing entries in the routing tables. In addition, the removal of different links leads to different routing changes in the network. We were able to identify the links that had the most effect on the network when they were removed. These experiments explain the importance of certain links to the routing changes in the network. Failure of each link has corresponding effects on the routing information of the network and, eventually, on the routing changes. As an example, the links R1-R3, R2-R4, R3-R5, R4-R7, and R6-R7 in Figure 3.11 are critical for the day-to-day running of the network; any failure of such links has considerable effects on the routing changes.
The second experiment evaluated the effects of router failures in a virtual network configured with the RIP routing protocol. We observed that there was no difference in the number of routing changes with single-router failures, except for those routers that were cut points, as shown in Figure 3.15.
RIP: Single Router Failure Analysis (bar chart: routers on the x-axis, number of routing table entries on the y-axis; legend: Full Routers)

Figure 3.15: Single router removal analysis for RIP
This evaluation confirmed the true behavior of RIP [15]: after waiting for 180 seconds, a router considers its immediate neighbor to be dead, takes the next available router as its next hop, and adjusts its routes. When there was a single-router failure in the network, it was equivalent to the simultaneous failure of all of that router's connecting links. The example in Figure 3.13 shows that when router R3 was missing from the network, there was a simultaneous failure of all three links connecting to this router. These single-router failure experiments are quite different from the single-link failure experiments. In the single-link failure experiments, the network waited for some time to stabilize, while in the single-router failure experiments the network did not wait for stabilization; the next available router immediately became the next hop and the routes were adjusted.
Finally, we were able to use this protocol in the virtual network to understand the routing changes caused by link and router removal in a small network. But for large and complex networks, RIP is probably wholly inadequate. This routing protocol does compute new routes after any change in the network topology, but in some cases it does so very slowly, by counting to infinity. RIP prevents routing loops from continuing indefinitely by implementing a hop count limit. This limit ensures that anything more than fifteen hops away is considered unreachable by RIP. The drawbacks of RIP discussed in [15] explain why network operators and researchers choose improved routing protocols from the link-state family that can detect and correct router failures in their networks.
Chapter 4
Modelling the Routing of an
Autonomous System
This chapter contains the experimental work and results of modelling the routing of an AS, more specifically the GEANT network. The GEANT network was a pan-European backbone that connected Europe's national research and education networks.
We present the goals of this chapter in Section 4.1. In Section 4.2, we explain our network topology, describe how we modelled the GEANT network using an emulation method, and validate the functionality of this virtual network. We conducted two case studies in the network, which are described in Section 4.3. The first case study investigated the effects of single-link failures in the virtual network, and the second case study examined the effects of single-router failures. In Section 4.4, we compare our emulation studies with the simulation work by Quoitin et al. in [20, 21]. Both simulation and emulation methods are used to study the routing changes in the GEANT network. Lastly, in Section 4.5 we present our conclusions drawn from the two case studies.

4.1 Introduction

In this chapter, we aim to use emulation techniques to design a model of the GEANT network. This emulated network topology is similar to that of the simulated model of the GEANT network investigated in [20, 21]. Our first goal in this study was to use emulation techniques to design a model of the GEANT network testbed. The second goal was to use a routing protocol, OSPF, to configure a large-scale network, referred to as the GEANT, on the testbed. The third goal was to use our virtual GEANT network testbed to study routing changes caused by link and router failures. Lastly, the final goal was to demonstrate that emulation techniques produce reasonable results for a large-scale network, consistent with the results obtained by Quoitin et al. in [20, 21]. The results collected from our emulation study are directly compared to the results from Quoitin et al.'s simulation work.
4.2 Modelling of the GEANT network

In this section, we describe how to use an emulation technique to model the GEANT network. The GEANT network, the large-scale network we used for our investigations, was a transit network: a multi-gigabit pan-European computer network project for research and education. Maintaining the GEANT network project involved network testing, the development of new technologies, and networking research. Figure 4.1 shows an overview of the GEANT backbone network. The GEANT project officially began in November 2000 and ended in April 2005; GEANT2 is its successor. See more details regarding the GEANT and GEANT2 projects in [4, 31]. In the following subsections, we briefly describe the implementation and configuration of this network with the OSPF routing protocol.
GEANT * DANTE: The world's most advanced international research network, providing pan-European and international connectivity for research and education. GEANT is operated by DANTE on behalf of Europe's research and education networks and is co-funded by the European Commission within its 5th R&D Framework programme.

Figure 4.1: The GEANT backbone network: Courtesy of DANTE
4.2.1 Topology of the GEANT network

We modelled a network with a similar topology to that used in [20, 21], which was the three-layer topology of the GEANT captured from a one-day IS-IS trace of 24 November 2004. Our model of the GEANT network has a complex topology which includes twenty-three routers, thirty-eight links, and two hosts. The two hosts are used for testing and validation purposes; they are not routers and have no effect on the topology.
The emulated GEANT network model was designed for our investigations of the impact of router and link failures in a large-scale network. The network graph of the emulation is shown in Figure 4.2.
Figure 4.2: Network graph of the emulated GEANT network (full-page figure)
4.2.2 Implementation and configuration with OSPF

In this section, we implemented the network topology described in Section 4.2.1 using VNUML on an UBUNTU Linux machine. The methodologies for the design and implementation of this virtual network testbed are similar to the approach used in Section 3.3.2. The major difference between them is the size of the GEANT network; we needed to patch the UBUNTU host machine with SKAS3 features [1] in order to enhance performance and meet the larger resource requirements of the GEANT network emulation.
Below is a set of basic steps for the implementation and configuration of the virtual
GEANT network:
1. We patched the default Linux kernel of the UBUNTU host machine [12] with
SKAS3 obtained from [1].
We downloaded these patches and compiled a Linux
kernel on UBUNTU systems to include SKAS3 features.
2. We then installed VNUML tool in the Linux environment of the host machine.
This tool and the installation procedures can be downloaded from [8]. The
VNUML tool is designed to easily create simple and complex network emulation
scenarios.
3. Next, we installed Quagga in the system-wide /etc/ directory of the host machine. Quagga is a routing software package that provides TCP/IP-based routing services and protocol daemons. A machine installed with Quagga serves as a dedicated router.
4. We wrote implementation code for the network topology specified in Figure 4.2
in an XML file. The purpose of this file was to include specifications for creating the virtual GEANT network. We ensured that the XML file specifications
conformed to the VNUML DTD [8] that came with the VNUML tool. Details of the
XML specifications are included in Appendix C.
5. We then created the VNUML session and the individual machines by running the commands in Figure 4.3. When we were finished with the networking scenario, we killed the scenario processes by running the commands specified in Figure 4.4. A screen shot of the virtual GEANT network testbed is shown in Figure 4.5. Each of the windows or machines in Figure 4.5 represents a node on the virtual network testbed.
6. The network created in step five had strictly local connectivity; that is, only adjacent routers could communicate with each other, and the global network topology was ignored. To enable full network connectivity, we then configured each router in the network with OSPF by creating three files, zebra.conf, ospfd.conf, and vtysh.conf, in the /etc/quagga directory. These three configuration files were created and designated for each router. A sample of each configuration file for the router-R1 is included in Appendix D.
7. We then started and stopped the OSPF daemon by running the commands shown in Figure 4.6. We include the piece of code from the XML specifications for starting and stopping the ospfd daemon in Figure 4.7. See the XML file in Appendix C for more details.
vnumlparser.pl -t /usr/share/vnuml/NIYiospf.xml -v -u root
Figure 4.3: Commands for creating virtual GEANT network
vnumlparser.pl -d /usr/share/vnuml/NIYiospf.xml -v
Figure 4.4: Commands for killing virtual GEANT network
Figure 4.5: A screen shot of the virtual GEANT network testbed
sudo vnumlparser.pl -x    #Starting the daemons
sudo vnumlparser.pl -x    #Stopping the daemons

Figure 4.6: Commands for starting and stopping the OSPF daemons
Rl
/usr/lib/quagga/zebra -d
/usr/lib/quagga/ospfd -d
killall zebra
killall ospfd
Figure 4.7: XML code for starting and stopping the zebra and ospfd daemons
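The XML markup around the commands in Figure 4.7 did not survive in this copy of the report. As a rough sketch only, assuming the VNUML <exec> element described in the VNUML tutorial [8] and assuming the command sequences are named start and stop (the names used in our actual XML file may differ), the fragment for R1 would look roughly like this:

<vm name="R1">
  <!-- interface definitions omitted -->
  <exec seq="start" type="verbatim">/usr/lib/quagga/zebra -d</exec>
  <exec seq="start" type="verbatim">/usr/lib/quagga/ospfd -d</exec>
  <exec seq="stop" type="verbatim">killall zebra</exec>
  <exec seq="stop" type="verbatim">killall ospfd</exec>
</vm>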
4.2.3 Validating a model of the GEANT network

In this section, we performed validation tests on the virtual GEANT network. The results of these validation tests from our emulated network confirmed the expected
behavior as observed on the physical topology of the GEANT network. The validation
tests were carried out in three different ways: testing the network for reachability,
computing routing tables for each router, and tracing the route of packets in the
network. These three tests assured us that our virtual representation of the GEANT
network was functional and reliable for our case studies in Section 4.3.
4.2.3.1 Testing the network reachability
The first validation test was to check for connectivity in this complex virtual network.
When there is no routing protocol in the network, routers have local connectivity,
that is, they are only connected to immediate neighbors. When there is a routing
protocol such as OSPF in the network, OSPF routers flood the network with link state
information. All routers will receive updates and re-compute their routing tables.
We used the ping command for this test. For instance, we tried from the console of R5 to reach Host A on the virtual network. We obtained the result "Network is unreachable"; this is due to the lack of a routing protocol in the network. After we had successfully configured the network with the OSPF daemon, we repeated the test. The result of the ping command on the router-R5 console to reach Host A is shown in Figure 4.8. This testing confirmed that the OSPF protocol was working correctly, and we could proceed to the next test.
4.2.3.2 Managing the routing information with OSPF
The second validation test was to compute the routing tables for each router. We
wanted to display summary information about all routes for the OSPF protocol.
We ran the route command directly from the console of each router. After executing the command, we obtained results (routing tables) that indicated the routing entries:

pinging
R5:~# ping 10.0.10.8 -c2    #Host A without OSPF daemon
connect: Network is unreachable
R5:~# ping 10.0.10.8 -c2    #Host A with OSPF daemon
PING 10.0.10.8 (10.0.10.8) 56(84) bytes of data.
64 bytes from 10.0.10.8: icmp_seq=1 ttl=62 time=60.7 ms
64 bytes from 10.0.10.8: icmp_seq=2 ttl=62 time=0.647 ms
--- 10.0.10.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1012ms
rtt min/avg/max/mdev = 0.647/30.719/60.791/30.072 ms

Figure 4.8: Pinging from R5 to Host A
destinations, the gateway or path to different destinations, the metric (cost), interfaces, and flags. These results show the routing information of the virtual network, and they appear in Figure 4.9 for the router-R5 console. This test also confirmed that the OSPF protocol was working correctly and we could proceed to the third test.
4.2.3.3 Tracing packets in the virtual GEANT network

The third validation was to ascertain how packets travel in our virtual network. This validation test showed a list of the routes traversed, and allowed us to identify the path taken to reach a particular destination in the network.
We used the traceroute command to investigate the route taken by packets across the virtual GEANT network. The result showed the paths taken by the packets and the corresponding time spent in milliseconds. Figure 4.10 shows a session through the router-R5 console, listing the routers a packet traverses and the respective times in milliseconds. This test also confirmed that both the network and the OSPF protocol were functioning properly. From the topology in Figure 4.2, we could also compute the shortest paths/hops for tracing packets from R5 to Host B by a physical examination of the network.
R5 console
R5:~# route
Kernel IP routing table (thirty-nine entries, one per subnet 10.0.1.0/24 through 10.0.38.0/24 plus the management network; columns: Destination, Gateway, Genmask, Flags, Metric, Ref, Use, Iface)

Figure 4.9: An example of an OSPF routing table for the R5 console
Both the physical examination and the emulation results produced the same number of shortest hops/paths, i.e., six hops to the final destination.
Using the above three validation tests, we confirmed that our emulated GEANT network was functional and that the OSPF protocol was running correctly in the network. Because the network topology had been validated, we next proceeded with our networking experiments.
traceroute
R5:~# traceroute -n 10.0.37.8    #Host B
traceroute to 10.0.37.8 (10.0.37.8), 30 hops max, 40 byte packets
 1 10.0.12.8 33.176 ms 0.366 ms 0.255 ms
 2 10.0.18.8 47.148 ms 0.728 ms 0.587 ms
 3 10.0.27.4 50.409 ms 0.995 ms 0.914 ms
 4 10.0.28.8 41.210 ms 0.676 ms 0.484 ms
 5 10.0.29.8 46.643 ms 0.651 ms 0.555 ms
 6 10.0.37.8 46.573 ms 0.952 ms 0.659 ms

Figure 4.10: Traceroute from R5 to Host B
4.3 Case studies in the virtual GEANT network
In this section, we present two case studies investigated in the virtual GEANT network.
The experimental set-up is similar to that detailed in Section 3.2, but slightly modified
as explained in Section 4.2.2 to conform with the requirements necessary for a large
scale networking scenario.
In the first case study, we examined the impact of removing links as detected in the total routing cost. In the second case study, we investigated the effects of the routers' removal and the corresponding total routing cost. The two case studies helped us to understand the impact of link and router failures in the virtual GEANT network and their resulting total costs.
In [20, 21], Quoitin et al. partitioned the routing changes into four different classes: Peer change, Egress change, Intra cost change, and Intra path change. Each of these changes is described in their work. In our emulated GEANT network, we focus on the Intra cost change as the routing change, because the other changes cannot be measured in this network. This particular change occurs when there is no egress change, only a change in the IGP cost of the ingress-egress path in the network. Our emulated GEANT network assumed that the link weights were constant at ten units, and the link weights reflected the cost of using a link. To minimize the overall cost, the OSPF routing protocol runs Dijkstra's algorithm to determine the shortest path, which in our case studies is also the least-cost path.
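As a small worked example of an Intra cost change (the numbers are illustrative only, not measurements from our testbed): with a uniform link weight of ten, an ingress-egress path of three hops has an IGP cost of 3 x 10 = 30. If a link failure forces that ingress-egress pair onto a five-hop detour, the new cost is 5 x 10 = 50, so the Intra cost change contributed by that pair is 50 - 30 = 20.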
We also used the case studies to identify the links whose loss produces higher routing
costs and the routers whose loss yields the largest routing costs. Our goal in these
case studies was to demonstrate that the emulation method produces useful results
for understanding intra-domain routing. In the next section, Section 4.4, we use this
information to make a comparison of emulation and simulation techniques.
4.3.1 Single-link failures with OSPF
Most network operators do not have a sufficient understanding of the effect of link
failures in the network. Evaluating and determining which link failures will change
the outcome of the route selection in a large network configured with OSPF is a difficult
problem. Understanding and evaluating effects of link failures are important because
routing changes often lead to traffic shifts and traffic congestion. For a network operator, it is important to determine whether the network will be able to accommodate
the traffic load when single link failures occur. In addition, it is also necessary to
identify the links in the network whose loss would cause increases in the total routing cost (i.e., the sum of Intra cost changes) and protect such links by the addition of parallel links to mitigate the impact of link failures in the network.
In this case study, we experimented with the removal of links in the emulated GEANT
network. The first objective was to compute the head node routing costs of each link
for a fully functional network and compare it with that of a missing-link networking
scenario. We aimed to identify the links most affected by the single link failures in
the network. The second objective was to relate our results from the emulated GEANT
network with those of simulations carried out by Quoitin et al. in [20, 21].
We used our virtual GEANT network that was validated in the last section for these
experiments. Firstly, we conducted the experiments as described in steps five to seven
of Section 4.2.2 for separate single link failures for each router. When the network
was functioning, we recorded the routing tables for each router. Secondly, we removed
each link from the virtual network, one at a time , and recorded the corresponding
routing tables for the router at the beginning of the link. This removal of link was
done for all the links in the network, and each time we computed the total cost of
routings and re-routings of these links. In this experiment, we selected cost as an
index for measuring routing changes. Data collection was done by running the route
command directly from the console of the beginning router of the removed link. The
beginning of a link is the starting router and the ending of the link is the ending
router for any link in the network. An example of data collection for a removed link
R1-R2 is as follows. The sum of total cost for routings and re-routings was collected
from the console of the router-R1 before and after link R1-R2 was removed.
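As an illustration of this data-collection step (a sketch only, not the exact commands recorded for the project), the Metric column of the route output can be summed directly on the console of the head node; route -n prints two header lines, after which the fifth column holds each destination's cost:

R1:~# route -n | tail -n +3 | awk '{ sum += $5 } END { print "total routing cost:", sum }'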
The resulting changes in the routing tables for each link removed are shown graphically in Figure 4.11. In this figure, we show results for both conditions, that is, when the network was fully functional and when individual links were removed. In Figure 4.11, the links of the virtual network are shown on the x-axis while the head node routing costs are shown on the y-axis.
OSPF: Single link failure analysis (bar chart: links on the x-axis, head node routing cost on the y-axis; legend: Full links cost, Less 1 link)

Figure 4.11: Single link failure analysis for OSPF
For each link removed, we observed the total routing cost at the head nodes of the links, that is, the sum of Intra cost changes. There were remarkable changes in the number of routers that increased their cost (metric) as a result of single-link failures in the virtual GEANT network. We observed about 12% variation in the total routing cost over all the links, and an average increase of 13%, with a range of 5% to 20%, in the cost of re-routings over all the links in the emulated GEANT network. We also observed remarkable increases in the total routing cost at the link head nodes, averaging 18% with a range of 16% to 20%, for the following links: R1-R2, R1-R3, R2-R5, R3-R13, R5-R10, R5-R6, R7-R9, R9-R17, R10-R12, R12-R16, R13-R14, R18-R23, R21-R23, and R22-R23. From Figure 4.2, the failures of these links would increase the total cost of re-routings in the network. In this project, we considered any link with an increase of 10% or more to constitute a potential major change in the total cost of routing. The increase in the total cost reflects the fact that those links were important for routing in the network; removal of such links would have moderate effects on the network.
In our experiments, we also observed small increases in the total routing cost at the head nodes of these links: R2-R4, R4-R10, R15-R22, R17-R19, and R18-R19. These routing costs, the sum of Intra cost changes, increased by an average of 6%, with a range of 5% to 7%, for their re-routings in the network. The slight increases in the total routing cost at the head nodes of these links demonstrate that their effect in the network is of lesser importance. The links with higher routing costs in the network are more important, and their removal or breakage moderately affected the routing costs.
Without missing links, the network functioned smoothly, and we recorded the head node routing costs from the routing tables. However, we observed differences in the head node routing costs when we conducted experiments with link failures in the same network. Link failures clearly resulted in moderate changes in the total routing cost at the head nodes of these links in the network, and such identified links need to be protected to prevent traffic congestion.
In these experiments, we observed a change in the total routing cost at the node at the head of a link that was removed. This result is qualitatively consistent with the description of OSPF given in Section 2.5.2.
4.3.2 Single-router failures with OSPF

Most network operators and ISPs desire to understand the impact of router failures in the network. Evaluating and determining which router failures will change the
outcome of the route selection is a difficult problem in a large network configured
with OSPF. Understanding and evaluating effects of router failures are important,
because routing changes often lead to traffic shifts and traffic congestion.
For a
network operator, it is often important to predict whether the network will be able
to accommodate the traffic load when single-router failures occur. In addition, it is
necessary to identify which of the routers in the network should be protected against
router failures by the addition of parallel routers.
In this case study, we experimented with the removal of routers in the emulated GEANT network. In a large AS, it is often difficult to predict which router failures will most affect the total routing cost, the sum of Intra cost changes. Our first objective was to collect the routing information for the fully functional network and compare the results with the routing information for a missing-router networking scenario. We sought to identify which routers are most affected by single-router failures in the network. Our second objective was to relate our results from the emulated GEANT network to the simulation results reported by Quoitin et al. in [20, 21]. We discuss the comparison in Section 4.4.
As explained in Section 2.5.2, an OSPF router typically runs Dijkstra's shortest path algorithm to determine a shortest path tree to all subnets, with itself as the root node. In this network, we assumed that each link has a cost of ten, and consequently the least-cost path is the same as the shortest path.
For these experiments, we used the virtual GEANT network that had already been validated as described in the last section. Firstly, we ran the experiments as described in steps five to seven of Section 4.2.2 a total of twenty-three times, that is, once for each router. When the network was fully functional, we recorded the total routing cost from the routing tables for each router. Secondly, we removed one router for each experiment using our XML specifications in Appendix C. Emulations of the network were performed sequentially until the effects of a single-router failure for each router in the network were recorded. For each router disabled, the data were collected by running the route command directly from the console of each remaining router and summing the costs. These experiments consumed a lot of time and resources because we had to collate the resulting data for each of the twenty-three routers that were disabled, and for the remaining routers, on each occasion.
Normally, a router broadcasts link state information whenever there is a change in the
network. The OSPF routers periodically flood the network with their advertisements,
thereby adjusting their routing tables and letting other routers know that they are
still functional.
The resulting changes in the number of routing entries for each router removal are shown graphically in Figure 4.12. The results for both conditions are displayed: when the network was fully functional and after the removal of each router. Each router in the virtual network is shown on the x-axis, and the total routing cost in the network is shown on the y-axis.
When each router was removed, we observed moderate increases in the total routing cost for some routers. These variations averaged 7.5%, with a range of 5% to 10%, in the total routing cost (i.e., the sum of Intra cost changes) for these routers: R5, R6, R10, R12, and R14. In this project, any router with more than a 5% increase was considered to represent a potential major change in the total cost of routing. These results confirmed that the removal of any of these routers in the network would lead to increases in the total routing cost. However, failure of such routers could lead to an increasing cost of reaching some destinations in the network.
We also observed that routers R7, R17, R19, R20, R21, and R22 have the least effect, because their removal would lead to a shortfall of an average of 4.5%, with a range of 2% to 7%, in the total routing cost (i.e., the sum of Intra cost changes), not in the individual cost for each router in the network. This shortfall means that their
OSPF: Single router failure analysis (bar chart: routers on the x-axis, total routing cost on the y-axis)

Figure 4.12: Single router failure analysis for OSPF
removal could reduce the total routing cost (i.e., the sum of Intra cost changes) in the network. It would lower the total cost of traversing the whole network. The fluctuation in the total routing cost gave us a hint about the possible changes in the cost of maintaining traffic flows in the network.
Essentially from the data collected, we observed that there is a fluctuation in the
total routing cost for the missing routers networking scenario when compared with the
original, fully functioning network. These results confirm the concepts in Section 2.5.2
that there are changes in the routing tables when each router is removed from the
network.
4.4 Comparison of emulation and simulation results for the GEANT network

In this section, we compare the results obtained from our emulation studies of the GEANT network with the simulation studies of the same network carried out by Bruno Quoitin in [20, 21]. We briefly discuss the main difference between these two experimental techniques.
The emulation and simulation techniques take complementary approaches toward computing routing. Typically, the goal of emulation techniques is to closely reproduce the features and behaviors of real-world devices, while the goal of simulation techniques is to predict the outcomes of running a set of network devices in a network based on the internal model of the specified simulator. In our emulation studies, we obtained results that show a similar pattern to those of Quoitin et al. [20, 21] regarding the changes in network conditions caused by link and router failures in the network.
From our emulation studies of single-link failures, we observed that a single link failure often leads to noticeable changes in the total routing cost at the head nodes of the links in the network. All of the links in our virtual GEANT network show variation in the head node routing costs, as shown graphically in Figure 4.11. For instance, in Figure 4.2, the links R1-R2, R1-R3, R2-R5, R3-R13, R5-R10, R5-R6, R7-R9, R9-R17, R10-R12, R12-R16, R13-R14, R18-R23, R21-R23, and R22-R23 show some increase in the cost, while the links R2-R4, R4-R10, R4-R11, R6-R7, R8-R9, R10-R11, R15-R20, R17-R19, and R18-R19 show only slight increases for re-routings when a link removal occurred in the network. This is similar to the results obtained in [20, 21]; the changes in the routing updates for the simulation work can be obtained from page 87 of [20]. From our investigations, we observed that all of our virtual GEANT links caused nearly 20% fluctuations in the head node routing costs when they failed in our emulated network, which was primarily intra-AS. On the other hand, Quoitin et al. observed that about 60% of the GEANT links caused more than 100,000 routing changes when they failed in their simulated network. The difference between the percentages obtained for our emulated network and for the simulated network can be explained by the fact that our emulated network percentage covers only the total head node routing costs, while the simulated network percentage covers all four classes of routing changes described in their work. It can be seen clearly that both experimental techniques identify the important links that are most affected by single-link failures in their models of the GEANT network.
We also agree with Quoitin et al. that the number of intra-domain re-routings is small. We recorded a relatively low number of routing changes in our virtual network because our network design mainly focused on an intra-AS; that is, we concentrated on purely intra-domain re-routing. In [20, 21], Quoitin et al. also remarked that there are few routing changes in the intra-cost change (that is, a change in IGP cost without an egress change) and intra-path change (that is, the same IGP cost for an ingress-egress pair) classes. This is because models of the GEANT would not capture all the changes in the routing updates from single-link failures for a transit network like the GEANT.
For our emulation studies of single-router failures, we observed that failures of GEANT routers often lead to changes in the total routing cost. The failure of a single router is equivalent to the failure of all links that are attached to that router. In [20, 21], it was observed that the failure of some routers could lead to the unreachability of some destinations. The routers R5, R6, R10, R12, and R14 accounted for moderate increases in the total routing cost, as their failures also affected the most critical links connected to these routers. These links and routers are also identified in Figures 4.11 and 4.12. The routing changes for the simulation work can be obtained from page 88 of [20]. Our emulation result is consistent with that of the simulation work in [20, 21]. It can be seen clearly that both experimental techniques identify the important routers that are most affected by single-router failures in their models of the GEANT network.
There is a noticeable difference in the number of routing changes (i.e., the routing cost) between our emulation studies and the simulation work. This difference occurred because Quoitin et al. included BGP routes in their simulation studies while our emulation studies focused purely on intra-AS routes. This explains the large number of routing changes recorded in their experiments. However, the pattern of the routing costs reflects similar behavior for the cases with link and router failures in the network.
4.5 Conclusions

In this section, we provide discussion and experimental conclusions. We used an emulation technique to model the GEANT network, carried out validation tests, and conducted two experimental case studies for intra-domain routing. We used the two emulation case studies to make a comparison with the simulation work and to infer the following conclusions.
In the first case study, we used emulation techniques to examine the impact of link failures on the head node routing costs in the network using the OSPF routing protocol. From our results, we inferred that single-link failures in the virtual network account for less than a 20% difference in the total head node routing costs. Our emulation result shows similar patterns to those of the simulation work in [20, 21]. We inferred that both the emulation and simulation studies identified links that were important in the models of the GEANT network. Such links were not exactly the same because of the different routing data used in the two experimental techniques.
In the second case study, we used emulation techniques to understand the impact of router failures on the total routing cost using the OSPF routing protocol. We used the single-router failure analysis to identify heavily-used routers in the network and also to understand their behavior under the OSPF routing protocol. Such routers include R5, R6, R10, R12, and R14, and they accounted for moderate increases in the total routing cost. Our results are similar to the simulation work in [20, 21], which also observed that failures of single GEANT routers often cause different classes of routing changes and lead to traffic congestion. Finally, we were able to use the emulation technique and the OSPF routing protocol to understand the changes in the routing costs of our emulated GEANT network caused by link and router failures.
Chapter 5
Conclusions and Future Work
This chapter concludes the project report. Section 5.1 is a summary of the work
done, and the main conclusions are presented in Section 5.2. Lastly, we discuss future
directions of this research in Section 5.3.
5.1 Project Summary

We used emulation techniques to design and implement two virtual network testbeds in this project. In this work, we emulated simple and complex networks; these networks were validated and used for our investigations. We implemented a fifteen-node virtual network testbed and configured it with the RIP routing protocol. We also modelled a complex network, the GEANT, and configured it with a more powerful routing protocol, OSPF. These testbeds were used to explore experiments on routing changes caused by link or router failures in the two networks.
With our simple network testbed, we were able to use emulation techniques to produce meaningful results that are comparable to the expected results for a small network, in this case a fifteen-node network. This testbed provided us with an environment for testing the RIP protocol while it works dynamically to re-configure the network against fluctuations or changes of conditions in the network. In our experiments, the RIP protocol was useful for identifying missing links and critical links in the network; this result confirmed our theoretical expectations of RIP. In addition, we observed that when there was a single-router failure in the RIP-configured network, it was equivalent to the simultaneous failure of all the connecting links. During the single-router failure experiments, there was no need to wait for network stabilization; the next available router immediately became the next hop and the routes were adjusted accordingly.
We carried out two case studies in the virtual GEANT network testbed to investigate routing changes. In the first case study, an evaluation of the impact of missing links on a complex network, we observed the network behavior for missing-link scenarios. These scenarios revealed critical links that were important for the operation of the network. For network operations, link failures accounted for changes in the total routing costs in the routing tables. In the second case study, an evaluation of the impact of missing routers in the GEANT network testbed, we observed that the failures of certain routers could lead to an increase in the cost of reaching some destinations. The testbed enabled us to study the behavior of the network when it was used for intra-domain routing. From the results collected, we observed that the OSPF protocol was efficient at re-computing the routing tables in the case of missing routers. The protocol flooded the network with routing information updates and adjusted quickly to new conditions. Our results are consistent with those obtained when similar experiments were performed on the simulation of the GEANT network for the missing routers.
Lastly, we used emulation techniques to gain invaluable experience creating, configuring, and managing virtual networks that are similar to live networks. This practical knowledge and understanding provided us with insights for the deployment of physical networks when needs arise. We gained much-needed knowledge and hands-on experience in building and maintaining small and large-scale networks.
In this project, there were two major limitations to conducting our experiments. The first limitation was our inability to obtain the network graph for the simulated GEANT network of Quoitin et al. This limitation prevented us from having an exact representation of the links and routers in the design of our emulated networking scenarios. Though the GEANT network has ceased to exist, it would be helpful if the GEANT2 operators could produce a network graph of their new network for future research purposes.
The second limitation was our inability to obtain and use the same routing data that Quoitin et al. collected on November 24th, 2004 [20, 21]. In our experiments, we had to generate our own routing data from our emulation of the GEANT network. This lack of routing data accounted for non-replicate references to individual links and routers in the graphical presentation of our results. However, we observed a similar pattern of link and router behaviors as recorded in the simulation experiments.
5.2 Conclusions

There are five main conclusions from this project:
1. Our experiments in Chapters 3 and 4 enabled us to develop virtual network testbeds that are re-usable and re-configurable by users. These testbeds will enhance the learning and testing of network applications and services by students and network administrators. Network configurations and training can be provided to students without requiring a real network. Our testbeds can be used as templates for practising and learning network configurations.
2. Our experiments in Chapters 3 and 4 show that emulation-based experiments demonstrate typical behaviors of both the RIP and OSPF protocols in any network. Both dynamic routing protocols are able to quickly re-compute routing tables when there are missing links, and the OSPF protocol effectively handles missing-router scenarios by flooding the network with new routing information.
3. Our experiments in Chapters 3 and 4 confirm that emulation-based experiments can help ISP operators to understand routing changes and to assess the total routing costs of traversing the network. Virtual network testbeds can be used to study missing routers and links in simple and complex networks. This information is useful because emulation environments closely reproduce the features and behaviors of real-world devices. Emulated networks undergo the same packet exchanges and state changes that occur in the real world.
4. Our experiments in Chapters 3 and 4 confirm that emulation techniques produce reasonable results that are consistent with simulation techniques. Our emulation results are consistent with simulation results in identifying critical links and routers that can influence routing changes and traffic distributions in the network. The experimental work in Chapters 3 and 4 evaluated the impact of link and router failures in the network, achieved comparable patterns of network behavior when there were link and router failures in the network, and identified network links and routers that need to be protected.
5. Our experiments in Chapters 3 and 4 affirm that emulation is a cheap and fast way to model complex networks. To conduct emulation of networks, we simply download the free VNUML tool and install it on a Linux machine for easy creation of networks. Network analysts and ISP operators can easily use this fast approach to investigate their desired networks.
5.3 Future work

In this project, our focus was to investigate routing changes and total routing costs for an intra-AS. It would be interesting to expand these techniques and explore network behaviors for inter-AS routing changes and traffic distributions. For future work, we recommend the use of the VNUML tool to study inter-domain routing changes. This work will involve using exterior gateway protocols (EGPs) for interconnecting different autonomous systems (ASs). The study of EGPs will help in understanding the operation of the Internet and the collection of ASs that make up the Internet.
The GEANT network has ceased to exist, and has since been replaced with the GEANT2 network. Another avenue of investigation is to conduct similar experiments in the new network and compare the effects of missing links and routers on routing changes in the network. The results obtained would be more relevant and would provide useful suggestions for ISP operators based on the new network.
It is also worth studying the use of combined experimental techniques for studying routing changes and total routing costs. Applying simulation, emulation, and live testing will allow researchers to determine which combination of two or three techniques will improve network tests and experiments.
Bibliography
[1] Paolo Giarrusso a.k.a. BlaisorBlade. SKAS patches. http://www.user-mode-linux.org/~blaisorblade/uml-utilities.
[2] Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, and Andrew Warfield. Xen and the Art of Virtualization. In 19th ACM Symposium on Operating Systems Principles, Bolton Landing, NY, USA, 2003.
[3] James H. Cowie. Scalable Simulation Framework API reference manual. http://www.ssfnet.org.
[4] DANTE. An overview map of the GEANT network. http://www.geant.net/upload/pdf/Topology_Oct_2004.pdf.
[5] Jeff Dike. User Mode Linux. Pearson Education, Inc., 1st edition, 2006.
[6] Kevin Fall and Kannan Varadhan. The Network Simulator Manual. http://www.isi.edu/nsnam/ns/.
[7] Tony Dongliang Feng, Rob Ballantyne, and Ljiljana Trajkovic. Implementation of BGP in a network simulator. Technical report, Simon Fraser University, 2004.
[8] Fermin Galan and David Fernandez. VNUML tutorial. http://www.dit.upm.es/vnumlwiki/index.php/Tutorial.
[9] Fermin Galan, E. Garcia, C. Chavarri, D. Fernandez, and M. Gomez. Design and implementation of an IP multimedia subsystem emulator using virtualization techniques. Technical report, Centre Tecnologic de Telecomunicacions de Catalunya (CTTC), Av. Canal Olimpic s/n, Castelldefels, Spain, May 2006.
[10] Christian Huitema. Routing in the Internet. Prentice Hall, 2nd edition, 2000.
[11] Kunihiro Ishiguro. Quagga Routing Software Suite. http://www.quagga.net.
[12] Linux Kernel. The vanilla kernel. http://www.kernel.org.
[13] Bruce Kneale and Ilona Box. A virtual learning environment for real-world networking. Informing Science Journal, June 2003.
[14] Bruce Kneale, Ain Y. De Horta, and Ilona Box. VELNET: Virtual environment for learning networking. In 6th Australasian Computing Education Conference (ACE2004), Dunedin, New Zealand, 2004. Australian Computer Science Society.
[15] James F. Kurose and Keith W. Ross. Computer Networking: A Top-Down Approach Featuring the Internet. Pearson-Addison Wesley, 3rd edition, 2005.
[16] Steve Liu, Willis Marti, and Wei Zhao. Virtual Networking Lab (VNL): its concepts and implementation. In Proceedings of the 2001 American Society for Engineering Annual Conference & Exposition, 2001.
[17] Kuthonuzo Luruo and Shashank Khanvikar. Virtual networking with User-Mode Linux. Linux For You Magazine, January 2003.
[18] Luca Mottola. Overlay Management for Publish-Subscribe in Mobile Environments. PhD thesis, Politecnico di Milano, 2003-2004.
[19] Wendell Odom. CCNA INTRO: Exam Certification Guide. Cisco Systems, 2004.
[20] Bruno Quoitin. BGP-based Interdomain Traffic Engineering. PhD thesis, Universite Catholique de Louvain, Belgium, August 2006.
[21] Bruno Quoitin and Steve Uhlig. Modeling the Routing of an Autonomous System with C-BGP. In IEEE Network Magazine. IEEE, November/December 2005.
[22] Massimo Rimondini. Emulation of computer networks with Netkit. Technical report, Universita degli Studi di Roma Tre, 2007.
[23] Massimo Rimondini. Interdomain Routing Policies in the Internet: Inference and Analysis. Computer Science and Engineering, Roma Tre University, 2007.
[24] Andreas Steffen, Eric Marchionni, and Parik Rayo. Advanced network simulation under User-Mode Linux. Gesellschaft für Informatik, 2005.
[25] Andrew S. Tanenbaum. Computer Networks. Prentice Hall, Upper Saddle River, NJ, 3rd edition, 1996.
[26] Yuichiro Tateiwa, Takami Yasuda, and Shigeki Yokoi. Virtual Environment Based Learning for Network Administration Beginner. In ABR & TLC Conference Proceedings, Hawaii, USA, 2007.
[27] OPNET Technologies. OPNET. http://www.opnet.com.
[28] Jeroen van der Ham and Gert Jan Verhoog. Virtual environments for networking experiments. Technical report, University of Amsterdam, the Netherlands, 2004.
[29] David Watson, Farnam Jahanian, and Craig Labovitz. Experiences with monitoring OSPF on a regional service provider network. In 23rd International Conference on Distributed Computing Systems (ICDCS 2003), Ann Arbor, Michigan, US, 2003. Computer Science Society.
[30] Klaus Wehrle, Frank Pahlke, Hartmut Ritter, Daniel Muller, and Marc Bechler. The Linux Networking Architecture: Design and Implementation of Network Protocols in the Linux Kernel. Pearson-Prentice Hall, 1st edition, 2005.
[31] Wikipedia. An overview of the GEANT network. http://en.wikipedia.org/wiki/GEANT.
[32] Chris Wright. Virtually Linux: Virtualization techniques in Linux. In Proceedings of the Linux Symposium, Volume Two, Ottawa, Ontario, Canada, 2004.
Appendix A
The XML file for a test bed with RIP
A.1 A fifteen-node virtual network testbed
The following XML file describes a sample scenario of fifteen nodes to be used with UML
and VNUML parser to set up a virtual network testbed. This testbed is configured
with RIP to verify whether or not this intra-domain routing protocol is functioning
correctly. We also use the testbed to study routing instability in the network.
The XML file is stored as /usr/share/vnuml/RIP15nodes.xml on the host machine, and a copy of this XML specification is included in the report as follows.
(The listing that appears here in the original report could not be recovered cleanly: the XML markup was lost in extraction, leaving only the listing's own line numbers, 1 through 287, and scattered fragments. The recoverable fragments show the DTD version 1.8, the scenario name RIP15nodes, the root filesystem /usr/share/vnuml/filesystems/root_fs_tutorial, the kernel /usr/share/vnuml/kernels/linux, an xterm console, the interface addresses of routers r1 through r9 and the attached hosts in the 10.0.x.y subnets, and the per-router start and stop commands /usr/lib/quagga/zebra -d, /usr/lib/quagga/ripd -d, killall zebra, and killall ripd.)
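Because the original listing is unreadable here, the following is only a rough sketch of the general shape of such a VNUML scenario, assembled from the fragments above and from the VNUML tutorial [8] as best recalled; the element names, attributes, and addresses shown are assumptions and should be checked against the VNUML DTD and the project's actual file.

<vnuml>
  <global>
    <version>1.8</version>
    <simulation_name>RIP15nodes</simulation_name>
    <vm_defaults>
      <filesystem type="cow">/usr/share/vnuml/filesystems/root_fs_tutorial</filesystem>
      <kernel>/usr/share/vnuml/kernels/linux</kernel>
      <console id="0">xterm</console>
    </vm_defaults>
  </global>

  <!-- one virtual switch per link; Net0 and Net2 are assumed names -->
  <net name="Net0" mode="uml_switch"/>
  <net name="Net2" mode="uml_switch"/>

  <vm name="r1">
    <if id="1" net="Net0"><ipv4>10.0.0.1</ipv4></if>
    <if id="2" net="Net2"><ipv4>10.0.2.3</ipv4></if>
    <!-- start the zebra and ripd daemons when the scenario boots -->
    <exec seq="start" type="verbatim">/usr/lib/quagga/zebra -d</exec>
    <exec seq="start" type="verbatim">/usr/lib/quagga/ripd -d</exec>
    <exec seq="stop" type="verbatim">killall zebra</exec>
    <exec seq="stop" type="verbatim">killall ripd</exec>
  </vm>
  <!-- r2 through r9 and the end hosts follow the same pattern -->
</vnuml>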
Appendix B
Configuration files for Zebra, RIP
and vtysh
In these configuration files, you can specify the debugging options, a vty's password,
the RIP routing dremon configurations, a log file name, and so forth.
We wrote three configuration files for each router configured with RIP . These files
are zebra.conf , ripd.conf and vtysh.conf , and are described below. These brief
descriptions are as follows:
• The first file is a default configuration file, and it is called zebra. conf. This
file, zebra, is an IP routing manager and is used to provide kernel routing
updates, interface lookups, and the redistribution of routes between different
routing protocols [11].
• The second file is a default configuration file , and it is called ri pd . conf. This
configuration file contains a ripd dremon that implements the RIP protocol.
This RIP protocol requires interface information maintained by zebra dremon.
It is mandatory to run zebra before running ripd dremon, and zebra must be
74
invoked before we use an ripd dremon.
• The third file is vtysh. conf file and it configures the virtual terminal -
(vty).
The vty is a command line interface (CLI) for user interaction with the routing
daemon. Users can connect to the dremons via the telnet protocol. To enable a
vty interface, users have to setup a vty password.
These files are usually kept in the /etc/quagga directory of the machine. For ease of reference, we will upload all the configuration files for this project to this website: http://web.unbc.ca/~bankole/ after the project defense.
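For example, the three files could be installed into /etc/quagga as shown below. This is only a sketch: the quagga user and group, and the exact permissions, depend on the distribution's Quagga packaging and are assumptions here.

# Copy the per-router configuration files into Quagga's configuration directory
cp zebra.conf ripd.conf vtysh.conf /etc/quagga/

# Make them readable by the routing daemons (user and group names are distribution dependent)
chown quagga:quagga /etc/quagga/*.conf
chmod 640 /etc/quagga/*.conf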
Below is a set of sample files for router R1, corresponding to the configuration files explained above.
B.1 zebra.conf

! zebra sample configuration file
!
! $Id: zebra.conf.sample,17:26:38 developer Exp $
!
hostname R1
password xxxx
enable password zebra
!
! Interface's description.
!
!interface lo
! description test of desc.
!
!interface sit0
! multicast
!
! Static default route sample.
!
!ip route 0.0.0.0/0 203.181.89.241
!
!log file zebra.log
log file /var/log/zebra/zebra.log
B.2 ripd.conf

! RIPd sample configuration file
!
! $Id: ripd.conf.sample,17:28:42 developer Exp $
!
hostname ripd
password zebra
!
debug rip events
debug rip packet
!
router rip
 network 10.0.0.0/8
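Once zebra and ripd are running with the configurations above, the RIP state can be inspected from the vty. The following standard Quagga show commands are a minimal sketch of such a check; the output naturally depends on the testbed.

# RIP routing table and timers as seen by ripd
vtysh -c "show ip rip"

# RIP protocol status: version, update interval, neighbouring routers
vtysh -c "show ip rip status"

# Kernel routing table maintained by zebra, restricted to RIP routes
vtysh -c "show ip route rip"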
B.3 vtysh.conf

! vtysh sample configuration file
!
!username niyibank nopassword
log file /var/log/zebra/vtysh.log
Appendix C
The XML file for the virtual GEANT
network
In order to set up dynamic routing with the OSPF routing protocol, we configure the virtual network testbed with OSPF. This protocol is widely used in large networks such as enterprise networks and ISPs because it converges quickly; by convergence, we mean the time the protocol takes to respond to changes in the network. These changes could occur due to link and router failures.
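Such changes can be reproduced inside the running testbed by failing a link or a router in one of the virtual machines and watching OSPF reconverge. The sketch below only illustrates the idea; eth1 is a hypothetical interface name, and these commands are not part of the project's scripts.

# Inside a virtual router: take an interface down to emulate a link failure
ifconfig eth1 down

# ... let the remaining routers recompute their shortest paths, then restore the link
ifconfig eth1 up

# Emulate a router failure by stopping its OSPF daemon
killall ospfd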
We need three separate configuration files - zebra.conf, ospfd.conf and vtysh.conf - for each of the twenty-three routers. In the ospfd.conf file, each router defines the subnets and the OSPF areas that make up the network. Both zebra.conf and vtysh.conf resemble the equivalent files that we already explained in Subsection 3.3.2.
In the OSPF configuration file, we specify the debugging options, the routing daemon configuration and the name of the log file. These configuration files are referenced from the XML specification file used to create the virtual network. Samples of the three configuration files are given in Appendix D.
The XML file is stored as /usr/share/vnuml/NIYiospf.xml on the host machine, and a copy of this XML specification is included later in this appendix. On the host machine, we locate this XML file and then start or stop the scenario and its routing daemons by issuing the necessary commands, as sketched below.
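As an illustration, a VNUML scenario is normally built, driven and released with the vnumlparser.pl tool. The sketch below assumes standard VNUML 1.8 usage and assumes that the <exec> sequences in the file are tagged start and stop; the actual sequence names may differ.

# Build the virtual GEANT scenario from the XML specification
vnumlparser.pl -t /usr/share/vnuml/NIYiospf.xml -v

# Run the command sequences that start or stop the routing daemons
# (sequence names "start" and "stop" are assumed here)
vnumlparser.pl -x start@/usr/share/vnuml/NIYiospf.xml -v
vnumlparser.pl -x stop@/usr/share/vnuml/NIYiospf.xml -v

# Release the scenario when the experiment is finished
vnumlparser.pl -d /usr/share/vnuml/NIYiospf.xml -v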
C.1 A twenty-three node virtual network testbed

The following XML file describes a scenario of twenty-three nodes to be used with UML and the VNUML parser to set up a virtual network testbed. This testbed is configured with OSPF to verify whether or not this intra-domain routing protocol is functioning correctly. We also use the testbed to study routing instability in the network. Below is the script for the XML specifications.
[The XML specification defines a scenario named newGEANT (version 1.8). It uses the filesystem /usr/share/vnuml/filesystems/root_fs_tutorial, the kernel /usr/share/vnuml/kernels/linux and xterm consoles, and declares the virtual networks with mode="uml_switch". The remainder of the file, more than five hundred lines long, defines the twenty-three virtual routers R1 through R23. Each router entry lists its interface IP addresses on the 10.0.x.0 subnets, disables reverse-path filtering with

for f in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > $f; done

and gives <exec> commands that start the routing daemons with /usr/lib/quagga/zebra -d and /usr/lib/quagga/ospfd -d and stop them with killall zebra and killall ospfd; a few entries also set a default route.]
Appendix D
Configuration files for Zebra,
Ospfd and Vtysh
In these configuration files, you can specify the debugging options, a vty password, the ospfd routing daemon configuration, a log file name, and so forth.
We describe the three configuration files: zebra.conf, ospfd.conf and vtysh.conf.
• The default configuration file for zebra is zebra.conf. zebra is an IP routing manager and is used to provide kernel routing updates, interface lookups, and the redistribution of routes between different routing protocols [11].
• The default configuration file for ospfd is ospfd.conf. The ospfd daemon implements the OSPF protocol (OSPF version 2). OSPF requires interface information maintained by the zebra daemon, so zebra must be running before ospfd is started, as sketched after this list.
• The vtysh.conf file configures the virtual terminal (vty). The vty is a command line interface (CLI) for user interaction with the routing daemons. Users can connect to the daemons via the telnet protocol. To enable a vty interface, users have to set up a vty password.
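A minimal sketch of this start-up order is given below, assuming the daemons read their files from /etc/quagga; -d runs a Quagga daemon in the background and -f names an explicit configuration file.

# zebra must run first, because ospfd takes its interface information from it
/usr/lib/quagga/zebra -d -f /etc/quagga/zebra.conf

# then start the OSPF daemon
/usr/lib/quagga/ospfd -d -f /etc/quagga/ospfd.conf

# stop the daemons when the experiment is finished
killall ospfd
killall zebra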
These files are usually kept in the /etc/quagga directory of a host machine. For ease of reference, we will upload all the configuration files for this project to this website: http://web.unbc.ca/~bankole/ after the project defense.
Below is a set of sample files for router R1, corresponding to the configuration files explained above.
D.1 zebra.conf

! zebra sample configuration file
!
hostname R1
password xxxx
! enable password zebra
!
! Interface's description.
!
interface lo
 description test of desc.
!
interface sit0
 multicast
!
! Static default route sample.
!
ip route 0.0.0.0/0
!
! log file zebra.log
log file /tmp/zebra.log
D.2 ospfd.conf

! Config by Julius
! OSPF configuration
!
hostname R1
password xxxx
log file /tmp/ospfd.log
log stdout
!
debug ospf packet all send
!
interface dummy0
!
interface eth1
 ip ospf cost 10
!
interface eth2
 ip ospf cost 10
!
interface eth3
!
interface gre0
!
interface lo
!
interface sit0
!
interface teql0
!
interface tunl0
!
router ospf
 !ospf router-id 10.0.0.255
 !ospf rfc1583compatibility
 !network 10.0.0.0/24 area 0.0.0.0
 network 10.0.1.0/24 area 0.0.0.0
 network 10.0.2.0/24 area 0.0.0.0
!
line vty
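With zebra and ospfd running under this configuration, the OSPF state can be checked from the vty. These are standard Quagga show commands, given here only as a minimal sketch.

# OSPF neighbours and adjacency states
vtysh -c "show ip ospf neighbor"

# Routing table computed by ospfd
vtysh -c "show ip ospf route"

# Kernel routing table maintained by zebra, restricted to OSPF routes
vtysh -c "show ip route ospf"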
D.3 vtysh.conf

! vtysh sample configuration file
!
!username niyibank nopassword
log file /var/log/zebra/vtysh.log
Appendix E
Electronic version of my Project
Report
In this project report, I promised to make available to prospective users and students the two virtual network testbeds that were used for the experiments in my research. Students may copy, modify, and reconfigure both testbeds for their own use and education. The configuration files will be available on my personal homepage.
For ease of reference, we will upload an electronic version of this project report to this website: http://web.unbc.ca/~bankole/ after the project defense.