Changes for page Introduction

Last modified by Kevin Austin on 2018/11/27 04:10

From version < 16.1 >
edited by Kevin Austin
on 2018/11/27 04:07
To version < 17.1 >
edited by Kevin Austin
on 2018/11/27 04:10
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -25,7 +25,6 @@
25 25  1. Storm passes the message for each of the datapoints for every port back to Kafka
26 26  1. Another topology pulls those messages back out of Kafka into a cache for OpenTSDB. OpenTSDB does a compare: if the data hasn’t changed in 10 minutes it discards the data; if the data has changed within the past 10 minutes it writes the data via a micro-cache mechanism into HBase (for use with OpenTSDB).
27 27  
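The compare-and-discard step above can be sketched as a small cache check. This is an illustrative sketch only, not OpenKilda's actual code: the class, field, and method names are hypothetical, and it assumes a datapoint is written when its value changed or when the cached copy is at least ten minutes old.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the compare-and-discard step: write a datapoint
// only if its value changed, or if the cached copy is older than ten
// minutes. Names are illustrative, not OpenKilda's actual code.
public class DatapointCache {
    private static final long TEN_MINUTES_MS = 10 * 60 * 1000L;

    private static final class Entry {
        final double value;
        final long storedAtMs;
        Entry(double value, long storedAtMs) {
            this.value = value;
            this.storedAtMs = storedAtMs;
        }
    }

    private final Map<String, Entry> cache = new HashMap<>();

    /** Returns true if the datapoint should be written to HBase. */
    public boolean shouldWrite(String metricKey, double value, long nowMs) {
        Entry prev = cache.get(metricKey);
        boolean write = prev == null
                || prev.value != value                        // value changed
                || nowMs - prev.storedAtMs >= TEN_MINUTES_MS; // cached copy stale
        if (write) {
            cache.put(metricKey, new Entry(value, nowMs));
        }
        return write;
    }
}
```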
28 -(% class="wikigeneratedid" %)
29 29  == ==
30 30  
31 31  == Storm Topologies ==
... ... @@ -39,7 +39,6 @@
39 39  
40 40  The state for Flow Modifications and ISL Discovery is always kept in Storm for OpenKilda. Currently, OpenKilda allows you to turn ports on and off; the switch controller topology will also allow you to set port speeds, and it provides a list of every switch and an inventory of ports for each switch.
41 41  
42 -(% class="wikigeneratedid" %)
43 43  == ==
44 44  
45 45  == Floodlight – OpenFlow Speaker ==
... ... @@ -60,7 +60,6 @@
60 60  * topology discovery
61 61  * northbound interface queue
62 62  
63 -(% class="wikigeneratedid" %)
64 64  == ==
65 65  
66 66  == OpenTSDB/HBase ==
... ... @@ -72,7 +72,6 @@
72 72  
73 73  To scale the process, you can have multiples of the same bolts or workers. The incoming data will be sharded across the workers using the hash that is created in the DatapointParseBolt. If one of the workers is lost, Storm will automatically start a new worker and assign that hash to the new worker. If a new worker is spawned for scaling purposes, new incoming hashes will be assigned to that new worker.
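The sharding described above can be sketched as a simple hash-to-worker mapping: the same datapoint key always lands on the same worker, so each worker sees a consistent slice of the stream. This is an illustrative sketch under assumed names, not the actual DatapointParseBolt code.

```java
// Illustrative sketch (not OpenKilda's actual code) of sharding datapoints
// across a fixed set of workers by hash: identical keys always map to the
// same worker index.
public class DatapointSharder {
    public static int workerFor(String datapointKey, int workerCount) {
        // floorMod keeps the index non-negative even for negative hashCodes
        return Math.floorMod(datapointKey.hashCode(), workerCount);
    }
}
```

When a worker is replaced, the same mapping reassigns its keys to the replacement; this mirrors how Storm's fields grouping routes tuples with the same key to the same task.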
74 74  
75 -(% class="wikigeneratedid" %)
76 76  == ==
77 77  
78 78  == Proactive OpenFlow Model ==
... ... @@ -79,7 +79,6 @@
79 79  
80 80  In the event that the controller loses connectivity with the network, the flows within the data plane continue to operate. This is the same model as traditional switches and routers losing the host control processor. When the controller reconnects with the switches, the flow information within the switches is not erased; the controller uses the switches to learn the state of the network and reconciles inconsistencies. This is especially useful for catastrophic situations where the controller has to restart with no network state in its database.
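The reconciliation idea above amounts to diffing the flows a switch actually carries against the controller's (possibly empty) database, rather than wiping the switch. A minimal sketch, with hypothetical names and flows reduced to string identifiers:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of reconciliation on reconnect: compare the flows
// reported by a switch with the flows recorded in the controller database.
// Names are illustrative, not OpenKilda's actual code.
public class FlowReconciler {
    /** Flows in the database but missing on the switch: re-push them. */
    public static Set<String> toInstall(Set<String> dbFlows, Set<String> switchFlows) {
        Set<String> missing = new HashSet<>(dbFlows);
        missing.removeAll(switchFlows);
        return missing;
    }

    /** Flows on the switch the database does not know: learn or remove them. */
    public static Set<String> toReview(Set<String> dbFlows, Set<String> switchFlows) {
        Set<String> unknown = new HashSet<>(switchFlows);
        unknown.removeAll(dbFlows);
        return unknown;
    }
}
```

After a restart with an empty database, `toReview` returns every flow on the switch, which is how the controller can relearn network state from the switches themselves.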
81 81  
82 -(% class="wikigeneratedid" %)
83 83  == ==
84 84  
85 85  == Neo4j ==
... ... @@ -89,7 +89,6 @@
89 89  
90 90  Neo4j is not used as the path computation engine since its shortest path algorithm does not deal with islands of switches.
91 91  
92 -(% class="wikigeneratedid" %)
93 93  == ==
94 94  
95 95  == Path Computation Engine ==
... ... @@ -99,7 +99,6 @@
99 99  
100 100  The PCE is a breadth-first algorithm written in Java that is called from the flow topology. A breadth-first algorithm can deal with negative cost, and OpenKilda can pass in other variables to create a path. For instance, one of the features in OpenKilda is “negative affinity”, where you can specify a switch that should not be considered in the path of the flow.
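A minimal breadth-first search illustrates both points above: it honours a "negative affinity" set of switches that must not appear on the path, and it naturally detects islands (an unreachable destination yields no path, the case the Neo4j shortest-path algorithm does not handle). This is a sketch with an assumed adjacency-list graph, not the real OpenKilda PCE.

```java
import java.util.*;

// Breadth-first path search sketch: find a path from src to dst while
// skipping any switch in the "avoid" set. Illustrative only, not the
// actual OpenKilda PCE.
public class SimplePce {
    public static List<String> findPath(Map<String, List<String>> links,
                                        String src, String dst,
                                        Set<String> avoid) {
        Map<String, String> parent = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        parent.put(src, src);
        queue.add(src);
        while (!queue.isEmpty()) {
            String sw = queue.poll();
            if (sw.equals(dst)) {
                // walk parent pointers back to the source
                LinkedList<String> path = new LinkedList<>();
                for (String cur = dst; !cur.equals(src); cur = parent.get(cur)) {
                    path.addFirst(cur);
                }
                path.addFirst(src);
                return path;
            }
            for (String next : links.getOrDefault(sw, List.of())) {
                if (!parent.containsKey(next) && !avoid.contains(next)) {
                    parent.put(next, sw);
                    queue.add(next);
                }
            }
        }
        return List.of(); // dst unreachable: src and dst sit on different islands
    }
}
```

With links a→{b,d}, b→{c}, d→{c} and switch b excluded by negative affinity, the search returns the detour a→d→c instead of the shorter a→b→c.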
101 101  
102 -(% class="wikigeneratedid" %)
103 103  == ==
104 104  
105 105  == Flow path creation ==
Screen Shot 2018-11-12 at 8.11.55 PM.png
Size
... ... @@ -1,1 +1,1 @@
1 -0 bytes
1 +237.4 KB
Content
Screen Shot 2018-11-13 at 8.52.01 PM.png
Size
... ... @@ -1,1 +1,1 @@
1 -0 bytes
1 +139.7 KB
Content
flow_path_timing_diagram.png
Size
... ... @@ -1,1 +1,1 @@
1 -0 bytes
1 +26.7 KB
Content
kilda_block_diagram.png
Size
... ... @@ -1,1 +1,1 @@
1 -0 bytes
1 +152.0 KB
Content
kilda_block_diagram_002.png
Size
... ... @@ -1,1 +1,1 @@
1 -0 bytes
1 +170.8 KB
Content
©2018 OpenKilda