Archive for the ‘programming’ Category

En route to CAIDA topology map

Wednesday, March 30th, 2011

In my ongoing endeavor to make the BGP graph comprehensible, a few huge leaps have been made. If you are familiar with other work in this field, you may know the following image. (source)

Internet topology map from CAIDA.org

Although this rendering does convey a few insights that are not plainly visible in the raw graph data, it still has some serious drawbacks. My main criticism is that it is a fixed image that allows neither easy reuse of the underlying data nor any interactivity, such as querying and highlighting certain parts of the graph. It is also impossible to watch the graph change in response to events like the 2008 severing of a FLAG submarine cable, which left much of the Middle East with poor Internet speeds. (BBC article)

As you could see in previous articles, it is not trivial to turn such a huge graph into something meaningful. A main problem has been that the force-based graph layout I employ takes a very long time to converge if I impose too many restrictions on the complexity of the resulting layout. My new approach is to help the force-based algorithm by providing it with a meaningful initialization of the graph layout. Using the CAIDA AS-rank dataset, I determined a circle radius on which each Autonomous System is placed. The initial angle at which each AS is placed was then calculated from the country code in the WHOIS information obtained from APNIC and the other RIRs.
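To give an idea of the scheme, here is a minimal sketch in Python (the names as_rank, as_country and country_longitude are placeholders for the real data sources, not the actual thesis code):

import math

# Placeholder inputs: as_rank maps ASN -> CAIDA AS-rank position (1 = best
# connected), as_country maps ASN -> WHOIS country code, and
# country_longitude maps a country code -> a representative longitude.
def initial_positions(as_rank, as_country, country_longitude):
    max_rank = max(as_rank.values())
    positions = {}
    for asn, rank in as_rank.items():
        radius = rank / max_rank  # well-connected ASes end up near the center
        lon = country_longitude.get(as_country.get(asn, ""), 0.0)
        angle = math.radians(lon)  # the country determines the starting angle
        positions[asn] = (radius * math.cos(angle), radius * math.sin(angle))
    return positions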

After this initial placement, the movement of the graph nodes was restricted to changes in angle, so each AS stays on its rank-determined radius. The force-based layout algorithm then spread the ASes out and moved a fair few of them away from the United States, where many of them are registered although their main business is done elsewhere. The resulting positions can then be rendered and updated in real time using any decent graphics card.
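One such constrained update step could look roughly like this (again a simplified sketch, not the real force model; only attraction between neighbours is shown, and the radius stays fixed):

import math

def constrained_step(positions, edges, step=0.01):
    # Convert to polar coordinates; the radius is fixed, only angles move.
    angles = {n: math.atan2(y, x) for n, (x, y) in positions.items()}
    radii = {n: math.hypot(x, y) for n, (x, y) in positions.items()}
    delta = dict.fromkeys(positions, 0.0)
    for a, b in edges:
        # Signed shortest angular distance from a to b, in [-pi, pi).
        diff = (angles[b] - angles[a] + math.pi) % (2 * math.pi) - math.pi
        delta[a] += step * diff  # pull a towards b along the circle
        delta[b] -= step * diff  # and b towards a
    return {n: (radii[n] * math.cos(angles[n] + delta[n]),
                radii[n] * math.sin(angles[n] + delta[n]))
            for n in positions}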

Longitude-based layout of the graph

There is still a lot of work left but this seems to me like a rather significant breakthrough. Next I will try to integrate my clustering algorithms into the rendering, stay tuned for more…

progress@thesis

Wednesday, February 16th, 2011

Due to some exams at uni, my thesis has been a low priority recently. Since last Thursday, however, those exams belong to the past and my thesis is humming along once more.

Right now I’m focusing on several aspects, such as the rendering (note how the lines between nodes have been replaced by textured quads) and the user interface (descriptions are displayed for the selected AS). Once I have improved the still rudimentary 3D navigation, I will try to get the whole graph to render at once using an improved layout and a clustering algorithm.
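For the curious: turning a line into a textured quad boils down to offsetting both endpoints along the segment’s normal, which yields four vertices that can carry a texture. A toy version of the geometry (not the actual renderer):

import math

def line_to_quad(p0, p1, width):
    # Expand a 2D line segment into four quad corners by offsetting both
    # endpoints along the segment's unit normal.
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy) or 1.0
    # Unit normal, scaled to half the desired line width.
    nx, ny = -dy / length * width / 2, dx / length * width / 2
    return [
        (p0[0] + nx, p0[1] + ny),  # texture coordinate (0, 0)
        (p0[0] - nx, p0[1] - ny),  # texture coordinate (0, 1)
        (p1[0] - nx, p1[1] - ny),  # texture coordinate (1, 1)
        (p1[0] + nx, p1[1] + ny),  # texture coordinate (1, 0)
    ]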

Other interesting sub-projects are accessing the UPDATE messages from BGPmon in real time and integrating further information such as the as_rel dataset from CAIDA.

Stay tuned for more screenshots soon!

Next up: Bachelor Thesis

Friday, January 28th, 2011

It looks like I have finally found a subject for my bachelor’s thesis:

Interactive 3D routing graph visualization (working title)
My supervisors are Dirk Haage and Johann Schlamp from the chair for “Network Architectures and Services” at TU München

What I’m trying to accomplish is to build a tool to visualize the data collected by the Route Views project. This project hosts dumps of routing tables from several locations around the world. The data contains information about the paths that can be used to reach a certain IP address: such a table tells an AS (Autonomous System) which neighbouring AS brings traffic closer to a given IP address. After several such hops you get a route to your target address.
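To make that concrete, a routing table entry essentially maps an IP prefix to an AS path, and the first AS on the path is the next hop. A toy illustration (the prefixes and AS numbers are made up, not real routing data):

# A toy routing table: prefix -> AS path (a list of AS numbers).
routing_table = {
    "192.0.2.0/24": [64500, 64510, 64520],
    "198.51.100.0/24": [64501, 64511, 64521],
}

def next_hop_as(prefix):
    # The first AS on the path is the neighbour to forward traffic to;
    # the last AS is the one that originates the prefix.
    return routing_table[prefix][0]

print(next_hop_as("192.0.2.0/24"))  # -> 64500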

The data contained in these tables is quite massive (a full dump is about 1 GB) and requires extensive caching and pre-parsing. And if you go a step further and consider the graph that is defined by the hops from AS to AS, you are looking at ~30,000 nodes and ~70,000 individual links (each of which may be used in thousands of routes!). The main goal of my thesis is to render this graph in a way that is not just visually pleasing but actually enables the user to explore and understand its complex structure. To avoid ending up with a hairball (which you can see below) I intend to try several graph clustering and layout algorithms (spring-based layouts, LGL, …).
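The graph itself falls out of the AS paths: every pair of consecutive ASes on a path is a link, and counting how often each link appears gives a rough measure of its importance. A sketch (assuming the paths have already been parsed out of the dumps):

from collections import Counter

def build_as_graph(as_paths):
    # Count how many routes use each AS-to-AS link.
    # as_paths is an iterable of AS paths, each a list of AS numbers.
    link_usage = Counter()
    for path in as_paths:
        for a, b in zip(path, path[1:]):
            if a != b:  # skip AS-path prepending (repeated AS numbers)
                link_usage[tuple(sorted((a, b)))] += 1
    return link_usage

# Example with two toy paths sharing one link:
print(build_as_graph([[64500, 64501, 64502], [64503, 64501, 64502]]))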

Another main aspect will be integrating data from various sources, so that users are not forced to copy and paste data all the time when they just want to look up some additional information.

In order to work with such complex information I will build an extensive GUI that enables panning, zooming, rotation and selection through mouse and keyboard controls. To complement this, another fundamental part will be an extensive console interface that allows for complex queries. If I find the time (fingers crossed) I will post updates on my progress from time to time.

Early screenshot of my bachelor thesis

shannon coding experiments

Friday, October 2nd, 2009

I just came back from today’s exercise in information-theoretic modelling (so far I would call it redundancy and data compression, but that doesn’t sound anywhere near as cool) with a question that formed during the exercise.

Our homework was to implement Shannon-Fano coding (a very simple data compression scheme that uses variable-length prefix codes). Basically, the algorithm builds a binary tree so that the more frequent characters sit higher up than the less frequent ones; this tree is then used to derive binary prefix codes. The tree is constructed from a frequency table, sorted in descending order, by recursively subdividing it into two halves whose frequency sums are as equal as possible.
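For reference, the textbook version looks roughly like this (a compact sketch, not my homework submission):

def shannon_fano(freqs, prefix=""):
    # Assign prefix codes to (symbol, frequency) pairs, which must be sorted
    # by descending frequency. The table is split at the point where the two
    # halves' frequency sums are as equal as possible.
    if len(freqs) == 1:
        return {freqs[0][0]: prefix or "0"}
    total = sum(f for _, f in freqs)
    running, split, best = 0, 1, float("inf")
    for i in range(1, len(freqs)):
        running += freqs[i - 1][1]
        if abs(total - 2 * running) < best:  # |right sum - left sum|
            best, split = abs(total - 2 * running), i
    codes = shannon_fano(freqs[:split], prefix + "0")
    codes.update(shannon_fano(freqs[split:], prefix + "1"))
    return codes

table = [("c", 25), ("b", 22), ("a", 20), ("d", 15), ("e", 13), ("f", 5)]
print(shannon_fano(table))  # reproduces the codes in the table further down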

This is the part where I didn’t pay enough attention and thought that equality was the most important goal when dividing. In fact, the dividing step is quite simple, as it is supposed to be a single split of the sorted table at the right point. Instead, I tried a brute-force approach over arbitrary subsets to find the best 1:1-sized division. Later I came up with an approximation that gets quite close to the brute-force results and is much faster for bigger frequency tables: O(2^n) -> O(n).
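In essence, that turns the split into a partition problem. A reconstruction of the idea (not my original code; the greedy variant is one plausible approximation, which is O(n) on an already-sorted table):

from itertools import combinations

def brute_force_split(freqs):
    # Try every subset to minimize the difference of the halves' sums: O(2^n).
    symbols = list(freqs)
    total = sum(freqs.values())
    best, best_left = float("inf"), []
    for r in range(1, len(symbols)):
        for left in combinations(symbols, r):
            diff = abs(total - 2 * sum(freqs[s] for s in left))
            if diff < best:
                best, best_left = diff, list(left)
    return best_left

def greedy_split(freqs):
    # Walk the frequencies in descending order, always adding the next
    # symbol to the lighter half: a fast approximation of the best split.
    left, right, lsum, rsum = [], [], 0, 0
    for sym, f in sorted(freqs.items(), key=lambda kv: -kv[1]):
        if lsum <= rsum:
            left.append(sym)
            lsum += f
        else:
            right.append(sym)
            rsum += f
    return left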

In the exercise I found out how it was supposed to work, and I started comparing the two approaches (brute force/approximation vs. real Shannon-Fano).

My understanding of the problem was that a very even division means the codeword lengths end up close to the lengths suggested by the entropy, and would thus minimize the average code length.

After trying it out I was proven wrong. I used a few articles from the English Wikipedia as test data, and all of them turned out slightly longer using my encoding (the difference between brute force and the approximation was negligible). On average, Shannon-Fano was 3% better than my approach.

After thinking about it for a while I think I found the problem. It seems that my approach favours equal code length a little too much.

The following dataset illustrates the difference:

char: C B A D E F
freq: 25% 22% 20% 15% 13% 5%

The two algorithms then produce the following trees:

Shannon tree vs. equal-split tree

As you can see, the equal-split tree is more balanced, but if you compare the code lengths it is obvious that it is worse for encoding.

char    shannon   equal-split   len*probability (shannon)   len*probability (equal-split)   contribution to entropy
a       10        011           0.4                         0.6                             0.4644
b       01        10            0.44                        0.44                            0.4806
c       00        00            0.5                         0.5                             0.5000
d       110       111           0.45                        0.45                            0.4105
e       1110      110           0.52                        0.39                            0.3826
f       1111      010           0.2                         0.15                            0.2161
total                           2.51                        2.53                            2.4542

The rightmost column represents the ideal contribution of each letter to the total encoding, and the two columns to its left show how well the two algorithms do for that specific character. Most interesting is the last line, where you can see that both algorithms are still a bit away from the lower bound imposed by the entropy; the actual difference between the two algorithms is very slim. The effect varies with the dataset, of course, but I didn’t find any instance where the equal-split was closer to the entropy than Shannon-Fano.
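The numbers in the table can be reproduced in a few lines:

import math

freqs = {"a": 0.20, "b": 0.22, "c": 0.25, "d": 0.15, "e": 0.13, "f": 0.05}
shannon = {"a": "10", "b": "01", "c": "00", "d": "110", "e": "1110", "f": "1111"}
equal_split = {"a": "011", "b": "10", "c": "00", "d": "111", "e": "110", "f": "010"}

def avg_code_length(codes):
    # Expected code length: sum over symbols of p(symbol) * len(code(symbol)).
    return sum(freqs[s] * len(c) for s, c in codes.items())

entropy = -sum(p * math.log2(p) for p in freqs.values())
print(avg_code_length(shannon))      # -> 2.51
print(avg_code_length(equal_split))  # -> 2.53
print(entropy)                       # -> ~2.4542, the lower bound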
So far I haven’t been able to come up with a mathematical proof of whether Shannon-Fano is always better, but maybe I’ll have the right idea one of these days…

You can also download the source code from my experiments, but don’t expect too much as I have only just started with Python ;)
source code (.py, 7.5 kB)