Posts Tagged ‘programming’

Shannon coding experiments

Friday, October 2nd, 2009

I just came back from today’s exercise session in information-theoretic modelling (so far I would call it redundancy and data compression, but that doesn’t sound anywhere near as cool) with a question that formed during the exercise.

Our homework was to implement Shannon-Fano coding (a very simple data compression scheme that uses variable-length prefix codes). Basically the algorithm builds a binary tree so that the more frequent characters sit higher up than the less frequent ones; this tree is then used to derive the binary prefix codes. The tree is constructed from a frequency table, sorted in descending order, by repeatedly dividing it into two halves whose total frequencies are as equal as possible.
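
To make the splitting step concrete, here is a minimal sketch of the construction as I understand it now (my own illustration, not the homework submission): sort the symbols by descending frequency, pick the split point where the left part’s weight comes closest to half of the total, append a 0 to one side and a 1 to the other, and recurse.

```python
# Minimal Shannon-Fano sketch (illustration only, not the actual homework code).
def shannon_fano(freqs):
    """freqs: dict symbol -> frequency (or probability). Returns dict symbol -> bit string."""
    symbols = sorted(freqs, key=freqs.get, reverse=True)
    codes = {s: "" for s in symbols}

    def split(group):
        if len(group) < 2:
            return
        total = sum(freqs[s] for s in group)
        # Walk through the sorted group and pick the split point where the
        # left part's weight is closest to half of the total.
        acc, best_i, best_diff = 0, 1, float("inf")
        for i, s in enumerate(group[:-1], start=1):
            acc += freqs[s]
            diff = abs(total - 2 * acc)
            if diff < best_diff:
                best_i, best_diff = i, diff
        left, right = group[:best_i], group[best_i:]
        for s in left:
            codes[s] += "0"   # left half gets a 0 appended
        for s in right:
            codes[s] += "1"   # right half gets a 1 appended
        split(left)
        split(right)

    split(symbols)
    return codes

# With the example frequencies used further down, this reproduces the codes from the table:
# {'C': '00', 'B': '01', 'A': '10', 'D': '110', 'E': '1110', 'F': '1111'}
print(shannon_fano({"C": 25, "B": 22, "A": 20, "D": 15, "E": 13, "F": 5}))
```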

This is the part where I didn’t pay enough attention: I thought the equality of the two halves was the most important goal when dividing. In fact the dividing step is quite simple, because it is supposed to be a single split of the sorted table at the right point. Instead I tried a brute-force approach to find the most evenly weighted partition. Later I came up with an approximation that gets quite close to the brute-force results and is much faster for bigger frequency tables, bringing it from O(2^n) down to O(n).
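
For comparison, this is roughly what my detour looked like (reconstructed for illustration only; the real code is in the download below, and the O(n) approximation is not shown): instead of one split point in the sorted table, search every subset of the group for the partition with the most equal total weight, which costs O(2^n) per level.

```python
from itertools import combinations

# Rough reconstruction of the brute-force "most equal partition" idea
# (illustration only, not the code linked below).
def equal_split(freqs):
    codes = {s: "" for s in freqs}

    def best_partition(group):
        """Return the subset of group whose weight is closest to half the total."""
        total = sum(freqs[s] for s in group)
        best, best_diff = None, float("inf")
        for k in range(1, len(group)):           # try every subset size ...
            for left in combinations(group, k):  # ... and every subset of that size
                diff = abs(total - 2 * sum(freqs[s] for s in left))
                if diff < best_diff:
                    best, best_diff = set(left), diff
        return best

    def split(group):
        if len(group) < 2:
            return
        left = best_partition(group)
        right = [s for s in group if s not in left]
        for s in left:
            codes[s] += "0"
        for s in right:
            codes[s] += "1"
        split(sorted(left))
        split(right)

    split(sorted(freqs))
    return codes
```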

In the exercise session I found out how it was supposed to work, and I started comparing the two approaches (brute force/approximation vs. the real Shannon-Fano split).

My understanding of the problem was that a very equal division means the codeword lengths end up close to the lengths the entropy calls for, and would therefore minimize the average code length.
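
The lower bound I have in mind here is the entropy: for symbol probabilities p, no prefix code can push its average length below -Σ p·log2(p), and the “ideal” length for a symbol is -log2(p). A small helper to state that in code (my own, not part of the exercise):

```python
import math

# Entropy = lower bound on the average length of any prefix code.
def entropy(probs):
    """probs: dict symbol -> probability."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

# The length a symbol "should" get, according to its probability.
def ideal_length(p):
    return -math.log2(p)

def average_length(probs, codes):
    """Expected code length of a given code, to compare against the entropy."""
    return sum(p * len(codes[s]) for s, p in probs.items())
```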

After trying it out I was proved wrong. I used a few pieces of the English Wikipedia as test data, and all of them turned out slightly longer with my encoding (the difference between brute force and the approximation was negligible). On average, Shannon-Fano was 3% better than my approach.

After thinking about it for a while I think I found the problem. It seems that my approach favours equal code length a little too much.

The following dataset illustrates the difference:

char: C B A D E F
freq: 25% 22% 20% 15% 13% 5%

The two algorithms then give the following output:
[Figure: Shannon tree vs. equal-split tree]
As you can see the tree is more balanced for the equal split, but if you compare the code lengths it is obvious that the equal-split code is the worse one for encoding.

char | shannon | equal-split | len*prob (shannon) | len*prob (equal-split) | contribution to entropy
a    | 10      | 011         | 0.40               | 0.60                   | 0.4644
b    | 01      | 10          | 0.44               | 0.44                   | 0.4806
c    | 00      | 00          | 0.50               | 0.50                   | 0.5000
d    | 110     | 111         | 0.45               | 0.45                   | 0.4105
e    | 1110    | 110         | 0.52               | 0.39                   | 0.3826
f    | 1111    | 010         | 0.20               | 0.15                   | 0.2161
sum  |         |             | 2.51               | 2.53                   | 2.4542

The “contribution to entropy” column represents the ideal share of each letter in the total encoding, and the two len*prob columns show how well the two algorithms do for that specific character. Most interesting is the sum row, where you can see that both algorithms are still a bit above the lower bound imposed by the entropy, so the actual difference between the two algorithms is very slim. The effect of course varies with the dataset, but I didn’t find any instance where the equal split was closer to the entropy than Shannon-Fano.
So far I haven’t been able to come up with a mathematical proof of whether Shannon-Fano is always better, but maybe I’ll have the right idea one of these days…
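
If you want to reproduce the totals in the table, they drop out of a few lines (frequencies and code lengths taken straight from the table above):

```python
import math

freqs = {"a": 0.20, "b": 0.22, "c": 0.25, "d": 0.15, "e": 0.13, "f": 0.05}
shannon     = {"a": "10",  "b": "01", "c": "00", "d": "110", "e": "1110", "f": "1111"}
equal_split = {"a": "011", "b": "10", "c": "00", "d": "111", "e": "110",  "f": "010"}

for name, codes in (("shannon", shannon), ("equal-split", equal_split)):
    avg = sum(p * len(codes[s]) for s, p in freqs.items())
    print(name, round(avg, 2))                      # shannon 2.51, equal-split 2.53
print("entropy", round(-sum(p * math.log2(p) for p in freqs.values()), 4))  # 2.4542
```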

You can also download the source code from my experiments, but don’t expect too much as I have only just started with Python ;)
source code (.py, 7.5 kB)

Minkä maalainen sinä olet?

Tuesday, September 29th, 2009

*Which country are you from?

Wow, time passes quickly. It’s already one month since I arrived here in Helsinki. On the one hand it feels like yesterday, as I get to see new stuff every day, but on the other hand it’s like I’ve been around for ages ;)

Yesterday I went to Nuuksio National Park for the second time, but this time there were eight of us and one of us even had GPS on his mobile phone … Of course that doesn’t mean we were able to follow a straight path, and once again the last thing we found was the lake, but we found a lot of other interesting things on the way. Most notably we found a trampoline that led to a fair bit of fun in the forest (until we were chased away by the owner of the trampoline ;)

[paragraph only suitable for computer science students, others might wanna skip ;þ ] Although some of you might not believe it, I’m actually learning some things at uni here, too. One of them is Python. So far the language has impressed me with a few nice features such as self-managing lists, operator overloading and polymorphism, but I think it also has some serious drawbacks. One of them is how iterators are done: why can’t Python have something like hasNext() instead of that stupid exception you have to catch if you do anything other than the standard for-each loop? Also, I don’t like writing “self.” at least once per line; it feels like object orientation was bolted onto Python by a little child determined to get the rectangular block through the triangular hole of his toy… And never try to collaborate with people in Python unless everyone uses the same kind of indentation. As of now I’d say Python is nice for a quick tryout, but that’s it!
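
In case the iterator complaint sounded abstract, this is roughly what I mean (plain Python, nothing from the course): stepping through an iterator by hand means catching an exception, whereas the for-each loop hides all of that.

```python
# Walking an iterator by hand: there is no hasNext(), you call next()
# and catch StopIteration once the data runs out.
it = iter([1, 2, 3])
while True:
    try:
        item = next(it)
    except StopIteration:
        break
    print(item)

# The for-each loop does the same thing without the ceremony.
for item in [1, 2, 3]:
    print(item)
```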

Apart from computer science I’m still learning Finnish here. That language also has some curious features.

By omitting one little “a” you can turn a meeting into a murder: tapaan means “I meet”, whereas tapan means “I kill”. Dangerous language! We also learned that one reason Finns have a less favourable image abroad is that they don’t say anteeksi (“excuse me/please”) very much; they do, however, use kiitos (“thank you”) a lot. Once you know that, they appear quite a lot less rude :)

But this weekend we won’t need Finnish so much. We found a quite cheap overnight cruise to Stockholm that we will be taking from sunnuntai (Sunday) evening to tiistai (Tuesday) morning. “Quite cheap” means we will have 4 m² for 3 people, but I don’t think we will need our beds all that much. We’re not quite sure yet what we are going to do in Stockholm, but I don’t expect that to be a problem: when Erasmus students go somewhere, the plan is made when they get there!

That’s it for now; more will follow sometime after Stockholm.