There's a new paper on the arXiv by Fertonani and Duman with bounds on the capacity of the deletion channel. The main results are new upper bounds that are generally better than the upper bounds found in a paper by Diggavi, Mitzenmacher, and Pfister, which was the first look at the problem in a long while.
The bounds achieved are very nice. They use a simpler approach than our earlier paper, essentially dividing the input or output into fixed-size blocks and numerically calculating the capacity per block. With clever manipulation, they can extend the bounds in various ways -- the nicest result (in my mind) being that, in the limit as the deletion probability d goes to 1, the capacity is upper bounded by 0.49(1-d). [Our previous result gave a bound of 0.79(1-d), and some of my work on lower bounds showed that the capacity is at least 0.11(1-d).]
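To make the block-by-block idea concrete, here is a small sketch of how one might tabulate the per-block channel (my own illustration in Python; the function names are just for exposition, not from the paper):

from itertools import product

# Sketch (my own illustration, not code from the paper): enumerate the
# transition probabilities P(y|x) of the deletion channel restricted to
# input blocks of n bits, where each bit is deleted independently with
# probability d and the output is the subsequence of surviving bits.
def deletion_channel_matrix(n, d):
    inputs = [''.join(b) for b in product('01', repeat=n)]
    outputs = [''.join(b) for m in range(n + 1) for b in product('01', repeat=m)]

    def embeddings(x, y):
        # Number of distinct ways y can be obtained from x by deletions
        # (a standard subsequence-counting dynamic program).
        dp = [1] + [0] * len(y)
        for xi in x:
            for j in range(len(y), 0, -1):
                if y[j - 1] == xi:
                    dp[j] += dp[j - 1]
        return dp[len(y)]

    # P(y|x) = (# embeddings of y in x) * d^(# deleted bits) * (1-d)^(# kept bits)
    P = {(x, y): embeddings(x, y) * d ** (n - len(y)) * (1 - d) ** len(y)
         for x in inputs for y in outputs}
    return inputs, outputs, P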
It seems to me their work raises some interesting questions. One question is the following: at some point, they have to rely on numerical calculations to determine their upper bounds, using the Blahut-Arimoto algorithm to calculate the capacity of a fixed finite channel. Their approach is limited by this calculation; for instance, if one "chops" the input into blocks of 10 bits, then there are 1024 possible input sequences and 2047 possible output sequences (every binary string of length at most 10), and they essentially have to compute the capacity of this fixed channel by "brute force," which requires keeping track of the probability that each input results in each output. Bigger blocks mean better bounds but bigger calculations. (Technically one can find ways to avoid keeping the entire 1024-by-2047 probability matrix, since it's fairly sparse, but the calculations still grow very big very fast.)
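For reference, the Blahut-Arimoto iteration itself is short to write down; here is a generic numpy sketch (again mine, not theirs) that brackets the capacity of any fixed discrete memoryless channel given its full transition matrix -- which is exactly the object that becomes unwieldy as the block size grows:

import numpy as np

# Generic Blahut-Arimoto sketch (my own, not the authors' implementation):
# given W[x, y] = P(y|x) for a fixed channel, iterate to get converging
# lower and upper bounds (in bits) on its capacity.
def blahut_arimoto(W, iters=200):
    p = np.full(W.shape[0], 1.0 / W.shape[0])   # start from the uniform input distribution
    for _ in range(iters):
        q = p @ W                               # induced output distribution q(y)
        # D[x] = relative entropy D( P(.|x) || q ) in bits, skipping zero entries
        with np.errstate(divide='ignore', invalid='ignore'):
            D = np.where(W > 0, W * np.log2(W / q), 0.0).sum(axis=1)
        c = 2.0 ** D
        lower, upper = np.log2(p @ c), np.log2(c.max())  # standard bounds bracketing the capacity
        p = p * c / (p @ c)                     # multiplicative update of p(x)
    return lower, upper

Feeding it the matrix from the sketch above gives the mutual information per block; dividing by the block length gives a per-bit figure, and turning that into a rigorous upper bound on the deletion channel capacity is where the careful arguments in the paper come in. The point is that W is the 1024-by-2047 object that has to be stored or at least implicitly handled, and it roughly quadruples with each extra bit of block length.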
Is there any hope for an approximate version of the Blahut-Arimoto algorithm that would give provable upper bounds on the capacity but require less time and space? It would seem there might be other uses for an approximate expectation-maximization procedure, although perhaps one generally wants such good approximations that there's no real benefit in practice. Any thoughts?
Monday, January 12, 2009
2 comments:
That was a fascinating read. As usual I learnt something new from you Prof. Mitzenmacher. Love your blog.
BTW, an unrelated question (perhaps): how easy or difficult is it to get NSF (or any other) grants for this sort of work? I am a doctoral student and love theory but keep getting told that there is no money in it.
For the deletion work in particular, I received an NSF grant from the communication theory side of NSF (after multiple submissions).
In general, these days, I think it is hard to get funding for theory work, as the funding for the theory CS and EE divisions of the NSF is pretty low. I personally have found it helps if you can tie your work into other areas -- like networking -- but then you have to make a commitment to that area.