Posting will be light over winter break, in part because so little is going on, but more because I'm busy working on papers for upcoming deadlines. I'll describe a fun little nugget, which is being submitted to ISIT, joint work with Adam Kalai and Madhu Sudan. (The starting point for the result was my giving my survey talk on deletion channels at MSRNE... always nice when something like that works out!) The goal was to get better bounds on the capacity of the deletion channel. (If you haven't been reading the blog long enough: in a binary deletion channel, n bits are sent, and the channel deletes each bit independently with probability p. So, for example, the message sent might be 00110011 and the received message would be 010011 if the 2nd and 4th bits were deleted.) It was known that for deletion probability p the capacity is at least 1-H(p), where H is the binary entropy function. We show an essentially tight upper bound of 1-(1-o(1))H(p), where the o(1) term goes to 0 as p goes to 0. Here's the draft (a full two weeks before the submission deadline!).
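The channel model in the parenthetical is easy to sketch in code. Here's a minimal Python illustration (my own, not from the paper; the function name is hypothetical), which also reproduces the 00110011 example as one particular realization of the channel:

```python
import random

def deletion_channel(bits, p, rng=None):
    """Pass a bit string through a binary deletion channel:
    each bit is deleted independently with probability p."""
    rng = rng or random.Random(0)
    return "".join(b for b in bits if rng.random() >= p)

# The example from the text: the realization in which the
# 2nd and 4th bits (indices 1 and 3) happen to be deleted.
sent = "00110011"
received = "".join(b for i, b in enumerate(sent) if i not in (1, 3))
print(received)  # 010011
```

Note that, unlike an error channel, the receiver sees only the surviving bits in order, with no indication of which positions were deleted.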

In English: when p is small, the binary deletion channel behaves very much like the standard binary symmetric error channel, whose capacity is exactly 1-H(p). This is not the case when p is larger. (Here's a link to my survey on deletion channels and related channels for more info.)

Here's an attempt at describing the intuition. Let's first look back at the error channel. Suppose we had a code of N codewords, each with n bits, and a perfect decoder for up to pn errors. Then here's a funny way I could store data -- instead of storing n bits directly, I could store a codeword with pn errors that I introduce into it. To get back my data, I decode. Notice that when I decode, I automatically also determine the locations where the errors were introduced. This gives me N * (n choose pn) \approx N * 2^{nH(p)} possibilities, each of which I can use to represent a different data sequence. Since I'm only storing n bits, I had better have N * 2^{nH(p)} <= 2^n, or else I've found a way to store more than n bits of data in n bits. So (log N)/n, the rate, satisfies (log N)/n <= 1-H(p). This is a different way of thinking about the Shannon upper bound on capacity. Of course, it's sweeping away details -- like what happens if you don't have a perfect decoder -- but it gives the right sort of insight into the bound.

Now consider the deletion channel, and apply the same sort of reasoning. Suppose that we had a decoder for the deletion channel, and further, a method of determining which bits were deleted given the received string. Then we could use it to store data in the same way as above and obtain a similar 1-H(p) upper bound on the rate. Now we have to worry about the details -- like what to do when you don't have a perfect decoder. But more importantly, we have to show that, with non-trivial probability, you can use the decoder to guess which bits were deleted. (This is where we use the fact that p is going to 0.) The details work out. Surprisingly, this doesn't seem to have been known before. The best (indeed, only) previous upper bound I know of for this regime was the work by Fertonani and Duman, mentioned in this blog post. Their upper bound as p goes to 0 was of the form 1 - cp for some constant c, so it was different in kind.
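The counting step above rests on the standard estimate log2(n choose pn) ≈ nH(p). A quick numerical sanity check (my own illustration; the choices n = 10000 and p = 0.1 are arbitrary):

```python
import math

def H(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 10000, 0.1
k = int(p * n)  # number of corrupted (or deleted) positions

# log2(n choose k)/n -> H(p) as n grows, which is why the
# constraint N * (n choose pn) <= 2^n yields (log N)/n <= 1 - H(p).
approx = math.log2(math.comb(n, k)) / n
print(f"H({p}) = {H(p):.4f}, log2(n choose pn)/n = {approx:.4f}")
```

The two printed values agree to within a few parts in a thousand; the gap is the lower-order Stirling correction, which vanishes as n grows.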

Slowly but surely, the mysteries of the deletion channel become, well, less mysterious.


## 2 comments:

Hi Michael,

I thought that readers of your blog might also be interested in the paper that, as you know, we coincidentally just finished on the same topic.

pdf

Cheers,

A

The new draft of our paper now references that paper.
