I became more interested in (and knowledgeable about) heuristic methods when I was working on a related project at the nearby Mitsubishi Electric Research Laboratory (MERL) some years ago. The project on Human-Guided Search studied the benefits of having a human dynamically interact with heuristic optimization algorithms, and along the way we did some work just on the heuristics themselves.

Here's a simple heuristic that can, I think, be taught quite easily and productively to undergraduates. Many hard problems -- including most scheduling, packing, and coloring problems -- have natural greedy heuristics, whereby the items are ordered according to some criterion (processing time, size, degree) and "placed" one at a time according to that ordering. For example, many standard bin packing algorithms, such as first fit decreasing and best fit decreasing, fit this pattern. Given extra time, a natural way to extend such greedy heuristics is to try additional orderings. While we can't hope to consider all orderings, we can certainly try more than one. Of course, intuition tells us we should prefer orderings close to the greedy ordering.
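As a concrete instance of this pattern (not from the post itself), first fit decreasing is just first fit run on the greedy largest-first ordering; the item sizes and bin capacity below are illustrative choices:

```python
def first_fit(items, capacity=10):
    """Place items one at a time, each into the first open bin with room."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no bin fits: open a new one
    return bins

def first_fit_decreasing(items, capacity=10):
    # The greedy ordering: consider the largest items first.
    return first_fit(sorted(items, reverse=True), capacity)

print(len(first_fit_decreasing([5, 7, 5, 2, 4, 2, 5, 1])))  # → 4 bins
```

The point for what follows is that the whole algorithm is "ordering plus placement rule," so varying the ordering is the obvious knob to turn.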

There are a variety of ways one could do this. A historically popular approach is to repeatedly choose an item uniformly at random from the top k remaining items of the greedy ordering, place it, and remove it from the list. One has to choose the parameter k. A negative feature of this approach is that many orderings will never even be considered.
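A minimal sketch of this random top-k scheme (the choice of k is the user's; everything here is illustrative rather than from any particular implementation):

```python
import random

def random_top_k_ordering(greedy_order, k, rng=random):
    """Build a new ordering by repeatedly picking uniformly at random
    from the first k items remaining in the greedy ordering."""
    remaining = list(greedy_order)
    ordering = []
    while remaining:
        i = rng.randrange(min(k, len(remaining)))
        ordering.append(remaining.pop(i))
    return ordering
```

Note that with k = 1 this degenerates to the plain greedy ordering, and no item can ever appear more than k - 1 positions earlier than its greedy position, which is exactly why many orderings are unreachable.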

We suggest what we call BubbleSearch, which makes use of the Kendall-tau distance. More precisely, the Kendall-tau distance between an ordering A and the greedy ordering B is the number of transpositions of adjacent items you would have to make to get from A to B, which corresponds to the number of swaps BubbleSort would make if the sorted order were just the greedy ordering. (Hence the name.)
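To make the definition concrete, here is a sketch that computes the Kendall-tau distance by literally running BubbleSort and counting adjacent swaps (the example orderings are illustrative):

```python
def kendall_tau(a, b):
    """Number of adjacent transpositions needed to turn ordering a into
    ordering b, i.e. BubbleSort's swap count with b as the sorted order."""
    rank = {item: i for i, item in enumerate(b)}
    seq = [rank[item] for item in a]  # a's items, ranked by their position in b
    swaps = 0
    changed = True
    while changed:  # plain bubble sort, counting every adjacent swap
        changed = False
        for i in range(len(seq) - 1):
            if seq[i] > seq[i + 1]:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]
                swaps += 1
                changed = True
    return swaps

print(kendall_tau(["b", "a", "c"], ["a", "b", "c"]))  # → 1
```

(In practice one would count inversions in O(n log n), but the quadratic version makes the connection to BubbleSort explicit.)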

You could just go through all the permutations of items in order of Kendall-tau distance from the greedy ordering. Most small perturbations of the greedy ordering, however, give very similar results, leading to little or no improvement. A better way for most problems is a variation of the top-k approach. To create a new ordering A, we start with a base ordering B (the greedy ordering). We pick the first item of A as follows: choose the first item of B with probability p; if it isn't selected, choose the next item of B with probability p, and so on down the list (starting at the beginning again if necessary). Once an item is selected, it becomes the first element of A and is removed from B. We continue choosing subsequent items for A the same way from the remaining list, starting each time from the beginning. The probability of obtaining an ordering A is then proportional to (1-p)^d(A,B), where d is the Kendall-tau distance. Here p is the algorithm's parameter, determining how close to the base ordering you are likely to stay. To me, this approach is much more intuitive than the top-k approach, and in our experiments it appeared to do at least marginally better.
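A sketch of this geometric selection rule; the default p = 0.5 is an arbitrary illustrative choice, and in practice p would be tuned per problem:

```python
import random

def bubblesearch_ordering(base, p=0.5, rng=random):
    """Sample a new ordering near the base ordering: scan down the remaining
    items, taking each with probability p, wrapping around if none is taken.
    P(ordering A) is proportional to (1-p)^d(A, base) in Kendall-tau distance."""
    remaining = list(base)
    ordering = []
    while remaining:
        i = 0
        while rng.random() >= p:        # skip this item with probability 1 - p
            i = (i + 1) % len(remaining)  # wrap to the start if we run off the end
        ordering.append(remaining.pop(i))
    return ordering
```

Unlike random top-k, every ordering has nonzero probability here; distant orderings are just exponentially less likely.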

A further improvement is to change the base ordering to be the best ordering you have seen so far. Once you've beaten the greedy ordering, there's no reason to keep it as the base.
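The overall loop is then just "perturb the best ordering seen so far, keep the winner." In the sketch below, the cost function and perturbation are problem-specific stand-ins: `swap_neighbors` does a single random adjacent transposition purely for illustration, not the geometric rule described above.

```python
import random

def swap_neighbors(ordering, rng=random):
    """Toy perturbation: one random adjacent transposition."""
    ordering = list(ordering)
    i = rng.randrange(len(ordering) - 1)
    ordering[i], ordering[i + 1] = ordering[i + 1], ordering[i]
    return ordering

def bubblesearch(greedy_order, cost, perturb=swap_neighbors, iterations=1000):
    """Repeatedly perturb the best ordering found so far, adopting any
    improvement as the new base ordering."""
    best, best_cost = list(greedy_order), cost(greedy_order)
    for _ in range(iterations):
        candidate = perturb(best)       # perturb the current base ordering
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c  # the new best becomes the base
    return best, best_cost
```

For bin packing, `cost` would run the placement rule (e.g., first fit) on the candidate ordering and return the number of bins used.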

A motivation for simple heuristics like BubbleSearch is exactly their simplicity. They are easy to code and rely on essentially no problem-dependent knowledge. If coding time matters, something like BubbleSearch is the way to go.

Randomized extensions to greedy algorithms also give rise to interesting theoretical questions, related to work on "priority algorithms". I don't know of any results bounding the performance of random-top-k, BubbleSearch, or similar randomized greedy variations.

## Friday, August 10, 2007


## 3 comments:

This heuristic is pretty neat. I generally like theorems that are simple to describe (e.g., how to do bubble sort) and whose result is simple to describe (e.g., the transition probability depends only on distance) even if the intermediate steps are messy (although I would guess they aren't here).

Do you think this could be used to get some sort of useful Markov chain on constrained ordering problems?

Also - where did you get the clones? I see two MMs contributing to the blog...

Clones -- I set up the blog for my "old" gmail account, but recently set up a second (to help remove spam from my Harvard mail account). So I gave myself access from both accounts. Sadly (for me), I find there's still just one of me...

Thanks, this is neat. This is a nice way to produce a random perturbation of a given ordering. I wonder if any "smoothed complexity" type of result can be proved with this perturbation model.
