I recently had a paper with Salil Vadhan (in SODA 2008) explaining why simple hash functions "work" for so many stream-related problems where data is hashed to be placed in a hash table, a Bloom filter, or a similar structure. Here "work" means "behave essentially the way you would predict if the hash function were perfectly random." The basic reason, we suggested, is that if the data in the stream has enough randomness, it combines with the randomness from the choice of hash function to produce values that look independent and uniform.
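To make the intuition concrete, here's a minimal simulation sketch (mine, not from the paper): it hashes "semi-random" data with a standard 2-universal hash function and checks that the bucket loads look like what a perfectly random hash function would give. The particular data model, bucket count, and item count are arbitrary choices for illustration.

```python
# Sketch: hash data with some entropy using a simple 2-universal hash,
# and compare the observed max bucket load to the perfectly-random prediction.
import random
from collections import Counter

P = (1 << 61) - 1       # a Mersenne prime, standard modulus for this family
NUM_BUCKETS = 1024
NUM_ITEMS = 100_000

def make_hash():
    """Draw h(x) = ((a*x + b) mod P) mod NUM_BUCKETS from a 2-universal family."""
    a = random.randrange(1, P)
    b = random.randrange(P)
    return lambda x: ((a * x + b) % P) % NUM_BUCKETS

# Stand-in for stream data that is structured but carries some entropy:
# sequential high bits with 16 random low bits per item.
data = [(i << 20) | random.getrandbits(16) for i in range(NUM_ITEMS)]

h = make_hash()
loads = Counter(h(x) for x in data)

expected = NUM_ITEMS / NUM_BUCKETS
print(f"expected load per bucket: {expected:.1f}")
print(f"max observed load: {max(loads.values())}")
# With a truly random hash, the max load concentrates around
# expected + O(sqrt(expected * log NUM_BUCKETS)); the claim is that the
# simple hash behaves similarly once the data itself has enough entropy.
```

Running this, the maximum load comes out close to the perfectly-random prediction, which is exactly the "works in practice" phenomenon the paper tries to explain.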
Our analysis of how much randomness the data needed seemed a little loose. Salil and his student, Kai-Min Chung, have recently improved it; the paper is on the arXiv and will appear in RANDOM '08. There are some really nice and somewhat subtle arguments that I imagine will find uses in other areas. While the improvement is "only" in the constant factor (in terms of the number of bits of randomness needed), this is certainly a case where constant factors are important.
The result further reinforces my belief that this is the right high-order explanation for why, whenever anyone actually does an experiment involving hashing, they get the right results no matter what hash function they use.