I very much enjoyed the Allerton Conference again this year.
The highlight for me was the sessions on Networking Games and Algorithms. Bruce Hajek has been organizing excellent networking sessions for years, and I think this year the focus on network games really paid off and led to some very compelling sessions. I arrived to hear Ramesh Johari, Vijay Vazirani, Tim Roughgarden, and David Kempe all give great talks back-to-back. I'm expecting they'll have a similar set of sessions next year, hopefully expanded!
A small disappointment was that there seemed to be fewer talks on coding theory this year. Plenty on network coding, but very little on standard coding theory. (This may have to do with Ralf Koetter recently leaving UIUC; in any case, many people I usually see there, like Muriel Medard, Ralf Koetter, Michelle Effros, Alexander Vardy, and so on, didn't seem to be there.) A top coding theorist who was there independently confirmed this impression; hopefully, things will be back to normal next year. This year, I'm glad the network games sessions kept me so busy.
Wednesday, September 26, 2007
Allerton Conference, Part II
At Allerton, I'll be talking about some follow-on work to previous work my graduate student Adam and I did on cuckoo-like (or, better said, cuckoo-light) hashing schemes. In that work, we focused on schemes that allowed only one additional move per insert, and used a CAM (content-addressable memory) to store the rare elements that couldn't be placed. Besides simplicity, an advantage of such an approach is that we can analyze many basic one-move schemes.
In our new paper, we look at a different approach: here the CAM serves as a queue, holding elements that are being moved but have not yet reached a resting place. Using the CAM as a queue is somewhat more complicated in terms of implementation, but it allows one to essentially implement a full cuckoo hashing scheme. The queue is used to de-amortize the performance of cuckoo hashing. That is, inserting an item takes a constant number of moves on average, but in the worst case it can take Omega(log n) moves (where n is the number of items being stored). If we allow only a constant number of moves per insert, the queue can simply buffer the insertions that take too long. The result is even better space utilization (naturally, since more moves are used per insert). The paper and talk slides are available.
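To make the queue idea concrete, here is a minimal Python sketch, under assumptions of my own: two tables, a plain deque standing in for the CAM, and toy hash functions. It illustrates the de-amortization; it is not the implementation from the paper.

```python
import random
from collections import deque

class DeamortizedCuckoo:
    """Toy two-table cuckoo hash table: each insert performs a fixed
    number of moves, and elements still 'in flight' wait in a queue
    (standing in for the CAM)."""

    def __init__(self, size, moves_per_insert=4):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.queue = deque()          # (element, table to try next)
        self.budget = moves_per_insert

    def _hash(self, x, i):
        # Stand-in hash functions; a real implementation needs strong,
        # independent hashes per table.
        return hash((i, x)) % self.size

    def insert(self, x):
        self.queue.append((x, 0))     # new element tries table 0 first
        # Spend only a constant number of moves, no matter how long any
        # one element's eviction chain turns out to be.
        for _ in range(self.budget):
            if not self.queue:
                return
            y, i = self.queue.popleft()
            pos = self._hash(y, i)
            if self.tables[i][pos] is None:
                self.tables[i][pos] = y
            else:
                evicted = self.tables[i][pos]
                self.tables[i][pos] = y
                # The evicted element retries in its other table.
                self.queue.append((evicted, 1 - i))

    def lookup(self, x):
        if any(self.tables[i][self._hash(x, i)] == x for i in (0, 1)):
            return True
        return any(y == x for y, _ in self.queue)
```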
Sadly, this ended up being essentially an experimental paper. The problem is that there are too many open questions in the setting of cuckoo hashing. For example, in the practical implementations I know of, for cuckoo hashing with d > 2 choices of locations for an element, when an element has to be kicked out, one chooses the element to kick out randomly. The analyses I know of are based on doing a breadth-first search to find an element to kick out, which isn't really practical (certainly not in hardware). As a result, I believe it's still open to prove things like constant average moves and worst-case logarithmic moves for standard cuckoo hashing when d > 2. To really analyze the queueing process, we'd need even more detailed information about the distribution of the number of moves required per inserted element. Lots of good open questions left here...
Monday, September 24, 2007
Allerton Conference, Part I
This week I'll be headed to the 45th Annual Allerton Conference on Communication, Control, and Computing. For those who aren't familiar with it, the Allerton conference is held every year at an out-of-the-way spot near the University of Illinois at Urbana-Champaign. (Yes, that's purposely redundant.) The focus is on what CS people think of as EE theory, but as I so often claim, that line is really quite blurry. There's lots of information theory and control theory, but there are also sessions on network coding, networking games and algorithms, learning theory, distributed computing, scheduling and queueing, and so on. The organizers are actually very keen on getting CS people to come; offhand I notice that Tim Roughgarden, Vijay Vazirani, Andrew McGregor, and I'm sure several other "CS people" will have papers there. The conference is always extremely well organized, thanks to the fact that it's in the same place every year, and organized by the same people. (I personally have always found the UIUC staff who run the conference extremely helpful. They do an outstanding job.)
One of the interesting things about Allerton is that it's a mix of invited and submitted papers. The understanding is that if you're invited to give a paper, you're supposed to go and give the talk yourself, not send a student. So there are a lot of big-name people giving talks at the conference. Also, invited papers have a lot more leeway. Sometimes a talk will mostly be a survey, or a discussion of preliminary work on a problem. Some of the best talks I've seen there have been of these kinds! Attendance the past few years has been over 300, so it's a nice-sized conference.
I've heard some complaints about Allerton. First, UIUC is out of the way, and the Allerton site is about 20-30 minutes from the main town. If you don't know the place, it's easy to get bored post-conference. Usually the various UIUC hosts have some sort of dinner, and the town is actually a nice small college town. So if you get to know the place and the people, it's not really a problem. (Also, it means the conference is a great place to start a new collaboration, after the talks.) Second, generally there are six parallel sessions going on. Many people don't enjoy that. I usually find I'm interested in two (coding something and network something) at the same time and have to hop around. Third, I've heard CS people say they don't know which talks they should go listen to -- who the really good speakers are and what the interesting topics are. CS and EE people don't really know each other as well as they should, and there is some cost in getting to know a slightly different area and group of researchers. Fourth, the quality of papers is mixed. Some blame this on the "invited papers", but I think it's just true anywhere, especially for large conferences. Fifth, they don't have published proceedings in the way that other conferences do. From what I've heard that's changing, and these days people just post papers on arXiv or their web pages anyway.
The biggest complaint that I would agree with is the conference has grown beyond its location. Some rooms are small and easily get too noisy, making it hard to hear the speaker. The conference has just proven too successful!
I really enjoy the Allerton conference. It's a different set of people than I usually see, but work I'm very interested in. It was at Allerton that I first heard about the coding problem for deletion channels, which I've since worked on quite a lot. I've had multiple collaborations come out of the Allerton conference. I'd really encourage people who don't know about it to take a look.
In the other posts this week, I'll talk about what I'm presenting there, and about interesting things I see at the conference.
Friday, September 21, 2007
Opinion Poll: Star Simpson, Airport Scare
I don't usually cover "current events" in this blog, but I decided this one was suitable, being both local and technical. By now I'd imagine almost everyone has heard of MIT student Star Simpson's near-fatal fashion faux pas at Logan Airport. For background and two opposing views on the subject, see the Machinist blog entry and this other random blog entry.
I imagine the crowd reading this blog is more "technical" than the average, so I'm curious what you all think. Is Star out of touch with reality, are the police in a state of extreme over-paranoia, or both?
Thursday, September 20, 2007
What Do Professors Do? Part II
Following up on my last post... in one respect I know I'm spending more time on administrative issues recently. Last semester I started serving as a faculty member on the Harvard "Ad Board" -- literally, the Administrative Board. This is the committee that handles the "application and enforcement of undergraduate academic regulations and standards of social conduct". If you're in trouble for bad grades, cheating, plagiarizing, fighting, alcohol abuse, or anything else that's mentioned in the Harvard rules, your case gets brought to the Ad Board. We also handle several more positive and pleasant administrative duties, of course, but the bulk of the actual work involves the cases of students experiencing serious academic problems or accused of violating university rules.
This is a remarkably time-intensive committee. We meet weekly, and it's not at all unusual for the meetings to last 2-3 hours. Plus there's prep time involved in reading the cases before the meeting. There are only a few faculty members on the committee at a time. Most of the committee is made up of administrators and the "resident deans" -- people who live at the Harvard dorms (or "houses" in Harvard-speak) and are responsible for the care of students.
Several of the resident deans have asked me what I did to get stuck on the committee. I generally answer that I didn't realize it was a punishment! (In fact, then-Dean of Students Dick Gross asked me to serve. As he and Harry Lewis were my senior thesis advisors as an undergraduate, I found it hard to say no. I've found this sort of thing is a problem about going back as a professor where you were an undergraduate; people are more familiar with your name and stick you on committees.)
It is indeed a very challenging committee. I've learned some important things, including all about the infrastructure for helping Harvard students with problems and how I can help lead students to this infrastructure. One perhaps obvious lesson is that most often students get into academic trouble -- in particular, with plagiarism -- because they leave assignments until the night before and then take shortcuts to get them in. I've always been quite strict with undergraduates about homework deadlines -- I think overall students need better time management skills and try to do too much -- but I admit I've softened up since being on the committee. Contrary to many students' belief, I don't want my class to spur a breakdown. But remind your students over and over -- the night before is no time to start a theory problem set (or any other assignment)!
Labels: administration, Harvard, undergraduate students
Monday, September 17, 2007
What Do Professors Do?
A number of times I've had undergraduates ask me, essentially, "What else do you do?", which sometimes is an honest interest in what else my job entails and sometimes appears to be a concern that when I'm not teaching them, I'm just sitting around my office thinking up new ways to torture them via problem sets or soaking up tuition money.
At a high level, I know what else I do: research, write papers, give talks, go to conferences, manage and work with graduate students, advise undergraduates, seek funding, sit on department and university and theory-community committees, review papers and grant proposals, act as editor for a few journals, serve on program committees, write letters of recommendation, deal with standard administrative paperwork, and... well, I'm sure there are a few other things as well that I've forgotten. Let's count blogging as part of my job, too. And I consult on the side. Come to think of it, it's a wonder I have time to teach. (That probably explains the students' question. They're wondering: if I have 40 working hours a week, why isn't this class better?) Of course, I know I'm not unusual; all this seems like standard faculty stuff.
What I've been realizing lately is that I don't have a good handle on how much time I spend on these various activities. It has seemed to me that in the last few years -- since getting tenure -- a lot more of my time has gone to administrative duties than to research-style activities. If true, that fact, along with my poor time-management skills, seems like a recipe for disaster. So this semester I'm going to try a little experiment and track my time in a spreadsheet, to start getting a better handle on what's going on. And to figure out which activities to cut.
Of course, I'm curious -- what are others experiencing? How much time do you think you spend on various activities, and is there a pre-post tenure change? What's the biggest impediment to research time?
Thursday, September 13, 2007
How to Handle Bugs in Others' Papers?
Some colleagues and I are working on extending a certain known result. There's a journal paper with a proof of the result that seemed like it would allow a reasonably straightforward extension. Except that, now as we go through it carefully to figure out how to prove our extension, we have found what seem to be at least one and possibly two fatal bugs in the argument. The result is almost certainly true -- there's an earlier paper with a much more complicated proof that seems much less amenable to our extension -- but it seems unlikely that the approach taken for the simpler proof works without some very major changes.
How best to handle such a situation? The authors are good researchers, not hacks. It may be that they already know there's a problem, although we haven't seen any sort of retraction or explanatory note anywhere. At some point, I imagine we send a polite note to them, explaining what we found, although for now we'd like to try to fix the argument (and prove our extension!) ourselves. It seemed like it must be a common enough situation -- in the past I've found major and even unfixable bugs in papers -- that people might have an opinion on how to best handle the problem.
Monday, September 10, 2007
The Hiring Problem and Lake Wobegon Strategies
Continuing the "and now a word from our sponsors" riff, I'll describe a paper (to appear at SODA) that's joint with several people at Yahoo! Research, and started on a visit there.
We're looking at a situation similar to the well-known "secretary problem". For those unaware of the secretary problem, it goes like this: you need to hire a new secretary. There are n applicants, and you will interview them in a random order. The result of each interview is the relative rank of the applicant against those you have already interviewed. ("This applicant was the 3rd best so far.") After each interview, you have to decide whether to hire that applicant before the next interview; once you pass on an applicant, they're gone. Your goal is to maximize the probability that you hire the best of the n applicants. What's your optimal strategy? (I won't give away the answer for those who haven't seen the problem; it's easily found online.)
In our setting, we are considering growth strategies for a small startup company. Each applicant has a quality score, which we take to be uniform on [0,1], that we learn in the course of the interview. Unlike the secretary problem setting, we want more than one employee, although there is not a fixed number of employees we are aiming for. We simply want to grow, balancing the tradeoff between the speed of growth and the quality of new employees.
In the paper, we consider strategies where we choose to hire an applicant only if their quality score is above the average of the current employees. (We study both the mean and the median as the notion of average.) We call these strategies Lake Wobegon strategies, after the fictional town of Lake Wobegon, where "all the women are strong, all the men are good looking, and all the children are above average." Here, everyone we hire is above average (at least when we hire them). Google claims to use this strategy.
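As a quick illustration (toy code of my own, not the experimental setup from the paper), here is a simulation sketch of the two strategies; it starts from a single employee of quality 1/2, matching the puzzle posed below.

```python
import random
from statistics import mean, median

def hire(n_hires, strategy="mean"):
    """Interview uniform-[0,1] applicants, hiring anyone whose score
    beats the current average, until n_hires employees are on board.
    Starts from a single employee of quality 1/2."""
    scores = [0.5]
    while len(scores) < n_hires:
        applicant = random.random()
        bar = mean(scores) if strategy == "mean" else median(scores)
        if applicant > bar:
            scores.append(applicant)
    return scores

random.seed(0)
for strategy in ("mean", "median"):
    print(strategy, round(mean(hire(1000, strategy)), 4))
```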
I should hasten to emphasize that (despite the Google claim) we feel the hiring problem is about hiring employees just as much as the secretary problem is about hiring a secretary: very little. Rather, it's an abstraction, leading to problems about random processes and decision-making in the context of incomplete information. I've given the talk about this work a few times, and some people get hung up on setting it up as an optimization problem and determining the optimal strategy. You could, I suppose, but you'd have to be precise about the tradeoff between growth speed and quality, and the abstraction is already removed quite a bit from reality. I personally think it's simply interesting to consider how growth processes like this work under Lake Wobegon strategies; does anyone know of any related examples in nature?
If you're a puzzle person, then you might consider the following: suppose we start with a single person, of quality 1/2, and then interview until we hire n people. Which strategy will lead to higher average quality -- hiring above the median, or hiring above the mean? (Or are they asymptotically the same?)
The hiring problem leads to a great deal of curious mathematical fun: lognormal distributions, the Berry-Esseen inequality, the difference between hiring above the median and hiring above the mean (yes, they're very different!), and potentially lots of open problems and further variations to look at. Hopefully, the problem will become at least a fraction as popular as the secretary problem has.
I'm not sure why people at Yahoo were interested in this. In this case, I think they too got caught up in the mathematics of the problem. I'm glad that in the research labs, people still just follow their curiosity from time to time, even if the immediate application isn't obvious.
Labels: papers, randomness, research, research labs
Thursday, September 06, 2007
Negative Dependence
I lost sleep last night trying to prove a collection of random variables were negatively associated.
Negative dependence is one of those nice tricks that hardly ever gets used, because it usually ends up being harder to prove than it should be. While there are many flavors of negative dependence, the most natural is probably "negative association". The intuition is simple -- given a collection of random variables, if when one goes up the others should go down, then they are negatively associated. More formally, the variables are negatively associated if for any monotone non-decreasing function f of one subset of the variables, and any monotone non-decreasing function g of a disjoint subset of the variables, E[fg] <= E[f]E[g]; that is, when f goes up, g should go down. Proving this holds formally is often more difficult than one would expect.
Why should we care? Well, it turns out that if a collection of random variables are negatively associated, then even though they are dependent, you can just apply your standard Chernoff bounds to them, without a care. Chernoff bounds (and other tail probability bounds) pop up in many arguments, but they can be very difficult to deal with when the random variables are dependent. Usually, you have to switch to a martingale argument, since standard Chernoff bounds apply only to independent random variables. If you can get negative dependence, it's much cleaner.
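To be concrete about what carries over (following the Dubhashi-Ranjan survey mentioned below, though the notation here is mine): for negatively associated X_1, ..., X_n taking values in [0,1], the usual Chernoff bound holds verbatim,

\[
\Pr\Big[\sum_i X_i \ge (1+\delta)\mu\Big] \;\le\; \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu}, \qquad \mu = \mathbf{E}\Big[\sum_i X_i\Big],
\]

since the key moment-generating-function step, E[e^{t \sum_i X_i}] <= \prod_i E[e^{t X_i}], follows from negative association (each e^{tx} is monotone non-decreasing in x for t > 0).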
The best introduction to negative dependence is probably this surveyish article by Dubhashi and Ranjan. The focus is on how balls and bins problems are a natural example of negative dependence -- if a ball lands in one bin, it can't be in any other! Naturally, this explains my interest.
For example, the following problem came up for me some years ago in this paper. Suppose we throw n points uniformly at random on the boundary of a circle of circumference 1. We want bounds on the number of arcs (between consecutive points) of length larger than, say, c/n for some constant c. If arc lengths were independent, we could just apply a Chernoff bound, easy enough. But of course they're dependent -- the sum of the arc lengths is 1! Intuitively, though, if one arc gets longer, then the others must get shorter, so there should be negative dependence. We proved what we needed to get the Chernoff bound, but it wasn't pretty. (I've since seen that a stronger version of this result is given as an exercise on negative association in the very nice-looking draft monograph by Dubhashi and Panconesi; to tell the truth, I'd like to see it worked out, as it seems to me that the conditioning is a bit subtle, but then again, geometry confuses me.)
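A quick simulation sketch (again toy code of my own, not from the paper) shows the concentration a Chernoff-style bound should capture; the expected number of long arcs is n(1 - c/n)^{n-1}, roughly n e^{-c}.

```python
import random

def long_arc_counts(n, c, trials=200):
    """Drop n uniform points on a circle of circumference 1 and count
    the arcs (gaps between consecutive points) longer than c/n."""
    counts = []
    for _ in range(trials):
        pts = sorted(random.random() for _ in range(n))
        gaps = [b - a for a, b in zip(pts, pts[1:])]
        gaps.append(1 - pts[-1] + pts[0])   # wrap-around arc
        counts.append(sum(g > c / n for g in gaps))
    return counts

random.seed(0)
counts = long_arc_counts(n=1000, c=2.0)
# Mean is near n * e^{-c} (about 135 here), and the spread is narrow.
print(min(counts), sum(counts) / len(counts), max(counts))
```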
In fact, in that paper we actually wanted a similar result in the following setting. Suppose one throws n points uniformly at random into a fixed region of area 1 (say the unit square or, to avoid boundary issues, the 2-d unit torus). We want bounds on the number of Voronoi cells of area larger than, say, c/n for some constant c. If Voronoi cell sizes were independent, we could just apply a Chernoff bound, easy enough. But of course they're dependent! Intuitively, though, if one cell gets bigger, then the others must get smaller, so there should be negative dependence. Or maybe that isn't the case. Now I can't remember if I ever found an example that led me to believe it wasn't the case... Anyhow, we couldn't prove negative dependence easily, so we ended up using an uglier martingale argument that sufficed.
I was surprised at the time that I couldn't find any reference to this type of problem in the geometric literature. If negative dependence of Voronoi regions in random settings is still open (and true!), I'd be happy to ponder it with anyone who has a better sense of geometry than I do. In general, negative dependence seems like a promising area for additional results.
Monday, September 03, 2007
The Argument for a Tuition-Free Harvard Education
After being subjected to the uncompelling arguments of those who think my videotaped lectures should be freely available, I thought I'd present my own wacky idea, backed up by an actual argument: that Harvard should stop charging tuition for undergraduates.
Strangely, this idea (which hit me back in '99, when I came back to Harvard, and which at least one Dean told me was unrealistic) does not seem so wacky today. Harvard has devoted significant new resources to financial aid over the last decade, so college is essentially free for families earning less than $60,000, as outlined here and here.
While some would view this in the mystical light of commitment to the educational mission, social good, etc., I'll be a bit more cynical and actually try to back up why this has been a good idea with an economic argument. More financial aid is just good business. Harvard's cost was limiting its access to the talent pool of graduating seniors; moreover, the resulting debt was at least a potential and probably real barrier to the future success of many graduates. Since Harvard's greatest asset (after its remarkably talented, and modest, faculty, and maybe also its endowment) is its reputation, losing top students because of cost or limiting their success through debt simply cannot be Harvard's optimal strategy. Increasing financial aid just makes good sense, especially given Harvard's resources -- it can take the short-term cost for the long-term good. (The fact that all this coincides with altruism is certainly fortunate, as naturally pretty much all the faculty really do have a mystical commitment to the educational mission, social good, etc.)
This argument, however, doesn't justify making Harvard free for all accepted students, as I have proposed. In fact, one might think the other way -- that the rich should be soaked for as much as they can pay to help cover the not-rich. My cynical but rational argument for a free Harvard undergraduate education for all is that, if tuition were free, but Harvard then encouraged people to donate what they thought their education was worth -- say, perhaps, a small percentage of their annual income for life -- in the long run, they'd more than make up the tuition loss with increased funding of the endowment (thanks to the power law that is the wealth distribution). This is not a tax, but it is based on the idea that your college education is related to your future earnings, and giving back a percentage of those earnings is an arguably fair way to pay for that education.
Harvard might not even have to wait for the long-term to get the benefit. Indeed, imagine Harvard's next fund-raising campaign beginning with the announcement that after the campaign, as long as their goals were met, Harvard would be free for all undergraduates! What PR! What kind of donations would that bring in immediately!
Longer term, I would suspect the benefit would be even more. The pay-what-you-think-is-fair approach has been tried in some restaurants (see the SAME cafe, or the One World Cafe), and many museums have no fee but "suggested donations" at least part of the time, so this approach is not entirely unprecedented. In Harvard's case, I think the mix of guilt, altruism, and competition would push wealthy alumni (and wealthy parents of students) to give much more. (So you see, I am soaking the rich -- I just think you should wait to soak people until after they are rich, and inspire them to give generously, rather than bill them for a tuition payment.)
There are all sorts of possible side-benefits. It could work out that by moving from a system of tuition to voluntary donations, there would be an immediate jump because the donations, as opposed to tuition, could be made with pre-tax instead of post-tax money. (Note: I am not a tax attorney...) Students not laden with debt may prove more entrepreneurial (which besides helping the economy might lead to bigger donations down the line), or perhaps might take on more public-service oriented employment.
I can see why this might not have been tried elsewhere -- there's a lot of up-front cost while waiting for future payoff. Even for Harvard, perhaps the risk of too many free-riders is too large, although I doubt it. Harvard could also label the plan an experiment, suggesting that after some time tuition would have to be re-instituted if donations didn't keep pace with projections, to limit the risk. Perhaps the biggest negative is if Harvard tried this unilaterally, it would be seen as an unfriendly neighbor to all the other colleges. Still, I can't help but wish this idea would be given a try. Perhaps it could change the way we talk about funding education in this country, moving increased resources to our education system. I'd like to see Harvard lead the way in this regard.
Labels: Harvard, teaching, undergraduate students