Wednesday afternoon I went to a panel discussion organized by the Harvard Undergraduate Mathematics Association on the gender gap in math. I went both to hear what was going on (as there's still that gender gap in CS, and we're always eager to hear about ways we might do things better to reduce it), and for moral support for some people I know who were involved. The panel was co-organized by Meena Boppana, an undergraduate who did research with me the summer before last and is currently a star in my graduate class, and one of the panelists was Hilary Finucane, a graduate student who I advised on her senior thesis and collaborated with on multiple papers when she was an undergraduate at Harvard. I should note that Meena and several others had done a survey of Harvard math undergraduates which had highlighted some issues that would be a starting point for discussion.

I could only stay for the first half or so, but it seemed very positive. A number of faculty showed up, which was promising. My take on the panel's take was that they were interested in making improvements in the culture, and the goal was to start figuring out how that could happen, in part by sharing their experiences. The discussion was balanced and thoughtful, presenting positives along with negatives, but focused on how to improve things. There's a writeup in the Crimson with more details. The main point that came out in the first half was something I've seen and heard before: the importance of having a community, including (but not necessarily limited to) a community of women that can offer support, guidance, and mentorship, and also just so you don't continually feel like the only woman in the room.

And as long as we're on the subject, there's been a number of recent stories (or older stories where I've recently seen the links) on women in math and computer science. Focusing on Harvard to start, there's a nice writeup about Harvard's Women in Computer Science group, which has helped provide that community that encourages women to take classes in and concentrate in computer science. An article from last year discusses progress at Harvard in closing the gender gap in computer science. There was even an article in the Harvard Political Review covering gender gap issues at Harvard.

Outside of Harvard, from sources on Google+ I've seen a blog post with an interesting slide deck from one Katie Cunningham that provides a great starting point for discussion about the culture and women in computer science. And, finally, a link to something simultaneously sad and funny (things-male-tech-colleagues-have-actually-said-annotated) that reminds us why we have to keep trying to improve the culture.

## Friday, April 03, 2015

## Thursday, April 02, 2015

### On the Shannon Centennial

I found in my snail mail mailbox my paper copy of the IEEE Information Theory Society Newsletter. First, I was delighted by the news that Michelle Effros (of Caltech) is the new President of the IEEE Information Theory Society. Michelle has a long history of service (as well as, it goes without saying, outstanding research) in the information theory community, and is a great selection for the job.

In her opening column, Michelle discusses the importance of letting people outside the community know what the information theory research community is doing, especially with the Shannon Centennial coming up (April 30, 2016 will be the 100th anniversary of his birth). The IT Society will be spearheading outreach efforts as part of the Centennial. As Michelle says,


> Every school child learns the name of Albert Einstein; his most famous equation has somehow entered the realm of popular culture. Why is it that so few people know the name or have heard about the contributions of Claude Elwood Shannon?

In Computer Science, Turing is our "guiding light", and we had a very successful centenary celebration -- as well as a recent popular movie, The Imitation Game -- to make Turing's life and work, as well as the importance of computer science as a scientific discipline, better known and understood by the rest of the world. But Shannon, too, is one of the guiding lights of computer science; it is hard to imagine large parts of computer science theory and networking, for example, without the foundations Shannon laid in developing the theory of communication.

I have always thought that the Information Theory community and the computer science community -- particularly on the theory side -- should interact and communicate more, as there are huge overlaps in the problems being studied and still significant differences in techniques used (although there's more and more crossover in this regard). Perhaps the Shannon Centennial will provide some grand opportunities for the two communities to come together, to promote the Shannon legacy, and as a side benefit to learn more from and about each other.

## Saturday, March 28, 2015

### Links: HBR article on Women in STEM and AAUW Report

A recent article from the Harvard Business Review on bias issues for women in STEM.

A slide presentation by AAUW related to their report: Solving the Equation: The Variables for Women’s Success in Engineering and Computing. The full report can be downloaded for free here.


## Wednesday, March 18, 2015

### Double Hashing (Lueker and Molodowitch)

A subject I've grown interested in, related to multiple-choice hashing schemes, is when (and why) double hashing can be used in place of "random hashing" with an asymptotically negligible difference in performance.

One early, useful work on this subject is by Lueker and Molodowitch. They provide a very nice coupling argument between double hashing and random hashing in the setting of open address hashing in their paper More Analysis of Double Hashing. (The original paper appeared in STOC 1988.) In this post I'll summarize their argument. I apologize that both the text and my exposition might be a little rough.

They work in the open address hashing setting: each key runs through a permutation of the table locations when it is being placed, and it is placed in the first empty location, with each location holding a single key. When searching for a key, we run sequentially through its permutation; we either eventually find the element, or we find an empty slot, in which case we know the key was not in the table and the search was unsuccessful. We measure the expected time for an unsuccessful search when a table with m slots is loaded with pm keys for a constant fraction p. For convenience we will have m be prime, as this will simplify matters when we consider double hashing. If each key's permutation is completely uniform over all permutations, we call this random hashing, and the expected time to search for a key not in the table is 1/(1-p) + o(1); with some work you can get that it is 1/(1-p) + O(1/m), but we will not concern ourselves so much with the low order terms here.
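
To see where the 1/(1-p) figure comes from, here is a minimal simulation sketch (my own illustration, not from the paper): under random hashing, each probe of an unsuccessful search hits an occupied slot with probability about p, so the number of probes is roughly geometric with mean 1/(1-p). The table size, load factor, and trial count below are arbitrary choices.

```python
import random

def simulate_unsuccessful_search(m=10007, p=0.5, trials=2000):
    """Fill a table of m slots to load factor p with uniform random
    probing, then return the average number of probes an unsuccessful
    search makes before hitting an empty slot."""
    occupied = [False] * m
    for _ in range(int(p * m)):
        # place a key: probe uniformly random slots until one is empty
        slot = random.randrange(m)
        while occupied[slot]:
            slot = random.randrange(m)
        occupied[slot] = True
    total = 0
    for _ in range(trials):
        probes = 1
        while occupied[random.randrange(m)]:
            probes += 1
        total += probes
    return total / trials

random.seed(1)
avg = simulate_unsuccessful_search()
print(round(avg, 2), "vs 1/(1-p) =", 1 / (1 - 0.5))
```

At load factor p = 0.5, the average comes out close to the predicted 1/(1-0.5) = 2 probes.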

With double hashing, for a key x, the permutation is given by h_1(x) + j h_2(x) mod m for hash functions h_1 and h_2, where h_1(x) is uniform over the range [0,m-1], h_2(x) is uniform over the range [1,m-1], and the permutation takes the values in the order j = 0, 1, 2, ... Because m is prime and h_2(x) is nonzero, this sequence visits every table location, so it really is a permutation. With double hashing you need just two random hash values per key, which from a theoretical standpoint is "much less randomness" than a fully random permutation, and from a practical standpoint is easier to implement.
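
Here is a small sketch of the probe sequence and insertion routine. The per-key hash values are simulated by seeding Python's random module with the key -- a stand-in for real hash functions, not how one would implement this in practice -- and the table size 10007 is an arbitrary prime.

```python
import random

M = 10007  # table size; a prime, so h1 + j*h2 (mod M) cycles through all slots

def probe_sequence(key, m=M):
    """Double-hashing probe sequence h1(x) + j*h2(x) mod m, j = 0, 1, 2, ...
    The two "hash values" are derived by seeding a per-key RNG."""
    r = random.Random(key)
    h1 = r.randrange(0, m)   # uniform over [0, m-1]
    h2 = r.randrange(1, m)   # uniform over [1, m-1]
    for j in range(m):
        yield (h1 + j * h2) % m

def insert(table, key):
    """Open addressing: place the key in the first empty slot it probes."""
    for slot in probe_sequence(key, len(table)):
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table full")

table = [None] * M
for key in range(5000):                   # load the table to factor ~0.5
    insert(table, key)
print(sum(s is not None for s in table))  # 5000
```

Since m is prime and h_2 is at least 1, each key's probe sequence visits all m slots, so an insertion into a non-full table always succeeds.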

What Lueker and Molodowitch show is that for any (constant) load factor p, with double hashing, the expected time for an unsuccessful search remains 1/(1-p) + o(1). They show this through a coupling, under which "the same thing happens" -- that is, the key goes into the same slot -- under both double hashing and random hashing most of the time. Unfortunately, it doesn't happen all the time; the coupling is not strong enough to say that all the keys are placed the same with high probability. But they show that they can arrange the coupling so that things work out nicely just the same.

To start, let us take a setting where we have loaded our tables with n keys using random hashing, take two copies of our state, and consider a single step of random hashing in one copy and double hashing in the other copy, side by side. Clearly, for random hashing, the probability that the next key is placed in any given empty slot is 1/(m-n). In expectation (over the random past), by symmetry, for double hashing the probability that the key is placed in any given empty slot is also 1/(m-n), but the actual probability for each slot will depend on the configuration. What they show, using Chernoff bounds, is that the actual probability the key is placed in each empty slot is at most q/(m-n) for some q that is 1+o(1), with high probability over the past random placements of the n keys.

Now for the coupling. Starting from empty, at each step we use double hashing in both of our copies with probability 1/q = 1- o(1). Note that this ensures that the probability a key is placed in the "random hashing" copy of the process is at most 1/(m-n), so far. So with probability 1/q, we have placed the key in the same slot in both tables, and so it is as though we've done random hashing for this step.

But what about what happens with probability 1-1/q? Maybe we could ignore it, if 1/q was 1-o(1/n) for example, as a low probability event; unfortunately, that's not the case. In particular, we actually expect that the coupling will fail for some smallish (polylogarithmic) number of steps.

Instead, with probability 1-1/q we place the key so that the step, in total, follows random hashing. I'm not saying that with probability 1-1/q we place the key uniformly at random; I'm saying we place the key so that, overall (including the probability-1/q case where the key was placed by double hashing), the probability that any given empty slot obtains the key is exactly 1/(m-n). Another way of thinking about this is in the other direction: the coupling always places the key according to random hashing, and with probability 1/q (which again is very close to 1) that placement matches what would be done with double hashing.

So in our random hashing copy of the table, we just placed a key according to random hashing. How should we think of what is happening over in the double hashing copy? For that table, with probability 1/q all went fine -- a key was placed by double hashing -- and with probability 1-1/q some key just dropped into the table that wasn't placed by double hashing. It's like an extra present from above. But it's not a key placed by double hashing.

The next part of the argument is to recognize that that's OK, in the following sense. If you simply add a key anywhere in an open-addressed hash table, you only make things worse, in a very specific way: any slot in the table that would have been filled if you hadn't put in that key will still be filled at the end of the process when you add that key. That is, if S is the set of slots that would contain a key if no extra keys get placed, and S' is the set of slots that contain a key if you, at various points in the process, add some extra keys anywhere, then a simple induction gives that S is a subset of S'.
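
This monotonicity property is easy to check empirically. The sketch below (my own illustration, not code from the paper) fixes a probe permutation for each key, runs the insertion process with and without a handful of extra keys injected at arbitrary points in the order, and confirms that every slot filled in the original run is also filled in the augmented run.

```python
import random

def final_slots(m, probe_seqs):
    """Insert keys in order; probe_seqs[i] is the i-th key's fixed probe
    permutation. Return the set of slots occupied at the end."""
    table = [False] * m
    for seq in probe_seqs:
        for slot in seq:
            if not table[slot]:
                table[slot] = True
                break
    return {i for i in range(m) if table[i]}

rng = random.Random(42)
m, n = 101, 60

def random_perm():
    p = list(range(m))
    rng.shuffle(p)
    return p

base = [random_perm() for _ in range(n)]   # probe sequences for the real keys
S = final_slots(m, base)

# Same keys with the same probe sequences, but with five extra keys
# injected at arbitrary points in the insertion order.
augmented = list(base)
for pos in sorted(rng.sample(range(n), 5), reverse=True):
    augmented.insert(pos, random_perm())
S_prime = final_slots(m, augmented)

print(len(S), len(S_prime), S.issubset(S_prime))  # 60 65 True
```

The induction in the text guarantees the subset relation holds for every choice of extra keys and injection points, not just this random trial.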

So now let's consider multiple steps of this coupling. At each step, the ball is actually placed according to random hashing, so at every point in the process, the "state" is that of a random hashing process. On the double hashing side of the coupling, with probability 1/q a ball was placed by double hashing, and with probability 1-1/q an extra ball was just placed. So if we count the number of balls placed by double hashing, by the time n keys have been placed by double hashing in this process, on average nq = n(1+o(1)) keys (in expectation -- by Chernoff bounds one can get a high probability result) have been placed overall.

The result: placing n keys by double hashing is stochastically dominated (in terms of the keys that have been placed) by placing n(1+o(1)) keys by random hashing. In particular, after we place n = pm keys using double hashing, the expected time for an unsuccessful search is bounded above by the expected time for an unsuccessful search after putting in pm + o(pm) keys using random hashing, which is 1/(1-p) + o(1). You can do a similar sort of coupling to show that placing n keys by double hashing stochastically dominates placing n(1-o(1)) keys by random hashing. As a result, asymptotically, there's only an o(1) difference in the expected time for an unsuccessful search, a result which explains the negligible difference in performance one sees in implementations.
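
As a sanity check on this conclusion, here is a rough side-by-side simulation (the table size, load factor, and trial count are arbitrary choices of mine): fill one table using double hashing and another using fully random probe permutations to the same load factor, then compare the average unsuccessful-search time in each.

```python
import random

M = 1009  # table size; a prime, so double hashing probes all slots

def run(m, n, scheme, trials=4000, seed=0):
    """Fill an m-slot table with n keys using the given probe scheme,
    then return the average number of probes for an unsuccessful search."""
    rng = random.Random(seed)

    def new_seq():
        if scheme == "double":
            h1, h2 = rng.randrange(m), rng.randrange(1, m)
            return ((h1 + j * h2) % m for j in range(m))
        else:  # "random": a fresh uniform permutation of the slots
            perm = list(range(m))
            rng.shuffle(perm)
            return iter(perm)

    table = [False] * m

    def probes_to_first_empty(seq):
        for count, slot in enumerate(seq, start=1):
            if not table[slot]:
                return slot, count

    for _ in range(n):
        slot, _ = probes_to_first_empty(new_seq())
        table[slot] = True
    return sum(probes_to_first_empty(new_seq())[1] for _ in range(trials)) / trials

p = 0.5
d = run(M, int(p * M), "double")
r = run(M, int(p * M), "random")
print(round(d, 2), round(r, 2), "vs 1/(1-p) =", 1 / (1 - p))
```

Both averages come out close to 1/(1-p) = 2, consistent with the theorem's claim that the gap between the two schemes is asymptotically negligible.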


## Monday, March 16, 2015

### Power of Randomness at Georgia Tech

I'm spending (part of) the week at "The Power of Randomness in Computation Workshop", an IMA (Institute for Mathematics and its Applications) and ARC (Georgia Tech Algorithm and Randomness Center) co-sponsored workshop at Georgia Tech. Here's the schedule. I'm told slides will eventually be put up somewhere on the IMA website for such things. Great organization at Georgia Tech -- a big crowd in a very nice room, lots of food and coffee, all very well organized. They even had Ben Affleck waiting in front of the building for us this morning. He seemed to be a little busy shooting a movie to greet us properly, but maybe he'll have a bit more time to chat tomorrow.

Besides Ben, a few other highlights:

Leslie Goldberg started things off, talking about the complexity of approximating complex-valued Ising and Tutte partition functions. I remember the Ising/Tutte models (mostly from graduate school and shortly after); now there are connections between various problems in quantum computing and these functions on complex values, which (of course) I had not known.

Nike Sun gave a talk on the exact k-SAT threshold (for large k). It was very clearly presented and gave the argument at the intuitive level. I gained some insight into why the "locally random tree" type argument I've enjoyed in coding/belief propagation arguments breaks down in certain satisfiability problems, due to clustering of solutions and other challenging correlations, and how those issues can be handled. I started to understand (I think) the point of replica symmetry breaking arguments and how they were used to guide the analysis of the k-SAT problem.

Other talks from the day: Amin Coja-Oghlan also talked about replica symmetry techniques and their uses for random graph coloring problems, Eli Upfal talked about some new shuffling techniques for oblivious storage dubbed the Melbourne shuffle, Aravind Srinivasan gave a talk on the Lovasz Local Lemma (starting from the Moser-Tardos results and showing how these arguments carry forward and give greater power and insight into the use of the LLL for additional problems), and I talked about invertible Bloom lookup tables and briefly mentioned a few other unrelated things in progress.


## Tuesday, March 03, 2015

### Hate EasyChair

I just typed in a nice long review on EasyChair. Yes, I prefer doing this with the online form when I'm just sitting around and have time to do a review.

Apparently I didn't hit one of the score buttons (although I'm pretty sure I did, let's give EasyChair the benefit of the doubt there) so EasyChair says there's an error and, of course, forgets my nice long typed review when it takes me back to the review page, so I'll get to re-do and re-type it later.

Sigh. I guess I'll go back to doing my reviews in a text file and cutting and pasting. No, this has not happened to me in recent memory in HotCRP. Put this down as one more reason (but not the only one) why I don't like EasyChair and would prefer a better designed system (like HotCRP...).


## Wednesday, February 11, 2015

### New Heapable Subsequence Paper

In the "only a dozen people could care about this category"...

About 4 1/2 years ago, I posted about a paper we had put up on the arxiv about Heapable Sequences and Subsequences. The basic combinatorial structure we were looking at is a seemingly natural generalization of the idea of Longest Increasing Subsequences. Say that a sequence is heapable if you can sequentially place the items into a (binary, increasing) heap, so each new item is the child of some item already in the heap. So, for example, 1 4 2 3 5 is heapable, but 1 5 3 4 2 is not. Once you have this idea, you can ask about things like the Longest Heapable Subsequence of a sequence (algorithms for it, expected length with a random permutation, etc.). Our paper had some results and lots of open questions.
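
For concreteness, here is a sketch of a greedy heapability test (my own illustration). It maintains the multiset of open child slots, each labeled by its parent's value, and attaches each new item under the largest slot value not exceeding it; keeping the smaller, more permissive slots in reserve is justified by a standard exchange argument, and the sketch reproduces the two examples above.

```python
import bisect

def is_heapable(seq):
    """Greedily decide whether seq can be inserted item-by-item into a
    single binary min-heap, each new item becoming a child of an
    existing node. `slots` is the sorted multiset of open child slots,
    each labeled with its parent's value."""
    if not seq:
        return True
    slots = [seq[0], seq[0]]                # the root has two open child slots
    for x in seq[1:]:
        i = bisect.bisect_right(slots, x)   # slots[:i] are all <= x
        if i == 0:
            return False                    # no parent small enough for x
        slots.pop(i - 1)                    # use the largest eligible slot
        bisect.insort(slots, x)             # the new node opens two slots
        bisect.insort(slots, x)
    return True

print(is_heapable([1, 4, 2, 3, 5]))  # True
print(is_heapable([1, 5, 3, 4, 2]))  # False
```

The exchange argument: if any valid parent exists, using the largest eligible slot leaves behind a multiset of slot values that is pointwise no larger, and a smaller slot value accepts at least as many future items, so the greedy choice is never worse.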

I admit, when we did this paper I was hoping that some combinatorialist(s) would find the notion compelling, take up the questions, and find some cool connections. Longest Increasing Subsequences are somehow related to Young tableaux, interacting particle systems, and all sorts of other cool things. So what about Longest Heapable Subsequences?

I had to wait a few years, but Gabriel Istrate and Cosmin Bonchis recently put a paper up on the arxiv that makes these connections. Here's the abstract:

> We investigate partitioning of integer sequences into heapable subsequences (previously defined and established by Mitzenmacher et al). We show that an extension of patience sorting computes the decomposition into a minimal number of heapable subsequences (MHS). We connect this parameter to an interactive particle system, a multiset extension of Hammersley's process, and investigate its expected value on a random permutation. In contrast with the (well studied) case of the longest increasing subsequence, we bring experimental evidence that the correct asymptotic scaling is ((1+√5)/2)⋅ln(n). Finally we give a heap-based extension of Young tableaux, prove a hook inequality and an extension of the Robinson-Schensted correspondence.

(Note: that should really be "Byers et al...")

I love the new conjecture that the expected minimal number of heapable subsequences a random sequence decomposes into is ((1+sqrt{5})/2) ln n. (It's clearly at least ln n, the expected number of minima in the sequence.)
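
The ln n lower bound is easy to check by simulation: each left-to-right minimum must start a new heapable subsequence, since no earlier element is small enough to serve as its parent, and the expected number of such minima in a random permutation of length n is the harmonic number H_n = ln n + γ + o(1). A quick sketch of my own, with arbitrary parameters:

```python
import math
import random

def left_to_right_minima(perm):
    """Count positions where a new smallest value appears; each such
    minimum forces a new heapable subsequence in any decomposition."""
    best = float('inf')
    count = 0
    for x in perm:
        if x < best:
            best, count = x, count + 1
    return count

random.seed(0)
n, trials = 100000, 50
avg = sum(left_to_right_minima(random.sample(range(n), n))
          for _ in range(trials)) / trials
print(round(avg, 1), "vs ln n + gamma =", round(math.log(n) + 0.5772, 1))
```

The conjectured ((1+√5)/2) ln n for the minimal heapable decomposition sits a golden-ratio factor above this easy lower bound.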

There are still all sorts of open questions, which seem surprisingly difficult; and I certainly can't claim I know of any important practical applications. But Longest Heapable Subsequences just appeal to me as a simple, straightforward mathematical object that I wish I understood more.

For simple-sounding but apparently difficult open questions, as far as I know, the answer to even the basic question of "What is the formula for how many sequences of length n are heapable?" is still not known. Similarly, I think the question of finding an efficient algorithm for determining the Longest Heapable Subsequence (or showing it is hard for some class) is open as well.

## Monday, December 15, 2014

### Stress: Competition and Ranking

One theme I keep seeing in comments here and elsewhere is that academia is very competitive, with everyone worried about their rank. In my last post, I admitted that it would be hard to completely deny that there is competition, particularly when people are younger, which tends to come out when jobs are at stake. But, for the most part, I think the role of competition is greatly exaggerated, strangely so. Academia -- at least, certainly, my branch of it -- thrives on collaboration, and I believe people who go into it thinking that it is a big competition are going to lose out, both professionally and in their enjoyment of the profession. (Indeed, one of the reasons I write these posts is because I don't want undergraduate and graduate students getting what I think is a one-sided, incorrect view of how academics work.)

First, I would again like to compare academics with other professions. I've seen comments (including on this blog) that other professions are less competitive than academia, and people are less worried about rank. I think people who are making that suggestion need to check in with people working in those professions, because I think they're ridiculously wrong. Lawyers coming out of school to get jobs, doctors trying to get fellowships or residencies, consultants at consulting firms -- all very competitive. As you move up, this continues. For lawyers, there's who can bill the most hours, and the challenge to make partner; for doctors, who can get the best positions at hospitals; and for businesspeople, every promotion is a step up. Even for software engineering types, there's competition. When I was a graduate student, I recall visiting friends who had gone to a large well-known company, and for a large part of the evening all they talked about was what "level" they and others were in the company and the annual reviews and who was doing what project that might get them ahead. So let's not be delusional and start by acknowledging that there's competition everywhere, and that's unsurprising when jobs and money are at stake. While I'm not suggesting I have deep knowledge of all of these careers, I think academics have much less competition than most.

If academics appear like they're concerned about ranking, perhaps it's because they appear to be easy to rank. First, as I pointed out last post, there's not that many of us. Second, there are obvious metrics everyone can understand: number of papers published, number of papers appearing in "top" conferences, and h-index stand out. I'm not suggesting these are good metrics -- but they're easy and to a first order give (potentially) some useful information. They're a quick way of bucketing or sorting people, particularly those without an established track record, who are therefore not necessarily widely known or visible in the field; as a result, these metrics make more of an impression on, and have more of an impact on, younger academics.

But very quickly after your PhD, this sort of ranking loses its importance, and the very idea of ranking starts to lose its value -- as many have noted in a variety of venues. In academia, there's lots of ways to be successful, many points on the Pareto frontier. There are many great results waiting to be found and many different subfields to work in. At the end of the day, a history of good work and specific achievements is what people look for; there's not really a finite pool of such things for which to compete. Indeed, I'm not sure how I would go about competing with the top people in the field, except to try to do interesting work, which is what I'm trying to do anyway. (A secondary way to compete is just to make sure people know about your work. But giving talks is less important to being successful than doing the work that goes into the talks in the first place; again, it can have a bigger impact for people in the early stages of their career.)

Against this idea of competition, just look at how people in academia work together. In computer science theory, in particular, most papers have several authors working together. In a number of cases these are students with their advisors, but a closer look reveals that in many cases, they are not. Credit can be shared easily in academia, and collaborations can lead to results that individuals could not get alone. Working in groups is a way for people to get more done. Instead of competition, collaboration often yields the path to having a greater impact on the field. Rather than being a "competitive game", research is more a "cooperative game". (As an aside, this is why theory's approach of alphabetical order for authors rather than some sort of implicit "credit scheme" based on author order makes such great sense.) In most areas of computer science that I've participated in, a similar spirit prevails.

I encourage graduate students to team up and pick out projects to work on together (and have seen this at other places, also -- one of my best experiences as a graduate student was such a project). It gives them something to do and a sense of ownership outside of working with their advisor. And, importantly, it reinforces that these other students are their colleagues, and that working together is a great idea that can be a gain for everyone. Hopefully, they also learn that working together is more fun and generally more productive than working alone. When it comes to hiring time, it's nice to see students who have worked on such team projects, because I typically prefer colleagues with a track record of working well with others.

Sometimes small competitions break out, sure -- multiple groups are working on the same or similar problems. Often, though, this is a very healthy competition, pushing progress forward in an area over a series of papers. I remember in the past an argument with another group when we were working on similar problems and an issue of "credit" in the writeup of the various papers came up. A week later, we were starting collaborations together on new ideas. That's not exactly the sign of a super-competitive landscape.

It could be that I've just got a mistaken impression of the field. Harvard is a wonderfully collaborative place, and personally I've found overall I like working with others more than on my own. But when I think of cutthroat competition, I don't think of the job I'm in.

To conclude the post, I think what may be going on is people confuse "competition" with "drive". Most academics are smart, successful people, who are internally driven by a desire to do great work. To many, that must appear like "competition", but if so, it's internal competition -- you're not out to beat others, but to be your best. And I think it's very possible academia does have more "Type A" personalities that have this internal drive, which is, surely, not always healthy. It's not clear to me that this is academia's fault -- such people would be similarly driven in any career -- but, if it is true, then it suggests we might consider if this is best for our field, and how we might open up the field to a wider set of personalities or how we might make work in this field healthier for this type of personality.

## Wednesday, December 10, 2014

### Stress, Continued: Jobs

Continuing from last post, I aim to focus on some of the issues related to stress in academia. Today's post will be related to stress and the nature of academic employment.

The most stressful issues I can think of in academia relate to finding or keeping your job. Graduating and finding the first job, especially when the job market is tight, cannot help but lead to stress for most people. (Indeed, these days, the process seems to be getting worse -- as postdocs become normalized in Computer Science, many people have to find their "first job" two or more times.) Similarly, when you come up for tenure, it's obviously very stressful. Even if you think you deserve tenure, there is uncertainty because the process is outside your control, and the 1-bit decision can have a tremendous impact. While changing jobs later can also be stressful, and I'll discuss that, these two times seem to be the worst.

As a starting point, however, compared to other professions, the issues related to getting a first job and tenure are not especially unique to academia. Doctors have residency and specialization after medical school, lawyers wait years to become partner. Business people have it different, perhaps; tech startups notwithstanding, there are people who still climb up the corporate ladder. The tenure situation is, arguably, a more severe end goal than in other professions, but with what seems to be a commensurate reward; you don't really have to worry about a job after, if you are willing to stay at your institution. The framework and corresponding stress seem comparable to many other career paths, although there are career paths that avoid such poignant milestones. In particular, in computer science, many students from top institutions can quickly and readily find work at major companies or startups, and their stress seems to come from having a wealth rather than dearth of choices.

Would there be changes to the system that would help? I don't think so, and this requires some explanation. The issue of job stress, to me, seems fundamentally an issue of supply and demand. In the top 20 computer science departments there are approximately 1000 professor jobs. Heck, maybe I'm off by a factor of 2, depending on how you count (tenured vs all, maybe top 30 instead of 20). Positions turn over very slowly. In short, academia is a niche market, with a small supply of jobs and, generally, heavy demand for them. This creates a lot of friction in the system.

In years past, we've seen a slowdown in the CS academic job market. Even small slowdowns create tough situations with the small job supply. The solution was to introduce more postdocs. It was a working solution for the time, but with risks and downsides -- a de facto postdoc requirement added into the employment picture? -- that leaves us with as many questions as answers.

Similarly, tenure seems like such a huge deal in large part because one cannot readily move to another position. Yes, there is the issue of being rejected, and the corresponding loss of prestige that goes with it, but even ignoring that, the challenge of where to go (in academia) if tenure is not granted looms large. The problem extends past tenure. Even very good people can find it hard to move, as the small number of jobs available makes for an inflexible, challenging job market. If I were at Google and wanted to work at Facebook or some other company, such a move should not, generally, be difficult; people make such switches pretty regularly. If I walked over to MIT and told them I would like to move there, there are huge barriers, perhaps the most obvious of which is that MIT has a wealth of wonderful people it could choose to hire instead, and is careful in choosing who it hires for one of those limited slots. Indeed, at most other schools, the obvious issue is whether there would even be a senior position available, but MIT computer science always seems to have a position if it wants one.* The upshot is that if later in life a professor wants to switch jobs for any number of personal reasons (dissatisfaction with the current location, divorce or family issues, etc.), it's not always easy or possible to do and stay in academia. The problem is again related to scarcity in jobs, and solving that problem seems out of reach, involving changes to institutional or societal structures. (That is, we have to convince our universities we need to hire more CS faculty, and/or we need to convince society to spend more on research/professors/education.)

The job stress issue is the most prominent stress issue I see in academia, and I think it underlies a lot of other stresses. When people say academics are over-concerned with and spend too much time jockeying for being ranked highly -- a point I have issues with and I'll return to in a later post -- to the extent that's true, I think some of it is inescapable in such a job market. When you graduate, you'll be compared to other graduates and that will have an effect on what jobs you can get. When you are up for tenure, you'll be compared to peers, implicitly or explicitly. If you want to switch jobs post-tenure, how you compare to people at the institution you wish to move to and to others in your field is important. All of these comparison points become more important in a tight, friction-filled job market. As much as I'd like that not to be the case or deny it, I think it's better to face the reality of it.

What possibility is there for a solution? The easiest I can think of is to expand the job market, which I think comes from industry jobs. As part of this, we have to help make sure industry sees the value and importance of research, and of having its own researchers. Some people have said that academics look down on "non-academic" employment for students and especially PhDs. I don't think that's generally true in CS, but to the extent it is, that's setting unrealistic expectations for outcomes for many -- or most -- of our students. The virtues of jobs in industry are well documented, as are the virtues of academic jobs; maintaining a culture where both are seen as positive outcomes seems healthiest for stress levels of individuals and for the health of the CS academic community generally.

Other solutions welcome and more to come.

* Just kidding around with you, MIT. But seriously, is there any upper bound on you all?
