I'm curious if various readers out there would be willing to offer their ranking of networking conferences. The issue has come up in some conversations recently, and I was wondering what other possibly more informed sources think.
Besides your ranking, of course, I'm interested in the reasons behind the rankings. Is it just acceptance rate? Do certain networking conferences specialize in subareas where they are remarkably strong? How does/did such a ranking get built and maintained; does it ever get lost?
Wednesday, October 28, 2009
34 comments:
I think these sorts of rankings are very subjective, but I'd say SIGCOMM is a clear step above INFOCOM, which is number two, and after that there are a number of conferences that have general networking papers that are reasonably strong (including NSDI, CoNEXT, and IMC).
It is odd that the CS-oriented conference (SIGCOMM) is less theoretical and the EE-oriented conference (INFOCOM) is more theoretical. I find that many INFOCOM papers have an underwhelming evaluation section, but often attempt to make up for that with some serious math.
As for specialties, INFOCOM has some very specific lower-layer work, as does ANCS, and MOBICOM and MobiSys (and to an extent, SenSys) have good work on mobile-specific networking.
I have found that many INFOCOM papers have bad theory, especially in algorithms-related topics. Many times the results are easy consequences of existing results in the mainstream theory literature, but this is "hidden" in various ways. And there is just plain junk, for a supposedly prestigious networking conference.
I've heard that INFOCOM is regarded highly among networking researchers.
However, the quality of INFOCOM papers is extremely variable. In particular I am aware of some crypto-specific INFOCOM papers that are just plain wrong, and several other (general networking) papers that seem uninteresting or not very novel to me.
The quality of SIGCOMM papers seems significantly higher on average, and also much less variable.
The Australian Research Council is organizing such rankings via a professional CS body. This covers journals in all fields and includes CS conferences too. Try www.core.edu.au. The classification may be too crude, but it is interesting to get a perspective from outside the USA. A revision is under way for 2010.
In decreasing order:
1. SIGCOMM. It is the FOCS/STOC of networking, with the pros and cons. See Dina's report: accepts 27 from 300. Reminds me of "The Rise and Fall of Sharpie Cakes" by Murakami. The papers that make it past the Sharpie review board and win the Sharpie contest get mangled by Sharpie researchers. :)
2. Mobicom.
Similar, for mobile networking, but perhaps less intense.
3. Infocom. Soup of a conf, stuff goes in and gets stirred around, no specific taste stands out.
4. IMC, NSDI, SIGMETRICS, ...
-- metoo
1. SIGCOMM and NSDI
2. IMC, CoNext, INFOCOM
3. ANCS, ICNP, ...
I don't have enough experience to rank all the mobi* and sensor papers, but I'd probably still go with my #1 as covering topics that include, but are not limited to, sensor/mobi papers.
I'm actually surprised that people rank NSDI lower than I expected. Does that have anything to do with its distributed-systems flavor as well?
I had forgotten about ICNP and USENIX technical, which both have a number of interesting papers. As does SIGMETRICS once in a while.
But to respond about NSDI. I find that NSDI papers often study contrived problems and often are lacking in the sort of implementation that you might find in a USENIX or SOSP paper. So as a middle ground between SIGCOMM and USENIX, NSDI actually tends to be significantly worse than either.
As Anon 7 says, I think NSDI has kind of a loose definition of what a networked system can be. That definition can cover anything from PlanetLab to a piece of JavaScript embedded in web pages.
Like in this paper, which detects tampering with embedded ads.
Anyone can tell it's the wrong problem, since most ads are delivered by third-party servers.
http://www.cs.washington.edu/research/security/web-tripwire/nsdi-2008.pdf
Many times the results are easy consequences of existing results in the mainstream theory literature, but this is "hidden" in various ways.
I hope you were complaining about the fact that it is hidden, not about the fact that someone was able to successfully apply theory to solve a practical problem.
Why am I not surprised that Muthu is also a Murakami fan?
How would you rank ASPLOS vs. other systems conferences?
So far I think the comments reflect conventional wisdom - SIGCOMM is the top, INFOCOM is "variable", with both good and bad papers. I think a lot of people avoid INFOCOM, especially for student papers, because of this; even if you have a good paper there, it can be hard to get it noticed, since there's so much there of varying quality. (I personally have a pretty high opinion of INFOCOM; just because it's an INFOCOM paper doesn't mean it's good, but there is a lot of good stuff there, and it is accepting of more theoretical stuff.)
I was hoping for more discussion about all the other conferences. NSDI? coNEXT? SIGMETRICS? Allerton? ANCS? Do all these conferences sort of merge into a "2nd tier", or are some significantly better than others? I've had papers in INFOCOM, coNEXT (this year), Allerton, Internet Measurement Conference, IPTPS, so for me these aren't just idle questions; I'm quite curious what others think.
I don't necessarily think SIGCOMM is the top conference, unless you consider acceptance rates a reflection of conference importance. Because SIGCOMM is extremely competitive, it often rejects really bold papers. The community can be a bit insular.
For example, they did not accept any of the early work on sensor networks.
The problem with this sort of exercise is that networking is a very broad field, and the "best" conferences can depend fairly significantly on the sub-field one focuses on. SIGCOMM is perhaps the best-known "large umbrella" conference, but it is not universally the best place for all networking work. Conferences build up communities around them, and this invariably impacts scope (both on the submission side and on the selection side). Thus, SIGCOMM has not traditionally been a great place for optical networking papers. However, these kinds of things change (e.g., Mobicom and IMC were created, in part, due to a notion that SIGCOMM was not friendly to those regimes, and yet wireless/mobile networking and measurement have become staples at SIGCOMM of late).
Thus, which conferences one thinks are "best" is going to depend a bit on one's interests. Michael Freedman identifies NSDI and SIGCOMM at the top because he is a systems builder, and indeed those are probably the top two conferences -- for systems builders. For those focused on mobile networking, conferences such as Mobicom, MobiSys, etc. will come up, while they may not be on the radar of those who are Internet focused. The more theoretically inclined may find a more receptive audience at places like INFOCOM and Mobicom, but perhaps not at MobiSys or NSDI, and so on.
I think it's a mistake to try to rank networking conferences independent of this understanding.
That said, I think you could identify that certain conferences are perhaps more vital than others (i.e., that they represent a marquee venue for some kind of work instead of a place to send work that can't make it into the marquee venue for that work). I suspect this includes SIGCOMM, INFOCOM, Mobicom, IMC, SenSys, and perhaps a few others. This doesn't mean that these conferences are not "second-tier" with respect to some disciplines -- they are (there are definitely fields for which INFOCOM receives the SIGCOMM rejects, for example) -- but that doesn't mean they are so for all fields.
A final note: a number of these conferences have very different cultures, and that too impacts the set of people who submit and the communities they build. For example, I try desperately to avoid submitting to multi-track conferences if I can help it. I know many people who share this bias. Does that make multi-track conferences inherently second tier? I'm not willing to defend that proposition, but I'll observe that many people vote thus with their submissions and their attendance.
I am surprised that no one has mentioned MobiHoc as a good networking conference. Can the experts please comment on that too? It's hard to get a paper into MobiHoc, and the overall quality looks pretty good.
To follow up on one of Michael's original questions: "How does/did such a ranking get built and maintained?" I think it's pretty clear that this is fundamentally driven by social pressure and momentum. Existing top conferences attract the best work because everyone wants both the recognition of appearing there and the opportunity to impact their peer group (who implicitly support that top status by sending their best papers and attending). SIGCOMM is a top conference because it has been a top conference. You tend to have to screw up badly to stop this kind of momentum (e.g., what would MIT have to do to NOT be considered a top CS program?).
Now for new conferences I think the same process applies as well. If the community of people who publish in an existing top venue chooses to publish in a new venue, then it acquires status via authorship. Hence, a very new conference like NSDI (only 6-7 years old) can become a highly regarded venue because it successfully attracted well-regarded members of the community to submit and attend.
However, its broader status is a function of how large a community it attracts -- systems people regard NSDI highly, but others less so. Indeed, more niche conferences are inherently less well regarded in the broader community -- this perhaps explains the previous poster's question about why MobiHoc didn't get mentioned (if you're not a researcher in ad hoc networking then you wouldn't think about submitting to MobiHoc, and you may not even know anyone who publishes there).
Ultimately, I think the set of paper submitters and conference attendees (and the social network between those people) is what determines a sense of "rank".
As Stefan noted, it depends on your sub-area. A tier-based approach is more apt, and I think there is a more definite delineation there.
Comparing the best papers of the two, a SIGCOMM paper is one I would likely enjoy reading but not necessarily find useful long-term, while an INFOCOM paper is usually a must-read, with enjoyment being an unlikely bonus.
Top tier:
- SIGCOMM: On average, I think the papers are written at a nicely accessible level which is appealing. The single track venue is also incredibly nice. On the whole, the paper quality is consistent but there are always (as I imagine any conference would have) a few papers that make me wonder what the TPC was thinking. Typically the questionable papers come from the usual suspect crowd*.
In general, I encourage my students to go, as I think the conference itself is nicely run and very good for networking of the human kind. I also view SIGCOMM as more of a creative conference where interesting (but perhaps non-useful) ideas are posed. That isn't a bad thing, as those results can lead to nice useful results later, but SIGCOMM always seems to have about a third to half of the papers asking you to believe that unicorns and dragons exist.
- INFOCOM: As noted above, INFOCOM is hit or miss quality-wise, and it is easy to lose a good paper amongst the sheer magnitude of papers there. Review quality has gotten better, but it is still highly variable given the paper quantity.
However, if I need a theoretical evaluation of something in a networking sense, INFOCOM is the first conference to turn to. One can find some very good results (albeit amid the chaff) that, while perhaps not as solid presentation-wise, I think often have more practical impact than SIGCOMM papers. The caveat is that you still have to map the theory back to practice, whereas SIGCOMM tends to invert the process (system before theory).
Quite a few INFOCOM papers tend to fall into the "consider a spherical cow" dilemma as well, though. Systems papers are usually poorly received there, with experimentally driven papers being countable on one hand (I usually catch those sessions, which makes the multi-track program nearly single track for me :)
INFOCOM is also a horrible venue, outside of the student workshops, to send my students to. Bad policy on travel grants and generally poorly organized events for networking purposes. For the usual registration price, you would think they could do better, but that is by and large IEEE's fault, not necessarily the organizers'.
I don't have enough background to comment on MOBICOM but I would generally place it in the same upper tier, MOBIHOC to an extent as well.
Second tier:
NSDI, IMC, PAM (to some extent), ...
GLOBECOM (similar hit / miss problem to INFOCOM)
CoNext
-- Continued from other comment due to character limit --
Third tier:
ICC, ICCCN, LCN, etc., etc.
* There is also a strong effect of the quantity accepted versus the size of the community for all of the venues. The consistent quality of SIGCOMM versus the frustrating inconsistency of INFOCOM pretty much embodies that. On one hand, you get consistent papers but largely homogeneous thinking; on the other hand, you get more novelty but a frustrating amount of chaff.
Something that has always bugged me about SIGCOMM has been the inflated sense of the quality of their review process. With M papers accepted and N good-quality researchers out there where M < N, it is pretty hard to end up with a lot of poor-quality papers, as long as most of the N submit because they think SIGCOMM is a high-quality venue (as it is rightly perceived to be). SIGCOMM's average review quality, while not as bad as INFOCOM's, is nothing great. It is just that INFOCOM acknowledges more forthrightly that the process is bad, while SIGCOMM tends to candy-coat it.
Per Stefan's post, it is pretty hard to really move up or down. Barring some sort of major shift on the promotion / tenure track, the top conferences will remain the top ones, as that is where people will aspire to send their work. The interesting aspects come from the "second-tier" must-attend venues that emerge.
In general, I've found the security** conferences to be quite good on average with regards to review quality. Why networking seems to engender poorer reviews on average, I'm not sure I'll ever know.
** Generally, I find both conferences (SIGCOMM, INFOCOM) to be very poor on security. Quite a few papers slip through that simply should not have been published at those venues. If you are going to do security, honestly, head to a security conference, not a networking conference. SIGCOMM for a while was nicely dodging security papers, to its credit.
Feamster's work a few years back on BGP / spam bots at SIGCOMM was a nice systems security / experimental paper that was quite appropriate. Others, which shall remain nameless, really should not be appearing there and would get better feedback at a venue like USENIX Security / ACM CCS, etc.
Another thread on self-validation... pitiful
"It is odd that the CS-oriented conference (SIGCOMM) is less theoretical and the EE-oriented conference (INFOCOM) is more theoretical."
If you knew anything about `EE' or `theory', you wouldn't have made such a comment!
The conferences I hear about most are NSDI, SIGCOMM, and INFOCOM, roughly in that order (and actually not much about INFOCOM). I also hear a lot about SenSys as the premiere sensor network conference, but it's clearly more narrow than the others. Then again, I'm sitting next to a distributed systems group, and in grad school I spent lots of time with systems people.
I did see an interesting paper recently presented from our networking group that will appear at IMC. The focus is on trying to understand sources of sampling bias in Internet measurement, and then using bootstrap-like "stochastic imputation" to correct these biases. One of the questions it raised for me is: how often are we going in the wrong direction because these sorts of measurement biases lead us to misunderstand our research results? That's the kind of thing where a more specialized conference could lead to work with wide impact on the field as a whole...
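(Tangent for readers who haven't seen the trick: below is a minimal sketch of plain bootstrap bias correction. This is only my toy illustration of the generic resampling idea the paper alludes to, not its actual "stochastic imputation" procedure, and the example data is made up.)

```python
import numpy as np

def bootstrap_bias_correct(sample, statistic, n_boot=2000, seed=0):
    """Classic bootstrap bias correction for a plug-in statistic.
    Toy sketch only -- not the IMC paper's method."""
    rng = np.random.default_rng(seed)
    theta_hat = statistic(sample)
    boot = np.array([
        statistic(rng.choice(sample, size=len(sample), replace=True))
        for _ in range(n_boot)
    ])
    bias_estimate = boot.mean() - theta_hat  # how far resampled estimates drift
    return theta_hat - bias_estimate         # bias-corrected estimate

# Example: the plug-in variance (ddof=0) is biased low; the correction nudges it up.
data = np.random.default_rng(42).exponential(scale=2.0, size=50)
print(bootstrap_bias_correct(data, np.var))
```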
Perhaps the most insightful comment in this post comes from Stefan.
It seems that the best strategy is to poll the top people in their field and submit papers where these experts send their best work. There is always a chance you omit some other good venues that those specific experts do not consider, but at least it makes sense.
Ranking methods (polling researchers, as CORE does in Australia, or impact-factor-based approaches) will always be full of inaccuracies and biases. The Australian ranking exercise is ridiculous in many respects; e.g., they have USENIX Security and NSDI as rank-B conferences... They have also made sure that several low-tier conferences they submit their own work to ended up getting an A+ or A ranking.
Citation-based ranking makes sense only if you are fully aware of the weaknesses of each ranking and you aggregate among multiple rankings.
For example by looking at
http://libra.msra.cn/ and http://citeseerx.ist.psu.edu/stats/venues
one can pick the conferences that are top in both lists and safely conclude they have great quality (e.g., SIGCOMM and OSDI). But it is unclear how one can rank Infocom, which does fairly well in Libra but not so great in citeseerx.
Again, note that each of the above two citation-based rankings has its own unique weaknesses. For example, citeseerx only considers the last two years, its data source is only DBLP, and it considers a venue only if it has more than 23-25 publications (thus excluding SOSP!). Libra is not as up to date and pretty much omits the last two years.
There is currently a CS conference ranking exercise in wikipedia.
It takes the crowdsourcing approach to rank venues.
http://en.wikipedia.org/wiki/List_of_computer_science_conferences
I agree with Stefan, who basically pointed out that what is considered top is highly dependent on one's specific sub-area. Sigcomm is, no doubt about it, definitely a tier 1 conference and belongs at the top, but it focuses on presenting work that has a good story to tell. Sigcomm is more systems oriented; it is not a very good place for theory papers. I am surprised that nobody mentioned Sigmetrics, because the performance evaluation community is much more theoretical than Sigcomm. Moreover, Sigmetrics is supplemented with high-quality workshops, e.g. MAMA. So for theoretical work, I would place Sigmetrics well above Sigcomm (Sigcomm would be good if one could transform the theory into a good story without presenting too much theory ;-)
On the other hand, I would not consider IMC a tier 1 conference, since its impact factor should be lower than Sigcomm's. However, for the measurement community, it is the top conference. PAM focuses more on 'work in progress' papers, which is also manifested in the limited space of PAM papers compared to IMC. (Mentioning "good paper" and "PAM" in one sentence is typically considered strange by the measurement community...)
I agree on the high variability of INFOCOM. The venue itself is horrible due to its multi-tracked nature; having 25 people listening to one's talk is a clear success :-) Although, as has already been pointed out, it's a good place for theory in general, I would still rank Sigmetrics papers first and INFOCOM second.
ITC is also a nice place for theory on the traffic layer (traffic modelling).
And then there is Globecom (big, with high variability like INFOCOM) and ICC.
I'd also agree that for the wireless community, MobiCom and MobiSys are the top conferences.
The Wikipedia article cited by another user above seems to have a reasonable ranking for most conferences in their subfields. In general, it is more reasonable than most of the rankings one can find on the web.
http://en.wikipedia.org/wiki/List_of_computer_science_conferences
This article uses references to almost all such rankings on the web to justify the notability of a conference. Then it seems that it is up to the editors to sort through the noise and to approximately classify a conference.
From the article:
"Only established, well-known conferences are included; third-party references are provided for all conferences without Wikipedia articles. Within each field, the conferences with highest rankings are listed first.
The ranking is approximate and non-authoritative. Editors are encouraged to clean this list from conference spamming that occasionally takes place. This article relies on the community of CS researchers to rank the conferences according to their perceived quality. The third party references are to be used as a general guideline, to ensure that low-cited or low-tier conferences are not ranked higher than the most prestigious ones."
Perhaps this partly answers Michael's question: "How to build/maintain such a ranking?" Perhaps democratizing the process, while maintaining the requirements for the notability of conferences, is the way to go in solving this problem.
Is it really a problem though? Do people really want the conferences and their work to be ranked? Given the plethora of such rankings on the web, one could say that at least some people care.
Nice discussion.
I'm not sure democratization of the process would help in any tangible way. In theory, the US News & World Report rankings are democratic too. Why would conferences be any different, given the relatively high stakes of being considered a Tier 1 versus a Tier 2 conference?
Sheer inertia / momentum pretty much prevents a general-oriented Tier 1 conference from losing its status. Grad students / junior faculty feel they need to publish there and hence the circle of life or something to that effect carries on.
Inevitably what happens is that a large enough group gets grumpy about the process at the top tier and splits off a new conference (e.g., NSDI); if successful, that sub-community regards it highly within a 5+ year timeframe.
I agree with Stefan that the ranking depends completely on the topic area, and like him I prefer single-track venues to multi-track ones. That aside, here's my personal ranking:
1. SIGCOMM and NSDI
2. SIGMETRICS and IMC
3. CoNext
4. INFOCOM
5. ICNP and PAM
Why? A few thoughts:
- I think SIGCOMM and NSDI have real buzz, as evidenced by (among other things) the ratio of attendees to papers. I think both conferences have a strong sense of community, and people really care about each other's work -- both in terms of liking the work, and in terms of critique. I love the energy.
- I love the ethic of SIGMETRICS -- real value placed on substantive and scholarly evaluation, whether analytically or empirically. I was tempted to put it at par with SIGCOMM and NSDI, but didn't because the topics at SIGMETRICS are quite diffuse, so many authors don't consider the conference their (main) home. The ratio of attendees to papers is much lower than SIGCOMM and NSDI as a result.
- IMC is rated lower than SIGCOMM and NSDI primarily because the scope is narrower, though participants have a similar sense of investment. So, while I rate both SIGMETRICS and IMC slightly below SIGCOMM and NSDI, I do it for rather different reasons. I also wish the measurement community would stretch further beyond "loving the data you have" to telling the world what data the components in the network should be collecting.
- CoNext is up and coming. Single-track, broader in scope than SIGCOMM without resorting to shorter papers in a multi-track chaotic venue (like INFOCOM). For me, it's the out-of-phase SIGCOMM equivalent, despite having less buzz.
- INFOCOM, despite the dodecaplex feel and highly variable paper and reviewing quality, does still publish some very strong papers. For me, INFOCOM is a good home for work that is narrower in scope but still strong.
- ICNP and PAM, while small and single track, don't quite have the critical mass of CoNext and IMC, respectively, yet they avoid the multi-track chaos of INFOCOM. They don't quite strike me (yet) as most attendees' home conference, which is why I rank them lower.
I didn't mention any of the Mobi-* conferences because I don't know them well enough to have much of an opinion.
Jennifer --
Thanks for the detailed comment. I must admit, my bias is to rate INFOCOM a little more highly -- but admittedly because I'd judge it more on the best papers that are there than on the median paper. And I've never thought much of SIGMETRICS, precisely because it's so diffuse and small (and, in my mind, a bit insular). Things I might otherwise send to SIGMETRICS I send elsewhere -- often I just present it at Allerton, which is equally diffuse but runs parallel sessions and is larger to make up for it. (Plus, lunch at the Brown Bag.) But perhaps I'm out of date with my opinion.
I am not sure if this thread was supposed to be intentionally comic, but reading comments like "odd that EE-oriented conference is more theoretical" and "Sigmetrics is insular" makes me smile :-).
Insular is better reserved for the community that believes Theory == STOC/FOCS.
I think that a possible way of ranking conferences is via the impact that their papers have, especially in terms of how many citations they receive. I know it is not that easy to count citations correctly, but if that were possible, I believe it would be a clear indication of how "influential" the conference is in the field it concerns. An attempt at this for CS conferences can be seen at http://citeseer.ist.psu.edu/impact.html, based on citeseer, which is not the best source for counting citations IMHO. As mentioned already here, SIGCOMM and MobiCom are high in the list. And Infocom is not bad either.
@Aaron. If wikipedia can have an article like this, without substantial disputes
http://en.wikipedia.org/wiki/Israeli%E2%80%93Palestinian_conflict
then perhaps the community can have a ranking article in wikipedia.
I realize that my comparison may not be exactly accurate, but my point is that wikipedia usually finds ways to resolve disputes.
On the other hand, your comparison with a closed ranking like US News is definitely off the mark.
Why not build a PageRank-style tool for papers and measure their impact? I.e., not just count citations (as in impact factor, h-index, and similar metrics used by Google Scholar), but weight those citations by how many references the citing papers have themselves...
wouldn't this answer MANY of the questions in this discussion?
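(Concretely, here is a minimal sketch of the kind of citation PageRank I have in mind: a tiny hand-made citation graph and plain power iteration. The papers and edges are hypothetical; this is not an existing tool.)

```python
import numpy as np

# Papers are nodes; a citation from paper i to paper j is an edge i -> j.
citations = {
    "A": ["B", "C"],   # paper A cites B and C
    "B": ["C"],
    "C": [],           # cites nothing (dangling node)
    "D": ["B", "C"],
    "E": ["D"],
}

papers = sorted(citations)
idx = {p: k for k, p in enumerate(papers)}
n = len(papers)

# Column-stochastic matrix: each paper spreads its weight evenly over the
# papers it cites; a dangling paper spreads its weight uniformly.
M = np.zeros((n, n))
for p, refs in citations.items():
    if refs:
        for q in refs:
            M[idx[q], idx[p]] = 1.0 / len(refs)
    else:
        M[:, idx[p]] = 1.0 / n

# Standard PageRank power iteration with damping factor 0.85.
d = 0.85
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

for p in sorted(papers, key=lambda p: -rank[idx[p]]):
    print(p, round(float(rank[idx[p]]), 3))
```

Plain citation counting would score papers only by in-degree; here a paper also inherits weight from being cited by papers that are themselves well cited.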
"why not building a page rank tool for papers and measure their impact? "
Already done:
http://academic.research.microsoft.com/About/Help.htm
http://confsearch.org/confsearch/
But they are still not sufficiently accurate, due to incomplete coverage. For example, everybody who is active in the networking community knows that NSDI >> INFOCOM, but Libra's and confsearch's rankings do not reflect this fact yet.