I have had papers rejected with comments to the effect that the results seem too easy, at the level of a homework assignment. Generally, I think these reviewers miss the point: the results may seem easy because the point isn't the derivation but the conception and framing of the problem. In fact, I think papers that can be (and are) turned into homework assignments form an interesting subclass of good papers.
A new-ish paper of mine, Queues with Small Advice, was recently accepted to the very new SIAM Conference on Applied and Computational Discrete Algorithms (ACDA21), which took place July 19-21. This conference focuses on algorithms with a close tie to applications. Some people unfamiliar with theory conferences might think that algorithms work would naturally be tied to applications, but I've generally found that algorithmic work tied to applications is more negatively reviewed in theory conferences. Indeed, that type of work is much more likely to receive comments of the form that the results seem too easy, and are at the level of a homework assignment. So perhaps this new conference will fill an important role and hole in the current slate of theory conferences.
In any case, I actually do think this paper is in some ways easy (in that the analysis is readily approachable with standard tools), and parts of it would, I believe, make a great homework assignment. The goal was to show the potential power of using even very simple advice, such as from machine-learning algorithms, in queueing systems. This seems to me to be a very understudied topic, and fits into the recently growing theme of Algorithms with Predictions. (The paper was rejected previously from a conference, where the most negative review said "Very accessible and well written paper, which certainly provides motivation to consider problems of this type." but also said "The mathematical analysis in this paper is fairly standard, and in that sense not novel... the paper is interesting, but not advancing sufficiently the state of the art.")
The paper focuses on the case of 1 bit of advice -- essentially, is the job "short" or "long"? I think this type of advice is a good approach to look at for queueing -- it corresponds naturally to putting a job at the front of the queue, or the back. And it may be easier for machine-learning algorithms to generate accurately. Simple is often good in practice.
Rather than describe the paper further, I'll go ahead and turn it directly into a collection of homework problems. Feel free to use them or variations you come up with; hopefully, the students won't find the paper for answers. I personally would be thrilled if one outcome of this paper was that prediction-based problems of this form made their way into problem sets. (Although, serious question: who still teaches queueing theory any more?)
Section 1: One-Bit Advice (Single Queue)
a) Consider the standard M/M/1 queue, with Poisson arrivals at rate λ and exponentially distributed service times of mean 1; the expected time a job spends in the system in equilibrium is 1/(1-λ). Now suppose each job comes with one bit of advice: if the job has service time greater than T, the bit is 1, and if it is smaller than T, the bit is 0. A "big" job goes to the end of the queue, and a "small" job goes to the front. (Assume the queue is non-preemptive.) Find the expected time for a job in this queue in equilibrium, as a function of T and λ.
b) What is the optimal value for T (as a function of λ)?
c) Repeat parts a and b, but this time with a preemptive queue. Does preemption help or hurt performance?
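For the non-preemptive case in parts a and b, one way to sanity-check an answer numerically is to treat the system as a two-class non-preemptive priority queue (small jobs are the high-priority class) and evaluate it with the classical Cobham formula for non-preemptive priorities. This is a sketch of one approach, not the paper's derivation; the function names and the grid scan for the optimal threshold are my own choices.

```python
from math import exp

def mean_time_advice(lam, T):
    """Expected time in system for the non-preemptive M/M/1 queue
    (arrival rate lam < 1, Exp(1) service) with a perfect one-bit
    threshold-T advice, modeled as a two-class non-preemptive
    priority queue via the Cobham formula."""
    lam_hi = lam * (1 - exp(-T))             # arrival rate of "small" (priority) jobs
    rho_hi = lam * (1 - (1 + T) * exp(-T))   # load from small jobs: lam * E[S; S <= T]
    R = lam                                  # lam * E[S^2] / 2, with E[S^2] = 2 for Exp(1)
    W_hi = R / (1 - rho_hi)                  # mean wait, small jobs
    W_lo = R / ((1 - rho_hi) * (1 - lam))    # mean wait, large jobs
    frac_hi = lam_hi / lam                   # fraction of jobs that are small
    return 1.0 + frac_hi * W_hi + (1 - frac_hi) * W_lo   # add mean service E[S] = 1

def best_T(lam):
    """Grid scan for the threshold minimizing mean time in system (part b)."""
    grid = [i / 100 for i in range(1, 801)]
    return min(grid, key=lambda T: mean_time_advice(lam, T))
```

As a consistency check, both T = 0 and T → ∞ make the advice degenerate, and the formula then collapses back to the plain M/M/1 value 1/(1-λ).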
Harder variation: The above questions, but with an M/G/1 queue (that is, for a general, given service distribution); derive a formula for the expected time in the system, where the formula may involve terms based on the service distribution.
Easier variation: Write a simulation, experimentally determine the best threshold, and the improvements from one bit of advice. Different service time distributions can be tried.
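For the easier variation, a minimal discrete-event simulation might look like the following sketch. It assumes FIFO order within each advice class (within-class order doesn't change the mean in a non-preemptive, non-anticipating queue), and all names are my own.

```python
import random
from collections import deque

def sim_mm1_advice(lam, T, n_jobs=100_000, seed=1):
    """Simulate a non-preemptive M/M/1 queue (Exp(1) service) where a job
    with service time <= T jumps ahead of all waiting "big" jobs.
    Returns the average time in system over n_jobs jobs."""
    rng = random.Random(seed)
    small, large = deque(), deque()   # waiting jobs: (arrival_time, service_time)
    in_service = None                 # job currently being served, or None
    dep_time = float('inf')           # its departure time
    arrivals_left = n_jobs
    next_arr = rng.expovariate(lam)
    done, total = 0, 0.0
    while done < n_jobs:
        if dep_time <= next_arr:                 # next event: a departure
            now = dep_time
            total += now - in_service[0]
            done += 1
            nxt = small.popleft() if small else large.popleft() if large else None
            if nxt is None:
                in_service, dep_time = None, float('inf')
            else:
                in_service, dep_time = nxt, now + nxt[1]
        else:                                    # next event: an arrival
            now = next_arr
            arrivals_left -= 1
            next_arr = now + rng.expovariate(lam) if arrivals_left else float('inf')
            job = (now, rng.expovariate(1.0))
            if in_service is None:
                in_service, dep_time = job, now + job[1]
            elif job[1] <= T:                    # advice bit 0: front of the queue
                small.append(job)
            else:                                # advice bit 1: back of the queue
                large.append(job)
    return total / done
```

With T = 0 every job is labeled "big" and the queue is plain FIFO, so the average should hover near 1/(1-λ); a threshold near the optimum should come in noticeably below that.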
Section 2: One-Bit Advice with Predictions (Single Queue)
Where would one possibly get a bit of advice in real life? Perhaps from a machine learning predictor. But in that case, the advice might turn out to be wrong. What if our bit of advice is just right most of the time?
a) Consider the (non-preemptive) M/M/1 queue variation from Section 1 part a above, but now the advice is correct with some probability p. Find the expected time for a job in this queue in equilibrium, as a function of p, T, and λ.
b) Repeat part a with a preemptive queue.
Harder variations: The above questions, but have the probability the advice is correct depend on the size of the job. A particularly fun example is when the "predicted service time" for a job with true time x is exponentially distributed with mean x, and the prediction bit is 1 if the predicted time is larger than T, and 0 otherwise. Also, one can again consider general service times.
Easier variation: Again, write a simulation and derive experimental results/insights.
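The non-preemptive case with noisy advice is still a two-class non-preemptive priority queue -- the "labelled small" class is just a mixture of true smalls (with probability p) and true larges (with probability 1-p) -- so the same Cobham formula applies with the mixed class rates and loads. A sketch, under those assumptions and with my own notation:

```python
from math import exp

def mean_time_noisy(lam, T, p):
    """Expected time in system for a non-preemptive M/M/1 queue (Exp(1)
    service, arrival rate lam < 1) where the threshold-T advice bit is
    correct with probability p and labelled-small jobs get non-preemptive
    priority.  Evaluated via the Cobham two-class formula."""
    F = 1 - exp(-T)                      # P(S <= T)
    m_small = 1 - (1 + T) * exp(-T)      # E[S; S <= T] for Exp(1)
    m_large = (1 + T) * exp(-T)          # E[S; S > T]
    lam_hi = lam * (p * F + (1 - p) * (1 - F))        # rate of labelled-small jobs
    rho_hi = lam * (p * m_small + (1 - p) * m_large)  # load from labelled-small jobs
    R = lam                              # lam * E[S^2] / 2, with E[S^2] = 2
    W_hi = R / (1 - rho_hi)
    W_lo = R / ((1 - rho_hi) * (1 - lam))
    frac_hi = lam_hi / lam
    return 1.0 + frac_hi * W_hi + (1 - frac_hi) * W_lo
```

Two sanity checks: p = 1 recovers the perfect-advice answer, and p = 1/2 makes the label independent of the job size, so the expression collapses exactly back to the FIFO value 1/(1-λ).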
Section 3: One-Bit Advice with Prediction (Power of 2 Choices) [harder, grad student level; needs to know fluid limit models; I'd stick with sections 1 and 2!]
a) Derive fluid limit equations for a collection of N queues, where there are two types of jobs: "large" jobs arrive as a Poisson stream of rate λ₁N and have exponentially distributed service times with mean μ₁ and "small" jobs arrive as a Poisson stream of rate λ₂N and have exponentially distributed service times of mean μ₂. Each job comes with a bit of advice determining whether it is large or small, but large jobs are mislabelled with probability p₁ and small jobs are mislabelled with probability p₂. An incoming job selects a queue using "the power of two choices" -- it is up to you to describe how a job determines what is the better of the two choices (there are multiple possibilities) and how jobs are processed within a queue (non-preemptive is suggested).
[Hint: the queue state can be represented by the number of jobs that are labelled short that are waiting, the number of jobs that are labelled long that are waiting, and the type of the job currently being serviced.]
b) Compare fluid limit results to simulations for 1000 queues to see if your equations seem accurate.
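For part b, a simulation along the following lines could serve as the comparison point. It commits to one particular set of design choices the problem leaves open -- join the shorter of the two sampled queues, and give labelled-small jobs non-preemptive priority within a queue -- and all names are my own; a smaller N is used here just to keep the run quick.

```python
import heapq
import random
from collections import deque

def sim_po2_advice(lam1, mu1, lam2, mu2, p1, p2, N=200, n_jobs=200_000, seed=0):
    """N-queue power-of-two-choices simulation with a noisy one-bit label.
    "Large" jobs: per-queue rate lam1, Exp service with mean mu1, label
    flipped with probability p1; "small" jobs: rate lam2, mean mu2, label
    flipped with probability p2.  Rule used here (one of several options):
    join the shorter of two uniformly sampled queues; within a queue,
    labelled-small jobs get non-preemptive priority.
    Returns the average time in system."""
    rng = random.Random(seed)
    small = [deque() for _ in range(N)]   # waiting jobs labelled small: (arrival, service)
    large = [deque() for _ in range(N)]   # waiting jobs labelled large
    arrived = [None] * N                  # arrival time of the job in service, per queue
    length = [0] * N                      # total jobs present at each queue
    deps = []                             # min-heap of (departure_time, queue)
    rate = (lam1 + lam2) * N
    next_arr = rng.expovariate(rate)
    arrivals_left = n_jobs
    done, total = 0, 0.0
    while done < n_jobs:
        if deps and deps[0][0] <= next_arr:       # departure event
            now, q = heapq.heappop(deps)
            total += now - arrived[q]
            done += 1
            length[q] -= 1
            nxt = small[q].popleft() if small[q] else \
                  large[q].popleft() if large[q] else None
            if nxt is None:
                arrived[q] = None
            else:
                arrived[q] = nxt[0]
                heapq.heappush(deps, (now + nxt[1], q))
        else:                                     # arrival event
            now = next_arr
            arrivals_left -= 1
            next_arr = now + rng.expovariate(rate) if arrivals_left else float('inf')
            is_large = rng.random() < lam1 / (lam1 + lam2)
            s = rng.expovariate(1 / (mu1 if is_large else mu2))
            flip = rng.random() < (p1 if is_large else p2)
            labelled_small = (not is_large) != flip   # XOR flips the true label
            i, j = rng.randrange(N), rng.randrange(N)
            q = i if length[i] <= length[j] else j    # join the shorter queue
            length[q] += 1
            if arrived[q] is None:                    # idle server: start immediately
                arrived[q] = now
                heapq.heappush(deps, (now + s, q))
            else:
                (small[q] if labelled_small else large[q]).append((now, s))
    return total / done
```

Tracking the time-averaged fractions of queues in each (waiting-small, waiting-large, in-service-type) state, per the hint, is the natural extension for comparing against the fluid limit equations themselves.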
2 comments:
You are correctly pointing out a problem in the theory community and the math community and perhaps in academia in general:
1) Simple-in-retrospect work is undervalued
2) Problems people care about in the real world are undervalued
3) And then we wonder why there is the tech-transfer problem...
I was once asked what the difference is between Operations Research and Algorithms Research. Without intending to be insulting, I replied:
Operations Research works on problems people care about and is concerned with giving techniques that can really be used.
Anonymous #1: I agree that "simple in retrospect/real-world" problems are undervalued in the theory community -- I don't think it has to be that way, but that's just the direction the theory community has gone, and it seems hard to break out of. On the more positive side, I am finding that other venues are more appreciative of this type of theoretical work. There are plenty of systems venues, and these days AI(ish) venues, that I find will accept theoretical work, give points for being real-world practical, and correspondingly don't seem to punish simplicity the same way. There the problem is sometimes they don't want you to go too overboard on the theory, which makes some sense. (Proofs in the appendices, for those who want them!)
SODA and ITCS are "name" theory conferences that still seem somewhat more amenable to practical stuff, for what it's worth.