IACR Publication Reform :  Cryptology ePrint Archive Forum
Discussion related to IACR's current and future publications: conference proceedings, Journal of Cryptology, and evolution of IACR's publications.
The speed of science: two case studies
Posted by: djb (IP Logged)
Date: 15 June 2013 20:31

Nigel Smart was quite clear at Eurocrypt in advertising the Proceedings of the IACR as fixing our "High review load". Well, gee, sounds great, but how come the IACR Board seems unable to explain to the rest of us _how_ this reduction in review load is supposed to happen?

Nigel doesn't answer the question but says he's putting together "a more detailed proposal". Christian Cachin says that there "could" be a one-year "ban on resubmission" but he fails to define "resubmission". Ivan Damgård (not on the current IACR Board) says "Claiming you added something substantial in two weeks is probably bogus anyway."

Let's think about this "two weeks" figure for a moment.

Case study 1: DBLP for "Ivan Damgård" finds 7 conference papers in 2012 (Crypto, CT-RSA, ICITS, PKC, SCN, SCN, TCC), not to mention 7 eprint papers the same year. That's a throughput of one conference paper every 7.4 weeks. How can Ivan claim that 2 weeks isn't enough time for a "substantial" improvement to a paper, if he spends a _total_ of only 7.4 weeks per successful conference paper?
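
For concreteness, here is the arithmetic behind that 7.4-week figure as a minimal sketch (the paper count is from the DBLP query above; treating a year as 52 weeks is my simplification):

    # Rough throughput arithmetic; a 52-week year is an assumption.
    conference_papers_2012 = 7   # Crypto, CT-RSA, ICITS, PKC, SCN, SCN, TCC
    weeks_per_year = 52.0

    weeks_per_conference_paper = weeks_per_year / conference_papers_2012
    print(round(weeks_per_conference_paper, 1))   # 7.4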

Furthermore, surely Ivan would agree that some papers are easier to write than others, and also that he's not spending all of his time on paper-writing---if he really focuses on a paper then he can probably get it done much more quickly. Is it really so hard to believe that an author has done "something substantial in two weeks"?

Of course, it's actually Ivan plus coauthors, and increased use of the Internet is in general making it easier and easier to have many coauthors, which makes it even easier to believe that a research team is doing something very quickly. How can anyone imagine that a knee-jerk time-based response could substitute for a proper scientific evaluation?

Case study 2: Let's look at what happened to one of those eprint papers, 2012/699, in which Ivan proposed a specific "practical" LPN-based cryptosystem. A few days later I pointed out publicly that this specific proposal failed to account for the attack in 2012/355, a paper at RFIDsec 2012. Of course, RFIDsec isn't a top-tier IACR conference, but surely Ivan will agree that 2012/355---forcing changes in the parameters and "practicality" of his paper 2012/699---was worthy of publication.

Here's how 2012/355 evolved. An LPN-related system "Lapin" was presented at FSE 2012 the morning of 21 March 2012. Tanja Lange and I were in the audience, were both immediately skeptical of the security of the system, and started investigating attacks. We had our attack paper ready for the RFIDsec submission deadline on 31 March 2012, and had it in essentially final form by 5 April 2012---two weeks and one day after the FSE talk. We prioritized other tasks at that point, and didn't end up doing the last few days of work to post the paper until June 2012, but with some slight rescheduling we would have had the complete paper online two weeks after we started.

I'm sure that Ivan, and many hundreds of other people here, can think of similarly efficient paper-writing examples from their own experience. So why do we have Ivan saying "two weeks is probably bogus anyway" for a mere revision? And how can Christian possibly think that a one-year ban is even marginally reasonable?

---Dan

Re: The speed of science: two case studies
Posted by: Orr (IP Logged)
Date: 17 June 2013 15:07

While I do get your point, I think your analysis is missing the fact that we all have pipelines of results and submissions (even before considering co-authors).

For example, if you publish on average 5 papers/year (a rate of roughly 10 weeks/paper) and in one year slightly more of your papers get rejected, then the next year you are likely to have slightly more papers appear.
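
To make the pipeline effect concrete, here is a minimal sketch; all numbers are invented for illustration, not anyone's real output:

    # Results rejected in year 1 are resubmitted and accepted in year 2,
    # inflating that year's count. Numbers are purely illustrative.
    new_results_per_year = 5
    rejected_in_year_1 = 2

    published_year_1 = new_results_per_year - rejected_in_year_1   # 3
    published_year_2 = new_results_per_year + rejected_in_year_1   # 7
    print(published_year_1, published_year_2)

So a single year's publication count says little about how many weeks of work went into any one paper.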

Additionally, you need to consider the fact that some people collaborate more, or have many students (who do most of the writing work). I guess you are not implying that Bart spends 3.25 weeks/conference paper?

Finally, I suspect that there is also a difference in the amount of work people put into Crypto submissions and tier 4 conference submissions (not going to name any, so no one will get offended).

Re: The speed of science: two case studies
Posted by: cbw (IP Logged)
Date: 17 June 2013 22:07

Hi,

I guess it's quite simple math: If the same paper does not get resubmitted to Crypto / Eurocrypt / Asiacrypt / TCC, we don't have to review it again and again 4 (!) times.
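
As a back-of-the-envelope sketch of that math (the three-reviews-per-submission figure is just an assumption, not an IACR statistic):

    # Rough reviewing arithmetic; reviews_per_submission is an assumed value.
    venues = ["Crypto", "Eurocrypt", "Asiacrypt", "TCC"]
    reviews_per_submission = 3

    reviews_with_serial_resubmission = len(venues) * reviews_per_submission  # 12
    reviews_with_single_submission = reviews_per_submission                  # 3
    print(reviews_with_serial_resubmission, reviews_with_single_submission)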

Whether the saved time will actually be spent on better reviews is clearly a different ball game...

Best,
Christopher

Re: The speed of science: two case studies
Posted by: hoerder (IP Logged)
Date: 18 June 2013 08:52

Hi,

Christopher, if a paper does not get resubmitted to an IACR venue, that doesn't mean it won't be resubmitted to other venues where IACR members are on the program committee and have to spend time reviewing the resubmission. Depending on how it is crafted, the resubmission policy might just end up shifting the workload around. Also, Dan raises a valid question: what exactly is a resubmission? How much does a rejected paper have to change to count as a new submission?

From the CHES community I've heard rumors that they're considering a journal of their own, but instead of papers people would submit extended abstracts and reviewers would act more or less as shepherds. I'm not sure whether this makes more sense or not; I just wanted to point out that there are more possibilities, and that both halves of the IACR are having very similar discussions in parallel (as far as I can see).

A friend of mine who does solid-state physics was complaining about stupid reviewers just a week ago, and the way he described their model, it sounded quite like the proposed Proceedings of the IACR. I reckon that there will never be a perfect system and that quite a lot depends on the little details of each system and the degree of flexibility it offers.

What I'd truly like to see is a more scientific debate about this. Right now we have suggestions, examples and hypotheses but no hard data, not even a detailed comparison of two or three submission models currently used by other disciplines (of similar size) that are reasonably happy with their systems. Please don't get me wrong: I see the need to "grow up", and the suggestions, examples and hypotheses I've seen so far all make valid points, but all I see emerging from them is that it's not simple. Maybe it would be useful to get outside support from people who do metascience. (I'm sure someone is doing just that. What else do we have social scientists for?)

Cheers,
Simon Hoerder


