This forum has been archived on Sep. 16, 2018 in read-only mode.
2007 Reports :  Cryptology ePrint Archive Forum
Discussion forum for Cryptology ePrint Archive reports posted in 2007. Please put the report number in the subject.  
2007/401: Deus ex Machina?
Posted by: ShaiHalevi (IP Logged)
Date: 22 October 2007 20:05

In his latest article, Koblitz decries those naive cryptographers who "hope that a suitably designed device --- some cleverly written computer code --- will make unpleasant natural phenomena go away."

Call me naive, but I think that the main use of computers is to eliminate unpleasantness: whenever you are faced with a task that requires keeping track of many simple details, it is only natural to consider "some cleverly written computer code" to do this work for you. I am guessing that Koblitz wrote this article using a word processor, a "cleverly written computer code" that helps eliminate the unpleasantness of manual typesetting.

Let me begin with a small comment: Koblitz quotes out of context a partial sentence from my ePrint note from a few years ago, regarding the reason that some published proofs have errors in them. The entire quote reads as follows:

"Some of the reasons for this problem are social (e.g., we mostly
publish in conferences rather than journals), but the true cause of
it is that our proofs are truly complex. After all, the objects that
we deal with are non-trivial algorithms, and what we try to prove
are some properties of an intricate interaction between a few of
these algorithms."

Koblitz omitted the part about "the true cause". Indeed, the part that was omitted is more important than what was quoted: cryptographic proofs are complex mostly because they involve keeping track of an enormous number of details (most of them rather boring). I may be naive, but I would really want "some cleverly written computer code" to help me keep track of them.

On a more technical level, Koblitz completely missed the point of the work that is done in automated/computer-aided cryptographic proofs. Specifically, he writes that

"the hope of authors of recent papers on automated theorem proving
is to take the uncertainties, flaws, and human foibles out of the
process of deriving reductionist security arguments."

I seriously doubt that anyone working on automated proofs holds this view. We cannot even "take the uncertainties, flaws, and human foibles out of the process" of writing an essay (even with our clever word processors), so we certainly cannot hope to do so with mathematical proofs.

Instead, the hope is that computer programs can help with the task of checking the "obvious details" of a proof, making sure that there are no errors there. Errors in published proofs are often contained in exactly these "obvious details" (see the OAEP case, for example), and computer software may help avoid such errors. As I wrote in that ePrint note:

"cryptographic proofs have a creative part (e.g., describing the
simulator or the reduction) and a mundane part (e.g., checking that
the reduction actually goes through). It often happens that the
mundane parts are much harder to write and verify, and it is with
these parts that we can hope to have automated help."

Koblitz's article consists of a few examples from the literature, where he demonstrates that the proof steps that can be checked by computers are all "obvious". However, contrary to his claim, this does not show that such checking has no value. Quite the contrary: it demonstrates that these "obvious" steps are simple enough to be checked by a computer, thus lending support to the hope that we may one day have a computer program that can actually be helpful in writing and verifying our proofs.

To be sure, none of the code written so far can really be used by cryptographers when writing their proofs. But the recent work of Blanchet and Pointcheval clearly demonstrates that a "truly useful" program is technically doable. Most of the arguments, transformations, and tools that we use in our proofs are re-used over and over again in many different analyses, and Blanchet and Pointcheval demonstrated that these can indeed be formally expressed and used by a computer program. The challenge in creating a "truly useful" program lies essentially in assembling enough of them and devising a suitable user interface.
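To make the flavor of this concrete, here is a toy sketch in Python (entirely my own illustration, not Blanchet and Pointcheval's actual system) of one such mechanizable step: checking that two games are syntactically identical up to renaming of their sampled variables. This is precisely the sort of "mundane" bookkeeping that is tedious by hand but trivial for a program.

```python
def alpha_equal(t1, t2, env=None):
    """Check that two symbolic games are identical up to a consistent
    renaming of the variables bound by 'sample' steps."""
    env = env or {}
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        if t1[0] != t2[0] or len(t1) != len(t2):
            return False
        if t1[0] == 'sample':                # ('sample', var, body) binds var
            return alpha_equal(t1[2], t2[2], {**env, t1[1]: t2[1]})
        return all(alpha_equal(a, b, env) for a, b in zip(t1[1:], t2[1:]))
    if isinstance(t1, str):                  # a variable or constant symbol
        return env.get(t1, t1) == t2
    return t1 == t2

# Two presentations of the same game: sample x, y; output g^(x*y).
g1 = ('sample', 'x', ('sample', 'y', ('out', ('exp', 'g', ('mul', 'x', 'y')))))
g2 = ('sample', 'a', ('sample', 'b', ('out', ('exp', 'g', ('mul', 'a', 'b')))))
# A subtly different game: the multiplication arguments are swapped.
g3 = ('sample', 'a', ('sample', 'b', ('out', ('exp', 'g', ('mul', 'b', 'a')))))

assert alpha_equal(g1, g2)       # same game up to renaming
assert not alpha_equal(g1, g3)   # the checker refuses: not syntactically equal
```

A real tool would of course also carry semantic rules (for instance, that multiplication commutes, so g1 and g3 are in fact equivalent games); the point is only that such rules can be expressed once and then applied mechanically, over and over.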


Re: 2007/401: Deus ex Machina?
Posted by: jkatz (IP Logged)
Date: 24 October 2007 23:57

I just wanted to note that the criticism given at the end of Section 4 (page 16) is incorrect: if q (the order of the group) has small factors, then indeed El Gamal is insecure. But in that case the DDH assumption does not hold (as you mention), so this does not in any way contradict the theorem that was proved.
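Katz's point can be made concrete with a few lines of Python (a toy sketch with parameters of my own choosing, not taken from Nowak's paper): run ElGamal over the full group Z_p^*, whose order p - 1 has the small factor 2. An eavesdropper then reads off the quadratic residuosity of the plaintext from public values alone, so the scheme is not semantically secure; and, consistently with the theorem, DDH is false in this group, since the same residuosity computation distinguishes g^(xy) from random.

```python
import random

# Toy parameters, chosen by me for illustration only.  p is prime, but g is
# a generator of the full group Z_p^*, whose order p - 1 = 2 * 509 has the
# small factor 2.
p = 1019
g = 2          # a quadratic non-residue mod p (since p ≡ 3 mod 8)

def is_qr(a):
    """Euler's criterion: True iff a is a quadratic residue mod p."""
    return pow(a, (p - 1) // 2, p) == 1

def keygen():
    x = random.randrange(1, p - 1)
    return x, pow(g, x, p)                        # secret x, public h = g^x

def encrypt(h, m):
    k = random.randrange(1, p - 1)
    return pow(g, k, p), (m * pow(h, k, p)) % p   # (c1, c2) = (g^k, m*h^k)

def leak_qr_bit(h, c1, c2):
    """Eavesdropper computes the residuosity of m from public values only:
    h^k is a QR unless both x and k are odd, and both parities are visible
    through is_qr(h) and is_qr(c1)."""
    hk_is_qr = is_qr(h) or is_qr(c1)
    return is_qr(c2) == hk_is_qr                  # equals is_qr(m)

# Distinguishing two messages with advantage 1: not semantically secure.
x, h = keygen()
for m in (4, g):                                  # 4 is a QR, g is not
    c1, c2 = encrypt(h, m)
    assert leak_qr_bit(h, c1, c2) == is_qr(m)
```

Restricting to the prime-order subgroup of quadratic residues makes is_qr constant on the group and the leak disappears, which is exactly why the primality condition matters.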

Re: 2007/401: Deus ex Machina?
Posted by: nigel (IP Logged)
Date: 25 October 2007 12:49

I think there is a major misunderstanding in the article as to why these works are interesting.

In chip design, automated verification via computer-assisted or computer-directed proofs is crucial; without it we could not build the complex microprocessors we now have. Once upon a time, academic papers were published which did these things on small examples. At the time it would have been clear that a human could have done better. Now, however, we reap the benefits of that early research every time we purchase a fast computer.

Would it not be nice if in the future we could verify, in the complexity-theoretic/cryptographic model, a very complicated security protocol? For example, a protocol to implement an electronic election, or to run part of a nation's critical infrastructure. To do this we need to develop tools; clearly, at first these tools will only be applied to smallish protocols, which may have easier hand-based proofs, but eventually they will allow us to do things a human cannot. Going back to my earlier analogy: would anyone try to verify the Intel Core 2 design by hand?

To criticise such papers, as Koblitz does, shows in my view that one does not appreciate where this research may take us.

Indeed, I would say that this merging of the cryptographic and the symbolic (computer-assisted proof) methods has been one of the more important developments in crypto in the last few years. For too long the two communities have been moving apart.

Re: 2007/401: Deus ex Machina?
Posted by: Vielhaber (IP Logged)
Date: 03 November 2007 00:31

Observing that Koblitz cites Sokal, I was essentially sure I would like this report. Not disappointed.

So, what is Nowak showing then?
Since ElGamal is insecure for certain values of q, the "proof" should mention this (apparently impossible, since the prime factorisation of q is outside its scope); otherwise, what does it *really* prove?
So, is game five *not* true after all, if q is special?

Comparing VLSI design with cryptography leads us to the very point:
VLSI design is a check of some (maybe huge, but) *finite* set of rules (electrical design-rule checks, Boolean transformations, etc.) that have to be satisfied; if they are, the chip is considered OK (including the famous Pentium division bug: that Pentium was OK according to the rules).

Cryptographic security, in particular *provable security*, is open-ended. So the (necessarily) finitely many checks an automated prover can make can never assure 100% security (and how would a proof system distinguish P = NP from P ≠ NP, with their obviously different implications?).

Of course, computers help. I think the main point is the exaggeration of claiming to achieve "provable security" when there is no such thing outside the one-time-pad and quantum-bit-transmission realm. And then, what is "some" security worth?

Re: 2007/401: Deus ex Machina?
Posted by: nigel (IP Logged)
Date: 07 November 2007 19:34

Actually, it is the VERIFICATION of the current Intel floating-point unit, carried out as a result of the Pentium bug, which I would claim shows that formal methods have a place.

Talk to a chip designer. Almost all their effort is now spent in verifying the design.

The same should be true of crypto. Coming up with a new scheme should be the easy bit; we should be spending our time on the tools and effort needed to verify that it is secure.

Re: 2007/401: Deus ex Machina?
Posted by: nkoblitz (IP Logged)
Date: 02 June 2010 03:45

In view of this recent posting agreeing with Jonathan Katz's 2007 posting, I'd like to point out that Katz is wrong in stating that my comment on p. 16 is incorrect. Of course, neither the theorem nor Nowak's proof is invalidated by the absence of the primality condition for the group order, as I explicitly said on p. 16. ("This condition does not, of course, affect Nowak's proof of security, which is formal and generic.") Rather, the problem is that the theorem is vacuous (that is, trivially true but useless) without the primality condition. This is not an issue of formal correctness of the theorem or proof. But it's a big issue for an outside person who might believe that the "provable security" theorem gives some sort of security guarantee and who is unaware of the need to check for certain conditions that are missing from the theorem.

