2017 Reports : Cryptology ePrint Archive Forum

**Re: 2017/370**

Discussion forum for Cryptology ePrint Archive reports posted in 2017. Please put the report number in the subject.

2017/370

Posted by: **claudiorlandi** (IP Logged)

Date: 08 May 2017 12:45

I am Claudio Orlandi, one of the authors of the “Simple OT” protocol (https://eprint.iacr.org/2015/267, CO from now on). A paper recently appeared on ePrint claiming a number of serious flaws in the proof of security of CO (https://eprint.iacr.org/2017/370, version 28 Apr 2017, GIR from now on).

The tl;dr version is the following:

- Most of the alleged problems pointed out by GIR are simple misunderstandings which could have been easily clarified in a private conversation. Unfortunately, we were not contacted prior to the posting of GIR and therefore I will try to clarify the misunderstandings here.

- GIR claims (Theorem 1) not only that the CO proof is wrong, but that the CO protocol cannot be proven secure. The impossibility result however relies on a false computational assumption (Assumption 1), and the GIR theorem is therefore vacuous (see below for a distinguisher which wins the security experiment with probability close to 3/4).

- GIR correctly identifies a problem with our proof in the case of a corrupt sender. I believe however that the protocol can be proven secure using a stronger computational assumption, namely the "Gap Diffie-Hellman Assumption".

I am very grateful to the authors of GIR for pointing out the flaw in the proof for the corrupt sender, and I will post a revised version of the paper on ePrint in the coming weeks. Stay tuned.

Specific answers to technical issues:

-- "The OT ideal functionality in CO does not interact with the environment."

In Section 2 GIR restates a "problem" with the OT functionality defined in CO. This "problem" was already raised by Li and Micciancio (http://eprint.iacr.org/2016/624). It is trivial to see that a functionality that does not communicate with the environment cannot be securely implemented, since the simulator would not know when to produce the different parts of the view. It is however (and perhaps unfortunately) "standard practice" to omit these details in the definition of ideal functionalities; see e.g., the OT functionality defined by Canetti et al. on page 41 of [eprint.iacr.org].

The reason for the omission is that in the UC framework all messages to ideal functionalities are composed of a "header" and a "payload". The "header" (which contains the kind of message, such as "send" or "receive", the identifiers of the parties, and the current protocol instance) is typically public and can be seen by the environment, while the "payload" (which contains choice bits, messages, etc.) is typically private and can only be accessed by the parties. On top of that, it is implicitly assumed that the environment (controlling the network) can arbitrarily drop or reorder messages. This was explicitly mentioned in early UC papers (e.g., on page 13 of Canetti et al.), and is now implicitly assumed.

-- "The CO robust encryption scheme is not robust"

In Section 3 GIR claims that the robust encryption scheme defined and constructed in CO is in fact not robust, since the scheme is not robust against adversarially chosen keys. Definition 2 in CO clearly states that the required property only has to hold when the keys are randomly sampled from the key space (in retrospect, "weak robustness" would have been a better name for the property defined in CO and might have avoided the misunderstanding). Therefore the "attack" presented at the end of GIR Section 3 does not contradict Definition 2 of CO. GIR complains that the key generation algorithm is not fully specified in CO. We note that it is standard practice (e.g., page 52, Katz and Lindell, "Introduction to Modern Cryptography") to assume that, when the key generation algorithm is omitted, key generation for a symmetric encryption scheme simply samples a uniformly random bit string of the right length.
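To make the distinction concrete, here is a minimal one-time-pad-style sketch (a hypothetical instantiation for illustration, not necessarily the exact CO construction): with uniformly random keys, decryption under the wrong key fails except with negligible probability, while an adversarially chosen key can open a ciphertext to any message of the adversary's choice.

```python
import os

TAG = 8  # number of zero padding bytes; decryption fails if the tag is not all zero

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enc(k, m):
    # E(k, m) = k XOR (m || 0^TAG); key length = message length + TAG
    return xor(k, m + bytes(TAG))

def dec(k, e):
    # Returns None ("bot") if the zero tag does not check out
    pt = xor(k, e)
    return pt[:-TAG] if pt[-TAG:] == bytes(TAG) else None

m = b"hello"
k = os.urandom(len(m) + TAG)
e = enc(k, m)

# Weak robustness: under an independently *random* key, decryption
# fails except with probability 2^(-8*TAG).
k2 = os.urandom(len(m) + TAG)
assert dec(k2, e) is None

# ...but an *adversarially chosen* key opens e to any message,
# which is exactly the kind of attack Definition 2 does not rule out:
m_adv = b"evil!"
k_adv = xor(e, m_adv + bytes(TAG))
assert dec(k_adv, e) == m_adv
```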

-- "The CO proof is wrong in the case of a corrupt receiver, Part 1"

On page 8, GIR claims that the proof of security in the case of a corrupt receiver is wrong, since the simulator can only extract the choice bit when the adversary queries the RO, and this could happen *after* the simulator sends the two ciphertexts (e0,e1) to the adversary. This case is explicitly addressed in the proof in CO, and is dealt with using a "non-committing encryption scheme", i.e., a scheme where the simulator can first produce a ciphertext e, then learn the corresponding message m, and finally produce a key k such that D(k,e)=m.
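For readers unfamiliar with the technique, here is a toy sketch of such a non-committing scheme in the programmable random-oracle model (the `ProgrammableRO` class and all parameters below are illustrative, not taken from CO): the simulator commits to a random ciphertext, learns the message later, and then "explains" the ciphertext by programming the oracle at a fresh key.

```python
import os, secrets

class ProgrammableRO:
    """Hypothetical random oracle, modelled as a lazily sampled table
    that the simulator may program at points not yet queried."""
    def __init__(self, out_len):
        self.table = {}
        self.out_len = out_len
    def query(self, k):
        if k not in self.table:
            self.table[k] = os.urandom(self.out_len)
        return self.table[k]
    def program(self, k, value):
        assert k not in self.table, "point already queried"
        self.table[k] = value

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Honest encryption in the RO model: E(k, m) = H(k) XOR m.
def enc(ro, k, m): return xor(ro.query(k), m)
def dec(ro, k, e): return xor(ro.query(k), e)

# Non-committing simulation:
ro = ProgrammableRO(out_len=16)
e = os.urandom(16)            # 1. ciphertext fixed before m is known
m = b"secret message!!"       # 2. message (16 bytes) learned afterwards
k = secrets.token_bytes(16)   # 3. fresh key, unqueried w.h.p.
ro.program(k, xor(e, m))      #    force H(k) = e XOR m
assert dec(ro, k, e) == m     # the ciphertext now opens to m
```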

-- "The CO proof is wrong in the case of a corrupt receiver, Part 2"

At the bottom of page 8, GIR claims that we cannot use Lemma 2 to reduce the security of the protocol to the CDH assumption, since the lemma uses rewinding, which should not be possible in the UC framework. This is a common misunderstanding. While it is true that in the UC framework the *simulator* cannot rewind the adversary, this does not mean that the *reduction* cannot rewind the environment. That is, assuming that there is an environment Z that distinguishes between the real and ideal world, it is possible to use this environment arbitrarily (even by rewinding it) in a reduction to a computational problem. Informally, this is possible because the rewinding happens "outside the UC framework". Note that this technique was not introduced in CO and can be found in many previous UC papers.

-- "The CO proof is wrong in the case of a corrupt sender"

On pages 7-8, GIR proposes a clever and simple attack which breaks the CO proof of security. In a nutshell, the adversary queries the RO on two arbitrary group elements and encrypts two messages using the keys received from the RO. Now, in a real execution of the protocol, the honest receiver will output "bot", since the receiver attempts to decrypt with a different key than the one used for encryption (and the encryption scheme is robust). However, the CO simulator does not output "bot", but instead extracts the two messages encrypted by the adversary. In the same section, GIR proposes a fix to the proof: if the simulator could *compute* the two group elements that an honest sender would use, then the simulator could extract the correct messages. However, as GIR points out, this would require the simulator to compute g^(a^2) from g^a, which is a hard computational problem. We note here that it is not necessary for the simulator to *compute* this value, but only to *recognise* it when the adversary queries the RO on it. Therefore, it seems possible to fix the proof using the Gap Diffie-Hellman assumption (where the simulator has access to a DDH oracle). I believe that this intuition can be expanded into a full proof, which I hope to post online in the upcoming weeks in a revised version of the CO paper.

-- "The CO protocol cannot be proven secure if Assumption 1 holds"

In Section 5 GIR proves that the CO protocol cannot be proven secure (Theorem 1) under the assumption that a given computational problem is hard (Assumption 1). I have not checked the proof of Theorem 1, but I will explain here why Assumption 1 is trivially false.

In particular, the following attacker wins the experiment of Assumption 1 with probability ~3/4. The adversary:

1. Receives a group description (G,g,p) and a group element A=g^a

2. Computes and sends B=g^x to the challenger (for an arbitrary x!=a)

3. Receives Z0,Z1.

4. Outputs 0 if Z0=A^x, and 1 otherwise

Note that in the experiment of Assumption 1, Z0=B^a=g^(xa)=A^x in 3/4 of the cases (b=0, and b=1,d=1) and uniformly random in 1/4 of the cases (b=1,d=0). Therefore the probability that the adversary wins is:

Pr[b'=b] = Pr[b'=0|b=0]/2 + Pr[b'=1|b=1,d=0]/4 + Pr[b'=1|b=1,d=1]/4

         = 1/2 + (1-1/p)/4 + 0 ~ 3/4 (for large p)
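This calculation can be checked empirically. The sketch below reconstructs the Assumption 1 experiment from the description in this post (the exact challenger, and in particular the distribution of Z1 when b=1,d=0, is my assumption) over a small safe-prime group, and runs the attacker above; the empirical win rate concentrates around 3/4.

```python
import random

# Toy safe-prime group for illustration: p = 2q + 1, subgroup of order q.
p, q, g = 10007, 5003, 4          # g = 2^2 generates the order-q subgroup
rng = random.Random(1)

def rand_elem():                  # uniform element of the subgroup
    return pow(g, rng.randrange(q), p)

def experiment():
    """One run of the (reconstructed) Assumption 1 experiment."""
    a = rng.randrange(1, q)
    A = pow(g, a, p)
    # Adversary's first move: send B = g^x with arbitrary x != a.
    x = rng.randrange(1, q)
    while x == a:
        x = rng.randrange(1, q)
    B = pow(g, x, p)
    b = rng.randrange(2)
    if b == 0:                    # Z0 = B^a, Z1 = B^a / g^(a^2) = (B/A)^a
        Z0 = pow(B, a, p)
        Z1 = pow(B * pow(A, p - 2, p) % p, a, p)
    else:
        d = rng.randrange(2)
        Z0 = pow(B, a, p) if d == 1 else rand_elem()
        Z1 = rand_elem()
    # Attacker from the post: output 0 iff Z0 = A^x.
    guess = 0 if Z0 == pow(A, x, p) else 1
    return guess == b

trials = 2000
rate = sum(experiment() for _ in range(trials)) / trials
print(rate)                       # concentrates near 3/4
```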

Can we now turn this attack on Assumption 1 into a simulator for the CO protocol? Unfortunately, the above attack only succeeds with probability 3/4, while the simulator needs to succeed with probability close to 1. We note here that, given access to a DDH oracle, it is possible to win the game with probability 1 in the following way:

1. Receive (G,g,p,A=g^a).

2. Send B=g^x to the challenger

3. Receive Z0,Z1

4. If Z0!=A^x output 1, else

5. Query the DDH oracle on (A,B/A,Z1). If the oracle says "non-DDH" output b=1

6. Else output b=0

Note that the check at step 4 correctly identifies the case (b=1, d=0) with probability overwhelmingly close to 1. The check at step 5 allows the adversary to distinguish between the case (b=0), where Z1=B^a/g^(a^2)=g^(a(x-a)), and the case (b=1,d=1), where Z1 is uniformly random.
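The DDH-oracle attacker can also be checked empirically. The sketch below again reconstructs the Assumption 1 experiment from the description in this post (a toy safe-prime group; the distribution of Z1 when b=1,d=0 is my assumption) and simulates the DDH oracle by letting it know the discrete log a; the attacker's win rate is then essentially 1.

```python
import random

p, q, g = 10007, 5003, 4          # toy safe-prime group, subgroup of order q
rng = random.Random(2)

def rand_elem():
    return pow(g, rng.randrange(q), p)

def ddh_oracle(a, U, V):
    """Simulated DDH oracle for triples (A, U, V) with A = g^a:
    answers whether V = U^a.  (The simulation cheats by knowing a.)"""
    return pow(U, a, p) == V

def experiment():
    a = rng.randrange(1, q)
    A = pow(g, a, p)
    x = rng.randrange(1, q)
    while x == a:
        x = rng.randrange(1, q)
    B = pow(g, x, p)
    b = rng.randrange(2)
    if b == 0:                    # Z0 = B^a, Z1 = (B/A)^a = B^a / g^(a^2)
        Z0 = pow(B, a, p)
        Z1 = pow(B * pow(A, p - 2, p) % p, a, p)
    else:
        d = rng.randrange(2)
        Z0 = pow(B, a, p) if d == 1 else rand_elem()
        Z1 = rand_elem()
    # DDH-oracle attacker (steps 4-6 above):
    if Z0 != pow(A, x, p):
        guess = 1                 # catches (b=1, d=0)
    else:
        BoverA = B * pow(A, p - 2, p) % p
        guess = 0 if ddh_oracle(a, BoverA, Z1) else 1
    return guess == b

trials = 500
rate = sum(experiment() for _ in range(trials)) / trials
print(rate)                       # essentially 1
```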

Best

Claudio Orlandi

(contact info at orlandi.dk)

Posted by: **FrancescoTotti** (IP Logged)

Date: 24 May 2017 16:10

I am Vincenzo Iovino and I write on behalf of my co-authors Ziya Alper Genç and Alfredo Rial.

We would like to thank Tung Chou and Claudio Orlandi for their insightful comments. We have updated our paper. The main changes are the following:

- We acknowledge that our claim that there are mistakes in the security proof for the case where the receiver is corrupt was based on a misunderstanding.

- For this reason, we no longer claim that the OT protocol by Chou and Orlandi cannot be proven UC secure.

- Instead, we show that the protocol needs to be instantiated with a group G in which a certain decisional problem (referred to as Assumption 1 in the previous version) can be solved correctly with overwhelming probability. This decisional problem can indeed be solved correctly with overwhelming probability if a DDH oracle is provided, and thus gap-DH groups seem likely to provide a secure instantiation. We remark that the comments by Tung Chou and Claudio Orlandi on the advantage of an adversary in solving our decisional problem (Assumption 1) are correct: an adversary can indeed win with probability 3/4, but for our result to hold we only need to conjecture that no adversary wins the game with overwhelming probability. In fact, our result holds because we only prove that a correct simulator cannot be provided if our decisional problem cannot be solved correctly with overwhelming probability. We have corrected and simplified the proof from the previous version.
