Location Leakage in Distance Bounding: Why Location Privacy does not Work

In many cases, we can only have access to a service by proving that we are sufficiently close to a particular location (e.g. in automobile or building access control). In these cases, proximity can be guaranteed through signal attenuation. However, by using additional transmitters, an attacker can relay signals between the prover and the verifier. Distance-bounding protocols are the main countermeasure against such attacks; however, such protocols may leak information regarding the location of the prover and/or the verifier who run the distance-bounding protocol. In this paper, we consider a formal model for location privacy in the context of distance-bounding. In particular, our contributions are threefold: we first define a security game for location privacy in distance-bounding; secondly, we define an adversarial model for this game, with two adversary classes; finally, we assess the feasibility of attaining location privacy for distance-bounding protocols. Concretely, we prove that for protocols with a beginning or a termination, it is theoretically impossible to achieve location privacy for either of the two adversary classes, in the sense that there always exists a polynomially bounded adversary that wins the security game. However, for so-called limited adversaries, which cannot see the location of arbitrary provers, carefully chosen parameters do, in practice, enable computational location privacy.


Introduction
Often, our location is critical in order to gain access to places and/or services. For instance, in applications such as automobile access control, the key (prover) needs to be close enough to the car lock (verifier) in order to unlock it [17]. In some cases, unlocking the car may in fact also start it (in passive keyless entry and start (PKES) systems [18]). If the proximity check is performed through signal attenuation, an adversary may easily perform man-in-the-middle attacks by relaying messages between the communicating parties (provers and verifiers) while these parties are situated far from each other. Thus, in the automobile example, an adversary may unlock the car even if the car key (the prover) is located very far away. This type of attack (called mafia fraud [11]) can also be mounted against bankcards [13], mobile phones [19], proximity cards [20], and wireless ad hoc networks [21].
Distance-bounding (DB) protocols are meant to counteract man-in-the-middle relay attacks in authentication schemes. They are challenge-response authentication protocols that allow the verifier, by measuring the time-of-flight of the messages exchanged, to calculate an upper bound on the prover's distance (as well as to check the validity of the responses, which usually ensures authentication). DB protocols were first introduced by Brands and Chaum [6] to preclude relay attacks in ATM systems. Subsequently, numerous DB protocols were proposed [22,27,9] and many attacks against them have been published [2,3,15]. DB protocols have also been analysed for the case of noisy channels [23] and for the optimal setting of security parameters [12,25]. To the best of our knowledge, [4,5] describes the latest and most secure distance-bounding protocol against all known attack modes. Another provably secure protocol, attaining quite strong terrorist-fraud resistance requirements, has recently been published in [16].
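To make the time-of-flight idea concrete, the following sketch (not taken from any cited protocol; the constant and the processing-delay parameter are assumptions of this illustration) shows how a verifier turns a measured round-trip time into an upper bound on the prover's distance:

```python
C = 299_792_458.0  # speed of light in m/s (RF signals travel at ~c)

def distance_upper_bound(round_trip_s: float, processing_delay_s: float = 0.0) -> float:
    """Upper bound on the prover's distance: the signal covers it twice."""
    time_of_flight = max(round_trip_s - processing_delay_s, 0.0) / 2.0
    return C * time_of_flight

# A 100 ns round trip with zero processing delay bounds the prover to ~15 m.
bound = distance_upper_bound(100e-9)
```

Any extra processing delay at the prover only loosens the bound, which is why relay attacks that add transmission hops can be detected by tight timing.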
Location privacy was introduced in the context of distance-bounding by Rasmussen and Čapkun [26], who noted that distance-bounding protocols may leak further location-related information than just the fact that the prover is within the maximum allowed distance from the verifier. This information leakage follows from the measurement of the messages' arrival times.
To combat this, Rasmussen and Čapkun [26] proposed a privacy-preserving distance-bounding protocol (denoted here as the RČ protocol). Though the protocol in [26] claims to preserve location privacy, we note that location privacy has never been formalised in the literature. Additionally, the RČ protocol has been shown to be susceptible to a non-polynomial dictionary attack which may reveal the prover's and verifier's locations [1], as well as to a mafia fraud attack [24]. Mitrokotsa et al. [24] have proposed a new distance-bounding protocol called Location-Private Distance Bounding (LPDB) that improves the basic construction of the RČ protocol and renders it secure against the latter attack.
Distance bounding can also be extended to location verification [29] (also known as secure positioning [28]) when multiple verifiers interact with a single prover. In that case, the location of the prover can be determined using the intersection of the bounding spheres surrounding each verifier. This approach is also taken into consideration in the recent work on position-based cryptography [10]. Our approach here is different, as we consider a single verifier and many provers; we thus only achieve distance bounding, and not secure positioning. Furthermore, in position-based cryptography all the adversaries have the same knowledge as the prover, including the secret key. In our model, by contrast, we do not allow the adversary knowledge of the secret key, as that would allow it to trivially distinguish between the two provers in the location privacy game, without actually requiring any location data.
We also mention the recent work on localisation privacy by Burmester [7,8], where location is used in a steganographic sense (such that provers are convinced that verifier-generated challenges are honest, and they do not reveal their presence to adversaries). Notably, however, the constructions in [8] require provers to be aware of their own position/location, which is a strong assumption in the case of general provers. In this case, location is used as part of the verifier's challenge, and the prover verifies that this location is sufficiently close to its own.
Contributions: In this paper, we address precisely the topic of location privacy in distance-bounding. Our contributions are threefold:
1. We first define a classical left-or-right indistinguishability game for location privacy in distance-bounding protocols. In this game, the adversary knows its distance to the verifier V and can create provers P at arbitrary distances from itself and V.
2. For this location privacy game, we consider two main adversarial classes: omniscient and limited adversaries. Omniscient adversaries capture an adversary that can measure the signal strength of the transmitted messages and is aware, for all transmissions along the timed channel, of when a message is sent and when it arrives. Unsurprisingly, no location privacy is feasible against omniscient adversaries. Limited adversaries, on the other hand, are only aware of the time at which they receive messages from other participants.
3. Finally, we show that achieving location privacy with respect to limited adversaries is impossible for protocols that have a beginning or a termination and that run in polynomial time. We prove that location privacy against limited adversaries minimally requires the prover and the verifier to introduce exponential delays between receiving and sending messages, and we give a lower bound for these delays. Since the transmission speed is high (e.g. the speed of light in the case of RFID transmissions), the delay can be implemented in practice. Finally, we show how to specify these delays in the LPDB protocol proposed in [24].
Organisation: This paper is organised as follows. We begin by defining distance-bounding protocols and location privacy in section 2, outlining also our adversarial classes. We then assess the feasibility of achieving location privacy for distance-bounding protocols in section 3, for both omniscient and limited adversaries, giving a lower bound for the delay that each party must introduce between receiving a message and sending a response. We apply our results and the obtained bound in section 4, in order to modify the LPDB protocol [24] to attain location privacy with respect to limited adversaries.

Communication Model
Our distance-bounding scenario resembles that of Dürholz et al. [14], but we consider multiple provers. Concretely, there is a single verifier V, but many provers P_1, . . ., P_n, such that V and each P_i share a secret key K_i output by a key generation algorithm Kg. We also assume that when it is initialised, the verifier V is equipped with an upper bound on the maximum allowed communication time (or time distance) t_max between itself and the prover.
The communication model considered by [14] is round-based. However, the RČ [26] and the LPDB [24] distance-bounding protocols, for example, are not round-based. Therefore, we consider a more generalised model, where the two parties P and V interact with no round-based restriction, via two types of channels: a timeless and a timed channel. Parties P and V may send messages m along each of the two channels (i.e., they are duplex channels). In order to make the model more realistic, we consider the transmissions along the timed channel to be bit-by-bit.
More formally, the timed channel is associated with the global clock, such that each bit of an input message m is associated with a time ts at which the sending party has sent the bit. The corresponding output bit of message m is associated with a time tr, which is the time at which the receiving party has received the bit. The bit-by-bit treatment of the transmission time is compulsory since, in practice, each bit of the message is transmitted sequentially or in smaller packets. However, for practical purposes we will often associate (in our proofs) the sending time of a message with the sending time of the first bit of this message, since this particular value is enough to leak significant information regarding the position of the honest protocol participants (prover and/or verifier).
For the sake of completeness of our model, however, we associate a message m with an |m|-dimensional vector of sending times ts and an |m|-dimensional vector of reception times tr. We also require that the values in ts and those in tr are monotone non-decreasing, i.e. for any message m and any 1 ≤ i ≤ j ≤ |m|, it holds that ts_i ≤ ts_j and tr_i ≤ tr_j. Furthermore, if we consider the communication between two parties A and B, and a message m is sent from party A to party B at times ts, then the reception times tr of the message m at party B satisfy

tr_i = ts_i + t_AB for every i ∈ {1, . . ., |m|},

where t_AB denotes the time distance between the parties A and B. More precisely, t_AB denotes the time (measured in time units TU) that every bit of a message m takes to travel between A and B.
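The timed channel above can be sketched in a few lines; this is an illustrative model only, with the time distance t_AB passed in as a parameter:

```python
def transmit(ts: list, t_ab: float) -> list:
    """Reception times of a message whose bits were sent at times ts:
    each bit arrives exactly t_AB time units after it was sent."""
    assert all(ts[i] <= ts[i + 1] for i in range(len(ts) - 1)), "non-decreasing"
    return [t + t_ab for t in ts]

ts = [0.0, 1.0, 2.0, 3.0]    # sending time of each bit, in time units (TU)
tr = transmit(ts, t_ab=5.0)  # reception times, also monotone non-decreasing
```

Note that monotonicity of ts carries over to tr automatically, since every entry is shifted by the same constant t_AB.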
Moreover, if the message m leaks off this channel to an adversary A, the leaked message is associated with an |m|-dimensional timestamp vector tr^A, containing the time at which each bit reaches A. Note that this information alone may not suffice to learn the sending time of the message, as the adversary does not necessarily know the distance between itself and the sending party.
Both channels allow the prover P and the verifier V to interact concurrently, i.e. it is possible that the prover P and the verifier V transmit at the same time across the duplex channel. This is indeed the case for the RČ protocol [26].
We now define communication in distance-bounding protocols as being slow (or lazy) if it takes place on the timeless communication channel and fast (or time-critical) if it takes place on the timed communication channel. Note that it is possible to alternate fast and slow communication arbitrarily. We note that this approach is perfectly in tune with the similar communication model of [14], but it is also compatible with protocols that are not round-based.

Definition 1. We say that DB = (V, P, Kg) is a distance-bounding protocol with parameters (t_max, ε), where t_max denotes the upper bound on transmission time in the fast phase and ε denotes the tolerance level for honest P-V authentication failures, if:

Key Generation: Kg generates a secret key K ← Kg(1^ℓ) for any ℓ ∈ N.

Distance-Bounding Authentication:
The joint execution of the prover and verifier algorithms V and P for parameters (t_max, ε) ends with a verifier-generated distance-bounding authentication bit b ∈ {0, 1}.
We require ε-completeness, i.e., the interaction of an honest prover P and an honest, fixed verifier V for parameters (t_max, ε) is accepted by the verifier with probability at least 1 − ε if t_VP ≤ t_max.

Adversarial Models
In our framework, the goal of the adversary is to break location privacy as defined below. In this section, we first show how adversaries interact with the communication channels and with the honest parties during an attack. Then, we define two adversarial classes depending on the strength of the adversary. Finally, we show the location privacy game.
We consider adversaries A that interact with the distance-bounding system as follows: (1) A may eavesdrop on the communication (across both the timed and the timeless channel) of an honest prover P and an honest verifier V; and (2) A may interact with honest provers in prover-adversary sessions and with honest verifiers in adversary-verifier sessions. Note that this behaviour implies that an adversary can mount a full man-in-the-middle attack by simply opening concurrent prover-adversary and adversary-verifier sessions. This is again in agreement with the treatment given by Dürholz et al. [14]; we refer to that paper for the more formal notions of session identifiers.
In view of [30], we consider that frequency hopping (i.e. implementing a protocol such that the sender and the receiver hop from one frequency to another during the transmission) is not an effective countermeasure against eavesdropping adversaries. In particular, by simply eavesdropping on all possible frequencies (in practice the prover and the verifier are unable to use too many different frequencies), the adversary can successfully "piece together" the communication.
We consider two types of adversaries, the limited and the omniscient adversaries, described as follows: Limited adversaries: These adversaries may eavesdrop on honest prover-verifier sessions, or communicate with provers and verifiers in prover-adversary and adversary-verifier sessions, respectively. On eavesdropping on the timed channel in honest prover-verifier sessions, limited adversaries learn the transmitted message m and the bit-by-bit time at which the message is received, tr^A = ts + t_PA, where P is the party that sent the message m and t_PA here denotes the |m|-dimensional vector whose entries all equal the time distance t_PA between P and the adversary A. Note that the adversary A is able to choose its location and knows t_AV (i.e. its time distance from the verifier V); consequently, A learns the sending times at which the verifier sends its messages.
Omniscient adversaries: These adversaries can also eavesdrop on honest prover-verifier sessions or communicate with provers and verifiers as above. Additionally, an omniscient adversary can measure the signal strength of the transmitted messages and is aware, for all transmissions along the timed channel, of when a message is sent and when it arrives. More precisely, on eavesdropping on the timed channel during an honest prover-verifier session, omniscient adversaries learn the message m, the bit-by-bit reception time tr^A = ts + t_PA, and the bit-by-bit sending time ts. Thus, omniscient adversaries can trivially learn the distance between themselves and the party P that sent the message.
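The extra power of the omniscient adversary is easy to see in code; the concrete time distances below are hypothetical values chosen for illustration:

```python
def time_distance(ts_first_bit: float, tr_first_bit: float) -> float:
    """Time distance between sender and eavesdropper, recovered from one bit
    whose sending time ts is known (the omniscient adversary's extra power)."""
    return tr_first_bit - ts_first_bit

# Suppose P_0 sits 3 TU and P_1 sits 7 TU away from A. One eavesdropped bit
# whose sending time is known then identifies the sender unambiguously.
observed = time_distance(ts_first_bit=10.0, tr_first_bit=13.0)
guess = 0 if abs(observed - 3.0) < abs(observed - 7.0) else 1
```

A limited adversary sees only tr_first_bit, so it cannot compute this difference; this asymmetry is exactly what the rest of the paper exploits.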
To justify that an omniscient adversary can also learn the sending time of messages, we could model this by distributed, limited adversaries, i.e. A = (A_1, A_2). The composite adversary A chooses the locations of A_1 and A_2 and can triangulate the signals. This definition also extends to a moving adversary (i.e. an adversary that is able to change its location), as discussed in Section 3.1.
We consider only polynomial adversaries (i.e., adversaries having polynomial run-time and running polynomially many sessions with the provers and the verifier). The adversary's goal is to break the location privacy of the distance-bounding protocol, which we define by means of a left-or-right indistinguishability game, as described below.
Phase 1: In this phase, the adversary is given the security parameter (in unary) 1^λ. The adversary may now initialise provers P_i at arbitrary locations with respect to itself and the verifier V, and may interact arbitrarily with the provers and the verifier. At the end of this phase, the adversary outputs two indices i, j such that t_{P_i V} and t_{P_j V} are both smaller than the threshold t_max; these indices are forwarded to a challenger.
Phase 2: The challenger checks that the two provers are both within the maximum distance t_max, then closes all sessions that are open for these provers. The challenger flips a bit b and assigns the handle P_Chal as follows: P_Chal = P_i if b = 0 and P_Chal = P_j if b = 1.
Phase 3: Finally, by interacting with the challenge prover P_Chal, as well as with all other provers with the exception of P_i and P_j, the adversary must produce a decision bit d.

Let Exp^LocPriv_DB(A, 1^λ) be the output of a single run of the location privacy game. We say that the adversary wins if d = b, and we write this as Exp^LocPriv_DB(A, 1^λ) = 1. The adversary can be viewed as a hypothesis test for the following hypotheses: H_0: the response sent from the prover P_Chal to V's challenge is actually from the prover P_0; and H_1: the response sent from the prover P_Chal to V's challenge is actually from the prover P_1.
We define the advantage of the adversary in this game as

Adv_A = |2 · P[Exp^LocPriv_DB(A, 1^λ) = 1] − 1|.

Definition 2. We say that a distance-bounding protocol provides location privacy if, for all locations loc_{P_0}, loc_{P_1}, loc_V and every polynomially bounded adversary A, the advantage Adv_A is negligible.

We should note here that an adversary would select the locations of the participants in such a way as to maximise its advantage. Thus, an adversary A would not place P_0 and P_1 at the same location, or at equal distances to A and V.
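The left-or-right structure of the game can be sketched as follows; the adversary below is a placeholder that ignores its input and guesses, and all names are assumptions of this sketch rather than notation from the formal definition:

```python
import random

def location_privacy_game(adversary_decide, seed=None) -> bool:
    """One run of the game: challenger flips b, hands the adversary a handle
    to P_i or P_j, and the adversary wins iff its decision d equals b."""
    rng = random.Random(seed)
    b = rng.randint(0, 1)           # challenger's secret bit
    p_chal = ("P_i", "P_j")[b]      # opaque handle to the challenge prover
    d = adversary_decide(p_chal)    # adversary interacts, outputs a decision
    return d == b                   # Exp^LocPriv = 1 iff the adversary wins

# A blind-guessing adversary wins about half the time: advantage ~ 0.
wins = sum(location_privacy_game(lambda _: 0, seed=s) for s in range(1000))
```

The whole question of the paper is whether timing leakage lets a real adversary do noticeably better than this baseline of one half.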

Why Location Privacy does not Work
In this section, we first argue that location privacy cannot be achieved with respect to an omniscient adversary. Then, we show that location privacy can only be achieved with respect to limited adversaries if the honest parties running the protocol introduce a delay in their transmissions; we furthermore give a lower bound on this delay.

Omniscient Adversary
It is trivial to see that no location privacy can be attained with respect to an omniscient adversary. Indeed, consider an omniscient adversary placed arbitrarily with respect to the verifier. Let this adversary A create two provers P_0 and P_1 such that its distances to the two provers differ, i.e. t_{P_0 A} ≠ t_{P_1 A}.
Obviously, an adversary A would choose its location so as to maximise its advantage. Thus, choosing to be at equal distance from the two provers it is trying to distinguish would not be a good choice.
The adversary forwards P_0, P_1 to the challenger, receiving the handle P_Chal, which is either P_0 or P_1. Now, the adversary eavesdrops on a session between P_Chal and V, thus learning the sending times of the messages and the times at which it receives them. It thus calculates the time distance between itself and each of the two communicating parties and, since the distances are all different, it identifies the parties with probability 1.
A single but moving adversary (i.e., an adversary that can change its position during the attack) could also infer some information about the location of the prover by standing between P_0 and P_1 and moving toward P_0, due to the Doppler effect: if bits arrive at a higher frequency, they must have been sent by P_0 rather than P_1.

Limited Adversary
By eavesdropping on the duplex timed channel between the challenged prover and the verifier, the adversary receives tr^A_i, the timestamp at which A receives the first bit of message m_i. The adversary A thus observes:
- t_V = tr^A_1: the time at which A receives the first message bit from V;
- t_P = tr^A_2: the time at which A receives the first message bit from P.
In what follows, we show that the very first bit sent through the timed channel already leaks location information. To prove this, we make the following reasonable assumptions about how the sending time of this first bit is decided during the protocol. Note that similar observations hold for the final bit sent; for simplicity, we only treat the first one.

Assumption 1 We assume that the distance bounding phase of a distance-bounding protocol may have one of the following constructions:
- Case 1: The verifier V starts the distance bounding phase after a reference time t_0 and a random delay, possibly equal to 0, which we denote delay_V, while the prover P_b, where b ∈ {0, 1}, starts after receiving the first message from the verifier V and a random delay delay_{P_b}.
- Case 2: The prover P_b starts the distance bounding phase after a reference time t_0 and a random delay delay_{P_b}, while the verifier V starts after receiving the first message from the prover P_b and a random delay delay_V.
- Case 3: The prover P_b and the verifier V start sending messages independently. More precisely, the prover P_b starts sending messages after a reference time T_{P_b} and a random delay delay_{P_b}, while the verifier V starts sending messages after a reference time T_V and a random delay delay_V.
We should note here that when we mention "random delay" we mean a delay of arbitrary distribution.
Assumption 2 We also assume that A knows the times T_{P_b} (where b ∈ {0, 1}) and T_V; the latter value is defined only for Case 3 of Assumption 1.
The cases described above are depicted in figure 1. Without loss of generality, in figure 1 the adversary A is located between the verifier V and the prover P.
It is easy to see that in our model a limited adversary A knows, and can even choose, the locations of P_0 and P_1 with respect to itself and the verifier V, i.e. the values t_{AP_0}, t_{AP_1}, t_{VP_0}, t_{VP_1}. Also, A knows its distance t_AV to V. We will show how an adversary intercepting the values above can distinguish between the two hypotheses H_0 and H_1 with non-negligible probability.
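The distinguishing statistic used below (for Case 1 of Assumption 1) can be sketched numerically; the geometry values are hypothetical, and the derivation of T and Delta follows the proof of Lemma 1:

```python
def statistic_T(t_P, t_V, t_VP0, t_P0A, t_VA):
    """T = t_P - t_V - (t_VP0 + t_P0A - t_VA): equals delay_P0 under H_0,
    and delay_P1 + Delta under H_1."""
    return t_P - t_V - (t_VP0 + t_P0A - t_VA)

# Geometry chosen by the adversary (time distances in TU, all hypothetical):
t_VA, t_VP0, t_P0A, t_VP1, t_P1A = 4.0, 3.0, 1.0, 6.0, 2.0
delta = (t_VP1 + t_P1A) - (t_VP0 + t_P0A)  # the fixed shift between hypotheses

# Simulate H_0 with delay_P0 = 0.5 (t_0 + delay_V folded into one constant):
start = 10.0                        # t_0 + delay_V
t_V = start + t_VA                  # first bit of V's challenge reaches A
t_P = start + t_VP0 + 0.5 + t_P0A   # first bit of P_0's response reaches A
T = statistic_T(t_P, t_V, t_VP0, t_P0A, t_VA)  # recovers delay_P0
```

The point is that everything subtracted from t_P - t_V is known to the limited adversary, so the only remaining randomness is the prover's delay, shifted by Delta under H_1.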
Lemma 1. Under Assumptions 1 and 2, assume that there exist ε and a bound B such that

P[delay ∈ [0, B]] ≥ 1 − ε,

where delay may represent the delays of the provers, delay_{P_0} and delay_{P_1}, or the delay delay_V of the verifier, as defined in Assumption 1.

Then, there exists an adversary A against location indistinguishability which achieves a non-negligible distinguishing advantage, lower-bounded in terms of ε, B and t_max as derived in the proof below, where t_max is the maximum allowed transmission time between a legitimate prover P and a verifier V.
Moreover, this adversary does not need to take part in the actual protocol; the attack relies exclusively on eavesdropping. Assuming that the protocol is complete and polynomially bounded, there is a negligible ε such that B exists and is polynomially bounded. So, the advantage Adv_A is not negligible. Consequently, a distance-bounding protocol as defined in Definition 1 does not provide location privacy as per Definition 2.

Proof. Based on Assumption 1, we have three cases.
Case 1: The verifier V starts the distance bounding phase after a reference time t_0 and a random delay (denoted delay_V), whereas the prover P_b starts after receiving the first message from the verifier V and a random delay (denoted delay_{P_b}). This case is depicted in figure 1 (a). More precisely, we consider that the following events take place:

1. After some reference time t_0 and a delay delay_V, the verifier V sends a message c to the prover P_b, where b ∈ {0, 1}. The first bit of this message arrives at the adversary A at time t_V such that

t_V = t_0 + delay_V + t_VA, (1)

where t_VA denotes the time-of-flight for one bit from the verifier V to the adversary A.
2. The prover P_b with b ∈ {0, 1} responds to the verifier V with a message r, after some delay delay_{P_b}. The first bit of r arrives at A at time t_{P_b} such that

t_{P_b} = t_0 + delay_V + t_{VP_b} + delay_{P_b} + t_{P_b A}, (2)

where t_{VP_b} denotes the time-of-flight for one bit from V to P_b, and t_{P_b A} denotes the time-of-flight for one bit from P_b to A.
From equations (1) and (2) it is easy to see that

t_{P_b} − t_V = delay_{P_b} + t_{VP_b} + t_{P_b A} − t_VA.

We let d_b be the probability density function (pdf) of delay_{P_b}, i.e. we consider the delay to be a random variable distributed according to d_b. If hypothesis H_0 holds, then t_P = t_{P_0}, while if hypothesis H_1 holds, then t_P = t_{P_1}. Since t_P and t_V depend on random delays, they can be perceived as random variables. Let

T = t_P − t_V − (t_{VP_0} + t_{P_0 A} − t_VA) and ∆ = (t_{VP_1} + t_{P_1 A}) − (t_{VP_0} + t_{P_0 A}).

Note that whereas the value ∆ is fixed, and even chosen by the adversary, T is a random variable depending on the delays. Indeed, if hypothesis H_0 holds then T = delay_{P_0} has pdf d_0, while if hypothesis H_1 holds, then T = delay_{P_1} + ∆ and we write P[T = t] = d_1(t − ∆), i.e. T has a distribution equivalent to d_1, shifted by the fixed value ∆.
In the following, we often condition success probabilities on hypotheses H_0 and H_1, and use the notation P_{H_b}[event] for P[event | H_b holds], i.e. the probability that event holds, conditioned on the fact that H_b holds.
We consider that A implements a best distinguisher based on the likelihood test P_{H_0}[T = t] ≥ P_{H_1}[T = t] for the observed value t. If this holds, then A outputs 0, else it outputs 1. So A outputs 0 iff the observed value t satisfies d_0(t) ≥ d_1(t − ∆). Then, it holds that

Adv_A = (1/2) ∫ |d_0(t) − d_1(t − ∆)| dt, (3)

where d_0 and d_1 make [0, B] have density at least 1 − ε. When t_{P_0 V} = t_{P_1 V} = t_max, P_0, V and P_1 are aligned in this order and the adversary A overlaps with the location of P_0, then ∆ = 2t_max.

Case 2: The prover P_b starts the distance bounding phase after a reference time t_0 and a random delay (denoted delay_{P_b}), while the verifier V starts after receiving the first message from the prover P_b and a random delay (denoted delay_V). This case is depicted in figure 1 (b). Now, we have:

t_{P_b} = t_0 + delay_{P_b} + t_{P_b A} and t_V = t_0 + delay_{P_b} + t_{VP_b} + delay_V + t_VA.

We let

T = t_V − t_P − (t_{VP_0} + t_VA − t_{P_0 A}) and ∆ = (t_{VP_1} − t_{P_1 A}) − (t_{VP_0} − t_{P_0 A}).

Similarly, if the adversary A implements a distinguisher for the two provers P_0 and P_1, then its advantage is given by

Adv_A = (1/2) ∫ |d(t) − d(t − ∆)| dt, (4)

where d denotes the pdf of the random variable delay_V, such that [0, B] has density at least 1 − ε. When t_{P_0 V} = t_{P_1 V} = t_max, P_0, V and P_1 are aligned and the location of the adversary A overlaps with the location of the prover P_1, then ∆ = 2t_max.

Thus, from equations (3) and (4) we derive that in both cases it holds that

Adv_A = (1/2) ∫ |q_0(t) − q_1(t − ∆)| dt,

for some functions q_0 and q_1 that make [0, B] have density at least 1 − ε, and we further have a case where ∆ = 2t_max. Let x_{b,i} denote the probability mass that q_b assigns to the interval [(i − 1)∆, i∆), for i ∈ {0, . . ., n + 1} with n = ⌈B/∆⌉. We have x_{b,0} = 0, x_{b,i} ≥ 0 and x_{b,1} + · · · + x_{b,n} ≥ 1 − ε.
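For uniform delays, the best distinguisher's advantage has a closed form; the sketch below assumes the total-variation convention for "advantage" (constant factors differ between conventions) and that the delay is uniform on [0, B]:

```python
def advantage_uniform(B: float, delta: float) -> float:
    """Best-distinguisher advantage between U[0, B] and U[delta, B + delta]:
    the densities disagree on two tails of length min(delta, B), each of
    height 1/B, so half the L1 distance is min(delta, B) / B."""
    return min(delta / B, 1.0)

adv = advantage_uniform(B=100.0, delta=4.0)   # small shift, large spread
full = advantage_uniform(B=2.0, delta=5.0)    # disjoint supports: advantage 1
```

This makes the trade-off explicit: the advantage shrinks linearly as the delay spread B grows relative to the geometric shift Delta.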
Given I ⊆ {0, . . ., n}, we let T_I = ∪_{i∈I} [(i − 1)∆, i∆). By comparing the mass that q_0 and the shifted q_1 assign to such unions of intervals, the advantage for shift ∆ can be lower-bounded in terms of the x_{b,i}. Since x_{1,i} ≥ 0 and x_{1,1} + · · · + x_{1,n} ≥ 1 − ε, there exists j such that x_{1,j} ≥ (1 − ε)/n. Thus, there exists a value of ∆ with ∆ ≤ 2t_max for which the adversary A attains a non-negligible advantage.

Case 3: The prover P_b and the verifier V send messages independently. More precisely, the prover P_b starts sending messages after a reference time T_{P_b} and a random delay delay_{P_b}, while the verifier V starts sending messages after a reference time T_V and a random delay delay_V. We assume that for this case the adversary A knows the values T_{P_0}, T_{P_1} and T_V. This case is depicted in figure 1 (c). We now have:

t_V = T_V + delay_V + t_VA and t_{P_b} = T_{P_b} + delay_{P_b} + t_{P_b A}.

We let

T = t_P − t_V − (T_{P_0} − T_V + t_{P_0 A} − t_VA) and ∆ = (T_{P_1} + t_{P_1 A}) − (T_{P_0} + t_{P_0 A}),

so that T = delay_{P_0} − delay_V under H_0, and T = delay_{P_1} − delay_V + ∆ under H_1. We consider that the adversary A implements a best distinguisher based on the likelihood test P_{H_0}[T = t] ≥ P_{H_1}[T = t]. Then, it holds that

Adv_A = (1/2) ∫ |q_0(t) − q_1(t − ∆)| dt, (10)

where q_b for b ∈ {0, 1} denotes the pdf of the random variable delay_{P_b} − delay_V, and the supports of q_0 and q_1 make [−B, B] have density at least 1 − 2ε. When t_{P_0 V} = t_{P_1 V} = t_max and P_0, V and P_1 are aligned in this order, then if T_{P_1} ≥ T_{P_0} the location of the adversary A overlaps with the location of P_0, while if T_{P_1} ≤ T_{P_0} the location of the adversary A overlaps with the location of the prover P_1; thus, in both of these cases it holds that ∆ ≤ 2t_max. Defining x_{b,i} as before, now for i ∈ {−n, . . ., n + 1}, we have x_{b,i} ≥ 0 and x_{b,−n} + · · · + x_{b,n} ≥ 1 − 2ε. Since x_{1,i} ≥ 0 and x_{1,−n} + · · · + x_{1,n} ≥ 1 − 2ε, there exists j such that x_{1,j} ≥ (1 − 2ε)/(2n). Thus, there again exists ∆ ≤ 2t_max for which A attains a non-negligible advantage.

Corollary 1. If Assumption 1 holds, d_b follows the uniform distribution in the range [0, B] and denotes the pdf of delay_{P_b}, and delay_V is always equal to 0, then the best distinguisher based on t_P − t_V and the locations satisfies

Adv_A = min(∆, B)/B ≤ 2t_max/B,

where t_max denotes the maximum allowed transmission time between a legitimate prover P and a verifier V.
Proof. Following the proof of Lemma 1, the best distinguisher based on t_P − t_V and the locations (of the provers and the verifier) follows equations (3), (4) or (10). So, since delay_V = 0 and d_b follows the uniform distribution in the range [0, B], it satisfies

Adv_A = (1/2) ∫ |d_0(t) − d_1(t − ∆)| dt = min(∆, B)/B,

and ∆ is bounded by 2t_max in all three cases.
Practical Consequences Although the attack is polynomial, we can still live with it in practice thanks to the very high speed of light: the time light takes to cover 10 m is roughly 2^{−25} sec. Indeed, the best advantage is comparable to that of guessing h bits correctly. To have a privacy level of h bits (i.e., a best advantage of 2^{−h}), we shall thus have

B ≥ 2^{h+1} · t_max. (11)

For instance, when t_max is the time light takes to travel the distance of 10 m and h = 20 bits (i.e., an adversary cannot distinguish two provers, except with one chance in a million), we have B ≥ 2^{−4} sec, which is still a reasonable delay, though not polynomially bounded due to equation (11).
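The arithmetic above can be checked in a few lines; the exact constant factor depends on the advantage convention and is an assumption of this sketch:

```python
C = 299_792_458.0            # speed of light in m/s
t_max = 10.0 / C             # one-bit flight time over 10 m, ~3.3e-8 s
h = 20                       # target privacy level in bits

# Uniform delays give a best advantage of at most 2 * t_max / B, so h bits
# of privacy require a delay spread of B >= 2**(h + 1) * t_max.
B = 2 ** (h + 1) * t_max     # roughly 2**-4 s, i.e. tens of milliseconds
```

The delay grows exponentially in h, but for h = 20 it remains far below human-perceptible latency budgets for access-control applications.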
However, note that adding such a delay does not immediately guarantee location privacy against any attacker. The delay provably prevents the generic attack we showed, and this attack extends to any passive attacker; however, it is not trivial to determine whether the delay also automatically thwarts active limited-adversary attacks. This issue is left for future work.

Location Private Construction
In this section, we apply the results of the previous section to obtain a location-private distance-bounding protocol for limited adversaries. The proposed protocol is based on the LPDB protocol [24]. We assume that the verifier V and the prover P share a secret key K. As in the LPDB protocol, we have two phases: the initialisation phase and the distance-bounding phase.
- Initialisation Phase: The prover P generates a random nonce N_P and sends it to the verifier V. The verifier V generates a random nonce N_V and sends it to the prover P. Both the prover and the verifier then use the concatenation of the nonces N_P and N_V as input to a keyed pseudorandom function f_K, and divide the output of the PRF into two parts. Furthermore, V generates another random value R_V of length n.
- Distance Bounding Phase: Both the prover P and the verifier V start their actions at a commonly agreed time t.

We should mention here that the security of the proposed protocol conforms with Theorem 2, which has already been proven for the LPDB protocol [24].
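The initialisation phase can be sketched as follows; HMAC-SHA256 stands in for the keyed PRF f_K here, which is an assumption of this sketch rather than the instantiation chosen in [24]:

```python
import hashlib, hmac, os

def initialise(K: bytes, n_p: bytes, n_v: bytes):
    """Both parties derive the same two-part secret from f_K(N_P || N_V)."""
    out = hmac.new(K, n_p + n_v, hashlib.sha256).digest()
    half = len(out) // 2
    return out[:half], out[half:]   # the two parts of the PRF output

K = os.urandom(16)                       # shared secret key
n_p, n_v = os.urandom(8), os.urandom(8)  # exchanged nonces N_P and N_V
part1, part2 = initialise(K, n_p, n_v)   # identical on both sides
```

Since both parties feed the same key and nonces into the PRF, they derive the same split deterministically, with no further communication needed before the timed phase.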
Theorem 2. Assuming that f is a PRF, that R_V is uniformly distributed in a set of exponential size, and that R_P lies in a set of exponential size, the LPDB protocol [24] is a distance-bounding protocol which provides resistance to distance fraud and to mafia fraud.

Conclusions
In this paper, we investigate the problem of location privacy in distance-bounding protocols. More precisely, we define a security game for location privacy in distance-bounding protocols and an adversarial model composed of two classes of adversaries: an omniscient and a limited adversary. We prove that location privacy is theoretically impossible to achieve against any adversary of the two classes. In particular, a generic passive adversary can break the location privacy of any polynomial-time protocol. Nevertheless, we show that for limited adversaries, carefully chosen parameters enable computational, provable location privacy in practice. For those parameters, we propose a location-private distance-bounding protocol based on the LPDB distance-bounding protocol [24].

Fig. 1. Transmission of messages between the verifier and the prover for the three different cases of the construction of a distance-bounding protocol.