All papers in 2019 (1498 results)

Last updated:  2020-01-07
Authenticated Key Distribution: When the Coupon Collector is Your Enemy
Marc Beunardeau, Fatima-Ezzahra El Orche, Diana Maimut, David Naccache, Peter B. Roenne, Peter Y. A. Ryan
We introduce new authenticated key exchange protocols which, on the one hand, do not resort to standard public-key setups with corresponding assumptions of computationally hard problems but, on the other hand, are more efficient than distributing symmetric keys among the participants. To this end, we rely on a trusted central authority distributing key material whose size is independent of the total number of users, and which allows the users to obtain shared secret keys. We analyze the security of our construction taking into account various attack models. Importantly, only symmetric primitives are needed in the protocol, making it an alternative to quantum-safe key exchange protocols which rely on hardness assumptions.
Last updated:  2019-12-30
Supersingular Isogeny-Based Designated Verifier Blind Signature
Rajeev Anand Sahu, Agnese Gini, Ankan Pal
Recently, Srinath and Chandrasekaran have proposed an undeniable blind signature scheme (UBSS) from supersingular isogenies to provide the signer's control in a quantum-resistant blind signature. However, certain weaknesses of undeniable signatures have already been observed and have been overcome by formalizing the designated verifier signature (DVS). In this paper, we explore the possibility of a generic construction of a DVS from hard homogeneous spaces. Further, following this motivation, we realize a quantum-resistant designated verifier blind signature (DVBS) scheme based on supersingular isogenies from the proposed generic construction. In contrast to the UBSS, our construction does not require interactive communication between the signer and the verifier, yet engages the signer in the verification. The compact signature adds further security properties to a quantum-resistant blind signature, making it useful in specific applications including electronic tendering and online auctions.
Last updated:  2019-12-30
Analysis of Modified Shell Sort for Fully Homomorphic Encryption
Joon-Woo Lee, Young-Sik Kim, Jong-Seon No
The Shell sort algorithm is one of the most practically effective sorting algorithms. However, it is difficult to execute this algorithm with its intended running time complexity on data encrypted using fully homomorphic encryption (FHE), because the insertion sort in Shell sort has to be performed by considering the worst-case input data. In this paper, in order for the sorting algorithm to be used on FHE data, we modify the Shell sort with an additional parameter $\alpha$ and a gap sequence of powers of two. The modified Shell sort is shown to have a trade-off between the running time complexity of $O(n^{3/2}\sqrt{\alpha+\log\log n})$ and the sorting failure probability of $2^{-\alpha}$. Its running time complexity is close to the intended running time complexity of $O(n^{3/2})$, and the sorting failure probability can be made very low with a slightly increased running time. Further, the optimal window length of the modified Shell sort is also derived via convex optimization. The proposed analysis of the modified Shell sort is numerically confirmed by using randomly generated arrays. Further, the performance of the modified Shell sort is numerically compared with the case of Ciura's optimal gap sequence and the case of the optimal window length obtained through the convex optimization.
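As background for readers unfamiliar with gap sequences, the following is a minimal Python sketch of plain (unencrypted) Shell sort using a power-of-two gap sequence; the window parameter $\alpha$ and the data-oblivious, FHE-compatible insertion step of the modified algorithm described above are not reproduced here.

    # Toy Shell sort with a power-of-two gap sequence (illustration only; not the
    # paper's FHE-compatible variant with the window parameter alpha).
    def shell_sort_pow2(a):
        n = len(a)
        gap = 1
        while gap * 2 < n:              # largest power of two below n
            gap *= 2
        while gap >= 1:
            for i in range(gap, n):     # gapped insertion sort
                key = a[i]
                j = i
                while j >= gap and a[j - gap] > key:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = key
            gap //= 2
        return a

    print(shell_sort_pow2([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]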
Last updated:  2019-12-30
Improved on Identity-based quantum signature based on Bell states
Chang-Bin Wang, Shu-Mei Hsu, Hsiang Chang, Jue-Sam Chou
In 2020, Xin et al. proposed a new identity-based quantum signature scheme based on Bell states. By using a one-time pad (OTP) for the transfer operations applied on both sides, such as XOR, Hadamard H, and Y, they argued for the security of the proposed scheme. However, after analysis, we found that the scheme can resist neither the existing forgery attack nor a meaningful-message attack. Therefore, we modified their scheme to provide the required security property, unforgeability, which is very important in a quantum signature scheme.
Last updated:  2020-05-15
Tight Security of Cascaded LRW2
Ashwin Jha, Mridul Nandi
At CRYPTO '12, Landecker et al. introduced the cascaded LRW2 (or CLRW2) construction, and proved that it is a secure tweakable block cipher up to roughly $ 2^{2n/3} $ queries. Recently, Mennink presented a distinguishing attack on CLRW2 in $ 2n^{1/2}2^{3n/4} $ queries. In the same paper, he discussed some non-trivial bottlenecks in proving a tight security bound, i.e. security up to $ 2^{3n/4} $ queries. Subsequently, he proved security up to $ 2^{3n/4} $ queries for a variant of CLRW2 using a $ 4 $-wise independent AXU assumption and the restriction that each tweak value occurs at most $ 2^{n/4} $ times. Moreover, his proof relies on a version of mirror theory which is yet to be publicly verified. In this paper, we resolve the bottlenecks in Mennink's approach and prove that the original CLRW2 is indeed a secure tweakable block cipher up to roughly $ 2^{3n/4} $ queries. To do so, we develop two new tools: first, we give a probabilistic result that provides an improved bound on the joint probability of some special collision events; second, we present a variant of Patarin's mirror theory in the tweakable permutation setting with a self-contained and concrete proof. Both results are of a generic nature and can be of independent interest. To demonstrate the applicability of these tools, we also prove tight security up to roughly $ 2^{3n/4} $ queries for a variant of DbHtS, called DbHtS-p, that uses two independent universal hash functions.
Last updated:  2020-06-23
Scaling Verifiable Computation Using Efficient Set Accumulators
Alex Ozdemir, Riad S. Wahby, Barry Whitehat, Dan Boneh
Verifiable outsourcing systems offload a large computation to a remote server, but require that the remote server provide a succinct proof, called a SNARK, that the computation was carried out correctly. Real-world applications of this approach can be found in several blockchain systems that employ verifiable outsourcing to process a large number of transactions off-chain. This reduces the on-chain work to simply verifying a succinct proof that transaction processing was done correctly. In practice, verifiable outsourcing of state updates is done by updating the leaves of a Merkle tree, recomputing the resulting Merkle root, and proving using a SNARK that the state update was done correctly. In this work, we use a combination of existing and novel techniques to implement an RSA accumulator inside of a SNARK, and use it as a replacement for a Merkle tree. We specifically optimize the accumulator for compatibility with SNARKs. Our experiments show that the resulting system reduces costs compared to existing approaches that use Merkle trees for committing to the current state. These results apply broadly to any system that needs to offload batches of state updates to an untrusted server.
Last updated:  2019-12-30
Solving $X^{q+1}+X+a=0$ over Finite Fields
Kwang Ho Kim, Junyop Choe, Sihem Mesnager
Solving the equation $P_a(X):=X^{q+1}+X+a=0$ over the finite field $\mathbb{F}_{Q}$, where $Q=p^n$, $q=p^k$ and $p$ is a prime, arises in many different contexts including finite geometry, the inverse Galois problem \cite{ACZ2000}, the construction of difference sets with Singer parameters \cite{DD2004}, determining the cross-correlation between $m$-sequences \cite{DOBBERTIN2006,HELLESETH2008}, the construction of error-correcting codes \cite{Bracken2009}, as well as speeding up the index calculus method for computing discrete logarithms on finite fields \cite{GGGZ2013,GGGZ2013+} and on algebraic curves \cite{M2014}. Subsequently, in \cite{Bluher2004,HK2008,HK2010,BTT2014,Bluher2016,KM2019,CMPZ2019,MS2019}, the $\mathbb{F}_{Q}$-zeros of $P_a(X)$ have been studied: in \cite{Bluher2004} it was shown that the possible values of the number of zeros that $P_a(X)$ has in $\mathbb{F}_{Q}$ are $0$, $1$, $2$ or $p^{\gcd(n, k)}+1$. Some criteria for the number of $\mathbb{F}_{Q}$-zeros of $P_a(X)$ were found in \cite{HK2008,HK2010,BTT2014,KM2019,MS2019}. However, while the ultimate goal is to identify all the $\mathbb{F}_{Q}$-zeros, even in the case $p=2$ this was solved only under the condition $\gcd(n, k)=1$ \cite{KM2019}. We discuss this equation without any restriction on $p$ and $\gcd(n,k)$. New criteria for the number of $\mathbb{F}_{Q}$-zeros of $P_a(X)$ are proved. For the cases of one or two $\mathbb{F}_{Q}$-zeros, we provide explicit expressions for these rational zeros in terms of $a$. For the case of $p^{\gcd(n, k)}+1$ rational zeros, we provide a parametrization of such $a$'s and express the $p^{\gcd(n, k)}+1$ rational zeros by using that parametrization.
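As a quick, self-contained sanity check of the statement above (a Python brute force, not the paper's method), the following counts the $\mathbb{F}_{Q}$-zeros of $P_a(X)$ for the toy parameters $p=2$, $k=1$, $n=3$, i.e. $X^{3}+X+a$ over $\mathbb{F}_{8}$; the observed numbers of zeros are exactly $0$, $1$, $2$ and $p^{\gcd(n,k)}+1=3$.

    # Brute-force count of F_Q-zeros of P_a(X) = X^(q+1) + X + a for p = 2, k = 1, n = 3,
    # i.e. X^3 + X + a over GF(8) = GF(2)[t]/(t^3 + t + 1).  Illustration only.
    MOD = 0b1011                        # t^3 + t + 1

    def gf8_mul(x, y):
        r = 0
        while y:
            if y & 1:
                r ^= x
            y >>= 1
            x <<= 1
            if x & 0b1000:              # reduce modulo t^3 + t + 1
                x ^= MOD
        return r

    def P(x, a):                        # P_a(x) = x^3 + x + a (addition is XOR in char 2)
        return gf8_mul(gf8_mul(x, x), x) ^ x ^ a

    counts = {a: sum(1 for x in range(8) if P(x, a) == 0) for a in range(8)}
    print(sorted(set(counts.values())))  # -> [0, 1, 2, 3]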
Last updated:  2023-04-04
Too Much Crypto
Jean-Philippe Aumasson
We show that many symmetric cryptography primitives would not be less safe with significantly fewer rounds. To support this claim, we review the cryptanalysis progress in the last 20 years, examine the reasons behind the current number of rounds, and analyze the risk of doing fewer rounds. Advocating a rational and scientific approach to the selection of round numbers, we propose revised numbers of rounds for AES, BLAKE2, ChaCha, and SHA-3, which offer more consistent security margins across primitives and make them much faster, without increasing the security risk.
Last updated:  2019-12-30
Classification of quadratic APN functions with coefficients in GF(2) for dimensions up to 9
Yuyin Yu, Nikolay Kaleyski, Lilya Budaghyan, Yongqiang Li
Almost perfect nonlinear (APN) and almost bent (AB) functions are integral components of modern block ciphers and play a fundamental role in symmetric cryptography. In this paper, we describe a procedure for searching for quadratic APN functions with coefficients in GF(2) over the finite fields GF(2^n) and apply this procedure to classify all such functions over GF(2^n) with n up to 9. We discover two new APN functions (which are also AB) over GF(2^9) that are CCZ-inequivalent to any known APN function over this field. We also verify that there are no quadratic APN functions with coefficients in GF(2) over GF(2^n) with n between 6 and 8 other than the currently known ones.
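As a small illustration of the APN property itself (not of the search procedure described above), the following Python brute force checks directly from the definition that the quadratic function F(x) = x^3, which has coefficients in GF(2), is APN over GF(2^5): for every nonzero a, the derivative x -> F(x+a)+F(x) takes each value at most twice.

    # Direct APN check of F(x) = x^3 over GF(2^5) = GF(2)[t]/(t^5 + t^2 + 1).
    from collections import Counter

    MOD, TOP = 0b100101, 0b100000       # t^5 + t^2 + 1

    def mul(x, y):
        r = 0
        while y:
            if y & 1:
                r ^= x
            y >>= 1
            x <<= 1
            if x & TOP:
                x ^= MOD
        return r

    def F(x):
        return mul(mul(x, x), x)        # the Gold function x^(2^1 + 1) = x^3

    def is_apn(f, n):
        for a in range(1, 1 << n):      # all nonzero a
            derivative = Counter(f(x ^ a) ^ f(x) for x in range(1 << n))
            if max(derivative.values()) > 2:
                return False
        return True

    print(is_apn(F, 5))                 # True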
Last updated:  2020-07-20
Cryptanalysis of The Lifted Unbalanced Oil Vinegar Signature Scheme
Jintai Ding, Joshua Deaton, Kurt Schmidt, Vishakha, Zheng Zhang
In 2017, Ward Beullens \textit{et al.} submitted Lifted Unbalanced Oil and Vinegar (LUOV) \cite{beullens2017field}, a signature scheme based on the famous multivariate public key cryptosystem (MPKC) called Unbalanced Oil and Vinegar (UOV), to the NIST post-quantum public-key standardization process. The defining feature of LUOV is that, though the public key $\mathcal{P}$ works over the degree-$r$ extension field of $\mathbb{F}_2$, the coefficients of $\mathcal{P}$ come from $\mathbb{F}_2$. This is done to significantly reduce the size of $\mathcal{P}$. The LUOV scheme is now in the second round of the NIST PQC standardization process. In this paper we introduce a new attack on LUOV. It exploits the "lifted" structure of LUOV to reduce direct attacks on it to attacks over a subfield. We show that this reduces the complexity below the targeted security levels of the NIST post-quantum standardization process.
Last updated:  2020-10-20
Keep the Dirt: Tainted TreeKEM, Adaptively and Actively Secure Continuous Group Key Agreement
Joël Alwen, Margarita Capretto, Miguel Cueto, Chethan Kamath, Karen Klein, Ilia Markov, Guillermo Pascual-Perez, Krzysztof Pietrzak, Michael Walter, Michelle Yeo
While messaging systems with strong security guarantees are widely used in practice, designing a protocol that scales efficiently to large groups and enjoys similar security guarantees remains largely open. The two existing proposals to date are ART (Cohn-Gordon et al., CCS18) and TreeKEM (IETF, The Messaging Layer Security Protocol, draft). TreeKEM is the candidate currently considered by the IETF MLS working group, but dynamic group operations (i.e. adding and removing users) can cause efficiency issues. In this paper we formalize and analyze a variant of TreeKEM which we term Tainted TreeKEM (TTKEM for short). The basic idea underlying TTKEM was suggested by Millican (MLS mailing list, February 2018). This version is more efficient than TreeKEM for some natural distributions of group operations; we quantify this through simulations. Our second contribution is two security proofs for TTKEM which establish post-compromise security and forward secrecy even against adaptive attackers. If $n$ is the group size and $Q$ the number of operations, the security loss (to the underlying PKE) in the Random Oracle Model is a polynomial factor $(Qn)^2$, and in the Standard Model a quasipolynomial $Q^{\log(n)}$. Our proofs can be adapted to TreeKEM as well. Before our work, no security proof for any TreeKEM-like protocol establishing tight security against an adversary who can adaptively choose the sequence of operations was known. We are also the first to prove (or even formalize) active security, where the server can arbitrarily deviate from the protocol specification. Proving fully active security - where the users can also arbitrarily deviate - remains open.
Last updated:  2019-12-30
Fine-Grained Cryptography Revisited
Shohei Egashira, Yuyu Wang, Keisuke Tanaka
Fine-grained cryptographic primitives are secure against adversaries with bounded resources and can be computed by honest users with less resources than the adversaries. In this paper, we revisit the results by Degwekar, Vaikuntanathan, and Vasudevan in Crypto 2016 on fine-grained cryptography and show constructions of three key fundamental fine-grained cryptographic primitives: one-way permutations, hash proof systems (which in turn imply a public-key encryption scheme secure against chosen-ciphertext attacks), and trapdoor one-way functions. All of our constructions are computable in $\mathsf{NC}^1$ and secure against (non-uniform) $\mathsf{NC}^1$ circuits under the widely believed worst-case assumption $\mathsf{NC}^1 \subsetneq \oplus \mathsf{L/poly}$.
Last updated:  2019-12-30
SNR-Centric Power Trace Extractors for Side-Channel Attacks
Changhai Ou, Degang Sun, Siew-Kei Lam, Xinping Zhou, Kexin Qiao, Qu Wang
Existing power trace extractors consider the case where the number of power traces owned by the attacker is sufficient to guarantee successful attacks, and the goal of power trace extraction is to lower the complexity rather than to increase the success rate. Although they have rigorous theoretical proofs, they are too simple, and the leakage characteristics of the points of interest (POIs) have not been thoroughly analyzed. They only maximize the variance of the data-dependent power consumption component and ignore the noise component, which leaves very limited room to improve the SNR and seriously affects the performance of the extractors. In this paper, we provide a rigorous theoretical analysis of the SNR of power traces, and propose a novel SNR-centric extractor, named Shortest Distance First (SDF), to extract the power traces with the smallest estimated noise by taking advantage of known plaintexts. In addition, to maximize the variance of the exploitable component while minimizing the noise, we refer to the SNR estimation model and propose another novel extractor named Maximizing Estimated SNR First (MESF). Finally, we further propose an advanced extractor called Mean-optimized MESF (MMESF) that exploits the mean power consumption for each plaintext byte value to more accurately and reasonably estimate the data-dependent power consumption of the corresponding samples. Experiments on both simulated power traces and measurements from an ATmega328p micro-controller demonstrate the superiority of our new extractors.
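Since all three extractors revolve around the SNR of power traces, the following Python sketch shows a standard per-sample SNR estimator commonly used in side-channel analysis (variance of the per-plaintext-byte mean traces divided by the average within-group noise variance) on simulated traces; it is only background for the discussion above, not the SDF/MESF/MMESF extractors themselves.

    import numpy as np

    def snr(traces, labels):
        """traces: (N, samples) array; labels: (N,) plaintext byte values."""
        groups = [traces[labels == v] for v in np.unique(labels)]
        means = np.array([g.mean(axis=0) for g in groups])       # signal
        variances = np.array([g.var(axis=0) for g in groups])    # noise
        return means.var(axis=0) / variances.mean(axis=0)

    # Simulated traces: Hamming-weight leakage at sample 100, Gaussian noise elsewhere.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 256, size=5000)
    traces = rng.normal(0.0, 1.0, size=(5000, 200))
    traces[:, 100] += np.array([bin(v).count("1") for v in labels])
    print(snr(traces, labels).argmax())                           # 100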
Last updated:  2020-09-02
RLWE-based Zero-Knowledge Proofs for linear and multiplicative relations
Ramiro Martínez, Paz Morillo
We present efficient Zero-Knowledge Proofs of Knowledge (ZKPoK) for linear and multiplicative relations among secret messages hidden as Ring Learning With Errors (RLWE) samples. Messages are polynomials in $\mathbb{Z}_q[x]/\left<x^{n}+1\right>$ and our proposed protocols for a ZKPoK are based on the celebrated paper by Stern on identification schemes using coding problems (Crypto'93). Our $5$-move protocol achieves a soundness error slightly above $1/2$ and perfect Zero-Knowledge. As an application we present Zero-Knowledge Proofs of Knowledge of relations between committed messages. The resulting commitment scheme is perfectly binding with overwhelming probability over the choice of the public key, and computationally hiding under the RLWE assumption. Compared with previous Stern-based commitment scheme proofs we decrease computational complexity, improve the size of the parameters and reduce the soundness error of each round.
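As background on the underlying algebra (not the protocols themselves), the following Python sketch multiplies two elements of $\mathbb{Z}_q[x]/\left<x^{n}+1\right>$ by schoolbook convolution with the negacyclic reduction $x^n=-1$.

    # Schoolbook multiplication in R_q = Z_q[x]/(x^n + 1); coefficients are listed from
    # degree 0 upwards.  Illustration of the message ring only.
    def ring_mul(f, g, q, n):
        res = [0] * n
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                k = i + j
                if k < n:
                    res[k] = (res[k] + fi * gj) % q
                else:                             # x^(i+j) = -x^(i+j-n)
                    res[k - n] = (res[k - n] - fi * gj) % q
        return res

    # (1 + x) * x^3 = x^3 + x^4 = x^3 - 1 in Z_17[x]/(x^4 + 1)
    print(ring_mul([1, 1, 0, 0], [0, 0, 0, 1], q=17, n=4))        # [16, 0, 0, 1]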
Last updated:  2020-04-28
Implementation of a Strongly Robust Identity-Based Encryption Scheme over Type-3 Pairings
Hiroshi Okano, Keita Emura, Takuya Ishibashi, Toshihiro Ohigashi, Tatsuya Suzuki
Identity-based encryption (IBE) is a powerful mechanism for maintaining security. However, systems based on IBE are unpopular compared with those based on public-key encryption (PKE). In our opinion, one of the reasons is a gap between theory and practice. For example, although a generic transformation that turns any IBE scheme into a weakly/strongly robust one has been proposed by Abdalla et al., no robust IBE scheme is explicitly given. This means that, theoretically, anyone can construct a weakly/strongly robust IBE scheme by employing this transformation. However, this seems not easily done by non-cryptographers. In this paper, we first describe the Gentry IBE scheme constructed over Type-3 pairings by employing the transformation proposed by Abe et al., and second we explicitly give strongly/weakly robust Gentry IBE schemes by employing the Abdalla et al. transformation. Finally, we report implementation results and show that we can add strong robustness to the Gentry IBE scheme at very little additional cost. We employ the mcl library to support a Barreto-Naehrig curve defined over a 462-bit prime. Encryption requires about 5 ms, whereas decryption requires about 9 ms.
Last updated:  2020-01-24
Force-Locking Attack on Sync Hotstuff
Atsuki Momose, Jason Paul Cruz
Blockchain, which realizes state machine replication (SMR), is a fundamental building block of decentralized systems, such as cryptocurrencies and smart contracts. These systems require a consensus protocol in their global-scale, public, and trustless networks. In such an environment, consensus protocols require high resiliency, which is the ability to tolerate a fraction of faulty replicas, and thus synchronous protocols have been gaining significant research attention recently. Abraham et al. proposed a simple and practical synchronous SMR protocol called Sync Hotstuff (to be presented in IEEE S\&P 2020). Sync Hotstuff achieves $2\Delta$ latency, which is near optimal in a synchronous protocol, and its throughput without lock-step execution is comparable to that of partially synchronous protocols. Sync Hotstuff was presented under a standard synchronous model as well as under a weaker, but more realistic, model called the mobile sluggish model. Sync Hotstuff also adopts an optimistic responsive mode, in which the latency is independent of $\Delta$. However, Sync Hotstuff has a critical security vulnerability with which an adversary can conduct double spending or denial-of-service attacks. In this paper, we present an attack we call the force-locking attack on Sync Hotstuff. This attack violates the safety, i.e., consistency of agreements, of the protocol under the standard synchronous model and the liveness, i.e., progress of agreements, of all versions of the protocol, including the mobile sluggish model and responsive mode. The force-locking attack is not only a specific attack on Sync Hotstuff but also applies to some more general blockchain protocols. After describing the attack, we present some refinements to prevent it. Our refinements remove the security vulnerability of Sync Hotstuff without any performance compromise. We also provide formal security proofs for each model.
Last updated:  2021-08-03
Communication--Computation Trade-offs in PIR
Asra Ali, Tancrède Lepoint, Sarvar Patel, Mariana Raykova, Phillipp Schoppmann, Karn Seth, Kevin Yeo
We study the computation and communication costs and their possible trade-offs in various constructions for private information retrieval (PIR), including schemes based on homomorphic encryption and the Gentry--Ramzan PIR (ICALP'05). We improve over the construction of SealPIR (S&P'18) using compression techniques and a new oblivious expansion, which reduce the communication bandwidth by 60% while preserving essentially the same computation cost. We then present MulPIR, a PIR protocol leveraging multiplicative homomorphism to implement the recursion steps in PIR. This eliminates the exponential dependence of PIR communication on the recursion depth due to the ciphertext expansion, at the cost of an increased computational cost for the server. Additionally, MulPIR outputs a regular homomorphic encryption ciphertext, which can be homomorphically post-processed. As a side result, we describe how to do conjunctive and disjunctive PIR queries. On the other end of the communication--computation spectrum, we take a closer look at Gentry--Ramzan PIR, a scheme with asymptotically optimal communication rate. Here, the bottleneck is the server's computation, which we manage to reduce significantly. Our optimizations enable a tunable trade-off between communication and computation, which allows us to reduce server computation by as much as 85%, at the cost of an increased query size. We further show how to efficiently construct PIR for sparse databases. Our constructions support batched queries, as well as symmetric PIR. We implement all of our PIR constructions, and compare their communication and computation overheads with respect to each other for several application scenarios.
Last updated:  2020-07-16
Transparent Polynomial Delegation and Its Applications to Zero Knowledge Proof
Jiaheng Zhang, Tiancheng Xie, Yupeng Zhang, Dawn Song
We present a new succinct zero knowledge argument scheme for layered arithmetic circuits without trusted setup. The prover time is $O(C + n \log n)$ and the proof size is $O(D \log C + \log^2 n)$ for a $D$-depth circuit with $n$ inputs and $C$ gates. The verification time is also succinct, $O(D \log C + \log^2 n)$, if the circuit is structured. Our scheme only uses lightweight cryptographic primitives such as collision-resistant hash functions and is plausibly post-quantum secure. We implement a zero knowledge argument system, Virgo, based on our new scheme and compare its performance to existing schemes. Experiments show that it only takes 53 seconds to generate a proof for a circuit computing a Merkle tree with 256 leaves, at least an order of magnitude faster than all other succinct zero knowledge argument schemes. The verification time is 50 ms, and the proof size is 253 KB, both competitive with existing systems. Underlying Virgo is a new transparent zero knowledge verifiable polynomial delegation scheme with logarithmic proof size and verification time. The scheme is in the interactive oracle proof model and may be of independent interest.
Last updated:  2020-10-19
On metric regularity of Reed-Muller codes
Alexey Oblaukhov
In this work we study metric properties of the well-known family of binary Reed-Muller codes. Let $A$ be an arbitrary subset of the Boolean cube, and $\widehat{A}$ be the metric complement of $A$ --- the set of all vectors of the Boolean cube at the maximal possible distance from $A$. If the metric complement of $\widehat{A}$ coincides with $A$, then the set $A$ is called a metrically regular set. The problem of investigating metrically regular sets appeared when studying bent functions, which have important applications in cryptography and coding theory and are also one of the earliest examples of a metrically regular set. In this work we describe metric complements and establish the metric regularity of the codes $\mathcal{RM}(0,m)$ and $\mathcal{RM}(k,m)$ for $k \geqslant m-3$. Additionally, the metric regularity of the codes $\mathcal{RM}(1,5)$ and $\mathcal{RM}(2,6)$ is proved. Combined with previous results by Tokareva N. (2012) concerning duality of affine and bent functions, this establishes the metric regularity of most Reed-Muller codes with known covering radius. It is conjectured that all Reed-Muller codes are metrically regular.
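The definitions above are easy to check by brute force on toy examples; the following Python snippet computes the metric complement of $A=\{000,111\}$ (the code $\mathcal{RM}(0,3)$) in the Boolean cube $\{0,1\}^3$ and confirms that this set is metrically regular. It only illustrates the definitions, not the proofs in the paper.

    from itertools import product

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def metric_complement(A, n):
        cube = list(product([0, 1], repeat=n))
        dist_to_A = {v: min(hamming(v, a) for a in A) for v in cube}
        r = max(dist_to_A.values())               # covering radius of A
        return {v for v in cube if dist_to_A[v] == r}

    n = 3
    A = {(0, 0, 0), (1, 1, 1)}                    # the repetition code RM(0,3)
    A_hat = metric_complement(A, n)
    print(sorted(A_hat))                          # the six vectors of weight 1 or 2
    print(metric_complement(A_hat, n) == A)       # True: A is metrically regular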
Last updated:  2019-12-23
Analogue of Vélu's Formulas for Computing Isogenies over Hessian Model of Elliptic Curves
Fouazou Lontouo Perez Broon, Emmanuel Fouotsa
Vélu's formulas for computing isogenies over the Weierstrass model of elliptic curves have been extended to other models of elliptic curves such as the Huff model, the Edwards model and the Jacobi model. This work continues this line of research by providing efficient formulas for computing isogenies over elliptic curves in Hessian form. We provide explicit formulas for computing isogenies of degree 3 and isogenies of degree l not divisible by 3. The theoretical cost of computing these maps is in this case slightly lower than for other curve models. We also extend the formulas to obtain isogenies over twisted and generalized Hessian forms of elliptic curves. The formulas in this work have been verified with the Sage software and are faster than previous results on the same curve.
Last updated:  2019-12-23
A New Encoding Framework for Predicate Encryption with Non-Linear Structures in Prime Order Groups
Jongkil Kim, Willy Susilo, Fuchun Guo, Joonsang Baek, Nan Li
We present an advanced encoding framework for predicate encryption (PE) in prime order groups. Our framework captures a wider range of adaptively secure PE schemes, such as non-monotonic attribute-based encryption, by allowing PE schemes to have more flexible structures. Prior to our work, frameworks featuring adaptively secure PE schemes in prime order groups required strong structural restrictions on the schemes. In those frameworks, exponents of public keys and master secret keys of PE schemes, which are also referred to as common variables, must be linear. In our work, we introduce a modular framework which includes non-linear common variables in PE schemes. First, we formalize non-linear structures which can appear in PE by improving Attrapadung's pair encoding framework (Eurocrypt'14). Then, we provide a generic compiler that turns encodings under our framework into PE schemes in prime order groups. In particular, the security of our compiler is proved by introducing a new technique which decomposes common variables into two types and lets one of them be shared between the semi-functional and normal spaces in the dual system encryption methodology, in order to mitigate the linear restriction. As instances of our new framework, we introduce new attribute-based encryption schemes supporting non-monotonic access structures, namely non-monotonic ABE, in prime order groups. We introduce adaptively secure non-monotonic ABE schemes having either short ciphertexts (if KP-ABE) or short keys (if CP-ABE) for the first time. Additionally, we introduce the first non-monotonic ABE schemes supporting both adaptive security and the multi-use of attributes property in prime order groups.
Last updated:  2019-12-23
Leakage Detection with Kolmogorov-Smirnov Test
Xinping Zhou, Kexin Qiao, Changhai Ou
Leakage detection looks for evidence of sensitive data dependencies in side-channel traces, instead of trying to recover the sensitive data directly with the enormous effort of numerous leakage models and state-of-the-art distinguishers, and can thus provide designers and evaluators with a fast preliminary security assessment of cryptographic devices. It is therefore a popular topic in recent side-channel research, in which the Welch's $t$-test-based Test Vector Leakage Assessment (TVLA) methodology is the most widely used approach. However, TVLA is not always the best option under all conditions (as we show in a later section of this paper). The Kolmogorov-Smirnov (KS) test is a well-known nonparametric method of statistical analysis that determines whether two samples come from the same distribution by analyzing their cumulative distributions. It has previously been introduced into side-channel analysis as a successful distinguisher. This paper proposes---to our knowledge, for the first time---the Kolmogorov-Smirnov test as a new method for leakage detection. Besides, we propose two implementations to speed up the KS leakage detection procedure. Experimental results on simulated leakage with various parameters and on practical traces verify that KS is an effective and robust leakage detection tool, and a comprehensive comparison with TVLA shows that KS-based leakage detection can be a useful complement to TVLA when performing side-channel assessments.
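As a sketch of the idea (not of the two speed-up implementations proposed in the paper), a fixed-vs-random style leakage check can be run per time sample with scipy's two-sample Kolmogorov-Smirnov test; the traces, detection threshold and leaking sample below are simulated and purely hypothetical.

    import numpy as np
    from scipy.stats import ks_2samp

    def ks_leakage_detection(traces_fixed, traces_random, alpha=1e-5):
        """Return the sample indices whose fixed/random distributions differ significantly."""
        n_samples = traces_fixed.shape[1]
        pvals = np.array([ks_2samp(traces_fixed[:, t], traces_random[:, t]).pvalue
                          for t in range(n_samples)])
        return np.where(pvals < alpha)[0]

    # Simulated example: sample 50 carries a small mean shift, all other samples are noise.
    rng = np.random.default_rng(1)
    fixed = rng.normal(0, 1, (2000, 100))
    fixed[:, 50] += 0.3
    random_ = rng.normal(0, 1, (2000, 100))
    print(ks_leakage_detection(fixed, random_))   # typically [50]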
Last updated:  2019-12-23
Kilroy was here: The First Step Towards Explainability of Neural Networks in Profiled Side-channel Analysis
Daan van der Valk, Stjepan Picek, Shivam Bhasin
While several works have explored the application of deep learning for efficient profiled side-channel analysis, explainability---in other words, what neural networks learn---remains a rather untouched topic. As a first step, this paper explores the Singular Vector Canonical Correlation Analysis (SVCCA) tool to interpret what neural networks learn while training on different side-channel datasets, by concentrating on the deep layers of the network. Information from SVCCA can help, to an extent, with several practical problems in profiled side-channel analysis, such as the portability issue, criteria for choosing the number of layers/neurons to fight portability, insight into the correct size of the training dataset, and the detection of deceptive conditions like over-specialization of networks.
Last updated:  2020-07-13
On the Performance of Multilayer Perceptron in Profiling Side-channel Analysis
Leo Weissbart
In profiling side-channel analysis, machine learning-based analysis nowadays offers the most powerful performance. This holds especially for techniques stemming from the neural network family: multilayer perceptrons and convolutional neural networks. Convolutional neural networks are often favored as results suggest better performance, especially in scenarios where targets are protected with countermeasures. The multilayer perceptron receives significantly less attention, and researchers seem less interested in this method, narrowing the results in the literature to comparisons with convolutional neural networks. On the other hand, a multilayer perceptron has a much simpler structure, enabling easier hyperparameter tuning and, hopefully, contributing to the explainability of this neural network's inner workings. We investigate the behavior of a multilayer perceptron in the context of the side-channel analysis of AES. By exploring the sensitivity of the multilayer perceptron hyperparameters to the attack's performance, we aim to provide a better understanding of successful hyperparameter tuning and, ultimately, this algorithm's performance. Our results show that an MLP (with proper hyperparameter tuning) can easily break implementations with a random delay or masking countermeasures. This work aims to reiterate the power of simpler neural network techniques in profiled SCA.
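A minimal sketch of a profiled attack with a multilayer perceptron in scikit-learn is shown below; the traces, labels and hyperparameter values are hypothetical placeholders and only indicate the kind of hyperparameters (layer sizes, activation, learning rate) whose sensitivity is explored above.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    X_prof = rng.normal(size=(5000, 100))         # profiling traces (hypothetical shape)
    y_prof = rng.integers(0, 256, size=5000)      # labels, e.g. S-box output byte values
    X_att = rng.normal(size=(1000, 100))          # attack traces

    mlp = MLPClassifier(hidden_layer_sizes=(200, 200), activation="relu",
                        learning_rate_init=1e-3, max_iter=50)
    mlp.fit(X_prof, y_prof)
    probas = mlp.predict_proba(X_att)             # per-trace label likelihoods
    # In a real attack these probabilities are accumulated into a key ranking.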
Last updated:  2020-06-24
On the Security of Sponge-type Authenticated Encryption Modes
Bishwajit Chakraborty, Ashwin Jha, Mridul Nandi
The sponge duplex is a popular mode of operation for constructing authenticated encryption schemes. In fact, one can assess the popularity of this mode from the fact that around $ 25 $ out of the $ 56 $ round 1 submissions to the ongoing NIST lightweight cryptography (LwC) standardization process are based on this mode. Among these, $14$ sponge-type constructions were selected for the second round, which consists of $32$ submissions. In this paper, we generalize the duplexing interface of the duplex mode, which we call Transform-then-Permute. It encompasses Beetle as well as a new sponge-type mode SpoC (both are round 2 submissions to NIST LwC). We show a tight security bound for Transform-then-Permute based on a $b$-bit permutation, which reduces to finding an exact estimation of the expected number of multi-chains (defined in this paper). As a corollary of our general result, the authenticated encryption advantage of Beetle and SpoC is about $\frac{T(D+r2^r)}{2^b}$, where $T$, $D$ and $r$ denote the number of offline queries (related to the time complexity of the attack), the number of construction queries (related to the data complexity) and the rate of the construction (related to efficiency). Previously the same bound had been proved for Beetle under the limitation that $ T \ll \min\{2^r, 2^{b/2}\} $ (which compels choosing a larger permutation with a higher rate). In the context of the NIST LwC requirements, SpoC based on a $ 192 $-bit permutation achieves the desired security with a $ 64 $-bit rate, which is achieved by neither the duplex nor Beetle (as per the previous analysis).
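To get a rough feel for the bound stated above, the following plugs illustrative (hypothetical, not mandated) query budgets into $\frac{T(D+r2^r)}{2^b}$ for SpoC-like parameters $b=192$ and $r=64$.

    # Numerical reading of the AE advantage bound T*(D + r*2^r)/2^b.
    from math import log2

    def ae_advantage(T, D, r, b):
        return T * (D + r * 2**r) / 2**b

    b, r = 192, 64
    T, D = 2**112, 2**50                          # hypothetical offline/online budgets
    print(f"advantage ~ 2^{log2(ae_advantage(T, D, r, b)):.1f}")   # about 2^-10.0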
Last updated:  2020-04-16
Remove Some Noise: On Pre-processing of Side-channel Measurements with Autoencoders
Lichao Wu, Stjepan Picek
In profiled side-channel analysis, deep learning-based techniques proved to be very successful even when attacking targets protected with countermeasures. Still, there is no guarantee that deep learning attacks will always succeed. Various countermeasures make attacks significantly more complicated, and those countermeasures can be further combined to make the attacks even more challenging. An intuitive solution to improve the performance of attacks would be to reduce the effect of countermeasures. In this paper, we investigate whether we can consider certain types of hiding countermeasures as noise and then use a deep learning technique called the denoising autoencoder to remove that noise. We conduct a detailed analysis of five different types of noise and countermeasures, either separately or combined, and show that in all scenarios, the denoising autoencoder improves the attack performance significantly.
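A minimal denoising-autoencoder sketch in Keras is given below; the architecture, trace length and training data are hypothetical and only illustrate the general pre-processing step (train on noisy/reference trace pairs, then denoise the traces before the attack), not the exact models used in the paper.

    import numpy as np
    from tensorflow.keras import layers, models

    trace_len = 700                               # hypothetical trace length
    dae = models.Sequential([
        layers.Dense(256, activation="relu", input_shape=(trace_len,)),
        layers.Dense(64, activation="relu"),      # bottleneck
        layers.Dense(256, activation="relu"),
        layers.Dense(trace_len, activation="linear"),
    ])
    dae.compile(optimizer="adam", loss="mse")

    # Training pairs: noisy traces as input, clean/reference traces as target.
    noisy = np.random.normal(size=(5000, trace_len)).astype("float32")
    clean = np.random.normal(size=(5000, trace_len)).astype("float32")
    dae.fit(noisy, clean, epochs=10, batch_size=128, verbose=0)
    denoised = dae.predict(noisy)                 # traces then fed to the actual attack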
Last updated:  2020-04-09
Splitting the Interpose PUF: A Novel Modeling Attack Strategy
Nils Wisiol, Christopher Mühl, Niklas Pirnay, Phuong Ha Nguyen, Marian Margraf, Jean-Pierre Seifert, Marten van Dijk, Ulrich Rührmair
We demonstrate that the Interpose PUF proposed at CHES 2019, an Arbiter PUF based design for so-called Strong Physical Unclonable Functions (PUFs), can be modeled by novel machine learning strategies up to very substantial sizes and complexities. Our attacks require in the most difficult cases considerable, but realistic, numbers of CRPs, while consuming only moderate computation times, ranging from few seconds to few days. The attacks build on a new divide-and-conquer approach that allows us to model the two building blocks of the Interpose PUF separately. For non-reliability based Machine Learning (ML) attacks, this eventually leads to attack times on \((k_\text{up},k_\text{down})\)-Interpose PUFs that are comparable to the ones against \(\max\{k_\text{up}, k_\text{down}\}\)-XOR Arbiter PUFs, refuting the original claim that Interpose PUFs provide security similar to $(k_\text{down}+\frac{k_\text{up}}{2})$-XOR Arbiter PUFs (CHES 2019). On the technical side, our novel divide-and-conquer technique might also be useful in analyzing other designs where XOR Arbiter PUF challenge bits are unknown to the attacker.
Last updated:  2019-12-23
Efficient Fully Secure Leakage-Deterring Encryption
Jan Camenisch, Maria Dubovitskaya, Patrick Towa
Encryption is an indispensable tool for securing digital infrastructures as it reduces the problem of protecting the data to just protecting decryption keys. Unfortunately, this also makes it easier for users to share protected data by simply sharing decryption keys. Kiayias and Tang (ACM CCS 2013) were the first to address this important issue pre-emptively rather than a posteriori like traitor tracing schemes do. They proposed leakage-deterring encryption schemes that work as follows. For each user, a piece of secret information valuable to her is embedded into her public key. As long as she does not share her ability to decrypt with someone else, her secret is safe. As soon as she does, her secret is revealed to her beneficiaries. However, their solution suffers from serious drawbacks: (1) their model requires a fully-trusted registration authority that is privy to user secrets; (2) it only captures a CPA-type of privacy for user secrets, which is a very weak guarantee; (3) in their construction which turns any public-key encryption scheme into a leakage-deterring one, the new public keys consist of linearly (in the bit-size of the secrets) many public keys of the original scheme, and the ciphertexts are large. In this paper, we redefine leakage-deterring schemes. We remove the trust in the authority and guarantee full protection of user secrets under CCA attacks. Furthermore, in our construction, all keys and ciphertexts are short and constant in the size of the secrets. We achieve this by taking a different approach: we require users to periodically refresh their secret keys by running a protocol with a third party. Users do so anonymously, which ensures that they cannot be linked, and that the third party cannot perform selective failure attacks. We then leverage this refresh protocol to allow for the retrieval of user secrets in case they share their decryption capabilities. This refresh protocol also allows for the revocation of user keys and for the protection of user secrets in case of loss or theft of a decryption device. We provide security definitions for our new model as well as efficient instantiations that we prove secure.
Last updated:  2019-12-23
A Privacy-Enhancing Framework for Internet of Things Services
Lukas Malina, Gautam Srivastava, Petr Dzurenda, Jan Hajny, Sara Ricci
The world has seen an influx of connected devices through both smart devices and smart cities, paving the path forward for the Internet of Things (IoT). These emerging intelligent infrastructures and applications based on IoT can be beneficial to users only if essential private and secure features are assured. However, with constrained devices being the norm in IoT, security and privacy are often minimized. In this paper, we first categorize various existing privacy-enhancing technologies (PETs) and assess their suitability for privacy-requiring services within IoT. We also categorize potential privacy risks, threats, and leakages related to various IoT use cases. Furthermore, we propose a simple novel privacy-preserving framework based on a set of suitable privacy-enhancing technologies in order to maintain security and privacy within IoT services. Our study can serve as a baseline of privacy-by-design strategies applicable to IoT based services, with a particular focus on smart things, such as safety equipment.
Last updated:  2020-11-16
PESTO: Proactively Secure Distributed Single Sign-On, or How to Trust a Hacked Server
Carsten Baum, Tore K. Frederiksen, Julia Hesse, Anja Lehmann, Avishay Yanai
Single Sign-On (SSO) is becoming an increasingly popular authentication method for users that leverages a trusted Identity Provider (IdP) to bootstrap secure authentication tokens from a single user password. It alleviates some of the worst security issues of passwords, as users no longer need to memorize individual passwords for all service providers, and it removes the burden on these services to properly protect huge password databases. However, SSO also introduces a single point of failure. If compromised, the IdP can impersonate all users and learn their master passwords. To remedy this risk while preserving the advantages of SSO, Agrawal et al. (CCS'18) recently proposed a distributed realization termed PASTA (password-authenticated threshold authentication) which splits the role of the IdP across $n$ servers. While PASTA is a great step forward and guarantees security as long as not all servers are corrupted, it uses a rather inflexible corruption model: servers cannot be corrupted adaptively and --- even worse --- cannot recover from corruption. The latter is known as proactive security and allows servers to re-share their keys, thereby rendering all previously compromised information useless. In this work, we improve upon the work of PASTA and propose a distributed SSO protocol with proactive and adaptive security (PESTO), guaranteeing security as long as not all servers are compromised at the same time. We prove our scheme secure in the UC framework which is known to provide the best security guarantees for password-based primitives. The core of our protocol are two new primitives we introduce: partially-oblivious distributed PRFs and a class of distributed signature schemes. Both allow for non-interactive refreshes of the secret key material and tolerate adaptive corruptions. We give secure instantiations based on the gap one-more BDH and RSA assumptions, respectively, leading to a highly efficient 2-round PESTO protocol. We also present an implementation and benchmark of our scheme in Java, realizing OAuth-compatible bearer tokens for SSO, demonstrating the viability of our approach.
Last updated:  2019-12-23
The Influence of LWE/RLWE Parameters on the Stochastic Dependence of Decryption Failures
Georg Maringer, Tim Fritzmann, Johanna Sepúlveda
Learning with Errors (LWE) and Ring-LWE (RLWE) problems allow the construction of efficient key exchange and public-key encryption schemes. However, while the security improves through the use of error distributions with large standard deviations, the decryption failure rate increases as well. Currently, the independence of individual coefficient failures is assumed in order to estimate the overall decryption failure rate of many LWE/RLWE schemes. However, previous work has shown that this assumption is not correct. This assumption leads to wrong estimates of the decryption failure probability and consequently of the security level of the LWE/RLWE cryptosystem. An exploration of the influence of the LWE/RLWE parameters on the stochastic dependence among the coefficients is still missing. In this paper, we propose a method to analyze the stochastic dependence between decryption failures in LWE/RLWE cryptosystems. We present two main contributions. First, we use statistical methods to analyze the influence of fixing the norm of the error distribution on the stochastic dependence among decryption failures. The results have shown that fixing the norm of the error distribution indeed reduces the stochastic dependence of decryption failures. Therefore, the independence assumption gives a very close approximation to the true behavior of the cryptosystem. Second, we analyze and explore the influence of the LWE/RLWE parameters on the stochastic dependence. This exploration gives designers of LWE/RLWE based schemes the opportunity to compare different schemes with respect to the inaccuracy made by using the independence assumption. This work shows that the stochastic dependence depends on three LWE/RLWE parameters in different ways: i) it increases with higher lattice dimensions ($n$) and higher standard deviations of the error distribution ($\sqrt{k/2}$); and ii) it decreases with a higher modulus ($q$).
Last updated:  2019-12-23
A New Trapdoor over Module-NTRU Lattice and its Application to ID-based Encryption
Jung Hee Cheon, Duhyeong Kim, Taechan Kim, Yongha Son
A trapdoor over NTRU lattices proposed by Ducas, Lyubashevsky and Prest (ASIACRYPT 2014) has been widely used in various cryptographic primitives such as identity-based encryption (IBE) and digital signatures, due to its high efficiency compared to previous lattice trapdoors. However, most applications use this trapdoor with power-of-two cyclotomic rings, and hence to obtain a higher security level one has to double the ring dimension, which results in a huge loss of efficiency. In this paper, we give a new way to overcome this problem by introducing a generalized notion of NTRU lattices which we call \emph{Module-NTRU} (MNTRU) lattices, and show how to efficiently generate a trapdoor over MNTRU lattices. Moreover, beyond giving parameter flexibility, we further show that the Gram-Schmidt norm of the trapdoor can be made as small as about $q^{1/d}$, where MNTRU covers the cases $d \ge 2$ and includes NTRU as the $d = 2$ case. Since the efficiency of trapdoor-based IBE is closely related to the Gram-Schmidt norm of the trapdoor, our trapdoor over MNTRU lattices yields a more efficient IBE scheme than the previously best one of Ducas, Lyubashevsky and Prest, while providing the same security level.
Last updated:  2019-12-23
Distributed Web Systems Leading to Hardware Oriented Cryptography and Post-Quantum Cryptologic Methodologies
Andrew M. K. Nassief
Distributed computational networks allow for effective hardware encryption systems and the rise of quantum-level encryption as well as qubit-based processing. Part of the reason distributed architecture can lead to qubit-level encryption is that similar mechanisms apply to cryptographic hashing. In the work presented in this paper, we look at the decentralized-internet SDK and protocol, grid computing architecture, and mathematical approaches to parallel qubit-based processing. The utilization of hardware-oriented cryptography, modeled around distributed computing, will allow for an even more secure approach to quantum authentication. The importance of works such as these is due to the lack of security classical computing has in relation to encryption. Once mathematical formalities surpass NP-hardness, classical encryption mechanisms can be easily surpassed. However, a latent model for increased complexity in post-quantum level encryption likely forbids this trade-off. Given that quantum algorithms speed up superpolynomially, deterministic NP-hardness would likely pose less harm to quantum encryption networks. Furthermore, with qubit-based parallel processing, complexity models for encryption can harden in difficulty over time.
Last updated:  2019-12-23
A Note on the Instantiability of the Quantum Random Oracle
Edward Eaton, Fang Song
In a highly influential paper from fifteen years ago, Canetti, Goldreich, and Halevi showed a fundamental separation between the Random Oracle Model (ROM) and the Standard Model. They constructed a signature scheme which can be shown to be secure in the ROM, but is insecure when instantiated with any hash function (and thus insecure in the standard model). In 2011, Boneh et al. defined the notion of the Quantum Random Oracle Model (QROM), where queries to the random oracle may be made in quantum superposition. Because the QROM generalizes the ROM, a proof of security in the QROM is stronger than one in the ROM. This leaves open the possibility that security in the QROM could imply security in the standard model. In this work, we show that this is not the case, and that security in the QROM cannot imply standard model security. We do this by showing that the original schemes that show a separation between the standard model and the ROM are also secure in the QROM. We consider two schemes that establish such a separation, one with length-restricted messages, and one without, and show both to be secure in the QROM. Our results give further understanding to the landscape of proofs in the ROM versus the QROM or standard model, and point towards the QROM and ROM being much closer to each other than either is to standard model security.
Last updated:  2019-12-18
An optimist's Poisson model of cryptanalysis
Daniel R. L. Brown
Simplistic assumptions, modeling attack discovery by a Poisson point process, lead to quantifiable statistical estimates for security assurances, supporting the wisdom that more independent effort spent on cryptanalysis leads to better security assurance, but hinting that security assurance also relies significantly upon general optimism. The estimates also suggest somewhat better security assurance from compounding two independent cryptosystems, but perhaps not enough to outweigh the extra cost.
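(As a generic illustration of this style of modeling, not of the paper's exact estimates: if attack discoveries arrive as a Poisson process with unknown rate $\lambda$ per unit of independent cryptanalytic effort, the probability that no attack is found during an effort $E$ is $e^{-\lambda E}$, so surviving a large amount of scrutiny makes large values of $\lambda$ implausible, while whatever assurance remains still rests on an optimistic prior belief about $\lambda$.)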
Last updated:  2020-06-16
New Techniques for Zero-Knowledge: Leveraging Inefficient Provers to Reduce Assumptions and Interaction
Marshall Ball, Dana Dachman-Soled, Mukul Kulkarni
We present a transformation from NIZK with inefficient provers in the uniform random string (URS) model to ZAPs (two message witness indistinguishable proofs) with inefficient provers. While such a transformation was known for the case where the prover is efficient, the security proof breaks down if the prover is inefficient. Our transformation is obtained via new applications of Nisan-Wigderson designs, a combinatorial object originally introduced in the derandomization literature. We observe that our transformation is applicable both in the setting of super-polynomial provers/poly-time adversaries, as well as a new fine-grained setting, where the prover is polynomial time and the verifier/simulator/zero knowledge distinguisher are in a lower complexity class, such as $\mathsf{NC}^1$. We also present $\mathsf{NC}^1$-fine-grained NIZK in the URS model for all of $\mathsf{NP}$ from the worst-case assumption $\oplus \mathsf{L/poly} \not\subseteq \mathsf{NC}^1$. Our techniques yield the following applications: 1. ZAPs for $\mathsf{AM}$ from Minicrypt assumptions (with super-polynomial time provers), 2. $\mathsf{NC}^1$-fine-grained ZAPs for $\mathsf{NP}$ from worst-case assumptions, 3. Protocols achieving an "offline" notion of NIZK (oNIZK) in the standard (no-CRS) model with uniform soundness in both the super-polynomial setting (from Minicrypt assumptions) and the $\mathsf{NC}^1$-fine-grained setting (from worst-case assumptions). The oNIZK notion is sufficient for use in indistinguishability-based proofs.
Last updated:  2019-12-18
Rescuing Logic Encryption in Post-SAT Era by Locking & Obfuscation
Amin Rezaei, Yuanqi Shen, Hai Zhou
The active participation of external entities in the manufacturing flow has produced numerous hardware security issues, of which piracy and overproduction are likely the most ubiquitous and expensive ones. The main approach to preventing unauthorized products from functioning is logic encryption, which inserts key-controlled gates into the original circuit in such a way that the valid behavior of the circuit only occurs when the correct key is applied. The challenge for the security designer is to ensure that neither the correct key nor the original circuit can be revealed by different analyses of the encrypted circuit. However, in state-of-the-art logic encryption works, a lot of performance is sacrificed to guarantee security against powerful logic and structural attacks. This contradicts the primary purpose of logic encryption, which is to protect a precious design from being pirated and overproduced. In this paper, we propose a bilateral logic encryption platform that maintains a high degree of security with small circuit modifications. The robustness against exact and approximate attacks is also demonstrated.
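As a toy illustration of the key-controlled gate idea underlying logic encryption in general (not of the bilateral platform proposed here), the following Python snippet inserts a single XNOR key gate on the output of a tiny circuit, so that the original behavior is recovered only under the correct key bit.

    # Toy logic encryption: an XNOR key gate on the output of out = a AND b.
    def original(a, b):
        return a & b

    def encrypted(a, b, k):
        # XNOR with the key bit: transparent for the correct key k = 1,
        # output inverted for the wrong key k = 0.
        return (a & b) ^ k ^ 1

    assert all(encrypted(a, b, 1) == original(a, b) for a in (0, 1) for b in (0, 1))
    assert any(encrypted(a, b, 0) != original(a, b) for a in (0, 1) for b in (0, 1))
    print("correct key recovers the original behaviour")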
Last updated:  2019-12-20
Privacy-preserving greater-than integer comparison without binary decomposition
Sigurd Eskeland
Common to the overwhelming majority of privacy-preserving greater-than integer comparison schemes is that cryptographic computations are conducted in a bitwise manner. To ensure secrecy, each bit must be encoded in such a way that nothing is revealed to the opposite party. The most noted disadvantage is that the computational and communication cost of the bitwise encoding is at best linear in the number of bits. Also, many proposed schemes have complex designs that may be difficult to implement and are not intuitive. Carlton et al. proposed in 2018 an interesting scheme that avoids bitwise decomposition and works on whole integers. It uses a special composite RSA modulus. A variant was proposed by Bourse et al. in 2019. In this paper, we show that in particular the Bourse scheme does not provide the claimed security. Inspired by the two mentioned papers, we propose a comparison scheme with a somewhat simpler construction and with clear security reductions.
Last updated:  2019-12-18
Cryptanalysis of two recently proposed PUF based authentication protocols for IoT: PHEMAP and Salted PHEMAP
Morteza Adeli, Nasour Bagheri
The Internet of Things (IoT) consists of a large number of interconnected, coexisting heterogeneous entities, including radio-frequency identification (RFID) based devices and other sensors that detect and transfer various information such as temperature, personal health data, brightness, etc. Security, and in particular authentication, is one of the most important parts of the information security infrastructure in IoT systems. Given that an IoT system has many resource-constrained devices, a goal could be to design a proper authentication protocol that is lightweight and can resist various common attacks targeting such devices. Recently, using Physical Unclonable Functions (PUFs) to design lightweight authentication protocols has received a lot of attention among researchers. In this paper, we analyze two recently proposed authentication protocols based on PUF chains, called PHEMAP and Salted PHEMAP. We show that these protocols are vulnerable to impersonation, desynchronization and traceability attacks.
Last updated:  2020-09-12
Byzantine Fault Tolerance in Partially Synchronous Networks
Yongge Wang
The problem of Byzantine Fault Tolerance (BFT) in partially synchronous networks has received a lot of attention in the last 30 years. There are two types of widely accepted definitions of partially synchronous networks. This paper shows that several widely deployed BFT protocols would reach deadlocks in the widely accepted Type II partially synchronous networks (that is, they will not achieve the liveness property). Based on an analysis of the BFT security requirements for partially synchronous networks, this paper proposes a BFT protocol BDLS and proves its security in partially synchronous networks. It is shown that BDLS is one of the most efficient BFT protocols in partially synchronous networks. Specifically, during synchrony, with threshold digital signature schemes, BDLS participants can reach agreement in 4 steps with linear communication/authenticator complexity. It is noted that the best existing protocols with linear communication/authenticator complexity require at least 7 steps to achieve agreement. The BDLS protocol could be used in several application scenarios such as state machine replication or as a blockchain finality gadget. The Go-language implementation of the BDLS protocol can be found at https://github.com/Sperax/bdls.
Last updated:  2020-07-16
Side Channel Information Set Decoding using Iterative Chunking
Norman Lahr, Ruben Niederhagen, Richard Petri, Simona Samardjiska
This paper presents an attack based on side-channel information and Information Set Decoding (ISD) on the Niederreiter cryptosystem and an evaluation of the practicality of the attack using an electromagnetic side channel. First, we describe a basic plaintext-recovery attack on the decryption algorithm of the Niederreiter cryptosystem. In case the cryptosystem is used as a Key-Encapsulation Mechanism (KEM) in a key exchange, the plaintext corresponds to a session key. Our attack is an adaptation of the timing side-channel plaintext-recovery attack by Shoufan et al. from 2010 on the McEliece cryptosystem, which used the non-constant-time Patterson decoding algorithm, to the Niederreiter cryptosystem, which uses the constant-time Berlekamp-Massey decoding algorithm. We then enhance our attack by utilizing an ISD approach to support the basic attack, and we introduce iterative column chunking to further significantly reduce the number of required side-channel measurements. We theoretically show that our attack improvements have a significant impact on reducing the number of required side-channel measurements. Our practical evaluation of the attack targets the FPGA implementation of the Niederreiter cryptosystem in the NIST submission "Classic McEliece" with a constant-time decoding algorithm and is feasible for all proposed parameter sets of this submission. For example, for the 256-bit-security parameter set kem/mceliece6960119, we improve the basic attack that requires 5415 measurements to an average of about 560 measurements to mount a successful plaintext-recovery attack. Further reductions can be achieved at an increasing cost of the ISD computations.
Last updated:  2020-04-02
Out-of-Band Authenticated Group Key Exchange: From Strong Authentication to Immediate Key Delivery
Moni Naor, Lior Rotem, Gil Segev
Given the inherent ad-hoc nature of popular communication platforms, out-of-band authenticated key-exchange protocols are becoming widely deployed: Key exchange protocols that enable users to detect man-in-the-middle attacks by manually authenticating one short value. In this work we put forward the notion of immediate key delivery for such protocols, requiring that even if some users participate in the protocol but do not complete it (e.g., due to losing data connectivity or to other common synchronicity issues), then the remaining users should still agree on a shared secret. A property of a similar flavor was introduced by Alwen, Coretti and Dodis (EUROCRYPT '19) asking for immediate decryption of messages in user-to-user messaging while assuming that a shared secret has already been established -- but the underlying issue is crucial already during the initial key exchange and goes far beyond the context of messaging. Equipped with our immediate key delivery property, we formalize strong notions of security for out-of-band authenticated group key exchange, and demonstrate that the existing protocols either do not satisfy our notions of security or are impractical (these include, in particular, the protocols deployed by Telegram, Signal and WhatsApp). Then, based on the existence of any passively-secure key-exchange protocol (e.g., the Diffie-Hellman protocol), we construct an out-of-band authenticated group key-exchange protocol satisfying our notions of security. Our protocol is inspired by techniques that have been developed in the context of fair string sampling in order to minimize the effect of adversarial aborts, and offers the optimal tradeoff between the length of its out-of-band value and its security.
Last updated:  2022-02-25
Fast and Secure Updatable Encryption
Colin Boyd, Gareth T. Davies, Kristian Gjøsteen, Yao Jiang
Updatable encryption allows a client to outsource ciphertexts to some untrusted server and periodically rotate the encryption key. The server can update ciphertexts from an old key to a new key with the help of an update token, received from the client, which should not reveal anything about keys or plaintexts to an adversary. We provide a new and highly efficient suite of updatable encryption schemes that we collectively call SHINE. In the variant designed for short messages, ciphertext generation consists of applying one permutation and one exponentiation (per message block), while updating ciphertexts requires just one exponentiation. Variants for longer messages provide much stronger security guarantees than prior work that has comparable efficiency. We present a new confidentiality notion for updatable encryption schemes that implies prior notions. We prove that SHINE is secure under our new confidentiality definition while also providing ciphertext integrity.
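The key-rotation mechanic described here (encrypt a block with one exponentiation, rotate a ciphertext with a single exponentiation by a token derived from the old and new keys) can be illustrated with a toy Python sketch over a small prime-order group. This is only an illustration of the exponentiation step with our own naming; it omits the permutation, the nonce handling and all security-critical details of the actual SHINE schemes.

    import secrets

    # Toy prime-order group: the order-1019 subgroup of Z_2039^* (2039 = 2*1019 + 1).
    # A real deployment would use a cryptographically sized group.
    p, q = 2039, 1019
    g = 4                                   # a generator of the order-q subgroup

    def keygen():
        return secrets.randbelow(q - 1) + 1

    def encrypt(k, m_elem):                 # m_elem is already encoded as a group element
        return pow(m_elem, k, p)            # the "one exponentiation" per block

    def decrypt(k, c):
        return pow(c, pow(k, -1, q), p)

    def update_token(k_old, k_new):
        return (k_new * pow(k_old, -1, q)) % q

    def update(c, token):                   # server-side rotation: one exponentiation
        return pow(c, token, p)

    k1, k2 = keygen(), keygen()
    m = pow(g, 123, p)                      # toy message encoded in the group
    c1 = encrypt(k1, m)
    c2 = update(c1, update_token(k1, k2))   # rotate from k1 to k2 without decrypting
    assert decrypt(k2, c2) == m

The token only relates the two epoch keys; the confidentiality notion discussed in the abstract asks that such tokens reveal nothing about keys or plaintexts.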
Last updated:  2024-11-01
ModFalcon: compact signatures based on module NTRU lattices
Chitchanok Chuengsatiansup, Thomas Prest, Damien Stehlé, Alexandre Wallet, and Keita Xagawa
Lattices lead to promising practical post-quantum digital signatures, combining asymptotic efficiency with strong theoretical security guarantees. However, tuning their parameters into practical instantiations is a delicate task. On the one hand, NIST round 2 candidates based on Lyubashevsky's design (such as Dilithium and qTesla) allow several tradeoffs between security and efficiency, but at the expense of a large bandwidth consumption. On the other hand, the hash-and-sign falcon signature is much more compact and is still very efficient, but it allows only two security levels, with large compactness and security gaps between them. We introduce a new family of signature schemes based on the Falcon design, which relies on module lattices. Our concrete instantiation enjoys the compactness and efficiency of Falcon, and allows an intermediate security level. It leads to the most compact lattice-based signature achieving a quantum security above 128 bits.
Last updated:  2019-12-19
Generic Construction of Server-Aided Revocable Hierarchical Identity-Based Encryption with Decryption Key Exposure Resistance
Yanyan Liu, Yiru Sun
In this paper, we extend the notion of server-aided revocable identity-based encryption (SR-IBE) to the hierarchical IBE (HIBE) setting and propose a generic construction of server-aided revocable hierarchical IBE (SR-HIBE) schemes with decryption key exposure resistance (DKER) from any (weak) L-level revocable HIBE scheme without DKER and any (L+1)-level HIBE scheme. In order to realize the server-aided revocation mechanism, we use the “double encryption” technique, which gives our construction a short ciphertext size. Furthermore, when the maximum hierarchical depth is one, we obtain a generic construction of SR-IBE schemes with DKER from any IBE scheme and any two-level HIBE scheme.
Last updated:  2019-12-18
Practical Relativistic Zero-Knowledge for NP
Claude Crépeau, Arnaud Massenet, Louis Salvail, Lucas Stinchcombe, Nan Yang
In this work we consider the following problem: in a Multi-Prover environment, how close can we get to proving the validity of an NP statement in Zero-Knowledge? We exhibit a set of two novel Zero-Knowledge protocols for the 3-COLorability problem that use two (local) provers or three (entangled) provers and only require them to reply with two trits each. This greatly improves the ability to prove Zero-Knowledge statements over very short distances with very minimal equipment.
Last updated:  2019-12-17
Saber on ESP32
Bin Wang, Xiaozhuo Gu, Yingshan Yang
Saber, a CCA-secure lattice-based post-quantum key encapsulation scheme, is one of the second-round candidate algorithms in the post-quantum cryptography standardization process of the US National Institute of Standards and Technology (NIST) in 2019. In this work, we provide an efficient implementation of Saber on ESP32, an embedded microcontroller designed for IoT environments with WiFi and Bluetooth support. An RSA coprocessor was used to speed up the polynomial multiplications for a Kyber variant in a CHES 2019 paper. We propose an improved implementation utilizing the big-integer coprocessor for the polynomial multiplications in Saber, which incurs significantly lower software overhead and takes better advantage of the big-integer coprocessor on ESP32. By using the fast implementation of polynomial multiplications, our single-core implementation of Saber takes 1639K, 2123K and 2193K clock cycles on ESP32 for key generation, encapsulation and decapsulation, respectively. Benefiting from the dual-core feature of ESP32, we further speed up the implementation of Saber by rearranging the computation steps and assigning proper tasks to the two cores executing in parallel. Our dual-core implementation takes 1176K, 1625K and 1514K clock cycles for key generation, encapsulation and decapsulation, respectively.
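One standard way to map polynomial multiplication onto a big-integer multiplier is Kronecker substitution: pack the coefficients of each polynomial into one large integer, multiply once, and unpack. The Python sketch below illustrates the idea for a Saber-like ring; the exact packing, slicing and carry handling used on the ESP32 coprocessor may well differ, and the function name and padding width are our own.

    def kronecker_poly_mul(a, b, q, pad_bits):
        """Multiply two length-n polynomials with coefficients mod q by packing them
        into big integers, then reduce the product modulo x^n + 1 and q."""
        n = len(a)
        A = sum(ai << (i * pad_bits) for i, ai in enumerate(a))
        B = sum(bi << (i * pad_bits) for i, bi in enumerate(b))
        C = A * B                                    # one big-integer multiplication
        mask = (1 << pad_bits) - 1
        coeffs = [(C >> (i * pad_bits)) & mask for i in range(2 * n - 1)]
        res = [0] * n
        for i, ci in enumerate(coeffs):              # negacyclic reduction mod x^n + 1
            if i < n:
                res[i] = (res[i] + ci) % q
            else:
                res[i - n] = (res[i - n] - ci) % q
        return res

    # Saber-like toy call: n = 256, q = 2^13; pad_bits must exceed log2(n * q^2) = 34.
    c = kronecker_poly_mul([1] * 256, [3] * 256, 1 << 13, 40)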
Last updated:  2019-12-16
Leakage-Resilient Lattice-Based Partially Blind Signatures
D. Papachristoudis, D. Hristu-Varsakelis, F. Baldimtsi, G. Stephanides
Blind signature schemes (BSS) play a pivotal role in privacy-oriented cryptography. However, with blind signature schemes, the signed message remains unintelligible to the signer, giving them no guarantee that the blinded message they signed actually contained valid information. Partially blind signature schemes (PBSS) were introduced to address precisely this problem. In this paper we present the first leakage-resilient, lattice-based partially blind signature scheme in the literature. Our construction is provably secure in the random oracle model (ROM) and offers quasilinear complexity w.r.t. key/signature sizes and signing speed. In addition, it offers statistical partial blindness, and its unforgeability is based on the computational hardness of worst-case ideal lattice problems for approximation factors in $\tilde{O}(n^4)$ in dimension $n$. Our scheme benefits from the subexponential hardness of ideal lattice problems and remains secure even if a $(1-o(1))$ fraction of the signer’s secret key leaks to an adversary via arbitrary side-channels. Several extensions of the security model, such as honest-user unforgeability and selective failure blindness, are also considered and concrete parameters for instantiation are proposed.
Last updated:  2020-01-09
Tight bound on NewHope failure probability
Thomas Plantard, Arnaud Sipasseuth, Willy Susilo, Vincent Zucca
The NewHope Key Encapsulation Mechanism (KEM) was presented at USENIX 2016 by Alkim et al. and is one of the remaining lattice-based candidates in the post-quantum standardization initiated by NIST. However, despite the relative simplicity of the protocol, the bound on the decapsulation failure probability resulting from the original analysis is not tight. In this work we refine this analysis to get a tight upper bound on this probability, which happens to be much lower than what was originally evaluated. As a consequence we propose a set of alternative parameters, increasing the security and the compactness of the scheme. However, using a smaller modulus prevents the use of a full NTT algorithm to perform multiplications of elements in dimension 512 or 1024. Nonetheless, similarly to previous works, we combine different multiplication algorithms and show that our new parameters are competitive in a constant-time vectorized implementation. Our most compact parameters bring a speed-up of 17% (resp. 11%) in performance, save more than 19% in bandwidth requirements, and increase the security by 10% (resp. 7%) in dimension 512 (resp. 1024).
Last updated:  2019-12-16
Extractors for Adversarial Sources via Extremal Hypergraphs
Eshan Chattopadhyay, Jesse Goodman, Vipul Goyal, Xin Li
Randomness extraction is a fundamental problem that has been studied for over three decades. A well-studied setting assumes that one has access to multiple independent weak random sources, each with some entropy. However, this assumption is often unrealistic in practice. In real life, natural sources of randomness can produce samples with no entropy at all or with unwanted dependence. Motivated by this and applications from cryptography, we initiate a systematic study of randomness extraction for the class of adversarial sources defined as follows. A weak source $\mathbf{X}$ of the form $\mathbf{X}_1,...,\mathbf{X}_N$, where each $\mathbf{X}_i$ is on $n$ bits, is an $(N,K,n,k)$-source of locality $d$ if the following hold: (1) Somewhere good sources: at least $K$ of the $\mathbf{X}_i$'s are independent, and each contains min-entropy at least $k$. We call these $\mathbf{X}_i$'s good sources, and their locations are unknown. (2) Bounded dependence: each remaining (bad) source can depend arbitrarily on at most $d$ good sources. We focus on constructing extractors with negligible error, in the regime where most of the entropy is contained within a few sources instead of across many (i.e., $k$ is at least polynomial in $K$). In this setting, even for the case of $0$-locality, very little is known prior to our work. For $d \geq 1$, essentially no previous results are known. We present various new extractors for adversarial sources in a wide range of parameters, and some of our constructions work for locality $d = K^{\Omega(1)}$. As an application, we also give improved extractors for small-space sources. The class of adversarial sources generalizes several previously studied classes of sources, and our explicit extractor constructions exploit tools from recent advances in extractor machinery, such as two-source non-malleable extractors and low-error condensers. Thus, our constructions can be viewed as a new application of non-malleable extractors. In addition, our constructions combine the tools from extractor theory in a novel way through various sorts of explicit extremal hypergraphs. These connections leverage recent progress in combinatorics, such as improved bounds on cap sets and explicit constructions of Ramsey graphs, and may be of independent interest.
Last updated:  2019-12-16
Formalising Oblivious Transfer in the Semi-Honest and Malicious Model in CryptHOL
David Butler, David Aspinall, Adria Gascon
Multi-Party Computation (MPC) allows multiple parties to compute a function together while keeping their inputs private. Large-scale implementations of MPC protocols are becoming practical, so it is important to have strong guarantees for the whole development process, from the underlying cryptography to the implementation. Computer-aided proofs are a way to provide such guarantees. We use CryptHOL to formalise a framework for reasoning about two-party protocols using the security definitions for MPC. In particular we consider protocols for 1-out-of-2 Oblivious Transfer ($OT^1_2$) --- a fundamental MPC protocol --- in both the semi-honest and malicious models. We then extend our semi-honest formalisation to $OT^1_4$, which is a building block for our proof of security for the two-party GMW protocol --- a protocol that can securely compute any Boolean circuit. The semi-honest $OT^1_2$ protocol we formalise is constructed from Extended Trapdoor Permutations (ETPs); we first prove the general construction secure and then instantiate it for the RSA collection of functions --- a known ETP. Our general proof assumes only the existence of ETPs, meaning any instantiated results come without needing to prove any security properties, only that the requirements of an ETP are met.
Last updated:  2020-04-15
Investigating Profiled Side-Channel Attacks Against the DES Key Schedule
Johann Heyszl, Katja Miller, Florian Unterstein, Marc Schink, Alexander Wagner, Horst Gieser, Sven Freud, Tobias Damm, Dominik Klein, Dennis Kügler
Recent publications describe profiled side-channel attacks (SCAs) against the DES key-schedule of a “commercially available security controller”. They report a significant reduction of the average remaining entropy of cryptographic keys after the attack, with large, key-dependent variations and results as low as a few bits using only a single attack trace. Unfortunately, they leave important questions unanswered: Is the reported wide distribution of results plausible? Are the results device-specific or more general? What is the impact on the security of 3-key triple DES? In this contribution, we systematically answer those and several other questions. We also analyze two commercial security controllers, reproducing the reported results while explaining details of the algorithmic choices. We verified the overall reduction and large variations in single-DES key security levels (49.4 bit mean and 0.9 % of keys < 40 bit) and observe a fraction of keys with exceptionally low security levels, called weak keys. A simplified simulation of device leakage shows that the distribution of security levels is predictable to some extent given a leakage model. We generalize the results to other leakage models by attacking the hardware DES accelerator of a general-purpose microcontroller. We conclude that weaker keys are mainly caused by switching noise, which is always present in template attacks on any key-schedule, regardless of the algorithm and implementation. Further, we describe a sound approach to estimate 3-key triple-DES security levels from empirical single-DES results and find that the impact on the security of 3-key triple-DES is limited (96.1 bit mean and 0.24 % of key-triples < 80 bit).
Last updated:  2020-02-06
Benchmarking Post-Quantum Cryptography in TLS
Christian Paquin, Douglas Stebila, Goutam Tamvada
Post-quantum cryptographic primitives have a range of trade-offs compared to traditional public key algorithms, either having slower computation or larger public keys and ciphertexts/signatures, or both. While the performance of these algorithms in isolation is easy to measure and has been a focus of optimization techniques, performance in realistic network conditions has been less studied. Google and Cloudflare have reported results from running experiments with post-quantum key exchange algorithms in the Transport Layer Security (TLS) protocol with real users' network traffic. Such experiments are highly realistic, but cannot be replicated without access to Internet-scale infrastructure, and do not allow for isolating the effect of individual network characteristics. In this work, we develop and make use of a framework for running such experiments in TLS cheaply by emulating network conditions using the networking features of the Linux kernel. Our testbed allows us to independently control variables such as link latency and packet loss rate, and then examine the impact on TLS connection establishment performance of various post-quantum primitives, specifically hybrid elliptic curve/post-quantum key exchange and post-quantum digital signatures, based on implementations from the Open Quantum Safe project. Among our key results, we observe that packet loss rates above 3-5% start to have a significant impact on post-quantum algorithms that fragment across many packets, such as those based on unstructured lattices. The results from this emulation framework are also complemented by results on the latency of loading entire web pages over TLS in real network conditions, which show that network latency hides most of the impact of algorithms with slower computations (such as supersingular isogenies).
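The emulation idea can be reproduced in miniature with the Linux netem queueing discipline, one of the kernel facilities commonly used for this kind of testbed. The sketch below shows how one might dial in the latency and loss-rate variables that the paper controls independently; the interface name and parameter values are placeholders, root privileges are required, and the paper's actual framework is more elaborate than this.

    import subprocess

    def set_network_conditions(dev: str, delay_ms: int, loss_pct: float) -> None:
        """Apply emulated latency and packet loss to a network interface via tc/netem."""
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", dev, "root", "netem",
             "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
            check=True,
        )

    # e.g., add 100 ms of delay and 3% packet loss on a virtual test interface
    set_network_conditions("veth0", 100, 3.0)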
Last updated:  2019-12-12
Boolean functions for homomorphic-friendly stream ciphers
Claude Carlet, Pierrick Méaux
The proliferation of small embedded devices having growing but still limited computing and data storage facilities, and the related development of cloud services with extensive storage and computing means, nowadays raise new privacy issues because of the outsourcing of data processing. This has led to a need for symmetric cryptosystems suited for hybrid symmetric-FHE encryption protocols, ensuring the practicability of the FHE solution. Recent ciphers meant for such use have been introduced, such as LowMC, Kreyvium, FLIP, and Rasta. The introduction of stream ciphers devoted to symmetric-FHE frameworks, such as FLIP and its recent modification, has in turn posed new problems on the Boolean functions to be used in them as filter functions. We recall the state of the art in this matter and present further studies (without proof).
Last updated:  2020-11-19
Rosita: Towards Automatic Elimination of Power-Analysis Leakage in Ciphers
Madura A Shelton, Niels Samwel, Lejla Batina, Francesco Regazzoni, Markus Wagner, Yuval Yarom
Since their introduction over two decades ago, side-channel attacks have presented a serious security threat. While many ciphers' implementations employ masking techniques to protect against such attacks, they often leak secret information due to unintended interactions in the hardware. We present Rosita, a code rewrite engine that uses a leakage emulator which we amend to correctly emulate the micro-architecture of a target system. We use Rosita to automatically protect masked implementations of AES, ChaCha, and Xoodoo. For AES and Xoodoo, we show the absence of observable leakage at 1,000,000 traces with less than 21% penalty to the performance. For ChaCha, which has significantly more leakage, Rosita eliminates over 99% of the leakage, at a performance cost of 64%.
Last updated:  2019-12-12
Compact Storage of Superblocks for NIPoPoW Applications
Kostis Karantias, Aggelos Kiayias, Nikos Leonardos, Dionysis Zindros
Blocks in proof-of-work (PoW) blockchains satisfy the PoW equation $H(B) \leq T$. If additionally a block satisfies $H(B) \leq T2^{-\mu}$, it is called a $\mu$-superblock. Superblocks play an important role in the construction of compact blockchain proofs which allows the compression of PoW blockchains into so-called Non-Interactive Proofs of Proof-of-Work (NIPoPoWs). These certificates are essential for the construction of superlight clients, which are blockchain wallets that can synchronize exponentially faster than traditional SPV clients. In this work, we measure the distribution of superblocks in the Bitcoin blockchain. We find that the superblock distribution within the blockchain follows expectation, hence we empirically verify that the distribution of superblocks within the Bitcoin blockchain has not been adversarially biased. NIPoPoWs require that each block in a blockchain points to a sample of previous blocks in the blockchain. These pointers form a data structure called the interlink. We give efficient ways to store the interlink data structure. Repeated superblock references within an interlink can be omitted with no harm to security. Hence, it is more efficient to store a set of superblocks rather than a list. We show that, in honest executions, this simple observation reduces the number of superblock references by approximately a half in expectation. We then verify our theoretical result by measuring the improvement over existing blockchains in terms of the interlink sizes (which we improve by $79\%$) and the sizes of succinct NIPoPoWs (which we improve by $25\%$). As such, we show that deduplication allows superlight clients to synchronize $25\%$ faster.
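For concreteness, the superblock level of a block is the largest $\mu$ with $H(B) \leq T2^{-\mu}$, which a toy Python function can compute directly (single SHA-256 here for brevity; Bitcoin actually hashes headers with double SHA-256):

    import hashlib

    def superblock_level(block_header: bytes, target: int) -> int:
        """Return the largest mu such that H(B) <= target * 2^(-mu)."""
        h = int.from_bytes(hashlib.sha256(block_header).digest(), "big")
        if h > target:
            return -1               # does not even satisfy the basic PoW equation
        if h == 0:
            return 256              # degenerate case
        mu = 0
        while h <= target >> (mu + 1):
            mu += 1
        return mu

Since a valid block is a $\mu$-superblock with probability $2^{-\mu}$, about half of the blocks are expected at level at least 1, a quarter at level at least 2, and so on; this is the expected distribution that the measurements in the paper confirm, and it is also why storing the interlink as a set of distinct superblocks rather than one pointer per level removes roughly half of the references.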
Last updated:  2019-12-12
CAS-Unlock: Unlocking CAS-Lock without Access to a Reverse-Engineered Netlist
Abhrajit Sengupta, Ozgur Sinanoglu
CAS-Lock (cascaded locking) is a SAT-resilient locking technique that can simultaneously thwart SAT and bypass attacks while maintaining non-trivial output corruptibility. Despite all of its theoretical guarantees, in this report we expose a serious flaw in its design that can be exploited to break CAS-Lock. Further, this attack neither requires access to a reverse-engineered netlist, nor does it require a working oracle with the correct key loaded onto the chip's memory. We demonstrate that we can activate any CAS-Locked IC without knowing the secret key.
Last updated:  2021-08-03
Server-Aided Revocable Identity-Based Encryption Revisited
Fei Meng
Efficient user revocation has always been a challenging problem in identity-based encryption (IBE). Boldyreva et al. (CCS 2008) first proposed and formalized the notion of revocable IBE (RIBE) based on a tree-based revocation method. In their scheme, each user is required to store a number of long-term secret keys and all non-revoked users have to communicate with the key generation center periodically to update their decryption keys. To reduce the workload on the user side, Qin et al. (ESORICS 2015) proposed a new system model, server-aided revocable IBE (SR-IBE). In the SR-IBE model, each user is required to keep only one private key $\mathsf{Priv}_{\mathsf{id}}$ and does not need to communicate with the key generation center or the server during key updating. However, in their security model, the challenge identity $\mathsf{id}^*$ must be revoked once the private key $\mathsf{Priv}_{\mathsf{id}^*}$ is revealed to the adversary. This is too restrictive, since decrypting a ciphertext requires both the private key $\mathsf{Priv}_{\mathsf{id}}$ and the long-term transformation key $\mathsf{sk}_{\mathsf{id}}$. In this paper, we first revisit Qin et al.'s security model and propose a stronger one called SSR-sID-CPA security. Specifically, $\mathsf{id}^*$ is revoked only when both $\mathsf{sk}_{\mathsf{id}^*}$ and $\mathsf{Priv}_{\mathsf{id}^*}$ are revealed, and the adversary is allowed to access a short-term transformation key oracle. We also prove that Qin et al.'s scheme is insecure under our new security model. Second, we construct a lattice-based SR-IBE scheme based on Katsumata's RIBE scheme (PKC 2019), and show that our lattice-based SR-IBE scheme is SSR-sID-CPA secure. Finally, we propose a generic construction of SR-IBE schemes by combining a RIBE scheme and a 2-level HIBE scheme. The security of the generic SR-IBE scheme inherits those of the underlying building blocks.
Last updated:  2019-12-12
A Code-specific Conservative Model for the Failure Rate of Bit-flipping Decoding of LDPC Codes with Cryptographic Applications
Paolo Santini, Alessandro Barenghi, Gerardo Pelosi, Marco Baldi, Franco Chiaraluce
Characterizing the decoding failure rate of iteratively decoded Low- and Moderate-Density Parity Check (LDPC/MDPC) codes is paramount to building cryptosystems based on them that achieve indistinguishability under adaptive chosen-ciphertext attacks. In this paper, we provide a statistical worst-case analysis of our proposed iterative decoder, obtained through a simple modification of the classic in-place bit-flipping decoder. This worst-case analysis allows us both to derive the worst-case behavior of an LDPC/MDPC code picked among the family with the same length, rate and number of parity checks, and to obtain a code-specific bound on the decoding failure rate. The former result allows us to build a code-based cryptosystem enjoying the $\delta$-correctness property required by IND-CCA2 constructions, while the latter result allows us to discard code instances which may have a decoding failure rate significantly different from the average one (i.e., representing weak keys), should they be picked during the key generation procedure.
Last updated:  2020-10-21
Winkle: Foiling Long-Range Attacks in Proof-of-Stake Systems
Sarah Azouvi, George Danezis, Valeria Nikolaenko
Winkle protects any validator-based byzantine fault tolerant consensus mechanisms, such as those used in modern Proof-of-Stake blockchains, against long-range attacks where old validators’ signature keys get compromised. Winkle is a decentralized secondary layer of client-based validation, where a client includes a single additional field into a transaction that they sign: a hash of the previously sequenced block. The block that gets a threshold of signatures (confirmations) weighted by clients’ coins is called a “confirmed” checkpoint. We show that under plausible and flexible security assumptions about clients the confirmed checkpoints can not be equivocated. We discuss how client key rotation increases security, how to accommodate for coins’ minting and how delegation allows for faster checkpoints. We evaluate checkpoint latency experimentally using Bitcoin and Ethereum transaction graphs, with and without delegation of stake.
Last updated:  2019-12-10
Cryptanalysis of a pairing-free certificate-based proxy re-encryption scheme for secure data sharing in public clouds
S. Sharmila Deva Selvi, Irene Miriam Isaac, C. Pandu Rangan
Proxy re-encryption (PRE) is a primitive that is used to facilitate secure access delegation in the cloud. Proxy re-encryption allows a proxy server to transform ciphertexts encrypted under one user's public key to ciphertexts under another user's public key without learning anything about the underlying message or the secret key. Over the years, proxy re-encryption schemes have been proposed in different settings. In this paper we restrict our analysis to certificate-based proxy re-encryption. The first CCA-secure certificate-based PRE without bilinear pairings was proposed by Lu and Li in Future Generation Computer Systems, 2016. In this paper we present a concrete attack on their scheme and prove that it is not CCA secure.
Last updated:  2020-11-30
A new method for Searching Optimal Differential and Linear Trails in ARX Ciphers
Zhengbin Liu, Yongqiang Li, Lin Jiao, Mingsheng Wang
In this paper, we propose an automatic tool to search for optimal differential and linear trails in ARX ciphers. It's shown that a modulo addition can be divided into sequential small modulo additions with carry bit, which turns an ARX cipher into an S-box-like cipher. From this insight, we introduce the concepts of carry-bit-dependent difference distribution table (CDDT) and carry-bit-dependent linear approximation table (CLAT). Based on them, we give efficient methods to trace all possible output differences and linear masks of a big modulo addition, with returning their differential probabilities and linear correlations simultaneously. Then an adapted Matsui's algorithm is introduced, which can find the optimal differential and linear trails in ARX ciphers. Besides, the superiority of our tool's potency is also confirmed by experimental results for round-reduced versions of HIGHT and SPECK. More specifically, we find the optimal differential trails for up to 10 rounds of HIGHT, reported for the first time. We also find the optimal differential trails for 10, 12, 16, 8 and 8 rounds of SPECK32/48/64/96/128, and report the provably optimal differential trails for SPECK48 and SPECK64 for the first time. The optimal linear trails for up to 9 rounds of HIGHT are reported for the first time, and the optimal linear trails for 22, 13, 15, 9 and 9 rounds of SPECK32/48/64/96/128 are also found respectively. These results evaluate the security of HIGHT and SPECK against differential and linear cryptanalysis. Also, our tool is useful to estimate the security in the design of ARX ciphers.
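The decomposition that the tool starts from, writing a wide modular addition as a chain of narrow additions joined by a carry bit, is easy to make concrete; the Python sketch below uses an illustrative 4-bit chunk width of our own choosing and is not the tool itself.

    W = 4                                   # chunk width in bits (illustrative)
    MASK = (1 << W) - 1

    def chunked_add(x: int, y: int, n_bits: int) -> int:
        """Compute (x + y) mod 2^n_bits as sequential W-bit additions with a carry
        bit passed between chunks, the view that turns ARX into an S-box-like cipher."""
        out, carry = 0, 0
        for i in range(n_bits // W):
            s = ((x >> (i * W)) & MASK) + ((y >> (i * W)) & MASK) + carry
            out |= (s & MASK) << (i * W)
            carry = s >> W
        return out

    assert chunked_add(0xDEADBEEF, 0x12345678, 32) == (0xDEADBEEF + 0x12345678) & 0xFFFFFFFF

Tables such as the CDDT and CLAT are then built for these small carry-aware additions rather than for the full-width addition.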
Last updated:  2020-04-22
Reverse Outsourcing: Reduce the Cloud's Workload in Outsourced Attribute-Based Encryption Scheme
Fei Meng, Mingqiang Wang
Attribute-based encryption (ABE) is a cryptographic technique known for ensuring fine-grained access control on encrypted data. One of the main drawbacks of ABE is that the time required to decrypt a ciphertext is considerable, since it grows with the complexity of the access policy. Green et al. [USENIX, 2011] provided the outsourced ABE scheme, in which most of the computational overhead of ciphertext decryption is outsourced from the end user to the cloud. However, their method inevitably increases the computational burden of the cloud. While millions of users are enjoying cloud computing services simultaneously, this may cause huge congestion and latency. In this paper, we propose a heuristic primitive called reverse outsourcing to reduce the cloud's workload. Specifically, the cloud is allowed to transform the ciphertext decryption outsourced by the end user into several computing tasks and dispatch them to idle users, who have smart devices connected to the internet but not in use. These devices can provide computing resources for the cloud, just as if the cloud hired many employees to complete the computing work. Besides, the computing results returned by the idle users must be verified by the cloud. We propose a reverse-outsourced CP-ABE scheme in the rational idle user model, where idle users are rewarded by the cloud after returning correct computing results and prefer to get rewards rather than to save resources. Using a Nash equilibrium argument, we prove that the best strategy for idle users is to follow our protocol honestly, because the probability of deceiving the cloud with incorrect computing results is negligible. Therefore, in our scheme, most of the computational overhead of ciphertext decryption is shifted from the cloud to idle users, leaving a constant number of operations for the cloud.
Last updated:  2020-02-19
Algebraic and Euclidean Lattices: Optimal Lattice Reduction and Beyond
Paul Kirchner, Thomas Espitau, Pierre-Alain Fouque
We introduce a framework generalizing lattice reduction algorithms to module lattices in order to practically and efficiently solve the $\gamma$-Hermite Module-SVP problem over arbitrary cyclotomic fields. The core idea is to exploit the structure of the subfields for designing a doubly-recursive strategy of reduction: both recursive in the rank of the module and in the field we are working in. Besides, we demonstrate how to leverage the inherent symplectic geometry existing in the tower of fields to provide a significant speed-up of the reduction for rank two modules. The recursive strategy over the rank can also be applied to the reduction of Euclidean lattices, and we can perform a reduction in asymptotically almost the same time as matrix multiplication. As a byproduct of the design of these fast reductions, we also generalize to all cyclotomic fields and provide speedups for many previous number theoretical algorithms. Quantitatively, we show that a module of rank 2 over a cyclotomic field of degree $n$ can be heuristically reduced within approximation factor $2^{\tilde{O}(n)}$ in time $\tilde{O}(n^2B)$, where $B$ is the bitlength of the entries. For $B$ large enough, this complexity shrinks to $\tilde{O}(n^{\log_2 3}B)$. This last result is particularly striking as it goes below the estimate of $n^2B$ swaps given by the classical analysis of the LLL algorithm using the so-called potential. Finally, all this framework is fully parallelizable, and we provide a full implementation. We apply it to break multilinear cryptographic candidates on concrete proposed parameters. We were able to reduce matrices of dimension 4096 with 6675-bit integers in 4 days, which is more than a million times faster than previous state-of-the-art implementations. Eventually, we demonstrate a quasicubic time for the Gentry-Szydlo algorithm which finds a generator given the relative norm and a basis of an ideal. This algorithm is important in cryptanalysis and requires efficient ideal multiplications and lattice reductions; as such we can practically use it in dimension 1024.
Last updated:  2020-03-23
Confidential Assets on MimbleWimble
Yi Zheng, Howard Ye, Patrick Dai, Tongcheng Sun, Vladislav Gelfer
This paper proposes a solution for implementing Confidential Assets on MimbleWimble, which allows users to issue and transfer multiple assets on a blockchain without showing transaction addresses, amounts, and asset types. We first introduce the basic principles of MimbleWimble and then describe the implementation in detail.
Last updated:  2019-12-10
About Low DFR for QC-MDPC Decoding
Nicolas Sendrier, Valentin Vasseur
McEliece-like code-based key exchange mechanisms using QC-MDPC codes can reach IND-CPA security under hardness assumptions from coding theory, namely quasi-cyclic syndrome decoding and quasi-cyclic codeword finding. To reach higher security requirements, like IND-CCA security, it is necessary in addition to prove that the decoding failure rate (DFR) is negligible, for some decoding algorithm and a proper choice of parameters. Getting a formal proof of a low DFR is a difficult task. Instead, we propose to ensure this low DFR under some additional security assumption on the decoder. This assumption relates to the asymptotic behavior of the decoder and is supported by several other works. We define a new decoder, Backflip, which features a low DFR. We evaluate the Backflip decoder by simulation and extrapolate its DFR under the decoder security assumption. We also measure the accuracy of our simulation data, in the form of confidence intervals, using standard techniques from communication systems.
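As a baseline for what such decoders do, here is a minimal out-of-place bit-flipping decoder in Python/NumPy; it is only a generic illustration, not the paper's Backflip decoder, and the toy parity-check matrix below is our own.

    import numpy as np

    def bit_flip_decode(H, s, max_iters=50):
        """Look for an error vector e with H e = s (mod 2) by repeatedly flipping the
        bits involved in the most unsatisfied parity checks."""
        n = H.shape[1]
        e = np.zeros(n, dtype=np.uint8)
        for _ in range(max_iters):
            residual = (s + H @ e) % 2
            if not residual.any():
                return e                              # decoding success
            upc = H.T @ residual                      # unsatisfied parity checks per bit
            e = (e + (upc == upc.max())) % 2
        return None                                   # decoding failure: contributes to the DFR

    H = np.array([[1, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1],
                  [1, 0, 1, 1, 1]])
    e_true = np.array([0, 1, 0, 0, 0], dtype=np.uint8)
    print(bit_flip_decode(H, (H @ e_true) % 2))       # recovers e_true on this toy code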
Last updated:  2019-12-10
T0RTT: Non-Interactive Immediate Forward-Secret Single-Pass Circuit Construction
Sebastian Lauer, Kai Gellert, Robert Merget, Tobias Handirk, Jörg Schwenk
Maintaining privacy on the Internet in the presence of powerful adversaries such as nation-state attackers is a challenging topic, and the Tor project is currently the most important tool to protect against this threat. The circuit construction protocol (CCP) negotiates cryptographic keys for Tor circuits, which overlay TCP/IP by routing Tor cells over n onion routers. The current circuit construction protocol provides strong security guarantees such as forward secrecy by exchanging O(n^2) messages. For several years it has been an open question whether the same strong security guarantees could be achieved with less message overhead, which is desirable because of the inherent latency in overlay networks. Several publications described CCPs which require only O(n) message exchanges, but significantly reduce the security of the resulting Tor circuit. It was even conjectured that it is impossible to achieve both message complexity O(n) and forward secrecy immediately after circuit construction (so-called immediate forward secrecy). Inspired by the latest advancements in zero round-trip time key exchange (0-RTT), we present a new CCP, Tor 0-RTT (T0RTT). Using modern cryptographic primitives such as puncturable encryption allows us to achieve immediate forward secrecy using only O(n) messages. We implemented these new primitives to give a first indication of possible problems and how to overcome them in order to build practical CCPs with O(n) messages and immediate forward secrecy in the future.
Last updated:  2022-03-15
A Generic View on the Unified Zero-Knowledge Protocol and its Applications
Diana Maimut, George Teseleanu
We present a generalization of Maurer's unified zero-knowledge (UZK) protocol, namely a unified generic zero-knowledge (UGZK) construction. We prove the security of our UGZK protocol and discuss special cases. Compared to UZK, the new protocol allows proving knowledge of a vector of secrets instead of only one secret. We also provide the reader with a hash variant of UGZK and the corresponding security analysis. Last but not least, we extend Cogliani \emph{et al.}'s lightweight authentication protocol by describing a new distributed unified authentication scheme suitable for wireless sensor networks and, more generally, the Internet of Things.
Last updated:  2020-03-05
Cross-Chain Communication Using Receipts
Arasu Arun, C. Pandu Rangan
The functioning of blockchain networks can be analyzed and abstracted into simple properties that allow for their usage as black boxes in cryptographic protocols. One such abstraction is that of the growth of the blockchain over time. In this work, we build on the analysis of Garay et al. to develop an interface of functions that allow us to predict the block by which a submitted transaction will be included. For cross-chain applications, we develop similar prediction functions for submitting related transactions to multiple independent networks in parallel. We then define a general ``receipt functionality'' for blockchains that provides a proof, in the form of a short string, that a particular transaction was added to the blockchain. We use these tools to obtain an efficient solution to the Train-and-Hotel Problem, which asks for a cross-chain booking protocol that allows a user to atomically book a train ticket on one blockchain and a hotel room on another. We formally prove that our protocol satisfies atomicity and liveness. We further highlight the versatility of blockchain receipts by discussing their applicability to general cross-chain communication and multi-party computation. We then detail a construction of ``Proof-of-Work receipts'' for Proof-of-Work blockchains using efficient and compact zero-knowledge proofs for arithmetic circuits.
Last updated:  2019-12-10
On the Impossibility of Probabilistic Proofs in Relativized Worlds
Alessandro Chiesa, Siqi Liu
We initiate the systematic study of probabilistic proofs in relativized worlds. The goal is to understand, for a given oracle, if there exist "non-trivial" probabilistic proofs for checking deterministic or nondeterministic computations that make queries to the oracle. This question is intimately related to a recent line of work that builds cryptographic primitives (e.g., hash functions) via constructions that are "friendly" to known probabilistic proofs. This improves the efficiency of probabilistic proofs for computations calling these primitives. We prove that "non-trivial" probabilistic proofs relative to several natural oracles do not exist. Our results provide strong complexity-theoretic evidence that certain functionalities cannot be treated as black boxes, and thus investing effort to instantiate these functionalities via constructions tailored to known probabilistic proofs may be inherent.
Last updated:  2020-02-10
Secret Sharing Schemes : A Fine Grained Analysis
Shion Samadder Chaudhury, Sabyasachi Dutta, Kouichi Sakurai
In this paper we prove that embedding parity bits and other function outputs in the share strings enables us to construct a secret sharing scheme (over a binary alphabet) robust against a resource-bounded adversary. Constructing schemes robust against adversaries in higher complexity classes requires an increase in the share size and increased storage. By connecting secret sharing with the randomized decision tree of a Boolean function, we construct a scheme which is robust against an infinitely powerful adversary while keeping the constructions in a very low complexity class, viz. $AC^0$. As an application, we construct a robust secret sharing scheme in $AC^0$ that can accommodate new participants (dynamically) over time. Our construction requires a new redistribution of secret shares and can accommodate a bounded number of new participants.
Last updated:  2020-02-12
$AC^0$ Constructions for Evolving Secret Sharing Schemes and Redistribution of Secret Shares
Shion Samadder Chaudhury, Sabyasachi Dutta, Kouichi Sakurai
Classical secret sharing schemes are built on the assumptions that the number of participants and the access structure remain fixed over time. Evolving secret sharing addresses the question of accommodating new participants with changeable access structures. One goal of this article is to initiate the study of evolving secret sharing such that both the share generation and reconstruction algorithms can be implemented by $AC^0$ circuits. We give a concrete construction with a minor storage assumption. Furthermore, allowing certain trade-offs, we consider the novel problem of robust redistribution of secret shares (in $AC^0$) in the spirit of dynamic access structures by suitably modifying a construction of Cheng-Ishai-Li (TCC 2017). A naive solution to the problem is to increase the alphabet size. We avoid this by modifying the shares of some of the old participants. This modification is also necessary to make newly added participant(s) non-redundant to the secret sharing scheme.
Last updated:  2019-12-10
On the Relationship between Resilient Boolean Functions and Linear Branch Number of S-boxes
Sumanta Sarkar, Kalikinkar Mandal, Dhiman Saha
Differential branch number and linear branch number are critical for the security of symmetric ciphers. The recent trend in designs like the PRESENT block cipher and the ASCON authenticated encryption scheme shows that applying S-boxes that have nontrivial differential and linear branch numbers can significantly reduce the number of rounds. As we see in the literature, the class of 4 x 4 S-boxes has been well analysed; however, little is known about n x n S-boxes for n >= 5. For instance, the complete classification of 5 x 5 S-boxes up to affine equivalence is still unknown. Therefore, it is challenging to obtain “the best” S-boxes of dimension >= 5 that can be used in symmetric cipher designs. In this article, we present a novel approach to construct S-boxes that identifies classes of n x n S-boxes (n = 5, 6) with differential branch number 3 and linear branch number 3, and ensures other cryptographic properties. To the best of our knowledge, we are the first to report 6 x 6 S-boxes with linear branch number 3, differential branch number 3, and other good cryptographic properties such as nonlinearity 24 and differential uniformity 4.
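The differential branch number referred to here is the minimum, over all distinct input pairs, of the combined Hamming weight of the input and output differences; a short Python check of that definition, shown on the PRESENT S-box mentioned above, looks as follows.

    def differential_branch_number(sbox):
        """min over x != x' of wt(x ^ x') + wt(S(x) ^ S(x'))."""
        best = None
        for x in range(len(sbox)):
            for xp in range(x + 1, len(sbox)):
                w = bin(x ^ xp).count("1") + bin(sbox[x] ^ sbox[xp]).count("1")
                best = w if best is None else min(best, w)
        return best

    PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                    0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
    print(differential_branch_number(PRESENT_SBOX))   # prints 3

The linear branch number is defined analogously over pairs of masks with nonzero correlation, which requires the Walsh spectrum rather than a direct pairwise scan.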
Last updated:  2019-12-10
On asymptotically optimal tests for random number generators
Boris Ryabko
The problem of constructing effective statistical tests for random number generators (RNG) is considered. Currently, statistical tests for RNGs are a mandatory part of cryptographic information protection systems, but their effectiveness is mainly estimated based on experiments with various RNGs. We find an asymptotic estimate for the p-value of an optimal test in the case where the alternative hypothesis is a known stationary ergodic source, and then describe a family of tests each of which has the same asymptotic estimate of the p-value for any (unknown) stationary ergodic source.
Last updated:  2019-12-10
HIBEChain: A Hierarchical Identity-based Blockchain System for Large-Scale IoT
Zhiguo Wan, Wei Liu, Hui Cui
The Internet of Things enables the interconnection of billions of devices, which perform autonomous operations and collect various types of data. These things, along with the huge amount of data they generate, need to be handled efficiently and securely. Centralized solutions are not desirable due to security concerns and scalability issues. In this paper, we propose HIBEChain, a hierarchical blockchain system that realizes scalable and accountable management of IoT devices and data. HIBEChain consists of multiple permissioned blockchains that form a hierarchical tree structure. To support the hierarchical structure of HIBEChain, we design a decentralized hierarchical identity-based signature (DHIBS) scheme, which enables IoT devices to use their identities as public keys. Consequently, HIBEChain achieves high scalability through parallel processing, as blockchain sharding schemes do, and it also implements accountability through the use of identity-based keys. Identity-based keys not only make HIBEChain more user-friendly, they also allow private key recovery by validators when necessary. We provide a detailed analysis of its security and performance, and implement HIBEChain based on the Ethereum source code. Experimental results show that a 6-ary, (7,10)-threshold, 4-level HIBEChain can achieve 32,000 TPS, and it needs only 9 seconds to confirm a transaction.
Last updated:  2019-12-10
Efficient Side-Channel Secure Message Authentication with Better Bounds
Chun Guo, François-Xavier Standaert, Weijia Wang, Yu Yu
We investigate constructing message authentication schemes from symmetric cryptographic primitives, with the goal of achieving security when most intermediate values during tag computation and verification are leaked (i.e., mode-level leakage-resilience). Existing efficient proposals typically follow the plain Hash-then-MAC paradigm $T=MAC_K(H(M))$. When the domain of the MAC function $MAC_K$ is $\{0,1\}^{128}$, e.g., when instantiated with the AES, forgery is possible within time $2^{64}$ and data complexity 1. To dismiss such cheap attacks, we propose two modes: LRW1-based Hash-then-MAC (LRWHM) that is built upon the LRW1 tweakable blockcipher of Liskov, Rivest, and Wagner, and Rekeying Hash-then-MAC (RHM) that employs internal rekeying. Built upon secure AES implementations, LRWHM is provably secure up to (beyond-birthday) $2^{78.3}$ time complexity, while RHM is provably secure up to $2^{121}$ time. Thus in practice, their main security threat is expected to be side-channel key recovery attacks against the AES implementations. Finally, we benchmark the performance of instances of our modes based on the AES and SHA3 and confirm their efficiency.
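For reference, the baseline paradigm that the two new modes improve upon can be written down in a few lines using the pyca/cryptography package; the 128-bit truncation of the hash is our illustrative choice, and this sketch is the plain Hash-then-MAC construction, not LRWHM or RHM.

    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def hash_then_mac(key: bytes, msg: bytes) -> bytes:
        """Plain Hash-then-MAC: T = MAC_K(H(M)) with a 128-bit MAC domain,
        instantiating MAC_K with one AES block encryption."""
        digest128 = hashlib.sha3_256(msg).digest()[:16]   # fold the hash into 128 bits
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(digest128) + enc.finalize()

    tag = hash_then_mac(b"\x00" * 16, b"message to authenticate")

A collision on the 128-bit value digest128, expected after about 2^64 offline hash computations, yields exactly the kind of forgery with data complexity 1 mentioned above.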
Last updated:  2019-12-10
QC-MDPC decoders with several shades of gray
Nir Drucker, Shay Gueron, Dusan Kostic
QC-MDPC code-based KEMs rely on decoders that have a small or even negligible Decoding Failure Rate (DFR). These decoders should be efficient and implementable in constant time. One example of a QC-MDPC KEM is the Round-2 candidate of the NIST PQC standardization project, "BIKE". We have recently shown that the Black-Gray decoder achieves the required properties. In this paper, we define several new variants of the Black-Gray decoder. One of them, called Black-Gray-Flip, needs only 7 steps to achieve a smaller DFR than Black-Gray with 9 steps, for the same block size. On current AVX512 platforms, our BIKE-1 (Level-1) constant-time decapsulation is 1.9x faster than the previous decapsulation with Black-Gray. We also report an additional 1.25x decapsulation speedup using the new AVX512-VBMI2 and vector-PCLMULQDQ instructions available on the "Ice Lake" micro-architecture.
Last updated:  2021-02-12
IPDL: A Probabilistic Dataflow Logic for Cryptography
Xiong Fan, Joshua Gancher, Greg Morrisett, Elaine Shi, Kristina Sojakova
While there have been many successes in verifying cryptographic security proofs of noninteractive primitives such as encryption and signatures, less attention has been paid to interactive cryptographic protocols. Interactive protocols introduce the additional verification challenge of concurrency, which is notoriously hard to reason about in a cryptographically sound manner. When proving the (approximate) observational equivalence of protocols, as is required by simulation-based security in the style of Universal Composability (UC), a bisimulation is typically performed in order to reason about the nontrivial control flows induced by concurrency. Unfortunately, bisimulations are typically very tedious to carry out manually and do not capture the high-level intuitions which guide informal proofs of UC security on paper. Because of this, there is currently a large gap of formality between proofs of cryptographic protocols on paper and in mechanized theorem provers. We work towards closing this gap through a new methodology for iteratively constructing bisimulations in a manner close to on-paper intuition. We present this methodology through Interactive Probabilistic Dependency Logic (IPDL), a simple calculus and proof system for specifying and reasoning about (a certain subclass of) distributed probabilistic computations. The IPDL framework exposes an equational logic on protocols; proofs in our logic consist of a number of rewriting rules, each of which induces a single low-level bisimulation between protocols. We show how to encode simulation-based security in the style of UC in our logic, and evaluate our logic on a number of case studies; most notably, a semi-honest secure Oblivious Transfer protocol, and a simple multiparty computation protocol robust to Byzantine faults. Due to the novel design of our logic, we are able to deliver mechanized proofs of protocols which we believe are comprehensible to cryptographers without verification expertise. We provide a mechanization in Coq of IPDL and all case studies presented in this work.
Last updated:  2019-12-10
Extending NIST's CAVP Testing of Cryptographic Hash Function Implementations
Nicky Mouha, Christopher Celi
This paper describes a vulnerability in Apple's CoreCrypto library, which affects 11 out of the 12 implemented hash functions: every implemented hash function except MD2 (Message Digest 2), as well as several higher-level operations such as the Hash-based Message Authentication Code (HMAC) and the Ed25519 signature scheme. The vulnerability is present in each of Apple's CoreCrypto libraries that are currently validated under FIPS 140-2 (Federal Information Processing Standard). For inputs of about $2^{32}$ bytes (4 GiB) or more, the implementations do not produce the correct output, but instead enter into an infinite loop. The vulnerability shows a limitation in the Cryptographic Algorithm Validation Program (CAVP) of the National Institute of Standards and Technology (NIST), which currently does not perform tests on hash functions for inputs larger than 65 535 bits. To overcome this limitation of NIST's CAVP, we introduce a new test type called the Large Data Test (LDT). The LDT detects vulnerabilities similar to that in CoreCrypto in implementations submitted for validation under FIPS 140-2.
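In the spirit of the proposed LDT, the following Python sketch streams a multi-gigabyte input into a hash incrementally; comparing such a digest against a reference value would expose the class of bug described above. The chunking scheme and sizes here are our own, and the exact LDT vector format is defined by NIST, not by this sketch.

    import hashlib

    def streamed_digest(chunk: bytes, total_bytes: int) -> str:
        """Absorb total_bytes of data, built from a repeated chunk, into SHA-256."""
        h = hashlib.sha256()
        remaining = total_bytes
        while remaining > 0:
            n = min(len(chunk), remaining)
            h.update(chunk[:n])
            remaining -= n
        return h.hexdigest()

    # Exercising the > 2^32-byte regime (slow; uncomment to run):
    # digest = streamed_digest(b"\x00" * (1 << 20), (1 << 32) + (1 << 20))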
Last updated:  2019-12-10
A Non-Interactive Shuffle Argument With Low Trust Assumptions
Antonis Aggelakis, Prastudy Fauzi, Georgios Korfiatis, Panos Louridas, Foteinos Mergoupis-Anagnou, Janno Siim, Michal Zajac
A shuffle argument is a cryptographic primitive for proving correct behaviour of mix-networks without leaking any private information. Several recent constructions of non-interactive shuffle arguments avoid the random oracle model but require the public key to be trusted. We augment the most efficient argument by Fauzi et al. [Asiacrypt 2017] with a distributed key generation protocol that assures soundness of the argument if at least one party in the protocol is honest and additionally provide a key verification algorithm which guarantees zero-knowledge even if all the parties are malicious. Furthermore, we simplify their construction and improve security by using weaker assumptions while retaining roughly the same level of efficiency. We also provide an implementation to the distributed key generation protocol and the shuffle argument.
Last updated:  2019-12-10
Image PUF: A Physical Unclonable Function for Printed Electronics based on Optical Variation of Printed Inks
Ahmet Turan Erozan, Michael Hefenbrock, Michael Beigl, Jasmin Aghassi-Hagmann, Mehdi B. Tahoori
Printed Electronics (PE) has a rapidly growing market, thus, the counterfeiting/overbuilding of PE components is anticipated to grow. The common solution for the counterfeiting is Physical Unclonable Functions (PUFs). In PUFs, a unique fingerprint is extracted from (irreproducible) process variations in the production and used in the authentication of valid components. Many commonly used PUFs are electrical PUFs by leveraging the impact of process variations on electrical properties of devices, circuits and chips. Hence, they add overhead to the production which results in additional costs. While such costs may be negligible for many application domains targeted by silicon-based VLSI technologies, they are detrimental to the ultra-low-cost PE applications. In this paper, we propose an optical PUF (iPUF) extracting a fingerprint from the optically visible variation of printed inks in the PE components. Since iPUF does not require any additional circuitry, the PUF production cost consists of merely acquisition, processing and saving an image of the circuit components, matching the requirements of ultra-low-cost margin applications of PE. To further decrease the storage costs for iPUF, we utilize image downscaling resulting in a compression rate of 484x, while still preserving the reliability and uniqueness of the fingerprints. The proposed fingerprint extraction methodology is applied to four datasets for evaluation. The results show that the process variation of the optical shapes of printed inks is suitable as an optical PUF to prevent counterfeiting in PE.
Last updated:  2020-06-04
Designated-ciphertext Searchable Encryption
Zi-Yuan Liu, Yi-Fan Tseng, Raylin Tso, Masahiro Mambo
Public-key encryption with keyword search (PEKS), proposed by Boneh \textit{et al.}, allows users to search encrypted keywords without losing data privacy. Although extensive studies have been conducted on this topic, only a few have focused on insider keyword guessing attacks (IKGA) that can reveal a user's sensitive information. In particular, after receiving a trapdoor used to search ciphertext from a user, a malicious insider (\textit{e.g}., a server) can randomly encrypt possible keywords using a user's public key, and then test whether the trapdoor corresponds to the selected keyword. This paper introduces a new concept called \textit{designated-ciphertext searchable encryption} (DCSE), which provides the same desired functionality as a PEKS scheme and prevents IKGA. Each trapdoor in DCSE is designated to a specific ciphertext, and thus malicious insiders cannot perform IKGA. We further propose a generic DCSE scheme that employs identity-based encryption and a key encapsulation mechanism. We provide formal proofs to demonstrate that the generic construction satisfies the security requirements. Moreover, we provide a lattice-based instantiation whose security is based on NTRU and ring-learning with errors assumptions; the proposed scheme is thus considered to be resistant to the quantum-computing attacks.
Last updated:  2020-07-26
CSIDH on Other Form of Elliptic Curves
Xuejun Fan, Song Tian, Bao Li, Xiu Xu
Isogenies on elliptic curves are of great interest in post-quantum cryptography and appeal to more and more researchers. Many protocols have been proposed, such as OIDH, SIDH and CSIDH, each with its own advantages. We focus here on CSIDH, which is based on Montgomery curves over finite fields $\mathbb{F}_p$ with $p \equiv 3 \pmod 8$ whose endomorphism ring is $O$. We change the form of the elliptic curves to $y^2 = x^3 + Ax^2 - x$ and the characteristic of the prime field to $p \equiv 7 \pmod 8$, which makes the endomorphism ring become $O_K$. Moreover, many propositions, including the formula for isogenies between elliptic curves of this special form and the uniqueness of the representative of each $\mathbb{F}_p$-isomorphism class, are given to illustrate the soundness of our idea. An important point to notice is that efficiency is not reduced, because the only difference between our isogeny formula and that of CSIDH is the sign of some terms. Furthermore, we also give a proposition showing that the protocol based on our case avoids the collision described in [17].
Last updated:  2020-11-10
The Signal Private Group System and Anonymous Credentials Supporting Efficient Verifiable Encryption
Melissa Chase, Trevor Perrin, Greg Zaverucha
In this paper we present a system for maintaining a membership list of users in a group, designed for use in the Signal Messenger secure messaging app. The goal is to support \(\mathit{private}\) \(\mathit{groups}\) where membership information is readily available to all group members but hidden from the service provider or anyone outside the group. In the proposed solution, a central server stores the group membership in the form of encrypted entries. Members of the group authenticate to the server in a way that reveals only that they correspond to some encrypted entry, then read and write the encrypted entries. Authentication in our design uses a primitive called a keyed-verification anonymous credential (KVAC), and we construct a new KVAC scheme based on an algebraic MAC, instantiated in a group \(\mathbb{G}\) of prime order. The benefit of the new KVAC is that attributes may be elements in \(\mathbb{G}\), whereas previous schemes could only support attributes that were integers modulo the order of \(\mathbb{G}\). This enables us to encrypt group data using an efficient Elgamal-like encryption scheme, and to prove in zero-knowledge that the encrypted data is certified by a credential. Because encryption, authentication, and the associated proofs of knowledge are all instantiated in \(\mathbb{G}\) the system is efficient, even for large groups.
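The point about attributes being group elements can be made concrete with textbook ElGamal, whose plaintexts are themselves elements of the group, so no encoding of an attribute into an exponent is needed. The toy Python sketch below uses a tiny subgroup of Z_23^* for illustration only and is neither the verifiable encryption scheme nor the KVAC of the paper.

    import secrets

    # Toy prime-order group: the order-11 subgroup of Z_23^* (illustration only).
    p, q, g = 23, 11, 4

    def keygen():
        x = secrets.randbelow(q - 1) + 1
        return x, pow(g, x, p)                   # (secret key, public key)

    def encrypt(pk: int, m_elem: int):
        """ElGamal-style encryption of a plaintext that is itself a group element."""
        r = secrets.randbelow(q - 1) + 1
        return pow(g, r, p), (m_elem * pow(pk, r, p)) % p

    def decrypt(sk: int, ct):
        c1, c2 = ct
        return (c2 * pow(c1, q - sk, p)) % p     # c1^(q - sk) = c1^(-sk) in the subgroup

    sk, pk = keygen()
    attribute = pow(g, 7, p)                     # a group-element attribute
    assert decrypt(sk, encrypt(pk, attribute)) == attribute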
Last updated:  2019-12-06
Toward A More Efficient Gröbner-based Algebraic Cryptanalysis
Hossein Arabnezhad-Khanoki, Babak Sadeghiyan
In this paper, we propose a new method to launch a more efficient algebraic cryptanalysis. Algebraic cryptanalysis aims at finding the secret key of a cipher by solving a collection of polynomial equations that describe the internal structure of the cipher. Chosen correlated plaintexts, as they appear in higher-order differential cryptanalysis and its derivatives such as the cube attack or integral cryptanalysis, force many linear relations between intermediate state bits of the cipher. We take these polynomial relations into account, so it becomes possible to simplify the equation system arising from algebraic cryptanalysis and, consequently, to solve the polynomial system more efficiently. We take advantage of the Universal Proning technique to provide an efficient method to recover such linear polynomials. Another important aspect of the algebraic cryptanalysis of ciphers is an effective description of the cipher. We employ the FWBW representation of S-boxes together with Universal Proning to provide a more powerful algebraic cryptanalysis based on Gröbner-basis computation. We show that our method is more efficient than algebraic cryptanalysis with the MQ representation, and also than employing MQ together with Universal Proning. To show the effectiveness of our approach, we apply it to the cryptanalysis of several lightweight block ciphers; we are able to algebraically cryptanalyse 12-round LBlock, 6-round MIBS, 7-round PRESENT and 9-round SKINNY.
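As a minimal illustration of the Gröbner-basis step (not of the paper's Universal Proning or FWBW techniques), the sketch below recovers two key bits of a made-up toy "cipher" from its polynomial description over GF(2); it assumes sympy's groebner accepts the modulus option for computations over prime fields.

    # Toy Gröbner-based key recovery: two unknown key bits k0, k1 of a
    # made-up "cipher" whose output bits satisfy c0 = k0*k1 + k0 and
    # c1 = k0 + k1 over GF(2); the observed output is (c0, c1) = (0, 1).
    # Field equations k^2 + k = 0 keep the solutions Boolean.
    from sympy import symbols, groebner

    k0, k1 = symbols("k0 k1")
    system = [
        k0*k1 + k0,          # c0 = 0
        k0 + k1 + 1,         # c1 = 1  (i.e. k0 + k1 = 1 over GF(2))
        k0**2 + k0,          # field equation for k0
        k1**2 + k1,          # field equation for k1
    ]
    G = groebner(system, k0, k1, modulus=2, order="lex")
    print(G)                 # expect a basis equivalent to [k0, k1 + 1], i.e. k0=0, k1=1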
Last updated:  2019-12-12
A New Encryption Scheme Based On Subset Identifying Problem
Muhammad Rezal Kamel Ariffin
In this article we put forward an encryption mechanism that relies on the problem of identifying the correct subset of primes from a known set. By utilizing our specially constructed public key when computing the ciphertext equation, the decryption mechanism can correctly output the shared secret parameter. The scheme has short key length, no decryption failure issues, a plaintext-to-ciphertext expansion of one-to-two, and uses ``simple'' mathematics in order to achieve maximum simplicity in design, such that even practitioners with limited mathematical background will be able to understand the arithmetic. Since no efficient quantum algorithms are known for obtaining the roots of our ciphertext equation or for retrieving the private key from the public key, our encryption mechanism is a possible candidate for a seamless post-quantum drop-in replacement for current traditional asymmetric schemes.
Last updated:  2019-12-06
Strong Authenticity with Leakage under Weak and Falsifiable Physical Assumptions
Francesco Berti, Chun Guo, Olivier Pereira, Thomas Peters, François-Xavier Standaert
Authenticity can be compromised by information leaked via side-channels (e.g., power consumption). Examples of attacks include direct key recoveries and attacks against the tag verification which may lead to forgeries. At FSE 2018, Berti et al. described two authenticated encryption schemes which provide authenticity assuming a “leak-free implementation” of a Tweakable Block Cipher (TBC). Precisely, security is guaranteed even if all the intermediate computations of the target implementation are leaked in full, except the TBC long-term key. Yet, while a leak-free implementation reasonably models strongly protected implementations of a TBC, it remains an idealized physical assumption that may be too demanding in many cases, in particular if hardware engineers mitigate the leakage to a good extent but (due to performance constraints) do not reach leak-freeness. In this paper, we get rid of this important limitation by introducing the notion of “Strong Unpredictability with Leakage” for block ciphers (BCs) and TBCs. It captures the hardness for an adversary to provide a fresh and valid input/output pair for a (T)BC, even having oracle access to the (T)BC, its inverse and their leakages. This definition is game-based and may be verified/falsified by laboratories. Based on it, we then provide two Message Authentication Codes (MACs) which are secure if the (T)BC on which they rely is implemented in a way that maintains sufficient unpredictability. Thus, we improve the theoretical foundations of leakage-resilient MACs and extend them towards engineering constraints that are easier to achieve in practice.
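To fix ideas, the Python sketch below mocks up the shape of such a game for a toy tweakable Feistel cipher with a modeled Hamming-weight leakage; it only shows how the oracle access and the freshness condition fit together and is not the paper's formal definition.

    # Sketch of a "strong unpredictability with leakage" style game for a toy
    # tweakable block cipher (4-round Feistel over 32-bit blocks, with a
    # modeled leakage = Hamming weight of each round output).
    import hashlib, secrets

    def _f(key, tweak, rnd, half):                       # 16-bit round function
        data = b"%d|%d|%d|%d" % (key, tweak, rnd, half)
        return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

    def tbc_enc(key, tweak, x, leak=None):
        l, r = x >> 16, x & 0xFFFF
        for rnd in range(4):
            l, r = r, l ^ _f(key, tweak, rnd, r)
            if leak is not None:
                leak.append(bin((l << 16) | r).count("1"))   # modeled leakage
        return (l << 16) | r

    def tbc_dec(key, tweak, y, leak=None):
        l, r = y >> 16, y & 0xFFFF
        for rnd in reversed(range(4)):
            l, r = r ^ _f(key, tweak, rnd, l), l
            if leak is not None:
                leak.append(bin((l << 16) | r).count("1"))
        return (l << 16) | r

    def sul_game(adversary):
        key = secrets.randbits(64)
        queried = set()
        def enc_oracle(tweak, x):
            queried.add((tweak, x))
            leak = []
            return tbc_enc(key, tweak, x, leak), leak
        def dec_oracle(tweak, y):
            leak = []
            x = tbc_dec(key, tweak, y, leak)
            queried.add((tweak, x))
            return x, leak
        tweak, x, y = adversary(enc_oracle, dec_oracle)
        # The adversary wins only with a *fresh* and valid input/output pair.
        return (tweak, x) not in queried and tbc_enc(key, tweak, x) == y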
Last updated:  2019-12-21
Cryptanalysis and Improvement of Smart-ID's Clone Detection Mechanism
Augustin P. Sarr
At ESORICS 2017, Buldas et al. proposed an efficient (software-only) server-supported signature scheme, geared to mobile devices, termed Smart-ID. A major component of their design is a clone detection mechanism, which allows a server to detect the existence of clones of a client's private key share. We point out a flaw in this mechanism. We show that, under a realistic race condition, an attacker which holds a password-camouflaged private share can launch an online dictionary attack such that (i) if all its password guesses are wrong, it is very likely that the attack will not be detected, and (ii) if one of its guesses is correct, it can generate signatures on messages of its choice, and the attack will \emph{not} be detected. We propose an improvement of Smart-ID to thwart the attack we present.
Last updated:  2020-08-25
Isochronous Gaussian Sampling: From Inception to Implementation
James Howe, Thomas Prest, Thomas Ricosset, Mélissa Rossi
Gaussian sampling over the integers is a crucial tool in lattice-based cryptography, but has proven over recent years to be surprisingly challenging to perform in a generic, efficient and provably secure manner. In this work, we present a modular framework for generating discrete Gaussians with arbitrary center and standard deviation. Our framework is extremely simple, and it is precisely this simplicity that allowed us to make it easy to implement, provably secure, portable, efficient, and provably resistant against timing attacks. Our sampler is a good candidate for any trapdoor sampling and it is actually the one that has been recently implemented in the Falcon signature scheme. Our second contribution aims at systematizing the detection of implementation errors in Gaussian samplers. We provide a statistical testing suite for discrete Gaussians called SAGA (Statistically Acceptable GAussian). In a nutshell, our two contributions take a step towards trustable and robust Gaussian sampling real-world implementations.
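As background, a plain and deliberately non-isochronous rejection sampler for a discrete Gaussian with arbitrary center and standard deviation can be written in a few lines; the point of the paper is to achieve the same distribution without the data-dependent running time this naive sketch exhibits.

    # Naive discrete Gaussian sampler over the integers, with arbitrary center
    # mu and standard deviation sigma, via rejection from a uniform window of
    # +/- tau*sigma around mu.  Its running time depends on the rejection
    # pattern, which is exactly what isochronous samplers avoid.
    import math, secrets

    def discrete_gaussian(mu, sigma, tau=10):
        lo = math.floor(mu - tau * sigma)
        hi = math.ceil(mu + tau * sigma)
        while True:
            z = lo + secrets.randbelow(hi - lo + 1)              # uniform candidate
            rho = math.exp(-((z - mu) ** 2) / (2 * sigma ** 2))  # acceptance prob.
            if secrets.randbelow(2**53) < int(rho * 2**53):
                return z

    samples = [discrete_gaussian(0.5, 3.2) for _ in range(5)]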
Last updated:  2019-12-07
Withdrawn
Withdrawn
Withdrawn
Last updated:  2020-05-26
Cloud-assisted Asynchronous Key Transport with Post-Quantum Security
Gareth T. Davies, Herman Galteland, Kristian Gjøsteen, Yao Jiang
In cloud-based outsourced storage systems, many users wish to securely store their files for later retrieval, and additionally to share them with other users. These retrieving users may not be online at the point of the file upload, and in fact they may never come online at all. In this asynchronous environment, key transport appears to be at odds with any demands for forward secrecy. Recently, Boyd et al. (ISC 2018) presented a protocol that allows an initiator to use a modified key encapsulation primitive, denoted a blinded KEM (BKEM), to transport a file encryption key to potentially many recipients via the (untrusted) storage server, in a way that gives some guarantees of forward secrecy. Until now all known constructions of BKEMs are built using RSA and DDH, and thus are only secure in the classical setting. We further the understanding of the use of blinding in post-quantum cryptography in two aspects. First, we show how to generically build blinded KEMs from homomorphic encryption schemes with certain properties. Second, we construct the first post-quantum secure blinded KEMs, and the security of our constructions is based on hard lattice problems.
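For intuition about what blinding buys, here is a toy DDH-style blinded KEM in the spirit of the earlier constructions the paper refers to; the paper's own contribution replaces this with homomorphic-encryption-based and lattice-based instantiations, the role assignment in the comments is a simplification, and the parameters are illustrative only.

    # Toy DDH-style blinded KEM: whoever decapsulates a blinded encapsulation
    # only learns the blinded key K^s; only the party knowing the blinding
    # factor s can unblind it to the real key K.
    # Group: order-q subgroup of Z_p^* with p = 2q + 1 (tiny, NOT secure).
    import secrets

    p, q, g = 2039, 1019, 4

    def keygen():
        x = secrets.randbelow(q - 1) + 1
        return x, pow(g, x, p)                 # (decapsulation key, public key)

    def encaps(pk):
        r = secrets.randbelow(q - 1) + 1
        return pow(g, r, p), pow(pk, r, p)     # (encapsulation C, key K)

    def blind(C):
        s = secrets.randbelow(q - 1) + 1
        return s, pow(C, s, p)                 # blinded encapsulation C^s

    def decaps(sk, C_blinded):
        return pow(C_blinded, sk, p)           # blinded key K^s

    def unblind(s, K_blinded):
        return pow(K_blinded, pow(s, -1, q), p)

    sk, pk = keygen()
    C, K = encaps(pk)                          # K plays the role of the file key
    s, C_b = blind(C)                          # blind before handing over for decapsulation
    assert unblind(s, decaps(sk, C_b)) == K    # unblinding recovers K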
Last updated:  2019-12-05
The group of automorphisms of the set of self-dual bent functions
Aleksandr Kutsenko
A bent function is a Boolean function in an even number of variables which is at maximal Hamming distance from the set of affine Boolean functions. It is called self-dual if it coincides with its dual, and anti-self-dual if it is equal to the negation of its dual. A mapping of the set of all Boolean functions in n variables to itself is said to be isometric if it preserves the Hamming distance. In this paper we study isometric mappings which preserve self-duality and anti-self-duality of a Boolean bent function. The complete characterization of these mappings is obtained for n>2. Based on this result, the set of isometric mappings which preserve the Rayleigh quotient of the Sylvester Hadamard matrix is characterized. The Rayleigh quotient measures the Hamming distance between a bent function and its dual, so as a corollary, all isometric mappings which preserve bentness and the Hamming distance between a bent function and its dual are described.
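For reference, the standard notions involved (Walsh-Hadamard transform, dual, Rayleigh quotient), which the abstract does not restate, and the identity linking the Rayleigh quotient to the Hamming distance \(d_H(f,\tilde{f})\) between \(f\) and its dual are
\[
W_f(y) = \sum_{x \in \mathbb{F}_2^n} (-1)^{f(x) \oplus \langle x, y\rangle}, \qquad
W_f(y) = 2^{n/2}(-1)^{\tilde{f}(y)} \ \text{for bent } f,
\]
\[
S_f = \sum_{x, y \in \mathbb{F}_2^n} (-1)^{f(x) \oplus f(y) \oplus \langle x, y\rangle}
    = \sum_{y} (-1)^{f(y)} W_f(y)
    = 2^{n/2}\bigl(2^n - 2\, d_H(f, \tilde{f})\bigr),
\]
where \(\langle \cdot,\cdot\rangle\) is the inner product on \(\mathbb{F}_2^n\); in particular \(S_f = 2^{3n/2}\) exactly for self-dual and \(S_f = -2^{3n/2}\) exactly for anti-self-dual bent functions.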
Last updated:  2019-12-05
Incrementally Verifiable Computation via Incremental PCPs
Moni Naor, Omer Paneth, Guy N. Rothblum
If I commission a long computation, how can I check that the result is correct without re-doing the computation myself? This is the question that efficient verifiable computation deals with. In this work, we address the issue of verifying the computation as it unfolds. That is, at any intermediate point in the computation, I would like to see a proof that the current state is correct. Ideally, these proofs should be short, non-interactive, and easy to verify. In addition, the proof at each step should be generated efficiently by updating the previous proof, without recomputing the entire proof from scratch. This notion, known as incrementally verifiable computation, was introduced by Valiant [TCC 08] about a decade ago. Existing solutions follow the approach of recursive proof composition and can be based on strong and non-falsifiable cryptographic assumptions (so-called ``knowledge assumptions''). In this work, we present a new framework for constructing incrementally verifiable computation schemes in both the publicly verifiable and designated-verifier settings. Our designated-verifier scheme is based on somewhat homomorphic encryption (which can be based on Learning with Errors) and our publicly verifiable scheme is based on the notion of zero-testable homomorphic encryption, which can be constructed from ideal multi-linear maps [Paneth and Rothblum, TCC 17]. Our framework is anchored around the new notion of a probabilistically checkable proof (PCP) with incremental local updates. An incrementally updatable PCP proves the correctness of an ongoing computation, where after each computation step, the value of every symbol can be updated locally without reading any other symbol. This update results in a new PCP for the correctness of the next step in the computation. Our primary technical contribution is constructing such an incrementally updatable PCP. We show how to combine updatable PCPs with recently suggested (ordinary) verifiable computation to obtain our results.
Last updated:  2019-12-05
Efficient, Coercion-free and Universally Verifiable Blockchain-based Voting
Tassos Dimitriou
Most electronic voting systems today satisfy the basic requirements of privacy, unreusability, eligibility and fairness in a natural and rather straightforward way. However, receipt-freeness, incoercibility and universal verifiability are much harder to implement and in many cases they require a large amount of computation and communication overhead. In this work, we propose a blockchain-based voting system which achieves all the properties expected from secure elections without requiring too much from the voter. Coercion resistance and receipt-freeness are ensured by means of a randomizer token -- a tamper-resistant source of randomness which acts as a black box in constructing the ballot for the user. Universal verifiability is ensured by the append-only structure of the blockchain, thus minimizing the trust placed in election authorities. Additionally, the system has linear overhead when tallying the votes, hence it is scalable and practical for large scale elections.
Last updated:  2019-12-05
Revisiting Higher-Order Computational Attacks against White-Box Implementations
Houssem Maghrebi, Davide Alessio
White-box cryptography was first introduced by Chow et al. in 2002 as a software technique for implementing cryptographic algorithms in a secure way that protects secret keys in an untrusted environment. Ever since, Chow et al.'s design has been subject to the well-known Differential Computation Analysis (DCA). To resist DCA, a natural approach that white-box designers investigated is to apply common side-channel countermeasures such as masking. In this paper, we suggest applying the well-studied leakage detection methods to assess the security of masked white-box implementations. Then, we extend some well-known side-channel attacks (i.e., the bucketing computational analysis, the mutual information analysis, and the collision attack) to the higher-order case to defeat higher-order masked white-box implementations. To illustrate the effectiveness of these attacks, we perform a practical evaluation against a first-order masked white-box implementation. The obtained results demonstrate the practicality of these attacks in a real-world scenario.
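As a concrete reference point, a generic second-order correlation distinguisher over computation traces (centered products of sample pairs correlated against a key-dependent prediction) can be sketched as follows; this illustrates the general higher-order combination step, not the paper's specific bucketing, mutual-information, or collision attacks.

    # Generic second-order computational analysis sketch: combine pairs of
    # (centered) trace samples by their product and correlate the combination
    # with a key-dependent prediction.  `traces` is an N x T array of values
    # recorded from N executions of the white-box binary, `predictions` an
    # N-vector predicted under one key hypothesis.
    import numpy as np

    def second_order_scores(traces, predictions):
        centered = traces - traces.mean(axis=0)               # N x T
        pred = predictions - predictions.mean()               # N
        n, t = centered.shape
        scores = np.zeros((t, t))
        for i in range(t):
            for j in range(i + 1, t):
                comb = centered[:, i] * centered[:, j]        # 2nd-order combination
                comb = comb - comb.mean()
                denom = np.linalg.norm(comb) * np.linalg.norm(pred)
                if denom > 0:
                    scores[i, j] = abs(np.dot(comb, pred)) / denom
        return scores                                         # a clear peak flags the key guess

    # Key ranking: compute second_order_scores(traces, pred_k).max() for each
    # key hypothesis k and keep the hypothesis with the largest peak.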
Last updated:  2020-01-31
CSIDH on the surface
Wouter Castryck, Thomas Decru
For primes \(p \equiv 3 \bmod 4\), we show that setting up CSIDH on the surface, i.e., using supersingular elliptic curves with endomorphism ring \(Z[(1 + \sqrt{-p})/2]\), amounts to just a few sign switches in the underlying arithmetic. If \(p \equiv 7 \bmod 8\) then the availability of very efficient horizontal 2-isogenies allows for a noticeable speed-up, e.g., our resulting CSURF-512 protocol runs about 5.68% faster than CSIDH-512. This improvement is completely orthogonal to all previous speed-ups, constant-time measures and construction of cryptographic primitives that have appeared in the literature so far. At the same time, moving to the surface gets rid of the redundant factor \(Z_3\) of the acting ideal-class group, which is present in the case of CSIDH and offers no extra security.
Last updated:  2019-12-14
No RISC, no Fun: Comparison of Hardware Accelerated Hash Functions for XMSS
Ingo Braun, Fabio Campos, Steffen Reith, Marc Stöttinger
We investigate multiple implementations of a hash-based digital signature scheme in software and hardware for a RISC-V processor. For this, different instantiations of XMSS by leveraging SHA-256 and SHA-3 are considered. Moreover, we propose various optimisations for accelerating the signature scheme on resource-constrained FPGAs. Compared to the pure software version, the implemented hardware accelerators for SHA-256 and SHA-3 achieve a significant speedup of 25x and 87x respectively for generating 2^10 key pairs. Signing and verifying with such key pairs achieves a speedup of 17x and 10x in the case of SHA-256 and respectively 55x and 20x for SHA-3. Recently, Wang et al. presented an XMSS-specific software-hardware co-design, resulting in significant speedups. Our general-purpose hardware accelerator for SHA-256 further reduces the calculation cost for signing by 26%, and by 28% for verifying, in comparison to the results of Wang et al., and also achieves a better time-area product for signing (3.3x) and verifying (2.5x).
Last updated:  2019-12-04
Automatize parameter tuning in Ring-Learning-With-Errors-based leveled homomorphic cryptosystem implementations
Vincent HERBERT
Lattice-based cryptography offers quantum-resistant cryptosystems, but there are not yet official recommendations for choosing parameters with standard security levels. Some of these cryptosystems permit secure computations and aim at a wider audience than the cryptographic community. We focus on one of them, the Brakerski/Fan-Vercauteren (BFV) leveled homomorphic encryption (LHE) scheme. LHE cryptosystems need to be instantiated carefully: parameters not only protect input and output ciphertexts and determine how efficiently computations are performed, but also constrain the quantity of homomorphic computations that can be performed with a guarantee of correctness, and must be chosen accordingly. In addition, each implementation brings external constraints to optimize performance. All of this makes it tedious for the non-expert user to choose parameters. To solve this, we have developed CinguParam to help users instantiate implementations of BFV in different libraries: Cingulata, FV-NFLlib and Microsoft SEAL. CinguParam generates an up-to-date database of parameter sets as a function of the computation budget, security parameters and implementation choices. The tool includes a notion of budget to ensure correct homomorphic computations and a notion of BKZ reduction cost model to capture the current gap to concrete security. It makes use of the LWE-Estimator to obtain up-to-date security estimates. CinguParam can automatically select a suitable parameter set for Cingulata, and it can be used to generate code snippets that set parameters in FV-NFLlib and Microsoft SEAL.
Last updated:  2019-12-04
SMChain: A Scalable Blockchain Protocol for Secure Metering Systems in Distributed Industrial Plants
Gang Wang, Zhijie Jerry Shi, Mark Nixon, Song Han
Metering is a critical process in large-scale distributed industrial plants, which enables multiple plants to collaborate to offer mutual services without outside interference. When distributed plants measure the data from a shared common source, e.g., flow metering in an oil pipeline, trustworthiness and immutability must be guaranteed among them. In this paper, we propose a hierarchical and scalable blockchain-based secure metering system, \textit{SMChain}, to provide strong security, trustworthy guarantees, and immutable services. \textit{SMChain} adopts a two-layer blockchain structure, consisting of independent local blockchains stored at individual plants and one state blockchain stored in the cloud. To deal with the scalability issues within each plant, we propose a novel scalable Byzantine Fault Tolerance (BFT) consensus protocol based on a \textit{(k, n)}-threshold signature scheme to deal with Byzantine faults and reduce the intra-plant communication complexity from $O(n^2)$ to $O(n)$. For the state blockchain, we use a cloud-based service to synchronize and integrate the local blockchains into one state blockchain, which can further be distributed back to each plant.
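For readers unfamiliar with the (k, n)-threshold building block, the sketch below shows plain Shamir secret sharing over a prime field (share generation and Lagrange reconstruction at zero); the protocol itself uses a threshold signature scheme built on this idea, which is not reproduced here.

    # Toy (k, n) Shamir secret sharing over a prime field: any k of the n
    # shares reconstruct the secret, fewer reveal nothing.  Threshold
    # signatures used in BFT protocols build on the same interpolation idea.
    import secrets

    P = 2**127 - 1          # a Mersenne prime, large enough for a toy field

    def share(secret, k, n):
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):                  # Lagrange interpolation at x = 0
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * (-xj)) % P
                    den = (den * (xi - xj)) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    shares = share(123456789, k=3, n=5)
    assert reconstruct(shares[:3]) == 123456789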
Last updated:  2022-09-09
RedShift: Transparent SNARKs from List Polynomial Commitments
Assimakis Kattis, Konstantin Panarin, Alexander Vlasov
We introduce an efficient transformation from univariate polynomial commitment based zk-SNARKs to their transparent counterparts. The transformation is achieved with the help of a new IOP primitive which we call a list polynomial commitment. This primitive is applicable for preprocessing zk-SNARKs over both prime and binary fields. We present the primitive itself along with a soundness analysis of the transformation and instantiate it with an existing universal proof system. We also present benchmarks for a proof of concept implementation alongside a comparison with the current non-transparent state-of-the-art. Our results show competitive efficiency both in terms of proof size and generation times. At the 80-bit security level, our benchmarks provide proof generation times of about a minute and proof sizes of around 515 KB for a circuit with one million gates.
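As background on the transparent commitment layer, the sketch below shows the Merkle-tree-over-evaluations commitment that FRI-style schemes (and hence list polynomial commitments) are built on: commit to a polynomial by Merkle-hashing its evaluations over a domain, then open individual evaluations with authentication paths. The low-degree (list-decoding) argument that makes this a polynomial commitment is omitted, and the parameters are toy-sized.

    # Toy "commit to a polynomial via a Merkle tree over its evaluations":
    # commitment = Merkle root of evaluations on a fixed domain; opening a
    # point = (evaluation, authentication path).  The low-degree test that
    # turns this into a (list) polynomial commitment is not shown.
    import hashlib

    P = 97                                      # tiny prime field, illustrative only

    def h(*parts):
        return hashlib.sha256(b"|".join(parts)).digest()

    def evaluate(coeffs, x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    def commit(coeffs, domain):
        leaves = [h(str(evaluate(coeffs, x)).encode()) for x in domain]
        layers = [leaves]
        while len(layers[-1]) > 1:
            cur = layers[-1]
            layers.append([h(cur[i], cur[i + 1]) for i in range(0, len(cur), 2)])
        return layers                            # layers[-1][0] is the commitment

    def open_point(layers, idx):
        path = []
        for layer in layers[:-1]:
            path.append(layer[idx ^ 1])          # sibling at each level
            idx //= 2
        return path

    def verify(root, idx, value, path):
        node = h(str(value).encode())
        for sibling in path:
            node = h(node, sibling) if idx % 2 == 0 else h(sibling, node)
            idx //= 2
        return node == root

    domain = list(range(8))                      # power-of-two sized domain
    coeffs = [3, 1, 4, 1]                        # p(x) = 3 + x + 4x^2 + x^3
    layers = commit(coeffs, domain)
    root = layers[-1][0]
    val = evaluate(coeffs, domain[5])
    assert verify(root, 5, val, open_point(layers, 5))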