https://eprint.iacr.org/rss/atom.xml?order=recentCryptology ePrint Archive2023-12-01T01:32:44+00:00None of your businesshttps://iacr.org/img/logo/iacrlogo_small.pngMetadata is available under the CC0 license https://creativecommons.org/publicdomain/zero/1.0/. Each article has a PDF with different license specified for each one.The Cryptology ePrint Archive provides rapid access to recent
research in cryptology. Papers have been placed here by the
authors and did not undergo any refereeing process other than
verifying that the work seems to be within the scope of
cryptology and meets some minimal acceptance criteria and
publishing conditions.https://eprint.iacr.org/2023/1849Lattice-based Programmable Hash Functions and Applications2023-12-01T01:32:44+00:00Jiang ZhangYu ChenZhenfeng ZhangDriven by the open problem raised by Hofheinz and Kiltz
(Journal of Cryptology, 2012), we study the formalization of lattice-based programmable hash functions (PHFs), and give three types of concrete constructions by using several techniques such as a novel combination of cover-free sets and lattice trapdoors. Under the Inhomogeneous Small Integer Solution (ISIS) assumption, we show that any (non-trivial) lattice-based PHF is a collision-resistant hash function, which gives a direct application of this new primitive.
We further demonstrate the power of lattice-based PHFs by giving generic constructions of signature and identity-based encryption (IBE) in the standard model, which not only provide a way to unify several previous lattice-based schemes using partitioning proof techniques, but also allow us to obtain new short signature schemes and IBE schemes from (ideal) lattices. Specifically, by instantiating the generic constructions with our Type-II and Type-III PHF constructions, we immediately obtain two short signatures and two IBE schemes with asymptotically much shorter keys. A major downside inherited from our Type-II and Type-III PHF constructions is that we can only prove the security of the new signatures and IBEs in the bounded security model, where the number Q of the adversary's queries is required to be known in advance. Another downside is that the running time of our new signatures and IBEs is linear in Q, which is large for typical parameters.
To overcome the above limitations, we also give a refined way of
using Type-II and Type-III PHFs to construct lattice-based short signatures with short verification keys in the full security model. In particular, our methods depart from the confined guessing technique of Böhl et al. (Eurocrypt’13) that was used to construct previous standard model short signature schemes with short verification keys by Ducas and Micciancio (Crypto’14) and by Alperin-Sheriff (PKC’15), and allow us to achieve much tighter security from weaker hardness assumptions.2023-12-01T01:32:44+00:00https://creativecommons.org/licenses/by-nc/4.0/Jiang ZhangYu ChenZhenfeng Zhanghttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2023/1848Breach Extraction Attacks: Exposing and Addressing the Leakage in Second Generation Compromised Credential Checking Services2023-12-01T09:39:18+00:00Dario PasquiniDanilo FrancatiGiuseppe AtenieseEvgenios M. KornaropoulosCredential tweaking attacks use breached passwords to generate semantically similar passwords and gain access to victims' services.
These attacks sidestep the first generation of compromised credential checking (C3) services. The second generation of compromised credential checking services, called "Might I Get Pwned" (MIGP), is a privacy-preserving protocol that defends against credential tweaking attacks by allowing clients to query whether a password or a semantically similar variation is present in the server's compromised credentials dataset.
The desired privacy requirements include not revealing the user's entered password to the server and ensuring that no compromised credentials are disclosed to the client.
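As a rough illustration of the bucketization idea behind C3 services generally (in the spirit of Have I Been Pwned's k-anonymity range queries — not the MIGP protocol itself, which additionally uses an oblivious PRF and password-tweak generation), a client can reveal only a short hash prefix and test membership locally; all names below are hypothetical:

```python
import hashlib

def sha1_hex(pw: str) -> str:
    return hashlib.sha1(pw.encode()).hexdigest().upper()

def build_buckets(breached, prefix_len=5):
    # Server side: index breached passwords by a short hash prefix ("bucket").
    buckets = {}
    for pw in breached:
        h = sha1_hex(pw)
        buckets.setdefault(h[:prefix_len], set()).add(h[prefix_len:])
    return buckets

def check(buckets, password, prefix_len=5):
    # Client side: reveal only the prefix; the server returns the whole
    # bucket, and the membership test happens locally, so the server never
    # sees the full hash of the entered password.
    h = sha1_hex(password)
    return h[prefix_len:] in buckets.get(h[:prefix_len], set())
```

Note that the returned bucket contents are exactly what breach extraction attacks exploit: every query necessarily leaks some information about the server's dataset.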
In this work, we formalize the cryptographic leakage of the MIGP protocol and perform a security analysis to assess its impact on the credentials held by the server. We focus on how this leakage aids breach extraction attacks, where an honest-but-curious client interacts with the server to extract information about the stored credentials. Furthermore, we discover additional leakage that arises from the implementation of Cloudflare's deployment of MIGP. We evaluate how the discovered leakage affects the guessing capability of an attacker in relation to breach extraction attacks. Finally, we propose MIGP 2.0, a new iteration of the MIGP protocol designed to minimize data leakage and prevent the introduced attacks.2023-11-30T19:04:57+00:00https://creativecommons.org/licenses/by/4.0/Dario PasquiniDanilo FrancatiGiuseppe AtenieseEvgenios M. Kornaropouloshttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1847Cycle Structure and Observability of Two Types of Galois NFSRs2023-11-30T18:57:27+00:00Xianghan WangJianghua ZhongDongdai LinNonlinear feedback shift registers (NFSRs) are used in many stream ciphers as their main building blocks. One security criterion for the design of a stream cipher is to assure its keystream has a long period. To meet this criterion, the NFSR used in a stream cipher must have a long state cycle. Further, to simultaneously avoid equivalent keys, the keystream's period is not compressed compared to the NFSR's state cycle length, which can be guaranteed if the NFSR is observable in the sense that any two distinct initial states are distinguishable from their resulting output sequences. The cycle structure of a general NFSR remains an open hard problem. Constructing Fibonacci NFSRs with maximum state cycles has therefore attracted much attention, but so far such Fibonacci NFSRs with known feedback functions have been found only for their stage numbers no greater than 33.
Considering that Galois NFSRs may decrease the area and increase the throughput compared to Fibonacci NFSRs, this paper studies two types of $n$-stage Galois NFSRs, whose state transition matrices are circulant matrices with only one nonzero element of 1 in each column. The cycle structure and observability of both types are disclosed using the semi-tensor product based Boolean network approach. In the first type, each Galois NFSR has a state transition matrix in which the position of the element 1 in the first column is even. It has the maximum state cycle for an arbitrary stage number and an explicit feedback function. It is observable if and only if its output function depends on the first state bit. In the second type, each Galois NFSR has a state transition matrix in which the position of the element 1 in the first column is $2^m+1$ for some positive integer $m\leq n-1$, where $n$ is the NFSR's stage number. It has $2^m$ cycles of length $2^{n-m}$, and it is observable if its output function depends on all the state bits whose indices are no smaller than $n-m+1$.
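Cycle-structure claims of this kind can be checked exhaustively for small stage numbers. The sketch below is illustrative only — it enumerates the state cycles of a plain Fibonacci NFSR, not the paper's Galois constructions — and assumes the feedback function makes the state-update map invertible, so the $2^n$ states partition into cycles:

```python
from itertools import product

def step(state, feedback):
    # Fibonacci NFSR update: shift left, new last bit = feedback(state).
    return state[1:] + (feedback(state),)

def cycle_lengths(n, feedback):
    # Partition all 2^n states into cycles of the state-update map
    # (valid when the map is a permutation of the state space).
    seen, lengths = set(), []
    for s0 in product((0, 1), repeat=n):
        if s0 in seen:
            continue
        length, s = 0, s0
        while s not in seen:
            seen.add(s)
            length += 1
            s = step(s, feedback)
        lengths.append(length)
    return sorted(lengths)
```

For example, the pure cyclic shift (feedback returning the first bit) on 3 stages yields cycles of lengths 1, 1, 3, 3, covering all 8 states.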
2023-11-30T18:57:27+00:00https://creativecommons.org/licenses/by/4.0/Xianghan WangJianghua ZhongDongdai Linhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1846New Security Proofs and Complexity Records for Advanced Encryption Standard2023-11-30T17:14:14+00:00Orhun KaraCommon block ciphers like AES, specified by NIST, or KASUMI (A5/3) of GSM are extensively utilized by billions of individuals globally to protect their privacy and maintain confidentiality in daily communications. However, these ciphers lack comprehensive security proofs against the vast majority of known attacks. Currently, security proofs are limited to differential and linear attacks for both AES and KASUMI. For instance, the consensus on the security of AES is not based on formal mathematical proofs but on intensive cryptanalysis over its reduced rounds spanning several decades.
In this work, we introduce new security proofs for AES against another attack method: impossible differential (ID) attacks. We classify ID attacks as reciprocal and nonreciprocal ID attacks. We show that sharp and generic lower bounds can be imposed on the data complexities of reciprocal ID attacks on substitution permutation networks. We prove that the minimum data required for a reciprocal ID attack on AES using a conventional ID characteristic is $2^{66}$ chosen plaintexts whereas a nonreciprocal ID attack involves at least $2^{88}$ computational steps. We mount a nonreciprocal ID attack on 6-round AES for 192-bit and 256-bit keys, which requires only $2^{18}$ chosen plaintexts and outperforms the data complexity of any attack. Given its marginal time complexity, this attack does not pose a substantial threat to the security of AES. However, we have made enhancements to the integral attack on 6-round AES, thereby surpassing the longstanding record for the most efficient attack after a period of 23 years.2023-11-30T17:14:14+00:00https://creativecommons.org/licenses/by/4.0/Orhun Karahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1845Efficient Issuer-Hiding Authentication, Application to Anonymous Credential2023-11-30T16:07:39+00:00Olivier SandersJacques TraoréAnonymous credentials are cryptographic mechanisms enabling users to authenticate themselves with a fine-grained control on the information they leak in the process. They have been the topic of countless papers which have improved the performance of such mechanisms or proposed new schemes able to prove ever-more complex statements about the attributes certified by those credentials. However, whereas these papers have studied in depth the problem of the information leaked by the credential and/or the attributes, almost all of them have surprisingly overlooked the information one may infer from the knowledge of the credential issuer.
In this paper we address this problem by showing how one can efficiently hide the actual issuer of a credential within a set of potential issuers. The novelty of our work is that we do not resort to zero-knowledge proofs but instead show how one can tweak Pointcheval-Sanders signatures to achieve this issuer-hiding property at a very low cost. This results in an efficient anonymous credential system that indeed provides complete control of the information leaked in the authentication process. Our construction is moreover modular and can thus fit a wide spectrum of applications, notably Self-Sovereign Identity (SSI) systems.2023-11-30T16:07:39+00:00https://creativecommons.org/licenses/by/4.0/Olivier SandersJacques Traoréhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1844Unconditionally Secure Commitments with Quantum Auxiliary Inputs2023-11-30T14:45:49+00:00Tomoyuki MorimaeBarak NehoranTakashi YamakawaWe show the following unconditional results on quantum commitments in two related yet different models:
1. We revisit the notion of quantum auxiliary-input commitments introduced by Chailloux, Kerenidis, and Rosgen (Comput. Complex. 2016) where both the committer and receiver take the same quantum state, which is determined by the security parameter,
as quantum auxiliary inputs. We show that computationally-hiding and statistically-binding quantum auxiliary-input commitments exist unconditionally, i.e., without relying on any unproven assumption, while Chailloux et al. assumed a complexity-theoretic assumption,
${\bf QIP}\not\subseteq{\bf QMA}$. On the other hand, we observe that achieving both statistical hiding and statistical binding at the same time is impossible even in the quantum auxiliary-input setting.
To the best of our knowledge, this is the first example of unconditionally proving computational security of any form of (classical or quantum) commitments for which statistical security is impossible. As intermediate steps toward our construction, we introduce and unconditionally construct post-quantum sparse pseudorandom distributions and quantum auxiliary-input EFI pairs which may be of independent interest.
2. We introduce a new model which we call the common reference quantum state (CRQS) model where both the committer and receiver take the same quantum state that is randomly sampled by an efficient setup algorithm. We unconditionally prove that there exist statistically hiding and statistically binding commitments in the CRQS model, circumventing the impossibility in the plain model.
We also discuss their applications to zero-knowledge proofs, oblivious transfers, and multi-party computations.2023-11-30T14:45:49+00:00https://creativecommons.org/licenses/by/4.0/Tomoyuki MorimaeBarak NehoranTakashi Yamakawahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1843Zero-day vulnerability prevention with recursive feature elimination and ensemble learning2023-11-30T13:10:59+00:00Mike Nkongolo Wa NkongoloThis study focuses on spotting and stopping new types of online threats by improving the UGRansome dataset to detect unusual activity in real-time. By blending different machine learning methods, like naïve tree-based ensemble learning and recursive feature elimination (RFE), the research achieves a high accuracy rate of 97%. Naïve Bayes (NB) stands out as the most effective classifier. The suggested setup, combining gradient boosting (GB) and random forest (RF) with NB, effectively identifies and prevents unknown vulnerabilities in computer systems. UGRansome successfully blocks over 100 kilobits per second (kbps) of harmful online traffic by using details pinpointed by the RFE method, specifically uniform resource locators (URLs). This outperforms existing Intrusion Detection System (IDS) datasets. It's particularly good at stopping secure shell attacks, proving the dataset's usefulness in making networks safer. This research marks significant progress in detecting intrusions. The NB model excels in accuracy, precision, and remembering patterns, especially in identifying new threats. Moreover, the suggested naïve tree-based ensemble model shows outstanding accuracy, standing out as the best-performing technique among all models studied.
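For readers unfamiliar with the technique named above, recursive feature elimination can be sketched generically as a greedy loop that repeatedly drops the least useful feature. This is a toy illustration with a stand-in scoring function, not the paper's pipeline or the actual UGRansome features:

```python
def rfe(features, score, n_keep):
    # Greedy recursive feature elimination: at each round, drop the
    # feature whose removal degrades the score the least.
    feats = list(features)
    while len(feats) > n_keep:
        drop = max(feats, key=lambda f: score([g for g in feats if g != f]))
        feats.remove(drop)
    return feats

def toy_score(feats):
    # Stand-in for a cross-validated model score: pretend only the
    # hypothetical "url" and "port" features carry signal.
    return len(set(feats) & {"url", "port"})
```

In practice the scorer would be a cross-validated classifier (e.g. the NB/RF/GB ensemble the abstract describes) rather than this toy function.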
Applying the UGRansome properties-based rule noticeably changes how traffic is sorted, decreasing unknown traffic while increasing unclassified traffic, which requires more investigation.2023-11-30T13:10:59+00:00https://creativecommons.org/licenses/by/4.0/Mike Nkongolo Wa Nkongolohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1842Leverage Staking with Liquid Staking Derivatives (LSDs): Opportunities and Risks2023-12-01T12:21:25+00:00Xihan XiongZhipeng WangXi ChenWilliam KnottenbeltMichael HuthLido, the leading Liquidity Staking Derivative (LSD) provider on Ethereum, allows users to stake an arbitrary amount of ETH to receive stETH, which can be integrated with Decentralized Finance (DeFi) protocols such as Aave. The composability between Lido and Aave enables a novel strategy called “leverage staking”, where users stake ETH on Lido to acquire stETH, utilize stETH as collateral on Aave to borrow ETH, and then restake the borrowed ETH on Lido. Users can iteratively execute this process to optimize potential returns based on their risk profile.
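The iterative loop just described compounds geometrically: with a loan-to-value ratio r, restaking the borrowed ETH over k rounds stakes roughly (1 + r + … + r^(k-1)) times the initial capital, approaching 1/(1-r) times in the limit. A minimal arithmetic sketch — the 50% LTV in the usage note is hypothetical, and interest rates, the stETH/ETH exchange rate, and liquidation thresholds are all ignored:

```python
def total_staked(initial_eth: float, ltv: float, rounds: int) -> float:
    # Each round: stake the current amount, then borrow ltv * that amount
    # against the newly deposited stETH collateral and restake it.
    staked, stake = 0.0, initial_eth
    for _ in range(rounds):
        staked += stake
        stake *= ltv
    return staked
```

For instance, with ltv=0.5, three rounds turn 1 ETH of capital into 1.75 ETH staked (1 + 0.5 + 0.25), with a geometric limit of 1/(1-0.5) = 2 ETH; the amplified exposure is exactly what concentrates the cascading-liquidation risk studied in the paper.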
This paper systematically studies the opportunities and risks associated with leverage staking. We are the first to formalize the stETH-ETH leverage staking strategy within the Lido-Aave ecosystem. Our empirical study identifies 262 leverage staking positions on Ethereum, with an aggregated staking amount of 295,243 ETH (482M USD). We discover that 90.13% of leverage staking positions have achieved higher returns than conventional staking. Furthermore, we perform stress tests to evaluate the risk introduced by leverage staking under extreme conditions. We find that leverage staking significantly amplifies the risk of cascading liquidations. We hope this paper can inform and encourage the development of robust risk management approaches to protect the Lido-Aave LSD ecosystem.2023-11-30T11:03:01+00:00https://creativecommons.org/licenses/by/4.0/Xihan XiongZhipeng WangXi ChenWilliam KnottenbeltMichael Huthhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1841Unclonable Cryptography with Unbounded Collusions2023-11-30T07:08:05+00:00Alper ÇakanVipul GoyalThe quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program in a quantum state such that a user in possession of $k$ such states cannot create $k+1$ working copies. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven to be notoriously hard to achieve.
In this work, we construct public-key encryption and functional encryption schemes whose secret keys are copy-protected against unbounded collusions in the plain model (i.e. without any idealized oracles), assuming (post-quantum) subexponentially secure $\mathcal{iO}$, one-way functions and LWE. This resolves a long-standing open question of constructing fully collusion-resistant copy-protected functionalities raised by multiple previous works.
Prior to our work, copy-protected functionalities were known only in restricted collusion models: either an a priori bound on the collusion size was needed, in the plain model under the same assumptions as ours (Liu, Liu, Qian, Zhandry [TCC'22]), or the adversary was only prevented from doubling their number of working programs, in a structured quantum oracle model (Aaronson [CCC'09]).
We obtain our results through a novel technique which uses identity-based encryption to construct unbounded collusion resistant copy-protection schemes from $1\to2$ secure schemes. This is analogous to the technique of using digital signatures to construct full-fledged quantum money from single banknote schemes (Lutomirski et al. [ICS'09], Farhi et al. [ITCS'12], Aaronson and Christiano [STOC'12]). We believe our technique is of independent interest.
Along the way, we also construct a puncturable functional encryption scheme whose master secret key can be punctured at all functions $f$ such that $f(m_0) \neq f(m_1)$. This might also be of independent interest.Quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection where we encode a program in a quantum state such that a user in possession of $k$ such states cannot create $k+1$ working copies. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven to be notoriously hard to achieve.
In this work, we construct public-key encryption and functional encryption schemes whose secret keys are copy-protected against unbounded collusions in the plain model (i.e. without any idealized oracles), assuming (post-quantum) subexponentially secure $\mathcal{iO}$, one-way functions and LWE. This resolves a long-standing open question of constructing fully collusion-resistant copy-protected functionalities raised by multiple previous works.
Prior to our work, copy-protected functionalities were known only in restricted collusion models where either an a-priori bound on the collusion size was needed, in the plain model with the same assumptions as ours (Liu, Liu, Qian, Zhandry [TCC'22]), or adversary was only prevented from doubling their number of working programs, in a structured quantum oracle model (Aaronson [CCC'09]).
We obtain our results through a novel technique which uses identity-based encryption to construct unbounded collusion resistant copy-protection schemes from $1\to2$ secure schemes. This is analogous to the technique of using digital signatures to construct full-fledged quantum money from single banknote schemes (Lutomirski et al. [ICS'09], Farhi et al. [ITCS'12], Aaronson and Christiano [STOC'12]). We believe our technique is of independent interest.
Along the way, we also construct a puncturable functional encryption scheme whose master secret key can be punctured at all functions $f$ such that $f(m_0) \neq f(m_1)$. This might also be of independent interest.
2023-11-30T07:08:05+00:00 · Alper Çakan, Vipul Goyal · https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1840 · Unconditionally secure quantum commitments with preprocessing · 2023-11-30T01:26:44+00:00 · Luowen Qian · https://creativecommons.org/licenses/by/4.0/
We demonstrate how to build computationally secure commitment schemes with the aid of quantum auxiliary inputs without unproven complexity assumptions. Furthermore, the quantum auxiliary input can be prepared either (1) efficiently through a trusted setup similar to the classical common random string model, or (2) strictly between the two involved parties in uniform exponential time. Classically this remains impossible without first proving $\mathsf{P} \neq \mathsf{NP}$.

https://eprint.iacr.org/2023/1839 · Ring-LWE Hardness Based on Ideals of Hidden Orders of Number Fields · 2023-11-30T00:32:10+00:00 · Charanjit S Jutla, Chengyu Lin
We extend the known pseudorandomness of Ring-LWE to be based on lattices that do not correspond to any ideal of any order in the underlying number field.
In earlier works of Lyubashevsky et al. (EUROCRYPT 2010) and Peikert et al. (STOC 2017), the hardness of RLWE was based on ideal lattices of the ring of integers of number fields, which are known to be Dedekind domains. While these works extended Regev's (STOC 2005) quantum polynomial-time reduction for LWE, thus allowing more efficient and more structured cryptosystems, the additional algebraic structure of ideals of Dedekind domains leaves open the possibility that such ideal lattices are not as hard as general lattices.
In this work we show that hardness of $q$-Ring-LWE can be based on worst-case hardness of ideal lattices in arbitrary orders $O$, as long as the order $O$ satisfies the property that $\frac{1}{m}\cdot O$ contains the ring of integers, for some $m$ co-prime to $q$. Further, the hard lattice problems need not be given the order $O$ itself as input. The reduction requires the noise to be a factor $m$ larger than in the original Ring-LWE reduction. We also show that for the power-of-two cyclotomic number fields, there exist orders with $m=4$ such that non-trivial ideals of the order, which are not contained in the conductor, are non-invertible.
Another reduction shows that hardness of $q$-Ring-LWE can be based on worst-case hardness of lattices that correspond to sums of ideal lattices in arbitrary and different orders in the number field, as long as the (set of) orders $\{O_i\}$ satisfy the property that $\frac{1}{m}\cdot O_i$ contains the ring of integers, for some $m$ co-prime to $q$. We also show that for the power-of-two cyclotomic number fields, there exist orders $O_1, O_2$ with $m=8$ such that there are ideals $I_1, I_2$ of $O_1, O_2$ resp. with $I_1+I_2$ not an ideal of any order in the number field.
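As a toy illustration of the order condition $\frac{1}{m}\cdot O \supseteq O_K$ (a hypothetical example for intuition, not taken from the paper), consider the Gaussian field $K = \mathbb{Q}(i)$ with ring of integers $O_K = \mathbb{Z}[i]$, and the non-maximal order $O = \mathbb{Z}[2i]$:

```latex
% Toy example: K = Q(i), O_K = Z[i], and the non-maximal order O = Z + 2iZ.
% With m = 2, any a + bi in O_K satisfies a \in (1/2)Z and bi \in iZ, so
% O_K is contained in (1/2)O, i.e. the hypothesis holds with m = 2,
% which is co-prime to any odd modulus q.
\[
  O \;=\; \mathbb{Z} + 2i\,\mathbb{Z} \;\subsetneq\; \mathbb{Z}[i] \;=\; O_K,
  \qquad
  \tfrac{1}{2}\,O \;=\; \tfrac{1}{2}\mathbb{Z} + i\,\mathbb{Z} \;\supseteq\; O_K .
\]
```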
2023-11-30T00:32:10+00:00 · Charanjit S Jutla, Chengyu Lin · https://creativecommons.org/licenses/by-sa/4.0/

https://eprint.iacr.org/2023/1838 · Quantifying risks in cryptographic selection processes · 2023-11-29T17:40:00+00:00 · Daniel J. Bernstein · https://creativecommons.org/licenses/by/4.0/
There appears to be a widespread belief that some processes of selecting cryptosystems are less risky than other processes. As a case study of quantifying the difference in risks, this paper compares the currently-known-failure rates of three large groups of cryptosystems: (1) the round-1 submissions to the NIST Post-Quantum Cryptography Standardization Project, (2) the round-1 submissions not broken by the end of round 1, and (3) the round-1 submissions selected by NIST for round 2 of the same project. These groups of cryptosystems turn out to have currently-known-failure rates that are strikingly high, and that include statistically significant differences across the groups, not matching the pattern of differences that one might expect. Readers are cautioned that the actual failure rates could be much higher than the currently-known-failure rates.

https://eprint.iacr.org/2023/1837 · More forging (and patching) of tropical signatures · 2023-11-29T16:13:03+00:00 · Daniel R. L. Brown, Chris Monico · https://creativecommons.org/licenses/by/4.0/
Panny [3] described how to forge the “tropical signatures” proposed by Chen, Grigoriev and Shpilrain [1]. (These signatures are loosely related to the NP-complete problem of factoring tropical polynomials).
We describe more methods to forge these tropical signatures. We also describe some patches that thwart all but one of these forgery methods (which we summarize as re-hashing an honest signature).

https://eprint.iacr.org/2023/1836 · An Incremental PoSW for General Weight Distributions · 2023-11-29T15:53:00+00:00 · Hamza Abusalah, Valerio Cini · https://creativecommons.org/licenses/by/4.0/
A proof of sequential work (PoSW) scheme allows the prover to convince a verifier that it computed a certain number of computational steps sequentially.
Very recently, graph-labeling PoSW schemes found applications in light-client blockchain protocols, most notably bootstrapping. A bootstrapping protocol allows a light client, with minimal information about the blockchain, to hold a commitment to its stable prefix. An incremental PoSW (iPoSW) scheme allows the prover to non-trivially increment proofs: given $\chi,\pi_1$ and integers $N_1,N_2$ such that $\pi_1$ is a valid proof for $N_1$, it generates a valid proof $\pi$ for $N_1+N_2$.
In this work, we construct an iPoSW scheme based on the skiplist-based PoSW scheme of Abusalah et al. and prove its security in the random oracle model by employing the powerful on-the-fly sampling technique of Döttling et al. Moreover, unlike the iPoSW scheme of Döttling et al., ours is the first iPoSW scheme which is suitable for constructing incremental non-interactive arguments of chain knowledge (SNACK) schemes, which are at the heart of space- and time-efficient blockchain light-client protocols. In particular, our scheme works for general weight distributions, which we characterize as incrementally sampleable distributions. Our general treatment recovers the distribution underlying the scheme of Döttling et al. as well as the distribution underlying the SNACK-enabled bootstrapping application as special cases. In realizing our general construction, we develop a new on-the-fly sampling technique.
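To make the incremental interface concrete, here is a minimal sketch (not the skiplist-based scheme above, and with none of its succinctness or fast-verification guarantees): a plain hash chain, where a "proof" for $N$ steps is the final chain label, is trivially incrementable because the prover simply keeps hashing from where it stopped.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def prove(chi: bytes, n: int) -> bytes:
    """Compute n sequential hash steps starting from statement chi."""
    label = chi
    for _ in range(n):
        label = H(label)
    return label

def increment(proof: bytes, n2: int) -> bytes:
    """Extend a proof for N1 steps to one for N1 + n2 steps,
    without redoing the first N1 steps."""
    label = proof
    for _ in range(n2):
        label = H(label)
    return label

def verify(chi: bytes, n: int, proof: bytes) -> bool:
    # Naive verifier that recomputes everything; a real PoSW verifier
    # is exponentially faster than the prover.
    return prove(chi, n) == proof

chi = b"statement"
pi1 = prove(chi, 100)
pi = increment(pi1, 50)   # valid proof for 150 steps, reusing pi1
assert verify(chi, 150, pi)
```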
2023-11-29T15:53:00+00:00 · Hamza Abusalah, Valerio Cini · https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1835 · ID-CAKE: Identity-based Cluster Authentication and Key Exchange Scheme for Message Broadcasting and Batch Verification in VANETs · 2023-11-29T15:06:58+00:00 · Apurva K Vangujar, Alia Umrani, Paolo Palmieri
Vehicle Ad Hoc Networks (VANETs) play a pivotal role in intelligent transportation systems, offering dynamic communication between vehicles, Road Side Units (RSUs), and the internet. Given the open-access nature of VANETs and the associated threats, such as impersonation and privacy violations, ensuring the security of these communications is of utmost importance.
This paper presents the Identity-based Cluster Authentication and Key Exchange (ID-CAKE) scheme, a new approach to address security challenges in VANETs. The ID-CAKE scheme integrates the Cluster Consensus Identity-based Identification (CCIBI) with Zero-Knowledge (ZK) proofs and the Identity-based Multireceiver Key Exchange Mechanism (ID-mKEM) signature scheme. This integration provides robust authorization via CCIBI, while ID-mKEM signatures ensure message integrity and guarantee both non-repudiation and unforgeability through mKEM for message broadcasting. The scheme employs a novel three-party ZK proof for batch verification using mKEM, which significantly reduces computational burdens. Our scheme also ensures anonymity and unlinkability by introducing pseudo-identities to all users in the cluster. The rigorous security proofs provided confirm the resilience of the ID-CAKE scheme against potential attacks across different scenarios, based on the hardness of the elliptic curve computational Diffie-Hellman problem under the random oracle model. The ID-CAKE scheme establishes a robust security framework for VANETs, and its introduction highlights potential pathways for future exploration in the realm of VANET security.
2023-11-29T15:06:58+00:00 · Apurva K Vangujar, Alia Umrani, Paolo Palmieri · https://creativecommons.org/publicdomain/zero/1.0/

https://eprint.iacr.org/2023/1834 · BBB PRP Security of the Lai-Massey Mode · 2023-11-29T12:07:33+00:00 · Ritam Bhaumik, Mohammad Amin Raeisi
In spite of being a popular technique for designing block ciphers, Lai-Massey networks have received considerably less attention from a security-analysis point of view than Feistel networks and substitution-permutation networks. In this paper we study the beyond-birthday-bound (BBB) security of Lai-Massey networks with independent random round functions against chosen-plaintext adversaries.
Concretely, we show that five rounds are necessary and sufficient to achieve BBB security.
2023-11-29T12:07:33+00:00 · Ritam Bhaumik, Mohammad Amin Raeisi · https://creativecommons.org/licenses/by-nc-sa/4.0/

https://eprint.iacr.org/2023/1833 · Cryptanalysis of QARMAv2 · 2023-11-29T11:02:32+00:00 · Hosein Hadipour, Yosuke Todo
QARMAv2 is a general-purpose and hardware-oriented family of lightweight tweakable block ciphers (TBCs) introduced in ToSC 2023. QARMAv2, as a redesign of QARMA with a longer tweak and tighter security margins, is also designed to be suitable for cryptographic memory protection and control flow integrity. The designers of QARMAv2 provided a relatively comprehensive security analysis in the design specification, e.g., some bounds for the number of attacked rounds in differential and boomerang analysis, together with some concrete impossible differential, zero-correlation, and integral distinguishers. As one of the first third-party cryptanalyses of QARMAv2, Hadipour et al. significantly improved the integral distinguishers of QARMAv2 and provided the longest concrete distinguishers of QARMAv2 so far. However, they provided no key recovery attack based on their distinguishers.
This paper delves into the cryptanalysis of QARMAv2 to enhance our understanding of its security. Given that the integral distinguishers of QARMAv2 are the longest concrete distinguishers for this cipher so far, we focus on integral attacks. To this end, we first further improve the automatic tool introduced by Hadipour et al. for finding integral distinguishers of TBCs following the TWEAKEY framework. This new tool exploits the MixColumns property of QARMAv2 to find integral distinguishers more suitable for key recovery attacks. Then, we combine several techniques for integral key recovery attacks, e.g., meet-in-the-middle and partial-sum techniques, to build a fine-grained integral key recovery attack on QARMAv2. Notably, we demonstrate how to leverage the low data complexity of the integral distinguishers of QARMAv2 to reduce the memory complexity of the meet-in-the-middle technique. As a result, we managed to propose the first concrete key recovery attacks on reduced-round versions of QARMAv2 by attacking 13 rounds of QARMAv2-64-128 with a single tweak block, 14 rounds of QARMAv2-64-128 with two independent tweak blocks, and 16 rounds of QARMAv2-128-256 with two independent tweak blocks. Our attacks do not compromise the claimed security of QARMAv2, but they shed more light on the cryptanalysis of this cipher.
2023-11-29T11:02:32+00:00 · Hosein Hadipour, Yosuke Todo · https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1832 · A Note On the Universality of Black-box MKtP Solvers · 2023-11-29T10:41:57+00:00 · Noam Mazor, Rafael Pass
The relationships between various meta-complexity problems are not well understood in the worst-case regime, including whether the search version is harder than the decision version, whether the hardness scales with the ``threshold", and how the hardness of different meta-complexity problems relates to one another, and to the task of function inversion.
In this note, we present resolutions to some of these questions with respect to the \emph{black-box} analog of these problems. In more detail, let $MK^t_MP[s]$ denote the language consisting of strings $x$ with $K_{M}^t(x) < s(|x|)$, where $K_M^t(x)$ denotes the $t$-bounded Kolmogorov complexity of $x$ with $M$ as the underlying (Universal) Turing machine, and let $search-MK^t_MP[s]$ denote the search version of the same problem.
We show that if for every universal Turing machine $U$ there exists a $2^{\alpha n}poly(n)$-size $U$-oracle-aided circuit deciding $MK^t_UP[n-O(1)]$, then for every function $s$ and every (not necessarily universal) Turing machine $M$, there exists a $2^{\alpha s(n)}poly(n)$-size $M$-oracle-aided circuit solving $search-MK^t_MP[s(n)]$; this in turn yields circuits of roughly the same size for both the Minimum Circuit Size Problem (MCSP) and the function inversion problem, as they can be thought of as instantiating $MK^t_MP$ with particular choices of (non-universal) TMs $M$ (the circuit emulator in the case of MCSP, and the function evaluation in the case of function inversion).
As a corollary of independent interest, we get that the complexity of black-box function inversion is (roughly) the same as the complexity of black-box deciding $MK^t_UP[n-O(1)]$ for any universal TM $U$; that is, also in the worst-case regime, black-box function inversion is ``equivalent" to black-box deciding $MK^t_UP$.
2023-11-29T10:41:57+00:00 · Noam Mazor, Rafael Pass · https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1831 · A CP-based Automatic Tool for Instantiating Truncated Differential Characteristics - Extended Version · 2023-11-29T10:25:14+00:00 · François Delobel, Patrick Derbez, Arthur Gontier, Loïc Rouquette, Christine Solnon
An important criterion to assert the security of a cryptographic primitive is its resistance against differential cryptanalysis. For word-oriented primitives, a common technique to determine the number of rounds required to ensure immunity against differential distinguishers is to consider truncated differential characteristics and to count the number of active S-boxes. Doing so allows one to provide an upper bound on the probability of the best differential characteristic at a reduced computational cost. However, in order to design very efficient primitives, it might be necessary to evaluate the probability more accurately. This is usually done in a second step, during which one tries to instantiate truncated differential characteristics with actual values and computes the corresponding probability. This step is usually done either with ad-hoc algorithms or with CP, SAT or MILP models that are solved by generic solvers. In this paper, we present a generic tool for automatically generating these models to handle all word-oriented ciphers. Furthermore, the running times to solve these models are very competitive
with all the previous dedicated approaches.
2023-11-29T10:25:14+00:00 · François Delobel, Patrick Derbez, Arthur Gontier, Loïc Rouquette, Christine Solnon · https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1830 · Vector Commitments with Efficient Updates · 2023-11-28T23:39:11+00:00 · Ertem Nusret Tas, Dan Boneh
Dynamic vector commitments that enable local updates of opening proofs have applications ranging from verifiable databases with membership changes to stateless clients on blockchains. In these applications, each user maintains a relevant subset of the committed messages and the corresponding opening proofs with the goal of ensuring a succinct global state. When the messages are updated, users are given some global update information and update their opening proofs to match the new vector commitment. We investigate the relation between the size of the update information and the runtime complexity needed to update an individual opening proof. Existing vector commitment schemes require that either the information size or the runtime scale linearly in the number $k$ of updated state elements. We construct a vector commitment scheme that asymptotically achieves both length and runtime that are sublinear in $k$, namely $k^\nu$ and $k^{1-\nu}$ for any $\nu \in (0,1)$. We prove an information-theoretic lower bound on the relation between the update information size and runtime complexity that shows the asymptotic optimality of our scheme. For $\nu = 1/2$, our construction outperforms Verkle commitments by about a factor of $2$ in terms of both the update information size and runtime, but makes use of larger public parameters.
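For intuition on updating opening proofs, here is a minimal Merkle-tree sketch (the classic baseline vector commitment with local openings, not the sublinear scheme above): a user holds the opening proof for its own position, and when another position changes, the newly broadcast path nodes let the user patch its proof.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves):
    """Return list of levels: level 0 = hashed leaves, last level = [root]."""
    level = [H(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def open_proof(levels, idx):
    """Sibling path for position idx."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[idx ^ 1])
        idx //= 2
    return proof

def verify(root, idx, msg, proof):
    node = H(msg)
    for sib in proof:
        node = H(node, sib) if idx % 2 == 0 else H(sib, node)
        idx //= 2
    return node == root

# Commit to n = 8 messages; user 0 keeps its own opening proof.
msgs = [bytes([i]) for i in range(8)]
levels = build_tree(msgs)
root = levels[-1][0]
proof0 = open_proof(levels, 0)
assert verify(root, 0, msgs[0], proof0)

# Message 5 changes.  The update information consists of the O(log n)
# new nodes along position 5's path; here we recompute the whole tree
# as a stand-in for applying that broadcast information.
msgs[5] = b"\xff"
new_levels = build_tree(msgs)
new_root = new_levels[-1][0]
new_proof0 = open_proof(new_levels, 0)  # patched proof for user 0
assert verify(new_root, 0, msgs[0], new_proof0)
```

This baseline needs logarithmic update information per changed element, i.e. it scales linearly in the number $k$ of updated elements, which is exactly the cost the paper's $k^\nu$ / $k^{1-\nu}$ trade-off improves upon.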
2023-11-28T23:39:11+00:00 · Ertem Nusret Tas, Dan Boneh · https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1829 · End-to-End Encrypted Zoom Meetings: Proving Security and Strengthening Liveness · 2023-12-01T03:21:05+00:00 · Yevgeniy Dodis, Daniel Jost, Balachandar Kesavan, Antonio Marcedone
In May 2020, Zoom Video Communications, Inc. (Zoom) announced a multi-step plan to comprehensively support end-to-end encrypted (E2EE) group video calls and subsequently rolled out basic E2EE support to customers in October 2020. In this work we provide the first formal security analysis of Zoom's E2EE protocol, and also lay the foundation for the general problem of E2EE group video communication.
We observe that the vast security literature analyzing asynchronous messaging does not translate well to synchronous video calls. Namely, while strong forms of forward secrecy and post-compromise security are less important for (typically short-lived) video calls, various liveness properties become crucial. For example, participants should quickly learn of updates to the meeting roster and key, displayed media streams should be recent, and banned participants should promptly lose any access to the meeting. Our main results are as follows:
1. Propose a new notion of leader-based continuous group key agreement with liveness, which accurately captures the E2EE properties specific to the synchronous communication scenario.
2. Prove security of the core of Zoom's E2EE meetings protocol in the above well-defined model.
3. Propose ways to strengthen Zoom's liveness properties by simple modifications to the original protocol, which subsequently influenced updates implemented in production.
2023-11-28T17:58:18+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Yevgeniy DodisDaniel JostBalachandar KesavanAntonio Marcedonehttps://creativecommons.org/licenses/by-nc-sa/4.0/https://eprint.iacr.org/2023/1828Sender-Anamorphic Encryption Reformulated: Achieving Robust and Generic Constructions2023-11-28T14:28:54+00:00Yi WangRongmao ChenXinyi HuangMoti YungMotivated by the violation of two fundamental assumptions in secure communication - receiver-privacy and sender-freedom - by a certain entity referred to as ``the dictator'', Persiano et al. introduced the concept of Anamorphic Encryption (AME) for public key cryptosystems (EUROCRYPT 2022). Specifically, they presented receiver/sender-AME, directly tailored to scenarios where the receiver-privacy and sender-freedom assumptions are compromised, respectively. In receiver-AME, entities share a double key to communicate in an anamorphic fashion, raising concerns about the online distribution of the double key without detection by the dictator. The sender-AME with no shared secret is a potential candidate for key distribution. However, the only such known schemes (i.e., LWE and Dual LWE encryptions) suffer from an intrinsic limitation and cannot achieve reliable distribution.
Here, we reformulate the sender-AME, present the notion of $\ell$-sender-AME, and formalize the properties of (strong) security and robustness. Robustness refers to guaranteed delivery of duplicate messages to the intended receiver, ensuring that decrypting normal ciphertexts in an anamorphic way or decrypting anamorphic ciphertexts with an incorrect duplicate secret key results in an explicit abort signal. We first present a simple construction for pseudo-random and robust public key encryption that shares a similar idea with the public-key stegosystem of von Ahn and Hopper (EUROCRYPT 2004). Then, inspired by Chen et al.'s malicious algorithm-substitution attack (ASA) on key encapsulation mechanisms (KEM) (ASIACRYPT 2020), we give a generic construction for hybrid PKE with a special KEM that encompasses well-known schemes, including the ElGamal and Cramer-Shoup cryptosystems.
The constructions of $\ell$-sender-AME motivate us to explore the relations between AME, ASA on PKE, and public-key stegosystem. The results show that a strongly secure $\ell$-sender-AME is such a strong primitive that it implies reformulated receiver-AME, public-key stegosystem, and generalized ASA on PKE. By expanding the scope of sender-anamorphic encryption and establishing its robustness, as well as exploring the connections among existing notions, we advance secure communication protocols under challenging conditions.
2023-11-28T14:28:54+00:00https://creativecommons.org/licenses/by/4.0/Yi WangRongmao ChenXinyi HuangMoti Yunghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1827Key Exchange in the Post-Snowden Era: UC Secure Subversion-Resilient PAKE2023-11-28T09:51:06+00:00Suvradip ChakrabortyLorenzo MaglioccoBernardo MagriDaniele VenturiPassword-Authenticated Key Exchange (PAKE) allows two parties to establish a common high-entropy secret from a possibly low-entropy pre-shared secret such as a password. In this work, we provide the first PAKE protocol with subversion resilience in the framework of universal composability (UC), where the latter roughly means that UC security still holds even if one of the two parties is malicious and the honest party's code has been subverted (in an undetectable manner).
We achieve this result by sanitizing the oblivious-transfer-based (OT-based) PAKE protocol due to Canetti et al. (PKC'12) via cryptographic reverse firewalls in the UC framework (Chakraborty et al., EUROCRYPT'22). This requires new techniques, which along the way help us uncover new cryptographic primitives with sanitation-friendly properties (such as OT, dual-mode cryptosystems, and signature schemes).
As an additional contribution, we delve deeper into the backbone of communication required in the subversion-resilient UC framework, extending it to the unauthenticated setting, in line with the work of Barak et al. (CRYPTO'05).
2023-11-28T09:51:06+00:00https://creativecommons.org/licenses/by/4.0/Suvradip ChakrabortyLorenzo MaglioccoBernardo MagriDaniele Venturihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1826Load-Balanced Server-Aided MPC in Heterogeneous Computing2023-11-28T08:59:01+00:00Yibiao LuBingsheng ZhangKui RenMost existing MPC protocols consider the homogeneous setting, where all the MPC players are assumed to have identical communication and computation resources. In practice, the weakest player often becomes the bottleneck of the entire MPC protocol execution. In this work, we initiate the study of so-called load-balanced MPC in heterogeneous computing. A load-balanced MPC protocol can adjust the workload of each player accordingly to maximize the overall resource utilization. In particular, we propose new notions called composite circuit and composite garbling scheme, and construct two efficient server-aided protocols with malicious security and semi-honest security, respectively. Our maliciously secure protocol is over 400$\times$ faster than the authenticated garbling protocol (CCS'17); our semi-honest protocol is up to 173$\times$ faster than the optimized BMR protocol (CCS'16).
2023-11-28T08:59:01+00:00https://creativecommons.org/licenses/by/4.0/Yibiao LuBingsheng ZhangKui Renhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1825Unclonable Cryptography in the Plain Model2023-11-28T07:53:46+00:00Céline ChevalierPaul HermouetQuoc-Huy VuBy leveraging the no-cloning principle of quantum mechanics, unclonable cryptography enables us to achieve novel cryptographic protocols that are otherwise impossible classically. The two most notable examples of unclonable cryptography are quantum copy-protection and unclonable encryption. Despite receiving a lot of attention in recent years, two important open questions still remain: copy-protection for point functions in the plain model, which is usually considered as a feasibility demonstration, and unclonable encryption with unclonable indistinguishability security in the plain model.
In this work, by relying on previous works of Coladangelo, Liu, Liu, and Zhandry (Crypto’21) and Culf and Vidick (Quantum’22), we establish a new monogamy-of-entanglement property for subspace coset states, which allows us to obtain the following new results:
• We show that copy-protection of point functions exists in the plain model, with different challenge distributions (including arguably the most natural ones).
• We show, for the first time, that unclonable encryption with unclonable indistinguishability security exists in the plain model.
2023-11-28T07:53:46+00:00https://creativecommons.org/licenses/by/4.0/Céline ChevalierPaul HermouetQuoc-Huy Vuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1824Learning with Errors over Group Rings Constructed by Semi-direct Product2023-12-01T15:16:05+00:00Jiaqi LiuFang-Wei FuThe Learning with Errors (LWE) problem has been widely utilized as a foundation for numerous cryptographic tools over the years. In this study, we focus on an algebraic variant of the LWE problem called Group ring LWE (GR-LWE). We select group rings (or their direct summands) that underlie specific families of finite groups constructed by taking the semi-direct product of two cyclic groups. Unlike the Ring-LWE problem described in \cite{lyubashevsky2010ideal}, the multiplication operation in the group rings considered here is non-commutative. As an extension of Ring-LWE, it maintains computational hardness and can potentially be applied in many cryptographic scenarios. In this paper, we present two polynomial-time quantum reductions. Firstly, we provide a quantum reduction from the worst-case shortest independent vectors problem (SIVP) in ideal lattices with polynomial approximation factor to the search version of GR-LWE. This reduction requires that the underlying group ring possesses certain mild properties. Secondly, we present another quantum reduction for two types of group rings, where the worst-case SIVP problem is directly reduced to the (average-case) decision GR-LWE problem. The pseudorandomness of GR-LWE samples guaranteed by this reduction can consequently be leveraged to construct semantically secure public-key cryptosystems.
2023-11-28T01:23:26+00:00https://creativecommons.org/publicdomain/zero/1.0/Jiaqi LiuFang-Wei Fuhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2023/1823PQC-NN: Post-Quantum Cryptography Neural Network2023-11-27T14:50:49+00:00Abel C. H. ChenIn recent years, quantum computers and Shor’s quantum algorithm have been able to effectively solve NP (Non-deterministic Polynomial-time) problems such as prime factorization and discrete logarithm problems, posing a threat to current mainstream asymmetric cryptography, including RSA and Elliptic Curve Cryptography (ECC).
As a result, the National Institute of Standards and Technology (NIST) in the United States has called for Post-Quantum Cryptography (PQC) methods, including lattice-based, code-based, multivariate, and hash-based cryptography methods, for resisting quantum computing attacks. Therefore, this study proposes a PQC neural network (PQC-NN) that maps a code-based PQC method to a neural network structure and enhances the security of ciphertexts with non-linear activation functions, random perturbation of ciphertexts, and uniform distribution of ciphertexts. The main innovations of this study include: (1) constructing a neural network structure that complies with code-based PQC, where the weight sets between the input layer and the ciphertext layer can be used as a public key for encryption, and the weight sets between the ciphertext layer and the output layer can be used as a private key for decryption; (2) adding random perturbations to the ciphertext layer, which can be removed during the decryption phase to restore the original plaintext; (3) constraining the output values of the ciphertext layer to closely follow a uniform distribution by adding the cumulative distribution function (CDF) values of the chi-square distribution to the loss function, ensuring that the neural network produces a sufficiently uniform distribution for the output values of the ciphertext layer. In practical experiments, this study uses cellular network signals as a case study to demonstrate that encryption and decryption can be performed by the proposed PQC neural network with uniformly distributed ciphertexts.
In the future, the proposed PQC neural network could be applied to various applications.
2023-11-27T14:50:49+00:00https://creativecommons.org/publicdomain/zero/1.0/Abel C. H. Chenhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2023/1822Rectangular Attack on VOX2023-11-27T09:46:09+00:00Gilles Macario-RatJacques PatarinBenoit CogliatiJean-Charles FaugèrePierre-Alain FouqueLouis GouinRobin LarrieuBrice MinaudVOX has been submitted to the NIST Round 1 Additional Signature of the Post-Quantum Signature Competition in June 2023. VOX is a strengthened variant of UOV which uses the Quotient-Ring (QR) setting to reduce the public-key size.
At the end of August 2023, Furue and Ikematsu posted a note on the NIST mailing list, indicating that the parameters of VOX can be attacked efficiently using the rectangular attack in the QR setting.
In this note, we explain the attack in the specific case of VOX, detail its complexity, and show that, as Furue and Ikematsu indicated, the attack can be completely avoided by adding one more constraint on the parameter selection. Finally, we show that this constraint does not increase the sizes of the public keys or signatures.
2023-11-27T09:46:09+00:00https://creativecommons.org/licenses/by/4.0/Gilles Macario-RatJacques PatarinBenoit CogliatiJean-Charles FaugèrePierre-Alain FouqueLouis GouinRobin LarrieuBrice Minaudhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1821Cryptanalysis of TS-Hash2023-11-26T12:35:42+00:00Aleksei UdovenkoThis note presents attacks on the lightweight hash function TS-Hash proposed by Tsaban, including a polynomial-time preimage attack for short messages (at most n/2 bits), high-probability differentials, a general subexponential-time preimage attack, and linearization techniques.2023-11-26T12:35:42+00:00https://creativecommons.org/licenses/by/4.0/Aleksei Udovenkohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1820Chipmunk: Better Synchronized Multi-Signatures from Lattices2023-11-27T10:06:03+00:00Nils FleischhackerGottfried HeroldMark SimkinZhenfei ZhangMulti-signatures allow for compressing many signatures for the same message that were generated under independent keys into one small aggregated signature.
This primitive is particularly useful for proof-of-stake blockchains, like Ethereum, where the same block is signed by many signers, who vouch for the block's validity.
Being able to compress all signatures for the same block into a short string significantly reduces the on-chain storage costs, which is an important efficiency metric for blockchains.
In this work, we consider multi-signatures in the synchronized setting, where the signing algorithm takes an additional time parameter as input and it is only required that signatures for the same time step are aggregatable.
The synchronized setting is simpler than the general multi-signature setting, but is sufficient for most blockchain related applications, as signers are naturally synchronized by the length of the chain.
We present Chipmunk, a concretely efficient lattice-based multi-signature scheme in the synchronized setting that allows for signing an a-priori bounded number of messages.
Chipmunk allows for non-interactive aggregation of signatures and is secure against rogue-key attacks.
The construction is plausibly secure against quantum adversaries as our security relies on the assumed hardness of the short integer solution problem.
We significantly improve upon the previously best known construction in this setting by Fleischhacker, Simkin, and Zhang (CCS 2022).
Our aggregate signature size is $5.6 \times$ smaller, and for $112$ bits of security, our construction allows for compressing 8192 individual signatures into a multi-signature of size around $136$ KB.
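As a quick sanity check on the figures above (our own back-of-the-envelope arithmetic, not a number reported by the authors beyond those quoted), the amortized cost per signer works out to roughly 17 bytes:

```python
# Back-of-the-envelope check of the stated compression; only the two
# quoted figures (8192 signatures, ~136 KB) come from the abstract above.
NUM_SIGS = 8192      # individual signatures aggregated
MULTISIG_KB = 136    # reported multi-signature size, in KB

amortized_bytes = MULTISIG_KB * 1024 / NUM_SIGS
print(f"amortized size per signer: {amortized_bytes:.0f} bytes")  # 17 bytes
```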
We provide a full implementation of Chipmunk along with extensive benchmarks studying our construction's efficiency.
2023-11-25T23:48:47+00:00https://creativecommons.org/licenses/by/4.0/Nils FleischhackerGottfried HeroldMark SimkinZhenfei Zhanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1819Beyond MPC-in-the-Head: Black-Box Constructions of Short Zero-Knowledge Proofs2023-11-24T19:41:16+00:00Carmit HazayMuthuramakrishnan VenkitasubramaniamMor WeissIn their seminal work, Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC`07) presented the MPC-in-the-Head paradigm, which shows how to design Zero-Knowledge Proofs (ZKPs) from secure Multi-Party Computation (MPC) protocols. This paradigm has since revolutionized and modularized the design of efficient ZKP systems, with far-reaching applications beyond ZKPs. However, to the best of our knowledge, all previous instantiations relied on fully-secure MPC protocols, and have not been able to leverage the fact that the paradigm only imposes relatively weak privacy and correctness requirements on the underlying MPC.
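The commit-and-open pattern at the heart of MPC-in-the-Head can be illustrated with a toy sketch (ours, not the construction from this or any cited paper, and with no cryptographic-strength guarantees): the prover secret-shares a witness into "views", commits to each, and opens only a verifier-chosen subset, which alone reveals nothing about the witness.

```python
# Toy illustration of commit-and-open over XOR secret shares.
# All helper names are ours; this is a pedagogical sketch, not a ZKP.
import hashlib
import secrets

def xor_share(w: bytes, n: int = 3) -> list[bytes]:
    """Split w into n XOR shares; any n-1 shares are uniformly random."""
    shares = [secrets.token_bytes(len(w)) for _ in range(n - 1)]
    last = w
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def commit(view: bytes) -> str:
    """Hash-based commitment (toy: omits the hiding randomness)."""
    return hashlib.sha256(view).hexdigest()

witness = b"secret witness"
views = xor_share(witness)
commitments = [commit(v) for v in views]

# Verifier challenges two of the three views; prover opens them.
opened = [0, 1]
assert all(commit(views[i]) == commitments[i] for i in opened)

# All three views together reconstruct the witness (consistency check).
recon = views[0]
for v in views[1:]:
    recon = bytes(a ^ b for a, b in zip(recon, v))
assert recon == witness
```

In a real MPC-in-the-Head proof the views are transcripts of a simulated MPC evaluating the relation, and soundness comes from cross-checking the opened views against each other; the sketch only shows the share/commit/open skeleton.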
In this work, we extend the MPC-in-the-Head paradigm to game-based cryptographic primitives supporting homomorphic computations (e.g., fully-homomorphic encryption, functional encryption, randomized encodings, homomorphic secret sharing, and more). Specifically, we present a simple yet generic compiler from these primitives to ZKPs which use the underlying primitive as a black box. We also generalize our paradigm to capture commit-and-prove protocols, and use it to devise tight black-box compilers from Interactive (Oracle) Proofs to ZKPs, assuming One-Way Functions (OWFs).
We use our paradigm to obtain several new ZKP constructions:
1. The first ZKPs for NP relations $\mathcal{R}$ computable in (polynomial-time uniform) $NC^1$, whose round complexity is bounded by a fixed constant (independent of the depth of $\mathcal{R}$'s verification circuit), with communication approaching witness length (specifically, $n\cdot poly\left(\kappa\right)$, where $n$ is the witness length, and $\kappa$ is a security parameter), assuming DCR. Alternatively, if we allow the round complexity to scale with the depth of the verification circuit, our ZKPs can make black-box use of OWFs.
2. Constant-round ZKPs for NP relations computable in bounded polynomial space, with $O\left(n\right)+o\left(m\right)\cdot poly\left(\kappa\right)$ communication assuming OWFs, where $m$ is the instance length. This gives a black-box alternative to a recent non-black-box construction of Nassar and Rothblum (CRYPTO`22).
3. ZKPs for NP relations computable by a logspace-uniform family of depth-$d\left(m\right)$ circuits, with $n\cdot poly\left(\kappa,d\left(m\right)\right)$ communication assuming OWFs. This gives a black-box alternative to a result of Goldwasser, Kalai and Rothblum (JACM).In their seminal work, Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC`07) presented the MPC-in-the-Head paradigm, which shows how to design Zero-Knowledge Proofs (ZKPs) from secure Multi-Party Computation (MPC) protocols. This paradigm has since then revolutionized and modularized the design of efficient ZKP systems, with far-reaching applications beyond ZKPs. However, to the best of our knowledge, all previous instantiations relied on fully-secure MPC protocols, and have not been able to leverage the fact that the paradigm only imposes relatively weak privacy and correctness requirements on the underlying MPC.
In this work, we extend the MPC-in-the-Head paradigm to game-based cryptographic primitives supporting homomorphic computations (e.g., fully-homomorphic encryption, functional encryption, randomized encodings, homomorphic secret sharing, and more). Specifically, we present a simple yet generic compiler from these primitives to ZKPs which use the underlying primitive as a black box. We also generalize our paradigm to capture commit-and-prove protocols, and use it to devise tight black-box compilers from Interactive (Oracle) Proofs to ZKPs, assuming One-Way Functions (OWFs).
We use our paradigm to obtain several new ZKP constructions:
1. The first ZKPs for NP relations $\mathcal{R}$ computable in (polynomial-time uniform) $NC^1$, whose round complexity is bounded by a fixed constant (independent of the depth of $\mathcal{R}$'s verification circuit), with communication approaching witness length (specifically, $n\cdot poly\left(\kappa\right)$, where $n$ is the witness length and $\kappa$ is a security parameter), assuming DCR. Alternatively, if we allow the round complexity to scale with the depth of the verification circuit, our ZKPs can make black-box use of OWFs.
2. Constant-round ZKPs for NP relations computable in bounded polynomial space, with $O\left(n\right)+o\left(m\right)\cdot poly\left(\kappa\right)$ communication assuming OWFs, where $m$ is the instance length. This gives a black-box alternative to a recent non-black-box construction of Nassar and Rothblum (CRYPTO'22).
3. ZKPs for NP relations computable by a logspace-uniform family of depth-$d\left(m\right)$ circuits, with $n\cdot poly\left(\kappa,d\left(m\right)\right)$ communication assuming OWFs. This gives a black-box alternative to a result of Goldwasser, Kalai and Rothblum (JACM).

2023-11-24T19:41:16+00:00
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam, Mor Weiss
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1818
On the Feasibility of Unleveled Fully-Homomorphic Signatures
2023-11-24T15:25:48+00:00
Romain Gay, Bogdan Ursu

We build the first unleveled fully homomorphic signature scheme in the standard model. Our scheme is not constrained by any a-priori bound on the depth of the functions that can be homomorphically evaluated, and relies on subexponentially-secure indistinguishability obfuscation, fully-homomorphic encryption and a non-interactive zero-knowledge (NIZK) proof system with composable zero-knowledge. Our scheme is also the first to satisfy the strong security notion of context-hiding for an unbounded number of levels, ensuring that signatures computed homomorphically do not leak the original messages from which they were computed. All building blocks are instantiable from falsifiable assumptions in the standard model, avoiding the need for knowledge assumptions.
The main difficulty we overcome stems from the fact that bootstrapping, which is a crucial tool for obtaining unleveled fully homomorphic encryption (FHE), has no equivalent for homomorphic signatures, requiring us to use novel techniques.

2023-11-24T15:25:48+00:00
Romain Gay, Bogdan Ursu
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1817
Authenticating Medications with QR-Codes and Compact Digital Signatures
2023-11-24T13:28:39+00:00
Julien Jainsky, David Naccache, Bassem Ouni, Ofer Yifrach-Stav

This paper describes a way to protect medications against falsification, a long-standing problem in the world. We combine several existing technologies to achieve the stated goal. The building blocks used are inherent physical randomness generated during the packaging process, artificial vision, short digital signatures and QR-codes.
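The combination of building blocks described in this abstract can be sketched end to end: extract a fingerprint from package-specific physical randomness, sign it, and pack fingerprint plus signature into a QR payload that is checked against the physical package at verification time. The sketch below is illustrative only and not the authors' construction: it uses a stand-in byte string for the vision-extracted randomness, HMAC as a stand-in for a short digital signature, and a JSON string in place of an actual QR code.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-secret"  # stand-in for the issuer's signing key

def fingerprint(physical_randomness: bytes) -> bytes:
    # In the paper, artificial vision extracts package-specific randomness;
    # here we simply hash a stand-in byte string.
    return hashlib.sha256(physical_randomness).digest()

def make_qr_payload(physical_randomness: bytes) -> str:
    """Issuer side: fingerprint the package and sign the fingerprint."""
    fp = fingerprint(physical_randomness)
    sig = hmac.new(SIGNING_KEY, fp, hashlib.sha256).digest()  # stand-in signature
    return json.dumps({"fp": fp.hex(), "sig": sig.hex()})

def verify_package(payload: str, observed_randomness: bytes) -> bool:
    """Verifier side: re-extract the fingerprint and check it against the QR payload."""
    data = json.loads(payload)
    fp = fingerprint(observed_randomness)
    expected = hmac.new(SIGNING_KEY, fp, hashlib.sha256).digest()
    return data["fp"] == fp.hex() and hmac.compare_digest(bytes.fromhex(data["sig"]), expected)

payload = make_qr_payload(b"speckle-pattern-of-package-42")
assert verify_package(payload, b"speckle-pattern-of-package-42")
assert not verify_package(payload, b"counterfeit-package")
```

A counterfeit package fails verification because its physical randomness does not reproduce the signed fingerprint, even if the QR code is copied verbatim.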
2023-11-24T13:28:39+00:00
Julien Jainsky, David Naccache, Bassem Ouni, Ofer Yifrach-Stav
License: https://creativecommons.org/publicdomain/zero/1.0/

https://eprint.iacr.org/2023/1816
ASOZ: a decentralized payment system with privacy preserving and auditing on public blockchain
2023-11-24T09:17:18+00:00
Tianjian Liu, Dawei Zhang, Wei Wang

Decentralized payment systems have gradually gained more attention in recent years. By removing the trusted third party used for maintaining ledgers, they fundamentally empower users to control their own assets. As privacy concerns grow, several cryptocurrencies have been proposed to preserve the privacy of users. However, those cryptocurrencies also enable illegal transactions such as money laundering and fraudulent trading, so it is necessary to design an auditing scheme. Many privacy-preserving and auditing schemes have been proposed to solve this problem, but no existing scheme effectively addresses privacy preservation and auditing for both user identity and transaction content.
In this paper, we propose a design for a decentralized payment system with privacy preserving and auditing. We use cryptographic accumulators based on Merkle trees for accounting and use a combination of twisted ElGamal, NIZK (Non-Interactive Zero-Knowledge), Bulletproofs, and zk-SNARKs for privacy preserving and auditing.
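A Merkle-tree accumulator of the kind mentioned above commits to a set of items with a single root, and membership is proved with a logarithmic-size authentication path. The following minimal sketch (power-of-two leaf count, hypothetical helper names) illustrates the general technique, not ASOZ's actual accumulator:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_commit(items):
    """Commit to a list of items (length a power of two) via a Merkle root."""
    level = [h(it) for it in items]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree

def merkle_open(tree, index):
    """Authentication path (the sibling on each level) for the leaf at `index`."""
    path = []
    for level in tree[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def merkle_verify(root, item, index, path):
    """Recompute the root from the leaf and its path; parity of `index` gives the hash order."""
    node = h(item)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

db = [b"tx0", b"tx1", b"tx2", b"tx3"]
root, tree = merkle_commit(db)
proof = merkle_open(tree, 2)
assert merkle_verify(root, b"tx2", 2, proof)
assert not merkle_verify(root, b"forged", 2, proof)
```

The root is the accumulator value placed on the ledger; a proof for one transaction reveals only hashes of siblings, not the other transactions themselves.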
2023-11-24T09:17:18+00:00
Tianjian Liu, Dawei Zhang, Wei Wang
License: https://creativecommons.org/licenses/by-nc-sa/4.0/

https://eprint.iacr.org/2023/1815
Accelerating Polynomial Multiplication for RLWE using Pipelined FFT
2023-11-24T08:39:51+00:00
Neil Thanawala, Hamid Nejatollahi, Nikil Dutt

The evolution of quantum algorithms threatens to break public key cryptography in polynomial time. The development of quantum-resistant algorithms for the post-quantum era has seen a significant growth in the field of
post-quantum cryptography (PQC). Polynomial multiplication is the core of Ring Learning with Errors (RLWE) lattice-based cryptography (LBC), which is one of the most promising PQC candidates. In this work, we present the design of fast and energy-efficient pipelined Number Theoretic Transform (NTT) based polynomial multipliers and present synthesis results on a Field Programmable Gate Array (FPGA) to evaluate their efficacy.
The NTT is performed using the pipelined R2SDF and R22SDF Fast Fourier Transform (FFT) architectures. In addition, we propose an energy-efficient modified architecture (Modr2). The NTT-based polynomial multipliers employing the Modr2 architecture achieve on average 2× better performance over the R2SDF FFT and 2.4× over the R22SDF FFT with similar levels of energy consumption. The proposed polynomial multiplier with the Modr2 architecture reaches 12.5× energy efficiency over the state-of-the-art convolution-based polynomial multiplier and 4× speedup over the systolic array NTT-based polynomial multiplier for polynomial degrees of 1024, demonstrating its potential for practical deployment in future designs.

2023-11-24T08:39:51+00:00
Neil Thanawala, Hamid Nejatollahi, Nikil Dutt
License: https://creativecommons.org/publicdomain/zero/1.0/

https://eprint.iacr.org/2023/1814
Easy-ABE: An Easy Ciphertext-Policy Attribute-Based Encryption
2023-11-24T08:23:15+00:00
Ahmad Khoureich Ka

Attribute-Based Encryption is widely recognized as a leap forward in the field of public key encryption. It makes it possible to enforce access control on encrypted data. Decryption time in ABE schemes can be long, depending on the number of attributes and pairing operations. This drawback hinders their adoption on a broader scale.
In this paper, we propose a non-monotone CP-ABE scheme that has no restrictions on the size of attribute sets and policies, allows fast decryption, and is adaptively secure under the CBDH-3 assumption. To achieve this, we approached the problem from a new angle, namely using a set membership relation for the access structure. We have implemented our scheme using the Java Pairing-Based Cryptography Library (JPBC) and the source code is available on GitHub.

2023-11-24T08:23:15+00:00
Ahmad Khoureich Ka
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1813
Early Stopping for Any Number of Corruptions
2023-11-23T19:24:56+00:00
Julian Loss, Jesper Buus Nielsen

Minimizing the round complexity of byzantine broadcast is a fundamental question in distributed computing and cryptography. In this work, we present the first early stopping byzantine broadcast protocol that tolerates up to $t=n-1$ malicious corruptions and terminates in $O(\min\{f^2,t+1\})$ rounds for any execution with $f\leq t$ actual corruptions. Our protocol is deterministic, adaptively secure, and works assuming a plain public key infrastructure. Prior early-stopping protocols all either require honest majority or tolerate only up to $t=(1-\epsilon)n$ malicious corruptions while requiring either trusted setup or strong number-theoretic hardness assumptions. As our key contribution, we show a novel tool called a polariser that allows us to transfer certificate-based strategies from the honest majority setting to settings with a dishonest majority.

2023-11-23T19:24:56+00:00
Julian Loss, Jesper Buus Nielsen
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1812
The NTT and residues of a polynomial modulo factors of $X^{2^d} + 1$
2023-11-23T18:38:51+00:00
Sahil Sharma

The Number Theoretic Transform (NTT) plays a central role in efficient implementations of cryptographic primitives selected for Post Quantum Cryptography. Although the connection between the NTT and the residues of a polynomial modulo factors of $X^{2^d} + 1$ certainly exists, academic papers that cite the NTT omit it and mention only the final expressions of what the NTT computes. This short paper establishes that connection and, in doing so, elucidates key aspects of computing the NTT. Based on this, the specific instantiations of the NTT function used in CRYSTALS-Kyber and CRYSTALS-Dilithium are derived.
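The residue connection can be made concrete with toy parameters (illustrative only: $q=17$, $d=2$, not the Kyber or Dilithium parameters). Over $\mathbb{Z}_{17}$, $X^4+1$ splits into linear factors $(X-r)$ at the four roots $r$ of $X^4=-1$, and the residue of a polynomial $a$ modulo $(X-r)$ is just the evaluation $a(r)$; the NTT collects these residues, so multiplication modulo $X^4+1$ becomes pointwise multiplication:

```python
# Toy parameters (q = 17, degree-4 ring); NOT the Kyber/Dilithium instantiations.
q = 17
roots = [r for r in range(q) if pow(r, 4, q) == q - 1]  # roots of X^4 + 1 mod 17
assert len(roots) == 4

def evaluate(poly, x):
    """Residue of poly modulo (X - x): simply evaluation at x (Horner's rule)."""
    acc = 0
    for c in reversed(poly):
        acc = (acc * x + c) % q
    return acc

def ntt(poly):
    """Vector of residues modulo the linear factors of X^4 + 1."""
    return [evaluate(poly, r) for r in roots]

def mul_mod(a, b):
    """Schoolbook product of a and b reduced mod X^4 + 1 (using X^4 = -1)."""
    res = [0] * 4
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k, term = i + j, ai * bj
            if k >= 4:
                k, term = k - 4, -term  # wrap-around picks up a minus sign
            res[k] = (res[k] + term) % q
    return res

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
# CRT view: residues of the product equal the pointwise product of residues.
assert ntt(mul_mod(a, b)) == [(x * y) % q for x, y in zip(ntt(a), ntt(b))]
```

The final assertion is exactly the Chinese Remainder Theorem statement underlying NTT-based multiplication: reducing the ring product modulo each factor commutes with multiplying the residues.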
2023-11-23T18:38:51+00:00
Sahil Sharma
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1811
A note on Failing gracefully: Completing the picture for explicitly rejecting Fujisaki-Okamoto transforms using worst-case correctness
2023-11-23T15:35:05+00:00
Kathrin Hövelmanns, Christian Majenz

The Fujisaki-Okamoto (FO) transformation is used in most proposals for post-quantum secure key encapsulation mechanisms (KEMs), e.g., Kyber [BDK+18]. The security analysis of FO in the presence of quantum attackers has made huge progress over the last years. Recently, [HHM22] made a particular improvement by giving a security proof that is agnostic towards how invalid ciphertexts are treated: in contrast to previous proofs, it works regardless of whether invalid ciphertexts are rejected by reporting decryption failure explicitly or implicitly (by returning pseudorandom values).
The proof in [HHM22] involves a new correctness notion for the encryption scheme that is used to encapsulate the keys. This allows in principle for a smaller additive security loss related to decryption failures, but requires analyzing this new notion for the encryption scheme on which a concrete KEM at hand is based.
This note offers a trade-off between [HHM22] and its predecessors: it gives a bound for both rejection variants, mostly based on [HHM22], but uses a more established correctness notion.
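The explicit/implicit rejection distinction can be sketched with a toy FO-style decapsulation: decrypt, deterministically re-encrypt, and compare; on mismatch either report failure (explicit) or return a pseudorandom key derived from a secret seed (implicit). The code below is an illustrative sketch only, with a hash-based stand-in for the underlying PKE and a lookup table standing in for the secret decryption key; it is not the concrete transform used in Kyber.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    acc = hashlib.sha256()
    for p in parts:
        acc.update(p)
    return acc.digest()

def enc(pk: bytes, m: bytes) -> bytes:
    # Stand-in deterministic "encryption" with randomness derived as G(m),
    # mirroring the derandomization step of FO. Not a real PKE.
    return H(b"enc", pk, m, H(b"G", m))

def decaps(pk: bytes, dec_table: dict, c: bytes, s: bytes, explicit: bool):
    """FO-style decapsulation: recover m', re-encrypt, compare.

    On an invalid ciphertext, either reject explicitly (return None) or
    implicitly (return a pseudorandom key derived from secret seed s and c)."""
    m = dec_table.get(c)  # stand-in for Dec(sk, c)
    if m is not None and enc(pk, m) == c:
        return H(b"key", m, c)
    return None if explicit else H(b"reject", s, c)

pk, s, m = b"public-key", b"secret-seed", b"message"
c = enc(pk, m)
dec_table = {c: m}  # toy decryption table standing in for the secret key
assert decaps(pk, dec_table, c, s, explicit=True) == H(b"key", m, c)
assert decaps(pk, dec_table, b"garbage", s, explicit=True) is None
assert decaps(pk, dec_table, b"garbage", s, explicit=False) is not None
```

Both variants agree on valid ciphertexts; they differ only in whether an attacker observing the output can directly tell that rejection occurred.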
2023-11-23T15:35:05+00:00
Kathrin Hövelmanns, Christian Majenz
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1810
Pairing-Free Blind Signatures from Standard Assumptions in the ROM
2023-11-23T14:57:00+00:00
Julia Kastner, Ky Nguyen, Michael Reichle

Blind Signatures are a useful primitive for privacy preserving applications such as electronic payments, e-voting, anonymous credentials, and more. However, existing practical blind signature schemes based on standard assumptions require either pairings or lattices. We present the first construction of a round-optimal blind signature in the random oracle model based on standard assumptions without resorting to pairings or lattices.
In particular, our construction is secure under the strong RSA assumption and DDH (in pairing-free groups). For our construction, we provide a NIZK-friendly signature based on strong RSA, and efficiently instantiate Fischlin's generic framework (CRYPTO'06). Our blind signature scheme has signatures of size 4.28 KB and communication cost 62.19 KB. Along the way, we develop techniques that might be of independent interest. In particular, we provide efficient relaxed range-proofs with subversion zero-knowledge and compact commitments to elements of arbitrary groups.

2023-11-23T14:57:00+00:00
Julia Kastner, Ky Nguyen, Michael Reichle
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1809
PURED: A unified framework for resource-hard functions
2023-11-23T13:28:08+00:00
Alex Biryukov, Marius Lombard-Platet

Algorithm hardness can be described by five categories: hardness in computation, in sequential computation, in memory, in energy consumption (or bandwidth), and in code size. Similarly, hardness can be a concern for solving or for verifying, depending on the context, and can depend on a secret trapdoor or be universally hard. Two main lines of research have investigated such problems: cryptographic puzzles, which gained popularity thanks to blockchain consensus systems (where solving must be moderately hard, and verification either public or private), and white-box cryptography (where solving must be hard without knowledge of the secret key). In this work, we improve upon the classification framework proposed by Biryukov and Perrin at Asiacrypt 2017 and offer a unified hardness framework, PURED, that can be used for measuring all these kinds of hardness, both in solving and verifying. We also propose three new constructions that fill gaps previously left open by the literature (namely, trapdoor proof of CMC, trapdoor proof of code, and a hard challenge in sequential time trapdoored in verification), and analyse their hardness in the PURED framework.

2023-11-23T13:28:08+00:00
Alex Biryukov, Marius Lombard-Platet
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1808
Small Stretch Problem of the DCT Scheme and How to Fix it
2023-11-26T09:46:17+00:00
Yuchao Chen, Tingting Guo, Lei Hu, Lina Shang, Shuping Mao, Peng Wang

DCT is a beyond-birthday-bound~(BBB) deterministic authenticated encryption~(DAE) mode proposed by Forler et al. in ACISP 2016, ensuring integrity by redundancy. The instantiation scheme of DCT employs the BRW polynomial, which is more efficient than the usual polynomial function in GCM by reducing half of the multiplication operations. However, we show that DCT suffers from a small stretch problem similar to GCM. When the stretch length $\tau$ is small, choosing a special $m$-block message, we can reduce the number of queries required by a successful forgery to $\mathcal{O}(2^{\tau}/m)$. We emphasize that this attack efficiently balances space and time complexity, but does not contradict the security bounds of DCT.
Finally, we propose an improved scheme named Robust DCT~(RDCT) with a minor change to DCT, which improves the security when $\tau$ is small and makes it resist the above attack.

2023-11-23T11:13:26+00:00
Yuchao Chen, Tingting Guo, Lei Hu, Lina Shang, Shuping Mao, Peng Wang
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1807
Entrada to Secure Graph Convolutional Networks
2023-11-23T07:55:08+00:00
Nishat Koti, Varsha Bhat Kukkala, Arpita Patra, Bhavish Raj Gopal

Graph convolutional networks (GCNs) are gaining popularity due to their powerful modelling capabilities. However, guaranteeing privacy is an issue when evaluating on inputs that contain users' sensitive information such as financial transactions, medical records, etc. To address such privacy concerns, we design Entrada, a framework for securely evaluating GCNs that relies on the technique of secure multiparty computation (MPC). For efficiency and accuracy reasons, Entrada builds on the MPC framework of Tetrad (NDSS'22) and enhances it by providing the necessary primitives. Moreover, Entrada leverages the GraphSC paradigm of Araki et al. (CCS'21) to further enhance efficiency. This entails designing a secure and efficient shuffle protocol specifically in the 4-party setting, which, to the best of our knowledge, is done for the first time and may be of independent interest. Through extensive experiments, we showcase that the accuracy of secure GCN evaluation via Entrada is on par with its cleartext counterpart. We also benchmark the efficiency of Entrada with respect to the included primitives as well as the framework as a whole. Finally, we showcase Entrada's practicality by benchmarking a GCN-based fraud detection application.

2023-11-23T07:55:08+00:00
Nishat Koti, Varsha Bhat Kukkala, Arpita Patra, Bhavish Raj Gopal
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1806
Fast and Designated-verifier Friendly zkSNARKs in the BPK Model
2023-11-23T01:21:10+00:00
Xudong Zhu, Xuyang Song, Yi Deng

In ASIACRYPT 2016, Bellare et al. first demonstrated that it is impossible to achieve subversion soundness and standard zero knowledge simultaneously. Subsequently, there has been a lot of effort to construct zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) that satisfy subversion zero knowledge (S-ZK) and standard soundness from zk-SNARKs in the common reference string (CRS) model. These various constructions can be regarded as secure in the bare public key (BPK) model, because the equivalence between S-ZK in the CRS model and uniform non-black-box zero knowledge in the BPK model was proved by Abdolmaleki et al. in PKC 2020.
In this study, we propose the first publicly verifiable non-uniform ZK zk-SNARK scheme in the BPK model that maintains comparable efficiency with its conventional counterpart, and that is also compatible with the well-known transformation proposed by Bitansky et al. in TCC 2013 to obtain an efficient designated-verifier zk-SNARK. We achieve this goal by adding only a constant number of elements to the CRS, and by using an unconventional but natural method to transform Groth's zk-SNARK from EUROCRYPT 2016. In addition, we propose a new speed-up technique that provides a trade-off. Specifically, if a logarithmic number of elements are added to the CRS, then, depending on the circuit, the CRS verification time in our construction can be approximately 9%-23% shorter than that of the conventional counterpart.

2023-11-23T01:21:10+00:00
Xudong Zhu, Xuyang Song, Yi Deng
License: https://creativecommons.org/licenses/by/4.0/

https://eprint.iacr.org/2023/1805
On the Security of Rate-limited Privacy Pass
2023-11-24T08:46:28+00:00
Hien Chu, Khue Do, Lucjan Hanzlik

The Privacy Pass protocol allows users to redeem anonymously issued cryptographic tokens instead of solving annoying CAPTCHAs. The issuing authority verifies the credibility of the user, who can later use the pass while browsing the web using an anonymous or virtual private network. Hendrickson et al. proposed an IETF draft (privacypass-rate-limit-tokens-00) for a rate-limiting version of the Privacy Pass protocol, also called rate-limited Privacy Pass (RlP). Introducing a new actor called a mediator makes both versions inherently different. The mediator applies access policies to rate-limit users' access to the service while, at the same time, remaining oblivious to the website/origin the user is trying to access. In this paper, we formally define the rate-limited Privacy Pass protocol and propose a game-based security model to capture the informal security notions introduced by Hendrickson et al.
We show a construction from simple building blocks that fulfills our security definitions and even allows for a post-quantum secure instantiation. Interestingly, the instantiation proposed in the IETF draft is a specific case of our construction. Thus, we can reuse the security arguments for the generic construction and show that the version used in practice is secure.2023-11-23T00:01:41+00:00https://creativecommons.org/licenses/by/4.0/Hien ChuKhue DoLucjan Hanzlikhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1804Fully Malicious Authenticated PIR2023-11-22T20:53:57+00:00Marian DietzStefano TessaroAuthenticated PIR enables a server to initially commit to a database of $N$ items, for which a client can later privately obtain individual items with complexity sublinear in $N$, with the added guarantee that the retrieved item is consistent with the committed database. A crucial requirement is privacy with abort, i.e., the server should not learn anything about a query even if it learns whether the client aborts.
This problem was recently considered by Colombo et al. (USENIX '23), who proposed solutions secure under the assumption that the database is committed to honestly. Here, we close this gap, and present a solution that tolerates fully malicious servers that provide potentially malformed commitments. Our scheme has communication and client computational complexity $\mathcal{O}_{\lambda}(\sqrt{N})$, solely relies on the DDH assumption, and does not introduce heavy machinery (e.g., generic succinct proofs). Privacy with abort holds provided the server succeeds in correctly answering $\lambda$ validation queries, which, from its perspective, are computationally indistinguishable from regular PIR queries. In fact, on the server side, our scheme is exactly the DDH-based scheme by Colombo et al.2023-11-22T20:53:57+00:00https://creativecommons.org/licenses/by/4.0/Marian DietzStefano Tessarohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1803Design of a Linear Layer Optimised for Bitsliced 32-bit Implementation2023-11-22T15:08:47+00:00Gaëtan LeurentClara PernotThe linear layer of block ciphers plays an important role in their security.
In particular, ciphers designed following the wide-trail strategy use the branch number of the linear layer to derive bounds on the probability of linear and differential trails.
At FSE 2014, the LS-design construction was introduced as a simple and regular structure for designing bitsliced block ciphers. It views the internal state as a bit matrix, and alternately applies an identical S-Box to all the columns and an identical L-Box to all the rows. Security bounds are derived from the branch number of the L-Box.
In this paper, we focus on bitsliced linear layers inspired by the LS-design construction and the Spook AEAD algorithm. We study the construction of bitsliced linear transformations with efficient implementations using XORs and rotations (optimized for bitsliced ciphers implemented on 32-bit processors), and a high branch number.
In order to increase the density of the activity patterns, the linear layer is designed on the whole state, rather than using multiple parallel copies of an L-Box.
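The branch number referred to above is the minimum, over nonzero inputs $x$, of the Hamming weight of $x$ plus that of $L(x)$. A minimal Python sketch that brute-forces it for a hypothetical 8-bit rotation-XOR L-box (an illustration only, not the paper's 32-bit design):

```python
# Differential branch number of a toy 8-bit L-box L(x) = x ^ (x<<<1) ^ (x<<<2).
# This map is hypothetical; the paper works with 32-bit transformations.
def rotl8(x, r):
    return ((x << r) | (x >> (8 - r))) & 0xFF

def lbox(x):
    return x ^ rotl8(x, 1) ^ rotl8(x, 2)

def branch_number(f, n=8):
    # min over nonzero x of wt(x) + wt(f(x))
    return min(bin(x).count("1") + bin(f(x)).count("1") for x in range(1, 1 << n))

# A single-bit input activates three output bits, and no input does better,
# so this toy L-box has branch number 4.
assert branch_number(lbox) == 4
```

A designer searches the space of such rotation-XOR maps for the largest attainable minimum; the paper's 128-bit layer reaches branch number 21.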
Our main result is a linear layer for 128-bit ciphers with branch number 21, improving upon the best 32-bit transformation with branch number 12, and the one of Spook with branch number 16.2023-11-22T15:08:47+00:00https://creativecommons.org/licenses/by/4.0/Gaëtan LeurentClara Pernothttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1802Sublinear-Communication Secure Multiparty Computation does not require FHE2023-11-22T11:59:44+00:00Elette BoyleGeoffroy CouteauPierre MeyerSecure computation enables mutually distrusting parties to jointly compute a function on their secret inputs, while revealing nothing beyond the function output. A long-running challenge is understanding the required communication complexity of such protocols---in particular, when communication can be sublinear in the circuit representation size of the desired function.
Significant advances have been made toward affirmatively answering this question in the two-party setting, based on a variety of structures and hardness assumptions. In contrast, in the multi-party setting, only one general approach is known: using Fully Homomorphic Encryption (FHE).
This remains the state of affairs even for just three parties, with two corruptions.
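For context on the Function Secret Sharing used in the following framework: an FSS scheme splits a function $f$ into per-party keys that individually reveal nothing about $f$, yet whose local evaluations sum to $f(x)$. A deliberately naive Python sketch that additively shares the entire truth table (real FSS schemes produce keys far smaller than the table; everything here is illustrative):

```python
import random

def share(truth_table, mod=256):
    # Additively share f's truth table over Z_mod; each share alone is uniform.
    f0 = [random.randrange(mod) for _ in truth_table]
    f1 = [(y - r) % mod for y, r in zip(truth_table, f0)]
    return f0, f1

def eval_share(key, x):
    return key[x]

random.seed(0)
table = [x * x % 256 for x in range(16)]  # example function f(x) = x^2 mod 256
k0, k1 = share(table)
# Correctness: the two local evaluations reconstruct f(x) for every x.
assert all((eval_share(k0, x) + eval_share(k1, x)) % 256 == table[x]
           for x in range(16))
```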
We present a framework for achieving secure sublinear-communication $(N+1)$-party computation, building from a particular form of Function Secret Sharing for only $N$ parties. In turn, we demonstrate implications to sublinear secure computation for various function classes in the 3-party and 5-party settings based on an assortment of assumptions not known to imply FHE.2023-11-22T11:59:44+00:00https://creativecommons.org/licenses/by/4.0/Elette BoyleGeoffroy CouteauPierre Meyerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1801ForgedAttributes: An Existential Forgery Vulnerability of CMS and PKCS#7 Signatures2023-11-22T09:26:11+00:00Falko StrenzkeThis work describes an existential signature forgery vulnerability of the current CMS and PKCS#7 signature standards. The vulnerability results from an ambiguity in how the signed message is processed during signature verification. Specifically, the absence or presence of the so-called SignedAttributes field determines whether the signature message digest receives as input the message directly or the SignedAttributes, a DER-encoded structure which contains a digest of the message. If an attacker takes a CMS or PKCS#7 signed message $M$ which was originally signed with SignedAttributes present, then he can craft a new message $M'$ that was never signed by the signer, has the DER-encoded SignedAttributes of the original message as its content, and verifies correctly against the original signature of $M$. Due to the limited flexibility of the forged message and the limited control the attacker has over it, the fraction of vulnerable systems must be assumed to be small, but given the wide deployment of the affected protocols, such instances cannot be excluded. We propose a countermeasure based on attack detection that prevents the attack reliably.2023-11-22T09:26:11+00:00https://creativecommons.org/licenses/by-sa/4.0/Falko Strenzkehttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2023/1800Algebraic Attack on FHE-Friendly Cipher HERA Using Multiple Collisions2023-11-22T05:44:39+00:00Fukang LiuAbul KalamSantanu SarkarWilli MeierFully homomorphic encryption (FHE) is an advanced cryptographic technique that allows computations (i.e., addition and multiplication) over encrypted data. After years of effort, the performance of FHE has been significantly improved and it has moved from theory to practice. The transciphering framework is another important technique in FHE to address the issue of ciphertext expansion and reduce the client-side computational overhead. Motivated by this framework, several FHE-friendly symmetric-key primitives have been proposed since the publication of LowMC at EUROCRYPT 2015. 
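The digest-input ambiguity behind the ForgedAttributes result above can be sketched in a few lines of Python; the byte strings are hypothetical stand-ins, not real DER encodings:

```python
import hashlib

def digest_input(message, signed_attrs_der=None):
    # CMS/PKCS#7 verification hashes the message directly when
    # SignedAttributes is absent, but hashes DER(SignedAttributes) when present.
    if signed_attrs_der is None:
        return hashlib.sha256(message).digest()
    return hashlib.sha256(signed_attrs_der).digest()

attrs_der = b"stand-in for DER(SignedAttributes) of message M"
# The original signature on M covered H(attrs_der). Presenting the attribute
# encoding itself as a new message M', *without* SignedAttributes, yields the
# same digest input, so the old signature verifies on M'.
assert digest_input(attrs_der) == digest_input(b"original message M", attrs_der)
```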
To apply the transciphering framework to the CKKS scheme, a new transciphering framework called the Real-to-Finite-Field (RtF) framework and a corresponding FHE-friendly symmetric-key primitive called HERA were proposed at ASIACRYPT 2021. Although HERA has a very similar structure to AES, it is considerably different in the following aspects: 1) the power map $x\mapsto x^3$ is used as the S-box; 2) a randomized key schedule is used; 3) it is over a prime field $\mathbb F_p$ with $p>2^{16}$. In this work, we perform the first third-party cryptanalysis of HERA, showing how to mount new algebraic attacks using multiple collisions in the round keys. Specifically, owing to the special way the round keys are randomized in HERA, we find it possible to peel off the last nonlinear layer by using collisions in the last-round key together with a simple property of the power map. In this way, we can construct an overdefined system of equations of a much lower degree in the key, and efficiently solve the system via the linearization technique. As a result, for HERA with 192 and 256 bits of security, respectively, we can break some parameters under the same assumption made by the designers that the linear algebra constant $\omega$ for Gaussian elimination is $\omega=2$, i.e., that Gaussian elimination on an $n\times n$ matrix takes $\mathcal{O}(n^{\omega})$ field operations. Using more conservative choices like $\omega\in\{2.8,3\}$, our attacks can still reduce the security margins of some variants of HERA to only 1 round. However, the security of HERA with 80 and 128 bits of security is not affected by our attacks, due to the high cost of finding multiple collisions. In any case, our attacks reveal a weakness of HERA caused by the randomized key schedule and its small state size.2023-11-22T05:44:39+00:00https://creativecommons.org/licenses/by/4.0/Fukang LiuAbul KalamSantanu SarkarWilli Meierhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1799BabySpartan: Lasso-based SNARK for non-uniform computation2023-11-22T01:00:45+00:00Srinath SettyJustin ThalerLasso (Setty, Thaler, Wahby, ePrint 2023/1216) is a recent lookup argument that ensures that the prover cryptographically commits to only "small" values. This note describes BabySpartan, a SNARK for a large class of constraint systems that achieves the same property. The SNARK is a simple combination of SuperSpartan and Lasso. The specific class of constraint systems supported is a generalization of so-called Plonkish constraint systems (and a special case of customizable constraint systems (CCS)). Whereas a recent work called Jolt (Arun, Setty, and Thaler, ePrint 2023/1217) can be viewed as an application of Lasso to uniform computation, BabySpartan can be viewed as applying Lasso to non-uniform computation.2023-11-22T01:00:45+00:00https://creativecommons.org/licenses/by/4.0/Srinath SettyJustin Thalerhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1798Somewhat Homomorphic Encryption based on Random Codes2023-11-21T16:07:42+00:00Carlos Aguilar-MelchorVictor DyserynPhilippe GaboritWe present a secret-key encryption scheme based on random rank metric ideal linear codes with a simple decryption circuit. It supports unlimited homomorphic additions and plaintext absorptions as well as a fixed arbitrary number of homomorphic multiplications.
We study a candidate bootstrapping algorithm that requires no multiplications, only additions and plaintext absorptions. The latter operation is therefore very efficient in our scheme, whereas bootstrapping is usually the main factor penalizing the performance of other fully homomorphic encryption schemes. However, the security reduction of our scheme restricts the number of independent ciphertexts that can be published. In particular, this prevents securely evaluating the bootstrapping algorithm, as the number of ciphertexts in the key-switching material is too large.
Our scheme is nonetheless the first somewhat homomorphic encryption scheme based on random ideal codes and a first step towards full homomorphism. Random ideal codes give stronger security guarantees compared to existing constructions based on highly structured codes.
We give concrete parameters for our scheme showing that it achieves competitive sizes and performance, with a key size of 3.7 kB and a ciphertext size of 0.9 kB when a single multiplication is allowed.2023-11-21T16:07:42+00:00https://creativecommons.org/licenses/by/4.0/Carlos Aguilar-MelchorVictor DyserynPhilippe Gaborithttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1797A Modular Approach to Unclonable Cryptography2023-11-21T15:46:05+00:00Prabhanjan AnanthAmit BeheraWe explore a new pathway to designing unclonable cryptographic primitives. We propose a new notion called unclonable puncturable obfuscation (UPO) and study its implications for unclonable cryptography. Using UPO, we present modular (and in some cases, arguably, simple) constructions of many primitives in unclonable cryptography, including public-key quantum money, quantum copy-protection for many classes of functionalities, unclonable encryption, and single-decryption encryption.
Notably, we obtain the following new results assuming the existence of UPO:
- We show that any cryptographic functionality can be copy-protected as long as this functionality satisfies a notion of security, which we term as puncturable security. Prior feasibility results focused on copy-protecting specific cryptographic functionalities.
- We show that copy-protection exists for any class of evasive functions as long as the associated distribution satisfies a preimage-sampleability condition. Prior works demonstrated copy-protection for point functions, which follows as a special case of our result.
- We show that unclonable encryption exists in the plain model. Prior works demonstrated feasibility results in the quantum random oracle model.
We put forward a candidate construction of UPO and prove two notions of security, each based on the existence of (post-quantum) sub-exponentially secure indistinguishability obfuscation and one-way functions, the quantum hardness of learning with errors, and a new conjecture called the simultaneous inner product conjecture.2023-11-21T15:46:05+00:00https://creativecommons.org/licenses/by/4.0/Prabhanjan AnanthAmit Beherahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1796Fault Attacks Sensitivity of Public Parameters in the Dilithium Verification2023-11-21T11:21:13+00:00Andersson Calle VieraAlexandre BerzatiKarine HeydemannThis paper presents a comprehensive analysis of the verification
algorithm of CRYSTALS-Dilithium, focusing on a C reference implementation. Limited research has been conducted on its susceptibility to fault attacks, despite its critical role in ensuring the scheme’s security. To fill this gap, we investigate three distinct fault models - randomizing faults, zeroizing faults, and skipping faults - to identify vulnerabilities within the verification process. Based on our analysis, we propose a methodology for forging CRYSTALS-Dilithium signatures without knowledge of the secret key. Instead, we leverage specific types of faults during the verification phase, together with some properties of the public parameters, to make these signatures accepted. Additionally, we compare different attack scenarios after identifying sensitive operations within the verification algorithm. The most effective one potentially requires fewer fault injections than targeting the verification check itself. Finally, we introduce a set of countermeasures designed to thwart all the identified scenarios, rendering the verification algorithm intrinsically resistant to the presented attacks.2023-11-21T11:21:13+00:00https://creativecommons.org/licenses/by/4.0/Andersson Calle VieraAlexandre BerzatiKarine Heydemannhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1795Efficiently Testable Circuits without Conductivity2023-11-21T09:56:12+00:00Mirza Ahad BaigSuvradip ChakrabortyStefan DziembowskiMałgorzata GałązkaTomasz LizurejKrzysztof PietrzakThe notion of ``efficiently testable circuits'' (ETC) was recently put forward by Baig et al.~(ITCS'23). Informally, an ETC compiler takes as input any Boolean circuit $C$ and outputs a circuit/inputs tuple $(C',\mathbb{T})$ where (completeness) $C'$ is functionally equivalent to $C$
and (security) if $C'$ is tampered in some restricted way, then this can be detected as $C'$ will err on at least one input in the small test set $\mathbb{T}$. The compiler of Baig et al. detects tampering even if the adversary can tamper with \emph{all} wires in the compiled circuit. Unfortunately, the model requires a strong ``conductivity'' restriction: the compiled circuit has gates with fan-out up to 3, but wires can only be tampered in one way even if they have fan-out greater than one. In this paper, we solve the main open question from their work and construct an ETC compiler without this conductivity restriction.
While Baig et al. use gadgets computing the AND and OR of particular subsets of the wires, our compiler computes inner products with random vectors. We slightly relax their security notion and only require that tampering is detected with high probability over the choice of the randomness.
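The random-inner-product idea above can be illustrated with a toy Python sketch; the parameters and wiring are hypothetical, and the actual compiler embeds such checks into the circuit itself:

```python
import random

# Toy tamper check: each test response is a GF(2) inner product of the wire
# values with a random vector.
def checks(wires, vectors):
    return [sum(w & v for w, v in zip(wires, vec)) % 2 for vec in vectors]

random.seed(0)
n, k = 32, 8  # n wires, k random check vectors (illustrative sizes)
vectors = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
vectors[0][5] = 1  # make sure check 0 covers wire 5 for the demo below
wires = [random.randint(0, 1) for _ in range(n)]
reference = checks(wires, vectors)

tampered = list(wires)
tampered[5] ^= 1  # a tampering fault flips one wire value
# Every check whose vector has a 1 in the flipped position changes parity,
# so a random check catches the flip with probability 1/2.
assert checks(tampered, vectors)[0] != reference[0]
```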
Our compiler increases the size of the circuit by
only a small constant factor. For a parameter $\lambda$ (think $\lambda\le 5$), the number of additional input and output wires is $|C|^{1/\lambda}$, while the number of test queries to detect an error with constant probability is around $2^{2\lambda}$.2023-11-21T09:56:12+00:00https://creativecommons.org/licenses/by/4.0/Mirza Ahad BaigSuvradip ChakrabortyStefan DziembowskiMałgorzata GałązkaTomasz LizurejKrzysztof Pietrzakhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1794Secret-Shared Shuffle with Malicious Security2023-11-30T04:39:04+00:00Xiangfu SongDong YinJianli BaiChangyu DongEe-Chien ChangA secret-shared shuffle (SSS) protocol permutes a secret-shared vector using a random secret permutation. It has found numerous applications; however, it is also an expensive operation and often a performance bottleneck. Chase et al. (Asiacrypt'20) recently proposed a highly efficient semi-honest two-party SSS protocol known as the CGP protocol. It utilizes purposely designed pseudorandom correlations that facilitate a communication-efficient online shuffle phase. That said, semi-honest security is insufficient in many real-world application scenarios since shuffle is usually used for highly sensitive applications. Considering this, recent works (CANS'21, NDSS'22) attempted to enhance the CGP protocol with malicious security over authenticated secret sharings. However, we find that these attempts are flawed, and malicious adversaries can still learn private information via malicious deviations. This is demonstrated with concrete attacks proposed in this paper. The question is then how to fill the gap and design a maliciously secure CGP shuffle protocol. We answer this question by introducing a set of lightweight correlation checks and a leakage reduction mechanism. We then apply our techniques with authenticated secret sharings to achieve malicious security. Notably, our protocol, while increasing security, is also efficient.
In the two-party setting, experimental results show that our maliciously secure protocol introduces an acceptable overhead compared to its semi-honest version and is more efficient than the state-of-the-art maliciously secure SSS protocol from the MP-SPDZ library.2023-11-21T05:19:39+00:00https://creativecommons.org/publicdomain/zero/1.0/Xiangfu SongDong YinJianli BaiChangyu DongEe-Chien Changhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2023/1793Accountable Multi-Signatures with Constant Size Public Keys2023-11-29T02:13:37+00:00Dan BonehAditi PartapBrent WatersA multisignature scheme is used to aggregate signatures by multiple parties on a common message $m$ into a single short signature on $m$. Multisignatures are used widely in practice, most notably, in proof-of-stake consensus protocols. In existing multisignature schemes, the verifier needs the public keys of all the signers in order to verify a multisignature issued by some subset of signers.
We construct new practical multisignature schemes with three properties:
(i) the verifier only needs to store a constant size public key in order to verify a multisignature by an arbitrary subset of parties, (ii) signature size is constant beyond the description of the signing set, and (iii) signers generate their secret signing keys locally, that is, without a distributed key generation protocol. Existing schemes satisfy properties (ii) and (iii). The new capability is property (i) which dramatically reduces the verifier's memory requirements from linear in the number of signers to constant.
We give two pairing-based constructions: one in the random oracle model and one in the plain model. We also show that by relaxing property (iii), that is, allowing for a simple distributed key generation protocol, we can further improve efficiency while continuing to satisfy properties (i) and (ii). We give a pairing-based scheme and a lattice-based scheme in this relaxed model.2023-11-20T19:01:30+00:00https://creativecommons.org/licenses/by/4.0/Dan BonehAditi PartapBrent Watershttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1792Sloth: Key Stretching and Deniable Encryption using Secure Elements on Smartphones2023-11-29T12:52:43+00:00Daniel HugenrothAlberto SonninoSam CutlerAlastair R. BeresfordTraditional key stretching lacks a strict time guarantee due to the ease of parallelized password guessing by attackers. This paper introduces Sloth, a key stretching method leveraging the Secure Element (SE) commonly found in modern smartphones to provide a strict rate limit on password guessing. While this would be straightforward with full access to the SE, Android and iOS only provide a very limited API. Sloth utilizes the existing developer SE API and novel cryptographic constructions to build an effective rate limit for password guessing on recent Android and iOS devices. Our approach ensures robust security even for short, randomly generated, six-character alphanumeric passwords against adversaries with virtually unlimited computing resources. Our solution is compatible with approximately 96% of iPhones and 45% of Android phones, and Sloth seamlessly integrates without device or OS modifications, making it immediately usable by app developers today. We formally define the security of Sloth and evaluate its performance on various devices.
Finally, we present HiddenSloth, a deniable encryption scheme, leveraging Sloth and the SE to withstand multi-snapshot adversaries.2023-11-20T16:47:47+00:00https://creativecommons.org/licenses/by-nc-sa/4.0/Daniel HugenrothAlberto SonninoSam CutlerAlastair R. Beresfordhttps://creativecommons.org/licenses/by-nc-sa/4.0/https://eprint.iacr.org/2023/1791Decentralized Compromise-Tolerant Public Key Management Ecosystem with Threshold Validation2023-11-20T16:40:52+00:00Jamal MosakheilKan YangThis paper examines the vulnerabilities inherent in prevailing Public Key Infrastructure (PKI) systems reliant on centralized Certificate Authorities (CAs), wherein a compromise of the CA introduces risks to the integrity of public key management.
We present PKChain, a decentralized and compromise-tolerant public key management system built on blockchain technology, offering transparent, tamper-resistant, and verifiable services for key operations such as registration, update, query, validation, and revocation.
Our approach involves a novel threshold block validation scheme that combines a new threshold cryptographic scheme with blockchain consensus. This scheme allows each validator to validate each public key record partially and proactively secures it before inclusion in a block. Additionally, to further validate and verify each block and to facilitate public verification of the public key records, we introduce an aggregate commitment signature scheme. Our contribution extends to the development of a new, efficient, and practical Byzantine Compromise-Tolerant and Verifiable (pBCTV) consensus model, integrating the proposed validation and signature schemes with practical Byzantine Fault Tolerance (pBFT).
Through a comprehensive examination encompassing security analysis, performance evaluation, and a prototype implementation, we substantiate that PKChain is a secure, efficient, and robust solution for public key management.2023-11-20T16:40:52+00:00https://creativecommons.org/licenses/by/4.0/Jamal MosakheilKan Yanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1790Compromising sensitive information through Padding Oracle and Known Plaintext attacks in Encrypt-then-TLS scenarios2023-11-20T15:48:15+00:00Daniel Espinoza FigueroaConsider a scenario where a server encrypts data using AES-CBC without authentication and then sends only the ciphertext through TLS (without the IV). During a cybersecurity audit for a Chilean company, we used a padding oracle to recover the initialization vector and the sensitive data.2023-11-20T15:48:15+00:00https://creativecommons.org/licenses/by/4.0/Daniel Espinoza Figueroahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1789Fast and Secure Oblivious Stable Matching over Arithmetic Circuits2023-11-20T08:21:49+00:00Arup MondalPriyam PandaShivam AgarwalAbdelrahaman AlyDebayan GuptaThe classic stable matching algorithm of Gale and Shapley (American Mathematical Monthly '69) and subsequent variants such as those by Roth (Mathematics of Operations Research '82) and Abdulkadiroglu et al.
(American Economic Review '05) have been used successfully in a number of real-world scenarios, including the assignment of medical-school graduates to residency programs, New York City teenagers to high schools, and Norwegian and Singaporean students to schools and universities. However, all of these suffer from one shortcoming: in order to avoid strategic manipulation, they require all participants to submit their preferences to a trusted third party who performs the computation. In some sensitive application scenarios, there is no appropriate (or cost-effective) trusted party. This makes stable matching a natural candidate for secure computation. Several approaches have been proposed to overcome this, based on secure multiparty computation (MPC), fully homomorphic encryption, etc.; many of these protocols are slow and impractical for real-world use.
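As background, the (non-private) deferred-acceptance procedure referenced above can be sketched in a few lines of Python; the names and the tiny instance below are illustrative, and the paper's contribution is evaluating such a computation obliviously rather than in the clear:

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Classic deferred acceptance: proposers propose in preference order;
    acceptors tentatively hold the best proposal received so far."""
    # Precompute each acceptor's ranking of proposers for O(1) comparisons.
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)           # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                          # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])       # bump the currently held proposer
            engaged[a] = p
        else:
            free.append(p)                # rejected; will try next preference
    return engaged

matching = gale_shapley({"m1": ["w1", "w2"], "m2": ["w1", "w2"]},
                        {"w1": ["m2", "m1"], "w2": ["m1", "m2"]})
```

On this instance the unique stable matching pairs w1 with m2 and w2 with m1, since any other pairing contains a blocking pair.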
We propose a novel primitive for privacy-preserving stable matching using MPC (i.e., arithmetic circuits, for any number of parties). Specifically, we discuss two variants of oblivious stable matching and describe an improved oblivious stable matching on the random memory access model based on lookup tables. To explore and showcase the practicality of our proposed primitive, we present detailed benchmarks (at various problem sizes) of our constructions using two popular frameworks: SCALE-MAMBA and MP-SPDZ.2023-11-20T08:21:49+00:00https://creativecommons.org/licenses/by/4.0/Arup MondalPriyam PandaShivam AgarwalAbdelrahaman AlyDebayan Guptahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1788Homomorphic Multiple Precision Multiplication for CKKS and Reduced Modulus Consumption2023-11-20T04:32:22+00:00Jung Hee CheonWonhee ChoJaehyung KimDamien StehléHomomorphic Encryption (HE) schemes such as BGV, BFV, and CKKS consume some ciphertext modulus for each multiplication. Bootstrapping (BTS) restores the modulus and allows homomorphic computation to continue, but it is time-consuming and requires a significant amount of modulus. For these reasons, decreasing modulus consumption is a crucial topic for BGV, BFV, and CKKS, on which numerous studies have been conducted.
We propose a novel method, called $\mathsf{mult}^2$, to perform ciphertext multiplication in the CKKS scheme with lower modulus consumption. $\mathsf{mult}^2$ relies on a new decomposition of a ciphertext into a pair of ciphertexts that homomorphically performs a weak form of Euclidean division. It multiplies two ciphertexts in decomposed formats with homomorphic double precision multiplication, and its result approximately decrypts to the same value as does the ordinary CKKS multiplication. $\mathsf{mult}^2$ can perform homomorphic multiplication by consuming almost half of the modulus.
We extend it to $\mathsf{mult}^t$ for any $t\geq 2$, which relies on the decomposition of a ciphertext into $t$ components. All other CKKS operations can be equally performed on pair/tuple formats, leading to the double-CKKS (resp. tuple-CKKS) scheme enabling homomorphic double (resp. multiple) precision arithmetic.
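The pair decomposition can be pictured with a plain-integer analogue. This is only an illustration of the double-precision idea, not the homomorphic algorithm: the actual $\mathsf{mult}^2$ operates on CKKS ciphertexts with approximate arithmetic, and the base $B$ below is an arbitrary choice for the sketch. A value is split by Euclidean division into $(q, r)$ with $v = qB + r$, and a full product is assembled from component-wise partial products:

```python
B = 2**26  # illustrative base; a value v is held as the pair divmod(v, B)

def decompose(v):
    """Euclidean division: v = q*B + r with 0 <= r < B."""
    return divmod(v, B)

def pair_multiply(a, b):
    """Full product from 'single-precision' partial products of the components:
    (qa*B + ra) * (qb*B + rb) = qa*qb*B^2 + (qa*rb + ra*qb)*B + ra*rb."""
    qa, ra = a
    qb, rb = b
    return (qa * qb) * B * B + (qa * rb + ra * qb) * B + ra * rb

x, y = 123456789, 987654321
product = pair_multiply(decompose(x), decompose(y))
```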
As a result, when the ciphertext modulus and dimension are fixed, the proposed algorithms enable the evaluation of deeper circuits without bootstrapping, or reduce the number of bootstrappings required for the evaluation of the same circuits. Furthermore, they can be used to increase the precision without increasing the parameters. For example, $\mathsf{mult}^2$ enables 8 sequential multiplications with a 100-bit scaling factor using a ciphertext modulus of only 680 bits, which is impossible with the ordinary CKKS multiplication algorithm.2023-11-20T04:32:22+00:00https://creativecommons.org/licenses/by/4.0/Jung Hee CheonWonhee ChoJaehyung KimDamien Stehléhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1787Updatable Privacy-Preserving Blueprints2023-11-19T17:36:23+00:00Bernardo DavidFelix EngelmannTore FrederiksenMarkulf KohlweissElena PagninMikhail VolkhovPrivacy-preserving blueprints enable users to create escrows using the auditor's public key. An escrow encrypts the evaluation of a function $P(t,x)$, where $t$ is a secret input used to generate the auditor's key and $x$ is the user's private input to escrow generation. Nothing but $P(t,x)$ is revealed even to a fully corrupted auditor. The original definition and construction (Kohlweiss et al., EUROCRYPT'23) only support the evaluation of functions on an input $x$ provided by a single user.
We address this limitation by introducing updatable privacy-preserving blueprint schemes (UPPB), which enhance the original notion with the ability for multiple parties to non-interactively update the private value $x$ in a blueprint. Moreover, a UPPB scheme allows for verifying that a blueprint is the result of a sequence of valid updates while revealing nothing else.
We present uBlu, an efficient instantiation of UPPB for computing a comparison between private user values and a private threshold $t$ set by the auditor, where the current value $x$ is the cumulative sum of private inputs, which enables applications such as privacy-preserving anti-money laundering and location tracking. Additionally, we show the feasibility of the notion generically for all value update functions and (binary) predicates from FHE and NIZKs.
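For intuition about additive, non-interactive updates of a hidden cumulative sum, a toy analogue is exponential ElGamal, where multiplying ciphertexts adds the underlying values. This is only an illustration: uBlu itself uses an algebraic NIZK and verifiable update chains, and every parameter below (the small prime, the generator) is illustrative and far from secure:

```python
import random

# Toy additively homomorphic ElGamal "in the exponent" over a small prime group.
p = 1000003            # small prime, for illustration only (not secure)
g = 2
sk = random.randrange(2, p - 1)
h = pow(g, sk, p)      # public key

def enc(m):
    """Encrypt a small integer m as (g^r, g^m * h^r)."""
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def add(c1, c2):
    """Component-wise product of ciphertexts encrypts the sum of plaintexts."""
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec(c, max_m=1000):
    """Recover g^m, then brute-force the small exponent m."""
    gm = c[1] * pow(c[0], p - 1 - sk, p) % p   # c2 / c1^sk by Fermat
    for m in range(max_m + 1):
        if pow(g, m, p) == gm:
            return m

total = dec(add(enc(3), enc(4)))
```

Two independent updaters can each contribute an `enc(x_i)` and anyone can fold them together without interaction, which is the flavor of update the abstract describes.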
Our main technical contribution is a technique to keep the size of primary blueprint components independent of the number of updates and reasonable for practical applications. This is achieved by elegantly extending an algebraic NIZK by Couteau and Hartmann (CRYPTO'20) with an update function and making it compatible with our additive updates. This result is of independent interest and may find additional applications thanks to the concise size of our proofs.2023-11-19T17:36:23+00:00https://creativecommons.org/licenses/by/4.0/Bernardo DavidFelix EngelmannTore FrederiksenMarkulf KohlweissElena PagninMikhail Volkhovhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1786CASE: A New Frontier in Public-Key Authenticated Encryption2023-11-27T14:00:49+00:00Shashank AgrawalShweta AgrawalManoj PrabhakaranRajeev RaghunathJayesh SinglaWe introduce a new cryptographic primitive, called Completely Anonymous Signed Encryption (CASE). CASE is a public-key authenticated encryption primitive that offers anonymity for senders as well as receivers. A "case-packet" should appear, without a (decryption) key for opening it, to be a black box that reveals no information at all about its contents. To decase a case-packet fully, so that the message is retrieved and authenticated, a verification key is also required.
Defining security for this primitive is subtle. We present a relatively simple Chosen Objects Attack (COA) security definition. Validating this definition, we show that it implies a comprehensive indistinguishability-preservation definition in the real-ideal paradigm. To obtain the latter definition, we extend the Cryptographic Agents framework of [2, 3] to allow maliciously created objects.
We also provide a novel and practical construction for COA-secure CASE under standard assumptions in public-key cryptography, and in the standard model. We believe CASE can be a staple in future cryptographic libraries, thanks to its robust security guarantees and efficient instantiations based on standard assumptions.2023-11-19T08:03:29+00:00https://creativecommons.org/licenses/by/4.0/Shashank AgrawalShweta AgrawalManoj PrabhakaranRajeev RaghunathJayesh Singlahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1785There Is Always a Way Out! Destruction-Resistant Key Management: Formal Definition and Practical Instantiation2023-11-18T06:41:39+00:00Yuan ZhangYaqing SongShiyu LiWeijia LiZeqi LaiQiang TangA central advantage of deploying cryptosystems is that the security of large, highly sensitive data sets can be reduced to the security of a very small key. The most popular way to manage keys is to use a $(t,n)-$threshold secret sharing scheme: a user splits her/his key into $n$ shares, distributes them among $n$ key servers, and can recover the key with the aid of any $t$ of them. However, it is vulnerable to device destruction: if all key servers and user's devices break down, the key will be permanently lost. We propose a $\mathrm{\underline{D}}$estruction-$\mathrm{\underline{R}}$esistant $\mathrm{\underline{K}}$ey $\mathrm{\underline{M}}$anagement scheme, dubbed DRKM, which ensures the key availability even if destruction occurs. In DRKM, a user utilizes her/his $n^{*}$ personal identification factors (PIFs) to derive a cryptographic key but can retrieve the key using any $t^{*}$ of the $n^{*}$ PIFs. As most PIFs can be retrieved by the user $\textit{per se}$ without requiring $\textit{stateful}$ devices, destruction resistance is achieved. With the integration of a $(t,n)-$threshold secret sharing scheme, DRKM also provides $\textit{portable}$ key access for the user (with the aid of any $t$ of $n$ key servers) before destruction occurs.
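The $(t,n)$-threshold sharing that the DRKM abstract builds on can be sketched with textbook Shamir secret sharing; the prime modulus and the parameters below are illustrative choices for the sketch, not taken from the paper:

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus, for illustration

def split(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it (Shamir).
    Shares are points (x, f(x)) of a random degree-(t-1) polynomial f with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(12345, t=3, n=5)
```

Any three of the five shares suffice, matching the "any $t$ of $n$ key servers" access pattern described above.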
DRKM can be utilized to construct a destruction-resistant cryptosystem (DRC) in tandem with any backup system. We formally prove the security of DRKM, implement a DRKM prototype, and conduct a comprehensive performance evaluation to demonstrate its high efficiency. We further utilize Cramer's Rule to reduce the required buffer to retrieve a key from 25 MB to 40 KB (for 256-bit security).2023-11-18T06:41:39+00:00https://creativecommons.org/licenses/by-nc/4.0/Yuan ZhangYaqing SongShiyu LiWeijia LiZeqi LaiQiang Tanghttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2023/1784Succinct Arguments over Towers of Binary Fields2023-12-01T22:40:17+00:00Benjamin E. DiamondJim PosenWe introduce an efficient SNARK for towers of binary fields. Adapting Brakedown (CRYPTO '23), we construct a multilinear polynomial commitment scheme suitable for polynomials over tiny fields, including that with 2 elements. Our commitment scheme, unlike those of previous works, treats small-field polynomials with zero embedding overhead. We further introduce binary-field adaptations of HyperPlonk's (EUROCRYPT '23) product and permutation checks, as well as of Lasso's lookup. Our scheme's binary PLONKish variant captures standard hash functions—like Keccak-256 and Grøstl—extremely efficiently. With recourse to thorough performance benchmarks, we argue that our scheme can efficiently generate precisely those Keccak-256 proofs which critically underlie modern efforts to scale Ethereum.2023-11-17T21:58:26+00:00https://creativecommons.org/licenses/by/4.0/Benjamin E. DiamondJim Posenhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1783An efficient quantum parallel repetition theorem and applications2023-11-17T18:12:05+00:00John BostanciLuowen QianNicholas SpoonerHenry YuenWe prove a tight parallel repetition theorem for $3$-message computationally-secure quantum interactive protocols between an efficient challenger and an efficient adversary. We also prove under plausible assumptions that the security of $4$-message computationally secure protocols does not generally decrease under parallel repetition. These mirror the classical results of Bellare, Impagliazzo, and Naor [BIN97]. Finally, we prove that all quantum argument systems can be generically compiled to an equivalent $3$-message argument system, mirroring the transformation for quantum proof systems [KW00, KKMV07].
As immediate applications, we show how to derive hardness amplification theorems for quantum bit commitment schemes (answering a question of Yan [Yan22]), EFI pairs (answering a question of Brakerski, Canetti, and Qian [BCQ23]), public-key quantum money schemes (answering a question of Aaronson and Christiano [AC13]), and quantum zero-knowledge argument systems. We also derive an XOR lemma [Yao82] for quantum predicates as a corollary.2023-11-17T18:12:05+00:00https://creativecommons.org/licenses/by/4.0/John BostanciLuowen QianNicholas SpoonerHenry Yuenhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1782A Solution to a Conjecture on the Maps $\chi_n^{(k)}$2023-11-17T08:59:15+00:00Kamil OtalThe Boolean map $\chi_n^{(k)}:\mathbb{F}_{2^k}^n\rightarrow \mathbb{F}_{2^k}^n$, $x\mapsto u$ given by $u_i=x_i+(x_{(i+1)\ \mathrm{mod}\ n}+1)x_{(i+2)\ \mathrm{mod}\ n}$ appears in various permutations as a part of cryptographic schemes such as KECCAK-f, ASCON, Xoodoo, Rasta, and Subterranean (2.0). Schoone and Daemen investigated some important algebraic properties of $\chi_n^{(k)}$ in [IACR Cryptology ePrint Archive 2023/1708]. In particular, they showed that $\chi_n^{(k)}$ is not bijective when $n$ is even, when $n$ is odd and $k$ is even, and when $n$ is odd and $k$ is a multiple of $3$. They left the remaining cases as a conjecture. In this paper, we examine this conjecture by taking some smaller sub-cases into account by reinterpreting the problem via the Gröbner basis approach. As a result, we prove that $\chi_n^{(k)}$ is not bijective when $n$ is a multiple of 3 or 5, and $k$ is a multiple of 5 or 7. We then present an algorithmic method that solves the problem for any given arbitrary $n$ and $k$ by generalizing our approach.
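As an illustration not drawn from the paper, the defining formula of $\chi_n^{(k)}$ can be checked by brute force in the simplest case $k=1$, where addition over $\mathbb{F}_2$ is XOR and multiplication is AND; the sketch below (with hypothetical helper names `chi` and `is_bijective`) reproduces the non-bijectivity for even $n$ noted above:

```python
from itertools import product

# Minimal sketch of chi_n^(k) for k = 1 only, i.e. the Keccak-style chi:
#   u_i = x_i XOR ((x_{(i+1) mod n} XOR 1) AND x_{(i+2) mod n})
def chi(x):
    n = len(x)
    return tuple(x[i] ^ ((x[(i + 1) % n] ^ 1) & x[(i + 2) % n]) for i in range(n))

def is_bijective(n):
    # Brute force: chi is bijective on {0,1}^n iff it hits 2^n distinct outputs.
    images = {chi(x) for x in product((0, 1), repeat=n)}
    return len(images) == 2 ** n

# Consistent with the non-bijectivity for even n stated above; for k = 1 the
# map is in fact bijective exactly when n is odd.
print([n for n in range(2, 9) if is_bijective(n)])  # -> [3, 5, 7]
```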
We also discuss the systematization of our proof and computational boundaries.2023-11-17T08:59:15+00:00https://creativecommons.org/licenses/by/4.0/Kamil Otalhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1781A Lattice Attack on CRYSTALS-Kyber with Correlation Power Analysis2023-11-25T18:59:54+00:00Yen-Ting KuoAtsushi TakayasuCRYSTALS-Kyber is a key-encapsulation mechanism, whose security is based on the hardness of solving the learning-with-errors (LWE) problem over module lattices. As in its specification, Kyber prescribes the usage of the Number Theoretic Transform (NTT) for efficient polynomial multiplication. Side-channel assisted attacks against Post-Quantum Cryptography (PQC) algorithms like Kyber remain a concern in the ongoing standardization process of quantum-computer-resistant cryptosystems. 
Among the attacks, correlation power analysis (CPA) is emerging as a popular option because it does not require detailed knowledge about the attacked device and can reveal the secret key even if the recorded power traces are extremely noisy. In this paper, we present a two-step attack to achieve a full-key recovery on lattice-based cryptosystems that utilize NTT for efficient polynomial multiplication. First, we use CPA to recover a portion of the secret key from the power consumption of these polynomial multiplications in the decryption process. Then, using this information, we are able to fully recover the secret key by constructing an LWE problem with a smaller lattice rank and solving it with lattice reduction algorithms. Our attack can be extended to other cryptosystems using NTT-based polynomial multiplication, including Saber. It can be further parallelized, and experiments on simulated traces show that the whole process can be done within 20 minutes on a 16-core machine with 200 traces. Compared to other CPA attacks targeting the NTT in these cryptosystems, our attack achieves lower runtime in practice. Furthermore, we can theoretically decrease the number of traces needed by using lattice reduction if the same measurement is used. Our lattice attack also outperforms the state-of-the-art result on integrating side-channel hints into lattices; however, the improvement heavily depends on the implementation of the NTT chosen by the users.2023-11-17T08:15:35+00:00https://creativecommons.org/licenses/by/4.0/Yen-Ting KuoAtsushi Takayasuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1780Pairing-Free Blind Signatures from CDH Assumptions2023-11-17T07:41:37+00:00Rutchathon Chairattana-ApiromStefano TessaroChenzhi ZhuThis paper presents new blind signatures for which concurrent security, in the random oracle model, can be proved from variants of the computational Diffie-Hellman (CDH) assumption in pairing-free groups without relying on the algebraic group model (AGM). With the exception of careful instantiations of generic non-black box techniques following Fischlin's paradigm (CRYPTO '06), prior works without the AGM in the pairing-free regime have only managed to prove security for a-priori bounded concurrency.
Our most efficient constructions rely on the chosen-target CDH assumption, which has been used to prove security of Blind BLS by Boldyreva (PKC '03), and can be seen as blind versions of signatures by Goh and Jarecki (EUROCRYPT '03) and Chevallier-Mames (CRYPTO '05). We also give a less efficient scheme with security based on (plain) CDH which builds on top of a natural pairing-free variant of Rai-Choo (Hanzlik, Loss, and Wagner, EUROCRYPT '23). Our schemes have signing protocols that consist of four moves (in order to achieve regular unforgeability) or five moves (for strong unforgeability).
The blindness of our schemes is either computational (assuming the hardness of the discrete logarithm problem), or statistical in the random oracle model.2023-11-17T07:41:37+00:00https://creativecommons.org/licenses/by/4.0/Rutchathon Chairattana-ApiromStefano TessaroChenzhi Zhuhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1779Privacy-Preserving Cross-Facility Early Warning for Unknown Epidemics2023-11-17T06:07:05+00:00Shiyu LiYuan ZhangYaqing SongFan WuFeng LyuKan YangQiang TangSyndrome-based early epidemic warning plays a vital role in preventing and controlling unknown epidemic outbreaks. It monitors the frequency of each syndrome, issues a warning if some frequency is aberrant, identifies potential epidemic outbreaks, and alerts governments as early as possible. Existing systems adopt a cloud-assisted paradigm to achieve cross-facility statistics on the syndrome frequencies. However, in these systems, all symptom data would be directly leaked to the cloud, which causes critical security and privacy issues.
In this paper, we first analyze syndrome-based early epidemic warning systems and formalize two security notions, i.e., symptom confidentiality and frequency confidentiality, according to the inherent security requirements. We propose
EpiOracle, a cross-facility early warning scheme for unknown epidemics. EpiOracle ensures that the contents and frequencies of syndromes will not be leaked to any unrelated parties; moreover, our construction uses only a symmetric-key encryption algorithm and cryptographic hash functions (e.g., [CBC]AES and SHA-3), making it highly efficient. We formally prove the security of EpiOracle in the random oracle model. We also implement an EpiOracle prototype and evaluate its performance using a set of real-world symptom lists. The evaluation results demonstrate its practical efficiency.2023-11-17T06:07:05+00:00https://creativecommons.org/licenses/by-nc/4.0/Shiyu LiYuan ZhangYaqing SongFan WuFeng LyuKan YangQiang Tanghttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2023/1778Immunizing Backdoored PRGs2023-11-16T22:00:14+00:00Marshall BallYevgeniy DodisEli GoldinA backdoored Pseudorandom Generator (PRG) is a PRG which looks pseudorandom to the outside world, but a saboteur can break PRG security by planting a backdoor into a seemingly honest choice of public parameters, $pk$, for the system. Backdoored PRGs became increasingly important due to revelations about NIST’s backdoored Dual EC PRG, and later results about its practical exploitability.
Motivated by this, at Eurocrypt'15 Dodis et al. [21] initiated the question of immunizing backdoored PRGs. A $k$-immunization scheme repeatedly applies a post-processing function to the output of $k$ backdoored PRGs, to render any (unknown) backdoors provably useless. For $k=1$, [21] showed that no deterministic immunization is possible, but then constructed a "seeded" $1$-immunizer either in the random oracle model or under strong non-falsifiable assumptions. As our first result, we show that no seeded $1$-immunization scheme can be black-box reduced to any efficiently falsifiable assumption.
This motivates studying $k$-immunizers for $k\ge 2$, which have an additional advantage of being deterministic (i.e., "seedless"). Indeed, prior work at CCS'17 [37] and CRYPTO'18 [7] gave supporting evidence that simple $k$-immunizers might exist, albeit in slightly different settings. Unfortunately, we show that the simple standard-model proposals of [37, 7] (including the XOR function [7]) provably do not work in our setting. On the positive side, we confirm the intuition of [37] that a (seedless) random oracle is a provably secure $2$-immunizer. On the negative side, no (seedless) $2$-immunization scheme can be black-box reduced to any efficiently falsifiable assumption, at least for a large class of natural $2$-immunizers which includes all "cryptographic hash functions."
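For concreteness (our illustration, not the paper's construction), the XOR candidate $2$-immunizer referenced above simply XORs the two PRG output streams; the point of the negative result is that this natural post-processing provably fails in the paper's setting:

```python
# Sketch of the XOR candidate 2-immunizer: post-process the outputs of two
# (possibly backdoored) PRGs by XORing them together. The paper shows this
# simple combiner provably fails as an immunizer in its setting; the code
# only illustrates what the candidate computes.
def xor_immunize(out1: bytes, out2: bytes) -> bytes:
    if len(out1) != len(out2):
        raise ValueError("PRG outputs must have equal length")
    return bytes(a ^ b for a, b in zip(out1, out2))

# Example with fixed placeholder "PRG outputs":
print(xor_immunize(b"\x0f\xf0", b"\xff\x00").hex())  # -> f0f0
```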
In summary, our results show that $k$-immunizers occupy a peculiar place in the cryptographic world. While they likely exist, and can be made practical and efficient, it is unlikely one can reduce their security to a "clean" standard-model assumption.2023-11-16T22:00:14+00:00https://creativecommons.org/licenses/by-sa/4.0/Marshall BallYevgeniy DodisEli Goldinhttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2023/1777SoK: Collusion-resistant Multi-party Private Set Intersections in the Semi-honest Model2023-11-16T16:25:55+00:00Jelle VosMauro ContiZekeriya ErkinPrivate set intersection protocols allow two parties with private sets of data to compute the intersection between them without leaking other information about their sets. These protocols have been studied for almost 20 years, and have been significantly improved over time, reducing both their computation and communication costs. However, when more than two parties want to compute a private set intersection, these protocols are no longer applicable. While extensions exist to the multi-party case, these protocols are significantly less efficient than their two-party counterparts. It remains an open question to design collusion-resistant multi-party private set intersection (MPSI) protocols that come close to the efficiency of two-party protocols. This work is made more difficult by the immense variety in the proposed schemes and the lack of systematization. Moreover, each new work only considers a small subset of previously proposed protocols, leaving out important developments from older works. Finally, MPSI protocols rely on many possible constructions and building blocks that have not been summarized. This work aims to point protocol designers to gaps in research and promising directions, pointing out common security flaws and sketching a frame of reference. To this end, we focus on the semi-honest model. 
We conclude that current MPSI protocols are not a one-size-fits-all solution, and instead there exist many protocols that each prevail in their own application setting.2023-11-16T16:25:55+00:00https://creativecommons.org/licenses/by-sa/4.0/Jelle VosMauro ContiZekeriya Erkinhttps://creativecommons.org/licenses/by-sa/4.0/https://eprint.iacr.org/2023/1776Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models2023-11-16T16:08:32+00:00Hanlin ZhangBenjamin L. EdelmanDanilo FrancatiDaniele VenturiGiuseppe AtenieseBoaz BarakWatermarking generative models consists of planting a statistical signal (watermark) in a model’s output so that it can be later verified that the output was generated by the given model. A strong watermarking scheme satisfies the property that a computationally bounded attacker cannot erase the watermark without causing significant quality degradation. In this paper, we study the (im)possibility of strong watermarking schemes. We prove that, under well-specified and natural assumptions, strong watermarking is impossible to achieve. This holds even in the private detection algorithm setting, where the watermark insertion and detection algorithms share a secret key, unknown to the attacker. To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used.
Our attack is based on two assumptions: (1) The attacker has access to a “quality oracle” that can evaluate whether a candidate output is a high-quality response to a prompt, and (2) The attacker has access to a “perturbation oracle” which can modify an output with a nontrivial probability of maintaining quality, and which induces an efficiently mixing random walk on high-quality outputs. We argue that both assumptions can be satisfied in practice by an attacker with weaker computational capabilities than the watermarked model itself, to which the attacker has only black-box access. Furthermore, our assumptions will likely only be easier to satisfy over time as models grow in capabilities and modalities.
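The two assumptions above suggest a simple attack loop; the following is a schematic sketch under those assumptions (our illustration, not the authors' code), where `perturb` and `is_high_quality` are hypothetical stand-ins for the paper's perturbation and quality oracles:

```python
import random

def erase_watermark(prompt, output, perturb, is_high_quality, steps=1000):
    # Quality-preserving random walk driven by the two assumed oracles.
    current = output
    for _ in range(steps):
        candidate = perturb(current)
        if is_high_quality(prompt, candidate):
            # Accept only quality-preserving moves; a rapidly mixing walk
            # "forgets" the watermark while staying on high-quality outputs.
            current = candidate
    return current

# Toy stand-ins: outputs are integers, "quality" means staying within a range.
random.seed(0)
walked = erase_watermark("prompt", 5, lambda x: x + random.choice((-1, 1)),
                         lambda p, x: 0 <= x <= 10, steps=50)
print(0 <= walked <= 10)  # -> True
```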
We demonstrate the feasibility of our attack by instantiating it to attack three existing watermarking schemes for large language models: Kirchenbauer et al. (2023), Kuditipudi et al. (2023), and Zhao et al. (2023). The same attack successfully removes the watermarks planted by all three schemes, with only minor quality degradation.2023-11-16T16:08:32+00:00https://creativecommons.org/licenses/by/4.0/Hanlin ZhangBenjamin L. EdelmanDanilo FrancatiDaniele VenturiGiuseppe AtenieseBoaz Barakhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1775Beyond Security: Achieving Fairness in Mailmen-Assisted Timed Data Delivery2023-11-16T13:20:59+00:00Shiyu LiYuan ZhangYaqing SongHongbo LiuNan ChengHongwei LiDahai TaoKan YangTimed data delivery is a critical service for time-sensitive applications that allows a sender to deliver data to a recipient such that it becomes accessible only at a specific future time. This service is typically accomplished by employing a set of mailmen to complete the delivery mission. While this approach is commonly used, it is vulnerable to attacks from realistic adversaries, such as a greedy sender (who accesses the delivery service without paying the service charge) and malicious mailmen (who release the data prematurely without being detected). Although some research works have been done to address these adversaries, most of them fail to achieve fairness.
In this paper, we formally define the fairness requirement for mailmen-assisted timed data delivery and propose a practical scheme, dubbed DataUber, to achieve fairness. DataUber ensures that honest mailmen receive the service charge, lazy mailmen do not receive the service charge, and malicious mailmen are punished. Specifically, DataUber consists of two key techniques: 1) a new cryptographic primitive, i.e., Oblivious and Verifiable Threshold Secret Sharing (OVTSS), enabling a dealer to distribute a secret among multiple participants in a threshold and verifiable way without knowing any one of the shares, and 2) a smart-contract-based complaint mechanism, allowing anyone to become a reporter to complain about a mailman's misbehavior to a smart contract and receive a reward. Furthermore, we formally prove the security of DataUber and demonstrate its practicality through a prototype implementation.2023-11-16T13:20:59+00:00https://creativecommons.org/licenses/by-nc/4.0/Shiyu LiYuan ZhangYaqing SongHongbo LiuNan ChengHongwei LiDahai TaoKan Yanghttps://creativecommons.org/licenses/by-nc/4.0/https://eprint.iacr.org/2023/1774Decentralized Private Stream Aggregation from Lattices2023-11-16T12:29:26+00:00Uddipana DowerahAikaterini MitrokotsaAs various industries and government agencies increasingly seek to build quantum computers, the development of post-quantum constructions for different primitives becomes crucial. Lattice-based cryptography is one of the top candidates for constructing quantum-resistant primitives. In this paper, we propose a decentralized Private Stream Aggregation (PSA) protocol based on the Learning with Errors (LWE) problem. PSA allows secure aggregation of time-series data over multiple users without compromising the privacy of the individual data. In almost all previous constructions, a trusted entity is used for the generation of keys. We consider a scenario where the users do not want to rely on a trusted authority. 
We, therefore, propose a decentralized PSA (DPSA) scheme where each user generates their own keys without the need for a trusted setup. We give a concrete construction based on the hardness of the LWE problem both in the random oracle model and in the standard model.As various industries and government agencies increasingly seek to build quantum computers, the development of post-quantum constructions for different primitives becomes crucial. Lattice-based cryptography is one of the top candidates for constructing quantum-resistant primitives. In this paper, we propose a decentralized Private Stream Aggregation (PSA) protocol based on the Learning with Errors (LWE) problem. PSA allows secure aggregation of time-series data over multiple users without compromising the privacy of the individual data. In almost all previous constructions, a trusted entity is used for the generation of keys. We consider a scenario where the users do not want to rely on a trusted authority. We, therefore, propose a decentralized PSA (DPSA) scheme where each user generates their own keys without the need for a trusted setup. We give a concrete construction based on the hardness of the LWE problem both in the random oracle model and in the standard model.2023-11-16T12:29:26+00:00https://creativecommons.org/licenses/by/4.0/Uddipana DowerahAikaterini Mitrokotsahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1773Scalable and Adaptively Secure Any-Trust Distributed Key Generation and All-hands Checkpointing2023-11-17T12:16:13+00:00Hanwen FengTiancheng MaiQiang TangThe classical distributed key generation protocols (DKG) are resurging due to their widespread applications in blockchain. While efforts have been made to improve DKG communication, practical large scale deployments are still yet to come, due to various challenges including broadcast channel scalability and worst-case complaint phase.
In this paper, we propose a practical DKG for DL-based cryptosystems, with only (quasi-)linear computation and communication cost per participant, aided by a public ledger and a randomness beacon. Notably, our DKG incurs only a constant-size blockchain storage cost for broadcast, even in the face of worst-case complaints. Moreover, our protocol satisfies adaptive security.
The key to our improvements lies in delegating the most costly operations to an Any-Trust group. This group is randomly sampled and consists of a small number of individuals. The population only trusts that at least one member in the group is honest, without knowing which one.
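The "at least one honest member" guarantee of an Any-Trust group comes purely from random sampling. Under a simple model (uniform committee sampling without replacement, a fixed honest fraction; the numbers below are illustrative, not parameters from the paper), the failure probability can be computed directly:

```python
from math import comb

def all_dishonest_prob(n, h, c):
    """Probability that a uniformly sampled committee of size c contains
    no honest party, given n parties of which h are honest
    (sampling without replacement: a hypergeometric tail)."""
    return comb(n - h, c) / comb(n, c)

# Example: 10,000 parties, half of them honest. A 40-member Any-Trust
# group fails to contain a single honest party with probability < 2^-40.
p = all_dishonest_prob(10_000, 5_000, 40)
```

This is why a small, randomly sampled group suffices: the failure probability decays exponentially in the committee size, so delegating the heavy operations to a few dozen members is safe under the any-trust assumption.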
Additionally, we introduce an extended broadcast channel based on a blockchain and data dispersal network (such as IPFS), enabling reliable broadcasting of arbitrary-size messages at the cost of constant-size blockchain storage, which may be of independent interest.
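The extended broadcast channel idea — constant-size on-chain storage regardless of message size — can be sketched as follows. This is a toy sketch under stated assumptions (in-memory stand-ins for the blockchain and for a content-addressed dispersal network like IPFS; all class and function names are hypothetical):

```python
import hashlib

class Ledger:
    """Stand-in for the blockchain: stores only constant-size digests."""
    def __init__(self):
        self.digests = []
    def post(self, digest):
        self.digests.append(digest)
        return len(self.digests) - 1

class Dispersal:
    """Stand-in for an off-chain data-dispersal network (e.g. IPFS):
    content-addressed storage of arbitrary-size blobs."""
    def __init__(self):
        self.blobs = {}
    def put(self, data):
        d = hashlib.sha256(data).digest()
        self.blobs[d] = data
        return d
    def get(self, digest):
        return self.blobs[digest]

def broadcast(ledger, net, msg):
    # Only the 32-byte digest hits the chain: O(1) on-chain cost.
    return ledger.post(net.put(msg))

def retrieve(ledger, net, idx):
    d = ledger.digests[idx]
    data = net.get(d)
    # The on-chain digest makes the off-chain payload self-verifying.
    assert hashlib.sha256(data).digest() == d
    return data
```

The real construction must additionally handle availability (a malicious sender posting a digest of data it never disperses), which the paper addresses; this sketch shows only the constant-size commitment mechanism.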
Our DKG leads to a fully practical instantiation of Filecoin's checkpointing mechanism, in which all validators of a Proof-of-Stake (PoS) blockchain periodically run DKG and threshold signing to create checkpoints on Bitcoin, thereby enhancing the security of the PoS chain. In comparison with the checkpointing approach of Babylon (Oakland, 2023), ours enjoys a significantly smaller monetary cost in Bitcoin transaction fees. For a PoS chain with $2^{12}$ validators, our cost is merely 0.6% of that incurred by Babylon's approach.2023-11-16T01:44:04+00:00https://creativecommons.org/licenses/by/4.0/Hanwen FengTiancheng MaiQiang Tanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1772Robust Combiners and Universal Constructions for Quantum Cryptography2023-11-16T01:43:48+00:00Taiga HirokaFuyuki KitagawaRyo NishimakiTakashi YamakawaA robust combiner combines many candidates for a cryptographic primitive and generates a new candidate for the same primitive. Its correctness and security hold as long as one of the original candidates satisfies correctness and security. A universal construction is a closely related notion to a robust combiner. A universal construction for a primitive is an explicit construction of the primitive that is correct and secure as long as the primitive exists. It is known that a universal construction for a primitive can be constructed from a robust combiner for the primitive in many cases.
Although robust combiners and universal constructions for classical cryptography are widely studied, robust combiners and universal constructions for quantum cryptography have not been explored so far. In this work, we define robust combiners and universal constructions for several quantum cryptographic primitives including one-way state generators, public-key quantum money, quantum bit commitments, and unclonable encryption, and provide constructions of them.
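For intuition, the classical analogue of a robust combiner is easy to state: the concatenation combiner for collision-resistant hashing. This sketch shows that classical idea only — it is not one of the paper's quantum constructions, and the candidate hashes chosen here are merely illustrative:

```python
import hashlib

def combined_hash(x: bytes) -> bytes:
    """Concatenation combiner for collision resistance: a collision for
    the combined function is simultaneously a collision for BOTH
    candidates, so the combiner is collision-resistant as long as at
    least one of the two candidates is."""
    return hashlib.sha256(x).digest() + hashlib.blake2b(x).digest()
```

The quantum setting is harder precisely because such simple tricks do not obviously carry over to primitives like one-way state generators or unclonable encryption, whose candidates output quantum states rather than classical strings.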
On a different note, it was an open problem how to expand the plaintext length of unclonable encryption. In one of our universal constructions for unclonable encryption, we can expand the plaintext length, which resolves the open problem.2023-11-16T01:43:48+00:00https://creativecommons.org/licenses/by/4.0/Taiga HirokaFuyuki KitagawaRyo NishimakiTakashi Yamakawahttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1771A note on ``HAKECC: highly efficient authentication and key agreement scheme based on ECDH for RFID in IOT environment''2023-11-16T00:41:02+00:00Zhengjun CaoWe show that the Nikooghadam-Shahriari-Saeidi authentication and key agreement scheme [J. Inf. Secur. Appl., 76, 103523 (2023)] cannot resist an impersonation attack, contrary to what is claimed. An adversary can impersonate the RFID reader to cheat the RFID tag. The drawback results from the scheme's simplistic secret-key invocation mechanism. We also find that the scheme seems difficult to repair due to this inherent flaw.2023-11-16T00:41:02+00:00https://creativecommons.org/publicdomain/zero/1.0/Zhengjun Caohttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2023/1770On the Feasibility of E2E Verifiable Online Voting – A Case Study From Durga Puja Trial2023-11-15T23:07:42+00:00Horia DruliacMatthew BardsleyChris RichesChristian DunnLuke HarrisonBimal RoyFeng HaoIndia is the largest democracy by population and has one of the largest deployments of e-voting in the world for national elections. However, the e-voting machines used in India are not end-to-end (E2E) verifiable. The inability of the public to verify the tallying integrity of an election leaves the outcome open to disputes. E2E verifiable e-voting systems are commonly regarded as the most promising solution to address this problem, but they had not been implemented or trialed in India, and it was unclear whether such systems would be usable and practical for Indian voters. Previous works such as Helios require a set of tallying authorities (TAs) to perform the decryption and tallying operations, but finding and managing TAs can prove difficult. This paper presents a TA-free E2E verifiable online voting system based on the DRE-ip protocol. In collaboration with the local authority of New Town, Kolkata, India, we conducted an online voting trial as part of the 2022 Durga Puja festival celebration, during which residents of New Town were invited to use mobile phones to vote for their favourite pujas (festival decorations) in an E2E verifiable manner. 543 participants attended the Durga Puja trial and 95 of them provided feedback by filling in an anonymous survey after voting.
Based on the voter feedback, participants generally found the system easy to use. This was the first time that an E2E verifiable online voting system had been built and tested in India, suggesting its feasibility for non-statutory voting scenarios.2023-11-15T23:07:42+00:00https://creativecommons.org/licenses/by/4.0/Horia DruliacMatthew BardsleyChris RichesChristian DunnLuke HarrisonBimal RoyFeng Haohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1769A Comprehensive Survey on Non-Invasive Fault Injection Attacks2023-11-15T21:21:43+00:00Amit Mazumder ShuvoTao ZhangFarimah FarahmandiMark TehranipoorNon-invasive fault injection attacks have emerged as significant threats to a spectrum of microelectronic systems ranging from commodity devices to high-end customized processors. Unlike their invasive counterparts, these attacks are more affordable and can exploit system vulnerabilities without physically altering the hardware. Furthermore, certain non-invasive fault injection strategies allow for remote vulnerability exploitation without the requirement of physical proximity. However, existing studies lack extensive investigation into these attacks across diverse target platforms, threat models, emerging attack strategies, assessment frameworks, and mitigation approaches. In this paper, we provide a comprehensive overview of contemporary research on non-invasive fault injection attacks. Our objective is to consolidate and scrutinize the various techniques, methodologies, target systems susceptible to the attacks, and existing mitigation mechanisms advanced by the research community. In addition, we categorize attack strategies based on several aspects, present a detailed comparison among the categories, and highlight research challenges with future directions.
By highlighting and discussing the landscape of cutting-edge, non-invasive fault injection, we hope more researchers, designers, and security professionals will examine these attacks further and take such threats into consideration while developing effective countermeasures.2023-11-15T21:21:43+00:00https://creativecommons.org/licenses/by/4.0/Amit Mazumder ShuvoTao ZhangFarimah FarahmandiMark Tehranipoorhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1768Homomorphic Polynomial Public Key Cryptography for Quantum-secure Digital Signature2023-11-17T22:03:49+00:00Randy KuangMaria PerepechaenkoMahmoud SayedDafu LouIn their 2022 study, Kuang et al. introduced Multivariable Polynomial Public Key (MPPK) cryptography, a quantum-safe public-key cryptosystem leveraging the mutual inversion relationship between multiplication and division. MPPK employs multiplication for key-pair construction and division for decryption, generating public multivariate polynomials. Kuang and Perepechaenko expanded the cryptosystem into the Homomorphic Polynomial Public Key (HPPK), transforming product polynomials over large hidden rings using homomorphic encryption through modular multiplications. Initially designed as a key encapsulation mechanism (KEM), HPPK ensures security through homomorphic encryption of public polynomials over concealed rings. This paper extends its application to a digital signature scheme. The framework of HPPK KEM cannot be directly applied to digital signatures due to the different nature of the verification procedure compared to the decryption procedure. Thus, in order to use the core ideas of the HPPK KEM scheme in the framework of digital signatures, the authors introduce an extension of the Barrett reduction algorithm. This extension transforms modular multiplications over hidden rings into divisions in the verification equation, conducted over a prime field.
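The classic Barrett reduction that this extension builds on replaces a division by the modulus with a multiplication by a precomputed constant and shifts. A minimal sketch of the standard algorithm (not the paper's extension over hidden rings):

```python
def barrett_setup(n: int):
    """Precompute for modulus n: k with 2^k > n, and m = floor(2^(2k) / n)."""
    k = n.bit_length()
    return k, (1 << (2 * k)) // n

def barrett_reduce(x: int, n: int, k: int, m: int) -> int:
    """Compute x mod n for 0 <= x < n^2 using one multiplication by the
    precomputed constant m and cheap shifts, plus a small correction."""
    q = (x * m) >> (2 * k)   # q approximates floor(x / n), low by at most 2
    r = x - q * n            # hence 0 <= r < 3n
    while r >= n:            # at most two correcting subtractions
        r -= n
    return r
```

Since q never overshoots floor(x / n), the remainder r is always non-negative, and the estimate is close enough that the correction loop runs at most twice — which is what makes the floor-of-big-integer-division structure exploited by the signature scheme so cheap.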
The extended algorithm non-linearly embeds the signature into the public polynomial coefficients, employing the floor function of big-integer divisions. This approach overcomes vulnerabilities associated with the linear relationships of earlier MPPK DS schemes. The security analysis reveals exponential complexity for both private-key recovery and forged-signature attacks, taking into account that the bit length of the rings is twice that of the prime field size. The effectiveness of the proposed Homomorphic Polynomial Public Key Digital Signature (HPPK DS) scheme is illustrated through a practical toy example, showcasing its functionality and security features.2023-11-15T19:08:24+00:00https://creativecommons.org/licenses/by-nc-nd/4.0/Randy KuangMaria PerepechaenkoMahmoud SayedDafu Louhttps://creativecommons.org/licenses/by-nc-nd/4.0/https://eprint.iacr.org/2023/1767The Impact of Hash Primitives and Communication Overhead for Hardware-Accelerated SPHINCS+2023-11-15T17:19:09+00:00Patrick KarlJonas SchuppGeorg SiglSPHINCS+ is a signature scheme included in the first NIST post-quantum standard, which bases its security on the underlying hash primitive. As most of the runtime of SPHINCS+ is spent evaluating several hash and pseudo-random functions, instantiated via the hash primitive, offloading this computation to dedicated hardware accelerators is a natural step. In this work, we evaluate different architectures for hardware acceleration of such a hash primitive with respect to its use case and evaluate them in the context of SPHINCS+. We attach hardware accelerators for different hash primitives (SHAKE256 and Asconxof, for both full and round-reduced versions) to CPU interfaces having different transfer speeds.
We show that, for most use cases, data transfer determines the overall performance if the accelerators are equipped with FIFOs.2023-11-15T17:19:09+00:00https://creativecommons.org/licenses/by/4.0/Patrick KarlJonas SchuppGeorg Siglhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1766Introducing Clapoti(s): Evaluating the isogeny class group action in polynomial time2023-11-15T12:29:26+00:00Aurel PageDamien RobertIn this short note, we present a simplified (but slower) version, Clapoti, of Clapotis, whose full description will appear later. Let 𝐸/𝔽_𝑞 be an elliptic curve with an effective primitive orientation by a quadratic imaginary order 𝑅 ⊂ End(𝐸). Let 𝔞 be an invertible ideal in 𝑅. Clapoti is a randomized polynomial-time algorithm, using (log Δ_𝑅 + log 𝑞)^{𝑂(1)} operations, to compute the class group action 𝐸 ↦ 𝐸_𝔞 ≃ 𝐸/𝐸[𝔞].2023-11-15T12:29:26+00:00https://creativecommons.org/licenses/by/4.0/Aurel PageDamien Roberthttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1765The Non-Uniform Perebor Conjecture for Time-Bounded Kolmogorov Complexity is False2023-11-15T06:47:39+00:00Noam MazorRafael PassThe Perebor (Russian for “brute-force search”) conjectures, which date back to the 1950s and 1960s, are some of the oldest conjectures in complexity theory. The conjectures are a stronger form of the NP ≠ P conjecture (which they predate) and state that for “meta-complexity” problems, such as the Time-Bounded Kolmogorov Complexity Problem and the Minimum Circuit Size Problem, there are no better algorithms than brute-force search.
In this paper, we disprove the non-uniform version of the Perebor conjecture for the Time-Bounded Kolmogorov complexity problem. We demonstrate that for every polynomial t(·), there exists a circuit of size $2^{4n/5+o(n)}$ that solves the t(·)-bounded Kolmogorov complexity problem on every instance.
Our algorithm is black-box in the description of the Universal Turing Machine employed in the definition of Kolmogorov Complexity, and leverages the characterization of one-way functions through the hardness of the time-bounded Kolmogorov complexity problem of Liu and Pass (FOCS’20), and the time-space trade-off for one-way functions of Fiat and Naor (STOC’91). We additionally demonstrate that no such black-box algorithm can have sub-exponential circuit size.
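The flavor of function inversion via preprocessing can be seen in a drastically simplified table-based sketch. This toy stores the entire image-to-preimage map, whereas Hellman- and Fiat–Naor-style constructions shrink the table far below the domain size and pay with online computation (the toy function and parameters here are illustrative only):

```python
import hashlib

def f(x: int, n_bits: int = 16) -> int:
    """Toy function on n-bit inputs, standing in for an arbitrary
    efficiently computable function to be inverted."""
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % (1 << n_bits)

def build_table(n_bits: int = 16):
    """Preprocessing phase: record one preimage per image over the whole
    domain. Time-space tradeoffs replace this full table with a much
    smaller structure at the cost of more online work."""
    table = {}
    for x in range(1 << n_bits):
        table.setdefault(f(x, n_bits), x)
    return table

def invert(y: int, table) -> int:
    """Online phase: constant-time lookup (maximal space, minimal time)."""
    return table[y]
```

The circuit of size $2^{4n/5+o(n)}$ in the paper sits at a non-trivial point on this curve: strictly less than the $2^n$ hardware a full table would need, yet still inverting every efficiently computable function with probability 1.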
Along the way (and of independent interest), we extend the result of Fiat and Naor and demonstrate that any efficiently computable function can be inverted (with probability 1) by a circuit of size $2^{4n/5+o(n)}$; as far as we know, this yields the first formal proof that a non-trivial circuit can invert any efficient function.2023-11-15T06:47:39+00:00https://creativecommons.org/licenses/by/4.0/Noam MazorRafael Passhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1764Distributed Differential Privacy via Shuffling vs Aggregation: a Curious Study2023-11-15T03:05:42+00:00Yu WeiJingyu JiaYuduo WuChanghui HuChangyu DongZheli LiuXiaofeng ChenYun PengShaowei WangHow to achieve distributed differential privacy (DP) without a trusted central party is of great interest in both theory and practice. Recently, the shuffle model has attracted much attention. Unlike the local DP model, in which users send randomized data directly to the data collector/analyzer, the shuffle model introduces an intermediate untrusted shuffler that randomly permutes the data, already randomized by the users, before they reach the analyzer. The most appealing aspect is that while shuffling does not explicitly add more noise to the data, it can improve privacy. This privacy amplification effect means the users need to add less noise to their data than in the local DP model while achieving the same level of differential privacy. Thus, protocols in the shuffle model can provide better accuracy than those in the local DP model. What is interesting to us is that the architecture of the shuffle model is similar to that of private aggregation, which has been studied for more than a decade. In private aggregation, locally randomized user data are aggregated by an intermediate untrusted aggregator. Thus, our question is whether aggregation also exhibits some sort of privacy amplification effect?
And if so, how good is this ``aggregation model'' in comparison with the shuffle model? We conducted the first comparative study between the two, covering privacy amplification, functionalities, protocol accuracy, and practicality. The results so far suggest that the new shuffle model does not have obvious advantages over the old aggregation model. On the contrary, protocols in the aggregation model outperform those in the shuffle model, sometimes significantly, in many aspects.2023-11-15T03:05:42+00:00https://creativecommons.org/licenses/by/4.0/Yu WeiJingyu JiaYuduo WuChanghui HuChangyu DongZheli LiuXiaofeng ChenYun PengShaowei Wanghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1763Secure Transformer Inference2023-11-15T02:55:09+00:00Mu YuanLan ZhangXiang-Yang LiWe present a three-party protocol that can protect both Transformer parameters and user data during the inference phase. For each feedforward inference process, our protocol only introduces permutation computation of the input and output data on the user side. Our protocol, the Secure Transformer Inference Protocol (STIP), can be applied to real-world services like ChatGPT.2023-11-15T02:55:09+00:00https://creativecommons.org/licenses/by/4.0/Mu YuanLan ZhangXiang-Yang Lihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1762ZKSMT: A VM for Proving SMT Theorems in Zero Knowledge2023-11-28T17:36:38+00:00Daniel LuickJohn KolesarTimos AntonopoulosWilliam R. HarrisJames ParkerRuzica PiskacEran TromerXiao WangNing LuoVerification of program safety is often reducible to proving the unsatisfiability (i.e., validity) of a formula in Satisfiability Modulo Theories (SMT): Boolean logic combined with theories that formalize arbitrary first-order fragments.
Zero-knowledge (ZK) proofs allow SMT formulas to be validated without revealing the underlying formulas or their proofs to other parties, which is a crucial building block for proving the safety of proprietary programs. Recently, Luo et al. (CCS 2022) studied the simpler problem of proving the unsatisfiability of pure Boolean formulas, but their approach does not support safety proofs generated by SMT solvers. This work presents ZKSMT, a novel framework for proving the validity of SMT formulas in ZK. We design a virtual machine (VM) tailored to efficiently represent the verification process of SMT validity proofs in ZK. Our VM can support the vast majority of popular theories when proving program safety while being complete and sound. To demonstrate this, we instantiate the commonly used theories of equality and linear integer arithmetic in our VM, with theory-specific optimizations for proving them in ZK. ZKSMT achieves high practicality even when running on realistic SMT formulas generated by Boogie, a common tool for software verification, achieving a three-order-of-magnitude improvement over a baseline that executes the proof-verification code in a general ZK system.2023-11-15T00:55:14+00:00https://creativecommons.org/licenses/by/4.0/Daniel LuickJohn KolesarTimos AntonopoulosWilliam R. HarrisJames ParkerRuzica PiskacEran TromerXiao WangNing Luohttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1761Guardianship in Group Key Exchange for Limited Environments2023-11-20T16:08:11+00:00Elsie Mestl FondevikBritta HaleXisen TianPost-compromise security (PCS) has been a core goal of end-to-end encrypted messaging applications for many years, both in one-to-one continuous key agreement (CKA) and for groups (CGKA). At its essence, PCS relies on a compromised party performing a key update in order to `self-heal'. However, due to bandwidth constraints, receive-only mode, and various other environmental demands of the growing number of use cases for such CGKA protocols, a group member may not be able to issue such updates. In this work, we address the issue of devices functioning in limited mode through the introduction of guardianship, where a designated guardian can perform key updates on behalf of its paired edge device. We introduce a Guardianship PCS (GPCS) security notion and provide an associated security experiment.
We investigate various architectural designs in the pursuit of GPCS, provide constructions and security analyses, and describe trade-offs.2023-11-14T23:04:31+00:00https://creativecommons.org/publicdomain/zero/1.0/Elsie Mestl FondevikBritta HaleXisen Tianhttps://creativecommons.org/publicdomain/zero/1.0/https://eprint.iacr.org/2023/1760Biscuit: New MPCitH Signature Scheme from Structured Multivariate Polynomials2023-11-14T15:46:00+00:00Luk BettaleDelaram KahrobaeiLudovic PerretJavier VerbelThis paper describes Biscuit, a new multivariate-based signature scheme derived using the MPCitH approach. The security of Biscuit is related to the problem of solving a set of structured quadratic systems of algebraic equations. These equations are highly compact and can be evaluated using very few multiplications. The core of Biscuit is a rather simple MPC protocol which consists of the parallel execution of a few secure multiplications using standard optimized multiplicative triples.
This paper also includes several improvements with respect to the Biscuit submission to the recent NIST PQC standardization process for additional signature schemes. Notably, we introduce a new hypercube variant of Biscuit, refine the security analysis in light of recent third-party attacks, and present a new AVX2 implementation of Biscuit.2023-11-14T15:46:00+00:00https://creativecommons.org/licenses/by/4.0/Luk BettaleDelaram KahrobaeiLudovic PerretJavier Verbelhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1759Non-Interactive Zero-Knowledge Functional Proofs2023-11-14T10:01:26+00:00Gongxian ZengJunzuo LaiZhengan HuangLinru ZhangXiangning WangKwok-Yan LamHuaxiong WangJian WengIn this paper, we consider generalizing NIZK by empowering a prover to share a witness in a fine-grained manner with verifiers. Roughly, the prover is able to authorize a verifier to obtain extra information about the witness, i.e., besides verifying the truth of the statement, the verifier can additionally obtain a certain function of the witness from the accepting proof using a secret functional key provided by the prover.
To fulfill these requirements, we introduce a new primitive called \emph{non-interactive zero-knowledge functional proofs (fNIZKs)}, and formalize its security notions. We provide a generic construction of fNIZK for any $\textsf{NP}$ relation $\mathcal{R}$, which enables the prover to share any function of the witness with a verifier. For the widely used relation of set membership proof (implying range proof), we construct a concrete and efficient fNIZK, through new building blocks (set membership encryption and dual inner-product encryption), which might be of independent interest.2023-11-14T10:01:26+00:00https://creativecommons.org/licenses/by/4.0/Gongxian ZengJunzuo LaiZhengan HuangLinru ZhangXiangning WangKwok-Yan LamHuaxiong WangJian Wenghttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1758Pulsar: Secure Steganography through Diffusion Models2023-11-14T01:22:11+00:00Tushar M. JoisGabrielle BeckGabriel KaptchukWidespread efforts to subvert access to strong cryptography have renewed interest in steganography, the practice of embedding sensitive messages in mundane cover messages. Recent efforts at provably secure steganography have only focused on text-based generative models and cannot support other types of models, such as diffusion models, which are used for high-quality image synthesis. In this work, we initiate the study of securely embedding steganographic messages into the output of image diffusion models. We identify that the use of variance noise during image generation provides a suitable steganographic channel. We develop our construction, Pulsar, by building optimizations to make this channel practical for communication. Our implementation of Pulsar is capable of embedding $\approx 275$-$542$ bytes (on average) into a single image without altering the distribution of the generated image, all in the span of $\approx 3$ seconds of online time on a laptop.
In addition, we discuss how the results of Pulsar can inform future research into diffusion models. Pulsar shows that diffusion models are a promising medium for steganography and censorship resistance.2023-11-14T01:22:11+00:00https://creativecommons.org/licenses/by/4.0/Tushar M. JoisGabrielle BeckGabriel Kaptchukhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1757Adaptively Secure Consensus with Linear Complexity and Constant Round under Honest Majority in the Bare PKI Model, and Separation Bounds from the Idealized Message-Authentication Model2023-11-19T18:02:18+00:00Matthieu RambaudWe consider the mainstream model in secure computation known as the bare PKI setup, also known as the {bulletin-board PKI}.
It allows players to broadcast once and non-interactively before they receive their inputs and start the execution. A bulletin-board PKI is essentially the minimum setup known so far to implement the model known as {messages-authentication}, i.e., when $P$ is forwarded a signed message, it considers it to be issued by $R$ if and only if $R$ signed it. It is known since [Lamport et al, 82] that BA under honest majority (let alone secure computation) would not be possible without messages-authentication. But as further highlighted by [Borcherding, 96], messages-authentication cannot simply be implemented with digital signatures, without a bulletin-board of public keys. So the bulletin-board PKI setup and the messages-authentication model seem very close: this raises the question of whether there is a separation between them. In the bulletin-board PKI setup, the most communication-efficient synchronous BA is the one of [Boyle-Cohen-Goel, Podc'21 \& J. Cryptol.'24], which has $O(n.\text{polylog}(n))$ bit complexity, $f<n(1/3-\epsilon)$ resilience and tolerates an adversary which cannot adaptively corrupt after the setup. Our main upper-bound is a BA (and also a VBA) in this same model with strictly better parameters: {quasi-optimal} resilience $f<n(1/2-\epsilon)$, with an expected bit complexity of communications which is {linear} in $n$, and tolerance to an {adaptive} rushing adversary (but which unavoidably cannot remove messages sent). As in [BCG'21], they have constant expected latency.
All previous BAs or VBAs achieving the same metrics as our upper bound are either in the static adversary model: Sleepy [Pass-Shi, Asiacrypt'17], Snow White [Daian-Pass-Shi, FC'19], or assume more than a bare PKI setup: (i) The model of Thunderella [Pass-Shi, EC'17], Algorand [Gilad et al, SOSP'17], Praos [David et al, EC'18], [Goyal et al, FC'21] and [Momose et al, CCS'22 and CCS'23] assumes a public random seed which is unpredictable until strictly after all players published on the bulletin board; (ii) [Abraham et al, Podc'19] assume a trusted entity which honestly samples the keys of all players; (iii) All known implementations of the setups (i) and (ii), as well as the setup of [Blum et al, TCC'20], require interactions, furthermore in the form of BAs. (iv) [Garay-Kiayas-Shen '23] assume that honest players work more than the adversary, or, as in [Eckey-Faust-Loss et al '17 '22], at least as fast.
Of independent interest, our tool is a very simple non-interactive mechanism which {sets up a self-sortition function from non-interactive publications on the bulletin board}, and still guarantees an honest majority in every committee, except with probability exponentially small in both $\epsilon$ and the multicast complexity.
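As a loose illustration only (not the paper's actual construction; all names and parameters here are hypothetical), the general idea of non-interactive self-sortition from public values is that each player can decide its own committee membership by hashing its published key against a digest of the bulletin board, and anyone can re-run the same check:

```python
import hashlib

def in_committee(pk: bytes, board_digest: bytes, p_num: int, p_den: int) -> bool:
    """Toy self-sortition sketch: hash the player's public key together with a
    digest of the bulletin board, interpret the digest as an integer in
    [0, 2^256), and admit the player with probability p_num/p_den. The check
    is deterministic and publicly recomputable from published values alone."""
    h = hashlib.sha256(pk + board_digest).digest()
    val = int.from_bytes(h, "big")
    return val * p_den < p_num * (1 << 256)

# Example: with target probability 1/2, roughly half of 256 toy keys
# land in the committee (the exact count depends on the hash values).
board = hashlib.sha256(b"all-published-keys").digest()
members = sum(in_committee(bytes([i]), board, 1, 2) for i in range(256))
```

Note this sketch only conveys the "non-interactive, publicly verifiable" flavor; the paper's mechanism additionally guarantees an honest majority in every committee with overwhelming probability against an adaptive adversary, which a bare hash threshold does not by itself provide.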
We provide the following further results:
{- Optimality.} We show that resilience up to a tight honest majority $f<n/2$ is impossible for any multicast-based adaptively secure BA with subquadratic communication, whatever the setup.
{- Separation.} We show impossibility of subquadratic multicast-based BA in the messages-authentication model. Our model for this lower bound is even stronger, since it incorporates other assumptions at least as strong as all popular implications of a bulletin-board PKI setup: {secure channels}, {a (possibly structured) random string}, {NIZK}.
{- Partial synchrony.} Given that the multicast lower-bound holds a fortiori, and that the upper-bound adapts seamlessly (for $f<n(1/3-\epsilon)$), the separation also holds.
We show a second separation, which holds for general BAs, not necessarily multicast-based: any partially-synchronous BA in the messages-authentication model with linear message complexity has latency at least {logarithmic in $f$}.
{- Extension to VBA.} We extend to VBA the logarithmic latency lower bound. This is the first communication lower bound for randomized VBA to our knowledge. It shows that the separation under partial synchrony also holds for VBA. Along the way, we close the characterization of [Civit et al, Podc'23] of validity conditions in authenticated consensus, by apparently new results on VBA: both BA and VBA are infeasible under partial synchrony beyond $f<n/3$, whatever the setup; whereas synchronous VBA is feasible up to $f=n-1$ (contrary to BA).2023-11-14T00:58:09+00:00https://creativecommons.org/licenses/by/4.0/Matthieu Rambaudhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1756How to Use Quantum Indistinguishability Obfuscation2023-11-13T22:16:54+00:00Andrea ColadangeloSam GunnQuantum copy protection, introduced by Aaronson, enables giving out a quantum program-description that cannot be meaningfully duplicated. Despite over a decade of study, copy protection is only known to be possible for a very limited class of programs.
As our first contribution, we show how to achieve "best-possible" copy protection for all programs. We do this by introducing quantum state indistinguishability obfuscation (qsiO), a notion of obfuscation for quantum descriptions of classical programs. We show that applying qsiO to a program immediately achieves best-possible copy protection.
Our second contribution is to show that, assuming injective one-way functions exist, qsiO is concrete copy protection for a large family of puncturable programs --- significantly expanding the class of copy-protectable programs. A key tool in our proof is a new variant of unclonable encryption (UE) that we call coupled unclonable encryption (cUE). While constructing UE in the standard model remains an important open problem, we are able to build cUE from one-way functions. If we additionally assume the existence of UE, then we can further expand the class of puncturable programs for which qsiO is copy protection.
Finally, we construct qsiO relative to an efficient quantum oracle.2023-11-13T22:16:54+00:00https://creativecommons.org/licenses/by/4.0/Andrea ColadangeloSam Gunnhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1755HashRand: Efficient Asynchronous Random Beacon without Threshold Cryptographic Setup2023-12-01T07:29:41+00:00Akhil BandarupalliAdithya BhatSaurabh BagchiAniket KateMichael ReiterRegular access to unpredictable and bias-resistant randomness is important for applications such as blockchains, voting, and secure distributed computing. Distributed random beacon protocols address this need by distributing trust across multiple nodes, with the majority of them assumed to be honest. These protocols have found applications in blockchain technology, leading to the proposal of several distributed random beacon protocols, with some already implemented. However, many current random beacon systems rely on threshold cryptographic setups or exhibit high computational costs, while others assume partial or bounded synchronous networks. To overcome these limitations, we propose HashRand, a computation- and communication-efficient asynchronous random beacon protocol that uses a secure hash function to generate beacons and pairwise secure channels. HashRand has a per-node communication complexity of $\mathcal{O}(\lambda n \log(n))$ bits per beacon. The computational efficiency of HashRand is attributed to the two orders of magnitude lower time of a one-way hash computation compared to discrete log exponentiation. Interestingly, besides reduced overhead, HashRand achieves Post-Quantum security by leveraging the secure hash function against quantum adversaries, setting it apart from other random beacon protocols that use discrete log cryptography. In a geo-distributed testbed of $n=160$ nodes, HashRand produces 1 beacon every second, which is at least 4x higher than Spurt.
We also demonstrate the practical utility of HashRand by implementing a Post-Quantum secure Asynchronous SMR protocol, which has a response rate of over 122k txns per second over a WAN at $n=40$ nodes.2023-11-13T16:39:12+00:00https://creativecommons.org/licenses/by/4.0/Akhil BandarupalliAdithya BhatSaurabh BagchiAniket KateMichael Reiterhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1754That’s not my signature! Fail-stop signatures for a post-quantum world2023-11-13T14:47:10+00:00Cecilia BoschiniHila DahariMoni NaorEyal RonenSnowden's revelations kick-started a community-wide effort to develop cryptographic tools against mass surveillance.
In this work, we propose to add another primitive to that toolbox: Fail-Stop Signatures (FSS) [EC'89].
FSS are digital signatures enhanced with a forgery-detection mechanism that can protect a PPT signer from more powerful attackers.
Despite the fascinating concept, research in this area stalled after the '90s. However, the ongoing transition to post-quantum cryptography, with its hiccups due to the novelty of underlying assumptions, has become the perfect use case for FSS.
This paper aims to reboot research on FSS with practical use in mind: Our framework for FSS includes ``fine-grained'' security definitions (that assume a powerful but bounded adversary, e.g., one that can break $128$ bits of security, but not $256$).
As an application, we show new FSS constructions for the post-quantum setting.
We show that FSS are equivalent to standard, provably secure digital signatures that do not require rewinding or programming random oracles, and that this implies lattice-based FSS.
Our main construction is an FSS version of SPHINCS, which required building FSS versions of all its building blocks: WOTS, XMSS, and FORS.
In the process, we identify and provide generic solutions for two fundamental issues arising when deriving a large number of private keys from a single seed, and when building FSS for Hash-and-Sign-based signatures.2023-11-13T14:47:10+00:00https://creativecommons.org/licenses/by/4.0/Cecilia BoschiniHila DahariMoni NaorEyal Ronenhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1752Secure Encryption and Key Exchange using Arbiter PUF2023-11-13T10:13:56+00:00Raja Adhithan RadhakrishnanThis paper introduces a novel approach to enhancing cryptographic security. It proposes the use of one-time message sharing combined with Physically Unclonable Functions (PUF) to securely exchange keys and generate an S-subbyte-box for encryption. This innovative technique aims to elevate the security standards of cryptographic applications.2023-11-13T10:13:56+00:00https://creativecommons.org/licenses/by/4.0/Raja Adhithan Radhakrishnanhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1750A Statistical Verification Method of Random Permutations for Hiding Countermeasure Against Side-Channel Attacks2023-11-13T00:06:15+00:00Jong-Yeon ParkJang-Won JuWonil LeeBo-Gyeong KangYasuyuki KachiKouichi SakuraiAs NIST is putting the final touches on the standardization of PQC (Post Quantum Cryptography) public key algorithms, it is a racing certainty that peskier cryptographic attacks undeterred by those new PQC algorithms will surface. Such a trend in turn will prompt more follow-up studies of attacks and countermeasures. As things stand, from the attackers’ perspective, one viable form of attack that can be implemented thereupon is the so-called “side-channel attack”. The two best-known countermeasures heralded to be durable against side-channel attacks are “masking” and “hiding”. In that dichotomous picture, of particular note are successful single-trace attacks on some of NIST’s PQC then-candidates, which worked to the detriment of the former: masking. In this paper, we cast an eye over the latter: hiding. Hiding proves to be durable against both side-channel attacks and another equally robust type of attack, “fault injection attacks”, and hence is deemed an auspicious countermeasure to be implemented. Mathematically, the hiding method is fundamentally based on random permutations. There has been a cornucopia of studies on generating random permutations. However, those are not tied to implementation of the hiding method. In this paper, we propose a reliable and efficient verification of permutation implementation, through employing Fisher–Yates’ shuffling method.
We introduce the concept of an 𝑛-th order permutation and explain how it can be used to verify that our implementation is more efficient than its previous-gen counterparts for hiding countermeasures.As NIST is putting the final touches on the standardization of PQC (Post Quantum Cryptography) public key algorithms, it is a racing certainty that peskier cryptographic attacks undeterred by those new PQC algorithms will surface. Such a trend in turn will prompt more follow-up studies of attacks and countermeasures. As things stand, from the attackers’ perspective, one viable form of attack that can be implemented thereupon is the so-called “side-channel attack”. Two best-known countermeasures heralded to be durable against side-channel attacks are: “masking” and “hiding”. In that dichotomous picture, of particular note are successful single-trace attacks on some of the NIST’s PQC then-candidates, which worked to the detriment of the former: “masking”. In this paper, we cast an eye over the latter: “hiding”. Hiding proves to be durable against both side-channel attacks and another equally robust type of attacks called “fault injection attacks”, and hence is deemed an auspicious countermeasure to be implemented. Mathematically, the hiding method is fundamentally based on random permutations. There has been a cornucopia of studies on generating random permutations. However, those are not tied to implementation of the hiding method. In this paper, we propose a reliable and efficient verification of permutation implementation, through employing Fisher–Yates’ shuffling method. 
2023-11-13T00:06:15+00:00https://creativecommons.org/licenses/by/4.0/Jong-Yeon ParkJang-Won JuWonil LeeBo-Gyeong KangYasuyuki KachiKouichi Sakuraihttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1749Dora: Processor Expressiveness is (Nearly) Free in Zero-Knowledge for RAM Programs2023-11-12T17:19:02+00:00Aarushi GoelMathias Hall-AndersenGabriel KaptchukExisting protocols for proving the correct execution of a RAM program in zero-knowledge are plagued by a processor expressiveness trade-off: supporting fewer instructions results in smaller processor circuits (which improves performance), but may result in more program execution steps, because unsupported instructions must be emulated over multiple processor steps (which diminishes performance).
We present Dora, a concretely efficient zero-knowledge protocol for RAM programs that sidesteps this tension by making it (nearly) free to add additional instructions to the processor. The computational and communication complexity of proving each step of a computation in Dora is constant in the number of supported instructions. Dora is also highly generic and only assumes the existence of linearly homomorphic commitments. We implement Dora and demonstrate that on commodity hardware it can prove the correct execution of a processor with thousands of instructions, each of which has thousands of gates, in just a few milliseconds per step.
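Dora’s only assumption is a linearly homomorphic commitment scheme: commitments can be combined so that the result commits to the sum of the underlying messages. A toy Pedersen-style sketch illustrates the property — the parameters here are deliberately tiny and insecure, and this is not Dora’s actual instantiation:

```python
import secrets

# Tiny demo parameters (a real scheme needs a large prime-order group
# with independently sampled bases); g and h are fixed bases mod p.
p = 2**61 - 1   # a Mersenne prime
g, h = 3, 7

def commit(m, r):
    """Pedersen-style commitment: C = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Linear homomorphism: commitments to m1 and m2 multiply to a
# commitment to m1 + m2 under combined randomness r1 + r2.
m1, r1 = 42, secrets.randbelow(p - 1)
m2, r2 = 58, secrets.randbelow(p - 1)
assert commit(m1, r1) * commit(m2, r2) % p == commit(m1 + m2, r1 + r2)
```

This additive structure is what lets a verifier check linear relations over committed processor state without learning the state itself.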
2023-11-12T17:19:02+00:00https://creativecommons.org/licenses/by/4.0/Aarushi GoelMathias Hall-AndersenGabriel Kaptchukhttps://creativecommons.org/licenses/by/4.0/https://eprint.iacr.org/2023/1748Forging tropical signatures2023-11-11T23:41:23+00:00Lorenz PannyA recent preprint [ePrint 2023/1475] suggests the use of polynomials over a tropical algebra to construct a digital signature scheme "based on" the problem of factoring such polynomials, which is known to be NP‑hard.
This short note presents two very efficient forgery attacks on the scheme, bypassing the need to factorize tropical polynomials and thus demonstrating that security in fact rests on a different, empirically easier problem.
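For context, a tropical (min-plus) semiring replaces addition with min and multiplication with +, so multiplying tropical polynomials adds coefficients and collapses like-degree terms with min. A toy sketch of tropical polynomial multiplication — the min-plus convention here is an assumption for illustration, and this is not the preprint’s construction:

```python
# Tropical (min-plus) polynomial multiplication over {degree: coefficient}
# dicts: coefficients combine with + (tropical product), and like-degree
# terms collapse with min (tropical sum).
def trop_mul(f, g):
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            c = a + b  # tropical "product" of the two coefficients
            h[i + j] = min(h.get(i + j, c), c)
    return h

f = {1: 0, 0: 2}   # tropically, x ⊕ 2
g = {1: 0, 0: 3}   # tropically, x ⊕ 3
print(trop_mul(f, g))  # → {2: 0, 1: 2, 0: 5}
```

Factoring such polynomials is NP-hard in general, but as the note shows, the signature scheme’s security does not actually reduce to that problem.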
2023-11-11T23:41:23+00:00https://creativecommons.org/licenses/by/4.0/Lorenz Pannyhttps://creativecommons.org/licenses/by/4.0/