## All papers in 2020 (1620 results)

Neural Aided Statistical Attack for Cryptanalysis

In Crypto’19, Gohr proposed the first deep learning-based key recovery attack on 11-round Speck32/64, which opened the direction of neural aided cryptanalysis. Until now, neural aided cryptanalysis has faced two problems: (1) attack complexity estimates rely purely on practical experiments; there is no framework for estimating the complexity theoretically. (2) The attack does not work when the prepended differential does not contain enough neutral bits. To the best of our knowledge, we are the first to solve these two problems. In this paper, we propose a Neural Aided Statistical Attack (NASA) that has the following advantages: (1) NASA supports estimating the theoretical complexity. (2) NASA does not rely on any special properties, including neutral bits. (3) NASA is applicable to large-size ciphers. Moreover, we propose three methods for reducing the attack complexity of NASA. One of them is based on a newly proposed concept named Informative Bit that reveals an important phenomenon.
Four attacks on 9-round or 10-round Speck32/64 are executed to verify the correctness of NASA. To further highlight the advantages of NASA, we perform a series of experiments. First, we apply NASA and Gohr’s attack to round-reduced DES. Since NASA does not rely on neutral bits, NASA breaks 10-round DES while Gohr’s attack breaks 8-round DES. Then, we compare the time consumption of attacks on 11-round Speck32/64: when the three newly proposed methods are used, the time consumption of NASA is almost the same as that of Gohr’s attack. Besides, NASA is applied to 13-round Speck32/64. Finally, we show how to analyze the resistance of large-size ciphers with respect to NASA, and apply NASA to 14-round Speck96/96. The code of this paper is available at https://github.com/AI-Lab-Y/NASA. Our work arguably opens a new direction for neural aided cryptanalysis.

Getting Rid of Linear Algebra in Number Theory Problems

We revisit some well-known cryptographic problems in a black-box modular ring model of computation. This model provides black-box access to the ring $\mathbb{Z}/m\mathbb{Z}$. We develop new generic ring algorithms to recover $m$ even if it is unknown. In the end, Maurer's generic algorithm allows one to recover an element from its black-box representation. However, we avoid Maurer's idealized model with a CDH oracle for multiplication in the hidden ring by introducing a new representation compatible with ring operations: an element is encoded by its action over the factor basis. Consequently, we can multiply two elements with the classical descent computations used in sieving algorithms. Since the algorithms we propose avoid the expensive linear algebra computation at the end, even though they manipulate large sparse matrices, we can exploit a high level of parallelism.
We then consider general groups, such as imaginary quadratic class groups and Jacobians of hyperelliptic curves, and obtain new methods for group order computation. The repeated squaring problem and the adaptive root problem used in the construction of Verifiable Delay Functions are particularly easy to solve in the black-box modular ring, and the high degree of parallelism provided by our method reduces the time to solve them. We improve the smoothing time, and as a result we break Verifiable Delay Functions and factorize weak keys with lower Area-Time (AT) cost.
Finally, we show new AT costs for computing a discrete logarithm over an adversarial basis in finite fields.

Proof-Carrying Data without Succinct Arguments

Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Known approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme.
In this paper we show how to obtain PCD without relying on SNARKs. We construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size arguments) that has a *split accumulation scheme*, which is a weak form of accumulation that we introduce.
Moreover, we construct a transparent non-interactive argument of knowledge for R1CS whose split accumulation is verifiable via a (small) *constant number of group and field operations*. Our construction is proved secure in the random oracle model based on the hardness of discrete logarithms, and it leads, via the random oracle heuristic and our result above, to concrete efficiency improvements for PCD.
Along the way, we construct a split accumulation scheme for Hadamard products under Pedersen commitments and for a simple polynomial commitment scheme based on Pedersen commitments.
Our results are supported by a modular and efficient implementation.

Arguments of Knowledge via hidden order groups

We study non-interactive arguments of knowledge (AoKs) for commitments in groups of hidden order. We provide protocols whereby a Prover can demonstrate certain properties of and relations between committed sets/multisets, with succinct proofs that are publicly verifiable against the constant-sized commitments. In particular, we provide AoKs for the disjointness of committed sets/multisets in cryptographic accumulators, with a view toward applications to verifiable outsourcing of data storage and sharded stateless blockchains.
Recent work ([DGS20]) suggests that hidden order groups need to be substantially larger than previously thought in order to ensure the desired security level. Thus, to keep the communication complexity between the Prover and the Verifier to a minimum, we have designed the protocols so that the proofs consist of a constant number of group elements, independent of the number of committed sets/multisets rather than just independent of the sizes of these sets/multisets.
If the underlying group of hidden order is an appropriate imaginary quadratic class group or a genus three Jacobian, the argument systems are transparent. Furthermore, since all challenges are public coin, the protocols can be made non-interactive using the Fiat-Shamir heuristic. We build on the techniques from [BBF19] and [Wes18].

Algebraic Geometric Secret Sharing Schemes over Large Fields Are Asymptotically Threshold

In the Chen-Cramer Crypto 2006 paper \cite{cc}, algebraic geometric secret sharing schemes were proposed such that the ``Fundamental Theorem in Information-Theoretically Secure Multiparty Computation" by Ben-Or, Goldwasser and Wigderson \cite{BGW88} and Chaum, Crépeau and Damgård \cite{CCD88} can be established over constant-size base finite fields. These algebraic geometric secret sharing schemes, defined by a curve of genus $g$ over a constant-size finite field ${\bf F}_q$, are quasi-threshold in the following sense: any subset of $u \leq T-1$ players (non-qualified) has no information about the secret, and any subset of $u \geq T+2g$ players (qualified) can reconstruct the secret. It is natural to ask how far from threshold these quasi-threshold secret sharing schemes are: how many subsets of $u \in [T, T+2g-1]$ players can recover the secret, and how many have no information about it?
In this paper it is proved that almost all subsets of $u \in [T,T+g-1]$ players have no information about the secret, and almost all subsets of $u \in [T+g,T+2g-1]$ players can reconstruct the secret, when the size $q$ goes to infinity and the genus satisfies $\lim \frac{g}{\sqrt{q}}=0$. Hence algebraic geometric secret sharing schemes over large finite fields are asymptotically threshold in this case. We also analyze the case when the size $q$ of the base field is fixed and the genus goes to infinity.

An Ideal Compartmented Secret Sharing Scheme Based on Linear Homogeneous Recurrence Relations

Multipartite secret sharing schemes are those that have multipartite access structures. The set of participants in such schemes is divided into several parts, and all participants in the same part play equivalent roles. One type of such access structure is the compartmented access structure. We propose an ideal and efficient compartmented multi-secret sharing scheme based on linear homogeneous recurrence (LHR) relations. In the construction phase, the shared secrets are hidden in some terms of a linear homogeneous recurrence sequence. In the recovery phase, the shared secrets are obtained by solving for those terms in which they are hidden. When the global threshold is $t$, our scheme reduces the computational complexity from $O(n^{t-1})$ to $O(n^{\max(t_i-1)}\log n)$, where $t_i<t$. The security of the proposed scheme rests on Shamir's threshold scheme. Moreover, the proposed scheme shares multiple secrets, and changes the shared secrets, efficiently; it can thus improve the performance of key management and distributed systems.
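To make the recurrence-based idea concrete, the following toy sketch (our illustration, not the paper's scheme) generates terms of a linear homogeneous recurrence sequence; the modulus, coefficients, and initial terms below are hypothetical. The whole sequence is determined by the $t$ initial terms, so values hidden in later terms become recoverable once enough of the sequence is pinned down.

```python
# Toy LHR sequence: u_n = c_1*u_{n-1} + c_2*u_{n-2} + c_3*u_{n-3} (mod p).
p = 2**31 - 1                 # hypothetical prime modulus
coeffs = [3, 5, 7]            # hypothetical coefficients c_1, c_2, c_3
init = [11, 13, 17]           # hypothetical initial terms u_0, u_1, u_2

def lhr_sequence(coeffs, init, n, p):
    """Return the first n terms of the LHR sequence."""
    seq = list(init)
    t = len(coeffs)
    while len(seq) < n:
        prev = seq[-t:][::-1]  # u_{k-1}, u_{k-2}, ..., u_{k-t}
        seq.append(sum(c * u for c, u in zip(coeffs, prev)) % p)
    return seq

seq = lhr_sequence(coeffs, init, 10, p)
# u_3 = 3*u_2 + 5*u_1 + 7*u_0 = 3*17 + 5*13 + 7*11 = 193
```

In the actual scheme, shares and secrets would be embedded in such terms; here the point is only that each term is a fixed linear function of the preceding $t$ terms.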

SoK: Algorithmic Incentive Manipulation Attacks on Permissionless PoW Cryptocurrencies

A long-standing question in the context of cryptocurrencies based on Nakamoto consensus is whether such constructions are incentive compatible, i.e., whether the intended properties of the system emerge from the appropriate utility model for participants. Bribing and other related attacks, such as front-running or Goldfinger attacks, aim to directly influence the incentives of actors within (or outside) the targeted cryptocurrency system. The theoretical possibility of bribing attacks on cryptocurrencies was discussed early on in the cryptocurrency community, and various techniques and approaches have since been proposed. Some of these attacks are designed to gain in-band profits, while others intend to break the mechanism design and render the cryptocurrency worthless. In this paper, we systematically expose the large but scattered body of research in this area that has accumulated over the years. We summarize these bribing attacks and similar techniques that leverage programmatic execution and verification under the term algorithmic incentive manipulation (AIM) attacks, and show that the problem space is not yet fully explored. Based on our analysis, we present several research gaps and opportunities that warrant further investigation. In particular, we highlight no- and near-fork attacks as a powerful, yet largely underestimated, AIM category that raises serious security concerns, and not only for smart contract platforms.

Lockable Signatures for Blockchains: Scriptless Scripts for All Signatures

Payment Channel Networks (PCNs) have given a huge boost to the scalability of blockchain-based cryptocurrencies: Beyond improving the transaction rate, PCNs enabled cheap cross-currency payments and atomic swaps. However, current PCN proposals either heavily rely on special scripting features of the underlying blockchain (e.g. Hash Time Lock Contracts) or are tailored to a handful of digital signature schemes, such as Schnorr or ECDSA signatures. This leaves us in an unsatisfactory situation where many currencies that are being actively developed and use different signature schemes cannot enjoy the benefits of a PCN.
In this work, we investigate whether we can construct PCNs assuming the minimal ability of a blockchain to verify a digital signature, for any signature scheme. In answering this question in the affirmative, we introduce the notion of lockable signatures, which constitutes the cornerstone of our PCN protocols. Our approach is generic and the PCN protocol is compatible with any digital signature scheme, thus inheriting all favorable properties of the underlying scheme that are not offered by Schnorr/ECDSA (e.g.\ aggregatable signatures or post-quantum security).
While the usage of generic cryptographic machinery makes our generic protocol impractical, we view it as an important feasibility result as it may serve as the basis for constructing optimized protocols for specific signature schemes. To substantiate this claim, we design a highly efficient PCN protocol for the special case of Boneh-Lynn-Shacham (BLS) signatures. BLS signatures enjoy many unique features that make them a viable candidate for a blockchain, e.g. short, unique, and aggregatable signatures. Yet, prior to our work, no PCN was known to be compatible with them (without requiring an advanced scripting language). The cost of our PCN is dominated by a handful of calls to the BLS algorithms. Our concrete evaluation of these basic operations shows that users with commodity hardware can process payments with minimal overhead.

A New Efficient Hierarchical Multi-secret Sharing Scheme Based on Linear Homogeneous Recurrence Relations

Hierarchical secret sharing is an important key management technique, specially customized for hierarchical organizations whose departments are allocated different privileges, such as government agencies or companies. Hierarchical access structures have been widely adopted in secret sharing schemes, where efficiency is the primary consideration for various applications. How to design an efficient hierarchical secret sharing scheme is therefore an important issue. In 2007, a famous hierarchical secret sharing (HSS) scheme was proposed by Tassa based on Birkhoff interpolation, and many other HSS schemes were later proposed based on the same method. However, these schemes all depend on Polya's condition, which is necessary but not sufficient, so it cannot guarantee that Tassa's HSS scheme always exists. Furthermore, checking this condition requires testing the non-singularity of many matrices. We propose a hierarchical multi-secret sharing scheme based on linear homogeneous recurrence (LHR) relations and a one-way function. In our scheme, we select $m$ linearly independent homogeneous recurrence relations. The participants in the highly-ranked subsets $\gamma_1, \gamma_2 ,\cdots, \gamma_{j-1}$ join the $j$-th subset to construct the $j$-th LHR relation. In addition, the proposed hierarchical multi-secret sharing scheme requires just one share per participant and keeps the same computational complexity. Compared with state-of-the-art hierarchical secret sharing schemes, our scheme is highly efficient.

SLAP: Simple Lattice-Based Private Stream Aggregation Protocol

Private Stream Aggregation (PSA) protocols allow for the secure aggregation of time-series data, affording security and privacy to users' private data, with significantly better efficiency than general secure computation approaches such as homomorphic encryption, multiparty computation, and secure hardware.
Earlier PSA protocols face limitations including needless complexity, a lack of post-quantum security, or other practical issues. In this work, we present SLAP, a Simple Lattice-based Private Stream Aggregation Protocol.
SLAP features two variants with post-quantum security, with simpler and more efficient computations enabled by (1) our white-box approach that builds the encryption directly from the Ring Learning With Errors assumption and (2) application of state-of-the-art algorithmic optimizations in lattice-based cryptography.
We prove that SLAP with differentially private inputs is an aggregator oblivious PSA scheme.
We implement SLAP and show experimentally its improvements over similar work. We show a 20.76x speedup in aggregation over the previous state-of-the-art RLWE-based PSA work, and apply techniques including RNS, NTT, and batching to obtain a throughput of 390,691 aggregations per second for 1000 users. The communication overhead of SLAP is lower than in previous work, with decreases of up to 99.96% in ciphertext sizes compared to previous RLWE-based PSA work. We also show SLAP's improvement in throughput over other state-of-the-art post-quantum PSA, and compare and contrast our RLWE-based approach with other work based upon secret sharing and Learning-With-Rounding.

New directions in the ransomware phenomenon

Ransomware is a type of malware that blocks a user's access to files and demands a ransom. The attacker's main approach is to encrypt the user's files and provide the decryption tool only after the requested amount of money is paid, usually in difficult-to-trace currencies. In this paper, we provide a review of the ransomware phenomenon, making a clear distinction between the threats before and after WannaCry (which appeared in May 2017). Initially, we give two taxonomy examples from the literature and one designed by us. The first two taxonomies use "Platform", "Cryptosystem"/"Crypto", "Severity", "Attack" and "Target" as criteria (the terms appear in one of them or both), whereas we have chosen "Target Zone", "Propagation", "Payment" and "Weakness". We further describe and compare ransomware programs, taking into account several aspects including how they work (e.g., encryption methods), whom they target (e.g., individuals/organizations), what impact they have, and what weaknesses can be used to provide countermeasures (besides the general prevention techniques that we mention briefly).

A new method for secondary constructions of vectorial bent functions

In 2017, Tang et al. introduced a generic construction for bent functions of the form $f(x)=g(x)+h(x)$, where $g$ is a bent function satisfying some conditions and $h$ is a Boolean function. Recently, Zheng et al. generalized this result to construct large classes of vectorial bent Boolean functions from known ones in the form $F(x)=G(x)+h(x)$, where $G$ is a vectorial bent function and $h$ a Boolean function. In this paper we further generalize this construction to obtain vectorial bent functions of the form $F(x)=G(x)+\mathbf{H}(x)$, where $\mathbf{H}$ is also a vectorial Boolean function. This allows us to construct new infinite families of vectorial bent functions, EA-inequivalent to the function $G$ used in the construction. Most notably, specifying $\mathbf{H}(x)=\mathbf{h}(Tr_1^n(u_1x),\ldots,Tr_1^n(u_tx))$, the function $\mathbf{h}:\mathbb{F}_2^t \rightarrow \mathbb{F}_{2^t}$ can be chosen arbitrarily, which gives a relatively large class of different functions for a fixed function $G$. We also propose a method of constructing vectorial $(n,n)$-functions having a maximal number of bent components.
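As background for the bentness notion used above, the following small sketch (our illustration, not the paper's construction) checks bentness of a Boolean function via its Walsh transform: $f$ on $n$ variables is bent iff $|W_f(a)| = 2^{n/2}$ for every $a$. The example function is the classic inner-product bent function on $\mathbb{F}_2^4$.

```python
def dot(a, x):
    return bin(a & x).count("1") & 1   # inner product over F_2

def walsh(f, n):
    # W_f(a) = sum_x (-1)^(f(x) + <a,x>)
    return [sum((-1) ** (f(x) ^ dot(a, x)) for x in range(2 ** n))
            for a in range(2 ** n)]

# f(x1..x4) = x1*x3 + x2*x4 is bent on F_2^4 (the inner-product function)
def f(x):
    b = [(x >> i) & 1 for i in range(4)]
    return (b[0] & b[2]) ^ (b[1] & b[3])

spectrum = walsh(f, 4)
is_bent = all(abs(w) == 4 for w in spectrum)   # 2^(4/2) = 4
```

The same test applied coordinate-wise (to every nonzero component function) is how bentness of a vectorial function would be verified.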

Cryptographic competitions

Competitions are widely viewed as the safest way to select cryptographic algorithms. This paper surveys procedures that have been used in cryptographic competitions, and analyzes the extent to which those procedures reduce security risks.

Manta: Privacy Preserving Decentralized Exchange

Cryptocurrencies and decentralized ledger technology have been widely adopted over the last decade. However, there is not yet a decentralized exchange that protects users' privacy from end to end. In this paper, we construct the first ledger-based decentralized token exchange with strong privacy guarantees. We propose the first Decentralized Anonymous eXchange (DAX) scheme, based on automated market makers (AMM) and zkSNARKs, and present a formal definition of its security and privacy properties.

PEGASUS: Bridging Polynomial and Non-polynomial Evaluations in Homomorphic Encryption

Homomorphic encryption (HE) is considered one of the most important primitives for privacy-preserving applications. However, an efficient approach to evaluate both polynomial and non-polynomial functions on encrypted data is still absent, which hinders the deployment of HE in real-life applications. To address this issue, we propose a practical framework, PEGASUS. PEGASUS can efficiently switch back and forth between a packed CKKS ciphertext and FHEW ciphertexts without decryption, allowing us to evaluate arithmetic functions efficiently on the CKKS side and look-up tables on FHEW ciphertexts. Our FHEW-to-CKKS conversion algorithm is more practical than existing methods: we improve the computational complexity from linear to sublinear, and the size of our conversion key is significantly smaller, e.g., reduced from 80 gigabytes to 12 megabytes. We present extensive benchmarks of PEGASUS, including sigmoid/ReLU/min/max/division, sorting, and max-pooling. To further demonstrate the capability of PEGASUS, we develop two more applications. The first is a private decision tree evaluation whose communication cost is about two orders of magnitude smaller than previous HE-based approaches. The second is a secure K-means clustering that runs on thousands of encrypted samples in minutes and outperforms the best existing system by 14 to 20 times. To the best of our knowledge, this is the first work that supports practical K-means clustering using HE in a single-server setting.

$P_4$-free Partition and Cover Numbers and Application

$P_4$-free graphs (also known as cographs, complement-reducible graphs, or hereditary Dacey graphs) have been well studied in graph theory.
Motivated by computer science and information theory applications, our work encodes (flat) joint probability distributions and Boolean functions as bipartite graphs and studies bipartite $P_4$-free graphs.
For these applications, the graph properties of edge partitioning and covering a bipartite graph using the minimum number of these graphs are particularly relevant.
Previously, such graph properties have appeared in leakage-resilient cryptography and (variants of) coloring problems.
Interestingly, our covering problem is closely related to the well-studied problem of product/Prague dimension of loopless undirected graphs, which allows us to employ algebraic lower-bounding techniques for the product/Prague dimension.
We prove that computing these numbers is NP-complete, even for bipartite graphs.
We establish a connection to the (unsolved) Zarankiewicz problem to show that there are bipartite graphs with size-$N$ partite sets such that these numbers are at least ${\epsilon\cdot N^{1-2\epsilon}}$, for $\epsilon\in\{1/3,1/4,1/5,\dotsc\}$.
Finally, we accurately estimate these numbers for bipartite graphs encoding well-studied Boolean functions from circuit complexity, such as set intersection, set disjointness, and inequality.
For applications in information theory and communication and cryptographic complexity, we consider a system where a setup samples from a (flat) joint distribution and gives the participants, Alice and Bob, their portion from this joint sample.
Alice and Bob's objective is to non-interactively establish a shared key and extract the left-over entropy from their portion of the samples as independent private randomness.
A genie, who observes the joint sample, provides appropriate assistance to help Alice and Bob with their objective.
Lower bounds to the minimum size of the genie's assistance translate into communication and cryptographic lower bounds.
We show that (the $\log_2$ of) the $P_4$-free partition number of a graph encoding the joint distribution that the setup uses is equivalent to the size of the genie's assistance.
Consequently, the joint distributions corresponding to the bipartite graphs constructed above with high $P_4$-free partition numbers correspond to joint distributions requiring more assistance from the genie.
As a representative application in non-deterministic communication complexity, we study the communication complexity of nondeterministic protocols augmented by access to the equality oracle at the output.
We show that (the $\log_2$ of) the $P_4$-free cover number of the bipartite graph encoding a Boolean function $f$ is equivalent to the minimum size of the nondeterministic input required by the parties (referred to as the communication complexity of $f$ in this model).
Consequently, the functions corresponding to the bipartite graphs with high $P_4$-free cover numbers have high communication complexity.
Furthermore, there are functions with communication complexity close to the naive protocol where the nondeterministic input reveals a party's input.
Finally, the access to the equality oracle reduces the communication complexity of computing set disjointness by a constant factor in contrast to the model where parties do not have access to the equality oracle.
To compute the inequality function, we show an exponential reduction in the communication complexity, and this bound is optimal.
On the other hand, access to the equality oracle is (nearly) useless for computing set intersection.
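Since the whole development rests on the notion of $P_4$-freeness, here is a small brute-force sketch (our illustration) of the defining property: a graph is $P_4$-free iff no 4 vertices induce a path on 4 vertices. The example graphs are hypothetical.

```python
from itertools import combinations, permutations

def induces_p4(adj, quad):
    """True if the 4 vertices in quad induce a path a-b-c-d."""
    edges = {frozenset((u, v)) for u, v in combinations(quad, 2) if v in adj[u]}
    for a, b, c, d in permutations(quad):
        path = {frozenset(e) for e in [(a, b), (b, c), (c, d)]}
        if edges == path:
            return True
    return False

def p4_free(adj):
    return not any(induces_p4(adj, q) for q in combinations(adj, 4))

# P4 itself: the path 0-1-2-3
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
# The complete bipartite graph K_{2,2}: a cograph, hence P4-free
k22 = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
```

Bipartite graphs such as `k22` are the objects whose $P_4$-free partition and cover numbers the paper studies; the brute-force check above is of course exponential, in line with the NP-completeness result.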

An Embedded Domain-Specific Language for Logical Circuit Descriptions with Applications to Garbled Circuits

Contemporary libraries and frameworks that make it possible to incorporate secure multi-party computation protocols and capabilities into production software systems and applications must sometimes deliver underlying capabilities (such as logical circuit synthesis) to new kinds of environments (such as web browsers or serverless cloud computing platforms). In order to illustrate some of the benefits of addressing this challenge by building a solution from the ground up that leverages the features of a contemporary and widely used programming language, we present an embedded domain-specific language that allows programmers to describe and synthesize logical circuits. Notably, this approach allows programmers to employ many of the language features and any of the programming paradigms supported by the host language. We illustrate this flexibility by considering two use cases: synthesizing circuits for relational operations and synthesizing circuits corresponding to the SHA-256 cryptographic hash function.
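The flavor of such an embedded DSL can be sketched as follows (a hypothetical toy in Python, not the library the abstract describes): operator overloading in the host language builds a gate graph, which can then be evaluated, synthesized, or garbled.

```python
# Minimal circuit eDSL: expressions over Wire objects build a gate DAG.
class Wire:
    def __init__(self, op, args=()):
        self.op, self.args = op, args
    def __and__(self, other): return Wire("and", (self, other))
    def __xor__(self, other): return Wire("xor", (self, other))
    def __invert__(self):     return Wire("not", (self,))

def inp(name):
    return Wire("in", (name,))

def evaluate(wire, env):
    """Recursively evaluate the gate graph on a 0/1 environment."""
    if wire.op == "in":
        return env[wire.args[0]]
    vals = [evaluate(a, env) for a in wire.args]
    if wire.op == "and": return vals[0] & vals[1]
    if wire.op == "xor": return vals[0] ^ vals[1]
    if wire.op == "not": return vals[0] ^ 1

# A half adder described directly in the host language:
a, b = inp("a"), inp("b")
s, carry = a ^ b, a & b
```

Because circuits are ordinary host-language values, any Python control flow (loops, recursion, higher-order functions) can be used to describe them, which is the flexibility the abstract highlights.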

One-Time Delegation of Unlinkable Signing Rights and Its Application

Delegation of signing rights can promote effective resource sharing and smooth cooperation among participants in distributed systems, and in many situations we need restricted delegation, such as one-timeness and unlinkability, rather than simple full delegation. In particular, one-timeness cannot be achieved by cryptographic measures alone: one must resort to some form of tamper-proofness, or to the assistance of external cloud servers for ``key-disabling''. In this work, we extend the latter approach so that a delegatee can sign a message without the delegator's involvement, under the assumption that there is at least one honest cloud server with secure erasure, to achieve one-timeness. In this setting, it is problematic for the delegator to simply share the signing key between the delegatee and the cloud servers: in the worst case, the delegator cannot tell whether a signing-key theft occurred, because illegally generated signatures are indistinguishable from legally generated ones. To solve this, we first propose an efficient one-time delegation scheme for Okamoto-Schnorr signing. We then combine the basic delegation scheme with anonymous credentials so that the delegator can detect signing-key theft even if one-time delegation is broken, while also achieving unlinkability against both the delegator and the cloud servers. Finally, we show an application to an e-cash scheme that prevents double-spending.

Speeding-up Ideal Lattice-Based Key Exchange Using an RSA/ECC Coprocessor

Polynomial multiplication is one of the most costly operations of ideal lattice-based cryptosystems. In this work, we study its optimization when one of the operands has coefficients close to 0. We focus on this structure since it is at the core of lattice-based Key Exchange Mechanisms submitted to the NIST call for post-quantum cryptography. In particular, we propose optimizations of this operation for embedded devices by using an RSA/ECC coprocessor that provides efficient large-integer arithmetic. In this context, we compare Kronecker Substitution, already studied by Albrecht et al. in TCHES 2019, with two specific algorithms that we introduce: KSV, a variant of this substitution, and an adaptation of schoolbook multiplication, denoted Shift&Add. All these algorithms rely on transforming polynomial multiplication into large-integer arithmetic; thanks to them, existing coprocessors dedicated to large-integer arithmetic can be re-purposed to speed up post-quantum schemes. The efficiency of these algorithms depends on the component specifications and on the cryptosystem parameter set. Thus, we establish a methodology for determining which algorithm to use for a given component by implementing only basic large-integer operations. Moreover, the three algorithms are assessed on a chip, confirming that the theoretical methodology matches practical results. They are also compared to reference software implementations such as NTT or schoolbook multiplication.
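The core trick behind Kronecker substitution can be sketched in a few lines (our illustration, with hypothetical small parameters): both polynomials are evaluated at a power of two large enough that no coefficient of the product overflows its slot, so one large-integer multiplication, the kind of operation an RSA/ECC coprocessor accelerates, computes the whole product.

```python
def kronecker_mul(f, g, bits):
    """Multiply polynomials (nonnegative coeff lists, low degree first)
    by packing them into integers at x = 2**bits."""
    F = sum(c << (bits * i) for i, c in enumerate(f))
    G = sum(c << (bits * i) for i, c in enumerate(g))
    P = F * G                                   # the coprocessor's job
    mask = (1 << bits) - 1
    return [(P >> (bits * i)) & mask for i in range(len(f) + len(g) - 1)]

def schoolbook(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

f, g = [1, 2, 3], [4, 5, 6]
# 16 bits per slot comfortably holds every coefficient of f*g here
prod = kronecker_mul(f, g, 16)
```

Handling signed or modular coefficients, as the paper's KSV variant must, requires extra care with slot sizes and carries; the sketch shows only the nonnegative case.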

Adaptive layer-two dispute periods in blockchains

Second-layer or off-chain protocols increase the throughput of permissionless blockchains by enabling parties to lock funds into smart contracts and perform payments through peer-to-peer communication, resorting to the smart contracts only for protection against fraud. Current protocols have fixed time periods during which participants can dispute any fraud attempts. However, current blockchains have limited transaction processing capacity, so a fixed dispute period will not always suffice to deter all fraudulent behaviour in an off-chain protocol. In this work, we describe how to set adaptive dispute periods that accommodate the congestion and capacity of the underlying blockchain. Adaptive dispute periods ensure that users retain the opportunity to dispute fraudulent behaviour during blockchain congestion, while increasing second-layer protocol efficiency by shortening dispute periods when the number of disputes is low. We describe a non-interactive argument system for setting adaptive dispute periods under the current Ethereum Virtual Machine, and discuss how to efficiently integrate built-in support for adaptive dispute periods into any blockchain. We empirically demonstrate that an adaptive-dispute second-layer protocol can handle a larger number of disputes and prevent more fraud than its non-adaptive counterparts, even when users are slow to issue disputes due to denial of service or blockchain congestion.

Auto-tune POIs: Estimation of distribution algorithms for efficient side-channel analysis

Due to the constant increase and versatility of IoT devices that must keep sensitive information private, Side-Channel Analysis (SCA) attacks on embedded devices are gaining visibility in industry. The integration and validation of countermeasures against SCA can be an expensive and cumbersome process, especially for less experienced practitioners, and current certification procedures require attacking the devices under test using multiple SCA techniques and attack vectors, often implying a high degree of complexity.
The goal of this paper is to ease one of the most crucial and tedious steps of profiling attacks, namely points-of-interest (POI) selection, and hence assist the SCA evaluation process. To this end, we introduce Estimation of Distribution Algorithms (EDAs) into the SCA field in order to automatically tune POI selection.
We showcase our approach on several experimental use cases, including attacks on unprotected and protected AES implementations over distinct copies of the same device, thereby dismissing the portability issue.

Function Secret Sharing for PSI-CA: With Applications to Private Contact Tracing

In this work we describe a token-based solution to Contact Tracing via Distributed Point Functions (DPF) and, more generally, Function Secret Sharing (FSS). The key idea behind the solution is that FSS natively supports secure keyword search on raw sets of keywords without a need for processing the keyword sets via a data structure for set membership. Furthermore, the FSS functionality enables adding up numerical payloads associated with multiple matches without additional interaction. These features make FSS an attractive tool for lightweight privacy-preserving searching on a database of tokens belonging to infected individuals.
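The keyword-search idea can be illustrated with a deliberately naive sketch (ours, not the paper's DPF: here key size is linear in the domain, whereas a real DPF is logarithmic). Two servers hold additive shares of the indicator vector of an infected token; each answers a query with an inner product over its share, and only the sum of the two answers reveals the match count.

```python
import random

def share_point_function(domain_size, alpha, beta, q):
    """Additive shares over Z_q of the vector that is beta at alpha, 0 elsewhere."""
    k0 = [random.randrange(q) for _ in range(domain_size)]
    k1 = [(-k0[i]) % q for i in range(domain_size)]
    k1[alpha] = (k1[alpha] + beta) % q          # plant the payload at alpha
    return k0, k1

q, n = 2**31 - 1, 16
k0, k1 = share_point_function(n, alpha=5, beta=1, q=q)  # hypothetical token 5

def server_answer(share, query_indices):
    """Each server's view: a sum of uniformly random-looking values."""
    return sum(share[i] for i in query_indices) % q

# Client queries tokens {3, 5, 9}; recombined answers count the matches.
hits = (server_answer(k0, [3, 5, 9]) + server_answer(k1, [3, 5, 9])) % q
```

The FSS construction the abstract relies on achieves exactly this functionality, including the additive payload aggregation across multiple matches, but with succinct keys.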

MILP Based Differential Attack on Round Reduced WARP

WARP was proposed by S. Banik et al. at SAC 2020. It is a 128-bit lightweight block cipher with a 128-bit key. WARP is based on the
32-nibble type-2 Generalised Feistel Network (GFN) structure. It uses a permutation over nibbles designed to optimize security and efficiency. The designers provided a lower bound for the number of differentially active S-boxes, but detailed differential characteristics were not provided. In this paper, we discuss the MILP based search technique and present differential characteristics for 18-round and 19-round WARP with probabilities of 2^(-122) and 2^(-132) respectively. We also present a key recovery attack on 21-round WARP with a data complexity of 2^113 chosen plaintexts. To the best of our knowledge, these detailed differential characteristics are presented for the first time, and this is the first key recovery attack on 21-round WARP.

A New Improved AES S-box With Enhanced Properties

The Advanced Encryption Standard (AES) is the most widely used symmetric encryption algorithm. Its security is mainly based on the structure of the S-box. In this paper, we present a new way to create S-boxes for AES and exhibit an S-box with improved cryptographic properties such as Bit Independence Criterion (BIC), periodicity, algebraic complexity, Strict Avalanche Criterion (SAC) and Distance to SAC.

Batched Differentially Private Information Retrieval

Private Information Retrieval (PIR) allows several clients to query a database held by one or more servers, such that the contents of their queries remain private. Prior PIR schemes have achieved sublinear communication and computation by leveraging computational assumptions, federating trust among many servers, relaxing security to permit differentially private leakage, refactoring effort into an offline stage to reduce online costs, or amortizing costs over a large batch of queries.
In this work, we present an efficient PIR protocol that combines all of the above techniques to achieve constant amortized communication and computation complexity in the size of the database and constant client work. We leverage differentially private leakage in order to provide better trade-offs between privacy and efficiency. Our protocol achieves speed-ups up to and exceeding $10\times$ in practical settings compared to state-of-the-art PIR protocols, and can scale to batches with hundreds of millions of queries on cheap commodity AWS machines. Our protocol builds upon a new secret sharing scheme that is both incremental and non-malleable,
which may be of interest to a wider audience. Our protocol provides security up to abort against malicious adversaries that can corrupt all but one party.

Attacks on Beyond-Birthday-Bound MACs in the Quantum Setting

We systematically study the security of twelve Beyond-Birthday-Bound Message Authentication Codes (BBB MACs) in the Q2 model, where attackers have quantum-query access to MACs. Assuming the block size of the underlying (tweakable) block cipher is $n$ bits, the security proofs show that they are secure at least up to $\mathcal{O}(2^{2n/3})$ queries in the classical setting, while the best classical attacks need $\mathcal{O}(2^{3n/4})$ queries. We consider secret state recovery against SUM-ECBC-like and PMAC_Plus-like MACs and key recovery against PMAC_Plus-like MACs. Both attacks lead to successful forgeries. The first attack costs $\mathcal{O}(2^{n/2}n)$ quantum queries by applying the Grover-meets-Simon algorithm. The second attack costs $\mathcal{O}(2^{m/2})$ quantum queries by applying Grover's algorithm, assuming the key size of the (tweakable) block cipher is $m$ bits. As far as we know, these are the first quantum attacks against BBB MACs. Remarkably, our attacks apply even to some optimally secure MACs, such as mPMAC+-f, mPMAC+-p1, and mPMAC+-p2.

Bootstrapping on SEAL

We implement bootstrapping of RNS-CKKS on SEAL, a homomorphic encryption library released by Microsoft, and measure the accuracy of encrypted data after bootstrapping for various parameters, which allows us to perform thousands of homomorphic operations.

Towards Post-Quantum Updatable Public-Key Encryption via Supersingular Isogenies

We present the first post-quantum secure Key-Updatable Public-Key Encryption (UPKE) construction. UPKE has been proposed as a mechanism to improve the forward secrecy and post-compromise security of secure messaging protocols, but the hardness of all existing constructions relies on discrete logarithm assumptions. We focus our assessment on isogeny-based cryptosystems due to their suitability for performing a potentially unbounded number of update operations, a practical requirement for secure messaging where user conversations can occur over months, if not years.
We begin by formalizing two UPKE variants in the literature as Symmetric and Asymmetric UPKE, which differ in how encryption and decryption keys are updated. We argue that Asymmetric UPKE constructions in the literature cannot be straightforwardly instantiated using either SIDH or CSIDH. We then describe a SIDH construction that partially achieves the required security notions for Symmetric UPKE, but due to existing mathematical limitations, cannot provide fine-grained forward secrecy. Finally, we present a CSIDH Symmetric UPKE construction that requires a parameter set in which the class group structure is fully known. We discuss open problems which are applicable to any cryptosystem with similar requirements for continuous operations over the secret domain.

Puncturable Pseudorandom Sets and Private Information Retrieval with Near-Optimal Online Bandwidth and Time

Imagine one or more non-colluding servers each holding a large
public database, e.g., the repository of DNS entries. Clients would
like to access entries in this database without disclosing their
queries to the servers. Classical private information retrieval (PIR)
schemes achieve polylogarithmic bandwidth per query, but require the
server to perform linear computation per query, which is a
significant barrier towards deployment.
Several recent works showed, however, that by introducing a
one-time, per-client, off-line preprocessing phase, an
\emph{unbounded} number of client queries can be subsequently served
with sublinear online computation time per query (and the cost of the
preprocessing can be amortized over the unboundedly many queries).
Existing preprocessing PIR schemes (supporting unbounded queries), unfortunately, make undesirable tradeoffs to achieve sublinear online computation:
they are either significantly non-optimal in online time or bandwidth,
or require the servers to store
a linear amount of state per client or even per query, or require
polylogarithmically many non-colluding servers.
We propose a novel 2-server preprocessing PIR scheme that achieves
$\widetilde{O}(\sqrt{n})$ online computation per query and
$\widetilde{O}(\sqrt{n})$ client storage, while
preserving the polylogarithmic online bandwidth of classical PIR
schemes. Both the online bandwidth and computation
are optimal up to a poly-logarithmic factor.
In our construction, each server stores only the original
database and nothing extra, and each online query is served within a
single round trip. Our construction relies on the standard LWE
assumption. As an important stepping stone, we propose new, more
generalized definitions for a cryptographic object called a Privately
Puncturable Pseudorandom Set, and give novel constructions that depart
significantly from prior approaches.
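For contrast with the preprocessing approach, the textbook 2-server XOR-based PIR can be sketched in a few lines; note that each server must touch every record per query, which is exactly the linear online computation the construction above avoids. This is the classical baseline, not the paper's scheme.

```python
import secrets

def query(db_size, index):
    """Client: a random subset mask for server 0, and the same mask
    with the target index flipped for server 1."""
    mask0 = [secrets.randbelow(2) for _ in range(db_size)]
    mask1 = list(mask0)
    mask1[index] ^= 1
    return mask0, mask1

def answer(db, mask):
    """Server: XOR of all selected records -- linear work per query,
    the cost that preprocessing PIR schemes make sublinear."""
    out = 0
    for record, bit in zip(db, mask):
        if bit:
            out ^= record
    return out

db = [0x11, 0x22, 0x33, 0x44]
m0, m1 = query(len(db), 2)
recovered = answer(db, m0) ^ answer(db, m1)  # the two masks cancel
                                             # everywhere except db[2]
```

Each server sees only a uniformly random mask, so neither learns the queried index, at polylogarithmic bandwidth for suitably encoded masks.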

Game-Theoretic Fairness Meets Multi-Party Protocols: The Case of Leader Election

Suppose that $n$ players
want to elect a random leader and they communicate by posting
messages to a common broadcast channel.
This problem is called leader election, and it is
fundamental to the distributed systems and cryptography literature.
Recently, it has attracted renewed interest
due to its promised applications in decentralized environments.
In a game-theoretically fair leader election protocol, roughly speaking,
we want that even majority coalitions
cannot increase their own chance of getting
elected, nor hurt the chance of any honest individual.
The folklore tournament-tree
protocol, which completes in logarithmically many rounds,
can easily be shown to satisfy game-theoretic security. To the best of our knowledge,
no sub-logarithmic round protocol was known in the setting that we consider.
We show that
by adopting an appropriate notion of approximate game-theoretic fairness,
and under standard cryptographic assumption,
we can achieve
$(1-1/2^{\Theta(r)})$-fairness in $r$ rounds for $\Theta(\log \log n) \leq r \leq \Theta(\log n)$,
where $n$ denotes the number of players. In particular, this means that we can approximately match the fairness of the tournament tree protocol using as few as $O(\log \log n)$ rounds.
We also prove a lower bound showing that
logarithmically many rounds is necessary if we restrict ourselves
to ``perfect'' game-theoretic fairness
and protocols that are
``very similar in structure'' to the tournament-tree protocol.
Although leader election is a well-studied problem in other contexts in distributed
computing,
our work is the first exploration of the round complexity
of {\it game-theoretically
fair} leader election in the presence of a possibly majority coalition.
As a by-product of our exploration,
we suggest a new, approximate game-theoretic fairness
notion, called ``approximate sequential fairness'',
which provides a more desirable solution concept than some previously
studied approximate fairness notions.
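The folklore tournament-tree protocol mentioned above can be simulated as follows. A plain fair coin stands in for each pairwise commit-and-reveal coin toss, so this sketch only illustrates the logarithmic round structure, not the fairness mechanism itself.

```python
import secrets

def tournament_leader(players):
    """Folklore tournament-tree election: pair up the survivors each
    round, decide each pair with a fair coin (a stand-in for a
    commit-and-reveal coin toss between the two players), and repeat
    until a single leader remains."""
    rounds = 0
    while len(players) > 1:
        rounds += 1
        survivors = []
        for i in range(0, len(players) - 1, 2):
            pair = players[i:i + 2]
            survivors.append(pair[secrets.randbelow(2)])
        if len(players) % 2 == 1:       # odd player out gets a bye
            survivors.append(players[-1])
        players = survivors
    return players[0], rounds

leader, rounds = tournament_leader(list(range(16)))
# With 16 players the tournament always takes exactly log2(16) = 4 rounds.
```

Halving the field each round is what forces the logarithmic round count that the paper's $O(\log \log n)$-round protocol improves upon.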

RandPiper -- Reconfiguration-Friendly Random Beacons with Quadratic Communication

Random beacon protocols provide a continuous public source of randomness, and their applications range from public lotteries to zero-knowledge proofs. Existing random beacon protocols in the bounded synchronous model sacrifice either fault tolerance or communication complexity for security, or give up ease of reconfigurability. This work overcomes the challenges of existing works through a novel, communication-efficient combination of state machine replication and (publicly) verifiable secret sharing (PVSS/VSS) protocols.
We first design a new Byzantine fault-tolerant state machine replication protocol with $O(\kappa n^2)$ bits communication per consensus decision without using threshold signatures. Next, we design GRandPiper (Good Pipelined Random beacon), a random beacon protocol with bias-resistance and unpredictability, that uses PVSS and has a communication complexity of $O(\kappa n^2)$ always (best and worst cases), for a static adversary. However, GRandPiper allows an adaptive adversary to predict beacon values up to $t+1$ epochs into the future. Therefore, we design BRandPiper (Better RandPiper), that uses VSS and has a communication complexity of $O(\kappa fn^2)$, where $f$ is the actual number of faults, while offering a strong unpredictability with an advantage of only a single round even for an adaptive adversary.

Unifying Presampling via Concentration Bounds

Auxiliary-input (AI) idealized models, such as auxiliary-input random oracle model (AI-ROM) and auxiliary-input random permutation model (AI-PRM), play a critical role in assessing non-uniform security of symmetric key and hash function constructions. However, obtaining security bounds in these models is
often much more challenging.
The presampling technique was initially introduced by Unruh (CRYPTO '07) for the AI-ROM and later exported to several other models by Coretti et al. (EUROCRYPT '18). It generically reduces security proofs in AI models to much simpler bit-fixing (BF) models, making it much easier to obtain concrete bounds in AI models. As a result, the presampling technique has led to simpler proofs for many known bounds (e.g., one-way functions), and has been applied to many settings where the compression technique appears intractable (e.g., Merkle-Damgård hashing).
We study the possibility of extending the presampling technique to the quantum world. To this end,
* We show that such leveraging will resolve a major open problem in quantum computing, which is closely related to the famous Aaronson-Ambainis conjecture (ITCS' 11).
* Faced with this barrier, we give a new but equivalent bit-fixing model and a simple proof of presampling techniques for arbitrary oracle distribution in the classical setting, including AI-ROM and AI-PRM. Our theorem matches the best-known security loss and unifies previous presampling techniques by Coretti et al. (EUROCRYPT' 18) and Coretti et al. (CRYPTO' 18).
* Finally, we extend our new classical presampling techniques to a novel ``quantum bit-fixing'' version of presampling, which matches the optimal security loss of classical presampling. Using our techniques, we give the first post-quantum non-uniform security bound for salted Merkle-Damgård hash functions and reprove the tight non-uniform security of function inversion by Chung et al. (FOCS '20).

Deniable Fully Homomorphic Encryption from LWE

We define and construct Deniable Fully Homomorphic Encryption based on the Learning With Errors (LWE) polynomial hardness assumption. Deniable FHE enables encrypted data stored in the cloud to be processed securely without decryption while maintaining deniability of the encrypted data, as well as the prevention of vote-buying in electronic voting schemes where encrypted votes can be tallied without decryption.
Our constructions achieve compactness independently of the level of deniability: both the size of the public key and the size of the ciphertexts are bounded by a fixed polynomial, independent of the detection probability achieved by the scheme. This is in contrast to all previous constructions of deniable encryption schemes (even without requiring homomorphisms) which are based on polynomial hardness assumptions, originating with the seminal work of Canetti, Dwork, Naor and Ostrovsky (CRYPTO 1997) in which the ciphertext size grows with the inverse of the detection probability. Canetti et al. argued that this dependence ``seems inherent'', but our constructions illustrate this is not the case.
We note that the Sahai-Waters (STOC 2014) construction of deniable encryption from indistinguishability-obfuscation achieves compactness and can be easily modified to achieve deniable FHE as well, but it requires multiple, stronger sub-exponential hardness assumptions, which are furthermore not post-quantum secure. In contrast, our constructions rely only on the LWE polynomial hardness assumption, as currently required for FHE even without deniability.
The running time of our encryption algorithm depends on the inverse of the detection probability, thus the scheme falls short of achieving simultaneously compactness, negligible deniability and polynomial encryption time. Yet, we believe that achieving compactness is a fundamental step on the way to achieving all properties simultaneously, as has been the historical journey for other primitives such as functional encryption. Our constructions support large message spaces, whereas previous constructions were bit by bit, and can be run in an online-offline model of encryption, where the bulk of computation is independent of the message and may be performed in an offline pre-processing phase. This results in an efficient online phase whose running time is independent of the detection probability. At the heart of our constructions is a new way to use bootstrapping to obliviously generate FHE ciphertexts so that it supports faking under coercion.

On the properties of the Boolean functions associated to the differential spectrum of general APN functions and their consequences

The notion of almost perfect nonlinear (APN) function is important, mathematically and cryptographically. Much still needs to be understood about the structure and the properties of APN functions. For instance, finding an APN permutation in an even number of variables larger than 6 would be an important theoretical and practical advance. A way to make progress on a notion is to introduce and study generalizations that make sense from both theoretical and practical points of view. The introduction and study of differentially uniform functions has brought more knowledge on APN functions themselves. Differential uniformity is directly related to one of the classical characterizations of APN functions and is of significant practical interest for cryptography. In this paper we introduce and study two other generalizations of almost perfect nonlinearity that are also related to classical characterizations of APN functions. The resulting notions are significantly different (and behave differently) from differential uniformity; they also behave differently from each other, despite the apparent similarity between their definitions. We study their satisfiability, their invariance under classical equivalence relations, and their monotonicity, and we characterize one of them by the Walsh transform; our results give more insight into the almost perfect nonlinearity notion itself.
We study the behavior of the multiplicative inverse function (which plays an important role in cryptography and in the study of finite fields) with respect to these two notions. To this aim, we find a rather simple expression for the sum of the values of this function over any affine subspace $A$ of $F_{2^n}$ not containing 0. The expression shows that such a sum never vanishes (a remarkable property of the inverse function, which may provide a tool for attacking block ciphers that use it in their S-boxes). We show that, for every $k$ not co-prime with $n$, the multiplicative inverse function sums to zero over at least one $k$-dimensional $F_2$-subspace of $F_{2^n}$. We study the behavior of the inverse function over direct sums of vector spaces and deduce that the property of the inverse function to sum to zero over at least one $k$-dimensional $F_2$-subspace of $F_{2^n}$ holds for $k$ if and only if it holds for $n-k$. We derive several results on the sums of values of the inverse function over vector spaces and address the cases of dimension at most 3 (equivalently, of co-dimension at most 3). We leave the other cases open.
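The never-vanishing subspace sum described above can be checked exhaustively in a small field. The sketch below verifies, in $F_{2^4}$ built over $x^4+x+1$, that the inverse function never sums to zero over any 2-dimensional affine subspace avoiding 0. This is a brute-force check, not the paper's closed-form expression.

```python
from itertools import combinations

def gf16_mul(a, b, poly=0b10011):
    """Multiplication in GF(2^4) = F_2[x]/(x^4 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= poly
        b >>= 1
    return r

# Brute-force multiplicative inverses of the 15 nonzero elements.
inv = {x: next(y for y in range(1, 16) if gf16_mul(x, y) == 1)
       for x in range(1, 16)}

# Every 2-dimensional affine subspace {a, a+u, a+v, a+u+v} avoiding 0:
# the field sum (XOR) of inverses over it should never vanish.
for u, v in combinations(range(1, 16), 2):
    for a in range(16):
        space = {a, a ^ u, a ^ v, a ^ u ^ v}
        if 0 not in space:
            total = 0
            for x in space:
                total ^= inv[x]
            assert total != 0
```

The same loop with linear subspaces (those containing 0, with the usual convention $0^{-1}=0$) finds the vanishing sums predicted for $k$ not co-prime with $n$.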

CirC: Compiler infrastructure for proof systems, software verification, and more

Cryptographic tools like proof systems, multi-party computation, and fully
homomorphic encryption are usually applied to computations expressed as
systems of arithmetic constraints. In practice, this means that these
applications rely on compilers from high-level programming languages
(like C) to such constraints. This compilation task is challenging, but
not entirely new: the software verification community has a rich literature
on compiling programs to logical constraints (like SAT or SMT). In this
work, we show that building shared compiler infrastructure for compiling
to constraint representations is possible, because these representations
share a common abstraction: stateless, non-uniform, non-deterministic
computations that we call existentially quantified circuits, or EQCs.
Moreover, we show that this shared infrastructure is useful, because
it allows compilers for proof systems to benefit from decades of work
on constraint compilation techniques for software verification.
To make our approach concrete we create CirC, an infrastructure for building
compilers to EQCs. CirC makes it easy to compile to new EQCs: we build support
for three, R1CS (used for proof systems), SMT (used for verification and
bug-finding), and ILP (used for optimization), in ≈2000 LOC. It's also easy
to extend CirC to support new source languages: we build a feature-complete
compiler for a cryptographic language in one week and ≈900 LOC, whereas the
reference compiler for the same language took years to write, comprises ≈24000
LOC, and produces worse-performing output than our compiler. Finally, CirC
enables novel applications that combine multiple EQCs. For example, we build
the first pipeline that (1) automatically identifies bugs in programs, then
(2) automatically constructs cryptographic proofs of the bugs' existence.

Semi-Regularity of Pairs of Boolean Polynomials

Semi-regular sequences over $\mathbb{F}_2$ are sequences of homogeneous elements of the algebra $B^{(n)}=\mathbb{F}_2[X_1,...,X_n]/(X_1^2,...,X_n^2)$ which have a given Hilbert series and can be thought of as having as few relations between them as possible. It is believed that most such systems are semi-regular, and this property has important consequences for understanding the complexity of Gröbner basis algorithms such as F4 and F5 for solving such systems. We investigate the case where the sequence has length two and give an almost complete description of the number of semi-regular sequences for each $n$.
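For reference, the "given Hilbert series" is usually the one from the standard definition in this literature (following Bardet, Faugère, and Salvy): a sequence of $m$ homogeneous elements of degrees $d_1,\dots,d_m$ is semi-regular when the Hilbert series of the quotient equals

$$\left[\frac{(1+t)^n}{\prod_{i=1}^{m}(1+t^{d_i})}\right]_+,$$

where $[\,\cdot\,]_+$ denotes truncation of the power series at its first non-positive coefficient, and $(1+t)^n$ is the Hilbert series of $B^{(n)}$ itself.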

Post-Quantum Hash-Based Signatures for Secure Boot

The potential development of large-scale quantum computers is raising concerns among IT and security research professionals due to their ability to solve (elliptic curve) discrete logarithm and integer factorization problems in polynomial time. All currently used public-key cryptographic algorithms would be deemed insecure in a post-quantum setting. In response, the United States National Institute of Standards and Technology has initiated a process to standardize quantum-resistant cryptographic algorithms, focusing primarily on their security guarantees. Additionally, the Internet Engineering Task Force has published two quantum-secure signature schemes and has been looking into adding quantum-resistant algorithms in protocols. In this work, we investigate two post-quantum, hash-based signature schemes published by the Internet Engineering Task Force and submitted to the National Institute of Standards and Technology for use in secure boot. We evaluate various parameter sets for the use-cases in question and show that post-quantum signatures would not have a material impact on image signing. We also study the hierarchical design of these signatures in different scenarios of hardware secure boot.

HERMES: Scalable, Secure, and Privacy-Enhancing Vehicle Access System

We propose HERMES, a scalable, secure, and privacy-enhancing system for users to share and access vehicles. HERMES securely outsources operations of vehicle access token generation to a set of untrusted servers. It builds on an earlier proposal, namely SePCAR [1], and extends the system design for improved efficiency and scalability. To cater to system and user needs for secure and private computations, HERMES efficiently utilizes and combines several cryptographic primitives with secure multiparty computation. It conceals secret keys of vehicles and transaction details from the servers, including vehicle booking details, access token information, and user and vehicle identities. It also provides user accountability in case of disputes. Besides, we provide a semantic security analysis and prove that HERMES meets its security and privacy requirements. Last but not least, we demonstrate that HERMES is efficient and, in contrast to SePCAR, scales to a large number of users and vehicles, making it practical for real-world deployments. We build our evaluations with two different multiparty computation protocols: HtMAC-MiMC and CBC-MAC-AES. Our results demonstrate that HERMES with HtMAC-MiMC requires only approximately 1.83 ms for generating an access token for a single-vehicle owner and approximately 11.9 ms for a large rental-company branch with over a thousand vehicles. It handles 546 and 84 access token generations per second, respectively. This makes HERMES 696 (with HtMAC-MiMC) and 42 (with CBC-MAC-AES) times faster than SePCAR for a single-vehicle owner access token generation. Furthermore, we show that HERMES is practical on the vehicle side, too, as access token operations performed on a prototype vehicle on-board unit take only approximately 62.087 ms.

A New Method for Designing Lightweight S-boxes with High Differential and Linear Branch Numbers, and Its Application

Bit permutations are efficient linear functions often used for lightweight cipher designs. However, they have low diffusion effects, compared to word-oriented binary and MDS matrices. Thus, the security of bit permutation-based ciphers is significantly affected by the differential and linear branch numbers (DBN and LBN) of their nonlinear functions. In this paper, we introduce a widely applicable method for constructing S-boxes with high DBN and LBN. Our method exploits constructions of S-boxes from smaller S-boxes, and it derives and proves the required conditions on the smaller S-boxes so that the DBN and LBN of the constructed S-boxes are at least 3. These conditions enable us to significantly reduce the search space required to create such S-boxes. In order to make cryptographically good and efficient S-boxes, we propose an unbalanced-Bridge structure that accepts one 3-bit and two 5-bit S-boxes, and produces 8-bit S-boxes. Using the proposed structure, we develop a variety of new lightweight S-boxes that provide not only both DBN and LBN of at least 3 but also efficient bitsliced implementations including at most 11 nonlinear bitwise operations. The new S-boxes are the first that exhibit these characteristics. Moreover, we propose a block cipher PIPO based on one of the new S-boxes, which supports a 64-bit plaintext and a 128 or 256-bit key. Our implementations demonstrate that PIPO outperforms existing block ciphers (for the same block and key lengths) in both side-channel protected and unprotected environments, on an 8-bit AVR. The security of PIPO has been scrutinized with regard to state-of-the-art cryptanalysis.
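The differential branch number has a direct brute-force definition, sketched below and applied to the well-known 4-bit PRESENT S-box as a worked example (not one of the paper's new 8-bit S-boxes, which are built from 3- and 5-bit components).

```python
def hw(x):
    """Hamming weight of an integer."""
    return bin(x).count("1")

def differential_branch_number(sbox):
    """DBN = min over distinct inputs x, y of wt(x^y) + wt(S(x)^S(y)).
    Bit-permutation-based ciphers benefit from this being at least 3,
    because the linear layer itself adds little diffusion."""
    n = len(sbox)
    return min(hw(x ^ y) + hw(sbox[x] ^ sbox[y])
               for x in range(n) for y in range(n) if x != y)

# PRESENT's 4-bit S-box, for illustration.
present = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
dbn = differential_branch_number(present)  # -> 3
```

The LBN is defined analogously over linear masks with nonzero correlation; for an 8-bit S-box the exhaustive DBN check above already costs 2^16 pairs, which is why the paper's conditions on the smaller component S-boxes matter for search efficiency.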

Remark on the Security of CKKS Scheme in Practice

Recently, Li and Micciancio (ePrint 2020/1533) proposed a passive attack on the CKKS approximate homomorphic encryption (HE) scheme, which allows an adversary to query decryption on valid ciphertexts. In this paper, we discuss for which applications such an attack is applicable, and introduce an extension of the HEaaN library. In addition, we investigate the mitigation strategies of other HE libraries that support the CKKS scheme, including HElib, PALISADE, Lattigo and SEAL.

Achieving State Machine Replication without Honest Players

Existing standards for player characterisation in tokenised state machine replication protocols depend on honest players who will always follow the protocol, regardless of possible token increases for deviating. Given the ever-increasing market capitalisation of these tokenised protocols, honesty is becoming more expensive and more unrealistic. As such, this outdated player characterisation must be removed to provide true guarantees of safety and liveness in a major stride towards universal trust in state machine replication protocols and a new scale of adoption. As all current state machine replication protocols are built on these legacy standards, it is imperative that a new player model is identified and utilised to reflect the true nature of players in tokenised protocols, now and into the future.
To this effect, we propose the ByRa player model for state machine replication protocols. In the ByRa model, players either attempt to maximise their tokenised rewards, or behave adversarially. This merges the fields of game theory and distributed systems, an intersection in which tokenised state machine replication protocols exist, but on which little formalisation has been carried out. In the ByRa model, we identify the properties of strong incentive compatibility in expectation and fairness that all protocols must satisfy in order to achieve state machine replication. We then provide Tenderstake, a protocol which provably satisfies these properties, and by doing so, achieves state machine replication in the ByRa model.

Efficient Verifiable Image Redacting based on zk-SNARKs

An image is a visual representation of a certain fact and can serve as proof of events. As the use of images increases, it becomes necessary to prove their authenticity while protecting the sensitive personal information they contain. In this paper, we propose a new efficient verifiable image redacting scheme based on zk-SNARKs, a commitment scheme, and a digital signature scheme. We adopt a commit-and-prove SNARK scheme which takes commitments as inputs, so that authenticity can be quickly verified outside the circuit. We also specify relations between the original and redacted images to guarantee redacting correctness. Our experimental results show that the proposed scheme is superior to existing works in terms of key size and proving time without sacrificing the other parameters. The security of the proposed scheme is proven formally.

An IND-CCA2 Attack Against the 1st- and 2nd-round Versions of NTS-KEM

This paper presents an IND-CCA2 attack against the 1st- and 2nd-round versions of NTS-KEM, i.e., the versions before the update in December 2019. Our attack works against the 1st- and 2nd-round specifications, with the number of decapsulation queries upper-bounded by n − k and an advantage lower-bounded by roughly 0.5(n − k)t/n^2, where n, k, and t stand for the code length, code dimension, and designed decoding capacity, for all three parameter sets of NTS-KEM. We found that the non-reference implementations are also vulnerable to our attack, despite containing bugs. The reference implementations contain bugs as well, but in a way that renders them invulnerable to our attack.

Multi-Party Replicated Secret Sharing over a Ring with Applications to Privacy-Preserving Machine Learning

Secure multi-party computation has seen significant performance advances and increasing use in recent years. Techniques based on secret sharing offer attractive performance and are a popular choice for privacy-preserving machine learning applications. Traditional techniques operate over a field, while designing equivalent techniques for a ring $\mathbb{Z}_{2^k}$ can boost performance. In this work, we develop a suite of multi-party protocols for a ring in the honest majority setting, starting from elementary operations and building to more complex ones, with the goal of supporting general-purpose computation. We demonstrate that our techniques are substantially faster than their field-based equivalents when instantiated with different numbers of parties and perform on par with or better than state-of-the-art techniques with designs customized for a fixed number of parties. We evaluate our techniques on machine learning applications and show that they offer attractive performance.
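The ring-based sharing underlying such protocols can be sketched in a few lines. The following shows 3-party replicated secret sharing over $\mathbb{Z}_{2^{64}}$ with local addition; multiplication requires interaction and is omitted, and the layout shown (party $i$ holding two of the three additive shares) is one common honest-majority convention, not necessarily the paper's exact one.

```python
import secrets

K = 64
MOD = 1 << K  # the ring Z_{2^64}

def share(x):
    """Replicated sharing: x = x1 + x2 + x3 (mod 2^k); party i holds
    the pair (x_i, x_{i+1}), so any two parties can reconstruct."""
    x1 = secrets.randbelow(MOD)
    x2 = secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    return [(x1, x2), (x2, x3), (x3, x1)]

def add_local(sh_a, sh_b):
    """Addition is purely local: each party adds its two components,
    with wrap-around modulo 2^k instead of field reduction."""
    return [((a1 + b1) % MOD, (a2 + b2) % MOD)
            for (a1, a2), (b1, b2) in zip(sh_a, sh_b)]

def reconstruct(shares):
    x1, x2 = shares[0]
    x3 = shares[1][1]
    return (x1 + x2 + x3) % MOD

a, b = 123456, 789012
total = reconstruct(add_local(share(a), share(b)))
```

Working modulo $2^k$ is what lets implementations replace modular reduction with native machine-word arithmetic, the performance boost the abstract refers to.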

How to Make Private Distributed Cardinality Estimation Practical, and Get Differential Privacy for Free

Secure computation is a promising privacy enhancing technology, but it is often not scalable enough for data intensive applications. On the other hand, the use of sketches has gained popularity in data mining, because sketches often give rise to highly efficient and scalable sub-linear algorithms. It is natural to ask: what if we put secure computation and sketches together? We investigated this question and the findings are interesting: we can get security, we can get scalability, and somewhat unexpectedly, we can also get differential privacy -- for free. Our study started from building a secure computation protocol based on the Flajolet-Martin (FM) sketches, for solving the Private Distributed Cardinality Estimation (PDCE) problem, which is a fundamental problem with applications ranging from crowd tracking to network monitoring. The state-of-the-art protocol for PDCE (Fenske et al., CCS '17) is computationally expensive and not scalable enough to cope with big data applications, which prompted us to design a better protocol. Our further analysis revealed that if the cardinality to be estimated is large enough, our protocol achieves $(\epsilon,\delta)$-differential privacy automatically, without requiring any additional manipulation of the output. This result signifies a new approach for achieving differential privacy that departs from the mainstream approach (i.e., adding noise to the result). Free differential privacy can be achieved for two reasons: secure computation minimizes information leakage, and the intrinsic estimation variance of the FM sketch makes the output of our protocol uncertain. We further show that the result is not just theoretical: the minimal cardinality for differential privacy to hold is only $10^2-10^4$ for typical parameters.
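A plain, non-secure-computation Flajolet-Martin sketch illustrates the estimator whose intrinsic variance the abstract credits for the free differential privacy. The sketch count and hashing scheme below are illustrative choices, not the paper's parameters.

```python
import hashlib

PHI = 0.77351  # Flajolet-Martin correction constant

def rho(h):
    """Position of the least-significant set bit (trailing zeros)."""
    return (h & -h).bit_length() - 1 if h else 32

def fm_estimate(items, num_sketches=64):
    """Flajolet-Martin cardinality estimate: each item sets one bit per
    sketch at a geometrically distributed position, and the index of the
    lowest unset bit (averaged over sketches) tracks log2 of the
    cardinality. The estimate is intentionally noisy -- that variance is
    the source of the 'free' differential privacy discussed above."""
    bitmaps = [0] * num_sketches
    for item in items:
        for s in range(num_sketches):
            h = int.from_bytes(
                hashlib.sha256(f"{s}:{item}".encode()).digest()[:4], "big")
            bitmaps[s] |= 1 << rho(h)
    # rho(~b) is the index of the lowest 0 bit of bitmap b.
    mean_r = sum(rho(~b) for b in bitmaps) / num_sketches
    return (2 ** mean_r) / PHI

est = fm_estimate(range(1000))  # roughly 1000, up to sketch variance
```

Duplicates hash to the same bit positions, so the sketch natively deduplicates across distributed inputs, which is what makes it a fit for the cardinality-union setting.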

(In)security of the Radio Interface in Sigfox

Sigfox is a popular communication and security protocol which allows setting up low-power wide-area networks for the Internet of Things. Currently, Sigfox networks operate in 72 countries and cover 1.3 billion people. In this paper, we make an extensive analysis of the security mechanisms used to protect the radio interface. We describe new attacks against data authenticity, which is the only mandatory security property in Sigfox. Namely, we describe how to replay frames and how to compute forgeries. In addition, we highlight a flaw in the (optional) data encryption procedure. Our attacks do not exploit implementation or hardware bugs, nor do they require physical access to any equipment (e.g., a legitimate end-device). They rely only on the peculiarities of the Sigfox security protocol. Our analysis is supported by practical experiments made in interaction with the Sigfox back-end network. These experiments validate our findings. Finally, we present efficient countermeasures which are likely straightforward to implement.

Analysing Mining Machine Shutdown Price

The security of PoW-based blockchains relies on the total amount of mining power and the ratio of mining power possessed by the honest miners. Loosely speaking, a system with higher mining power makes an attack more difficult. To incentivise miners joining the network and contributing their mining power, reward mechanisms are designed to provide economic profit to miners in exchange for their mining power.
We identify the shutdown price of mining machines as an overlooked factor affecting the total mining power in the network, and hence the security level of PoW-based blockchains. We formalise the concept of shutdown price, which represents the break-even point of operating a mining machine. Once the shutdown price of a type of machine is reached, mining coins with those machines can be more expensive than buying coins directly on the cryptocurrency market, so a rational operator would switch them off. This reduces the mining power in the network. However, because market volatility is high and the coin price may recover from the break-even point quickly, miners may delay shutting down or may choose a partial shutdown strategy to hedge risk. We define and analyse such shutdown tolerance by applying real option theory. We also discuss the key factors determining shutdown price and their impact on blockchain security.
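The break-even point described above can be sketched numerically. All figures below are hypothetical, chosen only for illustration: the shutdown price is the coin price at which the electricity cost of mining a coin equals the price of buying it on the market.

```python
def shutdown_price(hashrate_ths, power_kw, elec_price_kwh,
                   network_hashrate_ths, coins_per_day):
    """Coin price at which operating the machine breaks even on electricity.

    Expected coins mined per day are proportional to the machine's share
    of the network hashrate; daily electricity cost is power * 24h * price.
    """
    daily_cost = power_kw * 24 * elec_price_kwh
    daily_coins = coins_per_day * hashrate_ths / network_hashrate_ths
    return daily_cost / daily_coins

# Hypothetical parameters: a 100 TH/s machine drawing 3.25 kW at $0.05/kWh,
# on a 150M TH/s network that issues 900 coins per day in total.
p = shutdown_price(100, 3.25, 0.05, 150_000_000, 900)
```

Below this price, mining a coin costs more in electricity than the coin is worth, which is the point at which a rational operator switches the machine off (subject to the shutdown-tolerance considerations the paper analyses).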

Last updated: 2023-04-14

Halo 0.9: A Halo Protocol with Fully-Succinctness

Zero-knowledge proofs are a crucial tool for privacy preservation and stake proving. They allow the Prover to convince the Verifier of the validity of a statement without leaking any of the Prover's own knowledge. Many zero-knowledge protocols have been proposed by now, and one of the state-of-the-art works is Halo [1], by Bowe, Grigg and Hopwood. Even with the nested amortization technique used in Halo, the Verifier ultimately still has to perform an O(n) computation. As a result, Halo is not a fully succinct zero-knowledge scheme and is infeasible in some scenarios, such as Ethereum smart contract applications.
We propose Halo 0.9, an enhanced version of Halo that addresses the issue above. Specifically, we introduce the SRS of [2] as a substitute for the random vector in the inner product, thereby transforming the Pedersen vector commitment into a Kate polynomial commitment [2]. With the original Halo protocol otherwise unchanged, the Verifier's computation becomes logarithmic.

Last updated: 2021-07-23

Achieve Fully Decentralized End to End encryption meeting via Blockchain

Zoom Meeting is an enterprise online video conferencing solution with real-time messaging and content sharing. However, it lacks privacy protection, since the centralized Zoom servers are capable of monitoring users' messages. To solve this privacy problem, Zoom acquired Keybase in May 2020 so that Keybase's team could help it build end-to-end encrypted meetings while retaining Zoom's current scalability and high performance. Nonetheless, according to Zoom's latest whitepaper, even with the new design of E2E (end-to-end) encrypted meetings, the security threats cannot be erased completely, since the new design is not fully decentralized.
In this paper, we introduce a fully decentralized design for E2E encrypted meetings via blockchain technology. With this new design, the privacy of Zoom's E2E meetings can be further improved.

Hardware Security without Secure Hardware: How to Decrypt with a Password and a Server

Hardware security tokens have now been used for several decades to store cryptographic keys. When deployed, the security of the corresponding schemes fundamentally relies on the tamper-resistance of the tokens – a very strong assumption in practice. Moreover, even secure tokens, which are expensive and cumbersome, can often be subverted.
We introduce a new cryptographic primitive called Encryption schemes with Password-protected Assisted Decryption (EPAD schemes), in which a user’s decryption key is shared between a user device (or token) on which no assumption is made, and an online server. The user shares a human-memorizable password with the server. To decrypt a ciphertext, the user launches, from a public computer, a distributed protocol with the device and the server, authenticating herself to the server with her password (unknown to the device); in such a way that her secret key is never reconstructed during the interaction. We propose a strong security model which guarantees that (1) for an efficient adversary to infer any information about a user’s plaintexts, it must know her password and have corrupted her device (secrecy is guaranteed if only one of the two conditions is fulfilled), (2) the device and the server are unable to infer any information about the ciphertexts they help to decrypt (even though they could together reconstruct the secret key), and (3) the user is able to verify that device and server both performed the expected computations. These EPAD schemes are in the password-only model, meaning that the user is not required to remember a trusted public key, and her password remains safe even if she is led to interact with a wrong server and a malicious device.
We then give a practical pairing-based EPAD scheme. Our construction is provably secure under standard computational assumptions, using non-interactive proof systems which can be efficiently instantiated in the standard security model, i.e., without relying on the random oracle heuristic.

Secret Key Agreement with Physical Unclonable Functions: An Optimality Summary

We address security and privacy problems for digital devices and biometrics from an information-theoretic optimality perspective, where a secret key is generated for authentication, identification, message encryption/decryption, or secure computations. A physical unclonable function (PUF) is a promising solution for local security in digital devices, and this review gives the most relevant summary for information theorists, coding theorists, and members of the signal processing community who are interested in optimal PUF constructions. Low-complexity signal processing methods, such as transform coding, that are developed to make the information-theoretic analysis tractable are discussed. The optimal trade-offs between the secret-key, privacy-leakage, and storage rates for multiple PUF measurements are given. Proposed optimal code constructions that jointly design the vector quantizer and error-correction code parameters are listed. These constructions include modern and algebraic codes such as polar codes and convolutional codes, both of which can achieve small block-error probabilities at short block lengths, corresponding to a small number of PUF circuits. Open problems in the PUF literature from signal processing, information theory, coding theory, and hardware complexity perspectives, and their combinations, are listed to stimulate further advancements in research on local privacy and security.

Optimal Communication Complexity of Authenticated Byzantine Agreement

Byzantine Agreement (BA) is one of the most fundamental problems in distributed computing, and its communication complexity is an important efficiency metric.
It is well known that quadratic communication is necessary for BA in the worst case due to a lower bound by Dolev and Reischuk.
This lower bound has been shown to be tight for the unauthenticated setting with $f < n/3$ by Berman et al., but a considerable gap remains for the authenticated setting with $n/3 \le f < n/2$.
This paper provides two results towards closing this gap, in the form of two protocols.
Both protocols have quadratic communication complexity but offer different trade-offs in resilience and assumptions.
The first protocol achieves the optimal resilience of $f < n/2$ but requires a trusted setup for threshold signature.
The second protocol achieves near optimal resilience $f \le (1/2 - \varepsilon)n$ in the standard PKI model.

Compact Certificates of Collective Knowledge

We introduce compact certificate schemes, which allow any party to take
a large number of signatures on a message $M$, by many signers of different
weights, and compress them to a much shorter certificate. This certificate
convinces the verifiers that signers with sufficient total weight signed
$M$, even though the verifier will not see---let alone verify---all of the
signatures. Thus, for example, a compact certificate can be used to prove
that parties who jointly have a sufficient total account balance have
attested to a given block in a blockchain.
After defining compact certificates, we demonstrate an efficient compact
certificate scheme. We then show how to implement such a scheme in
a decentralized setting over an unreliable network and in the presence
of adversarial parties who wish to disrupt certificate creation. Our
evaluation shows that compact certificates are 50-280$\times$ smaller
and 300-4000$\times$ cheaper to verify than a natural baseline approach.

Secure Decentralized Access Control Policy for Data Sharing in Smart Grid

The smart grid improves the security and efficiency of the power system and balances supply and demand through intelligent management, which enhances the stability and reliability of the power grid. The key to achieving this is the sharing of real-time data and consumption data under fine-grained policies. However, such sharing can leak users' privacy and cause data owners to lose control over their data. Existing solutions cannot give the best trade-off among privacy protection, control over the shared data, and confidentiality. In addition, they do not solve the problems of large computation overhead and dynamic management, such as user revocation. This paper addresses these problems and proposes a decentralized attribute-based data sharing scheme. The proposed scheme ensures the secure sharing of data while removing the central authority and hiding users' identity information. It uses attribute-based signcryption (ABSC) to achieve data confidentiality and authentication. In this model, attribute-based encryption enforces the access policies for users and keeps the data confidential, while the attribute-based signature authenticates the integrity of the primary ciphertext. This is more efficient than "encrypt then sign" or "sign then encrypt". In addition, the proposed scheme enables user revocation and public verifiability. The security and the unforgeability against adaptive chosen-message attack are demonstrated in the random oracle model.

An efficient and provably secure authenticated key agreement scheme for mobile edge computing

Though Mobile Cloud Computing (MCC) and Mobile Edge Computing (MEC) technologies have brought more convenience to mobile services over the past few years, security concerns such as mutual authentication, user anonymity, and user untraceability remain unresolved. In recent years, many efforts have been made to design security protocols in the context of MCC and MEC, but most of them are prone to security threats. In this paper, we analyze Jia et al.'s scheme, one of the latest authentication protocols for the MEC environment, and show that it is vulnerable to user impersonation and ephemeral secret leakage attacks. Further, we demonstrate that the aforementioned attacks similarly apply to Li et al.'s scheme, which was recently derived from Jia et al.'s protocol. We then propose a provably secure authenticated key agreement protocol based on Jia et al.'s scheme that not only withstands these security weaknesses, but also offers low computational and communication costs compared to other related schemes. As a formal security proof, we simulate our scheme with the widely used AVISPA tool. Moreover, we show the scalability and practicality of our scheme in an MEC environment through NS-3 simulation.

Achieving privacy and accountability in traceable digital currency

Several Central Bank Digital Currency (CBDC) projects are considering the development of a digital currency that is managed on a permissioned blockchain, i.e. only authorized entities are involved in transactions verification.
In this paper, we explore the best possible balance between privacy and accountability in such a traceable digital currency.
Indeed, in case of suspicion of fraud or money laundering activity, it is important to enable the retrieval of the identity of a payer or a payee involved in a specific transaction.
Based on a preliminary analysis of achievable anonymity properties in a transferable, divisible and traceable digital currency system, we first present a digital currency framework along with the corresponding security and privacy model. Then, we propose a pairing-free traceable digital currency system that reconciles users' privacy protection and accountability. Our system is proven secure in the random oracle model.

Prime Proof Protocol

Prime integers form the basis for finite field and elliptic curve cryptography, as well as many other applications. Provable prime generation guarantees primality and is more efficient than probabilistic generation, and provides components for an efficient primality proof. This paper details a protocol which takes in the proof components from the generation process, proves primality, and as an added benefit, supplies the user with a subgroup generator.
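The abstract does not spell out the proof components, but a classical criterion underlying provable prime generation is Pocklington's theorem. The sketch below shows a verifier for a single-factor Pocklington certificate; the certificate format (q, a) is illustrative, not necessarily the one used in the paper.

```python
from math import gcd

def pocklington_check(N: int, q: int, a: int) -> bool:
    """Verify a Pocklington certificate (q, a) for the primality of N.

    Precondition: q is a known prime. If q divides N - 1, q > sqrt(N),
    a^(N-1) = 1 (mod N), and gcd(a^((N-1)//q) - 1, N) = 1, then N is prime.
    """
    if (N - 1) % q != 0 or q * q <= N:
        return False
    if pow(a, N - 1, N) != 1:
        return False
    return gcd(pow(a, (N - 1) // q, N) - 1, N) == 1

# Example: 23 - 1 = 2 * 11, the prime 11 exceeds sqrt(23), and a = 5 is a witness.
ok = pocklington_check(23, 11, 5)
```

Generating a large prime this way builds a chain of such certificates (each q proved prime by its own certificate), which is the kind of proof component a verification protocol can consume.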

Verifiable Timed Signatures Made Practical

A verifiable timed signature (VTS) scheme allows one to time-lock a signature on a known message for a given amount of time $T$ such that after performing a sequential computation for time $T$ anyone can extract the signature from the time-lock. Verifiability ensures that anyone can publicly check if a time-lock contains a valid signature on the message without solving it first, and that the signature can be obtained by solving the same for time $T$.
This work formalizes VTS, presents efficient constructions compatible with BLS, Schnorr, and ECDSA signatures, and experimentally demonstrates that these constructions can be employed in practice. On a technical level, we design an efficient cut-and-choose protocol based on homomorphic time-lock puzzles to prove the validity of a signature encapsulated in a time-lock puzzle. We also present a new efficient range proof protocol that significantly improves upon existing proposals in terms of proof size, and is also of independent interest.
While VTS is a versatile tool with numerous existing applications, we demonstrate VTS's applicability to resolve three novel challenging issues in the space of cryptocurrencies. Specifically, we show how VTS is the cryptographic cornerstone to construct: (i) Payment channel networks with improved on-chain unlinkability of users involved in a transaction, (ii) multi-party signing of transactions for cryptocurrencies without any on-chain notion of time and (iii) cryptocurrency-enabled fair multi-party computation protocol.
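As background on the time-lock component, the classic Rivest-Shamir-Wagner puzzle (a simpler relative of the homomorphic time-lock puzzles used in this work) captures the asymmetry VTS relies on: opening requires T sequential squarings modulo n, while the puzzle creator, knowing the factorization of n, takes a shortcut. The parameters below are toy-sized.

```python
from math import gcd

def timelock_open(x: int, T: int, n: int) -> int:
    """Open the puzzle the slow way: T sequential modular squarings."""
    for _ in range(T):
        x = pow(x, 2, n)
    return x

def timelock_create(x: int, T: int, p: int, q: int) -> int:
    """Shortcut for whoever knows n = p*q: reduce the exponent 2^T
    modulo lambda(n) = lcm(p-1, q-1) first (valid when gcd(x, n) = 1)."""
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
    return pow(x, pow(2, T, lam), p * q)

# Toy parameters; a real puzzle would use a 2048-bit RSA modulus and a huge T.
slow = timelock_open(7, 1000, 101 * 103)
fast = timelock_create(7, 1000, 101, 103)
```

The squarings cannot be parallelized, so the opening time scales with T regardless of how many machines the solver has, which is what makes the "valid signature after time T" guarantee meaningful.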

A complete study of two classes of Boolean functions for homomorphic-friendly stream ciphers

In this paper, we completely study two classes of Boolean functions that are suited for hybrid symmetric-FHE encryption with stream ciphers like FiLIP. These functions (which we call homomorphic-friendly) need to satisfy contradictory constraints: 1) allow a fast homomorphic evaluation, and therefore necessarily have a very elementary structure; 2) be secure, that is, allow the cipher to resist all classical attacks (and even more, since guess-and-determine attacks are facilitated in such a framework). Because of constraint 2, these functions need a large number of variables (often more than 1000), which makes constraint 1 even more difficult to satisfy (hence the interest of these two classes). We determine exactly all the main cryptographic parameters (algebraic degree, resiliency order, nonlinearity, algebraic immunity) for all functions in these two classes, and we give close bounds for the others (fast algebraic immunity, dimension of the space of annihilators of minimal degree). This is the first time this has been done for all functions in classes of sufficient cryptographic interest.

Cryptonite: A Framework for Flexible Time-Series Secure Aggregation with Online Fault Tolerance

Private stream aggregation (PSA) allows an untrusted data aggregator to compute statistics over multiple participants' data while ensuring the data remains private. Existing works rely on a trusted third party to provide fault tolerance, which requires interactive recovery; in the real world this may be neither practical nor secure. We develop a new formal framework for PSA that accounts for user faults and supports non-interactive recovery, while still supporting strong individual privacy guarantees. We first define a new level of security in the presence of faults and malicious adversaries, because the existing definitions account for neither faults nor the security implications of recovery. We then develop the first protocol that provably reaches this level of security, i.e., individual inputs remain private even after the aggregator's recovery, and that reaches new levels of scalability and communication efficiency over existing work seeking to support fault tolerance. The techniques we develop are general and can be used to augment any PSA scheme to support non-interactive fault recovery.

Modified Cache Template Attack on AES

CPU caches are a powerful source of information leakage. To develop practical cache-based attacks, there is an increasing need to automate the process of finding exploitable cache-based side channels in computer systems. The cache template attack is a generic technique that utilizes the Flush+Reload attack to automatically exploit cache vulnerabilities of Intel platforms. A cache template attack on a T-table-based AES implementation consists of two phases: a profiling phase and a key exploitation phase. Profiling is a preprocessing phase that monitors dependencies between the secret key and the behavior of the cache memory; in addition, the addresses of the T-tables can be obtained automatically. In the key exploitation phase, the most significant bits (MSBs) of the secret key bytes are retrieved by monitoring exploitable addresses. In this paper, we propose a simple yet effective search technique that accelerates the profiling phase by a factor of up to 64. To verify the theoretical model of our technique, we implement the described attack on AES. The experimental results show that the profiling phase of the original cache template attack runs in around 10 minutes, while our method reduces this to around 9 seconds.

On Exploiting Message Leakage in (few) NIST PQC Candidates for Practical Message Recovery and Key Recovery Attacks

With the NIST post-quantum cryptography competition in its final round, the importance of implementation security is highlighted in the latest call. In this regard, we report practical side-channel-assisted message recovery attacks on embedded implementations of several post-quantum public key encryption (PKE) schemes and key encapsulation mechanisms (KEM) based on the Learning With Errors (LWE) and Learning With Rounding (LWR) problems, including three finalist and three semi-finalist candidates of the NIST standardization process. The proposed attacks target the storage of the decrypted message in memory, a basic operation found in all libraries and typically unavoidable in any embedded implementation. We also identify interesting ciphertext malleability properties of LWE/LWR-based PKEs and exploit them to generalise the proposed attacks to different implementation choices, as well as to implementations protected with side-channel countermeasures such as shuffling and masking. All proposed attacks are validated on an ARM Cortex-M4 microcontroller using electromagnetic side-channel measurements, targeting optimized open-source implementations of PQC schemes.

Double-Odd Elliptic Curves

This article explores the use of elliptic curves of order 2r with r odd, i.e. with order equal to 2 modulo 4, which we call double-odd elliptic curves. This is a very large class, comprising about one quarter of all curves over a given field. On such curves, we manage to define a prime order group with appropriate characteristics for building cryptographic protocols:
- Element encoding is canonical, and verified upon decoding. For a 2n-bit group (with n-bit security), encoding size is 2n + 1 bits, i.e. as good as compressed points on classic prime order curves.
- Unified and complete formulas allow secure and efficient computations in the group.
- Efficiency is on par with twisted Edwards curves, and in some respects slightly better; e.g. half of double-odd curves have formulas for computing point doublings with only six multiplications (down to 1M+5S per doubling on some curves).
We describe here various formulas and discuss implementations. We also define two specific parameter choices for curves with 128-bit security, called do255e and do255s. Our own implementations on 64-bit x86 (Coffee Lake) and low-end ARM Cortex M0+ achieve generic point multiplication in 76696 and 2.19 million cycles, respectively, with curve do255e.

Efficient Quantum Public-Key Encryption From Learning With Errors

Our main result is a quantum public-key encryption scheme based on the Extrapolated Dihedral Coset Problem (EDCP), which is equivalent, under quantum polynomial-time reductions, to the Learning With Errors (LWE) problem. For a limited number of public keys (roughly linear in the security parameter), the proposed scheme is information-theoretically secure. For a polynomial number of public keys, breaking the scheme is as hard as solving the LWE problem. The public keys in our scheme are quantum states of size Õ(n) qubits. The key generation and decryption algorithms require Õ(n) qubit operations, while the encryption algorithm takes O(1) qubit operations.

Honest Majority MPC with Abort with Minimal Online Communication

In this work we focus on improving the communication complexity of the \emph{online phase} of honest majority MPC protocols.
To this end, we present a general and simple method to compile arbitrary secret-sharing-based passively secure protocols defined over an arbitrary ring that are secure up to additive attacks in a malicious setting, to actively secure protocols with abort.
The resulting protocol has a total communication complexity in the online phase of $1.5(n-1)$ shares, which amounts to $1.5$ shares per party asymptotically.
An important aspect of our techniques is that they can be seen as a generalization of ideas that have been used in other works in a rather \emph{ad-hoc} manner for different secret-sharing protocols.
Thus, our work serves as a way of unifying key ideas in recent honest majority protocols, leading to a better understanding of the core techniques and the similarities among these works.
Furthermore, for $n=3$, when instantiated with replicated secret-sharing-based protocols (Araki et al.~CCS 2016), the communication complexity in the online phase amounts to only $1$ ring element per party, matching the communication complexity of the BLAZE protocol (Patra \& Suresh, NDSS 2020), while having a much simpler design.

Limits on the Efficiency of (Ring) LWE based Non-Interactive Key Exchange

LWE-based key-exchange protocols lie at the heart of post-quantum public-key cryptography. However, all existing protocols either lack the non-interactive nature of Diffie-Hellman key exchange or lack a polynomial LWE modulus, resulting in unwanted efficiency overhead.
We study the possibility of designing non-interactive LWE-based protocols with polynomial LWE-modulus. To this end,
• We identify and formalize simple non-interactive and polynomial LWE-modulus variants of existing protocols, where Alice and Bob simultaneously exchange one or more (ring) LWE samples with polynomial LWE-modulus and then run individual key reconciliation functions to obtain the shared key.
• We point out central barriers and show that such non-interactive key-exchange protocols are impossible if:
1) the reconciliation functions first compute the inner product of the received LWE sample with their private LWE secret. This impossibility is information theoretic.
2) one of the reconciliation functions does not depend on the error of the transmitted LWE sample. This impossibility assumes hardness of LWE.
• We give further evidence that progress in either direction, whether giving an LWE-based NIKE protocol or proving the impossibility of one, will lead to progress on other well-studied questions in cryptography.
Overall, our results show possibilities and challenges in designing simple (ring) LWE-based non-interactive key exchange protocols.

DNFA: Differential No-Fault Analysis of Bit Permutation Based Ciphers Assisted by Side-Channel

Physical security of NIST lightweight cryptography competition candidates is gaining importance as the standardization process progresses. Side-channel attacks (SCA) are a well-researched topic within the physical security of cryptographic implementations. It was shown that collisions in the intermediate values can be captured by side-channel measurements to reduce the complexity of the key retrieval to trivial numbers.
In this paper, we target a specific bit permutation vulnerability in the block cipher GIFT that allows the attacker to mount a key recovery attack. We present a novel SCA methodology called DCSCA (Differential Ciphertext SCA), which follows the principles of differential fault analysis but, instead of faults, utilizes SCA and the statistical distribution of intermediate values. We simulate the attack on a publicly available bitsliced implementation of GIFT, showing its practicality. We further show the application of the attack to GIFT-based AEAD schemes (GIFT-COFB, ESTATE, HYENA, and SUNDAE-GIFT) proposed for the NIST LWC competition. DCSCA can recover the master key with $2^{13.39}$ AEAD sessions, assuming 32 encryptions per session.

A Novel Asymmetric Searchable Encryption Scheme with Granting search capability

Nowadays, information is the main asset of every organization, and the amount of generated data is increasing exponentially. This raises various capacity issues and requirements, e.g. storing and maintaining the generated data, searching among it, and analyzing it. Cloud computing is one of the common technologies used to meet these requirements, and its popularity is growing rapidly, as it can handle large amounts of data in a cost-efficient and highly available (anytime and anywhere) manner. However, this technology still faces extensive security challenges (e.g. data confidentiality). Cryptography is one of the main methods used to preserve the privacy of people and organizations. Encryption can keep data private, but after traditional encryption is applied, it is no longer possible to search among the encrypted messages to retrieve information. Searchable encryption enables searching over encrypted data and overcomes this shortcoming. However, much more research is required to enable searching over whole data sets while achieving a proper level of security. In this paper, a technique for searching by a third party is introduced. This technique can be useful when a number of nodes interact and some of them may upload malicious documents. Furthermore, document categorization is another application of the proposed scheme.

Threshold Password-Hardened Encryption Services

Password-hardened encryption (PHE) was introduced by Lai et al. at USENIX 2018 and immediately productized by VirgilSecurity. PHE is a password-based key derivation protocol that involves an oblivious external crypto service for key derivation. PHE protects against offline brute-force attacks, even when the attacker is given the entire database. Furthermore, the crypto service learns neither the derived key nor the password. PHE supports key rotation, meaning that both the server and the crypto service can update their keys without involving the user.
While PHE significantly strengthens data security, it introduces a single point of failure because key-derivation always requires access to the crypto service. In this work, we address this issue and simultaneously increase security by introducing threshold password-hardened encryption. Our formalization of this primitive revealed shortcomings of the original PHE definition that we also address in this work. Following the spirit of prior works, we give a simple and efficient construction using lightweight tools only. We also implement our construction and evaluate its efficiency. Our experiments confirm the practical efficiency of our scheme and show that it is more efficient than common memory-hard functions, such as scrypt. From a practical perspective this means that threshold PHE can be used as an alternative to scrypt for password protection and key-derivation, offering better security in terms of offline brute force attacks.

Multi-Client Oblivious RAM with Poly-Logarithmic Communication

Oblivious RAM enables oblivious access to memory in the single-client setting, which may not be the best fit in the network setting. Multi-client oblivious RAM (MCORAM) considers a collaborative but untrusted environment, where a database owner selectively grants read access and write access to different entries of a confidential database to multiple clients. Their access pattern must remain oblivious not only to the server but also to fellow clients. This upgrade rules out many techniques for constructing ORAM, forcing us to pursue new techniques.
MCORAM not only provides an alternative solution to private anonymous data access (Eurocrypt 2019) but also serves as a promising building block for equipping oblivious file systems with access control and extending other advanced cryptosystems to the multi-client setting.
Despite being a powerful object, the current state-of-the-art is unsatisfactory: The only existing scheme requires $O(\sqrt n)$ communication and client computation for a database of size $n$. Whether it is possible to reduce these complexities to $\mathsf{polylog}(n)$, thereby matching the upper bounds for ORAM, is an open problem, i.e., can we enjoy access control and client-obliviousness under the same bounds?
Our first result answers the above question affirmatively by giving a construction from fully homomorphic encryption (FHE). Our main technical innovation is a new technique for cross-key trial evaluation of ciphertexts.
We also consider the same question in the setting with $N$ non-colluding servers, out of which at most $t$ of them can be corrupt. We build multi-server MCORAM from distributed point functions (DPF), and propose new constructions of DPF via a virtualization technique with bootstrapping, assuming the existence of homomorphic secret sharing and pseudorandom generators in NC0, which are not known to imply FHE.

Foundations of Ring Sampling

A ring signature scheme allows the signer to sign on behalf of an ad hoc set of users, called a ring. The verifier can be convinced that a ring member signs, but cannot point to the exact signer. Ring signatures have become increasingly important today with their deployment in anonymous cryptocurrencies. Conventionally, it is implicitly assumed that all ring members are equally likely to be the signer. This assumption is generally false in reality, leading to various practical and devastating deanonymizing attacks in Monero, one of the largest anonymous cryptocurrencies. These attacks highlight the unsatisfactory state of affairs: how a ring should be chosen is poorly understood.
We propose an analytical model of ring samplers towards a deeper understanding of them through systematic studies. Our model helps to describe how anonymous a ring sampler is with respect to a given signer distribution as an information-theoretic measure. We show that this measure is robust, in the sense that it only varies slightly when the signer distribution varies slightly. We then analyze three natural samplers -- uniform, mimicking, and partitioning -- under our model with respect to a family of signer distributions modeled after empirical Bitcoin data. We hope that our work paves the way towards researching ring samplers from a theoretical point of view.

High-Precision Bootstrapping for Approximate Homomorphic Encryption by Error Variance Minimization

The Cheon-Kim-Kim-Song (CKKS) scheme (Asiacrypt'17) is one of the most promising homomorphic encryption (HE) schemes as it enables privacy-preserving computing over real (or complex) numbers. It is known that bootstrapping is the most challenging part of the CKKS scheme. Further, homomorphic evaluation of modular reduction is the core of the CKKS bootstrapping, but as modular reduction is not represented by the addition and multiplication of complex numbers, approximate polynomials for modular reduction should be used. The best-known techniques (Eurocrypt'21) use a polynomial approximation for trigonometric functions and their composition. However, all the previous methods are based on an indirect approximation, and thus require substantial multiplicative depth to achieve high accuracy. This paper proposes a direct polynomial approximation of modular reduction for CKKS bootstrapping, which is optimal in error variance and depth. Further, we propose an efficient algorithm, namely the lazy baby-step giant-step (BSGS) algorithm, to homomorphically evaluate the approximate polynomial, utilizing the lazy relinearization/rescaling technique. The lazy-BSGS reduces the computational complexity by half compared to the ordinary BSGS algorithm. The performance improvement for the CKKS scheme by the proposed algorithm is verified by implementation over HE libraries. The implementation results show that the proposed method has a multiplicative depth of 10 for modular reduction to achieve state-of-the-art accuracy, while the previous methods have depths of 11 to 12. Moreover, we achieve higher accuracies within a small multiplicative depth, for example, 93-bit accuracy within multiplicative depth 11.
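
The BSGS evaluation above rests on the standard baby-step giant-step split of a polynomial; a plaintext sketch of that split (ignoring the homomorphic lazy relinearization/rescaling part, which is the paper's contribution) might look like:

```python
# Baby-step giant-step (BSGS) polynomial evaluation, sketched on plaintext
# values. In CKKS bootstrapping the same split reduces the number of expensive
# homomorphic multiplications; here only the arithmetic structure is shown.

def bsgs_eval(coeffs, x, bs):
    """Evaluate sum_i coeffs[i] * x**i by writing the polynomial as
    sum_j (sum_i c[j*bs+i] * x**i) * (x**bs)**j:
    baby steps inside each block, giant steps across blocks."""
    # baby steps: the powers x^0 .. x^(bs-1)
    baby = [1]
    for _ in range(bs - 1):
        baby.append(baby[-1] * x)
    giant = baby[-1] * x  # x^bs
    # Horner's rule over the giant-step variable x^bs
    result = 0
    for j in reversed(range(0, len(coeffs), bs)):
        block = sum(c * b for c, b in zip(coeffs[j:j + bs], baby))
        result = result * giant + block
    return result
```

Compared with plain Horner evaluation, the split lets the baby-step powers be reused across all blocks, which is where the multiplication savings come from.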

CCA-Secure (Puncturable) KEMs from Encryption With Non-Negligible Decryption Errors

Public-key encryption (PKE) schemes or key-encapsulation mechanisms (KEMs) are fundamental cryptographic building blocks to realize secure communication protocols. There are several known transformations that generically turn weakly secure schemes into strongly (i.e., IND-CCA) secure ones. While most of these transformations require the weakly secure scheme to provide perfect correctness, Hofheinz, Hövelmanns, and Kiltz (HHK) (TCC 2017) have recently shown that variants of the Fujisaki-Okamoto (FO) transform can work with schemes that have negligible correctness error in the (quantum) random oracle model (QROM). Many recent schemes in the NIST post-quantum competition (PQC) use variants of these transformations. Some of their CPA-secure versions even have a non-negligible correctness error and so the techniques of HHK cannot be applied.
In this work, we study the setting of generically transforming PKE schemes with potentially large, i.e., non-negligible, correctness error to ones having negligible correctness error. While there have been previous treatments in an asymptotic setting by Dwork, Naor, and Reingold (EUROCRYPT 2004), our goal is to come up with practically efficient compilers in a concrete setting and apply them in two different contexts. Firstly, we show how to generically transform weakly secure deterministic or randomized PKEs into CCA-secure KEMs in the (Q)ROM using variants of HHK. This applies to essentially all candidates to the NIST PQC based on lattices and codes with non-negligible error for which we provide an extensive analysis. We thereby show that it improves some of the code-based candidates. Secondly, we study puncturable KEMs in terms of the Bloom Filter KEM (BFKEM) proposed by Derler et al. (EUROCRYPT 2018) which inherently have a non-negligible correctness error. BFKEMs are a building block to construct fully forward-secret zero round-trip time (0-RTT) key-exchange protocols. In particular, we show the first approach towards post-quantum secure BFKEMs generically from lattices and codes by applying our techniques to identity-based encryption (IBE) schemes with (non-)negligible correctness error.

Two-server Distributed ORAM with Sublinear Computation and Constant Rounds

Distributed ORAM (DORAM) is a multi-server variant of Oblivious RAM. Originally proposed to lower bandwidth, DORAM has recently been of great interest due to its applicability to secure computation in the RAM model, where circuit complexity and rounds of communication are equally important metrics of efficiency.
In this work, we construct the first DORAM schemes in the 2-server, semi-honest setting that simultaneously achieve sublinear server computation and constant rounds of communication.
We provide two constant-round constructions, one based on square root ORAM that has $O(\sqrt{N}\log N)$ local computation and another based on secure computation of a doubly efficient PIR that achieves local computation of $O(N^\epsilon)$ for any $\epsilon > 0$ but that allows the servers to distinguish between reads and writes.
As a building block in the latter construction, we provide secure computation protocols for evaluation and interpolation of multivariate polynomials based on the Fast Fourier Transform, which may be of independent interest.
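
The evaluation/interpolation primitive mentioned above can be sketched in the clear as a number-theoretic transform; the paper's contribution is performing this step under secure computation, while the prime and generator below are standard illustrative NTT choices:

```python
# Plaintext sketch of FFT-style multipoint evaluation/interpolation over a
# prime field (a number-theoretic transform). Evaluation maps coefficients to
# values at the n-th roots of unity; interpolation inverts that map.

P = 998244353          # NTT-friendly prime: P - 1 = 2^23 * 119
G = 3                  # standard primitive root modulo P

def ntt(a, invert=False):
    """Transform coefficient list `a` (length a power of 2) to evaluations at
    the len(a)-th roots of unity mod P, or back when invert=True (unscaled)."""
    n = len(a)
    if n == 1:
        return a[:]
    w = pow(G, (P - 1) // n, P)
    if invert:
        w = pow(w, P - 2, P)
    even = ntt(a[0::2], invert)
    odd = ntt(a[1::2], invert)
    out = [0] * n
    wk = 1
    for k in range(n // 2):
        t = wk * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
        wk = wk * w % P
    return out

def interpolate(values):
    """Recover the coefficient list from evaluations (inverse NTT with 1/n scaling)."""
    n = len(values)
    inv_n = pow(n, P - 2, P)
    return [c * inv_n % P for c in ntt(values, invert=True)]
```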

Privacy-Preserving Epidemiological Modeling on Mobile Graphs

The COVID-19 pandemic led governments worldwide to adopt various containment measures to control its spread, such as contact tracing, social distancing regulations, and curfews. Epidemiological simulations are commonly used to assess the impact of those policies before they are implemented. Unfortunately, the scarcity of relevant empirical data, specifically detailed social contact graphs, hampers their predictive accuracy. As this data is inherently privacy-critical, a method is urgently needed to perform powerful epidemiological simulations on real-world contact graphs without disclosing any sensitive information.
In this work, we present RIPPLE, a privacy-preserving epidemiological modeling framework that enables standard models of infectious disease to run on a population’s real contact graph while keeping all contact information locally on the participants’ devices. As a building block of independent interest, we present PIR-SUM, a novel extension to private information retrieval for the secure download of element sums from a database. Our protocols are supported by a proof-of-concept implementation, demonstrating that a 2-week simulation over half a million participants completes in 7 minutes, with each participant communicating less than 50 KB.
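
PIR-SUM itself is not specified in the abstract, but the classic two-server XOR-based PIR it extends can be sketched in a few lines (illustrative only; the function names are ours):

```python
import secrets

# Classic two-server XOR-based PIR: the client sends each server a random-
# looking subset of indices; the two subsets differ exactly in the queried
# index i, so XOR-ing the servers' answers recovers entry i while neither
# server alone learns i. (PIR-SUM, the paper's extension, retrieves a *sum*
# of database entries instead of a single one.)

def make_queries(n, i):
    q0 = [secrets.randbits(1) for _ in range(n)]   # random subset of [n]
    q1 = q0[:]
    q1[i] ^= 1                                     # flip only position i
    return q0, q1

def answer(db, q):
    out = 0
    for bit, entry in zip(q, db):
        if bit:
            out ^= entry
    return out

def reconstruct(a0, a1):
    return a0 ^ a1
```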

A Tutorial on the Implementation of Block Ciphers: Software and Hardware Applications

In this article, we discuss basic strategies that can be used to implement block ciphers in both software and hardware environments. As models for discussion, we use substitution-permutation networks which form the basis for many practical block cipher structures. For software implementation, we discuss approaches such as table lookups and bit-slicing, while for hardware implementation, we examine a broad range of architectures from high speed structures like pipelining, to compact structures based on serialization. To illustrate different implementation concepts, we present example data associated with specific methods and discuss sample designs that can be employed to realize different implementation strategies. We expect that the article will be of particular interest to researchers, scientists, and engineers that are new to the field of cryptographic implementation.

PAS-TA-U: PASsword-based Threshold Authentication with PASsword Update

A single-sign-on (SSO) is an authentication system that allows a user to log in with a single identity and password to any of several related, yet independent, server applications. SSO solutions eliminate the need for users to repeatedly prove their identities to different applications and hold different credentials for each application. Token-based authentication is commonly used to enable an SSO experience on the web, and on enterprise networks. A large body of work considers distributed token generation which can protect the long-term keys against a subset of breached servers. A recent work (CCS'18) introduced the notion of Password-based Threshold Authentication (PbTA) with the goal of making password-based token generation for SSO secure against server breaches that could compromise both long-term keys and user credentials. They also introduced a generic framework called PASTA that can instantiate a PbTA system.
The existing SSO systems built on distributed token generation techniques, including the PASTA framework, do not admit password-update functionality. In this work, we address this issue by proposing a password-update functionality into the PASTA framework. We call the modified framework PAS-TA-U.
As a concrete application, we instantiate PAS-TA-U to implement in Python a distributed SSH key manager for enterprise networks (ESKM) that also admits a password-update functionality for its clients. Our experiments show that the overhead of protecting secrets and credentials against breaches in our system compared to a traditional single-server setup is low (on average 119 ms in a 10-out-of-10 server setting over the Internet with 80 ms round-trip latency).
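
The threshold protection underlying PASTA-style systems rests on secret sharing of the long-term key material; a minimal Shamir t-of-n sketch conveys the idea (this is not the PAS-TA-U protocol itself, and the modulus is an arbitrary toy choice):

```python
import secrets

# Minimal Shamir t-of-n secret sharing over a prime field: a secret is split so
# that any t shares reconstruct it while fewer reveal nothing. Threshold token
# generation distributes trust in the same spirit. Illustrative sketch only.

P = 2**127 - 1  # Mersenne prime used as the field modulus (toy choice)

def share(secret, t, n):
    """Split `secret` into n shares, any t of which suffice to reconstruct."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the degree-(t-1) poly
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```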

CovidBloc: A Blockchain Powered Exposure Database for Contact Tracing

Contact tracing is an important mitigation tool for national health services to fight epidemics such as COVID-19. While many of the existing approaches for automated contact tracing focus on privacy-preserving decentralized solutions, the use of blockchain in these applications is often suggested for the transparency and immutability of the data being collected.
We present CovidBloc, a contact tracing system that implements the COVID-19 exposure database on the Hyperledger Fabric Blockchain Network. Like most decentralized contact tracing applications, the components of CovidBloc are: (1) a mobile application running on a Bluetooth-equipped smartphone, (2) a web dashboard for health officials, and (3) a backend server acting as a repository for the data being collected. We have implemented all components of CovidBloc to make it a fully functional contact tracing application. It is hosted at https://anonymous.4open.science/r/c6caad6d-62a4-463c-8301-472e421b931f/.
The mobile application for CovidBloc is developed for Android. The exposure notification system in our mobile application is implemented as per the recently released draft documentation by Google and Apple. The exposure notification API from Google and Apple is only available to a limited number of teams per country.
The backend server is an important component of any automated contact tracing system, acting as a repository for exposure data pushed by smartphones upon authorization by the health staff. Since adding or removing information on the server has privacy consequences, the server should not have to be trusted. The backend server for CovidBloc is implemented on the Hyperledger Fabric Blockchain network.

Feeding Three Birds With One Scone: A Generic Duplication Based Countermeasure To Fault Attacks (Extended Version)

In the current world of the Internet of Things and edge computing, computations are increasingly performed locally on small connected systems. As such, those devices are often vulnerable to adversarial physical access, enabling a plethora of physical attacks, which remain a challenge even when such devices are built with security in mind.
As cryptography is one of the cornerstones of secure communication among devices, the pertinence of fault attacks is becoming increasingly apparent in a setting where a device can be easily accessed in a physical manner. In particular, two recently proposed fault attacks, the Statistical Ineffective Fault Attack (SIFA) and the Fault Template Attack (FTA), are shown to be formidable due to their capability to bypass common duplication-based countermeasures. Duplication-based countermeasures, deployed to counter the Differential Fault Attack (DFA), work by duplicating the execution of the cipher and comparing the results to detect any effective fault, after which an appropriate recovery procedure is invoked. While a handful of countermeasures have been proposed against SIFA, no countermeasure is known to thwart FTA to date.
In this work, we propose a novel countermeasure based on duplication, which can protect against both SIFA and FTA. The proposal is also lightweight, with only a marginal additional cost over simple duplication-based countermeasures. Our countermeasure further protects against all known variants of DFA, including Selmke, Heyszl, Sigl’s attack from FDTC 2016. It does not inherently leak side-channel information and is easily adaptable for any symmetric key primitive. The validation of our countermeasure has been done through gate-level fault simulation.

PsiBench: Pragmatic Benchmark of Two-party Private Set Intersection

Private Set Intersection (PSI) allows two parties to obtain the intersection of their data sets while revealing nothing else. PSI is attractive in many scenarios and has wide applications in academia and industry. Over the last three decades, a large number of PSI protocols have been proposed using different cryptographic techniques, under different assumptions, for different scenarios. The inherent complexity, heterogeneous constructions, and rapid evolution of PSI protocols make it difficult to have a unified perspective and further promote the field.
We make the following three contributions to present a pragmatic benchmark of two-party PSI.
First, we propose the Map-and-Compare framework, which generalizes almost all efficient PSI constructions to date, and intuitively explains the idea and challenge of PSI constructions.
Based on the framework, we divide existing proposals into several categories and perform a systematic analysis of the features and use cases of different PSI variants.
Second, we present a Java-based benchmark library that implements almost all two-party PSI protocols (which are considered to define the state-of-the-art in terms of concrete performance) and supports rapid prototyping of new PSI protocols.
Third, by using our library as a common ground, we provide a comprehensive and impartial comparison of all our PSI implementations in detail.
We discuss the performance of different proposals in various settings.
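
The Map-and-Compare framework can be caricatured in a few lines: map every element through a keyed function, then compare mapped values. The sketch below shares the key between both parties and is therefore NOT a secure PSI; real protocols compute the mapping obliviously (e.g. via an oblivious PRF), so neither party learns the other's inputs:

```python
import hmac, hashlib

# Toy illustration of the "map-and-compare" shape of PSI protocols. Sharing
# the HMAC key as done here leaks everything to anyone holding the key; it is
# purely a structural sketch of map-then-compare, not a secure protocol.

def mapped(key, items):
    """Map each element through a keyed hash; returns digest -> element."""
    return {hmac.new(key, x.encode(), hashlib.sha256).digest(): x for x in items}

def intersect(key, set_a, set_b):
    """Compare the mapped values and recover the common elements."""
    ma, mb = mapped(key, set_a), mapped(key, set_b)
    return sorted(ma[d] for d in ma.keys() & mb.keys())
```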

On Bounded Distance Decoding with Predicate: Breaking the "Lattice Barrier" for the Hidden Number Problem

Lattice-based algorithms in cryptanalysis often search for a target vector satisfying integer linear constraints as a shortest or closest vector in some lattice. In this work, we observe that these formulations may discard non-linear information from the underlying application that can be used to distinguish the target vector even when it is far from being uniquely close or short.
We formalize lattice problems augmented with a predicate distinguishing a target vector and give algorithms for solving instances of these problems. We apply our techniques to lattice-based approaches for solving the Hidden Number Problem, a popular technique for recovering secret DSA or ECDSA keys in side-channel attacks, and demonstrate that our algorithms succeed in recovering the signing key for instances that were previously believed to be unsolvable using lattice approaches. We carried out extensive experiments using our estimation and solving framework, which we also make available with this work.

Information-Theoretic Security of Cryptographic Channels

We discuss the setting of information-theoretically secure channel protocols where confidentiality of transmitted data should hold against unbounded adversaries. We argue that there are two possible scenarios: One is that the adversary is currently bounded, but stores today's communication and tries to break confidentiality later when obtaining more computational power or time. We call channel protocols protecting against such attacks future-secure. The other scenario is that the adversary already has extremely strong computational powers and may try to use that power to break current executions. We call channels withstanding such stronger attacks unconditionally-secure.
We discuss how to instantiate both future-secure and unconditionally-secure channels. To this end we first establish according confidentiality and integrity notions, then prove the well-known composition theorem to also hold in the information-theoretic setting: Chosen-plaintext security of the channel protocol, together with ciphertext integrity, implies the stronger chosen-ciphertext notion. We discuss how to build future-secure channel protocols by combining computational message authentication schemes like HMAC with one-time pad encryption. Chosen-ciphertext security follows easily from the generalized composition theorem. We also show that using one-time pad encryption with the unconditionally-secure Carter-Wegman MACs we obtain an unconditionally-secure channel protocol.
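
The future-secure construction described above (one-time pad for confidentiality, HMAC for integrity, in encrypt-then-MAC order) can be sketched as follows; key distribution and pad management are omitted, and the sequence-number framing is our own simplification:

```python
import hmac, hashlib, secrets

# Sketch of a future-secure channel: one-time pad encryption gives
# confidentiality against unbounded adversaries, while an HMAC tag over the
# sequence number and ciphertext gives computational integrity. Pad bytes must
# never be reused; how the pad and MAC key are established is out of scope.

def send(pad, mac_key, seqno, msg):
    assert len(pad) == len(msg), "one-time pad must cover the whole message"
    ct = bytes(p ^ m for p, m in zip(pad, msg))
    tag = hmac.new(mac_key, seqno.to_bytes(8, 'big') + ct, hashlib.sha256).digest()
    return ct, tag

def receive(pad, mac_key, seqno, ct, tag):
    expect = hmac.new(mac_key, seqno.to_bytes(8, 'big') + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError('integrity failure')
    return bytes(p ^ c for p, c in zip(pad, ct))
```

Replacing HMAC by a Carter-Wegman MAC with one-time keys turns this sketch into the unconditionally-secure variant described above.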

Homological Characterization of Bounded $\mathbb{F}_2$-Regularity

Semi-regular sequences over $\mathbb{F}_2$ are sequences of homogeneous elements of the algebra $B^{(n)}=\mathbb{F}_2[X_1,\ldots,X_n]/(X_1^2,\ldots,X_n^2)$ which have as few relations between them as possible. It is believed that most such systems are semi-regular, and this property has important consequences for understanding the complexity of Gröbner basis algorithms such as F4 and F5 for solving such systems. In fact, even in one of the simplest and most important cases, that of quadratic sequences of length $n$ in $n$ variables, the question of the existence of semi-regular sequences for all $n$ remains open. In this paper we present a new framework for the concept of semi-regularity, which we hope will allow ideas and machinery from homological algebra to be applied to this interesting and important open question. First we introduce an analog of the Koszul complex and show that $\mathbb{F}_2$-semi-regularity can be characterized by the exactness of this complex. We show how the well-known formula for the Hilbert series of a semi-regular sequence can be deduced from the Koszul complex. Finally we show that the concept of first fall degree also has a natural description in terms of the Koszul complex.
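
For a semi-regular sequence of $m$ quadratic forms in $B^{(n)}$, the Hilbert series commonly attributed to such sequences is the truncation of $(1+z)^n/(1+z^2)^m$ at its first non-positive coefficient; the following small sketch (our own illustration, not the paper's code) computes those coefficients by power-series division:

```python
# Compute the truncated power series (1+z)^n / (1+z^2)^m, whose coefficients
# (up to the first non-positive one) give the Hilbert series associated with
# a quadratic semi-regular sequence of length m in n variables over F_2.

def poly_mul(a, b, order):
    out = [0] * order
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < order:
                out[i + j] += ai * bj
    return out

def poly_pow(base, e, order):
    out = [1]
    for _ in range(e):
        out = poly_mul(out, base, order)
    return out

def hilbert_coeffs(n, m, order):
    num = poly_pow([1, 1], n, order)      # (1+z)^n
    den = poly_pow([1, 0, 1], m, order)   # (1+z^2)^m
    # power-series division: find s with s * den = num (mod z^order);
    # den[0] == 1, so each coefficient is determined directly.
    s = []
    for k in range(order):
        s.append(num[k] - sum(s[i] * den[k - i] for i in range(k)))
    return s

def truncated_series(n, m, order=32):
    out = []
    for c in hilbert_coeffs(n, m, order):
        if c <= 0:
            break
        out.append(c)
    return out
```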

Last updated: 2020-12-15

Comments on “Multi Recipient Aggregate Signcryption Scheme Based on Elliptic Curve”

Aggregate signcryption combines the functionalities of aggregate signature and encryption. Very recently, Zia & Ali [1] (Wireless Personal Communications, https://doi.org/10.1007/s11277-020-07637-z) proposed an elliptic curve cryptography (ECC) based multi-recipient aggregate signcryption scheme. The authors claimed that their scheme is correct, efficient, and secure against known attacks. However, in this comment we show that their scheme is incorrect and the receiver(s) are unable to unsigncrypt the message.

Halo Infinite: Recursive zk-SNARKs from any Additive Polynomial Commitment Scheme

Polynomial commitment schemes (PCS) have recently been in the spotlight for their key role in building SNARKs. A PCS provides the ability to commit to a polynomial over a finite field and prove its evaluation at points. A succinct PCS has commitment and evaluation proof size sublinear in the degree of the polynomial. An efficient PCS has sublinear proof verification. Any efficient and succinct PCS can be used to construct a SNARK with similar security and efficiency characteristics (in the random oracle model).
Proof-carrying data (PCD) enables a set of parties to carry out an indefinitely long distributed computation where every step along the way is accompanied by a proof of correctness. It generalizes incrementally verifiable computation and can even be used to construct SNARKs.
Until recently, however, the only known method for constructing PCD required expensive SNARK recursion. A system called Halo first demonstrated a new methodology for building PCD without SNARKs, exploiting an aggregation property of the Bulletproofs inner-product argument.
The construction was heuristic because it made non-black-box use of a concrete instantiation of the Fiat-Shamir transform. We expand upon this methodology to show that PCD can be (heuristically) built from any homomorphic polynomial commitment scheme (PCS), even if the PCS evaluation proofs are neither succinct nor efficient. In fact, the Halo methodology extends to any PCS that has an even more general property, namely the ability to aggregate linear combinations of commitments into a new succinct commitment that can later be opened to this linear combination. Our results thus imply new constructions of SNARKs and PCD that were not previously described in the literature and serve as a blueprint for future constructions as well.
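
The aggregation property the Halo methodology relies on can be illustrated with a Pedersen-style commitment to a polynomial's coefficient vector over a toy group; the parameters below are illustrative only, not a secure instantiation:

```python
# Pedersen-style commitment to a polynomial's coefficient vector, illustrating
# the additive/homomorphic PCS property: a linear combination of commitments
# is itself a commitment to the same linear combination of polynomials.
# Toy group: integers modulo a small prime, with fixed "generators".

P = 2**89 - 1                                   # toy Mersenne prime modulus
GENS = [pow(7, 1 + i, P) for i in range(8)]     # illustrative generator choice

def commit(coeffs):
    """Commit to the coefficient vector: product of g_i^{c_i} mod P."""
    c = 1
    for g, a in zip(GENS, coeffs):
        c = c * pow(g, a, P) % P
    return c

def aggregate(c_f, c_g, alpha, beta):
    """Commitment to alpha*f + beta*g, computed from the commitments alone."""
    return pow(c_f, alpha, P) * pow(c_g, beta, P) % P
```

Because exponents add when commitments are multiplied, `aggregate` never touches the polynomials themselves, which is exactly the property that lets Halo fold many commitments into one.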

Designer Primes

Prime integers are the backbone of most public key cryptosystems. Attacks often go after the primes themselves, as in the case of all factoring and index calculus algorithms. Primes are time-sensitive cryptographic material and should be periodically changed. Unfortunately, many systems use fixed primes for a variety of reasons, including the difficulty of generating trusted, random, cryptographically secure primes. This is particularly concerning in the case of discrete log based cryptosystems. This paper describes a variant of provable prime generation, intended for discrete logarithm based cryptography, based on Pocklington's theorem with improved efficiency, flexibility and security.

Improved Differential Fault Attack on LEA by Algebraic Representation of Modular Addition

Recently, as the number of IoT (Internet of Things) devices has increased, the use of lightweight cryptographic algorithms that are suitable for environments with scarce resources has also increased. Consequently, the safety of such cryptographic algorithms is becoming increasingly important. Among them, side-channel analysis methods are very realistic threats. In this paper, we propose a novel differential fault attack method on the Lightweight Encryption Algorithm (LEA) cipher, which became the ISO/IEC international standard lightweight cryptographic algorithm in 2019. Previously proposed differential fault attack methods on the LEA used the Single Bit Flip model, making them difficult to apply to real devices. The proposed attack method uses a more realistic attacker assumption, the Random Word Error model. We demonstrate that the proposed attack method can be implemented on real devices using an electromagnetic fault injection setup. Our attack method has the weakest attacker assumption among attack methods proposed to date. In addition, the number of required fault-injected ciphertexts and the number of key candidates for which exhaustive search is performed are the least among all existing methods. Therefore, when implementing the LEA cipher on IoT devices, designers must apply appropriate countermeasures against fault injection attacks.
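
The algebraic representation of modular addition that such attacks exploit is the bit-level carry chain; a quick exhaustive sanity check of those equations (our own illustration) is:

```python
# Bit-level algebraic representation of modular addition: for z = x + y mod 2^n,
#   z_i = x_i XOR y_i XOR c_i,   c_0 = 0,   c_{i+1} = maj(x_i, y_i, c_i),
# where maj is the bitwise majority function. These per-bit equations are what
# let fault attacks on ARX ciphers like LEA be phrased algebraically.

def add_via_carry_chain(x, y, n):
    z, c = 0, 0
    for i in range(n):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        z |= (xi ^ yi ^ c) << i
        c = (xi & yi) | (xi & c) | (yi & c)   # majority(x_i, y_i, c_i)
    return z

def check(n=8):
    """Exhaustively verify the equations against integer addition mod 2^n."""
    return all(add_via_carry_chain(x, y, n) == (x + y) % (1 << n)
               for x in range(1 << n) for y in range(1 << n))
```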

On the Security of Homomorphic Encryption on Approximate Numbers

We present passive attacks against CKKS, the homomorphic encryption scheme for arithmetic on approximate numbers presented at Asiacrypt 2017. The attack is both theoretically efficient (running in expected polynomial time) and very practical, leading to complete key recovery with high probability and very modest running times. We implemented and tested the attack against major open source homomorphic encryption libraries, including HEAAN, SEAL, HElib and PALISADE, when computing several functions that often arise in applications of the CKKS scheme to machine learning on encrypted data, like mean and variance computations, and approximation of logistic and exponential functions using their Maclaurin series.
The attack shows that the traditional formulation of IND-CPA security (indistinguishability against chosen plaintext attacks) achieved by CKKS does not adequately capture security against passive adversaries when applied to approximate encryption schemes, and that a different, stronger definition is required to evaluate the security of such schemes.
We provide a solid theoretical basis for the security evaluation of homomorphic encryption on approximate numbers (against passive attacks) by proposing new definitions that naturally extend the traditional notion of IND-CPA security to the approximate computation setting. We propose both indistinguishability-based and simulation-based variants, as well as restricted versions of the definitions that limit the order and number of adversarial queries (as may be enforced by some applications). We prove implications and separations among different definitional variants, and discuss possible modifications to CKKS that may serve as a countermeasure to our attacks.
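
The core observation can be caricatured in one dimension: if approximate decryption returns $m+e$ for an LWE-style ciphertext, a passive observer can solve for the key linearly. A toy scalar sketch (not CKKS itself; the modulus and error are illustrative):

```python
import secrets

# One-dimensional caricature of the key-recovery observation: in an
# LWE/CKKS-style ciphertext (a, b = a*s + m + e mod q), approximate decryption
# returns y = m + e rather than m. Anyone who sees (a, b) together with y can
# compute s = (b - y) * a^{-1} mod q: the error that protects the key is
# handed straight back to the adversary.

Q = 2**61 - 1  # toy prime modulus

def encrypt(s, m, e):
    a = secrets.randbelow(Q - 1) + 1        # nonzero, hence invertible mod Q
    b = (a * s + m + e) % Q
    return a, b

def approx_decrypt(s, a, b):
    return (b - a * s) % Q                  # returns m + e, not m

def recover_key(a, b, y):
    return (b - y) * pow(a, Q - 2, Q) % Q   # linear solve for s
```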

Oblivious Pseudorandom Functions from Isogenies

An oblivious PRF, or OPRF, is a protocol between a client and a server, where the server has a key $k$ for a secure pseudorandom function $F$, and the client has an input $x$ for the function. At the end of the protocol the client learns $F(k,x)$, and nothing else, and the server learns nothing.
An OPRF is verifiable if the client is convinced that the server has evaluated the PRF correctly with respect to a prior commitment to $k$. OPRFs and verifiable OPRFs have numerous applications, such as private-set-intersection protocols, password-based key-exchange protocols, and defense against denial-of-service attacks. Existing OPRF constructions use RSA-, Diffie-Hellman-, and lattice-type assumptions. The first two are not post-quantum secure.
In this paper we construct OPRFs and verifiable OPRFs from isogenies. Our main construction uses isogenies of supersingular elliptic curves over $\mathbb{F}_{p^{2}}$ and tries to adapt the Diffie-Hellman OPRF to that setting. However, a recent attack on supersingular-isogeny systems due to Galbraith et al. [ASIACRYPT 2016] makes this approach difficult to secure. To overcome this attack, and to validate the server's response, we develop two new zero-knowledge protocols that convince each party that its peer has sent valid messages. With these protocols in place, we obtain an OPRF in the SIDH setting and prove its security in the UC framework.
Our second construction is an adaptation of the Naor-Reingold PRF to commutative group actions. Combining it with recent constructions of oblivious transfer from isogenies, we obtain an OPRF in the CSIDH setting.
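
The Diffie-Hellman OPRF that the first construction adapts to the isogeny setting can be sketched over a toy multiplicative group; the parameters are illustrative only, and the modular inverse via `pow` needs Python 3.8+:

```python
import hashlib, math, secrets

# Classical Diffie-Hellman (blinded-exponentiation) OPRF sketch: the client
# blinds H(x) with a random exponent r, the server raises the blinded value to
# its key k, and the client unblinds with r^{-1}. The client ends with H(x)^k;
# the server never sees x. Toy group Z_P^* with a small Mersenne prime P.

P = 2**61 - 1          # toy prime; the group Z_P^* has order P - 1

def hash_to_group(x):
    return int.from_bytes(hashlib.sha256(x).digest(), 'big') % (P - 1) + 1

def client_blind(x):
    while True:
        r = secrets.randbelow(P - 1)
        if r > 1 and math.gcd(r, P - 1) == 1:   # r must be invertible mod P-1
            return r, pow(hash_to_group(x), r, P)

def server_eval(k, blinded):
    return pow(blinded, k, P)                   # server applies its key

def client_unblind(r, response):
    return pow(response, pow(r, -1, P - 1), P)  # strip the blinding exponent
```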

Reconstructing with Less: Leakage Abuse Attacks in Two-Dimensions

Access and search pattern leakage from range queries are detrimental to the security of encrypted databases, as evidenced by a large body of work on efficient attacks that reconstruct one-dimensional databases. Recently, the first attack from 2D range queries showed that higher-dimensional databases are also in danger. This attack requires complete information for reconstruction. In this paper, we develop reconstructions that require less information. We present an order reconstruction attack that only depends on access pattern leakage, and empirically show that the order allows the attacker to infer the geometry of the underlying data. Notably, this attack also achieves full database reconstruction when the 1D horizontal and vertical projections of the points are dense.
We also give an approximate database reconstruction attack that is distribution-agnostic and works with any subset of the possible search pattern, given the order of the database. Finally, we show how knowledge of auxiliary information such as the centroid of a related dataset allows the attacker to improve the reconstruction. We support our results with formal analysis and experiments on real-world databases and queries drawn from various distributions.

Security Analysis of Public Key Searchable Encryption Schemes against Injection Attacks

Cloud computing and cloud storage are among the most efficient technologies for storing and processing data, but they raise many privacy concerns. Most of the challenges come from trusted or semi-trusted cloud servers to which computations over highly confidential data must be delegated. Data encryption can solve some confidentiality issues on the cloud, but it is not easy to provide privacy-preserving data processing services such as searching a query over encrypted data. Implementing searchable encryption in cloud infrastructure provides data confidentiality and privacy-preserving data processing while retaining search capability, which is the most important step in selecting a document. In this article, we first describe some injection attacks against searchable public key encryption schemes; specifically, a message injection attack and an index injection attack are applied against the PEKS and PERKS schemes. We then propose two new schemes, based on dPEKS and SAE-I, that are secure against these attacks. Finally, we analyze the efficiency and security of the proposed schemes and discuss some implementation issues.

Bounds on the nonlinearity of differentially uniform functions by means of their image set size, and on their distance to affine functions

We revisit and take a closer look at a (not so well known) result of a 2017 paper, showing that the differential uniformity of any vectorial function is bounded from below by an expression depending on the size of its image set. We make explicit the resulting tight lower bound on the image set size of differentially $\delta$-uniform functions.
We also significantly improve an upper bound on the nonlinearity of vectorial functions obtained in the same reference and involving their image set size. We study when the resulting bound is sharper than the covering radius bound. We obtain as a by-product a lower bound on the Hamming distance between differentially $\delta$-uniform functions and affine functions, which we improve significantly with a second bound. This leads us to study what can be the maximum Hamming distance between vectorial functions and affine functions. We provide an upper bound which is slightly sharper than a bound by Liu, Mesnager and Chen when $m < n$, and a second upper bound, which is much stronger in the case (happening in practice) where $m$ is near $n$; we study the tightness of this latter bound, which leads to an interesting question on APN functions, which we answer. We finally make more precise the bound on the differential uniformity which was the starting point of the paper.
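
The two quantities these bounds relate, differential uniformity and image set size, are easy to compute for small functions; here for the cube map over $\mathrm{GF}(2^3)$, which is an APN permutation (our own illustrative code):

```python
# Compute differential uniformity and image set size for the cube map
# x -> x^3 over GF(2^3). The cube map is APN (differential uniformity 2)
# and a bijection here, so its image set is the whole field.

def gf8_mul(a, b):
    """Carry-less multiplication modulo the irreducible polynomial x^3 + x + 1."""
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):                 # reduce degrees 4 and 3
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def f(x):
    return gf8_mul(gf8_mul(x, x), x)  # x^3

def differential_uniformity(func, n):
    """max over nonzero a and all b of #{x : func(x^a) ^ func(x) == b}."""
    delta = 0
    for a in range(1, 1 << n):
        for b in range(1 << n):
            count = sum(1 for x in range(1 << n) if func(x ^ a) ^ func(x) == b)
            delta = max(delta, count)
    return delta

image = {f(x) for x in range(8)}
```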

On the Concurrent Composition of Quantum Zero-Knowledge

We study the notion of zero-knowledge secure against quantum polynomial-time verifiers (referred to as quantum zero-knowledge) in the concurrent composition setting. Despite being extensively studied in the classical setting, concurrent composition in the quantum setting has hardly been studied.
We initiate a formal study of concurrent quantum zero-knowledge. Our results are as follows:
- Bounded Concurrent QZK for NP and QMA: Assuming post-quantum one-way functions, there exists a quantum zero-knowledge proof system for NP in the bounded concurrent setting. In this setting, we fix a priori the number of verifiers that can simultaneously interact with the prover. Under the same assumption, we also show that there exists a quantum zero-knowledge proof system for QMA in the bounded concurrency setting.
- Quantum Proofs of Knowledge: Assuming quantum hardness of learning with errors (QLWE), there exists a bounded concurrent zero-knowledge proof system for NP satisfying quantum proof of knowledge property.
Our extraction mechanism simultaneously allows for extraction probability to be negligibly close to acceptance probability (extractability) and also ensures that the prover's state after extraction is statistically close to the prover's state after interacting with the verifier (simulatability).
The seminal work of [Unruh EUROCRYPT'12], and all its followups, satisfied a weaker version of the extractability property and, moreover, did not achieve simulatability. Our result yields a proof of quantum knowledge system for QMA with better parameters than prior works.

Zero-Knowledge IOPs with Linear-Time Prover and Polylogarithmic-Time Verifier

Interactive oracle proofs (IOPs) are a multi-round generalization of probabilistically checkable proofs that play a fundamental role in the construction of efficient cryptographic proofs.
We present an IOP that simultaneously achieves the properties of zero knowledge, linear-time proving, and polylogarithmic-time verification. We construct a zero-knowledge IOP where, for the satisfiability of an $N$-gate arithmetic circuit over any field of size $\Omega(N)$, the prover uses $O(N)$ field operations and the verifier uses $\mathrm{polylog}(N)$ field operations (with proof length $O(N)$ and query complexity $\mathrm{polylog}(N)$). Polylogarithmic verification is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-time-computable encoding of the circuit whose satisfiability is being proved).
Our result implies progress on a basic goal in the area of efficient zero knowledge. Via a known transformation, we obtain a zero knowledge argument system where the prover runs in linear time and the verifier runs in polylogarithmic time; the construction is plausibly post-quantum and only makes a black-box use of lightweight cryptography (collision-resistant hash functions).

Flexible and Efficient Verifiable Computation on Encrypted Data

We consider the problem of verifiable and private delegation of computation [Gennaro et al. CRYPTO'10] in which a client stores private data on an untrusted server and asks the server to compute functions over this data. In this scenario we aim to achieve three main properties: the server should not learn information on inputs and outputs of the computation (privacy), the server cannot return wrong results without being caught (integrity), and the client can verify the correctness of the outputs faster than running the computation (efficiency).
A known paradigm to solve this problem is to use a (non-private) verifiable computation (VC) to prove correctness of a homomorphic encryption (HE) evaluation on the ciphertexts. Despite the research advances in obtaining efficient VC and HE, using these two primitives together in this paradigm is concretely expensive. Recent work [Fiore et al. CCS'14, PKC'20] addressed this problem by designing specialized VC solutions that however require the HE scheme to work with very specific parameters; notably HE ciphertexts must be over $\mathbb{Z}_q$ for a large prime $q$.
In this work we propose a new solution that allows a flexible choice of HE parameters, while staying modular (based on the paradigm combining VC and HE) and efficient (the VC and the HE schemes are both executed at their best efficiency). At the core of our new protocol are new homomorphic hash functions for Galois rings.
As an additional contribution we extend our results to support non-deterministic computations on encrypted data and an additional privacy property by which verifiers do not learn information on the inputs of the computation.
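For intuition about why homomorphic hashes help in this paradigm, here is a classical toy example over $\mathbb{Z}_p^*$ (not the paper's Galois-ring construction): a hash $H(x) = g^x \bmod p$ lets one check a linear relation between hashed values without handling the values themselves. The parameters below are illustrative choices.

```python
P = 2**61 - 1   # a Mersenne prime (illustrative parameter choice)
G = 3           # fixed base; any element of Z_P^* works for this demo

def hom_hash(x: int) -> int:
    """Toy additively homomorphic hash: H(x) = G^x mod P, so H(x + y) = H(x) * H(y) mod P."""
    return pow(G, x % (P - 1), P)

# the homomorphic property: the hashes of two inputs determine the hash of their sum
a, b = 1234, 5678
assert hom_hash(a + b) == hom_hash(a) * hom_hash(b) % P
```

Roughly, this is the role such hashes play in the VC-over-HE paradigm: the proof system can operate on short hashes of large ciphertexts while the homomorphic property keeps the hashes consistent with the homomorphic evaluation.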

BUFFing signature schemes beyond unforgeability and the case of post-quantum signatures

Modern digital signature schemes can provide more guarantees than the standard notion of (strong) unforgeability, such as offering security even in the presence of maliciously generated keys, or requiring knowledge of a message to produce a signature for it. The use of signature schemes that lack these properties has previously enabled attacks on real-world protocols. In this work we revisit several of these notions beyond unforgeability, establish relations among them, provide the first formal definition of non-re-signability, and present two generic transformations that can endow a given signature scheme with these properties in a provable and efficient way.
Our results are not only relevant for established schemes: for example, the ongoing NIST PQC competition towards standardizing post-quantum signature schemes had six candidates in its third round, of which three are to be standardized. We perform an in-depth analysis of all the candidates with respect to their security properties beyond unforgeability. We show that many of them do not yet offer these stronger guarantees, which implies that the security guarantees of these post-quantum schemes are not strictly stronger than, but instead incomparable to, those of classical signature schemes. We show how applying our transformations would efficiently solve this, paving the way for the standardized schemes to provide these additional guarantees and thereby making them harder to misuse.

Nonce-Misuse Security of the SAEF Authenticated Encryption mode

ForkAE is a NIST lightweight cryptography candidate that uses the forkcipher primitive in two modes of operation -- SAEF and PAEF -- optimized for authenticated encryption of the shortest messages. SAEF is a sequential and online AEAD that minimizes the memory footprint compared to its alternative parallel mode PAEF, catering to the most constrained devices. SAEF was proven AE secure against nonce-respecting adversaries.
Due to the acute and direct exposure of lightweight devices to misuse and mishandling, nonce reuse presents a very realistic attack vector in most use cases of lightweight cryptography. Furthermore, many lightweight applications mandate security for their online AEAD schemes against block-wise adversaries. Surprisingly, very few NIST lightweight AEAD candidates come with provable guarantees against these security threats.
In this work, we investigate the provable security guarantees of SAEF when nonces are repeated, under a refined version of the notion of online authenticated encryption (OAE) given by Fleischmann et al. in 2012.
Using the coefficient-H technique, we show that, with no modifications, SAEF is OAE secure up to the birthday bound, i.e., up to $2^{n/2}$ processed blocks of data, where $n$ is the block size of the forkcipher. The implications of our work are that SAEF is safe to use in a block-wise fashion and that, if nonces get repeated, ciphertext integrity is unaffected and confidentiality degrades only to a limited extent, namely up to repetitions of common message prefixes.
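To make the $2^{n/2}$ figure concrete, here is the standard birthday-bound calculation (with $n = 64$ used purely as an illustrative block size, not a claim about ForkAE's parameters):

```python
def birthday_bound(q: int, n: int) -> float:
    """Standard bound q(q-1)/2^(n+1) on the collision probability among q random n-bit blocks."""
    return q * (q - 1) / 2 ** (n + 1)

# the adversary's advantage erodes security as q approaches 2^(n/2)
print(birthday_bound(2**20, 64))   # far below the bound: negligible
print(birthday_bound(2**32, 64))   # q = 2^(n/2): advantage approaches 1/2
```

Hence "secure up to $2^{n/2}$ processed blocks" means the proven advantage stays small until the data complexity nears that threshold, at which point the guarantee becomes vacuous.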

Revisiting the Security of DbHtS MACs: Beyond-Birthday-Bound in the Multi-User Setting

Double-block Hash-then-Sum (DbHtS) MACs are a class of MACs that aim to achieve beyond-birthday-bound security, including SUM-ECBC, PMAC_Plus, 3kf9 and LightMAC_Plus. Recently Datta et al. (FSE'19), and then Kim et al. (Eurocrypt'20), proved that DbHtS constructions are secure beyond the birthday bound in the single-user setting. However, by a generic reduction, their results degrade to (or even worse than) the birthday bound in the multi-user setting.
In this work, we revisit the security of DbHtS MACs in the multi-user setting. We propose a generic framework to prove beyond-birthday-bound security for DbHtS constructions. We demonstrate the usability of this framework with applications to key-reduced variants of DbHtS MACs, including 2k-SUM-ECBC, 2k-PMAC_Plus and 2k-LightMAC_Plus. Our results show that the security of these constructions will not degrade as the number of users grows. On the other hand, our results also indicate that these constructions are secure beyond the birthday bound in both the single-user and multi-user settings, without the additional domain separation used in prior work to simplify the analysis.
Moreover, we find a critical flaw in 2kf9, which was proved secure beyond the birthday bound by Datta et al. (FSE'19): we can forge a tag with probability 1 without making any queries. We further show attacks with birthday-bound complexity on several variants of 2kf9.
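For readers unfamiliar with the Double-block Hash-then-Sum shape, the following is a hedged toy sketch built from stand-in primitives (it is not any of the concrete schemes named above, and the hash and PRF here are illustrative placeholders): the message is hashed to two $n$-bit blocks $(\Sigma, \Theta)$, each block is processed by an independently keyed primitive, and the tag is the XOR (sum) of the two outputs.

```python
import hashlib

N = 16  # toy "block size" in bytes

def prf(key: bytes, block: bytes) -> bytes:
    # stand-in keyed primitive (NOT a real block cipher); illustration only
    return hashlib.sha256(key + block).digest()[:N]

def double_block_hash(msg: bytes) -> tuple:
    # toy 2n-bit hash of the message, split into the halves (Sigma, Theta)
    d = hashlib.sha256(b"hash" + msg).digest()
    return d[:N], d[N:2 * N]

def dbhts_tag(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    sigma, theta = double_block_hash(msg)
    # "sum" step: XOR of the two independently keyed outputs
    return bytes(a ^ b for a, b in zip(prf(k1, sigma), prf(k2, theta)))
```

The beyond-birthday-bound proofs discussed in this abstract concern exactly this two-output-sum structure: summing two (pseudo)random blocks hides individual-block collisions that would otherwise cap security at the birthday bound.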

Reducing Participation Costs via Incremental Verification for Ledger Systems

Ledger systems are applications run on peer-to-peer networks that provide strong integrity guarantees. However, these systems often have high participation costs. For a server to join this network, the bandwidth and computation costs grow linearly with the number of state transitions processed; for a client to interact with a ledger system, it must either maintain the entire ledger system state like a server or trust a server to correctly provide such information. In practice, these substantial costs centralize trust in the hands of the relatively few parties with the resources to maintain the entire ledger system state.
The notion of *incrementally verifiable computation*, introduced by Valiant (TCC '08), has the potential to significantly reduce such participation costs. While prior works have studied incremental verification for basic payment systems, the study of incremental verification for a general class of ledger systems remains in its infancy.
In this paper we initiate a systematic study of incremental verification for ledger systems, including its foundations, implementation, and empirical evaluation. We formulate a cryptographic primitive providing the functionality and security for this setting, and then demonstrate how it captures applications with privacy and user-defined computations. We build a system that enables incremental verification, for applications such as privacy-preserving payments, with universal (application-independent) setup. Finally, we show that incremental verification can reduce participation costs by orders of magnitude, for a bare-bones version of Bitcoin.

Delegated RingCT: faster anonymous transactions

We present Delegated RingCT, a modification of the RingCT protocol with stealth addresses that makes it compatible with Delegated Proof of Stake (DPoS) based consensus mechanisms.
Our scheme has two building blocks: a customised version of an Integrated Signature and Encryption scheme, composed of a public-key encryption scheme and two signature schemes (a digital signature and a linkable ring signature); and non-interactive zero-knowledge proofs. We give a description of the scheme, security proofs, and a prototype implementation whose benchmarking we discuss.
Although Delegated RingCT does not have the same degree of anonymity as other RingCT constructions, we argue that the benefits brought by compatibility with DPoS consensus mechanisms constitute a reasonable trade-off for being able to develop an anonymous decentralised cryptocurrency that is faster and more scalable than existing ones.