Cryptology ePrint Archive: Report 2014/238

High Parallel Complexity Graphs and Memory-Hard Functions

Joel Alwen and Vladimir Serbinenko

Abstract: Motivated by the growing importance of parallelism in modern computational systems, we introduce a very natural generalization of the powerful (sequential) black pebbling game over DAGs to a parallel setting. For this new variant, when pebbling graphs with multiple disconnected components (say, when modelling the computation of multiple functions in parallel), we demonstrate a significant shortcoming of the two most common complexity measures for DAGs inherited from the sequential setting (namely S-complexity and ST-complexity). Thus, to ensure the applicability of the new pebbling game as a tool for proving results about, say, the \emph{amortized} hardness of repeatedly evaluated functions, we introduce a new complexity measure for DAGs called \emph{cumulative complexity} (CC) and show how it overcomes this problem.\\
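To make the measure concrete: the cumulative complexity of a pebbling is the sum, over all time steps, of the number of pebbles on the graph at that step, so it charges for memory held over time rather than only peak memory (S) or a space-time product (ST). The following is a minimal illustrative sketch of that sum (the function name and example pebbling are ours, not from the paper):

```python
def cumulative_complexity(pebbling):
    """Sum of pebbles held at each step.

    pebbling: a list of sets, where pebbling[i] is the set of
    vertices carrying a pebble at time step i.
    """
    return sum(len(step) for step in pebbling)

# A sequential pebbling of the path 0 -> 1 -> 2 -> 3, discarding each
# pebble once its successor has been pebbled:
path_pebbling = [{0}, {0, 1}, {1, 2}, {2, 3}]
print(cumulative_complexity(path_pebbling))  # 1 + 2 + 2 + 2 = 7
```

Under this measure, pebbling k disconnected copies of a graph one after another (rather than all at once) can cost roughly k times as much, which is exactly the amortization behaviour that S- and ST-complexity fail to capture.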

With the aim of facilitating new complexity lower-bounds in parallel settings, we turn to the task of finding graphs with high CC for the parallel pebbling game. First we examine several types of graphs, such as certain stacks of superconcentrators, permutation graphs, bit-reversal graphs and pyramid graphs, which are known to have high (even optimally so) complexity in the sequential setting. We show that all of them have much lower parallel CC than one could hope for from a graph of equal size. This motivates our first main technical result, namely the construction of a new family of constant in-degree graphs whose parallel CC approaches maximality to within a polylogarithmic factor.\\

The second contribution of this work is to demonstrate an application of these new theoretical tools, in particular to the field of cryptography. Memory-hard functions (MHFs), introduced by Percival~\cite{Per09}, have the intuitive goal of leveraging the relatively high cost of memory in integrated circuits, compared to general-purpose computers, in order to decrease the attractiveness of using custom circuits to mount brute-force attacks. We provide a new formalization of the key property of such functions (overcoming problems with the approach of~\cite{Per09}) using a new type of \emph{amortized} computational hardness for families of functions in the (parallel) random oracle model. We motivate the hardness definition by showing how it provides an immediate lower-bound on the monetary cost of repeatedly evaluating such functions in several real-world (parallel) computational environments (e.g. FPGAs, ASICs, cloud computing). Indeed, in practice such devices are often the most cost-effective means of mounting large-scale brute-force attacks on security-relevant functions (such as, say, Proofs-of-Work and the hash functions used to obscure stored passwords on login servers). As the main technical result of this section, for the family of functions $f_G$ (over strings) characterized by a given DAG $G$, we prove a lower-bound on the hardness of $f_G$ in terms of the parallel CC of $G$. As a consequence, we obtain the first provably secure (and intuitively sound) MHF.

Category / Keywords: foundations / Parallel Pebbling, Memory-Hard Functions

Date: received 2 Apr 2014

Contact author: alwenj at inf ethz ch

Available format(s): PDF

Version: 20140411:195251

