
Paper 2014/238

High Parallel Complexity Graphs and Memory-Hard Functions

Joel Alwen and Vladimir Serbinenko

Abstract

Motivated by the growing importance of parallelism in modern computational systems, we introduce a very natural generalization of the powerful (sequential) black pebbling game over DAGs to a parallel setting. For this new variant, when pebbling graphs with multiple disconnected components (say, when modelling the computation of multiple functions in parallel), we demonstrate a significant shortcoming of the two most common types of complexity measures for DAGs inherited from the sequential setting (namely S-complexity and ST-complexity). Thus, to ensure the applicability of the new pebbling game as a tool for proving results about, say, the \emph{amortized} hardness of functions being repeatedly evaluated, we introduce a new complexity measure for DAGs called \emph{cumulative complexity} (CC) and show how it overcomes this problem.

With the aim of facilitating new complexity lower bounds in parallel settings, we turn to the task of finding high-CC graphs for the parallel pebbling game. First we look at several types of graphs, such as certain stacks of superconcentrators, permutation graphs, bit-reversal graphs and pyramid graphs, which are known to have high (even optimally so) complexity in the sequential setting. We show that all of them have much lower parallel CC than one could hope for from a graph of equal size. This motivates our first main technical result, namely the construction of a new family of constant in-degree graphs whose parallel CC approaches maximality to within a polylogarithmic factor.

The second contribution of this work is to demonstrate an application of these new theoretical tools, in particular to the field of cryptography. Memory-hard functions (MHFs), introduced by Percival~\cite{Per09}, have the intuitive goal of leveraging the relatively high cost of memory in integrated circuits (compared to general-purpose computers) in order to decrease the attractiveness of using custom circuits to mount brute-force attacks. We provide a new formalization of the key property of such functions (overcoming problems with the approach of~\cite{Per09}) using a new type of \emph{amortized} computational hardness for families of functions in the (parallel) random oracle model. We motivate the hardness definition by showing how it provides an immediate lower bound on the monetary cost of repeatedly evaluating such functions in several real-world (parallel) computational environments (e.g. FPGAs, ASICs, cloud computers). Indeed, in practice such devices are often the most cost-effective means of mounting large-scale brute-force attacks on security-relevant functions (such as, say, Proofs-of-Work and the hash functions used to obscure stored passwords on login servers). As the main technical result of this section, for the family of functions $f_G$ (over strings) characterized via a given DAG $G$, we prove a lower bound on the hardness of $f_G$ in terms of the parallel CC of $G$. In consequence, we obtain the first provably secure (and intuitively sound) MHF.
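To make the two central objects of the abstract concrete, the following Python sketch spells out (i) a parallel black pebbling of a tiny DAG together with its S-, ST- and cumulative (CC) complexities, and (ii) a graph-based hash function in the spirit of $f_G$, with SHA-256 standing in for the random oracle. The 4-node DAG, the node names, and every function name below are our own illustrative assumptions, not constructions from the paper.

```python
import hashlib

# DAG given as an adjacency map: node -> list of parents (immediate predecessors).
# A toy 4-node DAG with sources a, b and sink d (our own example, not from the paper).
parents = {
    "a": [],
    "b": [],
    "c": ["a", "b"],
    "d": ["c"],
}

def is_legal(pebbling, parents):
    """Legality in the *parallel* black pebbling game: a pebble may be placed on
    node v at step i only if every parent of v carried a pebble at step i-1.
    Any number of pebbles may be placed or removed within a single step."""
    prev = set()
    for step in pebbling:
        for v in step - prev:                      # newly placed pebbles
            if not set(parents[v]) <= prev:
                return False
        prev = step
    return True

def s_st_cc(pebbling):
    """Space (S), space-time (ST) and cumulative complexity (CC) of a pebbling:
    S is the peak number of pebbles, ST is S times the number of steps, and
    CC is the sum, over all steps, of the number of pebbles on the graph."""
    s = max(len(step) for step in pebbling)
    t = len(pebbling)
    cc = sum(len(step) for step in pebbling)
    return s, s * t, cc

# One legal parallel pebbling that ends with a pebble on the sink d.
pebbling = [
    {"a", "b"},       # pebble both sources at once (allowed in the parallel game)
    {"a", "b", "c"},  # c's parents are pebbled, so place a pebble on c
    {"c"},            # discard the source pebbles
    {"d"},            # pebble the sink, discard c
]
assert is_legal(pebbling, parents)
print("S, ST, CC =", s_st_cc(pebbling))            # -> (3, 12, 7)

def f_G(x, parents, order=("a", "b", "c", "d"), sink="d"):
    """A graph-based hash function in the spirit of f_G: label each node with
    H(node, input, parent labels) in topological order and output the sink's
    label. SHA-256 stands in for the random oracle purely for illustration."""
    labels = {}
    for v in order:
        data = v.encode() + x + b"".join(labels[p] for p in parents[v])
        labels[v] = hashlib.sha256(data).digest()
    return labels[sink]

print("f_G(b'pw') =", f_G(b"pw", parents).hex()[:16], "...")
```

Note that CC, unlike S and ST, is additive across disconnected components: pebbling two disjoint copies of a graph costs twice the CC, whereas staggering the two pebblings can keep the combined ST well below the sum of the individual ST costs. This is precisely the amortization issue the abstract raises for repeated function evaluation.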

Metadata
Available format(s)
PDF
Category
Foundations
Publication info
Preprint. MINOR revision.
Keywords
Parallel Pebbling, Memory-Hard Functions
Contact author(s)
alwenj @ inf ethz ch
History
2014-11-04: revised
2014-04-11: received
Short URL
https://ia.cr/2014/238
License
Creative Commons Attribution
CC BY