Paper 2018/147
Sustained Space Complexity
Joel Alwen, Jeremiah Blocki, and Krzysztof Pietrzak
Abstract
Memory-hard functions (MHFs) are functions whose evaluation cost is dominated by memory cost. MHFs are egalitarian, in the sense that evaluating them on dedicated hardware (like FPGAs or ASICs) is not much cheaper than on off-the-shelf hardware (like x86 CPUs). MHFs have interesting cryptographic applications, most notably to password hashing and securing blockchains. Alwen and Serbinenko [STOC'15] define the cumulative memory complexity (cmc) of a function as the sum (over all time steps) of the amount of memory required to compute the function. They advocate that a good MHF must have high cmc. Unlike previous notions, cmc takes into account that dedicated hardware might exploit amortization and parallelism. Still, cmc has been criticized as insufficient, as it fails to capture possible time-memory trade-offs; as memory cost doesn't scale linearly, functions with the same cmc could still have very different actual hardware cost. In this work we address this problem, and introduce the notion of sustained-memory complexity, which requires that any algorithm evaluating the function must use a large amount of memory for many steps. We construct functions (in the parallel random oracle model) whose sustained-memory complexity is almost optimal: our function can be evaluated using $n$ steps and $O(n/\log(n))$ memory, in each step making one query to the (fixed-input-length) random oracle, while any algorithm that can make arbitrarily many parallel queries to the random oracle still needs $\Omega(n/\log(n))$ memory for $\Omega(n)$ steps. As has been done for various notions (including cmc) before, we reduce the task of constructing an MHF with high sustained-memory complexity to proving pebbling lower bounds on DAGs. Our main technical contribution is the construction of a family of DAGs on $n$ nodes with constant indegree and high ``sustained-space complexity'', meaning that any parallel black-pebbling strategy requires $\Omega(n/\log(n))$ pebbles for at least $\Omega(n)$ steps.
Along the way we construct a family of maximally ``depth-robust'' DAGs with maximum indegree $O(\log n)$, improving upon the construction of Mahmoody et al. [ITCS'13], which had maximum indegree $O\left(\log^2 n \cdot \mathsf{polylog}(\log n)\right)$.
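To make the two pebbling metrics from the abstract concrete, here is a minimal illustrative sketch (not from the paper): given a parallel black pebbling, written as the sequence of pebble sets over time, the cumulative memory complexity is the sum of the set sizes, while the sustained-space measure counts how many steps keep at least $s$ pebbles on the graph. The function names, the legality checker, and the toy path DAG below are all hypothetical choices for illustration.

```python
def pebbling_metrics(pebbling):
    """Given a pebbling (list of pebble sets, one per step), return the
    cumulative memory complexity and a sustained-space counter."""
    cmc = sum(len(P) for P in pebbling)

    def sustained_steps(s):
        # Number of steps during which at least s pebbles are in use.
        return sum(1 for P in pebbling if len(P) >= s)

    return cmc, sustained_steps

def is_legal(parents, pebbling):
    """Check the parallel black-pebbling rule: a pebble may be placed on v
    only if all of v's parents carried pebbles in the previous step
    (sources have no parents); pebbles may be removed at any time."""
    prev = set()
    for P in pebbling:
        for v in P - prev:  # newly placed pebbles this step
            if not parents.get(v, set()) <= prev:
                return False
        prev = P
    return True

# Toy example: path DAG 0 -> 1 -> 2 -> 3, pebbled sequentially while
# keeping only the most recent predecessor.
parents = {i: {i - 1} for i in range(1, 4)}
pebbling = [{0}, {0, 1}, {1, 2}, {2, 3}]

assert is_legal(parents, pebbling)
cmc, sustained = pebbling_metrics(pebbling)
print(cmc)           # 7 = 1 + 2 + 2 + 2
print(sustained(2))  # 3 steps use at least 2 pebbles
```

The point of the sustained-space notion is visible even in this toy trace: two different pebblings can have the same cmc while one concentrates its memory usage in a few steps and the other sustains it across many.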
Note: Full Version of EUROCRYPT 2018 paper. Includes missing proofs and expanded discussion.
Metadata
 Category
 Cryptographic protocols
 Publication info
 A major revision of an IACR publication in EUROCRYPT 2018
 Keywords
 Memory Hard Functions, Depth-Robust Graph, Sustained Space Complexity
 Contact author(s)
 jblocki @ purdue edu
 History
 2018-02-09: revised
 2018-02-08: received
 Short URL
 https://ia.cr/2018/147
 License

CC BY
BibTeX
@misc{cryptoeprint:2018/147,
      author = {Joel Alwen and Jeremiah Blocki and Krzysztof Pietrzak},
      title = {Sustained Space Complexity},
      howpublished = {Cryptology ePrint Archive, Paper 2018/147},
      year = {2018},
      note = {\url{https://eprint.iacr.org/2018/147}},
      url = {https://eprint.iacr.org/2018/147}
}