Paper 2016/465
Can Large Deviation Theory be Used for Estimating Data Complexity?
Subhabrata Samajder and Palash Sarkar
Abstract
Statistical analysis of attacks on block ciphers has mostly used normal approximations. A few recent works have proposed doing away with normal approximations and instead using Chernoff and Hoeffding bounds to obtain rigorous bounds on the data complexities of several attacks. This opens up the question of whether even better general bounds can be obtained using the statistical theory of large deviations. In this note we examine this question. Our conclusion is that while in theory this is indeed possible, in general obtaining meaningful expressions for data complexity presents several difficulties. This leaves open the question of whether this can be done for specific attacks.
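To illustrate the kind of rigorous bound the abstract refers to, the sketch below uses Hoeffding's inequality to derive a sufficient sample size for estimating a mean to within a tolerance `t` with error probability at most `alpha`. This is a generic textbook calculation, not the paper's own derivation; the function name and parameters are illustrative, and the i.i.d.-in-[0, 1] assumption is ours.

```python
import math

def hoeffding_sample_size(t: float, alpha: float) -> int:
    """Sufficient number of i.i.d. samples in [0, 1] so that the sample
    mean deviates from the true mean by at least t with probability <= alpha.

    Hoeffding's inequality gives
        P(|mean_n - mu| >= t) <= 2 * exp(-2 * n * t**2),
    so requiring the right-hand side to be at most alpha and solving for n
    yields n >= ln(2 / alpha) / (2 * t**2).
    """
    return math.ceil(math.log(2.0 / alpha) / (2.0 * t * t))
```

For example, detecting a bias of `t = 0.1` with error probability `alpha = 0.05` requires 185 samples; the bound grows quadratically as the bias `t` shrinks, which is the usual behaviour of data-complexity estimates in statistical cryptanalysis.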
Metadata
- Category
- Secret-key cryptography
- Publication info
- Preprint. MINOR revision.
- Keywords
- block ciphers, cryptanalysis, data complexity
- Contact author(s)
- palash @ isical ac in
- History
- 2016-05-24: revised
- 2016-05-13: received
- Short URL
- https://ia.cr/2016/465
- License
- CC BY
BibTeX
@misc{cryptoeprint:2016/465,
  author = {Subhabrata Samajder and Palash Sarkar},
  title = {Can Large Deviation Theory be Used for Estimating Data Complexity?},
  howpublished = {Cryptology {ePrint} Archive, Paper 2016/465},
  year = {2016},
  url = {https://eprint.iacr.org/2016/465}
}