Paper 2023/1462

High-precision RNS-CKKS on fixed but smaller word-size architectures: theory and application

Rashmi Agrawal, Boston University
Jung Ho Ahn, Seoul National University
Flavio Bergamaschi, Intel Labs
Ro Cammarota, Intel Labs
Jung Hee Cheon, Seoul National University
Fillipe D. M. de Souza, Intel Labs
Huijing Gong, Intel Labs
Minsik Kang, Seoul National University
Duhyeong Kim, Intel Labs
Jongmin Kim, Seoul National University
Hubert de Lassus, Intel Labs
Jai Hyun Park, Seoul National University
Michael Steiner, Intel Labs
Wen Wang, Intel Labs
Abstract

A prevalent issue in the residue number system (RNS) variant of the Cheon-Kim-Kim-Song (CKKS) homomorphic encryption (HE) scheme is the challenge of efficiently achieving high precision on hardware architectures with a fixed, yet smaller, word size of bit-length $W$, especially when the scaling factor satisfies $\log\Delta > W$. In this work, we introduce an efficient solution termed composite scaling. In this approach, we group multiple RNS primes as $q_\ell:= \prod_{j=0}^{t-1} q_{\ell,j}$ such that $\log q_{\ell,j} < W$ for $0\le j < t$, and use each composite $q_\ell$ in the rescaling procedure as $\mathsf{ct}\mapsto \lfloor \mathsf{ct} / q_\ell\rceil$. Here, the number of primes, denoted by $t$, is termed the composition degree. This strategy contrasts with the traditional rescaling method in RNS-CKKS, where each $q_\ell$ is chosen as a single $\log\Delta$-bit prime, a method we designate as single scaling. To achieve higher precision in single scaling, where $\log\Delta > W$, one would either need a novel hardware architecture with word size $W' > \log\Delta$ or would have to resort to relatively inefficient solutions rooted in multi-precision arithmetic. This problem, however, does not arise in composite scaling. In the composite scaling approach, the larger the composition degree $t$, the greater the precision attainable with RNS-CKKS across an extensive range of secure parameters tailored for workload deployment. We have integrated composite scaling RNS-CKKS into both the OpenFHE and Lattigo libraries. This integration was achieved via a concrete implementation of the method and its application to up-to-date workloads, specifically logistic regression training and convolutional neural network inference. Our experiments demonstrate that the single and composite scaling approaches are functionally equivalent, both theoretically and practically.
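The core idea can be illustrated with a toy sketch. The following Python snippet models a "ciphertext" as a plain integer carrying a message scaled by $\Delta$, ignoring polynomial structure, noise, and the RNS representation entirely; the word size, scaling factor, and moduli below are illustrative assumptions (in a real implementation the $q_{\ell,j}$ would be NTT-friendly primes), not parameters from the paper.

```python
# Toy model of composite scaling: a ciphertext is just an integer m * Delta.
# All parameters here are illustrative, not taken from the paper.

W = 32                       # assumed fixed word size of the hardware
log_delta = 56               # desired scaling factor with log(Delta) > W
delta = 1 << log_delta

# Composite scaling with degree t = 2: two word-size moduli, each below
# 2^W, whose product approximates Delta = 2^56. (Real RNS-CKKS would use
# NTT-friendly primes; primality is irrelevant for this integer toy.)
q0 = (1 << 28) - 57          # ~28-bit modulus (illustrative)
q1 = (1 << 28) - 45          # ~28-bit modulus (illustrative)
q = q0 * q1                  # composite rescaling modulus q_l, ~56 bits

def rescale(ct: int, q: int) -> int:
    """The CKKS rescaling map ct -> round(ct / q)."""
    return (ct + q // 2) // q

# Encode two messages at scale Delta, multiply (scale becomes Delta^2),
# then rescale by the composite q to bring the scale back to ~Delta.
m0, m1 = 3.25, 1.5
ct0 = round(m0 * delta)
ct1 = round(m1 * delta)
prod = ct0 * ct1             # carries m0*m1 at scale Delta^2
ct = rescale(prod, q)        # carries m0*m1 at scale Delta^2 / q

# Decode with the exact post-rescale scale Delta^2 / q.
recovered = ct * q / delta**2
print(recovered)             # close to m0 * m1 = 4.875
```

Single scaling would instead need one $\log\Delta$-bit modulus here, i.e. a single 56-bit $q_\ell$, which exceeds the 32-bit word size; the composite variant keeps every modulus the hardware touches below $2^W$ while the rescaling map itself is unchanged.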

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. Minor revision. WAHC 2023 – 11th Workshop on Encrypted Computing & Applied Homomorphic Cryptography
Keywords
Fully Homomorphic Encryption, High-precision CKKS, Fixed Word-size Architecture, Composite Scaling
Contact author(s)
rashmi23 @ bu edu
gajh @ snu ac kr
flavio @ intel com
rosario cammarota @ intel com
jhcheon @ snu ac kr
fillipe souza @ intel com
huijing gong @ intel com
kaiser351 @ snu ac kr
duhyeong kim @ intel com
jongmin kim @ snu ac kr
hubert de lassus @ intel com
jhyunp @ snu ac kr
michael steiner @ intel com
wen wang @ intel com
History
2023-09-25: last of 3 revisions
2023-09-24: received
Short URL
https://ia.cr/2023/1462
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2023/1462,
      author = {Rashmi Agrawal and Jung Ho Ahn and Flavio Bergamaschi and Ro Cammarota and Jung Hee Cheon and Fillipe D. M. de Souza and Huijing Gong and Minsik Kang and Duhyeong Kim and Jongmin Kim and Hubert de Lassus and Jai Hyun Park and Michael Steiner and Wen Wang},
      title = {High-precision RNS-CKKS on fixed but smaller word-size architectures: theory and application},
      howpublished = {Cryptology ePrint Archive, Paper 2023/1462},
      year = {2023},
      note = {\url{https://eprint.iacr.org/2023/1462}},
      url = {https://eprint.iacr.org/2023/1462}
}