Cryptology ePrint Archive: Report 2005/428
Loud and Clear: Human-Verifiable Authentication Based on Audio
Michael T. Goodrich, Michael Sirivianos, John Solis, Gene Tsudik and Ersin Uzun
Abstract: Secure pairing of electronic devices that lack
any prior association is a challenging problem that has been
considered in many contexts and in various flavors.
In this paper, we investigate an alternative and complementary approach--the use of the audio channel for human-assisted
authentication of previously un-associated devices.
We develop and evaluate a system we call Loud-and-Clear
(L&C) that places very little demand on
the human user. L&C involves the use of a text-to-speech (TTS)
engine for vocalizing a robust-sounding and syntactically-correct
(English-like) sentence derived from the hash of a device's public key. By coupling vocalization on one device with the display of the same information on another device, we demonstrate that L&C is suitable for secure device pairing (e.g., key exchange) and similar tasks. We also describe several common use cases, provide some performance data for our prototype implementation and discuss the security properties of L&C.
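As a rough illustration of the idea (not the paper's actual MadLib-style grammar, and using hypothetical word lists), the core derivation can be sketched as hashing the public key and mapping successive digest bytes to words, so that two devices holding the same key independently produce the same speakable phrase:

```python
import hashlib

# Hypothetical word lists; the real L&C system uses a MadLib-style grammar
# to produce robust-sounding, syntactically correct English-like sentences.
ADJECTIVES = ["brave", "calm", "dusty", "eager", "fuzzy", "grim", "happy", "icy"]
NOUNS = ["falcon", "garden", "hammer", "island", "jacket", "kettle", "lantern", "mirror"]
VERBS = ["builds", "carries", "draws", "finds", "guards", "holds", "lifts", "moves"]

def key_to_phrase(public_key: bytes, words: int = 4) -> str:
    """Derive a speakable phrase from the hash of a public key.

    Both devices compute this independently; the user checks that the
    phrase vocalized by one device matches the phrase displayed (or
    vocalized) by the other.
    """
    digest = hashlib.sha1(public_key).digest()
    lists = [ADJECTIVES, NOUNS, VERBS, NOUNS]
    parts = []
    for i in range(words):
        wordlist = lists[i % len(lists)]
        # Each digest byte selects one word from the corresponding list.
        parts.append(wordlist[digest[i] % len(wordlist)])
    return " ".join(parts)

print(key_to_phrase(b"example-public-key-bytes"))
```

A mismatch between the spoken and displayed phrases signals a man-in-the-middle substituting a different key; matching phrases indicate matching keys, up to hash collisions.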
Category / Keywords: Human-assisted authentication, Man-in-the-middle attack, Audio, Text-to-speech, Public key, Key agreement, Personal device, Wireless networks.
Publication Info: ICDCS 2006
Date: received 23 Nov 2005, last revised 28 Jun 2006
Contact author: msirivia at uci edu
Note: Minor revisions
Version: 20060628:165555