Paper 2005/428

Loud and Clear: Human-Verifiable Authentication Based on Audio

Michael T. Goodrich, Michael Sirivianos, John Solis, Gene Tsudik, and Ersin Uzun


Secure pairing of electronic devices that lack any previous association is a challenging problem which has been considered in many contexts and in various flavors. In this paper, we investigate an alternative and complementary approach: the use of the audio channel for human-assisted authentication of previously un-associated devices. We develop and evaluate a system we call Loud-and-Clear (L&C) which places very little demand on the human user. L&C involves the use of a text-to-speech (TTS) engine for vocalizing a robust-sounding and syntactically correct (English-like) sentence derived from the hash of a device's public key. By coupling vocalization on one device with the display of the same information on another device, we demonstrate that L&C is suitable for secure device pairing (e.g., key exchange) and similar tasks. We also describe several common use cases, provide performance data for our prototype implementation, and discuss the security properties of L&C.
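The core idea can be sketched in a few lines: hash the device's public key, then use successive chunks of the digest to index into per-part-of-speech word lists, yielding a deterministic English-like sentence that a human can compare by ear against a displayed copy. The word lists and function below are illustrative assumptions; the actual L&C prototype uses a MadLib-style generator with a much larger vocabulary and a TTS engine for vocalization.

```python
import hashlib

# Hypothetical miniature word lists; L&C itself draws from far
# larger per-slot vocabularies to get a usable sentence space.
SUBJECTS = ["alice", "bob", "carol", "dave"]
VERBS = ["builds", "paints", "greets", "rides"]
ADJECTIVES = ["tiny", "loud", "green", "happy"]
NOUNS = ["boats", "robots", "kites", "drums"]

def hash_to_sentence(public_key: bytes) -> str:
    """Map the hash of a public key to an English-like sentence.

    Each digest byte selects one word for its slot, so two devices
    holding the same key material always produce the same sentence,
    while mismatched keys almost surely produce different ones.
    """
    digest = hashlib.sha256(public_key).digest()
    slots = [SUBJECTS, VERBS, ADJECTIVES, NOUNS]
    words = [wordlist[digest[i] % len(wordlist)]
             for i, wordlist in enumerate(slots)]
    return " ".join(words)

# One device vocalizes the sentence via TTS; the other displays it.
# The user accepts the pairing only if the two sentences match.
sentence = hash_to_sentence(b"example public key bytes")
print(sentence)
```

With only four words of four choices each, this toy version covers just 256 sentences; a real deployment needs enough entropy in the sentence space to resist a man-in-the-middle attacker searching for a colliding key.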

Note: Minor revisions

Publication info
Published elsewhere. ICDCS 2006
Human-assisted authentication, Man-in-the-middle attack, Audio, Text-to-speech, Public key, Key agreement, Personal device, Wireless networks.
Contact author(s)
msirivia @ uci edu
2006-06-28: last of 10 revisions
2005-11-23: received
Creative Commons Attribution


@misc{cryptoeprint:2005/428,
      author = {Michael T. Goodrich and Michael Sirivianos and John Solis and Gene Tsudik and Ersin Uzun},
      title = {Loud and Clear: Human-Verifiable Authentication Based on Audio},
      howpublished = {Cryptology ePrint Archive, Paper 2005/428},
      year = {2005},
      note = {\url{}},
      url = {}
}