Let's Read: Designing a smart display application to support CODAs when learning spoken language

  • Katie Rodeghiero, Chapman University
  • Yingying Yuki Chen, Chapman University
  • Annika M. Hettmann, Chapman University
  • Franceli L. Cibrian, Chapman University

Abstract

Hearing children of Deaf adults (CODAs) face many challenges, including difficulty learning spoken language, social judgment, and greater responsibilities at home. In this paper, we present a proposal for a smart display application called Let's Read that aims to support CODAs when learning spoken language. We conducted a qualitative analysis of online community content in English to develop the first version of the prototype. We then conducted a heuristic evaluation to improve the proposed prototype. As future work, we plan to use this prototype in participatory design sessions with Deaf adults and CODAs to evaluate the potential of Let's Read to support spoken language learning in mixed-ability family dynamics.

Published
Nov 30, 2021
How to Cite
RODEGHIERO, Katie et al. Let's Read: Designing a smart display application to support CODAs when learning spoken language. Avances en Interacción Humano-Computadora, [S.l.], n. 1, p. 18-21, Nov. 2021. ISSN 2594-2352. Available at: <https://aihc.amexihc.org/index.php/aihc/article/view/80>. Date accessed: 09 May 2024. DOI: http://dx.doi.org/10.47756/aihc.y6i1.80.
Section
Research Papers