Let's Read: Designing a smart display application to support CODAS when learning spoken language

Authors

  • Katie Rodeghiero, Chapman University
  • Yingying Yuki Chen, Chapman University
  • Annika M. Hettmann, Chapman University
  • Franceli L. Cibrian, Chapman University

DOI:

https://doi.org/10.47756/aihc.y6i1.80

Keywords:

CODAs, Deaf, Smart display, Reading, Mixed-ability

Abstract

Hearing children of Deaf adults (CODAs) face many challenges, including difficulty learning spoken languages, social judgment, and greater responsibilities at home. In this paper, we present a proposal for a smart display application called Let's Read that aims to support CODAs in learning spoken language. We conducted a qualitative analysis of English-language online community content to develop the first version of the prototype. We then conducted a heuristic evaluation to improve the proposed prototype. As future work, we plan to use this prototype in participatory design sessions with Deaf adults and CODAs to evaluate the potential of Let's Read to support spoken language in mixed-ability family dynamics.

Downloads

Download data is not yet available.

Published

2021-11-30

How to Cite

[1]
Rodeghiero, K. et al. 2021. Let's Read: Designing a smart display application to support CODAS when learning spoken language. Avances en Interacción Humano-Computadora. 1 (Nov. 2021), 18–21. DOI: https://doi.org/10.47756/aihc.y6i1.80.

Issue

Section

Research Articles
