Injecting life into toys

Songchun Fan, Hyojeong Shin, Romit Roy Choudhury

Research output: Contribution to conference › Paper

Abstract

This paper envisions a future in which smartphones can be inserted into toys, such as a teddy bear, to make them interactive for children. Our idea is to leverage the smartphones' sensors to sense children's gestures, cues, and reactions, and to interact back through acoustics, vibration, and, when possible, the smartphone display. This paper is an attempt to explore this vision, ponder applications, and take the first steps towards addressing some of the challenges. Our limited measurements from actual kids indicate that each child is quite unique in his/her "gesture vocabulary", motivating the need for personalized models. To learn these models, we employ signal processing-based approaches that first identify the presence of a gesture in a phone's sensor stream, and then learn its patterns for reliable classification. Our approach does not require manual supervision (i.e., the child is not asked to make any specific gesture); the phone detects and learns through observation and feedback. Our prototype, while far from a complete system, exhibits promise - we now believe that an unsupervised sensing approach can enable new kinds of child-toy interactions.
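The abstract's two-stage pipeline (first detect the presence of a gesture in the sensor stream, then learn its pattern for classification) can be pictured with a minimal sketch. This is purely illustrative and not the paper's actual method: the sampling rate, energy threshold, window length, and feature choice below are all assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's actual pipeline): detect candidate
# gestures in a 3-axis accelerometer stream with a moving-energy threshold,
# then reduce each segment to a fixed-length feature vector that a later
# clustering step (e.g., k-means) could group into recurring gestures.

def segment_gestures(accel, fs=50, win=0.5, thresh=1.5):
    """Return (start, end) sample indices of high-energy runs in `accel`."""
    mag = np.linalg.norm(accel, axis=1)       # magnitude of the 3-axis signal
    mag = mag - mag.mean()                    # crude gravity/offset removal
    n = int(win * fs)                         # smoothing window in samples
    energy = np.convolve(mag ** 2, np.ones(n) / n, mode="same")
    active = energy > thresh                  # assumed energy threshold
    edges = np.diff(active.astype(int))       # locate contiguous active runs
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        ends = np.r_[ends, len(active)]
    return list(zip(starts, ends))

def gesture_features(accel, segments, n_bins=8):
    """Resample each segment's magnitude profile to a fixed-length vector."""
    feats = []
    for s, e in segments:
        mag = np.linalg.norm(accel[s:e], axis=1)
        idx = np.linspace(0, len(mag) - 1, n_bins).astype(int)
        feats.append(mag[idx])
    return np.array(feats)
```

Because segmentation and featurization need no labels, a clustering step over the feature vectors could then discover each child's personal gesture vocabulary without asking the child to perform anything specific, in the spirit of the unsupervised approach the abstract describes.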

Original language: English (US)
DOI: 10.1145/2565585.2565606
State: Published - Jan 1 2014
Event: 15th Workshop on Mobile Computing Systems and Applications, HotMobile 2014 - Santa Barbara, CA, United States
Duration: Feb 26 2014 - Feb 27 2014

Other

Other: 15th Workshop on Mobile Computing Systems and Applications, HotMobile 2014
Country: United States
City: Santa Barbara, CA
Period: 2/26/14 - 2/27/14

Fingerprint

Smartphones
Sensors
Vibrations (mechanical)
Signal processing
Acoustics
Display devices
Feedback

ASJC Scopus subject areas

  • Computer Science Applications
  • Software

Cite this

Fan, S., Shin, H., & Choudhury, R. R. (2014). Injecting life into toys. Paper presented at 15th Workshop on Mobile Computing Systems and Applications, HotMobile 2014, Santa Barbara, CA, United States. https://doi.org/10.1145/2565585.2565606

@conference{9c580d39afa4498eb8b33fea2d0f17a4,
title = "Injecting life into toys",
abstract = "This paper envisions a future in which smartphones can be inserted into toys, such as a teddy bear, to make them interactive for children. Our idea is to leverage the smartphones' sensors to sense children's gestures, cues, and reactions, and to interact back through acoustics, vibration, and, when possible, the smartphone display. This paper is an attempt to explore this vision, ponder applications, and take the first steps towards addressing some of the challenges. Our limited measurements from actual kids indicate that each child is quite unique in his/her {"}gesture vocabulary{"}, motivating the need for personalized models. To learn these models, we employ signal processing-based approaches that first identify the presence of a gesture in a phone's sensor stream, and then learn its patterns for reliable classification. Our approach does not require manual supervision (i.e., the child is not asked to make any specific gesture); the phone detects and learns through observation and feedback. Our prototype, while far from a complete system, exhibits promise - we now believe that an unsupervised sensing approach can enable new kinds of child-toy interactions.",
author = "Songchun Fan and Hyojeong Shin and Choudhury, {Romit Roy}",
year = "2014",
month = "1",
day = "1",
doi = "10.1145/2565585.2565606",
language = "English (US)",
note = "15th Workshop on Mobile Computing Systems and Applications, HotMobile 2014 ; Conference date: 26-02-2014 Through 27-02-2014",

}

TY - CONF

T1 - Injecting life into toys

AU - Fan, Songchun

AU - Shin, Hyojeong

AU - Choudhury, Romit Roy

PY - 2014/1/1

Y1 - 2014/1/1

AB - This paper envisions a future in which smartphones can be inserted into toys, such as a teddy bear, to make them interactive for children. Our idea is to leverage the smartphones' sensors to sense children's gestures, cues, and reactions, and to interact back through acoustics, vibration, and, when possible, the smartphone display. This paper is an attempt to explore this vision, ponder applications, and take the first steps towards addressing some of the challenges. Our limited measurements from actual kids indicate that each child is quite unique in his/her "gesture vocabulary", motivating the need for personalized models. To learn these models, we employ signal processing-based approaches that first identify the presence of a gesture in a phone's sensor stream, and then learn its patterns for reliable classification. Our approach does not require manual supervision (i.e., the child is not asked to make any specific gesture); the phone detects and learns through observation and feedback. Our prototype, while far from a complete system, exhibits promise - we now believe that an unsupervised sensing approach can enable new kinds of child-toy interactions.

UR - http://www.scopus.com/inward/record.url?scp=84899824783&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84899824783&partnerID=8YFLogxK

U2 - 10.1145/2565585.2565606

DO - 10.1145/2565585.2565606

M3 - Paper

AN - SCOPUS:84899824783

ER -