TY - GEN
T1 - Generating Realistic Sound with Prosthetic Hand
T2 - 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024
AU - Jeong, Taemoon
AU - Yamsani, Sankalp
AU - Hong, Jooyoung
AU - Park, Kyungseo
AU - Kim, Joohyung
AU - Choi, Sungjoon
N1 - This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University), No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration, No. 2022-0-00612, Geometric and Physical Commonsense Reasoning based Behavior Intelligence for Embodied AI, and No. 2022-0-00480, Development of Training and Inference Methods for Goal-Oriented Artificial Intelligence Agents).
PY - 2024
AB - In this study, we tackle the complex task of enabling prosthetic hands to accurately reproduce sounds, a crucial capability for distinguishing between materials through auditory feedback. Sound identification, such as discerning a tap on drywall from one on a brick wall, significantly enhances the functionality and user experience of prosthetic devices. However, achieving this level of auditory feedback in prosthetic hands poses considerable challenges. We utilize reinforcement learning (RL) techniques to train prosthetic hands to emulate human-like sound characteristics, focusing on key auditory cues such as amplitude and onset timing. Our approach integrates a detailed analysis of these sound attributes to direct the prosthetic hand's movements so that the generated sounds mimic those of natural human actions. We developed a tailored reward function incorporating amplitude, onset strength, and timing criteria to ensure the prosthetic hand's movements align closely with the intended human-like sound output.
UR - http://www.scopus.com/inward/record.url?scp=85214516836&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85214516836&partnerID=8YFLogxK
DO - 10.1109/EMBC53108.2024.10782257
M3 - Conference contribution
C2 - 40039010
AN - SCOPUS:85214516836
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
BT - 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 15 July 2024 through 19 July 2024
ER -