Transfer learning in sign language

Ali Farhadi, David Alexander Forsyth, Ryan White

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We build word models for American Sign Language (ASL) that transfer between different signers and different aspects. This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human data to spot a large number of words in human data. Transfer learning is possible because we represent blocks of video with novel intermediate discriminative features based on splits of the data. By constructing the same splits in avatar and human data and clustering appropriately, our features are both discriminative and semantically similar: across signers, similar features imply similar words. We demonstrate transfer learning in two scenarios: from an avatar to a frontally viewed human signer, and from an avatar to a human signer in a 3/4 view.
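The abstract packs the whole mechanism into a few sentences, so a small illustration may help. Below is a minimal, hypothetical sketch (Python with numpy and scikit-learn), not the authors' implementation: random splits of the word vocabulary are fixed once; a discriminative classifier per split is trained separately on each signer's own data; and the vector of split responses forms a shared feature space in which a word model fit on plentiful labelled avatar data can be applied to a human signer known only from a small labelled set. The synthetic block descriptors, the logistic-regression classifiers, and all sizes below are assumptions standing in for the paper's video features.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_words, n_splits, dim = 10, 32, 40

# Synthetic stand-ins for per-block video descriptors: one appearance
# "prototype" per word, perturbed per block. Avatar and human prototypes
# differ, mimicking the change of signer and viewing aspect.
def make_blocks(protos, n_per_word, noise=0.3):
    X = np.vstack([p + noise * rng.normal(size=(n_per_word, dim)) for p in protos])
    y = np.repeat(np.arange(len(protos)), n_per_word)
    return X, y

protos_avatar = rng.normal(size=(n_words, dim))
protos_human = protos_avatar + 0.8 * rng.normal(size=(n_words, dim))

X_av, y_av = make_blocks(protos_avatar, 50)          # large labelled avatar set
X_hu_lab, y_hu_lab = make_blocks(protos_human, 5)    # small labelled human set
X_hu_test, y_hu_test = make_blocks(protos_human, 20)

# Fix random splits of the word vocabulary once; the SAME splits are used on
# both sides, so split-classifier responses are semantically aligned.
splits = []
while len(splits) < n_splits:
    s = rng.integers(0, 2, size=n_words).astype(bool)
    if 0 < s.sum() < n_words:  # keep only splits with words on both sides
        splits.append(s)

def split_features(X_train, y_train, X_eval):
    # One discriminative classifier per split, trained on a signer's own
    # data; the vector of split responses is the transferable feature.
    cols = []
    for s in splits:
        clf = LogisticRegression(max_iter=1000).fit(X_train, s[y_train])
        cols.append(clf.predict_proba(X_eval)[:, 1])
    return np.column_stack(cols)

# Word models are trained purely in avatar split-feature space...
word_model = LogisticRegression(max_iter=1000).fit(
    split_features(X_av, y_av, X_av), y_av)

# ...and applied to human blocks rendered into the same space via split
# classifiers trained on the small labelled human set.
F_hu = split_features(X_hu_lab, y_hu_lab, X_hu_test)
print("word accuracy on human blocks:",
      (word_model.predict(F_hu) == y_hu_test).mean())

Note that in the paper the shared splits also drive a clustering step that aligns avatar and human responses; the sketch above skips that step and relies on the shared splits alone.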

Original language: English (US)
Title of host publication: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
DOIs: 10.1109/CVPR.2007.383346
State: Published - Oct 11 2007
Event: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07 - Minneapolis, MN, United States
Duration: Jun 17 2007 – Jun 22 2007

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (Print): 1063-6919

Other

Other: 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
Country: United States
City: Minneapolis, MN
Period: 6/17/07 – 6/22/07

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

Cite this

APA

Farhadi, A., Forsyth, D. A., & White, R. (2007). Transfer learning in sign language. In 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07 [4270344] (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition). https://doi.org/10.1109/CVPR.2007.383346

Standard

Transfer learning in sign language. / Farhadi, Ali; Forsyth, David Alexander; White, Ryan.

2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07. 2007. 4270344 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Harvard

Farhadi, A, Forsyth, DA & White, R 2007, Transfer learning in sign language. in 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07., 4270344, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07, Minneapolis, MN, United States, 6/17/07. https://doi.org/10.1109/CVPR.2007.383346

Vancouver

Farhadi A, Forsyth DA, White R. Transfer learning in sign language. In 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07. 2007. 4270344. (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition). https://doi.org/10.1109/CVPR.2007.383346

Author

Farhadi, Ali ; Forsyth, David Alexander ; White, Ryan. / Transfer learning in sign language. 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07. 2007. (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).
BibTeX

@inproceedings{f5212a7502374d538dd2878671d20e26,
title = "Transfer learning in sign language",
abstract = "We build word models for American Sign Language (ASL) that transfer between different signers and different aspects. This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human data to spot a large number of words in human data. Transfer learning is possible because we represent blocks of video with novel intermediate discriminative features based on splits of the data. By constructing the same splits in avatar and human data and clustering appropriately, our features are both discriminative and semantically similar: across signers similar features imply similar words. We demonstrate transfer learning in two scenarios: from avatar to a frontally viewed human signer and from an avatar to human signer in a 3/4 view.",
author = "Ali Farhadi and Forsyth, {David Alexander} and Ryan White",
year = "2007",
month = "10",
day = "11",
doi = "10.1109/CVPR.2007.383346",
language = "English (US)",
isbn = "1424411807",
series = "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition",
booktitle = "2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07",

}

RIS

TY  - GEN
T1  - Transfer learning in sign language
AU  - Farhadi, Ali
AU  - Forsyth, David Alexander
AU  - White, Ryan
PY  - 2007/10/11
Y1  - 2007/10/11
N2  - We build word models for American Sign Language (ASL) that transfer between different signers and different aspects. This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human data to spot a large number of words in human data. Transfer learning is possible because we represent blocks of video with novel intermediate discriminative features based on splits of the data. By constructing the same splits in avatar and human data and clustering appropriately, our features are both discriminative and semantically similar: across signers similar features imply similar words. We demonstrate transfer learning in two scenarios: from avatar to a frontally viewed human signer and from an avatar to human signer in a 3/4 view.
AB  - We build word models for American Sign Language (ASL) that transfer between different signers and different aspects. This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human data to spot a large number of words in human data. Transfer learning is possible because we represent blocks of video with novel intermediate discriminative features based on splits of the data. By constructing the same splits in avatar and human data and clustering appropriately, our features are both discriminative and semantically similar: across signers similar features imply similar words. We demonstrate transfer learning in two scenarios: from avatar to a frontally viewed human signer and from an avatar to human signer in a 3/4 view.
UR  - http://www.scopus.com/inward/record.url?scp=34948911163&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=34948911163&partnerID=8YFLogxK
U2  - 10.1109/CVPR.2007.383346
DO  - 10.1109/CVPR.2007.383346
M3  - Conference contribution
AN  - SCOPUS:34948911163
SN  - 1424411807
SN  - 9781424411801
T3  - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
BT  - 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'07
ER  -