Abstract
Among modellers of natural language comprehension, the suspicion that explicit semantic representations are inherently biased has led many to rely more heavily on the ability of networks to form their own internal semantic representations over the course of training. This concern over explicit semantics, however, betrays a lack of appreciation for how insidious biases can and cannot creep into models of comprehension. In fact, the trend of relying on networks to form their own internal semantic representations has done little to curtail one common form of insidious bias. Where models of natural language comprehension are concerned, the cause of inappropriate biases has everything to do with the manner in which regularities find their way into sentence/meaning pairs and little or nothing to do with the degree to which semantic information is made explicit. This is fortunate, as there may be drawbacks to relying too heavily on the ability of networks to form their own internal semantic representations.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 277-292 |
| Number of pages | 16 |
| Journal | Connection Science |
| Volume | 13 |
| Issue number | 3 |
| DOIs | |
| State | Published - Sep 2001 |
Keywords
- Comprehension
- Linguistics
- Microfeatures
- Natural language
- Semantics
ASJC Scopus subject areas
- Software
- Human-Computer Interaction
- Artificial Intelligence