### Abstract

Some learning techniques for classification tasks work indirectly, by first trying to fit a full probabilistic model to the observed data. Whether this is a good idea or not depends on the robustness with respect to deviations from the postulated model. We study this question experimentally in a restricted, yet non-trivial and interesting case: we consider a conditionally independent attribute (CIA) model which postulates a single binary-valued hidden variable z on which all other attributes (i.e., the target and the observables) depend. In this model, finding the most likely value of any one variable (given known values for the others) reduces to testing a linear function of the observed values. We learn CIA with two techniques: the standard EM algorithm, and a new algorithm we develop based on covariances. We compare these, in a controlled fashion, against an algorithm (a version of Winnow) that attempts to find a good linear classifier directly. Our conclusions help delimit the fragility of using the CIA model for classification: once the data departs from this model, performance quickly degrades and drops below that of the directly learned linear classifier.
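The reduction mentioned in the abstract — that under the CIA model, finding the most likely value of the hidden variable given the observed binary attributes amounts to testing a linear function — can be sketched as follows. The conditional probabilities and prior used here are illustrative assumptions, not values from the paper.

```python
import math

# Minimal sketch: under the CIA model, binary attributes x_1..x_n are
# independent given a binary hidden variable z, so the posterior log-odds
# of z is a linear function of x. Parameters are illustrative:
# p1[i] = P(x_i = 1 | z = 1), p0[i] = P(x_i = 1 | z = 0), prior = P(z = 1).

def cia_weights(p1, p0, prior):
    """Return (w, b) with log P(z=1|x) - log P(z=0|x) = w . x + b."""
    w = [math.log(a / c) - math.log((1 - a) / (1 - c))
         for a, c in zip(p1, p0)]
    b = math.log(prior / (1 - prior)) + sum(
        math.log((1 - a) / (1 - c)) for a, c in zip(p1, p0))
    return w, b

def predict_z(x, w, b):
    """Most likely value of z given the observed binary vector x."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

w, b = cia_weights(p1=[0.9, 0.8, 0.7], p0=[0.2, 0.3, 0.4], prior=0.5)
print(predict_z([1, 1, 0], w, b))  # most attributes on -> z = 1 more likely
```

Once the weights are computed, classification is a single dot product and threshold — which is what makes the head-to-head comparison with a directly learned linear classifier such as Winnow natural.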

| Original language | English (US) |
|---|---|
| Title of host publication | Advances in Neural Information Processing Systems 10 - Proceedings of the 1997 Conference, NIPS 1997 |
| Publisher | Neural information processing systems foundation |
| Pages | 500-506 |
| Number of pages | 7 |
| ISBN (Print) | 0262100762, 9780262100762 |
| State | Published - Jan 1 1998 |
| Event | 11th Annual Conference on Neural Information Processing Systems, NIPS 1997 - Denver, CO, United States. Duration: Dec 1 1997 → Dec 6 1997 |

### Publication series

| Name | Advances in Neural Information Processing Systems |
|---|---|
| ISSN (Print) | 1049-5258 |

### Other

| Other | 11th Annual Conference on Neural Information Processing Systems, NIPS 1997 |
|---|---|
| Country | United States |
| City | Denver, CO |
| Period | 12/1/97 → 12/6/97 |

### ASJC Scopus subject areas

- Computer Networks and Communications
- Information Systems
- Signal Processing

## Fingerprint

Dive into the research topics of 'Linear concepts and hidden variables: An empirical study'. Together they form a unique fingerprint.

## Cite this

*Advances in Neural Information Processing Systems 10 - Proceedings of the 1997 Conference, NIPS 1997* (pp. 500-506). (Advances in Neural Information Processing Systems). Neural information processing systems foundation.