Citation

Miller, I. D., Dumay, N., Pitt, M., Lam, B., & Armstrong, B. (Under review). Context variability promotes generalization in reading aloud: Insight from a neural network simulation.

Abstract

How do neural network models of quasiregular domains, such as spelling-sound correspondences in English, learn to represent knowledge that varies in its consistency with the domain, and generalize this knowledge appropriately? Recent work proposed that a graded “warping” mechanism allows for the implicit representation of how a new word’s pronunciation should generalize when it is first learned. We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), were comprised of a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). We also “diluted” these pronunciations, such that we presented either one or multiple made-up words that shared the same rhyme, increasing context variability. We observed that dilution promoted generalization of novel pronunciations. These results point to the importance of context variability in modulating warping in quasiregular domains.
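As a rough illustration of the dilution manipulation described in the abstract (not the network architecture or training procedure used in the paper), the sketch below trains a toy feedforward orthography-to-phonology network on either one made-up word or several made-up words sharing the same rhyme, then probes how well that rhyme's pronunciation generalizes to novel onsets. All layer sizes, encodings, exemplar counts, and training settings here are hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the authors' implementation): a tiny
# feedforward network mapping orthographic input units to phonological
# output units, used only to illustrate "dilution" (one vs. several
# exemplars sharing a rhyme) and a generalization probe.
import numpy as np

rng = np.random.default_rng(0)
N_ORTH, N_HID, N_PHON = 20, 30, 20   # hypothetical layer sizes
SHARED = 10                          # hypothetical units devoted to the rhyme


def random_pattern(n, p=0.3):
    """Random binary pattern standing in for a word's orthography/phonology."""
    return (rng.random(n) < p).astype(float)


def make_items(n_exemplars):
    """Build made-up words that all share one rhyme (orthography and phonology)."""
    rime_orth, rime_phon = random_pattern(SHARED), random_pattern(SHARED)
    items = []
    for _ in range(n_exemplars):
        onset_orth = random_pattern(N_ORTH - SHARED)
        onset_phon = random_pattern(N_PHON - SHARED)
        items.append((np.concatenate([onset_orth, rime_orth]),
                      np.concatenate([onset_phon, rime_phon])))
    return items, rime_orth, rime_phon


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def train(items, epochs=2000, lr=0.5):
    """Plain backpropagation of squared error through two weight layers."""
    W1 = rng.normal(0, 0.1, (N_ORTH, N_HID))
    W2 = rng.normal(0, 0.1, (N_HID, N_PHON))
    for _ in range(epochs):
        for x, t in items:
            h = sigmoid(x @ W1)
            y = sigmoid(h @ W2)
            dy = (y - t) * y * (1 - y)
            dh = (dy @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, dy)
            W1 -= lr * np.outer(x, dh)
    return W1, W2


def rime_generalization(W1, W2, rime_orth, rime_phon, n_probes=50):
    """Pair novel onsets with the trained rhyme and score the rhyme's output units."""
    correct = 0
    for _ in range(n_probes):
        x = np.concatenate([random_pattern(N_ORTH - SHARED), rime_orth])
        y = sigmoid(sigmoid(x @ W1) @ W2)
        correct += np.all((y[-SHARED:] > 0.5) == (rime_phon > 0.5))
    return correct / n_probes


for n_exemplars in (1, 4):   # undiluted vs. diluted (hypothetical counts)
    items, rime_orth, rime_phon = make_items(n_exemplars)
    W1, W2 = train(items)
    print(f"{n_exemplars} exemplar(s): "
          f"generalization = {rime_generalization(W1, W2, rime_orth, rime_phon):.2f}")
```

In this toy setup, presenting the rhyme across several different onsets gives the network more evidence that the rhyme's pronunciation is independent of its onset context, which is the sense in which increased context variability is expected to support generalization.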

Accuracy Results

[Screenshot: accuracy results figure]