Title: How we learn variation, optionality, and probability
Comment: Superseded by ch. 15 of Functional Phonology, available via http://www.fon.hum.uva.nl/paul/
Abstract: Variation is controlled by the grammar, though indirectly: it follows automatically from the robustness requirement of learning.
If every constraint in an Optimality-Theoretic grammar has a ranking value along a continuous scale, and the disharmony of a constraint at evaluation time is randomly distributed about this ranking value, then optionality in the choice of the winning candidate follows automatically whenever the difference between the ranking values of the relevant constraints is finite. The degree of optionality is a decreasing function of this ranking difference.
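The effect of noisy evaluation can be sketched with a minimal simulation. The Gaussian noise distribution, its standard deviation, and the particular ranking values below are illustrative assumptions, not figures from the paper; the point is only that the probability of one constraint outranking the other at evaluation time decreases smoothly as the ranking difference grows.

```python
import random

def eval_time_disharmony(ranking_value, noise_sd=2.0):
    """Disharmony = ranking value plus evaluation noise.
    Gaussian noise is an assumption made for this sketch."""
    return random.gauss(ranking_value, noise_sd)

def p_c1_outranks_c2(rank_c1, rank_c2, noise_sd=2.0, trials=100_000):
    """Estimate how often constraint C1 outranks C2 at evaluation time,
    i.e. how often the candidate favoured by C1 wins."""
    wins = sum(
        eval_time_disharmony(rank_c1, noise_sd)
        > eval_time_disharmony(rank_c2, noise_sd)
        for _ in range(trials)
    )
    return wins / trials

# Equal rankings give maximal optionality (about 50/50); a larger
# ranking difference gives more nearly categorical behaviour.
for diff in (0.0, 2.0, 8.0):
    print(diff, round(p_c1_outranks_c2(100.0, 100.0 - diff), 3))
```

With equal ranking values the two outcomes are equally likely, while a ranking difference several times the noise standard deviation makes the lower-ranked constraint's candidate vanishingly rare, matching the claim that the degree of optionality decreases with the ranking difference.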
In the production grammar, the symmetrized Minimal Gradual Learning Algorithm will automatically cause the learner to copy the degrees of optionality from the language environment.
In the perception grammar, even the slightest degree of randomness in constraint evaluation will automatically cause the learner to become a probability-matching listener, whose categorization distributions match the production distributions of the language environment.
Evidence suggests that natural learners follow a symmetric demotion-and-promotion strategy, not a demotion-only strategy.