ROA:675
Title:Probabilistic Learning Algorithms and Optimality Theory
Authors:Frank Keller, Ash Asudeh
Comment:In Linguistic Inquiry 33:2, 225-244, 2002.
Length:21 pages
Abstract: This paper provides a critical assessment of the Gradual Learning Algorithm (GLA) for probabilistic optimality-theoretic grammars proposed by Boersma and Hayes (2001). After a short introduction to the problem of grammar learning in OT, we discuss the limitations of the standard solution to this problem (the Constraint Demotion Algorithm of Tesar and Smolensky (1998)) and outline how the GLA attempts to overcome these limitations. We point out a number of serious shortcomings of the GLA approach: (a) A methodological problem is that the GLA has not been tested on unseen data, which is standard practice in research on computational language learning. (b) We provide counterexamples, i.e., data sets that the GLA is not able to learn. Examples of this type actually occur in experimental data that the GLA should be able to model. This casts serious doubt on the correctness and convergence of the GLA. (c) Essential algorithmic properties of the GLA (correctness and convergence) have not been proven formally. This makes it very hard to assess the validity of the algorithm. (d) We argue that by modeling frequency distributions in the grammar, the GLA conflates the notions of competence and performance. This leads to serious conceptual problems, as OT crucially relies on the competence/performance distinction.
Type:Paper/tech report
Area/Keywords:Syntax, Phonology, Computation, Learnability
Article:Version 1
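
For reference, below is a minimal sketch of the GLA update discussed in the abstract, as it is usually described following Boersma and Hayes (2001): each constraint carries a real-valued ranking value, evaluation adds Gaussian noise to every value before ranking, and whenever the learner's output differs from the observed form, constraints favoring the erroneous output are demoted and constraints favoring the observed form are promoted by a small plasticity. The tableau, constraint names, and parameter values in the sketch are invented for illustration only; they are not taken from the paper under review or from Boersma and Hayes's data.

import random

# Hypothetical toy tableau: candidate -> {constraint: number of violations}.
TABLEAU = {
    "cand-A": {"C1": 0, "C2": 1},
    "cand-B": {"C1": 1, "C2": 0},
}

ranking = {"C1": 100.0, "C2": 100.0}   # real-valued ranking values
NOISE_SD = 2.0                          # evaluation noise (standard deviation)
PLASTICITY = 0.1                        # size of each ranking adjustment


def evaluate(ranking):
    """Stochastic OT evaluation: add Gaussian noise to each ranking value,
    order the constraints by the noisy values, and pick the candidate with
    the lexicographically smallest violation profile in that order."""
    noisy = {c: r + random.gauss(0.0, NOISE_SD) for c, r in ranking.items()}
    order = sorted(noisy, key=noisy.get, reverse=True)
    return min(TABLEAU, key=lambda cand: [TABLEAU[cand][c] for c in order])


def gla_step(ranking, observed):
    """Error-driven GLA update on a single observed learning datum."""
    learner = evaluate(ranking)
    if learner == observed:
        return                                    # no error, no adjustment
    for c in ranking:
        if TABLEAU[learner][c] < TABLEAU[observed][c]:
            ranking[c] -= PLASTICITY              # demote: prefers the wrong winner
        elif TABLEAU[observed][c] < TABLEAU[learner][c]:
            ranking[c] += PLASTICITY              # promote: prefers the observed form


# Train on a 70/30 mixture of the two variants; the ranking values drift
# apart until the grammar roughly reproduces the input frequencies.
for _ in range(20000):
    datum = "cand-A" if random.random() < 0.7 else "cand-B"
    gla_step(ranking, datum)
print(ranking)

The error-driven promotion/demotion step above is the part of the algorithm whose correctness and convergence, as point (c) of the abstract notes, have not been formally established.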