Title: The Benefits of Errors: Learning an OT Grammar with a Structured Candidate Set
Comment: Published in: Proceedings of the Workshop on Cognitive Aspects of Computational Language Acquisition, pages 81-88, Prague, Czech Republic, June 2007. (Copyright: Association for Computational Linguistics.)
Abstract: We compare three recent proposals that add a topology to OT: McCarthy's Persistent OT, Smolensky's ICS and Biro's SA-OT. (In the course of the comparison, the idea of simulated annealing is also presented in a simple way.) To test their learnability, constraint rankings are learnt from the output of SA-OT. The errors in this output are more than mere noise: they correspond to performance errors, and they follow from the topology by being local optima. The learner must therefore reconstruct her competence with access only to the teacher's performance, errors included. In a pilot experiment with a toy grammar, we employ Recursive Constraint Demotion (RCD) followed by the Gradual Learning Algorithm (GLA).
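Since the abstract highlights that simulated annealing is "presented in a simple way," a minimal generic sketch may help: a random walk over a neighborhood structure (the "topology") that accepts worse candidates with a temperature-dependent probability, and can therefore halt in a local optimum rather than the global one. The function names, cooling schedule, and toy energy function below are illustrative assumptions, not the paper's actual SA-OT algorithm.

```python
import math
import random

def simulated_annealing(initial, neighbors, energy,
                        t_start=10.0, t_end=0.01, alpha=0.9):
    """Generic simulated annealing sketch (not SA-OT itself).

    Walks the candidate set along the neighborhood structure given by
    `neighbors`, minimizing `energy`. Worse candidates are accepted with
    probability exp(-delta / T), which shrinks as T cools, so the walk
    can end up trapped in a local optimum -- the analogue of the
    "performance errors" the abstract describes.
    """
    current = initial
    t = t_start
    while t > t_end:
        candidate = random.choice(neighbors(current))
        delta = energy(candidate) - energy(current)
        # Always accept improvements; sometimes accept uphill moves.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
        t *= alpha  # geometric cooling schedule (an assumption)
    return current

# Toy usage: minimize |x - 5| on the integers 0..10, neighbors are +/-1.
result = simulated_annealing(
    0,
    lambda x: [max(0, x - 1), min(10, x + 1)],
    lambda x: abs(x - 5),
)
```

Because acceptance is stochastic and the schedule finite, the returned candidate is only guaranteed to lie in the search space, not to be the global optimum; that gap is exactly what makes the teacher's output error-prone and interesting for the learner.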
Also available in the ACL Anthology: http://acl.ldc.upenn.edu/W/W07/W07-0611.pdf
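The learning side of the pilot experiment rests on Recursive Constraint Demotion. As a rough illustration of the standard RCD idea (Tesar & Smolensky), a constraint may be installed in the current stratum when it prefers no loser over the corresponding winner; winner-loser pairs accounted for by installed constraints are then discarded and the procedure recurses. The data representation below (violation-count dictionaries) is an assumption for the sketch, not the paper's implementation, and the GLA refinement step is not shown.

```python
def rcd(constraints, pairs):
    """Sketch of Recursive Constraint Demotion.

    `pairs` is a list of (winner_violations, loser_violations) tuples,
    each a dict mapping constraint name -> number of violations.
    Returns a list of strata (sets of constraint names), highest first,
    or raises ValueError if the data admit no ranking.
    """
    strata = []
    remaining = list(pairs)
    active = set(constraints)
    while active:
        # Rankable now: constraints that prefer no loser, i.e. never
        # assign the winner more violations than the loser.
        stratum = {c for c in active
                   if all(w.get(c, 0) <= l.get(c, 0) for w, l in remaining)}
        if not stratum:
            raise ValueError("inconsistent data: no rankable constraint")
        strata.append(stratum)
        active -= stratum
        # Drop pairs where some installed constraint prefers the winner.
        remaining = [(w, l) for w, l in remaining
                     if not any(w.get(c, 0) < l.get(c, 0) for c in stratum)]
    return strata

# Toy usage: the winner violates C2, the loser violates C1,
# so C1 must dominate C2.
ranking = rcd(["C1", "C2"], [({"C2": 1}, {"C1": 1})])
```

On clean data RCD yields a stratified ranking outright; on performance-error-laden data like SA-OT's output, some pairs may be inconsistent, which is one motivation for following RCD with the error-tolerant, gradual GLA.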