Title:Towards a Robuster Interpretive Parsing: Learning from overt forms in Optimality Theory
Authors:Tamás Biró
Comment:Published (open access) in: Journal of Logic, Language and Information, volume 22, issue 2 (2013), pp. 139–172.
Length:34 pp.
Abstract:The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this problem. We generalize RIP and suggest replacing the winner candidate with a weighted mean violation of the potential winner candidates. A Boltzmann distribution is introduced on the winner set, and the distribution’s parameter T is gradually decreased. Finally, we show that GRIP, the Generalized Robust Interpretive Parsing algorithm, significantly improves the learning success rate in a model with standard constraints for metrical stress assignment.
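The Boltzmann weighting over the winner set described in the abstract can be sketched as follows. This is a minimal illustration only: the violation profiles, the toy energy function (total violations), and the cooling schedule for T are all invented here for demonstration, not taken from the paper.

```python
import math

def boltzmann_weights(energies, T):
    """Boltzmann distribution over candidates: weight proportional to exp(-E/T)."""
    m = min(energies)  # shift by the minimum for numerical stability
    exps = [math.exp(-(e - m) / T) for e in energies]
    z = sum(exps)
    return [x / z for x in exps]

def mean_violation_profile(profiles, weights):
    """Weighted mean violation of each constraint over the winner set."""
    n_constraints = len(profiles[0])
    return [sum(w * p[i] for w, p in zip(weights, profiles))
            for i in range(n_constraints)]

# Hypothetical winner set: each row gives one candidate's violations of
# three constraints (all numbers invented for illustration).
profiles = [[0, 2, 1], [1, 0, 3], [2, 2, 0]]
energies = [sum(p) for p in profiles]  # toy energy: total violation count

# As T is gradually decreased, the distribution shifts from near-uniform
# toward the least-offending candidate, mimicking the cooling of T.
for T in (10.0, 1.0, 0.1):
    weights = boltzmann_weights(energies, T)
    print(T, [round(w, 3) for w in weights],
          [round(v, 3) for v in mean_violation_profile(profiles, weights)])
```

At high T the weighted mean violation profile averages over all potential winners; at low T it approaches the profile of the single best candidate, which is the limit in which the generalized scheme reduces to picking one winner.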
Type:Paper/tech report
Area/Keywords:Formal analysis, mathematical analysis, computational linguistics / Boltzmann distribution, Generalized Robust Interpretive Parsing, learnability, learning algorithms, metrical stress, overt forms, Robust Interpretive Parsing, simulated annealing
Article:Version 1