Title: Learnability in Optimality Theory (short version)
Authors: Bruce Tesar, Paul Smolensky
Comment: 62 pages (double-spaced)
Abstract:
Bruce Tesar
The Center for Cognitive Science / Linguistics Department
Rutgers University

Paul Smolensky
Cognitive Science Department, Johns Hopkins University

A central claim of Optimality Theory is that grammars may differ only in how conflicts among universal well-formedness constraints are resolved: a grammar is precisely a means of resolving such conflicts via a strict priority ranking of constraints. It is shown here how this theory of Universal Grammar yields a highly general Constraint Demotion principle for grammar learning. The resulting learning procedure specifically exploits the grammatical structure of Optimality Theory, independent of the content of the substantive constraints defining any given grammatical module. The learning problem is decomposed, and formal results are presented for a central subproblem: deducing the constraint ranking particular to a target language, given structural descriptions of positive examples and knowledge of universal grammatical elements. Despite the potentially large size of the space of possible grammars, the structure imposed on this space by Optimality Theory allows efficient convergence to a correct grammar. Implications are discussed for learning from overt data only, learnability of partially-ranked constraint hierarchies, and the initial state. It is argued that Optimality Theory promotes a goal which, while generally desired, has been surprisingly elusive: confluence of the demands of more effective learnability and deeper linguistic explanation.
Type: Paper/tech report
Article: Version 1
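The Constraint Demotion idea described in the abstract — ranking constraints so that every constraint favoring a losing candidate is dominated by some constraint favoring the winner — can be illustrated with a small sketch. The code below is an illustrative implementation of the recursive variant (Recursive Constraint Demotion), not the authors' exact formulation; the input encoding (winner–loser pairs as dicts mapping each constraint to `'W'` or `'L'`) and all names are assumptions made for this example.

```python
def rcd(constraints, pairs):
    """Recursive Constraint Demotion (illustrative sketch).

    constraints: iterable of constraint names.
    pairs: list of winner-loser comparisons; each is a dict mapping a
           constraint to 'W' (the constraint prefers the winner) or
           'L' (it prefers the loser). Constraints with no preference
           for a given pair are simply absent from that dict.

    Returns a stratified ranking: a list of strata (highest first),
    each stratum a sorted list of constraint names.
    """
    remaining = set(constraints)
    pairs = [dict(p) for p in pairs]
    strata = []
    while remaining:
        # A constraint may be ranked now only if it prefers no loser
        # in any still-unexplained winner-loser pair.
        rankable = {c for c in remaining
                    if not any(p.get(c) == 'L' for p in pairs)}
        if not rankable:
            # No constraint can be placed: the data admit no
            # consistent strict ranking.
            raise ValueError("inconsistent data: no ranking exists")
        strata.append(sorted(rankable))
        # Pairs in which a newly ranked constraint prefers the winner
        # are now explained and drop out of further consideration.
        pairs = [p for p in pairs
                 if not any(p.get(c) == 'W' for c in rankable)]
        remaining -= rankable
    return strata
```

For example, the pairs `[{'A': 'W', 'B': 'L'}, {'B': 'W', 'C': 'L'}]` force the total ranking A >> B >> C: first A is placed (it prefers no loser), which explains the first pair and frees B to be placed, which in turn explains the second pair. Each pass places at least one constraint, so the procedure converges after at most as many passes as there are constraints, echoing the abstract's point that the structure Optimality Theory imposes on the grammar space permits efficient convergence.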