Title: Commitment-Based Learning of Hidden Linguistic Structures
Abstract: Learners must simultaneously learn a grammar and a lexicon from observed forms, yet some structures that the grammar and lexicon reference are unobservable in the acoustic signal. Moreover, these "hidden" structures interact: the grammar maps an underlying form to a particular interpretation. Learning one structure depends on learning the structures it interacts with, but if the learner commits to one structure, its interactions can be exploited to learn others. The Commitment-Based Learner (CBL) employs this strategy using error-driven learning (Gold 1967, Wexler and Culicover 1980) and inconsistency detection (Tesar 1997) to determine when to make commitments and what kinds of commitments to make.
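The core of inconsistency detection can be illustrated with Recursive Constraint Demotion over winner-loser pairs, the procedure underlying Tesar's (1997) method. The sketch below is a minimal toy, not the dissertation's implementation; the data encoding (one dict per pair, mapping each constraint to 'W', 'L', or 'e') is a standard comparative-tableau convention.

```python
# Toy Recursive Constraint Demotion (RCD) with inconsistency detection.
# Each winner-loser pair maps constraint -> 'W' (prefers the winner),
# 'L' (prefers the loser), or 'e' (no preference).

def rcd(constraints, pairs):
    """Return a stratified ranking consistent with all winner-loser pairs,
    or None if the pairs are inconsistent (no ranking exists)."""
    remaining, active = list(constraints), list(pairs)
    strata = []
    while remaining:
        # Constraints that prefer no loser can safely be ranked next.
        stratum = [c for c in remaining if all(p[c] != 'L' for p in active)]
        if not stratum:
            return None  # inconsistency: every unranked constraint prefers some loser
        strata.append(stratum)
        remaining = [c for c in remaining if c not in stratum]
        # Pairs whose winner is preferred by a ranked constraint drop out.
        active = [p for p in active if not any(p[c] == 'W' for c in stratum)]
    return strata
```

A commitment that yields pairs on which `rcd` returns None has been refuted, which is the signal the learner uses to discard a hypothesis.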
The CBL overcomes structural ambiguity by extending branches from a hypothesis and committing to a separate structural interpretation in each branch, as in the Inconsistency Detection Learner (Tesar 2004). It resolves lexical ambiguity by making piecewise commitments to feature values, following the Output-Driven Learner (Tesar, to appear). Each branch has its own lexicon whose values reflect the interactions of underlying forms with the branch's structural commitments.
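The branching strategy can be sketched as follows. All names and data structures here are hypothetical illustrations of the idea, not the CBL itself: each observed form may have several structural parses, the learner extends one branch per parse, and each branch carries its own lexicon of piecewise feature commitments.

```python
# Toy sketch of branching on structural ambiguity with per-branch lexicons.
# A branch is a (commitments, lexicon) pair; `parses` enumerates the candidate
# structural interpretations of a form together with the lexical feature
# values each interpretation forces.

def extend_branches(branches, observed_form, parses, consistent):
    """Extend each live branch with each candidate parse of the observed form,
    recording the forced feature values in that branch's own lexicon.
    Branches whose commitments become inconsistent are discarded."""
    new_branches = []
    for commitments, lexicon in branches:
        for parse, forced_features in parses(observed_form):
            new_lex = dict(lexicon)
            new_lex.update(forced_features)  # piecewise commitments to feature values
            new_commitments = commitments + [(observed_form, parse)]
            if consistent(new_commitments, new_lex):
                new_branches.append((new_commitments, new_lex))
    return new_branches
```

In the actual learner, the consistency check is done by inconsistency detection over the ranking information each branch accumulates; here it is an abstract predicate.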
In computer simulations, the CBL learns all 97 languages in a constructed typology whose linguistic system includes 370 million grammar and lexicon combinations. For each language learned, the CBL takes far fewer steps than needed to exhaustively search for a consistent and restrictive combination. Employing inconsistency detection with Multi-Recursive Constraint Demotion (Tesar 1997) makes the CBL highly efficient, and it compares favorably in success and efficiency to its major stochastic competitors (Apoussidou 2007, Jarosz 2006, to appear).
The dissertation also introduces a previously unrecognized global lexical ambiguity defined by paradigmatic equality. Paradigmatic equals (PEs) have different grammars, but because their morpheme behaviors are identical, their learning data are equivalent and foil learning by inconsistency detection. To distinguish PEs, the CBL finds consistent mappings derived from words whose unset features are set to mismatch their surface values. A mapping on which the current ranking commits an error contributes new ranking information, allowing the learner to derive the hypothesis consistent with the PE that includes the mapping. In the system investigated, there are always two such mappings, each corresponding to a different PE.
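The probing step for paradigmatic equals can be sketched as a small search. The sketch below uses hypothetical names and a toy feature encoding: for each unset feature of a word, the underlying value is set to mismatch the surface value, and the mappings on which the current ranking errs are the ones that carry new, PE-distinguishing ranking information.

```python
# Toy sketch of mismatch probing for paradigmatic equals (PEs).
# `unset_features` maps each unset feature of the word to its surface value
# (boolean for this toy); `grammar_errs` reports whether the current ranking
# commits an error on a given (underlying, surface) mapping.

def mismatch_probes(word, unset_features, grammar_errs):
    """Return the mismatch mappings on which the current ranking errs."""
    probes = []
    for feature, surface_value in unset_features.items():
        underlying = dict(word['underlying'])
        underlying[feature] = not surface_value  # mismatch the surface value
        mapping = (underlying, word['surface'])
        if grammar_errs(mapping):                # error => informative mapping
            probes.append(mapping)
    return probes
```

In the system the dissertation investigates, this search always yields exactly two such mappings, one per PE; the toy predicate below stands in for the ranking's error behavior.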