Title:Contenders and Learning
Authors:Jason Riggle
Comment:This paper appears in the Proceedings of WCCFL XXIII.
Abstract:In Optimality Theory (Prince and Smolensky, 1993), the effect
of running an input form through the phonological grammar
is typically illustrated with a tableau showing that a particular
output is more harmonic than a set of alternative candidates.
But could a finite set of losing candidates ever suffice
to prove that one candidate was optimal among the infinite
range of possible competitors? The answer is yes, if the
candidates under consideration are the set of contenders:
those that could win under some permutation of the constraints
in the grammar. This is so because, as Samek-Lodovici and
Prince (1999) have shown, if a candidate is more harmonic
than all non-harmonically-bounded competitors (the contenders),
it is more harmonic than all possible competitors.
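The harmonic-bounding test behind this result can be sketched as follows. This is a minimal Python illustration with made-up candidate names and violation profiles; note that it detects only simple pairwise bounding, whereas the paper's contender set also excludes collectively bounded candidates.

```python
from typing import Dict, List

def bounds(b: List[int], a: List[int]) -> bool:
    """True if violation profile b harmonically bounds a:
    b does no worse on every constraint and better on at least one."""
    return (all(x <= y for x, y in zip(b, a))
            and any(x < y for x, y in zip(b, a)))

def simple_contenders(candidates: Dict[str, List[int]]) -> List[str]:
    """Keep only candidates not bounded by any single competitor.
    (Collective bounding by sets of competitors is not checked here.)"""
    return [c for c, prof in candidates.items()
            if not any(bounds(other, prof)
                       for o, other in candidates.items() if o != c)]

# Hypothetical violation profiles over constraints [C1, C2, C3]
cands = {
    "pat":  [0, 2, 1],
    "pata": [1, 0, 2],
    "patk": [1, 2, 1],   # bounded by "pat": no better anywhere, worse on C1
}
print(simple_contenders(cands))  # ['pat', 'pata']
```

Any candidate filtered out here loses under every constraint ranking, so no finite tableau ever needs to mention it.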

I present here an algorithm that takes an input and a set
of constraints and generates the set of contenders. This
algorithm brings together Ellison’s (1994) finite-state
approach to optimization and the inconsistency-detecting
capability of Tesar’s (1995) recursive constraint demotion
algorithm, thereby making it possible to generate the entire
set of contenders in a single derivation without needing
to perform factorially many checks over the range of possible
constraint rankings.
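The inconsistency-detecting step can be sketched as a small recursive constraint demotion routine. This is an illustrative Python rendering, not the paper's implementation; the ERC encoding ('W'/'L'/'e' marks per constraint) and the example data are hypothetical.

```python
from typing import List, Optional

def rcd(ercs: List[List[str]], n: int) -> Optional[List[List[int]]]:
    """Recursive Constraint Demotion over ERCs, each a list of
    'W'/'L'/'e' marks for constraints 0..n-1.  Returns a stratified
    ranking (top stratum first) if one exists, else None."""
    remaining = list(ercs)
    unranked = set(range(n))
    strata = []
    while unranked:
        # Constraints that prefer no loser in the remaining ERCs
        stratum = [c for c in unranked
                   if all(e[c] != 'L' for e in remaining)]
        if not stratum:
            return None  # inconsistency: no constraint can be installed
        # Discard ERCs accounted for by a W in the new stratum
        remaining = [e for e in remaining
                     if not any(e[c] == 'W' for c in stratum)]
        unranked -= set(stratum)
        strata.append(sorted(stratum))
    return strata if not remaining else None

# Consistent: ranking C0 >> C1 explains both winner-loser pairs
print(rcd([['W', 'L'], ['W', 'e']], 2))   # [[0], [1]]
# Inconsistent: the two pairs demand opposite rankings
print(rcd([['W', 'L'], ['L', 'W']], 2))   # None
```

Returning None is the useful signal here: a loser whose ERCs are inconsistent with the winners' could never win under any ranking, and so can be discarded without enumerating rankings.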

The ability to generate the contenders has important ramifications
for learning in OT, for they are exactly the candidates
whose failure the learner must account for. Access to the
set of contenders allows significant improvements over error-driven
learning algorithms because the learner is able to
compare an observed datum with all of the alternative contenders
rather than just the single hypothetical candidate generated
by the learner’s current ranking. In this sense, access
to the contenders makes each datum maximally informative.
Moreover, because the set of contenders does not depend
on the learner’s current ranking hypothesis, the learner
cannot get stuck in a loop and return to an incorrect hypothesis
after it has been eliminated. With the contenders, learning
can proceed by simply taking the union of the ranking entailments
for observed data rather than by the piece-by-piece reranking
of constraints. This makes it unnecessary to call on an
independent mechanism to identify more ‘restrictive’ rankings
in cases where multiple rerankings would derive the observed
data.
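One way to picture how each datum becomes maximally informative: compare the observed winner with every losing contender and record, constraint by constraint, which side each constraint favors. This is a hypothetical Python sketch (candidate names and violation profiles are invented); the ranking information from many data can then be pooled by simple union of the resulting rows.

```python
from typing import Dict, List

def ercs_from_datum(winner: str,
                    contenders: Dict[str, List[int]]) -> List[List[str]]:
    """Compare an observed winner against every other contender,
    yielding one row per loser: 'W' where the constraint favors the
    winner (fewer violations), 'L' where it favors the loser,
    'e' where the two tie."""
    w = contenders[winner]
    rows = []
    for name, l in contenders.items():
        if name == winner:
            continue
        rows.append(['W' if wv < lv else 'L' if wv > lv else 'e'
                     for wv, lv in zip(w, l)])
    return rows

# Hypothetical contender profiles over constraints [C1, C2]
cands = {"pat": [0, 2], "pata": [1, 0]}
print(ercs_from_datum("pat", cands))   # [['W', 'L']]
```

Because the contender set is fixed by the input and the constraints, not by the learner's current ranking hypothesis, the same datum always yields the same rows, and accumulated rows are never retracted.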
Type:Paper/tech report
Article:Version 1