Abstract: Boersma's (1997, 1998) Gradual Learning Algorithm (GLA) performs a sequence of slight re-rankings of the constraint set, triggered by mistakes on the incoming stream of data. Data consist of underlying forms paired with the corresponding winner forms. At each iteration, the algorithm needs to complete the current data pair with a corresponding loser form. Tesar and Smolensky (1998) suggest that this current loser should be set equal to the winner predicted by the current ranking. This paper develops a new argument for Tesar and Smolensky's proposal, based on the GLA's factorizability. The underlying typology often encodes non-interacting phonological processes, so that it factorizes into smaller typologies that each encode a single process. The GLA should be able to take advantage of this factorizability, in the sense that a run of the algorithm on the original typology should factorize into independent runs on the factor typologies. Factorizability of the GLA is guaranteed provided the current loser is set equal to the current prediction, which lends new support to Tesar and Smolensky's proposal.
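To make the error-driven update concrete, here is a minimal, hypothetical sketch of one GLA step under Tesar and Smolensky's choice of loser. It is not the paper's code: constraint names, the violation tables, and the `plasticity` value are illustrative assumptions, and the sketch omits the evaluation noise of Boersma's stochastic OT, so evaluation is deterministic.

```python
def evaluate(ranking, violations, candidates):
    """Current prediction: order constraints by ranking value (highest first)
    and pick the candidate with the lexicographically least violation tuple."""
    order = sorted(ranking, key=ranking.get, reverse=True)
    return min(candidates, key=lambda c: tuple(violations[c][k] for k in order))

def gla_update(ranking, violations, winner, plasticity=0.1):
    """One error-driven step: the current loser is set equal to the grammar's
    own current prediction (Tesar and Smolensky's proposal). On a mismatch,
    promote winner-preferring constraints and demote loser-preferring ones."""
    loser = evaluate(ranking, violations, list(violations))
    if loser == winner:
        return ranking  # no error, hence no re-ranking
    for k in ranking:
        if violations[loser][k] > violations[winner][k]:
            ranking[k] += plasticity  # prefers the winner: promote
        elif violations[loser][k] < violations[winner][k]:
            ranking[k] -= plasticity  # prefers the loser: demote
    return ranking

# Toy example (hypothetical constraints NoCoda, Max; winner keeps its coda):
rank = {"NoCoda": 100.0, "Max": 99.0}
viols = {"ta": {"NoCoda": 0, "Max": 1}, "tat": {"NoCoda": 1, "Max": 0}}
gla_update(rank, viols, winner="tat")
# NoCoda is demoted slightly and Max promoted, nudging the ranking
# toward predicting the observed winner "tat".
```

A single step only nudges the ranking values; convergence, when it obtains, comes from many such slight re-rankings over the data stream.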