Abstract: Boersma & Hayes (2001; BH) report that the GLA succeeds at learning variation in three complex, realistic test cases. This success is surprising, since nothing built into the GLA guides it towards probability matching; indeed, it is not hard to construct artificial cases of variation on which the GLA fails. We thus submit that the proper interpretation of BH's successful simulation results is the following: patterns of variation actually attested in natural language have some very special structure which the GLA is crucially able to exploit. Thus interpreted, BH's simulation results raise an obvious question: what is the special structure displayed by BH's test cases (and allegedly shared by other cases of variation in natural language) that allows the GLA to succeed? In Magri & Storme (forthcoming), we address this question through detailed analyses of the behavior of the GLA (and variants thereof) on BH's three test cases. In this paper, we offer a preview of those analyses, focusing on BH's Ilokano metathesis test case.
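The probability-matching behavior at issue can be illustrated with a minimal sketch of the standard GLA update (stochastic evaluation with Gaussian noise, plus symmetric error-driven promotion and demotion, in the style of Boersma & Hayes 2001). Everything here is an illustrative assumption rather than material from BH's test cases: the two-constraint grammar, the function and parameter names (`gla_two_constraints`, `noise`, `plasticity`), and the target variation rate of 0.7.

```python
import math
import random

def gla_two_constraints(p_target=0.7, noise=2.0, plasticity=0.01,
                        iterations=100000, seed=1):
    """Sketch of GLA learning for one input with two candidate outputs,
    A and B, each preferred by exactly one constraint (C_A vs. C_B).
    The learning data show A at rate p_target.  NOTE: this toy grammar
    and its parameters are illustrative assumptions, not BH's setup."""
    rng = random.Random(seed)
    r_A, r_B = 100.0, 100.0  # ranking values of C_A and C_B

    for _ in range(iterations):
        # Draw a learning datum from the variable target pattern.
        datum = 'A' if rng.random() < p_target else 'B'
        # Stochastic evaluation: perturb each ranking value with noise,
        # then let the higher perturbed value pick the winner.
        guess = ('A' if r_A + rng.gauss(0, noise) > r_B + rng.gauss(0, noise)
                 else 'B')
        # Error-driven symmetric update: promote the constraint that
        # prefers the datum, demote the one that prefers the wrong guess.
        if guess != datum:
            if datum == 'A':
                r_A += plasticity
                r_B -= plasticity
            else:
                r_B += plasticity
                r_A -= plasticity

    # Learner's output probability for A: P(r_A + N1 > r_B + N2), where
    # N1 - N2 is Gaussian with standard deviation noise * sqrt(2).
    sd = noise * math.sqrt(2)
    return 0.5 * (1 + math.erf((r_A - r_B) / (sd * math.sqrt(2))))
```

In this simplest configuration the expected update is zero exactly when the learner's output probability for A equals the rate of A in the data, so the simulation settles near the target rate. That equilibrium argument does not extend to arbitrary constraint systems, which is consistent with the point above that artificial cases of variation can be constructed on which the GLA fails.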