Abstract: In this work, I present a recursive algorithm for computing the number of rankings consistent with a set of optimal candidates in the framework of Optimality Theory. The ability to measure this quantity, which I call the r-volume, allows a simple and effective Bayesian strategy in learning: choose the candidate preferred by a plurality of the rankings consistent with previous observations. With k constraints, this strategy is guaranteed to make fewer than k log(k) mistaken predictions. This improves on the k^2 mistake bound for Tesar and Smolensky's Constraint Demotion algorithm, and I show that it is within a logarithmic factor of the best possible mistake bound for learning rankings. Though the recursive algorithm is vastly better than brute-force enumeration in many cases, the counting problem is inherently hard (#P-complete), so the worst cases will be intractable for large k. This complexity can, however, be avoided if r-volumes are estimated via sampling. In this case, the r-volume of a candidate is never computed directly, but it is proportional to the likelihood that the candidate is selected by a given sampled ranking. In addition to polling rankings to find candidates with maximal r-volume, sampling can be used to make predictions whose probability matches r-volume. This latter mechanism has been used independently to model linguistic variation, so the use of sampling in learning offers a formal connection between tendencies in variation and asymmetries in typological distributions. The ability to compute r-volumes makes it possible to assess this connection and to provide a precise quantitative evaluation of the sampling model of variation. The second half of the paper reviews a range of cases in which r-volume is correlated with frequency of typological attestation and frequency of use in variation.