Abstract: A common "typological criterion" for linguistic models is that they should predict (almost) all observed patterns while minimizing overgeneration. For optimization-based models, it has been argued that constraints should be ranked rather than weighted in order to minimize overgeneration. Recently, however, weighting has been shown to elegantly capture patterns that ranking misses. To evaluate the issue, we provide software that builds ranked and weighted typologies. We find that some independently motivated restrictions eliminate much of the overgeneration, but that, in general, weighting leads to numerous novel (and odd) constraint interactions.