The major portion of the literature on statistics and inference is prescriptive: It attempts to identify mechanical methods that dictate what kinds of inferences are appropriate and what kinds are not. In contrast, I begin with the assumption that the inferences experienced researchers draw from their data are, by and large, sound, and that, in any event, the criterion against which these inferences must be measured is the consensus of the scientific community, not rigid rules. Thus, what is needed is a descriptive analysis of the kinds of inferences that are drawn and the circumstances surrounding those inferences. Two conclusions follow from such an analysis: First, there is a wide range of types of inferences, and only a small portion of these is served by traditional methods. Second, what counts as a valid inference depends inextricably on knowledge of the theoretical and experimental context. I argue that the statistical procedures that best support these varied, knowledge-based inferences are those that provide a transparent rendering of the evidence in the data. In that regard, likelihood ratios are vastly superior to traditional significance tests: They are easy to calculate, straightforward to interpret, and flexible enough to be adapted to a wide range of rhetorical contexts.
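To make the claim that likelihood ratios are easy to calculate concrete, the following is a minimal sketch (not the paper's own analysis) comparing two simple hypotheses about a normal mean with known standard deviation. The data values and hypothesized means are hypothetical; the ratio expresses how much better one hypothesis accounts for the data than the other.

```python
import math

def normal_loglik(data, mu, sigma):
    """Sum of log-densities of `data` under Normal(mu, sigma)."""
    n = len(data)
    return (-n / 2 * math.log(2 * math.pi * sigma ** 2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

def likelihood_ratio(data, mu0, mu1, sigma):
    """Likelihood of hypothesis mu1 relative to hypothesis mu0,
    computed via the difference of log-likelihoods for stability."""
    return math.exp(normal_loglik(data, mu1, sigma)
                    - normal_loglik(data, mu0, sigma))

# Hypothetical sample: does a mean of 1.0 fit better than a mean of 0.0?
sample = [0.8, 1.1, 0.4, 1.3, 0.9]
lr = likelihood_ratio(sample, mu0=0.0, mu1=1.0, sigma=1.0)
```

A ratio greater than 1 indicates the data favor the second hypothesis; unlike a p value, the number is a direct, interpretable measure of relative evidence.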