""http://www.w3.org/TR/html4/loose.dtd" The Philosopher’s Annual



Volume XXXIV
Introduction


Every year, we at the Philosopher’s Annual attempt to select, and anthologize, the ten best articles from the previous year’s literature. This is, to say the least, an imperfect science: any claim to have identified ‘the ten best’ should be taken with a grain (or more) of salt. Our aim, however, is to call attention to some particularly outstanding work. In that, we are confident that our procedure succeeds.

The Nominating Editors forward articles of exceptional merit, which are then collected for a second round of review by the Nominating Editors as a whole. We, the four Editors, thoroughly read and discuss the merits of the top thirty or so papers from this group, eventually settling upon the ten that stand out to us as the best.

What follows is an introduction to each of those papers, starting with Sam Newlands’ “Leibniz on Privations, Limitations, and the Metaphysics of Evil”:

The attempt to understand the nature of evil has had a storied and complex history. In “Leibniz on Privations, Limitations, and the Metaphysics of Evil”, Sam Newlands traces the evolution (or lack thereof) of Leibniz’s views on the metaphysical nature of evil throughout his career. In early writings, Leibniz clearly opposes the view that evil is a mere privation, but his later position shows signs of converging on a very similar account. Newlands shows how the medieval privation view as expressed by Aquinas and Suarez was shaped by an opposition to two extremes: the Manichean picture of evil as a parallel force to good, and the neo-Platonic picture of evil as negation. By putting Leibniz in dialogue with this medieval tradition, we can see how, from the medieval perspective, his final view is quite different from the privation account, since it fails to deliver on the central motivations of Aquinas and Suarez. Newlands leverages this disagreement to explain how Leibniz comes to the question of evil, and what we can make of his apparent change in view.

It is always difficult to defend a famous historical passage against a disparaging critical interpretation that has become widespread. It is also pretty difficult to say anything at all original about the ontological argument. It is particularly impressive, therefore, that Anat Schechtman’s paper “Descartes’ Argument for the Existence of the Idea of an Infinite Being” attempts both of these feats, and arguably succeeds on both counts. In the paper, Schechtman calls for a halt to the too-quick dismissal of Descartes’ claim that he has an idea of an infinite being as an unfounded premise to which Descartes simply helps himself without argument in the Third Meditation. Paying careful attention to the text, Schechtman locates and clearly expounds a further argument for this claim buried within a paragraph that is often overlooked because it occurs later in the text than the claim that is the argument’s conclusion. In so doing, Schechtman also calls for—and motivates—some more substantial revisions to our understanding of Descartes’ methodology in the Meditations, especially our understanding of the conditions under which Descartes takes himself to be entitled to accept a proposition, and of Descartes’ conception of the epistemic progress through the Meditations. This novel approach to Descartes exegesis, moving substantially beyond the well-worn ideas of “clarity” and “distinctness”, could well affect our interpretation of many other parts of his work (Schechtman also applies it to Descartes’ idea of the self), and is therefore of independent interest.

Our next selection also brings to light a new and compelling interpretation of a much pored-over philosophical text. To begin, Kantian moral philosophy is often thought of as giving short shrift to the emotions, given that Kant dismisses them as “pathological” and thus unfit to provide truly moral motivation. At the same time, however, one of the best-known aspects of Kantian moral philosophy is the argument that if an act is to have moral worth it must be performed out of a sense of duty, that is, out of a feeling of respect for the moral law. How to reconcile these seemingly contradictory claims animates Owen Ware’s contribution, “Kant on Moral Sensibility and Moral Emotion.” Insightfully drawing upon a broad range of Kant’s writings, Ware offers a compelling reconstruction of Kant’s considered view on the relationship between the rational grounds of moral principles and the feeling of respect prompted by moral reasoning. Ware also provides a glimpse into how Kant’s position evolved over time, starting from an initial sympathy with sentimentalists such as Hutcheson and Hume before concluding that the feeling of respect must be generated by reason. Finally, Ware deftly weaves into his account the main positions that have emerged from the voluminous secondary literature on the topic, while still managing to carve out a distinctive position that highlights Kant’s interest in the phenomenology of moral agency.

Ware’s contribution takes on one historically prominent account of moral reasoning. Behind and beyond questions of moral reasoning, however, lie thorny questions of the epistemic status of moral beliefs. Though there is a long tradition of comparing moral beliefs to mathematical beliefs, it is frequently claimed that, despite some similarities, mathematical realism stands on firmer epistemic ground than moral realism. According to this version of moral antirealism, while both moral and mathematical propositions are arrived at via a priori deduction, mathematical axioms are self-evident and the subject of widespread consensus, and mathematical propositions figure into our best empirical scientific theories and thus enjoy at least some prima facie epistemic justification. Moral axioms and propositions, by contrast, have none of these features, and so analogies between moral and mathematical belief must fail. Justin Clarke-Doane’s contribution, “Moral Epistemology: The Mathematics Analogy”, investigates these purported differences and argues that they do not stand up to scrutiny. Clarke-Doane demonstrates that disagreement over purportedly ‘self-evident’ propositions is pervasive in mathematics no less than in morality, and that many of our standard mathematical beliefs are not indispensable to our best empirical scientific theories. He goes on to examine a series of principles that might be thought to ground the claim that mathematical knowledge can be well justified whereas the same cannot be said of moral knowledge. Finding each principle wanting, he concludes that moral realism may actually be more plausible than mathematical realism. Clarke-Doane’s arguments offer a genuine challenge to many moral antirealists and thus advance an important dialectic in contemporary metaethics.

Saba Bazargan’s contribution turns our attention from the nature of moral reasoning and moral beliefs to the practical ways in which both may be constrained. In an interview justifying his wartime actions, Benjamin Murmelstein, the leader of the Jewish council in Theresienstadt, says the following: “usually marionettes are pulled by wires, but in this case the marionette had to pull his own wires”. The phenomenon which Murmelstein is describing is the subject of Bazargan’s paper, “Moral Coercion”. Bazargan investigates the normative elements of moral coercion, offering a definition and an analysis of who is wronged in moral coercion, and in what way. Not unlike the image of a marionette forced to pull its own strings, Bazargan’s account involves the coercer forcefully influencing the victim by hijacking their aims. In these cases, the victim finds that having some particular moral commitments leaves them worse off according to those very commitments. After arguing that acts of moral coercion have a normative status similar to that of analogous cases where the victim is forced to act by their environment, he notes a difference between the two categories: moral coercion carries through the coercer’s intention to generate an intentional wrongdoing, whereas merely functional or environmental coercion does not. Finally, Bazargan extends his account to explain the intuitive idea that letting yourself be morally coerced in some sense allows evil to succeed. This paper highlights a fascinating intersection of ethical issues, one with genuine and serious political ramifications.

Our selection includes a number of papers that employ elements of formal modeling in a wide range of subfields, making use of what to our minds is a fascinating and diverse array of techniques.

The first of these uses causal statistical methods to break new ground in an old question in philosophy of science. In trying to understand obesity, heart disease, patterns of infection, or the spread of opinion, we are looking for a causal understanding of complex phenomena in terms of all potentially relevant variables. Our evidence, however, is inevitably limited to observational studies or randomized controlled trials on single or limited subsets of variables. The larger picture is available only by ‘piecemeal construction’ from those smaller studies. In “The Limits of Piecemeal Causal Inference,” Conor Mayo-Wilson tackles a problem that is of both deep philosophical interest and wide scientific and practical importance: how are we to combine studies limited to subsets of variables into a single causal theory including them all? In what kinds of cases will observational data decide between alternative theories only if all variables are measured simultaneously? Assuming a Causal Markov condition and a Causal Faithfulness condition characteristic of techniques from automated causal discovery, Mayo-Wilson offers major theorems on what kind of information can be lost in the process, how much will be lost, and how often the problem will arise. The threat of underdetermination, it turns out, will be quite different in different cases. In some cases the threat will be severe, in that piecemeal evidence will be unable to decide between theories that vary in important ways. In other cases differences between rival theories will be minimal. Given some variable relationships, characteristic of some scientific domains, problems of underdetermination may never arise. For other variable relationships, in other domains, we can expect frequent issues of underdetermination.

The next paper also gets at issues of combining and chunking evidence and beliefs, albeit from a different perspective and through a different formalism. Versions of the Lottery Paradox have been cropping up all over the place since Kyburg’s original formulation in 1961. A quick response, and therefore a tempting one, is to deny the closure of rational belief under conjunction—the idea that if it is rational to believe all of {P1, P2 … PN}, then it is rational to believe their conjunction (P1 & P2 & … & PN). It is interesting, then, that the Lottery phenomenon can be precisely replicated within the logic of counterfactuals without relying on Agglomeration, the analogue of closure under conjunction; the replication relies instead on a principle of counterfactual logic—Rational Monotonicity—that has no obvious analogue for unconditional belief. In “A Lottery Paradox for Counterfactuals Without Agglomeration”, Hannes Leitgeb demonstrates the existence of such a phenomenon with admirable elegance and clarity. Moreover, he canvasses and explores an extremely wide range of possible responses to the problem, settling on and formally developing a particularly contentious and interesting one—a version of Contextualism according to which what counts as a proposition may vary from context to context (and not merely which proposition a given sentence expresses). Leitgeb’s view entails that there is no context in which “If the host had made it to the studio, there would have been the TV lottery that day” and “There would have been the TV lottery that day if and only if ticket 1 won or ticket 2 won or…” both express true propositions. In addition to the interest of the paradox he presents, Leitgeb’s proposed solution could have far-reaching implications across epistemology and philosophy of language, in which issues of partition-dependence are becoming increasingly salient.
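Since Rational Monotonicity does the crucial work here, it may help to have the principle on the table. In one standard formulation from the logic of conditionals (the notation is ours, with > for the counterfactual conditional, and is not drawn from Leitgeb’s paper):

\[
\bigl((A > C) \wedge \neg(A > \neg B)\bigr) \rightarrow \bigl((A \wedge B) > C\bigr)
\]

Informally: if C would hold were A the case, and it is not the case that B would fail were A the case, then C would still hold were A and B both the case.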

According to some accounts of knowledge, we can know something while realizing that it’s not particularly subjectively likely that we know it. Timothy Williamson goes several steps further to consider cases where we know something, but it’s vanishingly unlikely on our evidence that we know it. Williamson’s “Very Improbable Knowing” presents a model of such cases in epistemic logic, and argues that accepting the possibility of very improbable knowing solves some epistemological dilemmas and unifies others. In the latter case, he shows how Gettier cases fall out of this assumption, and even presents a way of dividing them based on the degree of mismatch between appearance and reality. In the former case, for instance, consider multi-premise closure: the claim that if a subject correctly deduces a conclusion from premises which she knows, she knows that conclusion. Williamson demonstrates that accounts which do not allow improbable knowing will fail to validate this principle, unlike those that do allow it. Apparent counterexamples to the principle can be explained by noting that the subject can in fact know the conclusion; she just cannot know that she knows it. In taking on the problem of misleading evidence, Williamson both explores the formal possibilities by evaluating intuitive results and extends the informal theory by considering its formal consequences.

ZFCU is Zermelo-Fraenkel set theory with the Axiom of Choice, incorporating non-set atoms as urelements. The best intuitive model we have for understanding ZFCU is the iterative conception of sets. But taken together, these give us a disconnect at the foundations of set theory. The iterative conception seems entirely consistent with the possibility SOA that there is a set of atoms that is larger than any cardinal number: a set of bounded iterative rank but indeterminable size. ZFCU, on the other hand, entails ~SOA. In “Wide Sets, ZFCU, and the Iterative Conception,” Christopher Menzel uses graphical re-conceptualizations of the iterative conception together with progressive variations on ZFCU, concentrating on Replacement and Powerset, in an attempt to both diagnose and resolve the disconnect. Menzel’s study is valuable not merely for the system ZFCU* it finally suggests, nor for the implications regarding Lewis’s modal realism that it tries to draw, but for the suggestive approach the piece represents: variations on the iterative conception linked to corresponding formal systems.

A notorious problem for contemporary decision theory is that its theoretical justification comes apart from its most intuitively appealing practical uses. In practice, we want the theory to have some normative bite; we want to be able to criticize people for choosing and acting in ways that do not maximize expected utility. But the theoretical tradition requires probability and utility functions to be ascribed to agents on the basis of their behavior in such a way that it becomes in principle impossible for someone to fail to be representable as maximizing expected utility. Things would look pretty bad for decision theory, then, were it not for the possibility of something like Kenny Easwaran’s “Decision Theory Without Representation Theorems”. In this paper, Easwaran provides the beginnings of a formal framework with which to solve this notorious problem. Beginning with the meagre resources of a strict preference relation, an indifference relation over outcomes, and a simple Dominance principle, Easwaran rebuilds decision theory from the ground up, drawing on the resources of group theory and incorporating technical definitions of correspondences between state spaces, differences between outcomes, and trade-offs. The result is a relation over acts that captures all of the comparisons made by expected utility theory and more. Some of the details are yet to be filled out (and Easwaran is upfront as to when this is the case), but the project certainly has enormous potential. With a range of nonpragmatic vindications of probabilism already in the background literature, Easwaran’s masterful paper puts the prospect of a thoroughly normative decision theory within our grasp.
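For concreteness, here is one familiar way of stating a simple Dominance principle of this sort (our gloss, not necessarily Easwaran’s exact formulation): where f and g are acts, f(s) is the outcome f yields in state s, ≻ is strict preference, and ≽ abbreviates ‘strictly preferred or indifferent’,

\[
\bigl(\forall s\; f(s) \succeq g(s)\bigr) \wedge \bigl(\exists s\; f(s) \succ g(s)\bigr) \rightarrow f \succ g
\]

That is, an act that does at least as well as another in every state, and strictly better in some, is strictly preferred.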

It is undoubtedly immodest, in a field as active and diverse as ours, to claim that we have succeeded in selecting the ten best articles from the literature of 2014. We nonetheless offer these as an outstanding bunch particularly worthy of attention, and hope that you enjoy reading and discussing them as much as we have.


Patrick Grim
Sara Aronowitz
Zoe Johnson King
Nicholas Serafin




