Sydney Levine, MIT
The Use of Moral Rules and Representations
We can think of a moral rule as a computation (or algorithm) that converts one representation (the input) into another (the output). My work explores how studying moral rules can give us hints about how the moral world is represented (and often vice versa). For example, if we find robust evidence that we use the moral rule “don’t hit others”, this suggests that the concept “hit” must appear in our representation of the moral world — otherwise we wouldn’t be able to figure out when the rule applies.
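To make the input/output framing concrete, here is a minimal sketch of a moral rule as a function over an action representation. All the names here (`Action`, `dont_hit`, the `verb` field) are hypothetical illustrations, not part of the talk; the point is only that the rule can fire just in case the representation encodes the relevant concept.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A toy representation of an action in the moral world."""
    verb: str      # e.g. "hit", "help" -- the concept the rule tests for
    patient: str   # who the action is directed at

def dont_hit(action: Action) -> str:
    """A moral rule as a computation: the input is a representation of an
    action; the output is a moral judgment. The rule can only apply if
    the representation encodes the concept "hit"."""
    return "impermissible" if action.verb == "hit" else "permissible"

print(dont_hit(Action("hit", "another child")))    # impermissible
print(dont_hit(Action("help", "another child")))   # permissible
```

The design point is the one made in the abstract: the rule presupposes a representational vocabulary — strip “hit” out of the `Action` representation and the rule has nothing to check.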
In this talk, I present a case study of this dynamic exchange between the study of moral rules and representations. In particular, I discuss evidence that preschoolers use a certain moral rule (the "means principle"), which, in order to be applied, requires the subject to represent an agent's intention as hierarchically structured (that is, with sub-goals -- or "means" -- performed in the service of superordinate goals). This is the first evidence that young children use hierarchically structured intentions (rather than simply agents' goals) to make moral judgments.
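The representational demand of the means principle can be sketched with a toy nested-goal structure. This is my own hypothetical illustration, not the talk's implementation: sub-goals ("means") are nested under a superordinate goal, and the principle flags cases where harm appears as a means rather than elsewhere in the plan.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal with sub-goals ("means") performed in its service,
    giving an intention a hierarchical structure."""
    description: str
    harms_someone: bool = False
    means: list = field(default_factory=list)  # sub-goals serving this goal

def harm_used_as_means(goal: Goal) -> bool:
    """True if any sub-goal anywhere in the hierarchy involves harm,
    i.e. harm is employed as a means to the superordinate goal."""
    return any(m.harms_someone or harm_used_as_means(m) for m in goal.means)

# Harm as a means: pushing another child is a sub-goal serving "get the toy"
plan = Goal("get the toy",
            means=[Goal("push another child", harms_someone=True)])
print(harm_used_as_means(plan))   # True

# A flat goal with no harmful means does not trigger the principle
benign = Goal("get the toy", means=[Goal("walk to the shelf")])
print(harm_used_as_means(benign))  # False
```

Note that the check is only definable because the intention is a tree of goals: a flat representation of "what the agent wants" could not distinguish harm-as-means from harm that merely accompanies the goal.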
In the second part of the talk, I'll discuss cases where subjects' intuitions can be modeled as the output of an agreement-based kind of moral reasoning (a "virtual bargaining" process) rather than as emerging from strictly rule-based judgment. I argue that this sort of "contract-based" moral reasoning may, paradoxically, be a lens through which we can understand both how moral rules get formed and how they can be permissibly violated.
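One common formalization of agreement-based reasoning (offered here only as an illustrative assumption, not necessarily the model used in the talk) is the Nash bargaining solution: each party asks which outcome they would jointly agree to if they could negotiate, modeled as the outcome maximizing the product of each party's gain over the no-agreement baseline.

```python
# Utilities (agent_a, agent_b) for each candidate outcome; all numbers
# are made up for illustration.
DISAGREEMENT = (0.0, 0.0)   # payoffs if no agreement is reached

outcomes = {
    "both cooperate": (3.0, 3.0),
    "A exploits B":   (5.0, -1.0),
    "no interaction": (0.0, 0.0),
}

def nash_product(utils, baseline=DISAGREEMENT):
    """Nash bargaining objective: product of gains over the baseline.
    Outcomes worse than no deal for either party are ruled out."""
    gain_a = utils[0] - baseline[0]
    gain_b = utils[1] - baseline[1]
    if gain_a < 0 or gain_b < 0:
        return float("-inf")
    return gain_a * gain_b

virtual_agreement = max(outcomes, key=lambda o: nash_product(outcomes[o]))
print(virtual_agreement)   # both cooperate
```

The contrast with the rule-based sketch above is the interesting part: nothing here consults a stored rule — the judgment falls out of what the parties would have agreed to, which is why this style of model can speak to both where rules come from and when violating them is acceptable.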