[MUSIC] Hey, welcome back. Welcome to the office hours for week three, where we discuss the origins of morality, animal research, and so on. And we have a lot of questions, actually, more questions than for the previous week, so we'll just jump in. This is Christina Starmans, and she'll lead off. >> Hi. Okay. So first, this week we have an announcement. This is about the quiz questions that are actually in the video lectures. They don't count for anything, but we have these questions to make sure that students are following along, and to let them think about the issues a little bit. In the 3.3 video lecture, there was a question asking whether it's better for a creature's own survival to be a discriminate altruist, an indiscriminate altruist, or just a freeloader. And some people pointed out on the forum that, in terms of my own survival, it's best for me to be a freeloader; in terms of my genes' survival, it's best for me to be a discriminate altruist, helping my kin. The answer given in the video was that it was supposed to be better for your own survival to be a discriminate altruist, and that's actually not correct, right? >> Yeah. >> The correct answer should be: for my own survival, I should just be a freeloader. >> Yeah, so we messed up, and people correctly caught us on that. I'm really glad people were attentive. So, like you said, if it's just for yourself, you should just take whatever you get; don't be an altruist at all. What we had meant to ask was, what's best for your genes? And there the answer is to be a discriminate altruist, because we share our genes with our children and other kin. The best strategy for your genes is to be kind to your kin, and also to people you interact with regularly, but not kind to strangers. That's what we were looking for, but we didn't ask the question right.
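The gene's-eye arithmetic behind the corrected answer can be sketched in a few lines of Python. This is only a toy illustration: all the payoff numbers are invented, and the "inclusive fitness" calculation is the standard relatedness-weighted sum, not anything from the lecture itself.

```python
# Toy model of why a freeloader wins at the individual level
# while a discriminate altruist wins at the gene level.
# All numbers are invented for illustration.

R_KIN = 0.5       # genetic relatedness to kin (e.g. children, siblings)
R_STRANGER = 0.0  # relatedness to strangers
COST = 1.0        # fitness cost of one act of helping
BENEFIT = 3.0     # fitness benefit to whoever is helped

def individual_payoff(helps_kin, helps_strangers):
    """Payoff to the actor alone: helping only ever costs."""
    return -COST * (helps_kin + helps_strangers)

def gene_payoff(helps_kin, helps_strangers):
    """Inclusive fitness: cost to self plus relatedness-weighted benefits."""
    return (-COST * (helps_kin + helps_strangers)
            + R_KIN * BENEFIT * helps_kin
            + R_STRANGER * BENEFIT * helps_strangers)

strategies = {
    "freeloader":              (0, 0),  # helps no one
    "discriminate altruist":   (1, 0),  # helps kin only
    "indiscriminate altruist": (1, 1),  # helps kin and strangers alike
}

for name, (kin, strangers) in strategies.items():
    print(name, individual_payoff(kin, strangers), gene_payoff(kin, strangers))
```

With these numbers the freeloader scores best on the individual payoff, but the discriminate altruist scores best on the gene-level payoff, which is exactly the distinction the corrected quiz answer turns on.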
So this is yet another example of the commentators being really on top of things, sometimes more on top of things than we were. >> Right, and that's now been changed for people who watch the lecture and answer these questions from here on out. But for anyone who took it before: if you answered freeloader, you were correct. >> Yes. >> Okay, so now we have a question that takes off from the things we were talking about last week; we talked a lot about empathy in last week's office hours. And there was one question we didn't get to, which is from Layla. Layla is a psychotherapist, and she asks: isn't there an exception to this "empathy is bad policy" for people like me? I really call on empathy in my job and my practice, and it seems to be a really useful thing. Is there an exception to your view here? >> I think that's a good question, and a lot of therapists, Carl Rogers for instance, spoke movingly about empathy and its powers. I think for the case of therapy, more so than just about any other case, it's important to make a distinction between understanding what's going on in the mind of another person, so-called cognitive empathy, and feeling what they're feeling, empathy proper, emotional empathy. Cognitive empathy is essential for a therapist. A therapist would be useless if he or she couldn't understand what is going on in the mind of the patient, or the client, or whatever you want to call it; you have to understand what the person is thinking or worrying about. But I think a therapist is exactly the sort of person who shouldn't be empathetic in the second sense. So this is like a therapy session here: I have the sofa, I'm the patient. I'm telling you about all my depression and my anxiety, which I have a lot of, and I'm tearing up and so on.
And you're looking at me, and if you're a good therapist you'll give me that sort of impassive therapist face, and a nod, the therapist nod. And what you're not feeling is anxious and terrified. You would be a bad therapist if you were; you wouldn't last a month, because it would drive you crazy being with all these people. But also, you wouldn't be helping me. I don't need you to feel my anxiety; I need you to be aware that I'm anxious. If you were to feel my anxiety and we're both here quivering in tears together, that's of no help at all. I need you to be calm. I need you to be supportive and calm and objective, and in some interesting sense distant from me, in that you're not catching all my emotions. And so the best therapist is highly empathetic in the sense of cognitive empathy, but, I think, not empathetic at all in the case of emotional empathy. >> Yeah, and I think there's an interesting point here. You often make this distinction between cognitive and emotional empathy, but you can use cognitive empathy to know what somebody else is feeling. So it might be tempting to think of cognitive empathy as just being, I know what you're thinking. >> Right. >> I know these sorts of facts about you. But one of the facts about you is how you're feeling, and so it's not divorced from how you're feeling, which is how I think it sometimes sounds. >> I think that's a really important point: cognitive empathy and emotional empathy are a little bit misnomers, because cognitive empathy could involve the understanding of emotions. It's cognitive empathy if you understand that I'm upset or very angry or bored or whatever. Maybe better terminology would be, sort of, understanding empathy and feeling empathy. And understanding empathy is what we need as therapists.
Feeling empathy maybe has a part in our lives; I keep reading people who argue that it's essential for friendship and romantic relationships and so on. But it isn't what you want as a therapist. >> Okay, so moving on: we talked a lot about research on babies this week, and there are a number of questions about the actual babies who participated in the studies. Now, you talked about them a little bit in your New York Times article about the morality of babies: who are the parents, and what are their motivations for coming and donating their child's time to science? But what do you know about who the babies are? In particular, there was a question from Mariah, who asks: what if the babies had, for example, psychopathic parents? Would this change the way they reacted to the helping and the hindering that they were watching? >> Yeah, it's a really good question that often comes up. Sometimes it comes up in an accusatory kind of way, where people say, well, you just study Yale babies. Actually, the work I described, and this is work out of Karen Wynn's infant lab here at Yale, just down the hall, is done with a broad spectrum of New Haven babies. And New Haven is a very diverse city, ethnically, socially, economically, and we get babies from every part of the population. And what we find in our studies is no difference. It doesn't matter whether you're black or white, rich or poor, an only child or a child with siblings, male or female. We get these patterns of understanding that seem very, very robust. Now, this is in part because we're looking at facets of moral understanding that we think are universal. We're not looking at variation, like variation in how much you help or how upset you are by the suffering of another person. But at least for these universals, we find that they really are universals.
Now, we have a representative sample, but we don't have a complete sample. So if a baby has parents who are addicted to drugs, or in some way deeply depressed, we actually don't have data on those babies. And Karen is working to fix that. She's collaborating with people at the Yale Child Study Center, looking at babies who come from very deprived and disadvantaged backgrounds, to see if the same phenomena hold for them. My bet is that they will, but it's an open question. >> Okay. And related to Mariah's question is this issue of psychopathy that she was interested in in particular. So there's one question about whether psychopathy is something you can learn or inherit from your parents. Another question is: if you have someone who's a psychopath, what did they look like as a baby? Are you going to be able to see signs of that when they're watching puppets or shapes doing moral and immoral actions? >> Yeah, so we always have some babies who don't look towards the good guy, who want to punish the good guy, who respond differently. And the question we always get asked is: are these little baby psychopaths? Is this the psychopath test for babies? And the honest answer is, we don't know. I think it's unlikely; I think it would be unlikely to find traces of psychopathy showing up so early. More to the point, our experiments study moral judgment, and as best we know, psychopaths are pretty good at moral judgment. So an impairment in moral judgment is not what you would expect from somebody who later grows up to be a psychopath. But the only honest answer is that we just don't know. >> Okay.
All right, and relatedly, while we're talking about participants in these kinds of psychology studies: a lot of studies done with adults have similar issues, where there's a small subset of people, either undergraduates at a university or some other very small group. So there's a general question about how well the findings from these studies generalize. >> Yeah, it's a fair question. People used to describe psychology as the study of lab rats and college freshmen, and a lot of research in the field used to be done just on college students; some of it still is. I love the Ariely and Loewenstein study, which we talked about, involving sexual arousal and moral judgments. But as one of the questioners pointed out, this was done with a narrow college population: all male, and also all the sort of people who would agree to be part of a study involving, you know, masturbating while recording your responses to moral dilemmas, which probably isn't most of us. And so you could wonder, would the result hold in a broader population? Those are always good questions to ask. Sometimes, just for reasons of convenience, you're stuck with a narrow population and have to extrapolate, and that's better than nothing. But it's good to be mindful of the fact that a lot of the research we make claims about is drawn from populations that aren't necessarily representative of the whole. >> Okay. All right, so moving on to a question about children. This is on the topic of religion in children, and the question's from Sandra Browning. She said: this week's material caused me to start thinking about how children become religious.
And so in some of the articles that we've read, there's a view presented, from you and from other people, that religion stems from these broader biases towards a dualistic view, or a predisposition towards creationism, and so on. What is the evidence for this view? Are there opposing views? And are there any studies on these kinds of topics? >> Yeah, it's a really interesting question. Next week, we're going to discuss religion and morality, and look at the data on whether or not religion makes you a good person or a bad person. That's really, really interesting, but we don't discuss the origins of religion. And there are different theories out there. I have an article called "Is God an Accident?", published in The Atlantic; you can just Google it. My article summarizes the debate in the field. Roughly, you can think of three different views, all of which have serious people arguing in their favor. One view is that religion is a cultural product, the same sort of thing as baseball, or Greek history, or whatever: it's something you just learn. Another view, at the other extreme, is that religion is a biological adaptation, like color vision or having five fingers. It evolved because it's good for us in some way. A middle view, which is the view I've defended, is that religion is an evolutionary accident. We've evolved all sorts of traits, like a hypersensitivity to other people's beliefs and desires, a dualist world view, things like that, and out of these spawned religion. This is my view; I think there's evidence for it; it's controversial. I could teach another course where we argue pro and con about different theories of the evolution of religion. I should say, because people always wonder about this, that the debate over why we have religious beliefs is independent of whether religious beliefs are true.
So, as a psychologist, I don't have anything interesting to say about whether or not there's a God, or miracles, and so on. That's a separate issue. But as a purely psychological question, why do we believe that there's a God? Why do we believe in miracles? I think the by-product view is something worth taking seriously. >> Okay. All right, the next question is from [INAUDIBLE], and the question is about morality in rich and poor countries. If one assumes that money is required to satisfy basic needs in life, then the richer one gets, the more outwardly focused it's possible to be. I don't have to worry about providing for myself, or where I'm going to get food, and so I have the luxury of potentially giving charity to others, helping others in other ways. So if you take that a step further, does it follow that richer societies are more moral than poorer societies? >> It's a really good question. I think you laid out the argument for why, under one reading of the question, the answer's probably yes, which is that I'd much rather be a citizen of a very rich country than a very poor country, for obvious reasons. And those obvious reasons are connected to morality: I'm less likely to be murdered; I'm more likely, if I'm in trouble, to be helped by the government; and so on. But the way you framed it, which is a nice way of framing it, is that it's not because people are better; it's just because there are more resources around. People's psychologies could be the same regarding morality, but they just have more resources to use. And I think to some extent that's the right way to think about the connection between resources and morality. I think people are at their best when they feel safe, and they have a roof over their head and a steady supply of food, and they don't have to worry about their kids' health, and so on.
Then they can be generous, and they can be merciful to others, and so on. How far you want to take this is controversial. It's by no means clear that, when it comes to day-to-day interaction, rich countries have nicer people than poor countries. It's by no means clear that within a society you get that sort of relationship: data on charity in the United States find that rich people donate proportionally less of their money than poor people. I think the difference is something like 1% versus 3%. And if that's true, it would suggest that rich people actually are not as moral, in that domain, as poor people. >> Unless you think about it in terms of the absolute amount that the rich people are donating, which presumably is still more than what the less rich people give. >> Yes, rich people definitely give a greater absolute amount, and somebody could also say, look, rich people in a country like the United States give a lot of money in taxes, a vast amount of taxes, which often go to good moral projects and so on. But a utilitarian like Peter Singer would argue that rich people should give a much greater proportion than poor people, because rich people can give a greater proportion and still live comfortable, happy lives, while for poor people, even the small amount that they give could matter a lot. And it's not just Peter Singer; it's in the Gospels. It's the story of the widow's mite, where Jesus tells of a poor widow who gives just a little bit, and the moral of it is that although what she gave was, in absolute terms, a little bit, it was far, far more than what the rich people gave. So your view clashes with both Peter Singer and Jesus. [LAUGH] >> So, presumably the people in both sectors of that study are actually still among the more well-off people in America. >> Yes. >> It's not that the very poorest people are giving more than the rest.
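The proportional-versus-absolute point in this exchange is just arithmetic, and can be made concrete. The 1% and 3% rates are the rough figures mentioned in the discussion; the two incomes below are invented purely for illustration.

```python
# Proportional vs absolute charitable giving.
# Rates (~1% vs ~3%) are the rough figures from the discussion;
# the incomes are invented for illustration.

rich_income, rich_rate = 1_000_000, 0.01
poor_income, poor_rate = 30_000, 0.03

rich_gift = rich_income * rich_rate   # 10,000: larger absolute amount
poor_gift = poor_income * poor_rate   # 900: larger share of income

print(rich_gift, poor_gift)
```

The rich donor gives over ten times as much in absolute terms while giving a third of the proportion, which is why the two framings of "who gives more" point in opposite directions.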
>> Yes, yes. And what's actually true is that to be poor in America is often, in absolute terms, in terms of quality of life, to be rich in a very poor country. Poor people in America often do have enough food; they have government medical care and so on. So I would actually be somewhat cautious about reading too much into a study like that. But it's an empirical question, how money relates to kindness, and the answer might be complicated. >> Okay. All right, the next question is from John Weathers. It's a long, thoughtful question, which I don't have time to read, so I'm going to summarize. Basically, we've been arguing that morality is, or maybe ought to be, mostly an intellectual kind of thing, something done through reason, and that we maybe want to shut off some of the impulses associated with morality, like emotional responses and particularly empathy. And John asks: if we do that, then how do we have morality at all? Shouldn't the sort of cost-benefit analysis we end up with just propel us to only care about ourselves? Why does anybody else ever come into the picture? >> This is a really hard, really deep issue. I'll just make a couple of remarks about it, and then you may have different views, you may want to respond. One is, I think we should be to some extent realists about morality. There's a fact of the matter: concentration camps and genocides are bad; helping poor people is good; and so on. But these facts are different in kind from the facts of mathematics or physics. They are facts that are in some sense dependent on our psychologies, dependent on what people want and how we are. I'm a huge champion of the power of reason, but some things we simply have to accept as premises.
To override them, to say we want to do things differently, would be to have no morality at all. To get to the specifics of the question: our natural inclination is somewhat selfish, but also somewhat outward-looking. We naturally help our friends, our children, close kin. And so the claim that our natural default should be to be entirely selfish seems to me just as misguided as saying our natural default should be to be indiscriminately generous. Some people have philosophies, Ayn Rand being the famous case, where selfishness is great. But that's a philosophy; it's not natural. Non-human animals aren't purely selfish; humans are not naturally selfish; we have some inclination towards altruism. To hold a philosophy that says you should care only about yourself is as much of a reasoned, non-natural philosophy as a philosophy that says you should care for each person equally. And so my own view is that we can't get that far from our natural inclinations. I think we should be much kinder to strangers than we are; I think the arguments for that are good. But I would distrust a morality that said either don't care about your kids at all, kill them and eat them, or that said your kids are as valuable as a stranger and you should not favor your kid over a stranger. >> Okay, though of course, just because something is natural, is part of our natural inclinations, doesn't make it good. All kinds of things, racism and so on, are natural, and of course those are things we think we should try to overcome. So these impulses to help our kin and not those far away, and not just to be selfish: are those the sort of impulses we should be trying to overcome in the same way? >> Some would say yes. That's a view which a lot of people have.
A lot of philosophers have the view that our bias to favor our kid is, upon reflection, no different in kind from our bias to favor somebody of the same skin color as us, or someone who accidentally lives close to us as opposed to far away. Here's one way of expressing my skepticism. I think we've said this before, but it's a fundamental of morality that ought implies can: if I say you ought to do something, that means you can do it. I think asking people to dismiss the bonds of family and friends is asking too much, and because it's asking too much, it's not a proper moral requirement. It's as if I said you are morally obliged to duplicate yourself into a thousand creatures and distribute food all over the world. Well, you can't be, because you can't do that, so it's not correct to say you ought to. I think saying that you are morally obliged to care for each person as if they were your brother is the same: it's too unnatural; it's asking too much. >> That's an interesting response. I think it's a response that people often have to your view on empathy, which is: but we just are empathetic people, so aren't you asking too much by asking us to dial it down? >> Yeah, I think that's probably true, and it is a concern. I guess what I would say in the case of empathy is: you can't dial it down very much as a person, but you can have social institutions that dial it down. And I think that's true to some extent for what you're saying. So at a personal level, you can't blame me, and you shouldn't blame me, for favoring my kid over a stranger. But suppose I'm in charge of hiring for a job or something. Then you set up rules saying you can't favor your kid: if your kid is part of the applicant pool, you don't get to choose. So you have social structures setting it up. I'll tell you, if I had to choose between my stance against empathy.
And my worries that we can't override the bonds of kinship, I would keep my against-empathy stance, and I'd say, maybe you're right, maybe we should give up the bonds of kinship. >> Okay. >> I'm that committed. >> All right, so moving on to animal research. We had a great lecture from Professor Laurie Santos this week. One question relating to that topic more generally comes from Tristan Cost, who asks about the moral justification for using animals in scientific research. There seem to be important moral questions, in a meta sort of way, about whether we are gaining enough from this research to justify it. >> Yeah, so one justification is that animals have no souls, but that's not the way we're going to go. I think the way you framed it is exactly right. It was actually a wonderful discussion on the discussion boards, where people who are very concerned about animal rights talked about how they admire people who don't use things that are made from animals, who even reject medical care if it was achieved through animal research. But the way you put it is: is it worth it? And to me that's the right way to put it. I'm kind of a utilitarian on this, also regarding human research: the question is, do the benefits justify the costs? Now, in Laurie Santos's capuchin research, I think the answer is clearly yes, because these capuchins are extremely well treated. They're actually treated much better in her lab than they would be in the wild, where creatures try to eat them and kill them. It's less natural, but I don't care about that; they live better lives. A lot of animal research, including a lot done in psychology departments and neuroscience departments, is less cheerful for the animals.
There's work on the visual system, for instance, that involves opening up the brains of monkeys and cats and so on. There's tremendous effort to cause the animals minimal suffering, but there's a bit of suffering, and then ultimately death. And then the question is: are the results worth it? I think that's a case-by-case thing. I think about animal research kind of the same way I think about torture, or capital punishment, or affirmative action, or speech codes, or whatever: these are difficult questions. Ultimately they get resolved by recognizing that there's always some harm being done. If I'm not allowed to say what I want in the classroom, that harms me in certain ways, but that's traded off against the benefits. And for animal research, somebody who would not kill a cat to save a child seems like a moral monster to me. >> Yeah, and it's worth pointing out that in university settings, when we have animal research, in fact for all research, we have these stringent review boards that go over the plans for the research and what the benefits are, compare that to what the costs are, and really go through everything with a fine-tooth comb. And the more harm you can conceive of being done, you know, restricting your speech in a classroom is maybe a minor harm compared to cutting open your brain, the more benefits we need to see to counterbalance it. >> You're right, and we shouldn't pretend, as a society, as scientists, or as people in this discussion, that we are thinking clearly about this, because it's kind of a mess. Hal Herzog, for instance, compares the good animals to the bad animals in a research lab.
The good animals are the ones we were describing, the rats and mice used in studies, for instance. They're operated on under anesthetic; if they're killed, it's in special painless ways. The bad animals are the ones that live in the basement, where you get exterminators to kill them. And exterminators are not bound by any of our precise rules. And we're comfortable with these bizarre distinctions, where the rules for what you can do with a dog that's your pet are very different from what you can do with a dog as an experimental subject, which are different again if it's a stray dog on the streets. >> Yeah. >> And these are very, very difficult issues, and often very uncomfortable ones. >> And they lead to very confusing questions when the lab rat escapes and starts running around the basement. >> That's right, that's right. >> [LAUGH] All right, so another general question about the animal research, and maybe including the baby research in this as well: looking at chimps and bonobos and these helpless infants is all very interesting, but how can it tell us more about humanity and human morality than just looking at adults, who you can talk to? >> Yeah, for a lot of questions it can't. If I want to know what your average American thinks about, say, the Ebola virus, the best thing to do is go ask the average American. But for some questions, adults are very difficult to get useful information from, particularly questions about universality and innateness. So suppose I want to know about our intuitions about, say, the distinction between causing somebody to die versus allowing somebody to die, and we've seen that that's a psychologically very powerful distinction.
I can do a lot of studies with adults, finding that they have the distinction. But what if I'm interested in the question of whether the distinction is natural? Are we born with it, or is it a product of our culture? Then you've got to do other things: you've got to study babies, and people from other cultures. You might want to argue that perhaps, phylogenetically, it emerged long, long ago, and so you'll study our primate cousins, like bonobos or chimps. So it depends what you're interested in. And I should also add, there are some people who are interested in babies because they're interested in babies, and some people interested in chimps just because they're interested in chimps, and so they aren't using these data as a tool to learn about humanity; they're interested in these other creatures in their own right. >> Yeah. I think another thing people are often very interested in is which of these sophisticated things that we can do as people are uniquely human. There's been a lot of discussion over the history of psychological research, and I think we keep finding that fewer and fewer things are uniquely human. >> Right. And as you point out, this is connected, of course, to the animal rights issue. Every discovery about the sentience and emotional power and level of feeling that, say, a dog or a chimp has is a further argument against making it suffer unnecessarily. If it turned out that Rene Descartes was right, and dogs, for instance, are just mindless robots, then you could take one apart like you take apart a toaster, and there would be no moral issue at all. But everybody agrees that Descartes was wrong: it's pretty clear that animals suffer, that animals feel pain. At the same time, I think a lot of animal research reinforces issues about human uniqueness.
I think the studies that try to teach animals language, and studies of nonhuman communication systems, show very rich communication systems, but also show that they're very different from what you and I are doing now. And there are other examples as well. >> Okay. The next question comes from Helen Liu, and the question is about when the ends justify the means. So you typically have put forth a fairly consequentialist view: what matters, in a moral sense, is the outcome of your actions. And Helen gives some examples to try to challenge this view. Imagine a burglar comes into a house and robs the homeowner; however, all of a sudden the homeowner suffers a heart attack, and the burglar calls 911 and ends up receiving an award for saving the homeowner's life. Should he have received the award? Should we see what he did as heroic? He saved someone's life, but after all, he robbed the owner's house. And there are other such examples, like killing hundreds of thousands of people by experimenting on them to cure cancer and presumably save more people in the future, and so on. >> That's a great question, and those are great examples. I think the sophisticated consequentialist response is that consequences are all that matters, but it's long-range consequences, not necessarily short-range consequences. Suppose I break into a house, and I discover that someone is having a heart attack, and I save their life. Have I done something good? Well, saving the life was good. The break-in was bad and should be punished, even if in that case it had a positive outcome, because in the long run society works a lot better if we discourage people from breaking into houses. >> Right. >> And so too for all sorts of other examples.
If I try to stab somebody, and I stab them, and by stabbing them I remove a tumor from their body and save their life, well, that was a nice accident. But you're still going to punish me, because although my specific act had benefits, that kind of thing must be discouraged. And I think a lot of the seemingly counterintuitive consequences of consequentialism turn out, when you take a broader consequentialist view, to be such that the consequentialist choice for the most part tends to be the intuitively moral one. >> It's not even necessary to say that we should look at the long-term consequences instead of the short-term consequences; it's just a matter of adding up the long-term consequences along with the short-term ones. So this burglar saved one person's life. But if we commend the burglar, and by doing that encourage burglary in the future, then if you add up all the negative consequences of doing that and see that they outweigh the positive consequence of saving one person's life, you should condemn the act. >> That's right. And to me, actually, I mentioned torture before, and I'm actually against torture, even as a consequentialist. I'm not denying that there are certain cases where torture could lead to tremendous benefits: you torture somebody horrifically, and they tell you where the bomb is, and that saves a hundred people, and that seems perfectly fine, if that were all there was to it. But if you allow torture, then you allow torture done by real people in the real world, and it sets up a world where citizens are terrified of being arrested because they might be tortured, where irresponsible people can do the torturing, and so on. On the whole, I think it's so much worse that we should bite the bullet and say, yes, in that case it would have helped. 
But you can't do it, because we don't want it as policy. >> Okay. All right, the next question is about evolution, and it's from Worth Swearingen. He asks: in your lectures, and in some of the articles we've read, it sounds like we're arguing things like, how could we have this sort of altruism? There's no Darwinian rationale for being indiscriminately kind to people. And it sounds like we're describing evolution as having some sort of purpose, or as if there needs to be some goal that evolution is working towards. He's wondering whether it's necessary to think about things like morality in this way. >> So there are some theists who see evolution as having a goal, as guided by God in some way, or set into place by God to achieve certain goals. And there are even some scholars, Robert Wright is a friend of mine, we've seen some of his work, who argue that there's a sense in which evolution can be seen as pursuing goals. But most mainstream evolutionary theorists, including me in this case, see evolution as entirely without a goal. What gets confusing is that we often use teleological language metaphorically in the case of evolution. We say: all evolution wants is to spread your genes; what the eye is for is seeing; we've evolved this in response to that. But in reality, all evolution is is a type of algorithmic process, where a whole lot of variations are generated and then weeded out by exposure to the environment, and gradually, through a winnowing process, things change. But there's no intention. There's no design. There's no greater good that evolution cares about. Evolution doesn't care about anything. It doesn't care about the group, the individual, or the species. 
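[Editor's aside: the goal-free "generate variation, then winnow" process described here can be sketched as a toy selection loop. Everything in this sketch, the environment's preferred value, the mutation size, the population size, is invented purely for illustration and is not from the lecture.]

```python
import random

# Toy "winnowing" process: no foresight, no design -- just blind
# variation plus selection. All parameters are invented for illustration.
TARGET = 0.0          # the environment implicitly favors genomes near 0.0
POP_SIZE = 50
MUTATION_SD = 0.1

def fitness(genome: float) -> float:
    # Higher is better; the "environment" penalizes distance from TARGET.
    return -abs(genome - TARGET)

def step(population: list[float]) -> list[float]:
    # Generate lots of variation: each genome yields two mutated offspring.
    offspring = [g + random.gauss(0, MUTATION_SD)
                 for g in population for _ in range(2)]
    # Weed out by "exposure to the environment": keep the fittest half.
    offspring.sort(key=fitness, reverse=True)
    return offspring[:POP_SIZE]

random.seed(0)
pop = [random.uniform(-10, 10) for _ in range(POP_SIZE)]
for _ in range(200):
    pop = step(pop)

# After many rounds of blind variation and winnowing, the population
# looks "crafted" to sit near the target, though nothing intended it.
best = max(pop, key=fitness)
print(abs(best - TARGET) < 0.5)
```

The loop never represents a goal anywhere; the apparent "design" falls out of repeated variation and culling, which is the point of the metaphor warning above.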
It's just a mechanism which, through its stupid mechanistic process, gives rise to things that we as people look at and say, wow, that's elegantly crafted to do certain things, that's for this or for that. >> Mm-hm. >> But I think talking about evolution's goals is maybe an unavoidable metaphor, but it's a metaphor nonetheless. >> All right. I think part of the confusion comes from the idea that evolution seems to be, and I'll use the goal language here, working toward things becoming better. It's not as if the eye is going to evolve to see worse, right? And that seems almost like a goal-directed kind of thing. But of course, evolution accomplishes that through very non-goal-directed means. >> It's a good point. Through the history of biology, things have gotten more complicated, in part because there's a floor: you can't go below a certain level of complexity, or you're not a cell at all, you're not a creature at all, so things have to keep going up and up. So there's a gradual increase, but it's not the case that the more complicated mechanisms are necessarily the better ones. Amoebas are actually tremendously successful evolved creatures. And there's controversy over this. You'll remember Robert Wright's discussion, where he talks about evolution working at different levels, continually leading to more of what he calls non-zero-sum interactions, cooperative interactions. He sees a sort of force behind this, but it's not clear there is one, and most people don't think there is one. >> Okay. All right, the next question is a really fun one: is incest always culturally taboo? There's a long thread about this, where people bring up different examples, from Rome and Egypt in particular. And royal families throughout history have had a long practice of marrying cousins. 
Even marrying siblings, and so on. And then, of course, we've got Game of Thrones. >> Game of Thrones. >> With a brother-sister love relationship. >> Yeah. Game of Thrones is the exception, because they're incredibly good-looking. >> [LAUGH] >> So there's a level of controversy about this, about the universality of incest taboos. Here's one thing that's clear: the precise shape of what counts as incest differs from society to society once you get beyond a certain closeness of relation. The way I was raised, cousin marriage is out; you can't have sex with your first cousins. But not every place has this rule, and in fact Charles Darwin, whom we were just discussing, married his first cousin; his wife Emma was his first cousin, and that was not so disapproved of at the time. So you get some variation. But sex with siblings, I think, is always taboo, and I would view those exceptions as genuine exceptions that can be explained in one of two ways. One way is that if you're not raised with your sibling early on, you don't register them as a sibling, as a close family member, and so the psychological mechanisms against incest don't kick in. If you were introduced to an attractive person at this stage of your life and told, this is your long-lost brother, there wouldn't be this automatic revulsion against sex or romance with that person, because you just haven't been raised with them. On the flip side, if you grew up with somebody who you consciously knew shared none of your genes, an adopted person, for instance, like people raised together on an Israeli kibbutz, still, the idea of being intimate with them just grosses people out. So I think part of the story, for the cases where we have siblings marrying, is that they were raised separately. The other part is where it's simply enforced. 
So, often royal families, in order to keep wealth and resources and power stabilized within the group, will have rules that say that siblings have to marry. There are cases in ancient Egypt, and cases in China as well. And as I remember, the argument is that these marriages often don't produce kids, that they're not happy and not fulfilling. But in any case, they wouldn't be exceptions, because the incest claim is not that you'd never be intimate with your sibling; it's that you wouldn't want to be. >> Great. Okay. The next question is from Fred Crowen. Fred is a lawyer, and he has been thinking about the ways in which the issues of moral psychology we've been talking about overlap with the legal notion of justice. So, is our justice system based on our moral psychology? Is it the other way around? How should we think about that? >> It's a really deep question. I think they interact in every possible way. There's a clear sense in which laws are often rooted in our moral intuitions: the laws prohibit behaviors that we view as morally wrong. And I think you don't need to talk in terms of natural law theory or anything complicated like that. It's just that we have intuitions about how people should behave with regard to one another, and we often want to enforce them. We're all happier if there's a rule saying you can't kill somebody, and there are institutional punishments for that. But then there are cases where the law and common-sense morality diverge. I actually talked about one of them in my very first lecture: the case of the guy who watched his friend molest and kill a girl and did nothing. A lot of people say he should be in prison. Plainly, we view what he did as morally wrong, but it's not legally wrong. 
It wasn't legally wrong in the state of, I think, Nevada, where this happened, because there is no Good Samaritan law. And even in places which do have Good Samaritan laws, the punishment for violating them, for watching a crime take place, is fairly mild, often just a fine. And this can seem morally appalling. But you could see the legal argument for letting this happen: it's very difficult to enforce Good Samaritan laws; it requires too much discretion on the part of a prosecutor, and so on. But because of that, there's a sort of hole in the law. And then there are all sorts of other things that are immoral that the law doesn't touch. If I steal a candy bar from the local store, I've committed a crime; that's very clear. But if I betray my closest friend in some dramatic and horrific way, which is much worse by anybody's account, I'm not going to go to prison for it. The law doesn't touch certain things. >> Do you think in an ideal world they would be closer together? >> It used to be, in America, that they were closer together in the personal domain. Jed Rubenfeld, a law professor at Yale, has an article where he points out that not too long ago, just about all sex was illegal. Sex between unmarried people was fornication; when one of them was married, it was adultery. Many sex acts, even between a married couple, counted as sodomy, which was illegal. If a couple used contraception, that was illegal; if somebody told them about contraception, that person could be committing a crime. Incest, of course; also homosexuality, and masturbation in some places; and so on, so that only procreative sex within marriage was left alone. Now, you might say those things shouldn't be moralized. But the point is that the law used to play much more of a role in our private lives than it does now. Or take seduction, the crime of seduction, which used to be a crime. 
And you can actually see some of the moral logic there: to seduce somebody, almost by definition, is nasty. It involves deception and trickery and so on. But at least in America, the law has largely extracted itself from the private domain. And I think that's, on the whole, a good thing. But one consequence of it is to leave these moral gaps, gaps where we say, gosh, that's wrong, but you can't call the police. >> Right. Okay. Back to babies. The next question is from Evan Evans. We talked, both in your article and in your lectures, about the baby studies where babies watch good guys helping someone and bad guys hindering someone. One question is whether babies really care about rewarding or preferring the good guy, or whether they're really focused on punishing or avoiding the bad guy. Is one stronger than the other? >> It's a really good question. Our studies, at least with toddlers, find that there are two independent things: a desire to punish the bad and a desire to reward the good. And we don't actually have any studies that pit the two against each other to see which is stronger. My bet, which I think is Evan's suggestion, is that the desire to punish the bad is going to be stronger. It's certainly stronger at a social level. Go back to the law: we have these enormous institutions designed to punish bad behavior, police, prisons, and so on. There aren't many institutions in place designed to reward good behavior. Some people give out rewards and commendations and so on, but there's a huge difference, and you can see the logic behind the difference. Suppose you could do just one of two things: you could either reward your neighbor if he's nice to you, or punish him if he's bad to you. You'd probably want to do the second, because him being nice to you, well, that's nice, but him being bad to you, he could kill you. 
And so the bad things people can do to you are typically worse than the good things people can do for you are good. Psychologists call this a negativity bias. And there's actually some evidence from our studies with three-month-olds, these are looking-time studies, showing that they're sensitive to bad behavior but not sensitive to good behavior. So I think that, as a rule, the appetite to punish is going to be stronger than the urge to reward. >> Okay. And a last question: since these babies seem to have this very early sense of what's good and what's bad, this moral sense that's built in, how come so many of them grow up to be bad people? >> The question of bad people is something we're going to return to over and over again in the course, and there are different answers to give, but here's one answer I'll try out. The adults we view as bad people, and many of them really are bad people, don't think of themselves as bad people. They have, in some ways, normal moral psychologies. They have a sense of right and wrong; they're capable of moral judgments; they understand reward and punishment. They do terrible things because they think they're right, because they think they're actually doing good things, or because they have powerful desires that override their moral sense, or because they know these are bad things but think it's necessary to do them in order to accomplish other good things. So there's a real temptation to say the world contains evil people with no morality, who do terrible things to the rest of us. And I don't doubt that some such people exist, but most of the evil in the world is done, I think, by people with totally normal moral psychologies, with a normal allotment of rationality, moral judgment, empathy, compassion, and all of that, who find themselves in situations where they go wrong. 
So whole countries go wrong, as when, all of a sudden, a country full of normal, decent people ends up committing genocide against some group within it. And it's not as if they all have some brain parasite. Rather, it's that our morality is malleable and can be changed by circumstance. >> All right. Well, those are all the questions for this week. >> Thank you. >> Thanks.