The story so far (3)
Story so far
In Story so far we discussed how we hold conflicting views. One time we want to save people, another time we don’t. Our feelings about the same thing change a lot depending on when we are asked. We further realized that our feelings are inconsistent and that we have circular preferences. It appears quite unclear what we truly want, because our feelings on the same topic keep shifting from one moment to the next. Looking at people dying makes us cry. We want to do something, but then too bad, I have to go on a vacation. If we leave it up to feelings, then I am going on a vacation, because that is what is important to me right now. I will also feel bad for those people who are dying, but then that’s that. Our value system is masked by heuristics and biases. Our heuristics and biases are the metaphorical eyes through which we glimpse our value system. Having seen some of these heuristics and biases and related examples, we understand that their output might not always be the right thing for us, irrespective of the value system underneath.
In Story so far (2), we realized that everything is a feeling. As much as we might want to frown upon feelings because they are inconsistent and circular, they are all we have as an indication of what we might want. Having understood that we have feelings, and that they inform us badly about what we want, we wonder what options we have. We check if we can simply ask someone else what we should be doing in life, and sadly we can’t. We have a solution inside our head against which we check whether an action seems okay or not. The gap between accepting an action as okay and actually doing it seems large. For example, you can agree (based on the solution inside your head) that people are dying and that you need to save them, and still end up doing squat. Make no mistake, we only partially know what our true solution inside our head is. We are yet to uncover it fully.
Alas, we stumble upon [Eliezer’s words][ele_intuitions] telling us that you can’t give up on feelings. That is the only thing we have. If you give up on feelings by not trusting them at all, then you are but an empty philosopher, a rock. If you follow feelings to the letter, you are met with circular preferences. Paraphrasing Eliezer, “Feeling is not a curse word when it comes to morality, because there is nothing else to argue from. Considering the way we have been made, it seems like a good idea to formalize, reflect, extrapolate out to see if an idea has sensible consequences, etcetera.”
We briefly attempt to understand how important life is to us. We deduce, based on how we emote, that we want to save lives and not let go of them. We also explore cases wherein we see that location, distance, and visibility should not really matter to whether we want to save a life or not.
In this week’s episode…
The importance of life
I feel like I am going to have to assume that life is very important to us, as evidenced by how strong our feelings are. But how important is it? What should we be ready to give up for it? How I can determine, from my faulty feelings, what exactly is the way to go, I am not sure. In other words, what does my true value system want (for now, only with respect to life)?
As discussed in earlier posts, two Israeli scientists observed that people are willing to donate more towards the costs of a very expensive treatment for a single child than for a group of eight children. How can it be better to have one extra happy child in this world, while it is somehow worse to have eight more happy children? How can the value of one child be reduced to such an extent just because there are seven other children? In another study, the willingness to pay was the same whether we were saving 1k birds or 10k birds. Looking at this, and at the fact that the brain never appears to understand large numbers, we conclude that there is a high chance the brain is blatantly making a mistake. We concluded earlier that this brain cannot be expected to do the arithmetic subconsciously. So now we are left with the task of understanding what the brain might actually want, you know, because our brain is faulty and all that. We know that the brain is faulty, that its outputs are flimsy, and that we need to do something, but we don’t know what exactly we should do.
Well, Eliezer says that we need to ‘shut up and multiply’ the value we get from saving one life. Use cold calculation and come to the answer. If one life is worth X, then two lives should be worth 2X. Every life is worth the same, because there is no reason to discriminate one life from another. Unless, of course, there is a reason to say that some group of people is worth only 0.5X (and there isn’t), one life is worth X and n lives are worth nX. But the brain does not scale feelings like this: you don’t feel the difference between saving 1k lives and saving 10k lives.
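To make the contrast concrete, here is a minimal sketch of what ‘shut up and multiply’ says versus what the scope-insensitive brain seems to feel. The logarithmic curve standing in for the felt response is purely an assumption for illustration, not something measured in the studies above.

```python
import math

X = 1.0  # assumed value of saving one life (arbitrary units)

def multiplied_value(n, x=X):
    """'Shut up and multiply': n lives are worth exactly n * X."""
    return n * x

def felt_value(n, x=X):
    """Illustrative stand-in for the brain's scope-insensitive response.
    The logarithm is an assumption for this sketch, not a measured curve."""
    return x * (1 + math.log10(n))

for n in (1, 1_000, 10_000):
    print(f"{n:>6} lives -> multiplied: {multiplied_value(n):>10.1f}, "
          f"felt (illustrative): {felt_value(n):>5.1f}")
```

Under multiplication the numbers keep growing with n; under the flat, felt curve 1k and 10k lives barely register as different, which is the shape of the mistake in the bird study.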
Why does the value of saving every successive life have to be the same? Why can’t the first life we save be worth more to us than the second, and so on for each successive life? Isn’t that what we observe in the birds study? Earlier we only concluded that the brain’s verdict of ‘value of 10k birds = value of 1k birds’ seems wrong, given that it doesn’t understand large numbers; we are yet to discover what needs to be done instead (a sketch of what such a diminishing-value model would imply follows below). In real life though, if I see a person dying and I have just saved someone, I can’t imagine behaving any differently when I see another person dying right after. Looking at natural disasters, I have seen video after video of people working round the clock, for some stupid reason, to save as many people as they can. This only tells us that the total value gained is at a peak; it doesn’t offer any information about the value of successive lives. Nevertheless, the number of people saved has been maximized in each of these cases, where emotional arousal is high. Look at this, for example: a man works round the clock to save people, goes out of his way to stay awake overnight, and saves so many children.
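For what it’s worth, here is a quick sketch of what a diminishing-value model would imply. The discount ratio and the numbers are assumptions chosen only to show the shape of the consequence: the total value of saving lives flattens out, much like the flat willingness to pay in the birds study.

```python
# Hypothetical diminishing-value model raised in the question above:
# the k-th life saved is worth X * r**(k - 1) for some discount 0 < r < 1.
# X and r are made-up numbers, used only to show what the model would imply.
X = 1.0
r = 0.9

def diminishing_total(n, x=X, ratio=r):
    """Total value of saving n lives if each successive life counts for less."""
    return sum(x * ratio ** k for k in range(n))

for n in (1, 100, 1_000):
    print(f"{n:>5} lives -> diminishing total: {diminishing_total(n):6.2f}  "
          f"(linear total would be {n * X:,.0f})")
```

With a ratio like 0.9, the total barely moves between saving 100 lives and saving 1,000, which is exactly the kind of flatness the birds study showed.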
Whatever the curve for each successive person looks like, I don’t care about it. Fuck that. It’s useless for me right now. What is the action I hope to see when I see two people dying, or many people dying? I hope to see people running around to save as many lives as they can. I believe I am justified in looking at the doctor in this example for our potential role in society, because we as humans are flawed. For example, when something happens right around us, the emotional arousal is much higher, and in other cases not so much. We have seen earlier that the emotional arousal for an event happening in a closed, sealed room is zero, because we don’t see it. There are things we would have emotional arousal towards, and things we wouldn’t, despite the fact that both lead to the same outcome. Someone telling us that a person is dying 20 km away, and seeing a video of a person dying 20 km away, produce different emotional arousal, even though both are events of someone’s stoppable death. That is why I believe I am justified in looking at the response of someone who is in a place of very high emotional arousal. Hence we look at the doctor to provide us data regarding our potential role in society.
There were many other doctors in the same hospital, but only one had the balls while the others gave up (according to the story, at least). There were many people in the world, but it took Gandhi to make change. Gandhi went all the way, right from his clothing to not harming anyone at any cost, with all his values. But shouldn’t emotional arousal have been the same for everybody who was in that situation? Why didn’t everyone in the scenario act? I am afraid of naively analyzing the situation, but anyway, here goes.
How am I justified in assuming that the actions of Gandhi or this doctor were the right ones, and not those of the other doctors or compounders who did little or gave up? Having emotional arousal doesn’t necessarily translate to action; emotional arousal is only an indication of what is important to us. Capability, belief in oneself, and leadership could also be what differentiates one from the other. Not everyone can get work done, or think creatively and get the work done. When you think of yourself and the cost you will bear, it is possible, I guess, for the other doctors to still stay away. Could it be that the other doctors didn’t have the capability or the … Would they have done it if they had the resources, such as money and ideas, to save people? The emotional arousal could have been the same… But anyway, we do not want to naively comment about things we don’t have information on. So what do we want to do?
Why don’t we find it in all humans? We see MLAs doing all sorts of shit, not caring for people, allowing them to suffer; why not use these people as role models? Does the same emotional arousal still hold?
How can I say Gandhi represented the value system of ‘life importance’, and not the others who were killing people? I seem to be coming back to this over and over again!
So maybe one life is worth X and n lives are worth nX. Now, some other event may be worth Y, and m of them might be worth mY. We know we can’t necessarily trust the brain with all sorts of calculations, but the brain can still point to what we might want in life. Maybe we can take an isolated incident, run it in our minds, and determine its possible value, then check that against actions in the world. So it may be possible to know whether X > Y or Y > X. It may also be possible to express the value in, say, money. I am sure I can come up with something other than a life that is worth, say, Y to me. Then mY can be greater than X (the value of saving one life) if m is sufficiently large. What happens in that case? What if Y is something as silly as having sex?
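As a toy illustration of that worry: under the straight ‘multiply’ rule there is always some m at which enough copies of a small good outweigh one life. The specific numbers for X and Y below are made up purely for the sketch.

```python
import math

X = 1_000_000.0  # assumed worth of saving one life (arbitrary units)
Y = 100.0        # assumed worth of some small pleasure (arbitrary units)

# Smallest whole m for which m copies of the small good outweigh one life,
# under straight multiplication (no diminishing value).
m = math.floor(X / Y) + 1
print(f"m = {m}: m*Y = {m * Y:,.0f} > X = {X:,.0f}")
```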
And could there be a case where one life is worth more than another? For example, I think we have this innate thing where we feel that saving a child and its mother is much more important, dating back to the days when this heuristic made sense (the stone age).
Naive reasoning
- I am extremely worried about the way I am reasoning. Sometimes I might say, ‘this is all we have, we just need to consult the solution inside our head’, and leave it at that, because that’s all we’ve got. But you could easily be wrong. I guess that’s why we have peer review, but what if the peer is also misguided, like the souls of terrorists?
It appears that I already know what I want the solution to be, and that I am consciously adding reasoning to get myself there.