Re: The Morality Project
Posted: Sun Dec 28, 2014 5:43 pm
Working out from my first principles...
To begin with, I'm an existentialist. My metaphysics is materialism and my (meta) ethics is moral nihilism.
The one leads to the other. No value system has objective worth over any other. Some may be more or less self-consistent, but that only matters if you care about consistency (which I do, because I want to actually fulfill my terminal value as effectively as possible and not just senselessly reach for it).
I choose to assign complexity of experience as my terminal value. I have only one terminal value because having more than one can lead to conflicts that leave you unable to act until you choose between them. Because I do not want to be left unable to act, I would eventually have to choose one value over the other anyway, so I decided to make that choice now rather than later.
My value means that I support things that produce complexity. The optimal universe of the hedonist, where there are, say, 20 units of pleasure (hedons) and 0 units of suffering (dolors), is not my optimal universe. There are a few details that make this a little less simple but, ignoring those, I want a universe with 10 hedons and 10 dolors.
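To make that contrast concrete, here's a toy Python sketch. The numbers and the scoring functions are entirely made up (nothing like a real calculus); the point is just that a hedonist tally and a complexity-leaning tally can rank the same two universes in opposite orders:

[code]
# Toy illustration only: the universes, the scoring functions, and the
# numbers are all hypothetical, chosen to mirror the example above.

universes = {
    "hedonist_optimum": {"hedons": 20, "dolors": 0},
    "mixed_universe":   {"hedons": 10, "dolors": 10},
}

def hedonist_score(u):
    # Classic hedonic calculus: pleasure minus suffering.
    return u["hedons"] - u["dolors"]

def complexity_score(u):
    # Stand-in for "complexity of experience": reward having both kinds of
    # experience present, not just the net balance of pleasure over pain.
    return min(u["hedons"], u["dolors"]) + 0.1 * (u["hedons"] + u["dolors"])

for name, u in universes.items():
    print(name, "hedonist:", hedonist_score(u), "complexity:", complexity_score(u))

# hedonist_optimum wins on the hedonist measure (20 vs 0),
# while mixed_universe wins on the complexity stand-in (12.0 vs 2.0).
[/code]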
Certain kinds and amounts of dolors are undesirable not because they are dolors but because they permanently damage a being's ability to produce and experience complexity (psychological trauma, physical disability, &c).
Beings with wills are possibly the most valuable things in the universe, because their ability to produce and experience complexity is without match, but this is in part because their wills are independent: each is its own agent. A hive mind is a single will with many hands, and while it might make for a more complex universe to have, say, 10 of 100 bodies belong to a hive mind instead of 0 of 100, we wouldn't want to go further than this. 10 out of 100 leaves us with 90 beings with independent wills, with all the possible interactions that we could have between them, and so on and so forth (similarly, multiculturalism is preferable to monoculture).
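As a rough illustration of why a little hive mind might add complexity while a lot of it destroys it, here's a toy Python tally of independent wills and their possible pairwise interactions. The "variety bonus" for having two kinds of mind around is pure invention on my part, just to make the curve visible:

[code]
from math import comb

# Hypothetical tally: count distinct wills as bodies are absorbed into a
# hive mind, score the pairwise interactions possible between wills, and
# add an invented bonus when more than one kind of mind exists at all.

TOTAL_BODIES = 100

def wills(hive_bodies):
    independent = TOTAL_BODIES - hive_bodies
    return independent + (1 if hive_bodies > 0 else 0)

def interaction_score(hive_bodies):
    n = wills(hive_bodies)
    variety_bonus = 1000 if 0 < hive_bodies < TOTAL_BODIES else 0
    return comb(n, 2) + variety_bonus

for hive_bodies in (0, 10, 50, 100):
    print(hive_bodies, wills(hive_bodies), interaction_score(hive_bodies))

# 0 bodies   -> 100 wills, score 4950
# 10 bodies  ->  91 wills, score 4095 + 1000 = 5095  (a little hive helps)
# 50 bodies  ->  51 wills, score 1275 + 1000 = 2275  (too much hive hurts)
# 100 bodies ->   1 will,  score 0
[/code]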
Further, if a being is unable to exercise its agency then its will is effectively erased, and this to the extent that its agency is overridden. Therefore, beings have the right to act for themselves insofar as they do not infringe upon the agency of others, even to the point of self-destruction.
My value system is thoroughly consequentialist, but it is rule consequentialism. Which is to say, I attempt to look at decision points in context. Maybe murdering 10 people to save 100 will work out in this particular instance, but (to spit out a random number) it will only work out 1 time out of 5, and the other 4 times it will go so badly as to override the good from the time it worked. Therefore, I will apply the general rule "do not murder people in order to save more people," because following the rule does better in expectation than gambling on each case.
(Murdering people is bad because it violates someone's agency. It can also be wrong in that it may cause a net loss of complexity, but the calculus here is fuzzier and more open to interpretation, and in any case it is never so great as to permit violating someone's agency to keep zem from committing suicide.)
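Plugging the made-up 1-in-5 odds from above into a quick expected-value check (the per-failure cost is just as hypothetical as the odds) shows in Python why the blanket rule beats gambling case by case:

[code]
# Illustrative numbers only: 1-in-5 odds from the post, plus a hypothetical
# cost for each failed attempt, to show the act-by-act gamble losing to the
# rule in expectation.

p_success = 1 / 5
net_lives_saved_on_success = 100 - 10   # the trade actually pays off
net_lives_lost_on_failure = 10 + 25     # the 10 murdered plus knock-on harm (hypothetical)

expected_value_of_trading = (
    p_success * net_lives_saved_on_success
    - (1 - p_success) * net_lives_lost_on_failure
)
expected_value_of_rule = 0  # "never make the trade" neither saves nor costs extra here

print(expected_value_of_trading)  # 18 - 28 = -10: worse than following the rule
print(expected_value_of_rule)     # 0
[/code]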
All of this leads to some interesting decisions in the trolley/train thought experiment: If I have to choose between saving 1 person and 5 people then I will, all things being equal, choose to save the five. This does not change based on who the train is originally aimed at, or on whether I must flip a switch to change the track or divert the train myself. Choosing not to act is still a choice, and therefore I still have to choose one way or the other: one choice leads to one person dying, and the other leads to five people dying.
An important detail for me in this thought experiment is that you cannot warn any of the people on the track. This is important generally because being able to warn them defeats the purpose of the exercise, but it is extra important to me because of the matter of agency.
Sometimes another form of the thought experiment is given, where instead of changing the track or diverting the train there is a fat man whom you can push into the train's path (it is assumed that you are not large enough to make sacrificing yourself effective). If there is no time to communicate with the fat man then I am still left with the same choice as before: acting in such a way that one person dies, or acting so that five people die. This is because in this situation the fat man really doesn't have any agency to be taken away (in terms of effective/practical agency). If I have time to communicate with the fat man, however, then it would be a violation of his agency to push him in front of the train without his consent. In fact, it would also be a violation of his agency not to offer him the choice. That is to say, if I decided to just assume that he wouldn't agree to sacrifice himself, or if I thought that it would be awkward to bring up, or anything else, then I would be in the wrong for not having talked with him about it.
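If it helps, here is the above restated as a little Python decision procedure. The scenario parameters and the function are my own framing of what I just described, not some general moral algorithm:

[code]
# Encodes the positions described in the post: compare outcomes directly,
# except that using a person who *can* be asked requires asking, and his
# refusal is final.

def trolley_decision(deaths_if_act, deaths_if_refrain,
                     requires_using_a_person=False,
                     can_communicate=False,
                     person_consents=False):
    if requires_using_a_person and can_communicate:
        # Agency comes first: the choice must be offered, and refusal is final.
        if not person_consents:
            return "do not act (and you were obligated to ask)"
    # Refraining is itself a choice, so compare the two outcomes directly.
    return "act" if deaths_if_act < deaths_if_refrain else "do not act"

print(trolley_decision(1, 5))                                         # switch case: act
print(trolley_decision(1, 5, requires_using_a_person=True))           # no time to ask: act
print(trolley_decision(1, 5, requires_using_a_person=True,
                       can_communicate=True, person_consents=False))  # must ask; he refuses
[/code]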
Extra note: That no value system is objectively superior to any other does not mean that I am fine with letting a host of hypothetical cloned Hitlers run free because Their Value System Is As Valid As Mine. Some value systems complement mine. Some can at least coexist with it. But other value systems can do neither, and instead represent an active threat to what I value. Murder, bigotry, and excessive concern with what other people put in their own bodies or do in their own bedrooms with consenting individuals (capable of giving consent in the first place) all work against my value system. Ergo, I oppose them. But I oppose all sorts of things that I deem horrible not because they're horrible according to some objective measurement but because our respective value systems are incompatible, and so they are horrible to me. And so long as they're incompatible with the value systems of enough other people, we can collectively agree that the cloned Hitlers need to be opposed by every ethical means.
(this is where we fall apart into disunity over arguments about which methods are or aren't ethical, but that's another story)
...
There are three main types of arguments for claiming an objective meaning to life, an objective morality, or an objective anything else that exists only in "mindspace," so to speak. The first says that, just as a jar can be called a "good jar" or a "bad jar" insofar as it fulfills the purpose for which the potter made it, so too are we "good humans" or "bad humans" insofar as we fulfill the purpose for which we were made by a divine being.
Obviously this doesn't work for me, because it doesn't seem to be the case, even hypothetically, that there are any beings which differ from humans in kind rather than in degree (that is, sufficiently advanced aliens might appear godlike to us, but not because of any inherent difference in nature that will forever separate us).
The second type of argument supposes that there is a Platonic Form of Goodness or something similar, against which we can compare our actions. We can see it in various forms, including in some or most kinds of Rationalism (of the classical variety, not the more modern kind that is just about being rational). Again, as a materialist I have no room for Platonic Forms or anything like that, and the cases of Rationalism that avoid any such mode of existence fall into the third category.
The third category takes a thing that happens to be so and derives some kind of moral claim from it. This is Hume's is-ought problem. Just because, for example, humans seek pleasure and avoid pain, it does not follow that the universe ought to be one way or another.