Moral Courage

“We judge ourselves by our intentions and others by their behavior.”

-Stephen Covey

Just about everybody thinks of themselves as a good person. Very few people will admit that they are simply bad. Even people whose sole motivation is selfishness will often justify their actions, saying something to the effect of, “Hey, if he didn’t want me to take advantage of him, he should have read the fine print.” That is, it’s OK to take advantage of somebody who isn’t on his guard; our proverbial speaker has a moral code. It’s not one the speaker actually follows, mind you; I suspect that in most cases it’s one the speaker modifies after the fact in order to justify his actions. But the point remains: the speaker at least feels the need to justify himself. Other such justifications are simpler: “The world never did anything for me, so why should I help anyone else?” Something a selfish person would say, but it still reveals a moral code; presumably, if the world actually did something for him, he would be bound to do things for other people.

We all have moral codes. Some people use theirs as a guide, some as an excuse, but it’s very rare to find somebody with no moral dimension whatsoever. Just about everybody likes to feel that they are inherently good.

There are two main ways to do this. The first is to be inherently good. This is hard for many reasons: it requires sacrifice, discipline, patience, humility, and deep introspection. And if the goal is to feel better about yourself, well, the first three requirements are hard, and the last two work against feeling good about one’s self.

The second way is much simpler: compare yourself to others! But this is also hard, because sometimes people are better than you. In fact, it doesn’t seem to work too well in other fields. Take income or education, for instance. When comparing ourselves against others, we frequently (or at least I do) compare ourselves against the most successful person in a group: “Ugh, why does this person make more money than I do?” Same for education: “Ugh, that person went to Yale; aren’t I as good as he is?”

Yet it works wonders for our sense of moral superiority. While financially, vocationally, and educationally we always compare ourselves to the best, in a bizarre twist we tend to compare ourselves to the worst people morally. Part of the reason is that “we judge ourselves by our intentions and others by their behavior.” We hold our moral beliefs up as evidence of our virtue, even when we fall short of those beliefs. This to me is amazing: the loftier our goals, the more we fall short of them, and yet the better we feel about ourselves because of it?

Thus, so long as we mean well, we are excused and even rewarded; but when others act unfairly, we seize upon the example to remind ourselves that we are better than they are, since their actions must be a reflection of their beliefs, and therefore their beliefs aren’t as good as ours.

But it gets worse. We choose sides on controversial issues, and then believe ourselves better for taking whatever side we take. We feel good about ourselves just for supporting (or opposing) gay marriage, even if it requires no personal sacrifice, courage, or risk of ostracism to support (or oppose) it. (I’m not saying that nobody ever faces these things for their opinions on gay marriage or anything else; I am merely saying people hold themselves up as good merely for holding opinions.) There are people who sacrifice things for their beliefs, and there are people who have labored for years on causes and, when they finally win, feel a sense of triumph and vindication. However, my guess is that the majority of people who changed their Facebook icon to a rainbow weren’t facing any real risk of retaliation for doing so.

Going along with the crowd in any one instance may occasionally (or even often) be the right thing to do. But it’s virtually never a courageous thing to do. So if you’d like to advertise on Facebook how much you dislike the fact that Cecil the lion was killed, by all means do so. But if you’ve never sacrificed anything to prevent poaching or preserve habitats or save lions, you don’t really have anything to brag about.

Now, I could go on talking about morals or politics here, and one can seriously argue that what I’ve described is benign or even beneficial. After all, choosing sides is a political activity, and political activity is one way things change. Take, for example, abolitionists in pre-Civil War America. If they all really had the courage of their convictions, they would have helped organize the Underground Railroad and helped fugitive slaves escape; yet only a very small percentage of them actually did. Still, if the abolitionist movement had been confined to those people willing to break the law and face serious punishment for acting on what they believed, slavery never would have ended.

This works for taste and art too. People want to be original, but lack the ability to be original, so they join a movement instead. They co-opt the opinions of somebody who was original, then try to impress their friends by repeating those opinions. It’s most pronounced on the internet, where, depending on where you surf, you’ll find people with whole allegiances built around hating various movies. (Is there any reason so many people hate Inception or Prometheus? They’re not bad movies.)

Forming actual intelligent opinions about art is tough; parroting opinions is much easier. If part of your identity is talking a lot about culture on the internet or even in person, it’s a lot easier just to repeat things than it is to think for yourself, especially if you’re trying to impress others.

It’s easier to appear sophisticated than to actually be sophisticated. Likewise, it’s easier to appear moral than to be moral. This is true even if you’re only trying to appear moral or sophisticated to yourself.

Air Travel Safety

I found a website that is basically a list of all plane crashes in the US going back 15 years, and it’s kind of insane how safe air travel is.

Looking at serious plane crashes (planes with a capacity of 10 or more with at least one fatality), there have been only 14 since the year 2000. The fatality count by incident is below:


*includes the 9/11 hijackers in fatality totals.

The total number of people killed in these major incidents is 886, or 867 if you don’t count the 9/11 hijackers.

The vast majority of those deaths happened in four instances: a 747 from Taipei to LA in October 2000, the Concorde crash in July 2000, the bizarre crash of an American Airlines plane in New York City just months after 9/11, and the September 11 crashes themselves.

All of those took place in 2000 or 2001; since 2002 there have been no crashes with 100 or more fatalities in the US (including flights into or out of the US).

Since 2000, there have been about 10 billion passenger flights in the US (about 2 flights per person per year), which works out to about a 1 in 11 million chance of dying on any given flight.

If we look only at the last ten years, we get an even better picture: 116 fatalities across 7.2 billion passengers, for a 1 in 62 million chance of dying on any given flight. Over the past five years it becomes a one in a billion chance, although with only one incident in the sample, that probably understates the risk somewhat.
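If you want to check the arithmetic, here’s a quick back-of-the-envelope calculation using the fatality and passenger-flight counts quoted above (rounded, like everything else in this post):

```python
# Rough odds of dying on any given US flight, from the figures quoted above.

def one_in(fatalities, passenger_flights):
    """Express per-flight death risk as 'one in N' odds."""
    return passenger_flights / fatalities

# Since 2000: ~886 fatalities over ~10 billion passenger flights
print(f"since 2000:  one in {one_in(886, 10e9):,.0f}")

# Last ten years: 116 fatalities over ~7.2 billion passenger flights
print(f"last decade: one in {one_in(116, 7.2e9):,.0f}")
```

which lands right around one in 11 million and one in 62 million, respectively.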

What’s equally crazy is the change in the safety numbers. In the 1990s a US passenger had about a one in 5 million chance of dying on any given flight, which is pretty damn safe; by comparison, boarding any given flight in the 1990s was about as dangerous as driving 16.5 miles was in 2007. The 2000s have been about 70% safer (or, if you prefer, 42% less dangerous), meaning that boarding any given flight is about as dangerous as driving 9.5 miles (in 2007 miles). The past ten years? You’re about as likely to die from boarding any given US flight as from driving 1.15 miles. And if we limit the data to the past five years, boarding any given flight is as dangerous as driving the length of a football field.
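The driving comparisons work the same way: divide the per-flight death risk by the per-mile death risk of driving. The driving figures below (~41,000 deaths over ~3 trillion vehicle-miles, approximate 2007 US numbers) are my own assumption for illustration, which is why the results come out slightly lower than the mileages quoted above; the order of magnitude is the point.

```python
# Converting per-flight death risk into 'equivalent miles of 2007 driving'.
# Assumed driving figures: ~41,000 deaths over ~3 trillion vehicle-miles.

DEATHS_PER_MILE = 41_000 / 3e12  # approximate 2007 US rate (assumption)

def equivalent_miles(per_flight_risk):
    """Miles of driving that carry the same death risk as one flight."""
    return per_flight_risk / DEATHS_PER_MILE

print(f"1990s flight (1 in 5M):  {equivalent_miles(1 / 5e6):.1f} miles")
print(f"2000s flight (1 in 11M): {equivalent_miles(1 / 11e6):.1f} miles")
```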

Of course, the last five years probably aren’t actually a good indicator. If we’re at a point where air crashes happen on average once every 10 years, then looking back only five years will necessarily give you a bad estimate (i.e., higher than average if the previous 5 years had a crash, or lower than average if they didn’t). So your chances of dying when boarding a flight probably aren’t one in a billion; they’re probably more like one in a hundred million or so.
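You can see the windowing problem with a tiny simulation (a sketch with made-up rates, not real crash data): if crashes arrive at an average rate of one per decade, most five-year windows contain no crash at all, and the rest overstate the rate.

```python
# Why a 5-year lookback misleads for a once-a-decade event: most windows
# contain zero crashes (estimated rate looks like 0), and the remaining
# windows contain at least one (estimated rate of >= 1 per 5 years,
# double the true rate of 1 per 10 years).

import random

random.seed(0)
ANNUAL_P = 0.1      # one crash per decade, on average
WINDOW = 5
TRIALS = 100_000

zero_windows = 0
for _ in range(TRIALS):
    crashes = sum(random.random() < ANNUAL_P for _ in range(WINDOW))
    if crashes == 0:
        zero_windows += 1

print(f"5-year windows with no crash: {zero_windows / TRIALS:.0%}")
```

With these numbers, roughly 59% of windows show no crash at all (the analytic answer is 0.9^5 ≈ 0.59), and every remaining window implies a rate at least double the truth.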

All this is to say that the US has done an incredible job at promoting flight safety. Regulators have a clear mandate to make things safer, there isn’t much of an opposition group (while there are people who might want fewer regulations in principle, and some who might want to cut corners on safety, nobody is against aviation safety), and success can be clearly measured. When these things hold, you get government success; the NTSB and FAA are examples of government greatness. Boring greatness, mind you, and perhaps they are impressive because they’re boring. We have taken something mad, traveling at 500 miles per hour suspended miles above the ground by nothing but air, and made accidents as unlikely as Powerball victories.


Number of Air travel passengers:

Number of auto fatalities and air fatalities before 2000:

Number of air fatalities since 2000:

The Great Filter, Part Three

“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”

-Arthur Conan Doyle

As promised, here is my third essay on the Great Filter; let’s talk about whether civilizations lose their desire to colonize the galaxy.

As a refresher, in order for something to be a filter, it needs to have the following characteristics:

1: It must prevent the colonization of the galaxy.

2: It needs to be stable (or long-lasting); if it affects a civilization in time period x, it must still do so in period x+1.

3: It needs to be universal, affecting (nearly) all civilizations, regardless of biology or culture.

So will we lose not our ability to colonize the stars, but our desire to do so? What could cause this? The simplest answer is that we will create virtual worlds and then lose ourselves within them; at that point, we simply wouldn’t want to colonize anything anymore, because dead planets hold no interest compared to the imaginative worlds we can create for ourselves.

The problem with this is that while a human in some sort of computer-induced dream state may use orders of magnitude fewer resources than a normal human, we would still need some energy. And if we still have some sort of desire to multiply ourselves, then we should expect to use as much of the universe’s energy as we can.

In fact, I think it’s a fairly easy step to say that creating virtual worlds would give us MORE reason to colonize the stars, not less. After all, we wouldn’t have to care about the habitability of planets; computers have been shown to work quite well in space and other hostile environments (Mars, etc.).

Another reason we might not want to colonize the visible universe is that we find something better; maybe all the cool alien species are hanging out in hyper-space right now. While this may be the case, we have no evidence of this hyper-space yet, so it remains firmly in the realm of speculation.

One final idea, put simply, is that as civilizations advance, their preferences become similar. That is, there is some sort of universal truth which every civilization, as it becomes more advanced, begins to believe in and adhere to.

This truth would have to have something to say against the virtue of reproducing indefinitely, either because it’s not utility-maximizing, or because it’s not morally correct (or both).

These ideas seem very weird, the first even more so. It seems quite odd that all civilizations, regardless of their starting culture, biology, genetics, etc., will, on a long enough time scale, become very similar in their civilization-scale desires. While it’s always possible that there’s some mechanism which would cause this, I think it is bizarre enough that we can dismiss it.

The alternative is that all intelligent civilizations are basically identical in terms of utility; that if we were to suddenly find another alien species, they would basically be us, with the same fights over religion, the same consumerism, the same concept of aesthetics, etc. This also seems very unlikely to me: first, because there is plenty of diversity in behavior between human cultures here on earth, and second, because even if it were true, then based on what we know about human desires it would increase, not decrease, the desire to colonize the stars.

The other option is that we lose the desire to go among the stars not because we don’t gain utility from doing so, but because it’s somehow not morally right. To put it simply: all civilizations, as they become more and more advanced technologically, also become more advanced philosophically, and they begin to reach the same conclusions as all other civilizations at the same level of advancement, regardless of starting point.

Let’s use an example. Imagine an insectoid species: it has a queen which lays thousands or millions of eggs, the vast majority of which grow into beings which themselves don’t reproduce; instead they somehow serve the colony. Some, perhaps all, of them become sentient, conscious beings (basically, think of a termite or ant colony if termites or ants were intelligent). This species has not only “worker” drones but “thinker” drones as well, whose job is to consciously design things, philosophize, advance the bug civilization, etc. We can probably assume that the moral framework of this civilization would be radically different from our own.

Yet if we observed such a civilization, and over time it became more and more like ours in its moral dimension (or we became more and more like it), what would our conclusion be? Furthermore, let’s assume that all civilizations everywhere become more like each other, from civilizations populated by telepaths to those populated by intelligent asexual slime molds: as they get more advanced, they become more alike morally.

Let’s pose another question. Say these civilizations all develop hyper-speed spaceships independently, and the designs are at first all diverse. Yet over time, their spaceships become more and more alike, even though the civilizations have never made contact with one another. What this tells us is simple: due to the laws of physics, there is one type of hyper-speed drive that is better than all the others, and regardless of a drive’s original design, constantly improving it will make it more and more like the “ideal” hyper-speed drive. Of course, the reason this happens is that there is a single law of physics (or set of laws of physics) universal to the entire cosmos.

Returning now to our speculation about the alien bugs: if all civilizations become more and more like each other morally (despite no contact between them), then by far the most likely conclusion is that there is a single law of morality (or set of laws of morality) universal to the entire cosmos.

So, relating this back to the question of the great filter, we get the following: there is a universal, observable law of morality, which all civilizations sufficiently advanced to colonize the galaxy will have discovered and will adhere to, and which proscribes the colonization of the galaxy.

I’m proposing this as an explanation for the Fermi Paradox. Of course this is a stretch; what I’m basically saying is that when we look to the stars, we don’t see stars with a certain level of infrared radiation, and therefore we can conclude there is objective moral truth. Now, it’s entirely possible that I’m making mistakes about some of the possibilities: maybe I’m underestimating the chance of nuclear war, or misunderstanding some argument; perhaps there is no great filter and the universe is teeming with intelligent life that we just can’t see or recognize; or perhaps there is another filter I just haven’t considered. However, I do believe that, if nothing else, the existence of the Fermi Paradox should increase (if perhaps only slightly) our belief in the existence of universal moral law.