The Great Filter, Part Three

When you have eliminated the impossible, whatever remains, however improbable, must be the truth

As promised, here is my third essay on the Great Filter; let's talk about whether civilizations lose their desire to colonize the galaxy.

As a refresher, in order for something to be a filter, it needs to have the following characteristics.

1: It must prevent the colonization of the galaxy.

2: It needs to be stable (or long-lasting); if it affects a civilization in time period x, it must still do so in period x+1.

3: It needs to be universal, and affect (nearly) all civilizations, regardless of biology or culture.

So will we lose not our ability to colonize the stars, but our desire to do so? What could cause this? The simplest answer is that we will create virtual worlds and then lose ourselves within them; at that point, we simply wouldn't want to colonize anything anymore. Dead planets hold no interest compared to the imaginative worlds we can create for ourselves.

The problem with this is that while a human in some sort of computer-induced dream state may use orders of magnitude fewer resources than a normal human, we would still need some energy. And if we still have some sort of desire to multiply ourselves, then we should expect to use as much of the universe's energy as we can.

In fact, I think it's a fairly easy step to say that creating a virtual world would give us MORE reason to colonize the stars, not less. After all, we wouldn't have to care about the habitability of planets; computers have been shown to work quite well in space and other hostile environments (Mars, etc.).

Another reason we might not want to colonize the visible universe is that we find something better; maybe all the cool alien species are hanging out in hyper-space right now. While this may be the case, we have no evidence of this hyper-space yet, so it is firmly in the realm of speculation.

One final idea, put simply, is that as civilizations advance, their preferences become similar. That is, there is some sort of universal truth which every civilization, as it becomes more advanced, begins to believe in and adhere to.

This truth would have to have something to say about the virtues of reproducing indefinitely, either because it's not utility-maximizing, or because it's not morally correct (or both).

These ideas seem very weird, the first especially. It seems quite odd that all civilizations, regardless of their starting culture, biology, genetics, etc., will, on a long enough time scale, become very similar in their civilization-scale desires. While it's always possible that there's some mechanism which would cause this, I think it is bizarre enough that we can dismiss it.

The alternative to this is that all intelligent civilizations are basically identical in terms of utility; that if we were to suddenly find another alien species, they would basically be us, with the same fights over religion, the same consumerism, the same concept of aesthetics, etc. This also seems very unlikely to me: first, because there is plenty of diversity in behavior between human cultures here on earth, and second, because even if it were true, based on what we know about human ideas it would increase, not decrease, the desire to colonize the stars.

The other option is that we lose the desire to go among the stars not because we don't gain utility from doing so, but because it's somehow not morally right. To put it simply: all civilizations, as they become more and more advanced technologically, also become more advanced philosophically, and they begin to reach the same conclusions as all other civilizations at the same level of advancement, regardless of starting point.

Let's use an example. Imagine an insectoid-like species: it has a queen which lays thousands or millions of eggs, the vast majority of which grow into beings which don't themselves reproduce; instead they somehow serve the colony. Some, perhaps all of them, become sentient, conscious beings (basically, think of a termite or ant colony if termites or ants were intelligent). This species has not only “worker” drones, but “thinker” drones as well, whose job is to consciously design things, philosophize, advance the bug civilization, etc. We can probably assume that the moral framework of this civilization would be radically different from our own.

Yet if we observed such a civilization, and over time it became more and more like ours in the moral dimension (or we became more and more like it), what would our conclusion be? Furthermore, let's assume that all civilizations everywhere become more like each other, from civilizations populated by telepaths to those populated by intelligent asexual slime molds: as they get more advanced, they become more alike morally.

Let's pose another question. Say that they all develop hyper-speed spaceships independently, and the designs are all diverse. Yet over time, their spaceships become more and more alike, even though the civilizations have never made contact with one another. What this tells us is simple: due to the laws of physics, there is one type of hyper-speed drive which is better than all the others, and regardless of the original design, by constantly improving the drive each civilization will make it more and more like the “ideal” hyper-speed drive. Of course, the reason this happens is that there is a single law of physics (or set of laws of physics) universal to the entire cosmos.

Returning now to our speculation regarding the alien bugs: if all civilizations become more and more like each other morally (despite no contact between civilizations), then by far the most likely conclusion is that there is a single law of morality (or set of laws of morality) universal to the entire cosmos.

So, to relate this to the question of the great filter, we get the following: there is a universal, observable law of morality, which all civilizations sufficiently advanced to colonize the galaxy will have discovered and will adhere to, and which proscribes the colonization of the galaxy.

I’m proposing this as an explanation for the Fermi Paradox. Of course this is a stretch; what I’m basically saying is that when we look to the stars, we don’t see stars with a certain level of infrared radiation, and therefore we can conclude there is objective moral truth. Now, it’s entirely possible that I’m making mistakes about some of the possibilities: maybe I’m underestimating the possibility of nuclear war, or misunderstanding some argument; perhaps there is no great filter and the universe is teeming with intelligent life that we just can’t see or recognize; or maybe there is another filter which I just haven’t considered. However, I do believe that, if nothing else, the existence of the Fermi Paradox should increase (if perhaps slightly) our belief in the existence of universal moral law.


The Great Filter, Part Two

A while ago I wrote about the “Great Filter,” or the reason why we don’t see aliens everywhere we look in the universe. Read about it here:

Last time, I argued that the great filter cannot be a totalitarian regime, is very unlikely to be either berserkers or environmental damage, and is somewhat unlikely to be nuclear war and/or pandemic. That leaves us with two more candidate filters: that starships are hard, or that civilizations aren’t interested in colonization.

Today, I’ll talk about whether starships are hard.

In order for something to be a filter, it needs to have the following characteristics.

1: It must prevent the colonization of the galaxy.

2: It needs to be stable (or long-lasting); if it affects a civilization in time period x, it must still do so in period x+1.

3: It needs to be universal, and affect (nearly) all civilizations, regardless of biology or culture.

So do difficulties building starships meet all three criteria? For item 1, definitely. For item 2, also definitely: the laws of physics governing space travel aren’t changing, so if it’s hard to build a spaceship today, just waiting won’t make it easier. At first glance, it appears to be universal as well, as all civilizations face the same laws of physics. However, there are two reasons this might differ between civilizations. First, some species may be better able to survive on starships; for instance, they may be smaller. Second, some civilizations may start out closer to colonizable star systems than others. Even with these two caveats, we can say that starship difficulty hits the “nearly” universal tag.

So, having established that difficulty in building spaceships could produce the Fermi Paradox, we tackle the more interesting question: has it?

At first it seems obvious that it has. Assume that the highest speed of a starship is 10% the speed of light. Next, assume that the average colonizable target is 80 light years away. Simple math says the journey will take 800 years, or roughly 23 human generations. So you have to have a starship big enough to house enough people to preserve genetic diversity over such a time period (we’re talking hundreds at the bare minimum, more realistically thousands), with enough space to grow food, house recycling functions (for not just materials but water, air, etc.), and provide living quarters, plus enough energy to run the whole thing. Furthermore, we’d need to transport all sorts of animals, fish, livestock, etc. to populate the new world, in addition to feeding people along the way; also, all the equipment needed to actually colonize the new world. We’re talking about a big spaceship, and the energy needed to get a ship that big to that speed is enormous, not to mention slowing it down when it reaches the target.

All this seems to lead us to the conclusion that yes, space colonization is very hard.

But there are a few things we can do to modify this. First, let’s assume that people aren’t busy living on board the spaceship, but are instead in a state of suspended animation. Ditto for all the cattle, fish, plants, dogs, and whatever else we want to bring. Suddenly, the total power requirement goes down, a lot. Also, since we don’t have to worry about storing or growing food, we can cut our speed, say to about 2% the speed of light, making the journey take 4,000 years instead of 800. That means civilization on earth may no longer exist by the time the ship arrives, but if the new colony can become self-sufficient and expand, it will still be able to colonize the galaxy.
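The travel-time arithmetic above is simple enough to sketch; the generation length here is my own round assumption of about 35 years:

```python
# Back-of-the-envelope journey times for the 80-light-year trip above,
# at the two cruise speeds discussed (10% and 2% of light speed).
DISTANCE_LY = 80
YEARS_PER_GENERATION = 35  # rough human generation length (assumption)

for fraction_of_c in (0.10, 0.02):
    years = DISTANCE_LY / fraction_of_c  # light-years / (fraction of c) = years
    generations = years / YEARS_PER_GENERATION
    print(f"{fraction_of_c:.0%} of c: {years:,.0f} years, ~{generations:.0f} generations")
```

which reproduces the 800-year and 4,000-year figures.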

One might object that I’m making up a technology that we haven’t proven to exist. Furthermore, while suspended animation may be possible, it might not work for long periods of time (it may work for 50 years, but not for 4,000), or it may still take a lot of energy to keep the suspendee alive. All of this may be true, but I would argue that of course we should assume there is some technology, not yet discovered, which could enable space colonization; after all, I don’t think we’ve discovered everything. And while there may be difficulties with suspended animation, there is nothing that I know of in the laws of physics which would prevent it, unlike warp drives for instance.

There is, however, a much easier way to transport people across the vastness of space than suspended animation: transport not fully grown humans, but fertilized eggs. While we certainly don’t have experience freezing embryos for thousands of years, there’s no reason to assume that storing them at near-absolute-zero temperatures wouldn’t work. Furthermore, in the coldness of space (2.7 Kelvin), you wouldn’t need to spend energy on refrigeration. Now, we’d need a way to take those embryos and develop them outside the womb, and then raise and educate those children, which could be done either by a subset of humans (if suspended animation is feasible) or by robots.

The same holds true not just for humans but for all manner of plant and animal life; we could take an entire ecosystem’s worth of genetic material in a series of canisters no bigger than a large room. And there may be even more compact ways: instead of storing embryos, we could potentially just store the DNA sequences of organisms, then “build” them when the starship reaches its destination. Whether this is feasible is up in the air, but it certainly seems possible to me.

Now, how big would a spaceship need to be to do all this? How about 10,000,000 metric tons of starship? That’s big, about 15 times the mass of the largest ship ever built (the Seawise Giant), but small compared to something like the fleet of oil tankers on planet earth right now (it’s less than 10% of the combined mass of the world’s ultra-large crude carriers when loaded with petroleum). Now, can a ship that size hold all the things needed to colonize a planet? Truth is, I have no idea, but let’s run with it for a second.

So if we have 10 million tons, that’s 10 billion kilograms. Doing some math (e = 1/2 * m * v^2), it will take about 3,000 years’ worth of current US electricity consumption to get the ship up to a speed of 1% the speed of light, which is a lot. But is it too much? If we reach the point of harvesting energy using space solar panels, it becomes a bit easier. We would require only a solar panel 136 miles on each side, placed at 0.1 AU, to harvest that amount of energy over a year (assuming 22% efficiency). This is about one and a half Marylands’ worth of solar panels. That seems like a lot, but according to one source it is only about half the total surface area of highways in the US. In short, if we get to the point where we’re mining asteroids, we can do it. Storing that energy and then using it to power the ship are another matter, and while that seems hard, it doesn’t seem impossible. (Math allows us to proportionally change things easily! If you want to increase the speed of the ship to 0.02 c, for instance, just double the length of the solar panel’s side. If you want to double the mass of the ship, collect for two years instead of one.)
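These numbers can be checked with a few lines of arithmetic. The US consumption figure and the solar constant below are round values I’m assuming, so expect rough agreement rather than exact matches:

```python
# Kinetic energy of a 10-million-ton ship at 1% of c, expressed as
# years of US electricity use, and the side of a square solar panel
# at 0.1 AU that collects that energy in one year at 22% efficiency.
C = 2.998e8                        # speed of light, m/s
SHIP_MASS = 1e10                   # 10 million metric tons, in kg
US_ELECTRICITY_PER_YEAR = 1.4e19   # joules (~4,000 TWh, rough assumption)
SOLAR_CONSTANT_1AU = 1361.0        # W/m^2 at Earth's distance (approximate)
SECONDS_PER_YEAR = 3.156e7

v = 0.01 * C
kinetic_energy = 0.5 * SHIP_MASS * v**2        # ~4.5e22 joules
years_of_us_grid = kinetic_energy / US_ELECTRICITY_PER_YEAR

flux_at_01_au = SOLAR_CONSTANT_1AU / 0.1**2    # inverse-square law
area = kinetic_energy / (flux_at_01_au * 0.22 * SECONDS_PER_YEAR)
side_miles = area**0.5 / 1609.34
print(f"{kinetic_energy:.2e} J, ~{years_of_us_grid:,.0f} years of US electricity")
print(f"panel side: {side_miles:.0f} miles")
```

The panel side comes out in the neighborhood of 136 miles, matching the figure above.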

One final thing, which I’ll comment on because I spent a long time figuring this out, is how to slow the ship down. There are plenty of actual starship designs out there, including hydrogen scoops and the like (Bussard ramjets), but I thought: hey, why not use a parachute to slow the ship down? I did the math to determine how big a parachute you’d need to slow down such a ship, and it was one of my favorite problems to solve ever. Feel free to skip all the math, but here it is:

To solve it, we use the drag equation, f = 1/2 * p * v^2 * a * c

Where F is force, P is fluid density, V is velocity, A is area of the parachute, and C is the drag coefficient. For reasons I won’t go into, we can say that C = 2, so the formula becomes

f = p*v^2*a, or
v’ = -p*v^2*a/m (m is mass)

I put the negative sign in because the acceleration will always be negative, i.e. the ship will always be slowing down. To translate this into a function of time, we solve the initial value problem.

Since -p*a/m is a constant, let’s just call it k. That gives us

dv/dt = v^2 k


dv/v^2 = k* dt

Integrate both sides and you get

-1/v + C = k * t + B

subtract C from both sides

-1/v = k*t + B-C

We can call B-C a new term (D), then simply isolate v

-1/v = k*t + D

1/v = -k*t - D

v = 1/(-k*t - D), which means that the function is

v(t) = 1/(-k*t - D). We know everything except D, but we do know the starting speed (0.01 c, or 2,997,925 m/s):

v(0) = 1/(-k*0 - D), or

2,997,925 = 1/(-D), which gives us a D of -0.0000003335640952,

so v(t) = 1/(-k*t + 0.0000003335640952)

k, if you remember, is -fluid density * area / mass. So for the units of k*t we get (mass/length^3 * length^2 / mass) * time, which reduces to time/length, the units of 1/velocity. That’s great, because that’s exactly the unit we need; our units match.

Now, it would be absurd to use this to slow the spaceship all the way to zero (it would take forever), but we can use it to slow the starship to, say, the speed at which the earth revolves around the sun (30,000 m/s). Finally, let’s say how long we want this to take (a period of 3,000 years, for instance). Then we get

v(3000 years) = 30,000 m/s

Since we never defined how big the parachute is, we can now solve for it, given our constraints above:

v = 1/(-(-p*a/m)*t - D) = 1/((p*a/m)*t - D)

Rearrange to isolate A:

a = m/(p*t) * (1/v + D)

throw in the numbers we know (I originally did the math based on an 824-thousand-metric-ton ship):

a = 824,000,000 (mass of ship, kg) / (2.39E-21 (density of space, kg/m^3) * 94,672,800,000 (3,000 years in seconds)) * (1/30,000 (final speed, m/s) + -0.0000003335640952 (our constant D, in s/m))

I love this so much because it uses ridiculously large and small numbers (giant ships, giant sails, the density of outer space!!)

Anyway, we get a value of 120,123,349,601,661 square meters, which is pretty big: just under 11,000,000 meters on one edge of the (square) sail, or about 6,800 miles, a sail about the diameter of the earth. A sail built from any substance would be prohibitive in terms of mass, but one made of an electromagnetic field wouldn’t be.
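For anyone who wants to check the final substitution, here it is as a few lines of Python, using the 824-thousand-ton ship from the original calculation:

```python
# Solve a = m/(p*t) * (1/v_final + D), with D = -1/v_initial,
# using the numbers from the derivation above.
M = 824_000_000        # ship mass, kg (824 thousand metric tons)
P = 2.39e-21           # density of space, kg/m^3
T = 94_672_800_000     # 3,000 years, in seconds
V_INITIAL = 2_997_925  # 1% of light speed, m/s
V_FINAL = 30_000       # Earth's orbital speed, m/s

D = -1.0 / V_INITIAL
area = M / (P * T) * (1.0 / V_FINAL + D)
side_miles = area**0.5 / 1609.34
print(f"sail area: {area:.3e} m^2, side of a square sail: {side_miles:,.0f} miles")
```

which comes out to roughly 1.2 x 10^14 square meters, a square sail around 6,800 miles on a side.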

All of this is to say that I think it would be possible to slow the ship down. Can we speed it up? Put it this way: 1% of the speed of light is only about 200 times faster than space probes we’ve already built. Surely we could build something that goes, if not that fast, then 50 times faster than the Voyager 1 spacecraft.

All this, though, leads us to the easiest path of all: while humans may or may not ever be able to colonize the galaxy, surely self-replicating robots could. We’ve already built robots which can function for years on other planets; building some sort of robot, or collection of robots, which could construct more versions of themselves on other planets makes not only the difficulties of getting there, but the difficulties of transporting humans there, almost disappear.

Building a collection of robots to colonize the galaxy might not seem romantic or noble, and it may not even be wise; in fact we might say it is a very bad idea. But that doesn’t matter for our purposes. All we need is that a) it’s possible and b) somebody somewhere decides to do it. If we have those two conditions, then it’s pretty much inevitable that we get a galaxy full of robots, which, based on our observations, doesn’t appear to be what we have.

Starships are hard to build, no question. But I don’t think they are so hard as to be the great filter. If there are enough intelligent civilizations, one of them will build self-replicating robots and conquer the galaxy.

Next up, do we lose our desire?

The Great Filter, Part I

Go watch this video

It’s only 15 minutes long. Or don’t; but the rest of this post will be a commentary on the video, so there wouldn’t be much point in reading it.

Robin Hanson argues that there is a “great filter,” something which is stopping life from spreading through the universe. The argument goes that, because we don’t see any aliens, we can be reasonably sure that there aren’t any, and therefore we have to figure out why.

A couple of notes on the Fermi Paradox:

It’s probably true; we can be reasonably sure that, if life anywhere got to the point of distributing itself across the galaxy, it would very quickly get everywhere. Basically, it’s exponential growth: once civilizations get to the point that they can colonize star systems, even if each colonized star system colonizes just one more every 100,000 years, the galactic population doubles every 100,000 years, which means it goes from one star system to every star system in the entire galaxy within 3.6 million years (log base 2 of 100,000,000,000, times 100,000 years), or about 0.03% of the lifetime of the galaxy. So in galactic time, if any civilization is willing and able to colonize the galaxy, it will do so within the blink of the galactic eye. That means we can be pretty sure that there haven’t been any such civilizations within our galaxy yet.
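That doubling argument fits in a few lines; the galaxy age below is a round 13-billion-year assumption:

```python
# Time to fill the galaxy if the number of colonized systems doubles
# every 100,000 years, starting from one system.
import math

STARS = 100_000_000_000      # ~100 billion star systems
DOUBLING_TIME = 100_000      # years per doubling
GALAXY_AGE = 13_000_000_000  # years (rough assumption)

years = math.log2(STARS) * DOUBLING_TIME
print(f"{years / 1e6:.2f} million years, "
      f"or {years / GALAXY_AGE:.3%} of the galaxy's lifetime")
```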

What it doesn’t mean is that there aren’t one-off civilizations on various planets. Yes, a single planet could be populated by a civilization which doesn’t want to colonize the galaxy. There could be dozens of these, perhaps hundreds. But not thousands, and certainly not millions. If intelligent civilization is common in our galaxy, then either galactic colonization must be impossible, or the galaxy must already be colonized.

If we assume that a galactic civilization would in some manner affect the galaxy itself (a common candidate for this is the Dyson sphere, which would encompass an entire star system for the purpose of extracting energy), we would theoretically be able to notice it in other galaxies (in the case of Dyson spheres, we would notice bodies which emit no visible light yet emit large amounts of infrared radiation), which we do not. So it doesn’t appear that there have been colonizers at any point in the near universe (which Robin Hanson estimates covers 10^18 planets).

There are a couple of other explanations for the Fermi Paradox. Most likely: there are civilizations out there, but we don’t see them, because they use something better than radio to communicate, and don’t bother building Dyson spheres because a single zero-point energy source has twice the power of a star, or they all quickly migrate to subspace, because nobody would ever want to live in boring regular space if they could help it. Or maybe there is a sort of prime directive, and advanced civilizations aren’t allowed to contact unadvanced ones. These are all possible, but not certain. We still must give some probability to the theory that the universe is just as it looks: dead.

Whatever prevents life from populating the universe is called the great filter. It can be something in the past, such as the development of single-celled life (if this is really “hard,” then maybe out of a billion planets we would expect only three or four to develop single-celled organisms, of which we are one). It could also be in the future: for instance, maybe we will destroy the environment, or maybe a giant pandemic will destroy humanity.

In order for something to be the great filter, it must possess these three traits:

1: It must prevent galactic colonization
2: It must be stable
3: It must be universal (or near universal)

Item one is self-explanatory. Item two simply means that in order for a thing to prevent colonization, it has to be long-lasting, at least in terms of galactic time (even setbacks of thousands of years don’t count for a galaxy that is ten billion years old). Finally, it must be universal. That is, it has to be something that affects every candidate for colonization (or, in the case of multiple filters, each filter must affect enough civilizations that the combination affects all of them).

Robin Hanson proposed a list of possible filter candidates for the future:

Asteroids and supernovae
Robot rebellion
Totalitarian world
Berserkers
War/pandemic/environmental destruction
No starships
Lose desire

Let’s examine them.

Asteroids and supernovae: Hanson dismisses these; we can see that they are not universal. That is, we are reasonably knowledgeable about how often asteroid impacts and supernovae happen, and they aren’t common enough to be universal. There may have been individual civilizations destroyed by either of these things, but we wouldn’t expect every civilization to be destroyed by them. Earthquakes and supervolcanoes can be dismissed for the same reason (or at least dismissed as a future filter; perhaps supervolcanoes are common enough on typical planets that they prevent civilization from starting on almost every planet, but we can be reasonably sure that the Yellowstone supervolcano won’t go off for tens or hundreds of thousands of years. Unless we think we are more than 100,000 years away from colonizing the galaxy, we don’t have to worry about the supervolcano stopping us).

Robot rebellion is another filter that Hanson rejects: robots may destroy us, but then they would colonize the galaxy themselves (and would probably do a better job of it!).

Totalitarian world is an interesting one, but for various reasons I think we can reject it. Maybe a despotic government will rule the world, blocking all progress. This actually fails on all three criteria. Looking at 20th-century dictatorships, we can say two things. First, they aren’t very stable: most of the totalitarian governments no longer exist. Totalitarian governments in Germany, Russia, Italy and Spain have been dissolved one way or another. We can maybe put an expected lifespan of 150 years on a totalitarian government, which is not exactly stable. Even if I’m grossly underestimating the stability, it doesn’t really matter; these governments would have to exist for hundreds of thousands of years to be a filter. Thank God that Nazi Germany lost, but even if it had won and established a thousand-year Reich, it would have set humanity back, well, one thousand years. Again, a horrible thing from a human perspective, but in galactic time it would be inconsequential.

Second, based on our history, totalitarian governments aren’t exactly bad at space exploration. Nazi Germany made great advances in rocketry, the Soviet Union had Sputnik and Mir, and China has a not-unimpressive space program. By all accounts North Korea is a living nightmare, but they still have, if not a space program, then a rocketry one. One may argue that these governments aren’t as good as the free world at space exploration (and I think the evidence bears this out), but even if we assume that a totalitarian government would progress only 10% as fast as a free one, we would still expect to colonize the galaxy, just 5,000 years from now instead of 500.

Finally, in order for totalitarian government to be the filter, it must be universal. Such a government might occur on earth, but for it to be the great filter, it has to occur in every civilization, regardless of the underlying biology (or culture or history) of the species. Basically, if you believe that totalitarian government is a real filter, then you have to believe that there is a form of government which will arise in every intelligent species, which once established will last for the duration of the habitable epoch of the planet on which it occurs (in our case, hundreds of millions of years), and which absolutely prevents space colonization.

For berserkers, there is also reason to doubt. There are, from what I can tell, two versions of this idea.

The first idea is that nobody wants to begin colonizing the galaxy, because doing so would get them “found out” and destroyed by all the other civilizations. I find it very hard to believe that every civilization lives in fear of every other civilization. If the fears are well founded, then colonizing space would lead to a civilization’s destruction; but whoever destroyed them would be revealing themselves, leading to their own destruction. The cycle repeats until every civilization but one has been destroyed, and the one left has effectively colonized the galaxy. If the fears aren’t well founded, then it only takes one civilization trying otherwise to break the whole system. Either way, it looks like a very fragile equilibrium.

The second berserker hypothesis is that there is a single, dominant civilization that destroys all other civilizations who approach space colonization. There are basically two problems with this. The first is that we should expect to see signs of the berserker civilization, and we don’t; in fact, the fact that we still exist is a good indication that there aren’t any berserkers out there. The second problem is the universality of the berserkers. Maybe they have effectively stopped space colonization in this and in neighboring galaxies, but when we look at galaxies farther away, we should expect to see the end of their influence. If we look a billion light years in one direction, and a billion light years in another, then we should expect, even if the berserkers have been going at it for 4 billion years and expand at half the speed of light (both generous estimates, in my opinion), to see the edge of the berserker influence. There’s a good falsifiable test here: if, after building a better telescope, we see signs of life in galaxies farther away (perhaps looking 2 billion years back), then we can suspect that something is preventing life in our neighborhood, and that could be berserkers. But for now, since the dead spots in the universe don’t seem to be merely local, we can reject (or at least reduce the probability of) berserkers.

Finally, that leaves us with the following:

War/Pandemic/Environmental destruction
No Spaceships
Lose Desire

I will talk about the No Spaceships and the Lose Desire in two other posts, but for now I will talk about war/pandemic/environmental destruction.

For pandemics, there are two types: the naturally occurring pandemic, and the deliberately designed one. We can reject the naturally occurring pandemic on universality grounds. Yes, we might be destroyed by a virus, but we wouldn’t expect every civilization to randomly be destroyed by viruses, any more than we expect the human race to go extinct by everyone having a heart attack at the same time. The second type is more worrisome: that we will design a supervirus (or superbacteria, or superfungus) that will destroy us. This would be stable and would prevent colonization; but is it universal?

Let’s change focus briefly to talk about war. Well, we’ve had plenty of wars in human history, but the kind of war we’re talking about here is nuclear war. Nuclear war would certainly stop space colonization, so it would be effective, but would it be universal?

Well, the best way to determine how likely something is is to look at how often it happens (this may seem obvious, but it isn’t; I will have to write something about that at some point). In this case, though, it’s hard, because we don’t have multiple examples of civilizations, and for the one example we do have (us), it is kind of necessary that all our observations come from before it happens.

However, there is a way we can look at this. Let’s assume that we are in the xth percentile of luckiness. Then, given how long we’ve been lucky, we can calculate the yearly odds of destroying ourselves.

Let’s give an example: say we’re in the luckiest 1% of the galactic population. Then we can assume that, at most, 99% of the galactic population has destroyed itself by this point in its history. It’s been 62 years since the Soviet Union developed the hydrogen bomb (I’m assuming we only have world-destroying capability once two parties have the bomb). If we are in the luckiest one percent, then we can do some math and conclude that, at most, there is a 7.16% chance that the average civilization destroys itself via nuclear weapons in any given year. (The math for this is (1-x)^y = z, where y is the number of years since self-destruction became possible, z is the percentile of luck that we are, and x is the chance that we destroy ourselves in any given year.) If we assume there is a 7.16% chance any given civilization destroys itself in a given earth year after possessing the bomb, then out of 10,000 civilizations we’d expect the longest-lived to last 124 years after building the bomb. That might be a little too short to colonize the galaxy. Anyway, instead of writing more, let me add a chart:

How long the longest-lived civilization will last, based on...

How lucky we are (percentile) | Worst-case chance of self-destruction per year | 10,000 civs | 100,000 civs | 1,000,000 civs | 10,000,000 civs
1%  | 7.16% |   124.00 |   155.00 |   186.00 |   217.00
5%  | 4.72% |   190.62 |   238.27 |   285.93 |   333.58
10% | 3.65% |   248.00 |   310.00 |   372.00 |   434.00
25% | 2.21% |   411.92 |   514.90 |   617.88 |   720.86
50% | 1.11% |   823.84 | 1,029.80 | 1,235.76 | 1,441.72
90% | 0.17% | 5,419.88 | 6,774.85 | 8,129.82 | 9,484.79
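The chart can be regenerated with a short script: the per-year chance x comes from the formula in the text, (1-x)^y = z, and the lifetime of the luckiest of N civilizations comes from solving N * (1-x)^t = 1 for t:

```python
# Reproduce the self-destruction chart: x is the worst-case yearly
# chance of self-destruction given our luck percentile z after y years
# with the bomb; t is how long the luckiest of n civilizations lasts.
import math

YEARS_WITH_THE_BOMB = 62

def yearly_chance(z):
    # Solve (1 - x) ** YEARS_WITH_THE_BOMB = z for x.
    return 1 - z ** (1 / YEARS_WITH_THE_BOMB)

def longest_survivor(x, n):
    # Solve n * (1 - x) ** t = 1 for t.
    return math.log(1 / n) / math.log(1 - x)

for z in (0.01, 0.05, 0.10, 0.25, 0.50, 0.90):
    x = yearly_chance(z)
    row = [f"{longest_survivor(x, n):,.0f}" for n in (1e4, 1e5, 1e6, 1e7)]
    print(f"{z:.0%}  x = {x:.2%}  " + "  ".join(row))
```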

If we assume we’re in the 50th percentile (which might be the most reasonable assumption), we can see that even if there are only 10,000 civilizations in the galaxy, we can expect at least one of them to last about 824 years after the invention of the bomb, which might be enough time to begin to colonize the galaxy. If we assume the galaxy has produced 10 million civilizations (about one intelligent civilization per ten thousand star systems), we can expect one civilization to last 1,442 years. Also, since we’re so close to the invention of the H-bomb, these numbers go out of date quickly. If you’re reading this in July of 2015 (assuming you’re not reading it from a fallout shelter), you can replace the 824 with 830 in this paragraph. If we’re still around in 20 years, that number will be 1,089.

Finally, let's think about the universality of this: the analysis above assumes that all civilizations are like us. In reality, if there are many civilizations out there, we should expect a diversity of characteristics. We may be in the luckiest 1%, but I find it very hard to believe we are in the most peaceful 1% of intelligent species in the galaxy. Also, think about how much less likely nuclear war would be under a single world government (or a single superpower, as opposed to the four or five we have now); a single world government would have little reason to ever launch nuclear weapons or do anything else to end the species. If there are other civilizations out there that are either significantly more peaceful or significantly more likely to form a world government than we are, then we can revise the numbers above upward. If anything, we are in the worst-case scenario: the invention of nuclear weapons coincided with a major ideological divide (communism vs. democracy/capitalism), so there are perhaps reasons to place ourselves further down the chart in terms of luck. (Paradoxically, this is a case where we want to be less lucky; if our success so far in not killing ourselves depends less on luck, then it depends more on "skill," which means we are likely to last much longer.)

I think we can assume the same for the intentional pandemic I mentioned above. The motives for a pandemic are similar, but if anything the execution is more difficult. It's easy to understand how a nuclear winter could literally kill everyone, but with a pandemic, only 0.01% of the population needs to survive for us to be right back where we are within a few thousand years. If it kills only 95% of the population, we might be talking about a few hundred years to recover (if that, since our technology wouldn't necessarily be destroyed).

Finally, on the topic of war, let's talk about the stability of destruction. I don't know exactly how destructive a nuclear war would be (although I don't want to find out). If it blasts us back to the Stone Age, then it only sets us back a few thousand years, which isn't much in galactic time. We could potentially see 10,000 civilizations from a single planet alone (although that assumes the negative effects of nuclear wars wouldn't begin to accumulate, or wouldn't retard the development of civilization). If it destroys all big animals, leaving only rats and pigeons and cockroaches, then there may well be another intelligent civilization on Earth within a few million years (let's say 50 million). If Earth has another 400 million years of habitability left, then we have eight more chances from this one planet alone. Even if nuclear war destroys everything but bacteria, it is still conceivable that Earth would eventually get another chance at colonizing the stars.

There is one last category: environmental damage. This is a very hard one to speak about because, even if we assume that global warming is real, it's difficult to tell how harmful it will be. It may cause the ice caps to melt, low-lying populations to be displaced, and famine to break out; it may kill billions. That would be very bad, but it wouldn't be a filter; it would literally have to destroy the human race to be one. We can also apply the same logic as before: we are, at some level, a responsible species. After all, we were threatened by the destruction of the ozone layer, and we pretty much solved that through the Montreal Protocol. Again, I find it very hard to believe that there have been thousands or millions of civilizations in the galaxy and we just happen to be the most responsible one.

Now, there may always be something I'm missing, and I will discuss the other filters at a later point, but for now I'm ready to at least temporarily dismiss the above as causes of the great filter.
*I once met Freeman Dyson's granddaughter and son-in-law.