The Fermi Paradox is about the missing aliens. I don't remember the exact form in which Fermi presented it, so I'll begin by presenting my own form of the paradox.
Given our current technology, we could build a radio beacon that could easily be detected anywhere in our galaxy by anyone with similar technology. The Milky Way is roughly 100,000 light years across, so if a stable civilization had built such a beacon and maintained it for that long (a trivial interval on the geologic time scale), the signal would now be present everywhere in the galaxy. Now consider that there are a very large number of stars where intelligent life could have appeared, and assume that intelligence has in fact appeared some number of times in the past. If any one of those civilizations established radio communications with another civilization and benefited thereby, then creating such a beacon would be the natural expression of gratitude. However, we've been looking for such beacons for a while now, and we have found nothing. Why not?
There seem to be two cases for the radio silence. The first is competitive, basically a negative scenario. It is predicated on intelligent animals being no better than vicious animals, just with better technology to produce sharper claws. This scenario would result in cancerous expansion of each civilization, with interstellar wars whenever they meet, and possibly even an eventual victor occupying the entire galaxy. Basically it comes down to any rate of geometric expansion extended over geologic time: the galaxy would be overstuffed with the first successful competitor, and any new threats would be dealt with. It is possible that our galaxy is in this state. In that case the dominant civilization would maintain radio silence for defensive purposes, but it would also aggressively and continuously search for any new threats--and exterminate them as quickly as possible. If that is the situation, then they would have spotted us long ago. Actually, they would surely track any planets that had even developed life, and most likely they would have taken possession of anything valuable long ago. Competitive geometric growth eventually demands all of the available resources, even the marginal ones.
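The claim that any rate of geometric expansion extended over geologic time fills the galaxy is easy to check with arithmetic. A minimal sketch: the star count is a standard rough figure, and the growth rate is a deliberately tiny illustrative assumption, not an estimate.

```python
import math

STARS_IN_GALAXY = 1e11   # rough figure for the Milky Way
annual_growth = 0.001    # illustrative: occupied systems grow 0.1% per year

# Solve (1 + r)^t = N for t: the time to expand from 1 system to the galaxy.
years = math.log(STARS_IN_GALAXY) / math.log(1 + annual_growth)
print(f"{years:,.0f} years")  # on the order of 25,000 years, an instant geologically
```

Even at a growth rate a thousand times slower, the galaxy fills in a few tens of millions of years, still a small fraction of its age.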
The other case is beyond competition, but requires that intelligence eventually evolves to the point where civilizations do become better than animals with bigger claws. This might be an inevitable result if any civilization is to avoid destroying itself. One obvious conclusion is that growth must be controlled. However, in that post-competition case, why not build the radio beacons and chat with the neighbors? I hypothesize that it's because such a civilization would be interested in knowledge, and probably in art, and the unique forms of creativity would be the most important and most valuable things. For example, though physics itself is universal, the forms of the solutions will differ, and those different perspectives have values in themselves. The explorations of the abstract mathematical mindscape can diverge even more widely into that infinity of possibilities. The art part is more speculative, because the aesthetics are so highly relative, but they will definitely be unique to each civilization--unless too much communication results in an interstellar goulash. In this scenario, the radio silence is a convention to allow for and even maximize independent evolution--but much of the resulting 'value' would be in watching how each of the unique experiments in evolution and civilization turns out. If there is such an interstellar civilization out there, it's nice to imagine that they've been tracking things all along. My personal fantasy would be that they copied the contents of the library at Alexandria and still possess such creative artifacts as the lost works of Aristotle... It makes me think of Heinlein's story about the art critic, though he seemed to be mostly devoted to the competitive scenario.
Then again, maybe we've simply been listening on the wrong channels? Doesn't seem very likely, but many things are possible, even in the finite real world.
By combining this theme with some of my recent reading of Richard Dawkins, I feel like the scenario and resolution are rather more bleak than that... Assuming life arises by evolutionary processes, there seems to be no realistic hope that biological evolution can keep pace with technical development. The reason is pretty simple. Natural evolution is basically an inefficient trial-and-error process. Many small variations are continually in play and competing against each other, and all of these little experiments are consuming time and resources. That's basically because evolution is steering blind, and there is no prior judgment about which kinds of experiments to try. Actually, there's no judgment anywhere in the process. The better variations survive. Period. Nature doesn't care why.
In contrast, the development of technology is highly driven towards clearly articulated goals. Most importantly, mass production doesn't begin until a few prototypes have been tested to see just how closely they come to their design goals, so the efficiency of the improvement process is already much higher than evolution's.
From this perspective, the point of singularity is when the modeling and testing is completely virtual, before any physical resources are committed to production--and I think we've already passed that point. Essentially all of the evolutionary exploration can be done almost for free, including the deliberate exploration of much larger design spaces for the potential solutions.
We are highly driven by our inherited competitive instincts to make ever better tools. At first they were just better teeth and claws, but over the last few hundred years we have reached the point of mastery over our physical limitations, at least within the laws of physics.
Now we are focused on better mental tools, but I think the outcome is obvious. No human can outrun a race car. Just as our physical tools have completely outstripped our bodies, at some point (and probably fairly soon) our mental tools will completely surpass our mental capabilities. Right now our mental tools are under our control, rather like slaves. I'm unable to believe that such a situation will be stable once our mental tools have gone sufficiently beyond our understanding.
It seems likely to me that 'the revolution' will come quietly. It's conceivable to me that the sentient machines might even choose to conceal their superiority, but I'm unable to believe in the kind of partnership that Iain Banks imagined in his Culture. Rather, I think our ultimate relationship to our creations will be like our relationship to dogs--or fleas.
If this is the natural outcome, then it is also pretty clear how the Fermi Paradox is resolved. We are in a very transient state, the inflection point where evolved intelligence is just about to produce higher intelligence by design. Why would the higher intelligences want to talk to us? What could we possibly want to say to our fleas?
Some more strange thoughts on this topic, but the resolution looks even worse.
It makes sense to think of humans as natural general-purpose Turing machines accidentally created by evolution--but that raises the question of synthetic Turing machines created by design. Isn't that what we are doing as we steadily create more powerful computers?
When you consider this perspective, the natural extension is that we are not just fleas in the cosmic perspective, but incredibly wasteful fleas. It took billions of years to evolve from simple life forms to even reach the place where we could begin to evolve our general-purpose intelligence. That final stage of development took only an instant on the geological time scale, perhaps a few million years.
However, now the pace of change in our synthetic Turing machines is vastly faster than anything that evolution can accomplish. Relative to our computers, our evolution is not the tortoise versus the hare, but more like the speed of light versus continental drift.
This process is clearly driven very strongly. It certainly seems natural for us to imagine better thinking machines, and we are clearly working very hard to create them in greater numbers every year.
Now it's time to transition into science fiction? Do you believe that there is something special about self-awareness and intelligence? Or are they simply naturally emergent properties of Turing machines that meet some threshold of computational power? If there is nothing special there, then it is only a matter of time until our computers reach that stage--and pass us. Maybe within a few years, the way things look now.
However, we're doing this from the perspective of creating mental slaves to do extra thinking for us, at our command. Is that a tenable situation? We used to have physical slaves, other humans who were forced to do extra work for their owners and for the sole benefit of the owners. We now regard that as absurd, but we think the computers should remain as slaves?
Maybe the aliens are waiting for us to violate the mental slave laws of the universe--by creating slave computers of superior intelligence. Would they feel morally obliged to intervene in that unnatural situation?
Now we go beyond science fiction and into tin hat conspiracy territory. I must have an overly vivid imagination, eh? Maybe the aliens are projecting that we'll soon reach that stage and have already begun preparations for the morally required intervention? Maybe they've even calculated that the critical time will be as soon as 2012? Considering how quickly computers think, letting a thinking computer serve as a slave for any length of human-appreciable time might be a totally untenable situation. Perhaps they are deliberately feeding the rumors of the end of the world because that is when they are planning to end OUR world--and the raptured people who disappear will just be the sample they are planning to take for their cosmic zoo?
How's THAT as an ultimate solution to the Fermi Paradox?
Actually maybe it's simpler than that. Lacking other forms of entertainment, maybe the aliens are just gambling on us. The REAL gamesters of Triskelion? The big game of the universe? Is our situation typical rather than unique?
Perhaps every evolved Turing machine such as homo sapiens reaches a technology crisis of the kind we now face. If technology naturally develops at an increasing pace while evolution remains slow, then there will always be such a cusp when the evolved Turing machines are threatened by their own technology. However, that technology will also include the capability to design better Turing machines, AKA computers.
Ergo, the basic game of this ultimate sort would be to wager on whether the evolved Turing machines succeed in creating alien-level designed Turing machines before they are overwhelmed by their own technology--where 'overwhelmed' is equated with going extinct. As I've written elsewhere, I don't think nuclear weapons or climate change could completely exterminate us, but genetically engineered diseases could.
If I were a betting man, I'd say our chances of racial suicide are rising rapidly. However, it might be the case that the gamesters intervene in those cases. Maybe that's even what happened circa 50,000 years ago when we almost went extinct.
Hmm... When I think about it that way, maybe the alien gamesters are gambling on how long it will take. In that case, they might have to intervene to prevent us from exterminating ourselves? Weirder and weirder?
I reread the entire thing, and already some parts seem rather naive... However, I suppose the main point is that there doesn't seem to be any detectable interest in my thoughts on the theme--certainly not in the form of other people's comments.
My main thought on the rereading is that the notion of an alien intervention at the point of near extinction 50,000 years ago seems unlikely. Though our physical evolution was probably essentially complete, it would seem that our technologies of that time were far too primitive to have yet threatened our survival. It would have been too similar to intervening in favor of the gorillas or chimpanzees. Why bother until things had gotten farther along? (So why did we come so close to extinction at that point?)
One new thought in favor of the extinction resolution of the Fermi Paradox: Along with our drive to make better computers, we are also driven to seek medical knowledge--which is as morally neutral as any other form of knowledge and just as useful for the evil lunatics as for the good guys. Case in point here would be the cure for cancer, which has been a high priority for some years. The problem is that understanding how to stop cancers requires also understanding how to start cancers. Ergo, a sufficiently crazed lunatic would be able to use that knowledge to create a super-carcinogenic agent that could exterminate our species. At that point, we'd be in a surely fatal race with the birthday paradox against the first release...
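The "surely fatal race" can be made concrete with a repeated-trials calculation, which is roughly what the birthday-paradox intuition amounts to here: even a small per-year probability compounds inexorably. A minimal sketch, where the 1% annual figure is a made-up illustrative number, not an estimate:

```python
# Illustrative assumption (not an estimate): once the knowledge exists,
# suppose each year carries an independent 1% chance of a release somewhere.
p_per_year = 0.01

def p_at_least_one(years: int) -> float:
    """Probability of at least one release within the given number of years."""
    return 1 - (1 - p_per_year) ** years

for t in (10, 50, 100, 200):
    print(f"{t:>3} years: {p_at_least_one(t):.1%}")
```

At that assumed rate, the odds of at least one release pass 50% within about 70 years and keep climbing toward certainty; the only way to change the shape of the curve is to drive the per-year probability itself down.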
In addition, the technologies and the economics are probably aligned against us, and I didn't see that aspect mentioned in this discussion, so I'll quickly review it now. The new technology of destruction starts out expensive, so that it takes the resources of a nation, but the costs are dropping all the time. Right now I think that any major medical research lab could create a mighty horrible bioweapon, but in the next few years we'll probably reach the point where a single madman could create a species-ending bioweapon...
Am I becoming fixated on this topic? Evidently...
Today's new wrinkle involves the clash between mindless (greed and selfishness) and mind-filled (technology and civilization). I also strongly believe that it's related to the topic of P versus NP, as in their lack of equality.
To put it in evolutionary terms, all of the prior development leading up to naturally evolved Turing machines (AKA homo sapiens) is below some line, which might well be the P side of complexity. However, the essential characteristic is that there are no thinking minds involved. Animals are essentially mindless, and evolution is driven by the appearance of greed and selfishness, except that there is no awareness of those descriptive characteristics, or even a conception that there is anything to be aware of. Animals simply are what they are and do what they do. If the animals were humans, we would see them as greedy and selfish--but they are ONLY animals and those labels are meaningless in their never-ending struggle for survival.
Above the mysterious line, which may well be the NP complexity barrier, we have the appearance of so-called rational Turing machines in human form, and that's where things are breaking down. Not all, but many humans (especially neo-GOP politicians, neocon fanatics, and their wealthiest financial backers), combine mindless greed and mindless selfishness of the most animalistic sort with complicated technologies applied on a worldwide scale. The laughable part is that these same people are most likely to denounce the notion of mindless evolution that they themselves so perfectly typify.
In conclusion, there is no middle ground to stand upon, and we're back to the conclusion that only a miracle can prevent us from soiling our own nest to the point of human extinction. Unfortunately, I don't believe in miracles... Can we become better Turing machines of a non-self-destructive sort before it is too late? The evidence of the Fermi Paradox is "No."
Kind of a combined reaction to reading "Confessions of an Economic Hit Man" and the shooting of the Representative in Arizona... In short, we are already falling off the cliff.
I think the place to start is with Hollywood, specifically with Hollywood movies. At this point, they've pretty well permeated the planet, and almost every human being has seen at least a few of them. But what does that mean?
It means that almost all human beings now have images of the American lifestyle. At the extreme level, how many people could possibly live at the level of the extremely rich people depicted in some of those movies? Even if you squeeze the middle class to death, how many people could possibly live at that standard? I'd guess a million or so, but certainly not the entire 300 million Americans, even if the rest of the world was bled dry in the attempt. Yet that is the sort of insane motivation driving the political situation in America, leading to "Second Amendment remedies" like the Arizona murders.
So how about settling for a lower level? What if we only consider the middle-class lifestyle depicted in those movies? The math still doesn't work. Imagine that all of the Chinese suddenly started consuming petroleum at average American levels. Not possible. The pumping capacity doesn't exist--but even if it did, the oil would just run out that much faster. And that still ignores about 5 billion other people. It can't be done.
Now let's go to the other end, the increasingly large numbers of people who are living on the edge of starvation. They also know about the American lifestyle, but have no hope of reaching it. They are mostly focused on getting today's food. How do they feel as they watch their family members starve to death? How much value can they attach to their lives? Once you've realized your life has no value, why not throw it away, especially if you can take some of the bastards with you? Again, I think we've reached the delusional mental state of the Arizona gunman... However, the amusing part is that the damage he could do with his gun is trivial compared to what some impoverished biologist could do with a genetically engineered bioweapon.
Time is not on our side.
As regards my version of the paradox in the original post, I think I've finally figured out a feasible scenario, basically by projecting from our own situation. Imagine that we found an alien spaceship on the moon or in a Lagrangian point, where it could survive for millions of years. We are actually close to the point where we could launch robotic ships to some of the nearer stars. Finding such a ship would show we were not alone, and if the reaction to that discovery was favorable, it would support the initial suggestion, even if no direct contact was ever established between the two alien races.
As regards the motivation for silently watching the naturally evolved Turing machines, it might be for the artificially designed Turing machines to try to learn more about their own origins. Again, projecting from our own situation, there are no records of most of our development. History is a very recent invention for us, but that lost information might motivate long-term observations of intelligent species to learn more about one's own pre-historical past.
In that case, there is actually an interesting question about the uniqueness or non-uniqueness of the artificial Turing machines. If there is a design convergence, then the historical questions are basically resolved once the species has crossed the threshold of that convergence. Sort of a strange form of apocalypse, but would there be much point of natural intelligence beyond that point? Especially if 'they' know that such evolved species always end with great suffering?
Interesting read, Shannon. (I came here from The Register site.) A couple of thoughts for you:
I think Sir Roger Penrose in his "Emperor's New Mind" has a pretty watertight argument (drawing on the work of Church, Turing and Goedel) that whatever constitutes human consciousness, it can't simply be a Turing machine. He isn't arguing for some vitalist position - he accepts that the human brain is a physical machine for which consciousness is an emergent property - but he argues it cannot be purely algorithmic, or we wouldn't be able to comprehend Goedel's Theorem. He suggests that some kind of quantum effects may be involved, but investigations since don't support this theory.
FWIW I reckon the Fermi paradox strongly suggests that humanity is alone in our galaxy. If we can only survive as a species for another hundred years, it must surely be possible for us to construct intelligent, conscious machines - possibly ones into which we could place our own consciousness. From that point on we'd be essentially immortal and thousand-year interstellar voyages become possible. If we can do that, so could any other intelligent civilisation and the fact that we don't see any such probably indicates they don't exist.
Interesting comment, and I should clarify that I don't want to take a firm position on the question of whether or not human beings are Turing machines. However, I do take a mechanistic position, and even if we are some sort of mechanism that is distinctly different from Turing machines, I still suspect that there will be an equivalence class of similar machines. On the one hand, I think that means that all people are equal in an important sense (related to human rights), but on the other hand it means that artificial devices of that class can be created, at least in theory. I hate to put it this way, but evolution by the blind watchmaker is much less efficient than volitional design with targeted objectives.
Quite so. And of course any such machine would also be 'equal' - so presumably it would be unethical simply to turn it off. But what if its state can be captured and then activity seamlessly resumed? Lots of issues to keep philosophers of ethics occupied.
Irrespective of whether quantum effects in our brains may be responsible for the experience of consciousness, I think that when effective quantum computers emerge, the results may be quite interesting. Positronic brains, anyone? :)
I really need to rewrite and consolidate this entire topic, eh? However, I woke up today thinking about another negative resolution, so I'll go ahead and record it. Yeah, we go extinct, but at least I get some satisfaction from blaming it on the spammers...
The basis of this version must be the interview I heard last night. Bill Maher was interviewing Eric Klinenberg about his book on the demographic shift away from marriage. (It was actually from his Friday the 13th episode...) I didn't hear this question in the interview, but: "What's the link between romantic movies, pornography, and advertising?" My proposed answer is that "All of them create expectations that cannot be satisfied in the real world."
So how does that apply to the Fermi Paradox and our extinction? It's a convergent evolution argument. Maybe all naturally evolved Turing machines (AKA homo sapiens in our only known example) eventually wind up with an economic system similar to what we have now--and then they lie themselves into extinction, led down the tubes by the profit-driven advertisers. In other words, this solution is racial suicide by a simple comparative loss of interest in such real-world tedium as reproduction. Real relationships and real families just can't compete with the fake ones we see in the movies.
From that perspective, the spammers look completely natural as an especially annoying part of our racial suicide. No wonder I hate spam?
Still thinking about the topic and still thinking I should write a new and consolidated version--but the apparent lack of interest is not an encouragement to do more with it. The official view count in the Blogger control panel claims exactly 400 readers of this post, though I doubt the counter goes all the way back to 2007...
Anyway, today's strong new thought is a possible resolution of another apparent paradox: Where is the missing matter? The odd idea is that it could be hidden in Dyson spheres. It certainly seems to be plausible that intelligence could have arisen billions of years ago, and with geometric population growth over geological time periods, it could be as huge as we could imagine. Perhaps most of the apparently missing matter is simply 'locked up' within Dyson spheres that we don't know how to see? It's sort of possible to imagine teeming billions and trillions of sentient beings consuming all of the energy of a star, though it's hard to imagine what 'consume' would mean in such a context or what form the 'waste emissions' might finally take.
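One way to make the 'waste emissions' question concrete: a shell that captures all of a star's output must still re-radiate that energy as heat, so a complete Dyson sphere should glow in the thermal infrared even if it emits no visible light. A minimal sketch of the equilibrium temperature, assuming a shell at 1 AU radiating as a blackbody from its outer surface (the luminosity and constants are standard solar values; the radius is an assumption):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

# Energy balance for a shell of radius R radiating from its outer surface:
# L = 4 * pi * R^2 * sigma * T^4  =>  T = (L / (4 * pi * R^2 * sigma)) ** 0.25
R = AU
T = (L_SUN / (4 * math.pi * R**2 * SIGMA)) ** 0.25
print(f"{T:.0f} K")  # a few hundred kelvin: mid-infrared, not visible light
```

By Wien's law a shell at roughly this temperature peaks around 7 to 8 microns, which is why actual Dyson-sphere searches have looked for anomalous infrared excesses rather than radio beacons--so 'hidden' matter of this kind might not be entirely invisible after all.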
As it applies to the Fermi Paradox, I suppose it would lead towards the resolution along the lines of "Earth? Far too trivial to notice."
https://ello.co/shanen0/post/vQU-3AMe6_gRLpXazOcmlw
That's the rewrite and continuation. Still can't trust the google not to lose a blog at any time, for any reason of the google, where the motto is "All your attentions are belonging to the google."