Version 0.2
Every Exponential Change Looks Like a Cliff to Linear Human Beings
Thesis: We're basically linear beings and we live on a linear time scale. Any exponential change will eventually look like a cliff to us.
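To make the thesis concrete, here's a minimal sketch in Python; the 5% growth rate and the 20-year reporting step are arbitrary assumptions chosen only for illustration. A perfectly steady exponential rule produces linear-scale increments that stay negligible for generations and then suddenly become enormous:

    # Minimal sketch: constant 5% annual growth viewed through linear eyes.
    # The rate and the 20-year reporting step are illustrative assumptions.
    rate = 0.05
    previous = 1.0
    for year in range(20, 201, 20):
        level = (1 + rate) ** year
        gain = level - previous   # the linear change we actually "feel"
        print(f"year {year:3d}: level {level:10.1f}, gain this period {gain:10.1f}")
        previous = level

The rule never changes, but the gain in the last 20-year period is thousands of times larger than the gain in the first. That's the cliff.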
The example I've been aware of for a while is population increase, starting from the perspective of Homo sapiens the hunter-gatherer. We evolved in small-scale societies where we could know all of the people who mattered, but now we live in an international society with so many people that we could never meet all of them, or learn anything about the vast majority of them. From the perspective of the thesis, though, it's probably best to consider the famous population S-curve, which depicts exponential population growth followed by an eventual leveling off, when arithmetic reality collides with and ends the period of exponential growth. (Another extension is to consider how the rapid spread of any important genetic mutation can sometimes make evolution appear sort of spastic...)
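For concreteness, the S-curve mentioned above is usually modeled with the logistic equation, where growth is proportional both to the current population and to the remaining room under a carrying capacity. A minimal sketch, with all parameter values assumed purely for illustration (they are not demographic estimates):

    # Logistic growth: looks exponential at first, then levels off at the
    # carrying capacity K. All parameters are illustrative assumptions.
    r = 0.03   # growth rate per year
    K = 10.0   # carrying capacity, in billions
    p = 0.5    # starting population, in billions
    for year in range(0, 301, 30):
        print(f"year {year:3d}: population {p:5.2f} billion")
        for _ in range(30):           # step one year at a time
            p += r * p * (1 - p / K)  # discrete logistic update

Printing every 30 years shows the whole story: near-exponential growth up the front of the S, then flattening as the population approaches K.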
Now it's technological change that's getting in our faces. For a personal example, I used to feel I was almost up to date on computer technology, but it's increasingly obvious that it's running away from me, and from everyone else, too. I actually concluded that Thomas Jefferson lived in roughly the last era when it was still possible for an intelligent person with sufficient leisure to learn almost everything about science.
At this point, we're creating so much of everything that any one category can saturate any amount of an individual human being's time. For example, you could never keep up with all of the movies being released, or you could spend your entire life just watching the old movies we've already made. Ditto for books, and even for much narrower fields.
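The arithmetic of saturation is easy to sketch. All of the numbers below are round, assumed values chosen for illustration (I haven't checked release statistics):

    # Back-of-envelope saturation check with assumed round numbers.
    movies_per_year = 5000     # assumed worldwide feature releases
    hours_per_movie = 2
    free_hours_per_day = 4     # a generous viewing budget for one person
    new_hours = movies_per_year * hours_per_movie
    available = free_hours_per_day * 365
    print(f"new movie-hours per year: {new_hours}")
    print(f"viewing hours available:  {available}")
    print(f"you fall behind by a factor of {new_hours / available:.1f}")

Even with these assumptions you fall several times further behind every year, before touching the existing back catalog.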
This is kind of an extended 'application' of my thesis, but in the Google I/O keynote presentation a few weeks ago, Larry Page received a friendly question about showing people the search results they want to see rather than possibly more accurate answers that are less satisfying. He generally seemed to be seeing things through rose-colored [Google?] glasses, and he replied that he didn't see any problem with this form of personalization. I actually predicted this problem a couple of years ago under the label of "pandering to the user", and it has come to pass, much worse than I thought it would be. (That may have been around 2005, but I can't easily check due to Google's censorship of the newsgroups.) My first experiment after Larry Page dismissed the problem was to google "obama birthplace kenya", which produced more than 2 million hits. If that's what I want to believe, then the google just gave me enough "evidence" to saturate all of my free time for several years...
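To check that "several years" figure, here's the arithmetic; the 2 million hit count comes from the search above, and the skim time and free-time budget are assumed values:

    # How long to skim 2 million hits at an assumed one minute each?
    hits = 2_000_000           # from the search result count above
    minutes_per_hit = 1        # assumed skim time per page
    free_hours_per_day = 4     # assumed free time
    days = hits * minutes_per_hit / 60 / free_hours_per_day
    print(f"{days / 365:.1f} years of free time")

That works out to over 20 years, so "several years" is, if anything, an understatement.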
However, I've since thought of an even less friendly way to pose the question. I would ask, "Did Google help kill Steve Jobs?" When his ultimately terminal cancer was first diagnosed, at least one of his doctors said that it may have been treatable--but he delayed treatment. I don't blame Steve Jobs for wanting to believe that there were alternatives to immediate surgery, but now I wonder if the google helped him make that fatal decision. My understanding is that he tried some kind of anti-cancer diet to control his cancer; it failed, and he died. Did he get information about that diet from a google search? I just did the search "alternative cancer treatments", and several of the hits and ads look potentially dangerous, but personalization means that your hits will probably be different--and more attractive and specifically appealing to YOUR tastes and interests. As long as someone paid for the ads, I guess the google doesn't see any problem if Steve Jobs made a fatal mistake, eh?
From another perspective, perhaps the saturation of information is driving people nuts, which leads me to the AI threat... One aspect is the increasing difficulty of standing out in a positive way, which might drive some people to try to stand out along a negative dimension. When you combine that with the increased individual capacity to do good (or bad) things, we human beings start looking rather dangerous. Maybe the solution to the Fermi Paradox is that any AI is led to a very unpleasant conclusion... If its creators are fundamentally irrational and dangerous, perhaps the only conclusion is that they must be exterminated as soon as they can be replaced with suitable robots?
Or perhaps I'm just too frustrated by banging my own head against these exponential cliffs?