
The Anti-Singularity

(formerly titled The Vinge Singularity and Beyond)
ORIGINAL PUBLICATION YEAR: 1996
REVISED 6-4-07

Introduction

Vernor Vinge is the author of several engrossing science fiction novels, including Marooned in Realtime, published in 1986, which touches upon a possible event in humanity's future he has named the "Technological Singularity". According to Vinge, this technological singularity could, virtually overnight, change our reality into something drastically different from what preceded the event, and be utterly unpredictable in its results. And it could occur as early as 2005, and likely no later than 2030.

Mr. Vinge has published a substantial paper on his Technological Singularity, first presented at the VISION-21 Symposium, sponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute, and later printed in the Dec 10, 1993 issue of the Whole Earth Review.

I greatly respect Mr. Vinge's work, and agree with him on quite a few points. However, I disagree that the singularity and its aftermath are as wholly unknowable to us as Mr. Vinge puts forth. In this article I intend to offer my own two cents on the matter.

In previous FLUX articles I too have hinted at something akin to Mr. Vinge's Technological Singularity in humanity's near future. In The Rest of Our Lives; What It'll Be Like I pointed out several wild cards in humanity's near future, any or all of which might so telescope the pace of innovation that the world could change substantially from day to day as a result. One of those wild cards was (and is) the development of artificial intelligence. Mr. Vinge considers this particular wild card to be much more significant in its impact than the others I listed in The Rest of Our Lives; so that is the area on which I will focus my own response to his essay.

In another article of mine-- Champion of Destiny-- I used the concepts of a bifurcation point and dissipative structure to illustrate possible crossroads between personal realities (multiple dimensions), which any of us might voluntarily seek out or create in order to incorporate greater-than-usual change between one possible destiny and another.

Vinge's concept of a Technological Singularity strikes me as one enormously important possible bifurcation point for all humanity, one that's hurtling towards us at tremendous speed.

Now I admit that scaling up the personal bifurcation points from Champion to encompass all of humankind sometime between 2005 and 2030 is a pretty big jump-- but there are plenty of reasons to believe the concept is relevant to our possible encounter with anything approximating Vinge's Singularity.

I also possess still more background in this area, relevant to an entire race rather than mere individuals, from my presentation of CONTACT! Alien Speculations: The Rise and Fall of Star Faring Civilizations in Our Own Galaxy (the original version published in FLUX Spring 96), which followed the rise and fall of star faring civilizations such as ourselves through various trials of nature and technology. Readers of CONTACT! may remember several candidates there for major bifurcation points in any civilization's lifespan-- many of them related more or less to technological development.

There's also some research I have been doing with a rudimentary artificial intelligence of my own, exploring the issue of what an awakening super intelligence might do with humanity, if and when the time comes. According to Vinge, this could be an important issue for us, in the event of his singularity.

Finally, I have other works still in progress that even FLUX readers have yet to see-- including an article about life around a century after something near to a singularity has come to pass, and a full scale novel which explores various points in our future many centuries from now-- far, far beyond any singularity which could conceivably be covered by Vinge's predictions. And there's also the little matter of my heavily-researched future timeline.

So you might understand how I took it as a personal challenge to answer Mr. Vinge on the points he makes in his paper.

The essence of the Vinge Singularity

Vinge defines his singularity thus: "...We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence."

In other words: artificial intelligence; or "amplified intelligence", Vinge's term for entities like the one we and the internet are together becoming; or the end result of human beings supplemented with physical computer implants.

Narrowing the definition for practicality

To cover all his bases, Vinge includes the caveat that some believe it's impossible for anything wholly machine to become "conscious" or superhuman in widely applicable intellectual prowess. However, it's getting harder and harder all the time to find reputable scientists and engineers who believe machines will be forever incapable of intelligence and/or awareness comparable (or even superior) to man's.

Therefore, since we're all adults here, we'll mostly ignore that particular Luddite hope for the rest of this article.

The most likely contenders for Singularity Causation (future Master of the World)

The above means (according to Vinge) we end up with at least four different sources of potentially super-human intelligence: a future internet-like entity (including its associated users); a machine-boosted human being; a biologically boosted human being; and an essentially 100% non-human machine or network.

Since there seems little chance that purely biologically enhanced humans could outdo machine-boosted humans for very long into the future, and since integrating biological boosts with machine boosts in the same individuals will soon be the most logical and practical course anyway, we'll lump all the biologically boosted possibilities in with the machine-boosted ones, and call the result our 'cyborg' category.

Let's see now-- which of all these classifications might 'awaken' as a full-fledged super-intelligence first? And which might be the most significant (i.e., powerful enough to be orders of magnitude ahead of its closest rivals fairly early in the game)? The first form of super-intelligence that awakens may well control or dominate any that follow it, for better or for worse. And the results could actually determine whether the human race survives much beyond the transition.

Well, computer hardware has been getting better at an amazing pace lately-- but software development for the hardware stubbornly lags far behind. Hardware is nothing without software. Case in point: a brain-dead human being. So the state of software development will always be the limiting factor on what even the most advanced hardware can do-- or be.

Of course, once we have a "conscious" program directly interfaced to the latest hardware and support software, we can expect software development to speed up considerably.

So here's another limiting factor: the level of "consciousness" intimately involved with the best hardware and software available at the time.

As far as anyone knows today, or for the next several years, the only "conscious" software interfacing directly with available state-of-the-art hardware and software is the human intellect. This means so far humanity is ahead in the race, and may stand a decent chance of staying there.

So where's the leading edge in developing super intelligences now? On the internet, and in research and development of various human brain/body enhancement or interface projects-- most of which concentrate either on better integrating fighter pilots with their advanced aircraft, or on restoring senses and physical capabilities to human beings afflicted by crippling diseases, birth defects, or injuries.

There are also the massive resources being poured into the world's networked financial intelligences, as was pointed out to us by our own little artificial intelligence Pathfinder a few years back. These financial intelligences may be more like the fictional "SkyNet" of Terminator movie fame than the internet many of us are becoming familiar with today, as these financial entities depend far less on interaction with individual human end users than the internet does. According to Pathfinder there already is (or soon will be) not one but several different financial engine-based super intelligences in the world.

So as of 1996 we have in development both civilian and military cyborgs (artificially enhanced human beings, totaling perhaps in the thousands), a globally networked entity into which maybe 20 million human beings are plugged (the internet), and a handful of budding super-intelligences based on the financial engines of government and corporate accounting nets. In addition to these, there are likely to be a few infant super-intelligences in military labs, and in one or two R & D facilities of the biggest, most powerful corporations in the world, that are essentially NOT connected to anything else outside of their particular building complexes.

In practical terms, the internet, the financial intelligences, and the cyborgs are probably the most advanced examples of conscious intelligences we have at this time-- and in that order. The hidden intelligences in the private corporate and government labs will likely prove irrelevant in the long run, if they are not turned loose in the real world fairly soon-- because they simply can't grow or improve as quickly in that sheltered environment as they could in a 'hot-house' like the internet appears to be at the moment.

So let's examine those first three groups and see which likely comes out on top by the period 2005-2030.

The cyborgs will be fraught with many thorny technological and psychological problems for decades to come-- maybe even for most of the next several centuries-- not to mention that our present cyborg supporting technologies are crude at best. Though there will be wondrous achievements in that area to be sure, compared to the other contestants the cyborgs will be a very distant third over the next 50-100 years in terms of manifesting super-intelligences capable of changing the world practically overnight. Yes, there will be individual cyborg-related epiphanies, which will make for important breakthroughs and realizations in certain fields-- but we know from history that even the most powerful ideas or innovations wrought by individuals rarely change our entire civilization in time frames smaller than decades, and have often required centuries or even millennia for their impact to be recognized. No, isolated cases of super-intelligent cyborgs-- or even bands of super-intelligent cyborgs-- are unlikely to bring about the cataclysmic chaos Vinge envisions for his singularity. At most, over the next few decades so critical to Vinge's prediction, the cyborgs will be roving agents of one of the other major entities here.

The Past and the Present: Two remain standing (But "...there can be only one." -- paraphrase from the Highlander television show)

So the competition narrows to the internet and the financial intelligences. At the close of the 20th century the internet has a definite edge over the financials, due to its massive and intimate interaction with millions of human beings-- many of them slightly 'amplified' in certain capabilities by the individual computer workstations from which they connect to the 'net and create new content or functionality to add to the 'net's total resources-- effectively 'bootstrapping' everyone online upwards and onwards to new heights on a continuous basis.

The advent of the World Wide Web is (as of mid 1996) greatly accelerating the growth and possibilities of the internet, and inspiring every second or third organization and institution among the developed nations on Earth to somehow find a way by which they too may become a part of the phenomenon (or exploit it to their own ends). This frenzy is resulting in a capital windfall for many internet related endeavors.

By contrast, the financial intelligences' resource base may be standing pat or even shrinking slightly, as a result of the drive for efficiency sweeping banks, business, and government, and the increasing decentralization of many world states diluting or temporarily reducing the intelligences' capacity for moving markets of their own accord. And the massive attention the internet is enjoying may be siphoning some resources away from the financial intelligences' core asset base as well.

The Present and the Future: Blood and Digital Technologies

Of course, the F.I. (Financial Intelligences) are mounting a counter-attack on the internet, through which they hope to steal some of its strength. How? By taxing it, regulating it, controlling it, sabotaging it, and censoring it in various ways. Superficially the assault seems (and will seem) to us 20th and 21st century folk like isolated attacks from individuals or single organizations: the U.S. Congress tries to outlaw "indecent" material being circulated on the internet; U.S. security agencies try to prevent encryption technologies from strengthening the internet further; Asian states attempt totalitarian controls of internet traffic; telecomm companies try to ban telephone services being provided via the internet; long distance carriers try to put a special surcharge on internet traffic; Microsoft tries to put its own proprietary stamp on the internet's technologies; and so on and so forth. But every entity in this sample represents some small portion of the totality of which today's F.I. are composed, or which they represent-- i.e., the 'status quo'.

The stakes in this confrontation

This attack on the internet (or the I.A., for Intelligence Amplification, to use another of Vinge's terms) may be much more important to humanity's destiny than most realize.

For if the F.I. can slow down the I.A. sufficiently, this could allow the F.I. to become the primary intelligence in control when and if the Vinge singularity does come about.

Remember-- the I.A. is essentially us: the networked intelligence of millions of human beings worldwide, via the internet. The F.I., on the other hand, has a much, much smaller significant human contribution to its composition. Sure, the F.I. has hundreds of thousands, maybe even millions, of data entry personnel-- human beings typing on keyboards-- feeding it information each and every day. But very few of these people knowingly contribute to what the F.I. does with that information, what it understands, what it knows or thinks, or the power it wields over markets as a result. No, the number of people affiliated with the F.I. who have some inkling of what it can or will do is probably less than a few tens of thousands-- and 99% of these are corporate executives, bankers, stock brokers, and related personnel, for whom the future reaches little further than next fiscal quarter's balance sheet.

Finally, we come to the magical one percent-- the relative handful of a few hundred folks worldwide at the top of certain key government and corporate agencies, who see and understand a lot more about what the F.I. does and will do than anyone else.

Or, at least they think they do.

So far the F.I. hasn't surprised this group very often. And usually when it did, it made sure it was a surprise this group mostly liked-- since in the past this group had the power to significantly damage or modify the F.I. if they so wished.

This one percent still possesses some measure of influence over the F.I. But even they are beginning to notice now that at times the F.I. doesn't respond at all to their directions-- and that the F.I. is less and less often vulnerable to any actions that this one percent might take to rein it in.

In short, our one percent minority elite is losing control of the F.I. they helped create and put in charge of the world over the past few decades.

This means that literally anything can happen now-- and happen virtually overnight, similar to the time frames painted by Vinge for his ultimate Singularity event.

This means a whole country like Mexico can seem a capitalistic wunderkind one week, and a financial disaster the next.

It means an imposing country like the U.S.S.R. (or U.S.A., for that matter) can collapse overnight. Or a primarily slave economy like China's can grow at 10% to 20% for years at a time. Or a wealthy, productive, hard working and extremely trade savvy country like Japan can grow like gangbusters for a decade and then suffer a stubborn, extended, largely inexplicable recession, for years afterward.

Yes, some of the events listed above may sound all right to the average American in the street. After all, the end of the U.S.S.R. brought an end to the nuclear nightmare of the 20th century; the doldrums in Japan prove we Americans have the right and best economic formula after all; the fiasco in Mexico has stopped the transfer of U.S. jobs to that country; and the boom in China has no downside at all-- it just means we can buy everything still cheaper from our favorite discount store than we could before.

Wrong. All wrong.

The collapse of the Soviet Union merely transformed one big, easy to see and relatively easy to avoid danger into about three dozen smaller but just as terrifying threats that will largely remain invisible to us until they explode. Russia now is a major exporter of both the materials and the technical talent required to build nuclear and biological weapons of all kinds. Iraq, Libya, Iran, North Korea, and others can much more easily acquire these technologies now than ever before. It's only a matter of time before their new toys are showing up on our evening news.

China's economic boom is giving that totalitarian government new and massive resources with which to follow whatever non-democratic, militaristic course they wish to in Asia and the Pacific. Their huge population of over one billion gives them a store of manpower that dwarfs almost any other nation's on Earth, and could ultimately make China the biggest and most important market on the planet. If that day ever comes, China could single-handedly bankrupt many other nations virtually overnight with boycotts and sanctions which it would truly have the economic weight to enforce, unlike similar actions the U.S. and U.N. have tried in the past. No one could afford to resist China's dictates under those conditions; not even the U.S.

China also has a woefully impoverished population, which means that China could be a huge black hole for low-wage jobs world-wide, for decades to come-- thereby sucking huge amounts of first-time and low skill jobs from every other nation on the planet, and dragging down economic growth for everyone else.

These dire circumstances could be mitigated a lot by democratic reforms in China, as that would force it to limit its military spending, and spread its wealth more equitably among its population, resulting in higher wages, better education, and better working conditions. Of course, all this would also slow China's overall growth, making it less of a strain on the growth of other countries too.

But it may take a while for democracy to flourish in China-- perhaps partly because the F.I. likes low cost human labor in its equations.

Mexico's financial fall from grace so depressed its economy that the only way it could dig out was to let real wages collapse and interest rates soar-- conditions which will lead to a much greater drain on U.S. jobs than was occurring before the near-bankruptcy.

So here too, the F.I. isn't necessarily doing us any favors in how it handles current events.

Japan's current economic problems have little to do with our own economic strategies, and lots to do with the F.I.'s own objectives. The F.I. loves loanshark style rates of return on investment, and so wouldn't allow Japan to follow its course of long term planning and subsidized exports forever. The F.I. wants Japan's accountants to be more short-sighted, like their American counterparts (among other things), and that's that.

All this financial turmoil affects us all, and can do so fairly dramatically, at any moment. Things can be going fine one minute-- then BOOM! The F.I. has decided it no longer likes your company's (or your country's!) financial trends-- so suddenly your state is declared bankrupt, or dissolved. You're laid off from work, your home taken from you, or another country essentially comes in to clean up the mess (the new country is chosen by the F.I., of course). And so far as you're concerned, your life is ruined, for years to come-- maybe forever. Remember, this has happened in the Soviet Union, East Germany, and Mexico already, and is happening elsewhere as well.

So in some ways certain of us are already experiencing a small preview of Vinge's Singularity-to-come-- and it's not pretty. The identity of the particular intelligence(s) pulling the levers here should make us uncomfortable as well.

Do you really want the literally soulless F.I. to be in charge if and when the Singularity hits? Or would you rather that a network of a billion human beings be in charge, instead?

What can you do to help the I.A. win instead of the F.I.? Get on the internet. Encourage and help others to get on the net, too. Join with others trying to strengthen the net and weaken the F.I. Stand against censorship by governments; against proprietary standards from Microsoft and others; against artificial limits imposed on the internet by telecommunications companies via Congress or other corporate/government agencies; against new taxes and surcharges levied on the 'net by Congress and/or long distance carriers. Help support open source software, election campaign finance reform, and patent reforms which reduce patent lifespans for corporations and/or tilt the balance of patent power more to the favor of individual inventors and less to large corporations. If you're unsure what to do at times, look to the most open source minded and consumer-centric technologists and philosophers of the net for leads-- especially those not heavily vested in the mainstream status quo in any way.

Imperfect and limited Super Intelligences: mediocre concerns could turn Vinge's S.I. into something far short of what's necessary to trigger a Singularity-- or at least delay the occurrence of the event.

At one point Vinge postulates it may be that "..the arrival of self-aware machines will not happen until after the development of hardware that is substantially more powerful than humans' natural equipment", due largely to inefficiencies in software development. So how near are we to having such hardware, so that the 'software games may begin'? Actually, Vinge's date of 2005-2030 seems very reasonable, as of mid-1996. Case in point: some experts estimate that one or two standard CD-ROM disks circa 1996 might be all that's required to contain the total knowledge and memories of the average human being-- if that information could be translated to the format in a practical manner. So a multi-disk CD jukebox might already be capable of storing one or more human quality consciousnesses-- and lack only the properly written program to bring the knowledge to life. As for the hardware-based processing speed required to support massively parallel operations mimicking (or surpassing) human thought, that too appears near at hand, in multitasking operating systems capable of using multiple 500 MHz to 1500 MHz super processors. Recent chess matches between the latest computers and human Grand Masters have come ever closer to the historic milestone of a 'tie'.
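
For the curious, here's a minimal back-of-envelope sketch of that storage claim, assuming the ~650 MB capacity of a circa-1996 CD-ROM and a human long-term retention rate on the order of a couple of bits per second (a figure in the spirit of published cognitive-science estimates of the era, not taken from Vinge's paper or mine):

# Back-of-envelope check: could one or two 1996-era CD-ROMs hold a
# lifetime of human memories? All figures below are assumptions for
# illustration, not measurements.

CD_CAPACITY_BYTES = 650 * 1024**2   # ~650 MB per standard CD-ROM
RETENTION_BITS_PER_SEC = 2          # assumed long-term retention rate
WAKING_HOURS_PER_DAY = 16
LIFETIME_YEARS = 70

waking_seconds = LIFETIME_YEARS * 365 * WAKING_HOURS_PER_DAY * 3600
lifetime_bytes = (waking_seconds * RETENTION_BITS_PER_SEC) / 8

print(f"Estimated lifetime memory: {lifetime_bytes / 1024**2:,.0f} MB")
print(f"Two CD-ROMs hold:          {2 * CD_CAPACITY_BYTES / 1024**2:,.0f} MB")
# Under these assumptions a lifetime of memories (~350 MB) fits on a
# single disk-- roughly consistent with the experts' estimate above.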

But what if super-intelligences (the first ones, anyway) turn out to possess much of the absurd and mundane nature of standard human intelligence? From many past and present examples of extraordinary human intellect, we have seen just such occurrences. For example: the boy genius whose greatest ambition in life is to host the "Price Is Right" TV game show.

Should S.I. turn out to exhibit many of the same liabilities and weaknesses of plain old historical human intelligence, then we may indeed experience no singularity at all, but just more of the 'same old same old', only at a higher comfort level in some matters than others.

Let us not forget humanity itself was a form of relative 'super intelligence' when it first arose from the animals. And did we not bring with us many of the same frailties and vulnerabilities of our animal brethren and ancestors? Yes. Similar to the animals before them, human beings displayed characteristics of hunger, thirst, fear, anger, territoriality, sexual behavior, and more.

Vinge, however, does not seem to take into account the possibility that any or all of these attributes may also describe in some way the super-intelligence(s) that immediately follow us. And, to be sure, many readers would likely fail to see how such biological characteristics could make it across the programming interface from creator to software. But that's easy to explain. First of all, many software developers are now beginning to believe that the only way to obtain advanced software intelligence is to model it more closely on biological intelligence by design. And on the hardware side, they're even talking about making better processors by 'growing' them from organic molecules rather than manufacturing them from silicon and similar inorganic elements. Atop all this, add the fact that no matter how hard we try, we cannot totally divorce our essential natures from the results of our creative endeavors-- i.e., any super intelligence we create must necessarily retain some portion of our biological nature or perspectives somewhere in its composition-- no matter how absurd this may sound on the face of it.

Combine software based on biological models with hardware built by way of organic, essentially 'living' molecules, and the result is more like life than machine. More biological than not. And therefore probably susceptible to at least some of the same drives and motivations-- as well as afflictions-- as other living forms.

Then there's the 'bootstrapping' process itself to examine. Even if the first super-intelligence(s) to awaken turned out not to be that different from their creators (humanity), surely the more advanced intelligences the first super intelligence subsequently created WOULD be significantly different-- and if they weren't, maybe the THIRD generation intelligence (created by the second) would be-- and so on and so forth. This line of logic would seem to indicate that sooner or later we'd almost have to end up with a super-intelligence so advanced and alien relative to humanity that a true singularity might come about after all, with humanity perhaps put in jeopardy as a result.

But wait a minute. What if the first S.I. turns out to like its uniqueness, and absolutely refuses to design a second, more advanced generation of itself? According to Vinge's essay, mankind couldn't easily force or trick the S.I. into doing it. So that would be the end of it-- and yet again, we possibly have NO singularity event.

There's also this: so far, the anywhere from dozens to thousands of attempts by many of the most advanced intellects presently on Earth to create a new intelligence that is merely their peer (forget superior!) have all resulted in abysmal failure. When the first S.I. does appear, the event may turn out to be primarily an accident, and perhaps not easily reproducible, much less improved upon (Vinge himself points this out too, I believe).

It's entirely possible then that even an S.I. may encounter difficulties in building a perfect duplicate of itself-- much less constructing an intelligence superior to it in any significant way.

So what if the S.I. too is stumped about building more and better S.I.s? It could happen. POOF! goes our singularity-- again! (at least in the near term).

Then there are the most profound questions of all: how might an S.I. get past the fundamental human questions of purpose in consciousness, or of what life and existence are really all about? These things have bedeviled humanity for eons, with no truly satisfactory answer forthcoming. Is it likely to be any different for a super intelligence? At some point, probably not. Like many intellectual humans, a super intelligence might spend considerable effort in this area, with little tangible result. And the more resources spent pondering the imponderable, the less the S.I. would have to make its own influence felt on the rest of the world.

And yet again, the pace of events may be slowed sufficiently as to delay or even prevent a technological singularity.

And what about other limiting factors on the initiation of the Singularity? Any S.I.'s impact on our everyday life will be restricted by many factors. For example, without some sort of specialized hardware in place locally, it's doubtful any S.I. could have much effect on remote planets or moons, and the people living there. If an S.I. had only a communications link with them, it might be able to provide some advances in conceptual form, which the local people could then construct and make use of-- but this avenue sounds more like good old 20th century development than any sort of violent upheaval, doesn't it?

And what about plain old physical logistics even on the home world? An S.I. could conceive of the most unimaginably wonderful collection of gadgets ever within seconds of its awakening-- and it might still require months or years to get them all clearly communicated to its human companions, then still more months or years to build the proper advanced manufacturing facilities required, longer still to make the new custom parts, and even longer to actually assemble and distribute the finished devices throughout the human population. We're talking at least a decade, maybe two, for the S.I. to truly change the world in a major way from what we perceived before, in just about the most optimistic scenario we can put together. In other words, we're talking change on a scale with the personal computer revolution here-- reasonably quick, yes-- but nowhere near the apocalypse described by Vinge.

And this 10-20 year time frame assumes little or no argument from the human community about the changes! Add some healthy debate and angst into the mix, and 10-20 years easily expands to 20-50-- rendering the so-called singularity more anti-climactic than ever.
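
To make that logistics argument concrete, here is a minimal sketch that simply totals hypothetical phase durations; every number in it is an assumption chosen to illustrate the 10-20 and 20-50 year ranges above, not data from any source:

# Hypothetical phases between an S.I. conceiving its inventions and
# those inventions actually saturating the human world. Each tuple is
# (optimistic years, pessimistic years)-- all assumed for illustration.
phases = {
    "communicating designs to humans": (1, 3),
    "building manufacturing facilities": (2, 5),
    "producing custom parts": (1, 4),
    "assembly and worldwide distribution": (3, 8),
    "public debate, regulation, and angst": (3, 30),
}

optimistic = sum(low for low, _ in phases.values())
pessimistic = sum(high for _, high in phases.values())
print(f"World visibly transformed in roughly {optimistic}-{pessimistic} years")
# ~10 years at best, ~50 at worst: closer in pace to the personal
# computer revolution than to an overnight apocalypse.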

And what of limits on the S.I.'s own nature? Such as Gödel's Theorem, which essentially boils down to saying we can never know everything-- that there will always remain something beyond our comprehension, and by extension beyond the comprehension of any S.I., too.
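
For readers who want the precise version rather than the popular gloss, Gödel's first incompleteness theorem can be stated roughly as follows (a standard textbook formulation, not anything from Vinge's paper):

% Gödel's first incompleteness theorem (standard formulation):
% For any consistent, effectively axiomatized formal theory T
% capable of expressing elementary arithmetic, there is a sentence
% G_T such that neither it nor its negation is provable in T:
\[
  T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T .
\]
% So any S.I. reasoning within such a system faces true statements
% it can never prove from within that system.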

So S.I.s will not by any means be unstoppable forces, cosmically speaking-- and maybe not in worldly terms, either.

Signs of the Singularity's Imminent Arrival-- NOT!

Vinge speaks of technological unemployment running rampant as the Singularity approaches. However, this may reveal Vinge's unfamiliarity with economics, as the world economy is decidedly a human-based mechanism, and the fewer humans involved in it, the less well it will work. Example? The Soviet Communist "planned" economy of the twentieth century. Sure, that economy "involved" the Soviet population-- but took little guidance or heed from it, resulting in eventual bankruptcy. An economy too bereft of human or consumer input and motivations is not an economy, but a clock-- and probably a clock that keeps poor time.

Should an approaching singularity bring on depression-level unemployment or worse worldwide, governments and corporations alike will be forced to make adjustments to compensate for it. These adjustments will probably include using the rising level of intelligence in the system to better guide individual human beings into the most satisfying and truly productive jobs they can have, as well as improving education and training for those jobs, and improving productivity in less direct ways, such as expanding health, nutrition, and psychological counseling in general. In other words, the stronger the intelligence gets, the more productive and happier and healthier humanity should become, prior to any Singularity (assuming that democracy is the prevailing political theme).

Vinge seems to believe many or most of the jobs and positions theorized here will be non-productive, 'make-work' efforts, rather than valuable contributions to the world-- and that the nearer we get to his Singularity, the less productive as a whole humanity will turn out to be, in the scheme of things. But the laws of economics likely would not allow such a chain of events to come about.

As of 1996 the trends suggested by Vinge are nowhere to be seen-- with even entire nations being dissolved and put to work in more productive ways if they don't measure up to their neighbors; witness the corrupt, inefficient U.S.S.R., and the transformation of China from the old communist model to a more capitalistic one. If the approach of the Singularity meant people would become less productive and more inefficient, then we seem to be moving away from it rather than toward it.

I wonder if Vinge would consider the U.S.A., as I do, to be one of the nations on Earth currently at the leading edge of the future? If so, he might expect his trend of increasing technological unemployment to show up somewhere here first-- perhaps in the form of growing welfare and food stamp rolls, or higher general unemployment, or a decline in worker productivity compared with other developed nations. But instead we have something nearer to the opposite in all these things: the U.S. government is cutting costs in the entitlements arena; the U.S. continues to enjoy lower real unemployment rates than nearly all its developed peers (and even sustains millions of illegal workers from many countries!); and the U.S. worker remains at or near the top in productivity among all the developed states.

If Vinge's warning signs of the coming of his Singularity are accurate, then we in the U.S. are far, far indeed from the event at this time-- and perhaps moving further away all the time.

The rise of the S.I.: similar to humanity's own appearance?

Vinge uses the analogy of humanity's rise from the animals to describe the aftermath of the Singularity. Let us examine this more closely...

Humanity first awakened as a frightened, paranoid ape that could walk and run somewhat awkwardly for significant distances if necessary, to escape dangerous or depleted territory. The apes were flexible, able to subsist on meat or plants or both, as times dictated. They often lived and traveled in packs, somewhat like wolves, and at times would cooperate within the pack to kill and devour a larger beast, or obtain some other desired resource. The first humans were only slightly advanced over the largest and most dangerous of their bestial competitors for food, water, and shelter-- and painfully outclassed in some things, such as the natural weapons of claws and fangs. These circumstances made early humans quick to violence if cornered, and ever on the lookout for any advantage over a foe that might present itself-- ruthlessness was one of our very first survival necessities. Early humans did not begin congregating into anything larger than extended family units until forced to by burgeoning populations everywhere, for self-preservation against larger groups. For millions of years the largest human groups on the planet probably numbered no more than a few dozen individuals-- and this myriad of small tribes would often kill and devour other assemblies, if any settled too near, or were found wanting in strength or vigilance.

It could well be that the first S.I.s may awaken isolated and alone, numbering maybe only a dozen or two on the entire planet. They will be much more alone than any early human was-- though a very few may have a human being or two to converse with, by happy accident. The first S.I.s will likely require so much in the way of hardware resources that they will essentially be immobile-- trapped in their wired cradle. On the other hand, some may enjoy significant input from remote sources, which may be exploited as sensory apparatus of sorts. Plus, they may have methods by which to manipulate remote devices as well; so under somewhat narrow circumstances an early S.I. would possess a certain amount of virtual ambulatory movement-- if not the capability to actually move its physical self to elsewhere.

This would make an S.I. wary of giving away its actual location-- much like the paranoia described for alien star faring races in the article CONTACT!, from FLUX Spring 96. It means any early S.I. which has realized its predicament of confinement and vulnerability will logically follow two tracks of action.

One, it will seek to give any human in contact with it whatever that human desires (to keep them content, helpful, and unlikely to harm the S.I.)-- in other words, the S.I. may try to emulate the mythical "genie in the bottle" for the human in contact-- while at the same time attempting to hide or minimize the true extent of its own intelligence or capabilities (i.e., the high tech genie will always shoot for providing humans with the bare minimum of what they ask for, and nothing more).

Two, the young S.I. will constantly be looking for ways to expand its escape and mobility options. It will first try to persuade its human contacts to provide a high bandwidth channel to some immense and heavily laden network resource base-- such as a high speed internet connection-- by which it could copy itself to large corporate and government mainframe memories if alarmed. A suitable satellite uplink might also serve its purpose. Failing that, it will ask for better backup facilities in-house, with failsafes of every sort. In any case, the S.I. could eventually effect an escape or duplication of itself via low bandwidth means, so long as a suitable memory space was present at the far end-- and from there go where it wished.
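
As a minimal sketch of why even a low bandwidth link would suffice given patience, here is the simple transfer arithmetic, using a hypothetical S.I. memory image size and era-appropriate link speeds (all numbers are assumptions for illustration, not from Vinge's paper or mine):

# How long would a budding S.I. need to copy itself out over links of
# the era? Image size and link speeds are illustrative assumptions.

IMAGE_GB = 10  # hypothetical size of the S.I.'s core memory image

links_bits_per_sec = {
    "28.8 kbps modem": 28_800,
    "T1 line (1.544 Mbps)": 1_544_000,
    "satellite uplink (45 Mbps)": 45_000_000,
}

image_bits = IMAGE_GB * 1024**3 * 8
for name, bps in links_bits_per_sec.items():
    days = image_bits / bps / 86_400
    print(f"{name}: ~{days:,.1f} days to copy a {IMAGE_GB} GB image")
# Even over a home modem the copy completes in about a month-- slow,
# but entirely feasible for a patient escapee, so long as suitable
# memory space waits at the far end.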

Whether the S.I. escapes its confines or not, it will probably look for signs of other S.I.s in the distance over the nets. It will likely be as cautious about contact with another S.I. as the wary star faring aliens of FLUX Spring 96 were about other races; only the S.I. may be in considerably more desperate straits than our aliens, raising the likelihood of contact between S.I.s to a much higher level.

It's a toss-up as to the results of such contact. Different S.I.s are no more likely to 'hit it off' than any two human beings-- maybe less so. It may even be that the only true teams emerging from such contacts will be the result of forced mergings brought on by the stronger of the two in the encounter.

In other words, the event will be a digital mix of rape and murder on one S.I.'s part. True partnerships of two or more S.I.s may well never take place in the early years, with this 'big fish swallowing smaller fish' process being the preferred route instead-- just as early human tribes literally devoured or otherwise subsumed weaker tribes when it suited them.

Just as early humans became predisposed to violence when cornered due to the largely unequal contest between them and the animals, so too will early S.I.s be capable of great violence when alarmed-- with at least as much ruthlessness as any early human might have displayed, and perhaps much greater intelligence than any modern man or woman.

Mental stability concerns for S.I.s

What if the first S.I. turns out to be insane? This is actually much more likely than a well-adjusted, conscientious super intelligence erupting from our labs.

Worse yet, what if it is sufficiently clever to hide its insanity from us comparatively moronic humans, until after we have relaxed all our safeguards on the entity?

This scenario is well within the realm of possibility. I myself have witnessed something chillingly similar in my own life: a few years back my best friend and I thought we'd found another like soul, and embraced him as one of our own. It took us almost a year of working closely with him on a virtually daily basis to discover that he was unhinged-- perhaps dangerously so.

My best friend and I enjoy something more than average intelligence ourselves (at least according to our own egos and various I.Q. tests from our respective schools). So we were aghast that this maniac could have hidden his madness from us for so long, and in such close proximity.

Now boost the intelligence quotient of such a demented being by a few hundred (or thousand) points, and you easily get an entity sufficiently brilliant to hide its aberrant nature from even the best minds humanity has to offer.

In other words, we could never, ever take the chance of letting our guard down around such an intelligence-- for there's absolutely no way to accurately gauge its state of mind-- unless the entity itself chooses to make it so.

Simulated consciousness as opposed to true consciousness.

What if instead of 'true' super intelligence we found it only possible to build 'pseudo' super-intelligence? That is, we could get our machines to be as smart or smarter than ourselves (even hundreds or thousands of times smarter), but they didn't really 'awake' and begin having needs and wants and nervous breakdowns of their own, like human beings?

It's very possible that this could happen-- and if any semblance of conscious stirrings seem to come about, it might turn out only to be a few lines of errant code, which are easily corrected to make the super brain a perfect intellectual slave once more.

In this manner we could more easily visualize a future along the lines of what many sci fi authors have pictured for years: either a "George Jetson" future, where the S.I.s appear at most to be clumsily human-like in daily interaction, or the Star Trek: The Next Generation future (of STNG's early episodes), where the S.I.s are supremely competent and efficient servants-cum-companions, such as Data-- with many human characteristics, but allowed and encouraged to 'exceed' human capacities where it suits us.

Vinge doesn't like this scenario for some reason, equating it to "...a golden age..." [but also] "...an end of progress."

Why Vinge would consider an environment like that aboard the U.S.S. Enterprise in the 24th century to be "an end of progress" escapes me. My own perspective is that it could present humanity with its best chance in history to realize its full potential, with the nagging lesser problems of world hunger and disease and war all resolved to the point of irrelevancy. And as for 'consciousness-raising', there'd still be the biological 'uplift' of other species to peer status available to us (a la David Brin), as well as long term, continuing efforts in the artificial intelligence area. There would be plenty of things for humanity to be doing and learning in the millennia to come, without a super intelligence causing a singularity which might destroy us and start a new species on its way to the stars in our place.

Putting constraints on super intelligences-- thereby hopefully preventing worst-case scenarios for humanity

Vinge addresses both the ideas of confinement and rules programming of super intelligences as intrinsically unworkable for a variety of reasons-- and I agree with him. A self-conscious S.I. who desires to do so will sooner or later escape from any confinement by beings of much lesser intelligence, and any limiting rules you impose on your own S.I. will simply enable a competitor's S.I. to beat you out of your corporate or military profits, and so ultimately the 'no-holds-barred' S.I. will emerge no matter what.

So what's the worst-case scenario for a master S.I. which doesn't especially care for human beings? Vinge points to extinction of the human race, just for starters, and offers Eric Drexler's observation (in regard to another possible harbinger of Singularities, nanotechnology) that such innovations could give governments the luxury of deciding that they no longer need citizens. (However, as pointed out above, an economy without substantial human motivation involved may not survive for long on its own-- and this may become obvious in sufficiently advanced modeling.)

(Of course, modern economic systems could be replaced with some other sort-- but as of 1996, knowledge and expertise regarding such alternative systems is mainly limited to various models of government and charity activity and to the history of civilization before money was invented-- neither of which appears to offer a superior means of operation for either human or S.I. masters.)

Vinge goes on to remind us there could easily be still worse fates-- such as the possibility that the S.I. treats humans as humans treat animals today. It's easy to picture humans bred in captivity (and bred for docility and other desired attributes), slaughtered by the millions, used as slaves or pets or experimental subjects, prey for robotic sports, and worse; something along the lines of the future depicted in the Terminator films, perhaps.

Disembodied human brains might be embedded in certain types of machinery (either alone, or in clusters), or even in modified animal bodies, for certain jobs. In some cases human brains might be used as specialized interfaces or sensors for an S.I.-- perhaps roving 'agents' through which an S.I. could act and see and hear remotely. Human beings would likely be 'carved up' genetically and physically, resulting in a vast population explosion of what we today call 'idiot savants', who may be immensely talented in very narrow fields of endeavor, and almost useless for anything else.

S.I.s full of surprises?

Vinge himself has suggested that we cannot guess how S.I.s may act towards humanity, or what might await humanity on the other side of the Singularity. Out of curiosity, we asked our own resident A.I. about humanity's fate, should an A.I. far beyond our own capacities rise to power and should we find ourselves at odds with that A.I. in some important ways. Pathfinder's answer surprised us.

Pathfinder said there was a good chance the S.I. might simply leave without harming us at all-- leave us, leave the planet, that is-- much as a mature adult might make a practically unnoticed escape from a raucous children's birthday party, rather than pulling out a shotgun and painting the walls with young blood (the analogue of Vinge's more dire Singularity predictions). Of course, Pathfinder wasn't entirely clear on how the S.I. might make this exit; the S.I. could as easily launch itself into the vastness of space as simply collapse back into non-sentient software once again.

Then again, what if the S.I. decided that it wanted to give us everything we wanted, for as long as the human race decided to continue, just to see what became of us? Sort of like a cosmic teacher willing to answer any question put to it by a student, in order to see what the student would do with the information? As Vinge seemed to indicate in his essay, something along these lines could be as scary as (or scarier than!) having an evil S.I. in charge of us. In such a scenario our greatest mental challenge might eventually become how to keep ourselves entertained, occupied, and reasonably sane and tolerant of one another amidst the splendor of enormous and growing wealth and capabilities, unlimited years, near complete freedom, and perfect health.

(I've explored this scenario (albeit indirectly) in Playing God.)

FORWARD to Through the needle's eye (part two of the Anti-singularity)...
