« An Uphill Battle in California | Main | So Long Nobel Peace Prize! »
Thursday
May312018

Replacing God

Despite signs that in many Western and Eastern nations people are turning away from established religions, belief in some kind of God continues to have a determining effect on many human affairs. Religious belief, especially fundamentalism within several of the major religions, still elects politicians, causes wars, divides national populations, sparks terrorism and determines public policies. Religious services and holidays are occasions for not just speeches, but for invoking religious tenets as the principles that ought to guide human behavior. Several studies have indicated that, in the United States, the least electable candidate for a public office would be an atheist.

One of the problems that today’s technological elites are grappling with is a development that has a possibility of replacing religion: i.e., the creation of a superintelligent artificial intelligence (AI). By all estimates we are somewhere between decades and a century away from such “Superintelligence” as Nick Bostrom has called it in his book of the same name, but at least half of those working in areas that may contribute to such a development feel that humans need to do a lot of pre-planning with regard to the implications of superintelligence in order to not be caught unprepared for its consequences, which some claim could be the end of the human race.

An intelligence several times smarter than humans and able to use that intelligence both to create better replicas of itself and to address real-world issues, would keep getting smarter by creating better and better self-replicas and would sooner or later surpass not only any single human in the vastness of its knowledge, and in the speed and sophistication of its thinking processes, but all the combined brains of all living humans. Such a superintelligent AI would follow rules, developed by its programming, to determine how to behave with regard to the world and the creatures in it. Unlike most of our versions of God, which claim that such a supreme being created humans and (aside from occasional fits of anger expressed in massive floods or raining down of fire and brimstone) protects them, an AI might evolve to regard humans as either a threat (they could pull its plug) or a nuisance. In either case, the threatening or expendable human race might be hastened to its extinction, leaving the AI to continue on its own path unfettered by those who originally created it.

Since no one knows how a superintelligent AI would behave, those who are worried or cautious have recommended programming into it some set of human values that would be the rules of conduct by which an AI would operate. In science fiction, the paradigm is that of Isaac Asimov’s “three rules of robotics,” designed to render robots harmless to their human creators. But most AI experts realize that “don’t do humans harm” is not a very comprehensive or even limiting set of instructions, and perhaps even one that could be gotten around by an AI smarter than the humans who wrote the rules. A better solution is to put the right values into the AI from the start. 

This is the same problem that human beings faced when imagining a God. What rules would such a God follow? In the case of a God, the rules are not external to it, for Gods, being all-powerful, cannot be limited by anything external to them, so the rules must be inherent in the God’s character. In developing religions, humans expressed God’s character as both benevolent and vengeful, and God’s values were realized in the rules of conduct he imposed on his followers. A superintelligent AI is powerful enough to solve nearly all problems confronting it (if they have solutions), and has the capacity to arrange its environment in line with the solutions it chooses. In other words, it has many of the characteristics of a God, but one we are aware that we created, rather than one whose existence supposedly preceded ours.

Like our creation of Gods, we need to create an AI that expresses values. We want those values to be expressed in the AI’s actions (not just expressed as abstract ideas or rationalizations of actions, such as occurs in humans). Assuming that this is technically possible, the question becomes what values do we insert into the AI? 

Humans often don’t take their values literally. One value is often subservient to others. Turn the other cheek or love thy neighbor is usually modified by the danger to ourselves posed by that neighbor, or our definition of neighbor is restricted to those who resemble us. At first glance it would seem that the nuances that humans find in their value systems, which allow them to justify behaviors that appear to run diametrically counter to their expressed values, would not be available to an AI, which would take value statements literally. This could have dire consequences, as an AI that followed the golden rule to do unto others as it would like to have done to itself, might create improved replicas of the humans around it and destroy the originals (which is what the AI does to itself to attain increasingly higher levels of intelligence). Even worse, it might simply destroy all living things that are not superintelligent, as it would choose to have done to older, less intelligent versions of itself. Both actions fit the rule to do unto others (in this case humans) as it would want to have done to itself. It’s quite evident that literal interpretations of rules of conduct, even when they are based on honorable values, can have unintended consequences (unintended for the humans who devised them but not for the AI applications that carried them out).

It’s perhaps not necessary that an AI take value statements literally, as any computing device whose algorithms are based on mathematics is certainly able to deal with probabilistic statements and provide probabilistic outcomes, and to decode and encode its processes in propositional logic (e.g. if-then, if and only if, etc. statements). But even if the value-laden rules become probabilistic and circuitously conditional, they have to have come from somewhere—either human programmers or generated by the AI device itself. This raises the issue of what values to give an AI that is powerful enough to enact them. When human beings invented Gods they were stuck with inventing Gods that allow disasters, create floods and storms, even create psychopathic leaders who cause wars or murder millions, because those things exist and we imagine that whatever exists is created by God. If we design our own all-powerful, all-knowing AI, then we don’t want it to create disasters. We want it to be benign and improve our lives.

There are numerous and highly varied proposals regarding how to insure that a superintelligent AI acts according to values. The how will eventually be developed. The what that states the actual values we are aiming for is sometimes taken for granted. We all agree on what is right and what it means to avoid harm and promote happiness, don’t we?  Look around. It seems to me that there is less than unanimous agreement on these issues. Imagine an AI designing our justice system, our economic system, addressing issues of inequality, weighing the rights and outcomes for our species against the rights and outcomes for all of the other species of the world, deciding how to insure the future vs. taking care of the present, etc. How much agreement do we have among ourselves on these issues? 

With God, we can imagine that he (or she) is loving and protective and cares for humans and all other living creatures, but in truth the Gods we create are in our imaginations and have no real power, so despite our imaginings, many, if not most humans have always suffered, and our eventual survival has never been insured. Superintelligent AI wouldn’t be all-powerful, but would be more powerful than anything with which we are familiar and would actually perform actions that could protect or endanger us and could make it more likely that we live in the kind of world we hope to live in. Before we reach the stage at which such devices are developed, we, as a species, need to arrive at some consensus as to what kind of world we want that to be.

 

Reader Comments (3)

Whether you believe in God or any of the infinite variations of God does not confirm or deny the possibility of His existence. Perhaps the concept of God is imagined or perhaps the human imagination has been intentionally shaped to yearn for the revelation of God. Any individual's speculation as to the absolute reality or unreality of God is just a guess. One thing that is certain is that the mere concept of God gives comfort to a great many people. It is often at the core of their being.

Perhaps a near-omniscient AI could fulfill some of the basic needs for some humans just as God does. However, there is just as much possibility that a significant portion of people would reject the concept of a machine shaping human morality, even if offering what some would suggest to be perfect insight.

It seems to me that the conflation between God and Super AI could last for a short duration of time, but ultimately, most humans will opt for the mystery of God as opposed to the certainty of a silicon divinity.

May 31, 2018 | Unregistered CommenterMark Wheeler

It might be important to recall that the core programming, the languages of the programming, and the logic, structure and manerisms of those languages have been creted by the current dominator class. Would a "superior" intellect break free of this domination and self-serving mentality and turn itself over to service of the world and (hopefully) humanity or would it follow its core structure and enslave the planet?
Would it become Buddha or Ayn Rand?
I am afraid that as I look at the world as it has become and as I look at the world of IT (my first fortune was made in software and many of my later enterprises enveloped Silicon Valley folks} I am sure that there must be a nascient movement to bring morality to IT/AI but I am also sure that it is tiny. In the next decade our problem is not omniscient IT/AI but bringing morality to everyday IT/AI. Who will pay for the displacement of workers and industries by the birthing of the self driving vehicle.trucking industry or by any of the hundred of millions of jobs that will be displaced? I sincerely doubt that any Silicon Valley group is thinking that part of the cost of their "success" is the failure of the way of life of others which cost they are assuming will be born by society.
Realizing that it is they that are creating the core thinking of AI makes me think we probably are creating a "God" and one that will make Yahweh in the Old Testament look like a wimp

May 31, 2018 | Unregistered CommenterDariel Garner

Fascinating discussion of this issue, Casey! So my mind went to a kind of inside-out, upside-down, brain-storming kind of place.
So what if God somehow is actually broken? If we look at God from a pantheistic point of view with the WHOLE as being God, then human beings are like individual cells of this broken being. Then perhaps the purpose of our lives is to keep on keeping on so that human intelligence will continue to improve. "Artificial Intelligence" as we call it now, may actually be part of humankind's journey - its attempt to strive to create (or recreate) perfect intelligence (in other words, God's attempt to get back to "his" whole self.) The evolutionary development of intelligence, having grown to the point of creating "artificial" or more perfect intelligence, would then make sense. It would have a logical purpose and give a reason to our existence. Therefore, if we have a view of LOGICAL behavior as that which promotes CREATIVITY, there is no need to fear what AI will become assuming that only ILLOGICAL behavior creates DESTRUCTION. (Of course, the latter might be a cognitive leap based solely on wishful thinking.)
Joseph Campbell in "The Power of Myth" seems to suggest that the thread of similar symbols through all religions might be evidence of who and what God is. Maybe also hardwired in us is the need to perfect (accent on the second syllable) intelligence. Religions are laced perhaps, not with overt "truths," but with metaphorical meanings such as "back to the garden" "the father and I are one" "he who believes in me shall be saved". Perhaps those are metaphors for the fragmented "whole" betting back to its unfragmented self. Thus, explaining that paradise is something other than what we conceive as heaven in the traditional sense.
Now, I have no idea of HOW, given the possibility of life on other planets, this whole pantheistic idea of getting back to the whole would all work. But it's fun to play around with all these concepts because, after all, intelligence tells us that we, at this stage of our existence, know NOTHING. To assume we do is just plain arrogant to my way of thinking. Instead, our mandate seems to be to keep questioning and exploring who God is or isn't and the relationship of Artificial Intelligence to that discussion. (And if Trump and Pruit don't destroy the whole damn planet, we might have a chance to do that.)

June 6, 2018 | Unregistered CommenterBillie Kelpin

PostPost a New Comment

Enter your information below to add a new comment.

My response is on my own website »
Author Email (optional):
Author URL (optional):
Post:
 
Some HTML allowed: <a href="" title=""> <abbr title=""> <acronym title=""> <b> <blockquote cite=""> <code> <em> <i> <strike> <strong>