Artificial Intelligence in the Garden of Eden

People in the tech world want, unconsciously, to be God and on some level think they are God.

The dawn of the internet age was so exciting. I took my grade-school son, enthralled by Apple computers, to see Steve Jobs speak at a raucous convention in New York almost a quarter-century ago. What fervor there was. At a seminar out West 30 years ago I attended a lecture by young, wild-haired Nathan Myhrvold, then running Microsoft Research, who talked about what was happening: A new thing in history was being born.

But a small, funny detail always gave me pause and stayed with me. It was that from the beginning of the age its great symbol was the icon of what was becoming its greatest company, Apple. It was the boldly drawn apple with the bite taken out. Which made me think of Adam and Eve in the garden, Adam and Eve and the fall, at the beginning of the world. God told them not to eat the fruit of the tree, but the serpent told Eve no harm would come if she did, that she’d become like God, knowing all. That’s why he doesn’t want you to have it, the serpent said: You’ll be his equal. So she took the fruit and ate, she gave to Adam who also ate, and the eyes of both were opened, and for the first time they knew shame. When God rebuked them, Adam blamed Eve and Eve blamed the serpent. They were banished from the garden into the broken world we inhabit.

You can experience the Old Testament story as myth, literature, truth-poem or literal truth, but however you understand it its meaning is clear. It is about human pride and ambition. Tim Keller thought it an example of man’s old-fashioned will to power. St. Augustine said it was a story of pride: “And what is pride but the craving for undue exaltation?”

I always thought of the Apple icon: That means something. We are being told something through it. Not deliberately by Jobs—no one would put forward an image for a new company that says we’re about to go too far. Walter Isaacson, in his great biography of Jobs, asked about the bite mark. What was its meaning? Jobs said the icon simply looked better with it. Without the bite, the apple looked like a cherry.

But I came to wonder if the apple with the bite wasn’t an example of Carl Jung’s idea of the collective unconscious. Man has his own unconscious mind, but so do whole societies, tribes and peoples—a more capacious unconscious mind containing archetypes, symbols and memories of which the individual may be wholly unaware. Such things stored in your mind will one way or another be expressed. That’s what I thought might be going on with Steve Jobs and the forbidden fruit: He was saying something he didn’t know he was saying.

For me the icon has always been a caution about this age, a warning. It’s on my mind because of the artificial-intelligence debate, though “debate” is the wrong word: one side is vividly asserting that terrible things are coming, and the other side isn’t answering but calmly, creamily, airily deflecting Luddite fears by showing television producers happy videos of robots playing soccer.

But developing AI is biting the apple. Something bad is going to happen. I believe those creating, fueling and funding it want, possibly unconsciously, to be God and on some level think they are God. The latest warning, and a thoughtful, sophisticated one it is, underscores this point in its language. The tech and AI investor Ian Hogarth wrote this week in the Financial Times that a future AI, which he called “God-like AI,” could lead to the “obsolescence or destruction of the human race” if it isn’t regulated. He observes that most of those currently working in the field understand that risk. People haven’t been sufficiently warned. His colleagues are being “pulled along by the rapidity of progress.”

Mindless momentum is driving things as well as human pride and ambition. “It will likely take a major misuse event—a catastrophe—to wake up the public and governments.”

Everyone in the sector admits that not only are there no controls on AI development, there is no plan for such controls. The creators of Silicon Valley are in charge. What of the moral gravity with which they are approaching their work? Eliezer Yudkowsky, who leads research at the Machine Intelligence Research Institute, noted in Time magazine that in February the CEO of Microsoft, Satya Nadella, publicly gloated that his new Bing AI would make Google “come out and show that they can dance. I want people to know that we made them dance.”

Mr. Yudkowsky: “That is not how the CEO of Microsoft talks in a sane world.”

I will be rude here and say that in the past 30 years we have not only come to understand the internet’s and high tech’s steep and brutal downsides—political polarization for profit, the knowing encouragement of internet addiction, the destruction of childhood, a nation that has grown shallower and less able to think—we have also come to understand that the visionaries who created it all, and those who now govern AI, are only arguably admirable or impressive.

You can’t have spent 30 years reading about them, listening to them, watching their interviews and not understand they’re half mad. Bill Gates, who treats his own banalities with such awe and who shares all the books he reads to help you, poor dope, understand the world—who one suspects never in his life met a normal person except by accident, and who is always discovering things because deep down he’s never known anything. Dead-eyed Mark Zuckerberg, who also buys the world with his huge and highly distinctive philanthropy so we don’t see the scheming, sweating God-replacer within. Google itself, whose founding motto was “Don’t be evil,” and which couldn’t meet even that modest aspiration.

The men and women of Silicon Valley have demonstrated extreme, genius-like brilliance in one part of life: inventing tech. Because they are human and vain, they think it extends to all parts. It doesn’t. They aren’t especially wise, they aren’t deep, and, as I’ve said, their consciences seem unevenly developed.

This new world cannot be left in their hands.

And since every conversation in which I say AI must be curbed or stopped reverts immediately to China, it is no good to say, “But we can’t stop—we can’t let China get there first! We’ve got to beat them!” If China kills people and harvests their organs for transplant, would you say well then, we have to start doing the same? (Well, there are people here who’d say yes, and more than a few would be in Silicon Valley, but that’s just another reason they can’t be allowed to develop AI unimpeded.)

No one wants to be a Luddite, no one wants to be called an enemy of progress, no one wants to be labeled fearful or accused of always seeing the downside.

We can’t let those fears stop us from admitting we’re afraid. And if you have an imagination, especially a moral imagination, you are. And should be.