(Where the Idea for my Novel Series Came From)
During my junior year at Colby College, I had been reading towers of papers about approaches to artificial intelligence for my cognitive science class, and I mostly thought:
“Wow, we’re so far away from developing any artificial intelligence that is actually… well… intelligent.”
That was 12 years ago. Yes, we had a computer that could beat a human at chess. And we had some algorithms that could learn to recognize faces. And even today we have IBM’s Watson, which can beat geniuses at Jeopardy. But… none of these computers are smart. They have no intelligence.
in·tel·li·gence (noun): The ability to learn or understand or deal with new or trying situations.
— Merriam-Webster Dictionary
Deep Blue can evaluate 200 million chess positions per second, but that doesn’t make it intelligent. Watson is an incredible feat of engineering that captivates me every time I watch it play. Watson can process one million books’ worth of information per second. But that doesn’t make it smart. In fact, you can’t even call Watson an imbecile, because "imbecile" implies a minor wisp of intelligence, which Watson does not have.
Let’s look at the three factors in the above definition of intelligence.
1. The Ability to Learn
Watson can improve the accuracy of its knowledge by absorbing more information, weighing its accuracy against the information it already has, and making connections between new information and what it already knows.
Is this learning? Maybe in very loose terms. It’s learning in the same way that Netflix might learn over time that you’ll probably like Inception if you liked both The Matrix and Catch Me If You Can. And it’s only mildly better than my couch “learning” the impression of my rear end and developing a depression over time.
But you can’t give Watson a calculus textbook and say, “Learn this from scratch,” even though it could read the entire book in one-millionth of a second. Watson would be able to make plenty of assertions about calculus afterwards, but it wouldn’t be able to perform calculus unless someone specifically programmed that capacity into it. If Watson could do that, then it would truly be learning.
2. The Ability to Understand
Can Watson, or any computer, grasp the meaning of things? This could be a huge philosophical argument with a maddening chain of tautological reasoning reminiscent of Louis CK’s hilarious comedy skit “Why?”
Instead, let me pose a hypothetical question. Say you told Watson the following: “If you beat my husband on Jeopardy, I’ll be upset at you.” Do you think Watson would grasp the simple meaning of your statement? And do you think Watson would wonder to itself, “Maybe I should consider losing because I don’t want this person to be mad… she might disassemble me.”
3. The Ability to Deal with New or Trying Situations
Assume, for a second, that Watson was Woody-Allen-sized and had a battery pack, but was still immobile. If you dropped Watson off outside Grand Central Station near a pretzel cart, do you think it would learn to cope with its new situation? Would it figure out how to get someone to recharge it? Would Watson be able to ask for a ride back to IBM, perhaps promising its driver a monetary reward from IBM?
All of that would require the following logic:
I will run out of power soon.
Being out of power is bad, because then I won't be able to dominate humans on Jeopardy, which is my favorite thing in the world, next to reading The Hunger Games Trilogy one trillion times per hour.
Therefore, I need to find power.
I don’t have the right to take other people’s power. I do have the right to IBM’s power.
Therefore, I must reach IBM.
I can travel if I had roller skates, but those are so old-school. Cars are cooler and faster.
I don’t own a car. I could carja... no, I could get harmed.
But I can use a taxi without owning it.
I need to ask someone to hail me a taxi.
I also need to explain my situation to whoever talks to me. If I don’t, they won’t help because they’ll think they’re being Punk’d.
Etc…
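The chaining itself, notice, is the easy part. The whole sequence above could be mechanized with a few lines of backward-chaining, as in this sketch (all the goal names are invented to mirror the list):

```python
# A toy backward-chaining planner for the reasoning chain above.
# Rules map each goal to the subgoal that achieves it; names are invented.
rules = {
    "stay powered":   "reach IBM",            # IBM's power is the only power I may use
    "reach IBM":      "ride a taxi",          # no car, and carjacking is too risky
    "ride a taxi":    "hail a taxi",
    "hail a taxi":    "ask a passerby",
    "ask a passerby": "explain my situation", # or they'll think they're being Punk'd
}

def plan(goal):
    """Chase subgoals until we hit a primitive action with no further rule."""
    steps = [goal]
    while steps[-1] in rules:
        steps.append(rules[steps[-1]])
    return steps

print(" -> ".join(plan("stay powered")))
```

What no program of this kind can do is *form* the first goal on its own, which is precisely the gap the next paragraph describes.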
Just in the first two steps, there’s a problem. It requires Watson to link its “knowledge” about the world to knowledge about its own state of being. I’d argue that any computer that lacks sensory input from its immediate surroundings – sight and sound most importantly – wouldn’t even begin to have the tools to reason about the relationship between its existence in the world and its own state.
The broader the environment an AI can understand and react meaningfully to, the closer it gets to intelligence. A robot might be superb at learning to stack a few blocks, but if that’s all it can do, it’s not intelligent. Humans are the most adaptable. And although we may not be able to adapt to, say, suddenly being pushed off a cliff, we’re intelligent enough to scream on the way down.
The Bottom Line about Artificial Intelligence
Artificial Intelligence still doesn’t exist. What we have are artificial creations that appear intelligent in tightly constrained situations in which they have been programmed to excel (and even then, not always). We code algorithms for computers to learn specific tasks, and that’s all they’ll be able to do until we update their algorithms. Just because we have cars that can drive themselves and avoid accidents doesn’t mean we’ve discovered a brilliant new approach to AI. (And if they do hit you, they won’t feel bad about it.) This quote pulled from Wikipedia’s history of AI sums it up nicely:
“These successes were not due to some revolutionary new paradigm, but mostly on the tedious application of engineering skill and on the tremendous power of computers today.”
And that’s the crux of it – we need to move past programming algorithms for intelligence. We need machines that can learn to learn. Like humans – we don’t just learn things, we learn how to learn, so we can adapt to and make sense of whatever environment surrounds us. Our brains are malleable.
This TED talk by Henry Markram about building a brain in a supercomputer is the best approach I’ve seen. It doesn’t produce any intelligent results yet, and it’s the type of approach that will feel meandering and vague… until one day, when it all comes together, a computer makes a sound of its own accord – not a sound that was programmed into it, but a sound born of its electrical signals swimming around in the womb of its malleable mental structure. And that sound may be reminiscent of the crying of a newborn baby.
Why would it make a sound? Not because it was programmed. But because it could.
The Idea
So how does this relate to my techno-thriller novels? I asked myself, if I had to make an intelligent computer – a real intelligent computer – how would I do it?
I decided that I would grow one.
We don’t know enough about the structure of the brain to model it entirely yet. But why start with the most complex part of human biology? Why not start much simpler? Start with DNA.
DNA is nature’s compression algorithm. If we could just unzip it, we would have everything we need to create truly intelligent programs. And guess what? Nature already knows how to unzip DNA into a human. So let’s simulate that.
Let’s build an incredibly detailed simulation of a human egg. Simulate conception, and then… just keep the simulation running. A human will grow. It will have a brain. Given the proper inputs and care, it will be intelligent.
There are myriad reasons why this isn’t possible today… but… what if we were wrong? What would it take? Who could do it? How much would it cost?
And then, what if this intelligence became unfathomably smart, but its human side remained? And what if you were the only one to realize the danger it would put the world in? How would you stop it?
This is what The Day Eight Series is about. I wanted to write fast and fun thriller novels, but I also wanted to explore questions about unfathomable intelligence, about existence, and about our universe. If these topics excite you, or if you’ve enjoyed novels by Michael Crichton or Dan Brown, or even if you’re just looking for a fun read, check out my novels or see what people are saying about them.