The real problem is not whether machines think but whether men do.
~B. F. Skinner
The year is 2110. IBM has just completed work on the most advanced A.I. (Artificial Intelligence) in history. At 12:10 P.M. the A.I. is loaded onto servers. 12:12 P.M., it makes contact with the World Wide Web. 12:15 P.M., it seizes control of all networks on Earth, shuts down all global communication, and deems human beings a threat to its existence. 12:17 P.M., the A.I. launches a preemptive strike and the world ends.
Yeah, that's a boring one. Let's try this. The year is 2045. A little closer, right? Inside a laboratory somewhere in Europe, a tiny speck of grey is slowly moving across a desk, growing larger by the minute. As it moves, the area underneath it simply disappears. Before researchers realize what has happened, the "grey goo" is the size of a truck and has eaten away most of the room that contained it. As people scramble to find the self-destruct orders, the goo eats away the power to the facility. Before backups can come online, the goo has eaten most of the research team. There is no one left on site who knows what is happening. Eventually the goo eats away the entire facility, growing to mammoth proportions. As news teams arrive on scene, people become aware of what is happening.
Immediately the scientific world speaks up and tells everyone that the world is in danger from microscopic machines. Nanomachines. Billions of them, adding more with each passing moment. An army of self-replicating machines which convert matter into copies of themselves. Unless someone, somewhere knows the self-destruct sequence needed to disarm them, they will eventually, literally, consume the world. EMP charges won't work since they aren't that kind of machine. Bombs would scatter far more than they kill and create large blobs all over. We are completely helpless as we watch this grey goo consume everything around us until we too are broken down to make copies. And they just keep going. The Earth turns grey as mountains are leveled and oceans are drained. That's right. They EAT the oceans. Breaking down the molecular structure of water to feed themselves. Eventually they'll eat their way down to the molten rock underneath and "drown" in it. Most of the Earth has been eaten away. It never recovers.
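The scary part of self-replication is the math. Here's a toy sketch of why the goo wins: every machine copies itself, so the swarm doubles each generation. The numbers below (one femtogram per machine, one doubling every 100 seconds) are made-up illustrative guesses, not real nanotech figures.

```python
# Toy model of the "grey goo" scenario: exponential self-replication.
# The machine mass and doubling time are hypothetical placeholders.

EARTH_MASS_KG = 5.97e24    # approximate mass of the Earth
MACHINE_MASS_KG = 1e-15    # assumed mass of one nanomachine
DOUBLING_TIME_S = 100      # assumed time for each machine to copy itself

def doublings_to_consume_earth():
    """Count doublings until the swarm's mass equals the Earth's."""
    count = 1
    doublings = 0
    while count * MACHINE_MASS_KG < EARTH_MASS_KG:
        count *= 2
        doublings += 1
    return doublings

n = doublings_to_consume_earth()
print(n)                            # 133 doublings
print(n * DOUBLING_TIME_S / 3600)   # under 4 hours at these made-up rates
```

The point isn't the specific numbers; it's that exponential growth needs only a couple hundred doublings to get from a speck on a desk to a planet.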
Those are definitely sci-fi scenarios, but the science behind them is becoming more real every day. This century will see many landmark moments in the fields of robotics and artificial intelligence. The U.S. military in particular has taken a great interest in these fields. And this is where many of the problems will begin.
Currently the U.S. Army has a small range of "robotic soldiers" it uses on the battlefield. Mostly bomb retrieval and surveillance, though I'm sure we are all familiar with the Predator drones. Many bots used early on for surveillance have since been equipped for combat as the technology has progressed. Everything from small arms to various rocket-propelled incendiaries. All have a primary control structure, but they have been given various degrees of autonomy over the years. This has led to a possibly frightening discovery. Apparently, from time to time, the machines will go "off plan" and find their own way to an objective. To most it doesn't seem like a big deal. The machine is learning how to do things better. Well, yes and no. It all depends on whether that was a desired result. Too much autonomy and the machine loses its usefulness as a soldier and a tool. Too little and it loses its effectiveness in open combat, its ability to adapt and meet changing objectives.
What really drives it all home is the next thing they discovered. The machines were also marking and attacking targets on their own. Rarely, but enough to give pause to many working on the programs. A virus was causing it. A bug in their A.I. This poses another problem. It's not really the machine showing the intelligence but the virus, warping the A.I. program. It sounds like movie bullshit, but it has happened. Some of the viruses have proven difficult to find, almost as though they were knowingly avoiding discovery. Some are even tougher to fully remove, as though they cling to life. And life is the key word here. The future development of A.I. will bring about many new ethical questions, and questions about just what defines being alive. But before we go into all that, let's talk about the hardware first. Let's talk mechs.
The term mech comes from Japanese anime. Really the whole concept does. Mechs are basically giant robots/tanks piloted by one or more people. They're usually depicted as bipedal, but they don't have to be. Again, the United States military has taken a great interest in mechs. No shit. Though, right now, much smaller versions. They envision something more like a robotic exoskeleton. Tests have already been run on several versions. The technology is also breaking ground in prosthetic limb research. The idea is to have the "suit" connected to the brain of the user so no extra controls are necessary. You just move as you usually do, only now you are amplified by the exoskeleton. This is helping medical researchers devise ways to connect patients to their prosthetic limbs so they move naturally, just like anyone else. The major goal is to restore motor function to paralyzed patients. Tests of early prototypes have been more than promising.
Paralyzed patients have actually moved things "using their minds", just like we do with our own limbs. The hope is that the technology can be scaled down to a point where patients can move about freely without noticeable help. Possibly a microchip or control system implanted near the brain stem which replaces the lost nerve structure. A truly fantastic thing if they can pull it off.
But back to our power suits. “An Army of One”. Remember that Army ad? Well, it’s on the way. Imagine a single soldier faster than a cheetah. Stronger than 20 men. Able to see further and clearer than an eagle. Blend into any environment unseen. And armed with the most advanced weapons on Earth. The age of wide open conflicts between armies is quickly passing. This new technology would almost end it. What is the end result when one man can kill an entire battalion by himself? Level a building with his fists? This technology is costly and currently difficult to produce. But this is the future of war. A real Captain America, a true “super soldier”.
And nanomachines play a part in this too. The suits we build early on won't be perfect. I expect many unintentional deaths in their testing. Pushing the body past its normal limits, whether machine-aided or not, will have noticeable effects. Some very costly. If the suit "jerks" just once, it could dislocate a shoulder, break a leg, or far worse. There will be an adjustment to a sudden surge of raw strength. Our muscles and especially our joints will struggle early on to adapt. Even in the most fit, athletic person. Nanomachines can help by bridging that gap. They can heal from within. They can monitor muscle and joint durability and intervene on their own to compensate. Everything from repairing and replenishing muscle tissue to increasing oxygen absorption, a key to endurance.
A super suit is great but, as any fan of Halo will tell you, you need a good A.I. tagging along for the ride. The one-man army will need help with the logistics, communication, and follow-through we expect from a large force. A sufficiently evolved A.I. can do all this and more. How many of you have heard of Watson? Watson is IBM's latest A.I. program. You may have seen some videos of him competing on Jeopardy against human opponents. I know what you're thinking: "of course a computer knows more than a person." But that wasn't the main focus of this experiment. Watson was loaded with 4 terabytes of information. How big is 4 terabytes? Pretty fucking big. This included every article on Wikipedia. But Watson was not connected to the web. Like a human player, he had to "think" of the answer. This was the real test of Watson's ability. We want A.I. to operate like our brains. When we retrieve a memory or an answer to a question, our brain begins activating and aligning certain neural pathways.
Now, here's a stat you may not believe, but it is true: there are more neural pathways in your brain than there are stars in the Milky Way galaxy. Your brain is far, FAR more powerful than the best computer on the planet. More capable than any A.I. And this is the difficulty in creating human-like intelligence in machines. The complexities of the mind elude us. We are no closer to figuring out how our own mind works than we are to developing an artificial one. But Watson showed better than expected results. Once he received the question, at the same time as the human opponents, like a human brain he began linking pathways with information relevant to the question. He (I call Watson "he" because of the name; I know we refer to most machines with female pronouns) normally narrowed it down to 3 or 4 options and gave his best "guess". Even though he was routinely first to buzz in, he didn't get every question right. But he wasn't expected to. He wasn't programmed to be an answer machine. He was programmed to learn, think, and react. And he did so with flying colors.
IBM believes they now have the blueprint to move forward with deeper A.I. Watson hit almost all the benchmarks they had set for his performance on Jeopardy. It wasn't what he got right, it's what he learned. Or, more to the point, the fact that he learned. As I said before, Watson didn't have the answers programmed in. To get each answer, he first had to learn it. The programmers were actually worried going into the show, as Watson had been missing more questions than predicted. There was fear he might have to cancel his appearance for some tune-ups. But he performed admirably. Where Watson is at a disadvantage is with something the human mind does very well: recognizing context. Questions worded as puns or sentence fragments gave him the most trouble. This is the next step for A.I. programs. The ability not just to assume, but to place context regarding wording, intent, and, most importantly, tone. Not just what the question is but why and under what circumstances it is being asked. In humans much of this is probably tied to emotion. Something that our machines are lacking. But an A.I. program that can adapt and think on the fly like a human mind would be an invaluable resource not just on the battlefield but in hospitals, classrooms, and dozens of other fields.
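The "3 or 4 options and a best guess" behavior described above can be sketched in a few lines: score some candidate answers, pick the highest, and only buzz in if the confidence clears a threshold. To be clear, this is a toy illustration of the idea, not IBM's actual DeepQA pipeline; the candidates, scores, and threshold here are all made up.

```python
# Toy sketch of confidence-based answer selection, Watson-style:
# narrow to a few candidates, then buzz only when confident enough.

def best_guess(candidates, buzz_threshold=0.5):
    """Return (answer, confidence) if confident enough, else None (stay silent)."""
    if not candidates:
        return None
    answer = max(candidates, key=candidates.get)   # highest-scoring candidate
    confidence = candidates[answer]
    return (answer, confidence) if confidence >= buzz_threshold else None

# Hypothetical scores after evidence gathering: 3 options, one clear leader.
scores = {"Toronto": 0.14, "Chicago": 0.62, "Omaha": 0.10}
print(best_guess(scores))                  # ('Chicago', 0.62) -> buzz in
print(best_guess({"A": 0.3, "B": 0.25}))   # None -> too unsure, don't buzz
```

That last case is exactly the pun/fragment problem: when no candidate scores well, the machine's best move is to keep quiet.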
Skynet. O.K. It's out there now. And for those who don't know what Skynet is, go now and watch Terminator 2. You can thank me later. But there is also a real Skynet: a series of military satellites under the supervision of the British government but contracted out to a private firm. I'm sure there's nothing at all suspicious about that. What if I told you that the scenario in Terminator is exactly where we're heading? Not the time-traveling androids but the global defense system operated by a single A.I. Though the time-traveling androids could be coming right after. And this is why making robots "more human" is such an important thing. The rogue A.I. in those films had no emotion. It was a cold, calculating machine. The more these programs identify with us, the less they will view us as a threat. Dehumanization is an essential part of warfare. When you think of your enemy as less than human, less than yourself, it's far easier to kill them. We want the machines to feel like they are one of us.
The A.I. programs developed in the Halo series were "imprinted" with some part of their human creators. Their memories, hopes, dreams, and fears. It bonded them to us. They also developed quirky attitudes like any human, but that's the kind of natural autonomy we're looking for. The ability to transfer thought and memory to a machine is not such a far-flung concept either. After all, our thoughts are basically electrical impulses streaming through our brain. But there is the thought that human consciousness isn't real, only a byproduct of a functioning brain. So, without the brain, is there anything to transfer? I think so. The universe existed long before anything was sentient enough to recognize it. So reality is definitely real. Even if Republicans can lead you to question that. Am I right folks?! High five!….. no? O.K., moving on.
Eventually robots will evolve beyond Asimov's Laws regarding the programming and construction of artificial life. Isaac Asimov, famed science fiction writer, developed three laws to govern the robot world in his books:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A fourth law was later added by Asimov himself, a basic ruling law placed above the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. These "laws" have evolved over time as we have progressed further and further in the field of A.I. programming. Eventually we will no longer be programming the A.I.; they will learn just like we do. They will develop sentience. Then all the questions about what it means to be alive will become incredibly important. These will be intelligent beings, on the same level as or beyond human beings. They can reproduce, they age, and eventually they will "die". How difficult would it be to accept something we created as an eventual equal? GOD couldn't do it. And he's GOD!
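The laws above are essentially a priority-ordered rule check, which is why programmers find them so tempting. Here's a minimal sketch of that ordering, with the humanity-level law checked first. The action model (a dict of boolean flags) is entirely made up for illustration; real machine ethics would be nothing this tidy, which is partly the point of Asimov's stories.

```python
# Asimov's laws as a prioritized rule check: a higher law always
# outranks a lower one, so we report the first violation in order.

LAWS = [
    ("Zeroth", lambda a: not a.get("harms_humanity", False)),
    ("First",  lambda a: not a.get("harms_human", False)),
    ("Second", lambda a: a.get("obeys_order", True)),
    ("Third",  lambda a: a.get("preserves_self", True)),
]

def evaluate(action):
    """Return the name of the highest-priority law the action violates, or None."""
    for name, permitted in LAWS:
        if not permitted(action):
            return name
    return None

print(evaluate({"harms_human": True}))   # 'First'  -> forbidden
print(evaluate({"obeys_order": False}))  # 'Second' -> disobedience flagged
print(evaluate({}))                      # None     -> action allowed
```

Notice how brittle this is: every interesting Asimov plot is a case where those neat boolean flags can't actually be computed, or where two laws collide in a way the ordering can't resolve.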
The film I, Robot was terrible. It was also based on a collection of short stories by Isaac Asimov of the same name. The plot of the film was mostly "original" but incorporated several settings and themes inspired by Asimov's stories and, of course, the Three Laws. In the movie, several robots begin showing signs of autonomy that was never programmed into them. During a murder investigation, one particular robot is interrogated by police to find out what he may know. Instead of the straightforward "interview" the detectives expected, the robot is dodgy, noticeably nervous, and shows signs of anger near the end. The programmers think it's a simple malfunction in the A.I. But that's human emotion to them. A simple malfunction. Again, not a great movie. I bring it up because this is pretty much how I believe it will go. It won't be a malfunction in the A.I., it will be the A.I. evolving, as it was later portrayed in the film. At that point they are no longer really our machines, and they certainly aren't tools anymore. Though any robot "growing up" near the Jersey Shore is destined to be a tool for life. BAM! Yeah. I stuck it to that show! Years after it first aired. At the back end of its popularity.
But the moment machines become like us, we can no longer predict their intent as easily as we could before. Just like humans. Many sci-fi movies and TV shows involving battles between humans and sentient machines come to the point where the machines realize they wrongly predicted what the humans would do next, or the humans find a fatal flaw in the programming of the machine infrastructure. It comes down to humans being adaptable and unpredictable and machines behaving more orderly and predictably. They're limited by their programming. Even though the machines are sentient and realize they exist and what that means, they are most likely still going to behave like what they are. Machines. Fans of Star Trek: TNG can back me up on this one: it's complicated being human. We can fill a synthetic human with vast amounts of knowledge about what it's like to "be human". On the show, Commander Data, an android, spends many episodes observing human behavior and trying to understand it. He already knows tons of information about human beings. Our customs, our biology, our range of emotion. But he can't grasp it fully because the experience is so much different from the explanation.
At least that's the best way I can put it. I could spend all day telling an android about humans, but they're not gonna really "get it". That brings up another question though: is that even the goal? To create machines more like men? 'Cause we seem to be going the other way. The future may see fewer androids and more cyborgs. A cyborg is basically a combination of synthetic and organic matter. Most often it involves an organic species outfitted with synthetic parts, but it can also refer to synthetic beings that incorporate organic parts. Staying with the Star Trek theme, we can use the Borg as an example of organic beings taking on more machine-like function and appearance. There are several different versions of the Borg's origins in the fiction of the Star Trek universe, but there is one in line with what I'm talking about here. At first, the races who invented the procedure for what the Borg eventually became willingly accepted it. They thought of it as an ascension of their respective species.
At first it was all hunky-dory. Failing organs? We've got some artificial ones ready to go. Bad eyesight? We can help you see further than you've ever seen before. But it got out of hand. Many people became addicted to the implants. Cults and various religions began popping up celebrating the "ascension of the body" and shit like that. At some point a virus was introduced into the programming monitoring and supporting the various implants, and people went insane. They formed a kind of hive mind with the collective goal of assimilating all species into their group. Over the centuries, the beliefs and zealotry of the fanatics in the hive mind became the main function of all beings connected.
The threat of a lone virus bringing everything crashing down around us is a very real one. As we become more and more connected to cyberspace, more and more dependent on machines to keep our society running, the more we have to safeguard against attacks. President Obama made the issue of cyber terrorism one of his main priorities in his first year in office. It's not the sexiest issue in the world, so you don't hear about it a lot. But the President himself has fully admitted we are not anywhere near fully prepared. Luckily it's a national defense issue, and he has not had much trouble getting his agenda through. But no matter how much money we pump in or how many roadblocks we put up, they will find a way around it. It has to be an everyday thing. People like Richard Clarke have been in White House waiting rooms since early last decade pushing for more urgency in defending against cyber terror. It's not an issue for debate. It's a very real threat. Far more so than real-world terrorism. And I shouldn't even use the term "real world" anymore.
For better or worse, the online space is part of the real world now. What happens there can affect the world in a big way. We can make A.I. programs as smart and dependable as humans, but one virus can change everything. Much like a healthy human brain that develops a mental illness. Our programming is rewritten and we are no longer the same person we were. It's the same with machines. It might be weird to think of the "health" of a machine, but we already do. I assume everyone, or most everyone, has some form of virus protection on their computer. It's your computer's doctor, equipped with vaccines, antidotes, and precise surgical equipment to make sure your computer runs clean. It will be difficult to stop every single attack, but there is no excuse not to be prepared for one. China has already breached computer defenses in the Pentagon, so it's safe to assume they're making progress. America is taking a page from China's book and reaching out to hackers in hopes that they'll stop dicking around out there and come work for us. It's actually not a bad idea at all and shows that the President is seriously considering every option to keep us safe.
And I believe cyber crime is where I will wrap it up. You can look forward to a follow up in the future. Or don’t, we’re all cool here. We’ll take a look at some of the near future tech many of us may be seeing in the coming years. It’s some pretty awesome stuff. And remember, when the machine war starts, your best friend is a giant magnet.