Rise Of The Algorithm
By Jerry Mooney
After overhearing a debate about the merits of Star Wars: The Force Awakens, I can’t help but reflect back forty years to when I was pretending to fight a childhood friend in the backyard with a makeshift lightsaber. Although there is some danger when two kids commence whacking each other with broken broom handles, back then the dangers inspired by sci-fi amounted to potential splinters or a thwack on the noggin.
Today, as we examine how far our society has come relative to Star Wars technology, splinters and contusions are the least of our worries. We are actually witnessing an encroaching risk: the rise of the drones.
Robots have already begun penetrating our daily lives, and their prominence is expected to grow exponentially. Although we don’t have Jetsons-like robot maids yet, we have Roombas vacuuming our carpets, factories are using robotics in production, our cars are getting smarter and more autonomous, and even the healthcare industry is experimenting with robot bear nurses, as seen in Japan.
The intriguing and frightening part about these developments is not their mechanics, but their artificial intelligence, or AI. Programming robots to do certain tasks is helpful and amazing, but equipping their software with self-learning algorithms opens a potential Pandora’s box.
For example, many car manufacturers are using Mobileye technology in their cars to soon make them autonomous. But hacker superstar George Hotz, the first person to hack an iPhone, has built his own version of a self-driving car, pieced together from an ordinary car, cheap off-the-shelf electronics and an understanding of how to make machines self-aware.
Instead of coding the computer with a lengthy series of rigid rules or if-then statements designed to tell the car what to do and what not to do, Hotz lets the car experience driving through cameras and sensors, watching how a human does it and then duplicating it.
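In machine-learning terms, this style of training is often called behavioral cloning: log what the human driver sees and does, then fit a model that reproduces those actions. The sketch below is purely illustrative, with made-up sensor features and a deliberately simple linear model standing in for whatever Hotz actually built:

```python
# A minimal behavioral-cloning sketch (illustrative only, not Hotz's code).
# The "camera" is reduced to a few numeric features, and the model is a
# simple linear fit; a real system would use images and a neural network.
import numpy as np

rng = np.random.default_rng(0)

# Pretend we logged 1,000 moments of a human driving:
# features = what the sensors saw, steering = what the human did.
features = rng.normal(size=(1000, 3))          # e.g. lane offset, curvature, speed
true_policy = np.array([-0.8, 0.5, 0.1])       # the human's (unknown) habits
steering = features @ true_policy + rng.normal(scale=0.05, size=1000)

# "Learning to drive" = finding weights that reproduce the human's actions.
weights, *_ = np.linalg.lstsq(features, steering, rcond=None)

# The learned policy now maps new sensor readings to a steering command.
new_reading = np.array([0.2, -1.0, 0.5])
print("model steers:", new_reading @ weights)
print("human would have steered:", new_reading @ true_policy)
```

The point is that the driving rules are never written down anywhere; they fall out of the recorded data.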
Hotz indicates that a computer can better learn what a chair is by showing it pictures without a chair and then pictures with a chair. If you then tell the computer that the chair is the object missing from the first set of pictures, he says, it will program itself to learn the characteristics of a chair, ending up more accurate and comprehensive while using a tenth of the code.
This also saves enormous amounts of time and effort spent trying to anticipate every single contingency that must be coded as a rule. In the case of the chair, a rule that might make sense is, “A chair has four legs. If the object doesn’t have four legs, then it is not a chair.” The coder is then required to write a long series of exceptions to a chair having four legs.
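To make the contrast concrete, here is a toy sketch of the two styles; the features, thresholds and exceptions are all invented for illustration and have nothing to do with any real chair detector:

```python
# A toy contrast between rule-based and example-based "chair" detection.
# Objects are reduced to two made-up features: leg count and seat height.
import numpy as np

# Rule-based: every exception (office chairs, stools...) must be
# hand-coded, and the list never really ends.
def is_chair_by_rule(legs, seat_height):
    if legs == 4 and 0.3 < seat_height < 0.6:
        return True
    if legs == 1 and 0.4 < seat_height < 0.7:   # exception: office chairs
        return True
    if legs == 3 and 0.3 < seat_height < 0.6:   # exception: three-legged stools
        return True
    return False                                 # ...and so on, forever

# Example-based: show labeled examples and let the computer find the pattern.
examples = np.array([[4, 0.45], [1, 0.55], [3, 0.40],   # chairs
                     [4, 0.75], [0, 0.10], [6, 1.00]])  # table, rug, shelf
labels = np.array([1, 1, 1, 0, 0, 0])

# A nearest-centroid classifier: "chair" means "closer to the average chair".
chair_center = examples[labels == 1].mean(axis=0)
other_center = examples[labels == 0].mean(axis=0)

def is_chair_by_example(legs, seat_height):
    x = np.array([legs, seat_height])
    return np.linalg.norm(x - chair_center) < np.linalg.norm(x - other_center)

print(is_chair_by_rule(1, 0.5), is_chair_by_example(1, 0.5))
```

In the first function, the programmer carries the whole burden of anticipating exceptions; in the second, adding a new kind of chair just means adding another labeled example.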
In Hotz’s method, the computer simply learns what the core goal of the activity is and then improves at it until it has mastered the task. Therefore, the coder doesn’t have to write, “If you see the road crumble in front of you, take evasive action.” The computer simply understands how to drive, and avoiding damaging circumstances becomes digitally intuitive.
By framing the algorithm this way, modeling a person’s behavior instead of following instructions, the computer can use its amazing processing power to program itself, ostensibly creating a digital intuition.
But this raises concerns over how dangerous self-aware machines are. Usually an algorithm is designed to solve a specific problem, but the emulating AI used by Hotz learned to do things he didn’t anticipate by modeling his driving tendencies and then recreating them.
If a computer’s algorithm is created by modeling human behavior, what happens when the behavior being modeled is dangerous, threatening or destructive? How does an evolving modeling algorithm evaluate actions outside the terms of its original purpose? Or, how does an evolving algorithm integrate moral questions when efficiency and function are the only characteristics being evolved?
These are the questions tackled by Elon Musk’s latest venture, OpenAI. OpenAI is a nonprofit research organization created to keep track of the development of AI and keep it friendly to humans. Although the mission of OpenAI is noble and likely necessary, it is hard to fathom a scenario where militant minds or diabolical geniuses don’t get some access to the evil switch on the back of the robot.
Likely, the solution won’t be as simple as flipping the switch from evil to good. And, as with so many endeavors, the implications are difficult to forecast. Since the advancement of robots and AI is inevitable, companies like OpenAI are important. Hopefully one of the byproducts of their efforts will be inspiring more companies to follow suit. This can’t be guaranteed, however, and considering how a single young genius like George Hotz outsmarted the major players in this industry, we should all probably keep our fingers on the pulse.
Feature photo courtesy of Flickr, under Creative Commons Attribution-Noncommercial license
Jerry Mooney is co-founder and managing editor of Zenruption and the author of History Yoghurt and the Moon. He studied at the University of Munich and Lewis and Clark College, where he received his BA in International Affairs and West European Studies. He has recently taught Language and Communications at a small, private college and owned various businesses, including an investment company that made him a millionaire before the age of 40. Jerry is committed to zenrupting the forces that block social, political and economic justice. He can also be found on Twitter: @JerryMooney