You’ve probably heard the term Artificial Intelligence (or “AI”) lately. There are three key reasons why it’s been in the news, as we discussed in our previous blog. But, at its root, AI is simply math: a series of algorithms and procedures run by computers to solve problems. So why all the doom and gloom? Quite simply, it is not the fear of robots coming alive that has experts and pundits concerned. The real concern is that when anyone can build AI, and human values determine what it is pointed at, there is real potential for misuse.
The dramatic misuses naturally rise to the top of the speculation chart. What if someone trained a military drone to do something harmful, such as attacking civilians?
But consider the following, less dramatic example. Imagine that a police department ties all the traffic cameras in a city together to create one giant data feed. It then uses AI to ingest the video and pick out key objects such as cars, people and bus stops. Once trained, that model could spot these objects in real time, and it would be far better at the task than a human could ever be when presented with multiple feeds at once and hundreds of people and cars moving in and around each other. And it would not just spot them, but generate data from them: speed, trajectory and distance vectors, all plotted in near real time.
Now, with that new data, a well-intentioned human could train the AI to do something socially positive. Like spot a person who has just fallen down. Or one who is walking erratically. Taken to the extreme, it could spot a person about to step out into traffic. Because this processing happens in real time, it could then alert the closest police officers to respond, or perhaps one day even signal the specific car to apply its brakes before the driver realizes what is about to happen.
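To make the idea concrete, here is a toy sketch of the kind of rule that could sit on top of such a system. Everything in it is hypothetical: the `TrackedPerson` record stands in for whatever an upstream detection model would actually produce, and the two-second lookahead is an invented threshold, not a real system's parameter.

```python
from dataclasses import dataclass

# Hypothetical output of an upstream detection model: one tracked pedestrian,
# with distance to the roadway in metres and walking speed in metres/second.
@dataclass
class TrackedPerson:
    id: int
    distance_to_road_m: float
    speed_mps: float
    heading_toward_road: bool

def needs_alert(p: TrackedPerson, seconds_ahead: float = 2.0) -> bool:
    """Flag a pedestrian projected to reach the roadway within a few seconds."""
    if not p.heading_toward_road:
        return False
    projected_travel_m = p.speed_mps * seconds_ahead
    return projected_travel_m >= p.distance_to_road_m

# Two tracked people: one far from the road, one about to step into it.
people = [
    TrackedPerson(1, distance_to_road_m=10.0, speed_mps=1.2, heading_toward_road=True),
    TrackedPerson(2, distance_to_road_m=1.5, speed_mps=1.4, heading_toward_road=True),
]
alerts = [p.id for p in people if needs_alert(p)]
print(alerts)  # only person 2 is close enough to trigger an alert
```

The point of the sketch is that the "values" live entirely in the rule a human wrote: the same tracked data could just as easily feed a very different rule, which is exactly the concern raised below.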
Sounds great, and fairly benign. But what if the exact same setup were used not by a police service but by a criminal syndicate? Instead of training the model to look for distressed people, it trains the model to look for ideal pickpocket targets. Or unlocked parked cars. Once one is spotted, it sends a quick alert to the nearest car thief’s smartphone with the exact location and a picture. You get the idea.
And therein lies the real issue. AI, at some level, requires human values to tell it what to look for or what problem to solve. A computer has no emotions, and no concept of social good or bad beyond what it is taught. Falling, running, crashing, walking and fighting are all the same to it unless we specifically tell it that we humans care about one over the others.
There are other concerns about the spread of AI as well. A computer has no ethical qualms the way we might. So being asked to write an essay for a student who is trying to cheat on their homework is, to the computer, morally no different from someone using a thesaurus to look up similar words. In fact, to the machine, those two actions are very much alike.
Which brings up another challenge: what data does that AI application have access to, and what is it learning from? In the essay-writing example, the likely scenario today is that the AI “bot” draws on commonly available, easily searchable internet data to construct sentences and phrases it believes are on topic. It cannot, of course, access the massive amount of offline material sitting in rows of academic journals in a university library. But perhaps those materials are incredibly important for context or research, and so the essay that gets written is superficial at best and totally wrong at worst. And of course, a whole batch of essays would start to look similar, because the bot draws from the same core data every time, with perhaps slight variations as new material is added.
Reactions to the spread of AI will be as mixed as human emotions are complex. Fear. Optimism. Bewilderment. The urge to control. To create rules. To experiment. All normal reactions any time a new technology arrives.
Consider, for example, something as fundamental as electricity. As it rolled out across society, entire industries and parts of the economy were affected, some severely, although not overnight. Eventually there was widespread adoption, and more and more use cases were found for the new energy source. Now it’s hard for most of us to imagine a world without it. Given that widespread adoption, the question becomes: “do we control electricity, or does it control us?” I think the answer is that we control it, even though we can’t practically live without it. In due course, the same will be true of AI.
So the short answer is: no, robots are not coming to life or taking over the world as you might have seen in popular science fiction. Yes, there are legitimate concerns about how this technology can be used. And yes, as with any new technology, workplaces and society will absorb it and adapt accordingly, and of course that means change.
In Quebec, financial planning services are provided by RBC Wealth Management Financial Services Inc. which is licensed as a financial services firm in that province. In the rest of Canada, financial planning services are available through RBC Dominion Securities Inc.