My computer has AI?

March 07, 2024 | Jay Slade, Vice-President, Analytics and Business Intelligence, RBC Dominion Securities



2024 is shaping up to be a year of marketing AI being built into new devices … but what does it mean?


You may have noticed advertising for newer electronic devices – such as desktops, laptops, certain tablets, phones and televisions – that have built-in Artificial Intelligence (AI). So what is going on, and is it something to be worried about? Or excited about?

Isn’t AI just math?

Yes, at the end of the day, AI is just math, as I explained in my previous blogs. In practical terms, behind-the-scenes processes and analytical routines execute that math to accomplish real-world outcomes and use cases. For a long time now those processes have been easily handled by modern, general-purpose computing power – no special AI hardware required. Central Processing Unit (CPU) chips in computers have become ever faster and more energy-efficient.
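To make "just math" concrete, here is a minimal sketch in plain Python (no AI libraries, and every number is made up purely for illustration) of the arithmetic behind a single artificial "neuron": multiply, add, and apply a simple threshold.

```python
# A single artificial "neuron": a weighted sum of inputs plus a threshold.
# The inputs and weights below are made-up numbers, for illustration only.

inputs = [0.5, 0.8, 0.2]       # e.g. three measurements fed into the model
weights = [0.9, -0.3, 0.4]     # values the model "learned" from past data
bias = 0.1

# Multiply each input by its weight, add them all up, then add the bias.
total = sum(i * w for i, w in zip(inputs, weights)) + bias

# A simple activation rule: output 1 if the total crosses zero, otherwise 0.
output = 1 if total > 0 else 0

print(round(total, 2), output)   # roughly 0.39 and 1
```

That is essentially the whole trick, repeated millions of times over: no magic, just arithmetic that a chip can be designed to do very quickly.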

However, if a chip manufacturer knew ahead of time that certain calculations and routines would be used more frequently, then they could start to design chips that cater to that job specifically. We have already seen this play out in the world of graphics processing units, or GPUs. A couple of decades ago, manufacturers (such as Nvidia and AMD) started to realize that graphics-intensive calculations were being used so extensively in everyday software applications that it made sense to create a dedicated, separate processor unit just for that task. By adding a GPU alongside the CPU inside the overall computing package, certain applications (such as video production, gaming, photo editing, etc.) could run faster, run more smoothly and do more. Those GPUs have since evolved to deliver even more advanced graphical detail, higher resolutions, faster frame rates, etc. Many content creators, artists and heavy gamers will buy devices (which typically do cost more) that cater to that type of computer usage.

But wait! You say you are not a gamer or creative content user, but your computer can still play videos and surf the net just fine without a separate GPU. You are not alone. Remember that the original or basic models of CPUs can still perform a multitude of calculations, and for most average software applications they are more than enough to get the job done. In fact, most modern chip makers (such as Intel) actually build integrated components and resources into their base-model CPUs designed to perform graphics-oriented calculations with pretty good performance. Sometimes you will see this labelled as integrated graphics, and these integrated graphics components residing inside normal CPUs are the norm for most everyday computing devices such as desktops, laptops, tablets, phones, etc.

So how does AI enter into this?

Manufacturers and chip makers have seen more and more use cases emerging for AI. A new type of component has been created called an NPU, or neural processing unit. Like the earlier evolution of GPUs, these NPUs cater to processes related to AI, specifically learning from data observations and generating results based on that modelling. While traditional CPUs were capable of performing these procedures (such as neural net modelling), the newer NPUs are integrated components that cater specifically to that job. The industry is betting on the idea that more and more of those procedures will be used in everyday computing life. Just as GPUs helped the graphics-related performance of various devices, NPUs will help the AI aspects of those devices.
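For a rough sense of the kind of work being handed off to these new chips, here is another plain-Python sketch (again, all numbers are invented for illustration). It runs one "layer" of a tiny neural net, which is essentially many copies of the multiply-and-add pattern from the earlier example at once. A CPU can do this just fine; an NPU is simply hardware specialised to churn through enormous volumes of exactly this pattern quickly and efficiently.

```python
# One "layer" of a tiny neural net: each output is a weighted sum of all inputs.
# This repetitive multiply-and-add pattern is what NPUs are designed to speed up.
# All numbers are made up for illustration.

inputs = [0.5, 0.8, 0.2]

# One row of weights per output value (two outputs from three inputs here).
weights = [
    [0.9, -0.3, 0.4],
    [0.1,  0.7, -0.5],
]
biases = [0.1, 0.0]

outputs = []
for row, bias in zip(weights, biases):
    total = sum(i * w for i, w in zip(inputs, row)) + bias
    outputs.append(max(0.0, total))   # "ReLU" rule: negative totals become 0

print([round(o, 2) for o in outputs])   # roughly [0.39, 0.51]
```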

But what does that really mean in the real-world context? After all, most everyday computer users are not data scientists looking to train models or do “AI”, so how does this impact them? Certainly your average television watcher looking at a 65-inch screen enjoying a movie or sporting event isn’t thinking about AI. So what gives? Well, there are two main categories where these NPUs will help in everyday life.

The first is around content creation, and you have likely already seen it in action. More and more software applications are using AI-type processes to help render new imagery and materials. Think, for example, of a movie creator, music creator, artist and so on. The software applications they typically use can definitely take advantage of AI capabilities to help create new material or edit existing material. You have already seen examples of the most obvious use cases: a phone that captures video and cleans up the sound or picture quality to eliminate background problems or unwanted noise; videos where artificial human-looking avatars essentially talk and act as if they are alive; programs that edit photos where the subjects weren’t quite looking at the camera but get corrected as if they were. Generally speaking, these are use cases where software is employing AI-type processes to create or edit content in a manner that is desirable to an end user.

But a second and less obvious use case is where the device itself is trying to learn about you, the user, to make the experience an increasingly better one. To understand that concept, imagine two very selective and discriminating people were to buy the same modern NPU-equipped laptop at the same time: the exact same model, exact same specifications. Imagine that these two users had very specific and different preferences about colour, brightness, intensity, speaker volume, fan noise, battery life, contrast ratio, background lighting and a multitude of other settings. As they each used their laptops for different applications, they would adjust the settings on their devices to satisfy their specific desires. While they each might do similar things, their preferences while doing them could be very different. So a device with an NPU could start to learn those preferences for each user individually, and then start to automatically make those adjustments on that specific unit based on the ongoing learning of how that user prefers to use the device. The same model could end up behaving differently based on what it has learned about its user. The win, of course, is that in the long run the user would not have to constantly adjust the settings and tweak for preferences; rather, the device would do that adjusting in real time. What could that mean? Well, better efficiency and battery life as one example, longevity of the device as another, and higher satisfaction with the product overall. You get the idea.
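To put that two-laptops story into (hypothetical) code, here is a toy sketch of a device "learning" a single preference – the screen brightness a particular user tends to pick in different lighting conditions – and then applying it automatically. The scenario, values and room labels are all invented for illustration; real devices use far more sophisticated models, but the principle of learning from observed behaviour is the same.

```python
# Toy sketch: "learn" one user preference (screen brightness) from their own
# manual adjustments. All values and labels are hypothetical.
from collections import defaultdict

# Each time the user manually sets the brightness (0-100), record the room
# condition and the level they chose.
observations = [
    ("dark room",   35),
    ("dark room",   30),
    ("bright room", 85),
    ("dark room",   32),
    ("bright room", 90),
]

# "Learning" here is simply averaging what this particular user tends to pick.
levels_by_condition = defaultdict(list)
for condition, level in observations:
    levels_by_condition[condition].append(level)

learned_preference = {c: sum(v) / len(v) for c, v in levels_by_condition.items()}

# Later, the device applies the learned setting automatically.
print(round(learned_preference["dark room"]))     # about 32 for this user
print(round(learned_preference["bright room"]))   # about 88 for this user
```

A second user with the exact same laptop would generate different observations and therefore end up with different learned settings, which is exactly why two identical units can come to behave differently over time.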

So there you have it. If this explanation has felt a little long-winded, then you know why device manufacturers are using marketing taglines like “AI inside” or “Powered by AI” – it sounds way cooler than getting into the mechanics of NPUs and neural nets and so on. But the bottom line is that the machine is not alive, AI is not fundamentally different today than it was last year, and there is nothing sinister going on here. As predicted, more and more use cases for AI processes are emerging, and both hardware and software companies are adapting to that reality. I would suggest that in this case it’s in a predominantly good way for the consumer, and in the future more and more devices in your everyday life will try to adapt to you and your lifestyle rather than the other way around.
