The emergence of AI at the Edge
When I hear the words Artificial Intelligence, and more recently machine learning, my mind immediately leaps to images from movies like the terrifying synthetic in Alien and the childlike character in Spielberg’s A.I. For those of you who are less of a film fan, it might be Spot the robot dog from Boston Dynamics that we’ve seen doing backflips. So it was fascinating to hear Intel speak at the recent TOUGHBOOK Innovation Forum about how AI and machine learning at the edge of the network – on our mobile computing devices – is about to be the next major technology inflection point.
In recent years, the much-publicised technology trends have been towards cloud computing, where the data, number-crunching power and “intelligence” of our technology are stored. But in parallel, Intel has been working on the next generation of improvements to our mobile computing devices, called AI at the Edge.
The thinking is that as the next generation of smart applications is born, they will need much greater processing power and AI, in the form of machine learning on the computing device itself, to bring them to life.
The science bit
Two technology breakthroughs are enabling this to happen:
Firstly, the ability to use the Graphics Processing Units (GPUs) in our devices, as well as the Central Processing Units (CPUs), transforming the number of calculations our devices can process in any given time. For those who like the science, the main difference between CPU and GPU architecture is that a CPU is designed to handle a wide range of tasks quickly (as measured by CPU clock speed) but is limited in how many tasks it can run concurrently. A GPU is designed to quickly render high-resolution images and video concurrently.
Because GPUs can perform parallel operations on multiple sets of data, they are also ideally suited for non-graphical tasks such as machine learning and scientific computation. Designed with thousands of processor cores running simultaneously, GPUs enable massive parallelism where each core is focused on making efficient calculations.
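To make the contrast concrete, here is a minimal Python sketch, using NumPy vectorisation as a stand-in for GPU-style data parallelism (real GPU work would go through a library such as CUDA or OpenCL). The serial version handles one value at a time, the way a general-purpose CPU loop does; the data-parallel version applies the identical operation to every element at once, which is the style of work that maps naturally onto thousands of GPU cores.

```python
import numpy as np

def scale_serial(values):
    # CPU-style: one calculation at a time in a general-purpose loop.
    return [v * 2.0 + 1.0 for v in values]

def scale_parallel(values):
    # GPU-style: the same operation applied to all elements at once.
    return values * 2.0 + 1.0

data = np.arange(1_000_000, dtype=np.float64)

# Both produce identical answers; only the execution model differs,
# and the data-parallel form is the one that scales across many cores.
```

This is an illustration of the programming model only; NumPy here runs on the CPU, but the element-wise formulation is exactly what GPU frameworks accelerate.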
The second major breakthrough is the development of AI and machine learning itself: the ability of software to automatically learn and improve from experience without being explicitly programmed. The two advances combined enable us to take a massive step forward into the next generation of applications.
Already in use
Intel is using this technology to bring AI to all the devices running its 10th-generation and future platforms. For example, Intel’s Threat Detection Technology suite of solutions includes Accelerated Memory Scanning, a machine learning feature that helps detect malware. Traditionally this security task is a very heavy workload on a CPU, as it looks for an ever-growing variety of malware attacks. Using Accelerated Memory Scanning, this workload is offloaded to the GPU, and machine learning capabilities are deployed to make the device ever smarter and more efficient at spotting attacks. A traditional security system would constantly update its list of threats and share it around the network, creating an ever larger workload. But imagine if the device could quickly learn what was normal and what were unusual patterns, and then focus only on the unusual. It would drastically reduce the processing workload, freeing capacity on the device, and be much more efficient in its task.
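The “learn what is normal, flag only the unusual” idea is the core of anomaly detection. As a rough sketch of the principle (not Intel’s actual implementation, and with illustrative numbers), a device could learn a statistical baseline from past observations and then flag only readings that fall far outside it, rather than checking every event against a growing threat list:

```python
import statistics

def fit_baseline(samples):
    # Learn what "normal" looks like from past observations.
    return statistics.mean(samples), statistics.stdev(samples)

def is_unusual(value, baseline, threshold=3.0):
    # Flag only readings far outside the learned normal range.
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical scan-time measurements from routine operation (ms).
normal_scans = [12.0, 11.5, 12.3, 11.9, 12.1, 12.0, 11.8]
baseline = fit_baseline(normal_scans)
```

Everything inside the normal band is ignored, so the heavy analysis is spent only on the rare outliers, which is what reduces the workload.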
AI on the edge is also being used to make the latest computing devices smarter in the way they operate for their user. Using machine learning, the device can quickly understand the working habits of its user – for example, recognising when in the working day it is most likely to need applications requiring lots of processing power. The device can then adapt the way its resources are deployed to match the user’s typical workload. The user themselves might spot just a few clues to this technology being deployed: a much longer battery life, and a device that uses its fan less, runs cooler and remains more reliable throughout its lifetime.
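One very simple way to picture this habit-learning is shown below. All the names and figures are hypothetical, not a real power-management API: the device counts which hours of the day have historically been heavy, then chooses a power profile accordingly.

```python
from collections import Counter

def learn_heavy_hours(usage_log, top_n=2):
    # usage_log: (hour_of_day, cpu_load) observations from past sessions.
    heavy = Counter(hour for hour, load in usage_log if load > 0.8)
    return {hour for hour, _ in heavy.most_common(top_n)}

def power_profile(hour, heavy_hours):
    # Boost resources when heavy work is expected; save battery otherwise.
    return "performance" if hour in heavy_hours else "battery_saver"

# Illustrative history: heavy work at 9am and 2pm, light use in the evening.
log = [(9, 0.9), (9, 0.85), (14, 0.95), (14, 0.9), (20, 0.2), (20, 0.3)]
```

Real scheduling models are far richer, but the shape is the same: observe, learn the pattern, then adapt resource use ahead of demand.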
Intel has also already used AI at the edge within its own organisation for instant IT efficiency gains. A small team built a simple telemetry tool to harvest data from its employees’ computing devices to understand more about the performance of those devices. Machine learning was used to spot potential issues, which were then automatically corrected via a self-healing application deployed across the devices. Intel calculated that the automated, machine learning solution solved 200,000 computing issues over the course of a year, drastically reducing calls to the helpdesk and allowing employees to continue working without technology interruptions.
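A self-healing loop of this kind can be sketched as rule-matching over telemetry. The rule names, metrics and fixes below are entirely illustrative (Intel has not published its tool’s internals): each rule pairs a condition on the device’s telemetry with a remediation to apply when it fires.

```python
def self_heal(metrics, rules):
    """Return the remediations whose conditions match this device's telemetry."""
    fixes = []
    for name, (condition, fix) in rules.items():
        if condition(metrics):
            fixes.append(fix)
    return fixes

# Hypothetical rules: condition on telemetry -> remediation to run.
rules = {
    "low_disk": (lambda m: m["free_disk_gb"] < 5, "clear_temp_files"),
    "stale_driver": (lambda m: m["driver_age_days"] > 180, "update_driver"),
}
```

In a deployed system the remediations would be real actions rather than strings, and the conditions could themselves be learned models instead of hand-written thresholds.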
Outside of the device itself, AI at the edge is already being used in applications such as virtual meetings, to automatically reduce visual distractions with background blurring, remove background noise, or boost screen clarity to super-resolution when small-print text documents are shared. In research and development, these advances will also be incredibly useful as our smartest brains crunch data ever faster to deliver health breakthroughs, and as the creative industry produces ever more realistic holographic and film entertainment.
But the reality is that the most significant applications and benefits of this type of AI at the edge are probably still to be thought of and promise to ultimately transform the working methods of all mobile computing users. The reassuring thing to know is that the latest mobile computing devices, such as those from TOUGHBOOK, will have the technology inside to take advantage of those developments as they arise.