SoftBank-owned chip designer ARM is the latest industry player to enter the machine learning field. The British company has built a new set of processors from the ground up to power an artificially intelligent world. ARM calls the machine learning platform Project Trillium, and as part of it has announced a new Machine Learning (ML) processor along with a second-generation Object Detection (OD) processor.
ARM doesn’t manufacture any chips itself, but its designs are at the core of virtually every CPU in modern smartphones, cameras, and IoT devices. The company’s partners, including Apple, Samsung, Qualcomm, and Nvidia, have shipped more than 125 billion ARM-based chips.
The Machine Learning (ML) processor is a new design, not based on previous ARM components, and has been built from the ground up for high performance and efficiency. It offers a large performance increase over CPUs, GPUs, and DSPs for recognition (inference) using pre-trained neural networks. ARM is a strong supporter of open source software, and Project Trillium is enabled by it.
“These are new, ground-up designs, not based on existing CPU or GPU architectures,” said Jem Davies, ARM’s vice president of machine learning.
ARM is launching both an ML processor for general AI workloads and a next-generation Object Detection (OD) chip that specializes in detecting faces, people, and gestures in video at up to full-HD resolution and 60 frames per second. This is ARM’s second-generation object detection chip; the first generation shipped in Hive’s smart security camera.
The bigger picture is the wide array of AI applications ARM envisions its new processors will enable. The ML chip targets the mobile market first, but ARM also points to self-driving cars and Internet of Things (IoT) devices at the edge.
ARM sees embedded possibilities in security cameras and smart cities, where “a completely new class of smart cameras” will support everything from facial identification and gesture recognition to ML-driven predictive analytics and mood analysis.
The first generation of ARM’s ML processor will target mobile devices, and ARM is confident that it will provide the highest performance per square millimeter on the market. Typical estimated performance is in excess of 4.6 TOPS, that is, 4.6 trillion operations per second.
Combining the OD processor with the ML processor yields a system that can first detect an object and then use ML to recognize it. This means the ML processor only needs to work on the portion of the image that contains the object of interest.
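The OD-then-ML hand-off described above can be sketched in a few lines. This is a minimal illustration, not ARM's API: `detect_objects` and `classify` are hypothetical stand-ins for the OD and ML processors, and the bounding box is invented for the example.

```python
def detect_objects(frame):
    """Stand-in for the OD processor: return bounding boxes (x, y, w, h).
    Here we pretend it found one face-sized region in a full-HD frame."""
    return [(640, 200, 256, 256)]

def crop(frame, box):
    """Cut out just the region of interest from the full frame."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

def classify(region):
    """Stand-in for the ML processor: run inference on the cropped region."""
    return "face"

def recognize(frame):
    # Only detected regions reach the more expensive ML step, so
    # inference cost scales with the number of objects, not with
    # the number of pixels in the full frame.
    return [classify(crop(frame, box)) for box in detect_objects(frame)]

frame = [[0] * 1920 for _ in range(1080)]  # dummy 1920x1080 frame
print(recognize(frame))
```

The point of the structure is the last function: the full frame is touched only by the cheap detection stage, while recognition runs on crops a fraction of its size.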
The argument for performing recognition on the device, rather than in the cloud, is compelling. First, it saves bandwidth: as these technologies become universal, there would otherwise be a sharp spike in data sent back and forth to the cloud for recognition. Second, it saves power, both on the phone and in the server room, since the phone no longer uses its Wi-Fi or LTE radio to send and receive data and no server is needed to run the detection.
There is also latency: if inference is done locally, the results arrive sooner. And there are myriad security advantages to not sending personal data up to the cloud.
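The bandwidth saving is easy to see with a back-of-envelope calculation. The bitrates below are illustrative assumptions (a typical 1080p60 stream versus a small per-frame result message), not figures from ARM:

```python
FPS = 60  # the OD processor handles full HD at 60 fps

# Cloud recognition: stream the video up. Assume roughly 8 Mbit/s
# of H.264 for 1080p60 -- a common ballpark, not an ARM figure.
cloud_bits_per_sec = 8_000_000

# On-device recognition: send only the results, say a 100-byte
# label/bounding-box message per frame.
device_bits_per_sec = 100 * 8 * FPS  # 48,000 bits/s

print(f"cloud:  {cloud_bits_per_sec / 1e6:.1f} Mbit/s upstream")
print(f"device: {device_bits_per_sec / 1e3:.1f} kbit/s upstream")
print(f"ratio:  ~{cloud_bits_per_sec // device_bits_per_sec}x less traffic")
```

Under these assumptions, on-device inference cuts upstream traffic by two orders of magnitude, which is the spike in cloud traffic the paragraph above is describing.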
The third part of Project Trillium is the set of software libraries and drivers that ARM supplies to its partners to get the most from the two processors. These libraries and drivers are optimized for the leading neural network frameworks, including TensorFlow, Caffe, and the Android Neural Networks API.
We’ve recently seen a number of smartphone manufacturers build their own AI chips, including Google’s Pixel Visual Core for image processing, the iPhone X’s Neural Engine, and Huawei’s Kirin 970.
ARM is by no means the only company trying to ride the AI wave with optimized silicon. Qualcomm is working on its own AI platform, Intel unveiled a new line of AI-specialized chips last year, and Google is building its own machine learning chips for its servers.
The final design for the ML processor will be ready for ARM’s partners before the summer, and we should start to see SoCs with it built in sometime during 2019.