
Amazon Brings Machine Learning Smarts To Edge Computing Through AWS Greengrass


AWS Greengrass, the edge computing platform from AWS, got a facelift in the form of machine learning inference support. The latest version (v1.5.0) can run Apache MXNet and TensorFlow Lite models locally on edge devices based on NVIDIA Jetson TX2 and Intel Atom architectures.


Machine learning inference is a top use case for edge computing. Because edge computing gateways are expected to operate with only intermittent connectivity to the cloud, they can serve machine learning models locally and keep working offline. Combined with industrial IoT, ML inference makes deployments valuable through predictive maintenance and analytics.
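As a minimal sketch of the kind of predictive-maintenance check a gateway might run locally, the snippet below flags a sensor reading that deviates sharply from its recent baseline. The sensor values, the 3-sigma threshold, and the function name are illustrative assumptions, not part of any AWS API:

```python
from statistics import mean, stdev

def is_anomalous(readings, latest, sigma=3.0):
    """Flag a reading more than `sigma` standard deviations
    from the baseline of recent readings."""
    baseline, spread = mean(readings), stdev(readings)
    return abs(latest - baseline) > sigma * spread

# Recent vibration readings from a motor (illustrative values).
history = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 0.51, 0.49]
print(is_anomalous(history, 0.50))  # False: normal reading
print(is_anomalous(history, 0.95))  # True: likely fault
```

Because the check runs entirely on the device, it keeps working through connectivity outages and only needs the cloud to retrain or update thresholds.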

Amazon has been investing in all three key areas - IoT, edge computing, and machine learning. AWS IoT is a mature connected-devices platform that delivers scalable M2M, bulk device onboarding, digital twins, and analytics, along with tight integration with AWS Lambda for dynamic rules. AWS Greengrass extends AWS IoT to the edge by delivering local M2M, a rules engine, and routing capabilities. The most recent addition, Amazon SageMaker, brought a scalable machine learning service to AWS. Customers can use it to build and train models based on popular algorithms.

Amazon has done a great job of integrating AWS IoT, AWS Greengrass and Amazon SageMaker to deliver end-to-end machine learning support at the edge.

Customers upload training data to Amazon S3 and point Amazon SageMaker at it. They can choose one of SageMaker's built-in algorithms to train a model, whose artifact is written back to another Amazon S3 bucket as a compressed archive. Greengrass copies this archive to the device, where an AWS Lambda Python function loads the model and runs inference at runtime. It is also possible to point Greengrass directly at a pre-trained SageMaker model.
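The flow above can be sketched end to end in plain Python. Everything here is a stand-in: the zip archive mimics the SageMaker artifact, the extraction step mimics Greengrass copying the model to the device, and the toy linear model replaces a real MXNet or TensorFlow network; names like `model.zip` and `handler` are illustrative, not AWS-defined:

```python
import json, tempfile, zipfile
from pathlib import Path

# --- Cloud side: stands in for SageMaker writing an artifact to S3 ---
workdir = Path(tempfile.mkdtemp())
params = {"weight": 2.0, "bias": 0.5}          # toy "trained" model
artifact = workdir / "model.zip"
with zipfile.ZipFile(artifact, "w") as zf:
    zf.writestr("params.json", json.dumps(params))

# --- Device side: stands in for Greengrass unpacking the artifact ---
model_dir = workdir / "device"
with zipfile.ZipFile(artifact) as zf:
    zf.extractall(model_dir)

# --- Lambda-style handler that loads the local model and serves inference ---
model = json.loads((model_dir / "params.json").read_text())

def handler(event, context=None):
    """Run the locally deployed model on an incoming event."""
    return model["weight"] * event["x"] + model["bias"]

print(handler({"x": 3.0}))  # 6.5
```

The point of the design is that the handler never touches the network: once the artifact lands on the device, inference keeps working with the cloud unreachable.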


Developers can use a Raspberry Pi for local development and testing. For production scenarios, either NVIDIA Jetson TX2 or Intel Atom is the recommended processor. Amazon is also providing pre-built machine learning libraries based on Apache MXNet and TensorFlow models that can be deployed on Greengrass.

As a proof of concept, Amazon has built a webcam called AWS DeepLens powered by AWS IoT, AWS Greengrass, and AWS Lambda. Developers can train convolutional neural networks in the cloud and deploy the trained models on DeepLens to perform object detection in offline mode. With the latest support for ML in Greengrass, customers can now build their own DeepLens-like devices that run inference at the edge.

AWS Greengrass with ML inference is a perfect example of machine intelligence running on modern infrastructure. The platform is the convergence of edge computing, serverless computing, IoT, and machine learning technologies.

The public cloud does the heavy lifting while the edge delivers the required intelligence. Expect to see consumer devices become smart and intelligent much along the lines of Amazon Echo and AWS DeepLens. The future of cloud lies at the edge.
