Hardware-accelerated Function-as-a-Service (FaaS) enables cloud developers to deploy inference functionalities [1] on Intel® IoT edge devices with accelerators (integrated GPU, Intel® FPGA, and Intel® Movidius™). These functions offer a convenient developer experience and seamless migration of visual analytics from cloud to edge, running securely in a containerized environment. Hardware-accelerated FaaS delivers best-in-class performance by accessing optimized deep learning libraries on Intel® IoT edge devices with accelerators.

This tutorial shows how to:

  • Set up the Intel® edge device with Clear Linux* OS
  • Install the OpenVINO™ and AWS Greengrass* software stacks
  • Use AWS Greengrass and lambdas to deploy the FaaS samples from the cloud

Supported Platforms

  • Operating System: Clear Linux OS latest release
  • Hardware: Intel® Core™ platforms (this tutorial supports inference on the CPU only)

Description of Samples

The AWS Greengrass samples are located in the Edge-Analytics-FaaS repository.

We provide the following AWS Greengrass samples:

  • greengrass_classification_sample.py

    This AWS Greengrass sample classifies a video stream using classification networks such as AlexNet and GoogLeNet and publishes the top-10 results to AWS* IoT Cloud every second.

  • greengrass_object_detection_sample_ssd.py

    This AWS Greengrass sample detects objects in a video stream and classifies them using single-shot multi-box detection (SSD) networks such as SSD SqueezeNet, SSD MobileNet, and SSD300. This sample publishes detection outputs such as class label, class confidence, and bounding box coordinates to AWS IoT Cloud every second. Both samples share the same publish pattern, sketched below.
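
Both samples run as long-lived lambdas: they perform inference in a loop and publish a JSON payload to an MQTT topic through the Greengrass SDK. The following minimal sketch shows that publish pattern only; the payload fields and the publish_results helper are illustrative assumptions, not the shipped sample code.

import json
import greengrasssdk

# Create a Greengrass core SDK client for publishing to AWS IoT.
client = greengrasssdk.client('iot-data')

def publish_results(results, topic='openvino/ssd'):
    # 'results' is assumed to be a list of dicts such as:
    # {'label': 'person', 'confidence': 0.93, 'bbox': [xmin, ymin, xmax, ymax]}
    client.publish(topic=topic, payload=json.dumps(results))

def function_handler(event, context):
    # Long-lived Greengrass lambdas still declare a handler; the samples
    # do their inference work in a loop outside of it.
    return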

Converting Deep Learning Models

Sample Models

For classification, download the BVLC AlexNet model as an example. Any custom pre-trained classification model can be used with the classification sample.

For object detection, sample models optimized for Intel® edge platforms are available in /usr/share/openvino/models. These models are provided as examples; however, any custom pre-trained SSD model can be used with the object detection sample.

Running Model Optimizer

Follow these instructions to convert deep learning models to the Intermediate Representation (IR) using the Model Optimizer. For example, use the following commands.

For classification using the BVLC AlexNet model:

python3 mo.py --framework caffe --input_model <model_location>/bvlc_alexnet.caffemodel \
    --input_proto <model_location>/deploy.prototxt --data_type <data_type> \
    --output_dir <output_dir> --input_shape [1,3,227,227]

For object detection using the SqueezeNetSSD-5Class model:

python3 mo.py --framework caffe --input_model SqueezeNetSSD-5Class.caffemodel \
    --input_proto SqueezeNetSSD-5Class.prototxt --data_type <data_type> \
    --output_dir <output_dir>

In these examples:

  • <model_location> is /usr/share/openvino/models
  • <data_type> is FP32 or FP16, depending on the target device.
  • <output_dir> is the directory where you want to store the Intermediate Representation (IR). The IR consists of an .xml file describing the network structure and a .bin file containing the weights; pass the .xml file to <PARAM_MODEL_XML>.
  • In the BVLC AlexNet model, the prototxt defines the input shape with a batch size of 10 by default. To use any other batch size, provide the entire input shape as an argument to the Model Optimizer. For example, to use a batch size of 1, provide "--input_shape [1,3,227,227]".
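
As a quick sanity check, the generated IR pair can be loaded with the OpenVINO Inference Engine Python API. This is a minimal sketch with placeholder paths, not part of the samples:

from openvino.inference_engine import IECore

ie = IECore()
# Load the IR produced by the Model Optimizer (paths are placeholders).
net = ie.read_network(model='<output_dir>/bvlc_alexnet.xml',
                      weights='<output_dir>/bvlc_alexnet.bin')
# This tutorial supports inference on the CPU only.
exec_net = ie.load_network(network=net, device_name='CPU')
print('Loaded network:', net.name)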

Installing Clear Linux OS on the edge device

Start with a clean installation of Clear Linux OS on a new system, following Install Clear Linux* OS on bare metal (automatic) in Get started.

Create user accounts

After Clear Linux OS is installed, create two user accounts: an administrative user, and a user account for the AWS Greengrass services to use (see the Greengrass user below).

  1. Create a new user and set a password for that user. Enter the following commands as root:

    useradd <userid>
    passwd <userid>
    
  2. Next, enable the sudo command for your new <userid>. Add <userid> to the wheel group:

    usermod -G wheel -a <userid>
    
  3. Create the user and group account for the Greengrass daemon:

    useradd ggc_user
    groupadd ggc_group
    
  4. Create a /etc/fstab file.

    touch /etc/fstab
    

    Note

    By default, Clear Linux OS does not create an /etc/fstab file. The AWS Greengrass service requires this file to exist before it will run.

Add required bundles

Use the swupd software updater utility to add the following bundles to enable the OpenVINO software stack:

swupd bundle-add os-clr-on-clr desktop-autostart computer-vision-basic

Note

Learn more about how to Use swupd.

The computer-vision-basic bundle installs the OpenVINO software, along with the sample models needed on the edge device.

Configuring an AWS Greengrass group

For each Intel® edge platform, create a new AWS Greengrass group and install the AWS Greengrass core software to establish the connection between cloud and edge.

  1. To create an AWS Greengrass group, follow the AWS Greengrass developer guide.

  2. To install and configure the AWS Greengrass core on the edge platform, follow the instructions in Start AWS Greengrass on the Core Device.

    Note

    You will not need to run the cgroupfs-mount.sh script in step #6 of Module 1 of the AWS Greengrass developer guide because this is already enabled in Clear Linux OS.

Creating and Packaging Lambda Functions

  1. Complete the tutorial at Configure AWS Greengrass on AWS IoT.

    Note

    This creates the tarball needed to create the AWS Greengrass environment on the edge device.

  2. Be sure to download both the security resources and the AWS Greengrass core software.

    Note

    Security certificates are linked to your AWS* account.

  3. Replace greengrassHelloWorld.py with one of the AWS Greengrass samples:

    • greengrass_classification_sample.py
    • greengrass_object_detection_sample_ssd.py
  4. Zip the sample together with the Greengrass SDK folders extracted in the previous step into greengrass_sample_python_lambda.zip. (A Python alternative to the zip command is sketched after this list.)

    The zip should contain:

    • greengrasssdk
    • a Greengrass sample

    For the sample, choose one of these:

    • greengrass_classification_sample.py
    • greengrass_object_detection_sample_ssd.py

    For example:

    zip -r greengrass_lambda.zip greengrasssdk greengrass_object_detection_sample_ssd.py
    
  5. Follow steps 6-11 of the AWS documentation to finish creating the lambda.

    Note

    In step 9(a) of the AWS documentation, when uploading the zip file, make sure to name the handler as follows, depending on the AWS Greengrass sample you are using:

    • greengrass_object_detection_sample_ssd.function_handler (or)
    • greengrass_classification_sample.function_handler
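
If you prefer to script the packaging step, the following hypothetical Python equivalent of the zip command in step 4 produces the same archive layout:

import os
import zipfile

def package_lambda(sdk_dir, sample_file, archive_name):
    # Create the deployment zip with the sample at the archive root and
    # the greengrasssdk/ folder structure preserved.
    with zipfile.ZipFile(archive_name, 'w', zipfile.ZIP_DEFLATED) as zf:
        zf.write(sample_file)
        for root, _dirs, files in os.walk(sdk_dir):
            for name in files:
                zf.write(os.path.join(root, name))

package_lambda('greengrasssdk', 'greengrass_object_detection_sample_ssd.py',
               'greengrass_lambda.zip')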

Deploying Lambdas

Configuring the Lambda function

After creating the AWS Greengrass group and the lambda function, configure the lambda function for AWS Greengrass.

  1. Follow steps 1-8 in Configure the Lambda Function of the AWS documentation.

  2. In addition to the details mentioned in step 8, change the Memory limit to 2048 MB to accommodate large input video streams.

  3. Add the following environment variables as key-value pairs when editing the lambda configuration, then click Update. A sketch of how the samples read these variables follows this list.

    Table 1. Environment Variables: Lambda Configuration

    Key                       Value
    PARAM_MODEL_XML           <MODEL_DIR>/<IR.xml>, where <MODEL_DIR> is specified by the user and contains IR.xml, the Intermediate Representation file from the Intel® Model Optimizer
    PARAM_INPUT_SOURCE        <DATA_DIR>/input.webm, to be specified by the user. For a webcam, set PARAM_INPUT_SOURCE to '/dev/video0'
    PARAM_DEVICE              For CPU, specify "CPU"
    PARAM_CPU_EXTENSION_PATH  /usr/lib64/libcpu_extension.so
    PARAM_OUTPUT_DIRECTORY    <DATA_DIR>, to be specified by the user. Holds both input and output data
    PARAM_NUM_TOP_RESULTS     Specified by the user for the classification sample (e.g., 1 for top-1 result, 5 for top-5 results)
  4. Add a subscription to subscribe to, or publish messages from, the AWS Greengrass lambda function by following steps 10-14 in Configure the Lambda Function.

    Note

    The “Optional topic filter” field should be the topic mentioned inside the lambda function.

    For example, openvino/ssd or openvino/classification
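
Inside the lambda, these values arrive as ordinary environment variables. A minimal sketch of reading them (keys from Table 1; the defaults shown are illustrative assumptions):

import os

param_model_xml = os.environ.get('PARAM_MODEL_XML')
param_input_source = os.environ.get('PARAM_INPUT_SOURCE', '/dev/video0')
param_device = os.environ.get('PARAM_DEVICE', 'CPU')
param_cpu_extension_path = os.environ.get('PARAM_CPU_EXTENSION_PATH',
                                          '/usr/lib64/libcpu_extension.so')
param_output_directory = os.environ.get('PARAM_OUTPUT_DIRECTORY')
param_num_top_results = int(os.environ.get('PARAM_NUM_TOP_RESULTS', '5'))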

Local Resources

  1. Follow the AWS documentation on accessing local resources to add local resources and access privileges to the group.

    The following local resources are needed for the CPU:

    Local Resources

    Name      Resource type  Local path                                 Access
    ModelDir  Volume         <MODEL_DIR>, to be specified by the user   Read-Only
    Webcam    Device         /dev/video0                                Read-Only
    DataDir   Volume         <DATA_DIR>, to be specified by the user;   Read and Write
                             holds both input and output data

Deploy

To deploy the lambda function to the AWS Greengrass core device, select “Deployments” on the group page and follow the instructions.

Output Consumption

There are four options available for output consumption. These options report, stream, upload, or store inference output at an interval defined by the variable reporting_interval in the AWS Greengrass samples. A consolidated sketch of all four options follows the list.

  1. IoT Cloud Output: This option is enabled by default in the AWS Greengrass samples through the variable enable_iot_cloud_output. Use it to verify that the lambda is running on the edge device. It publishes messages to the IoT cloud using the subscription topic specified in the lambda (for example, ‘openvino/classification’ for the classification sample and ‘openvino/ssd’ for the object detection sample). For classification, the top-1 result with its class label is published to the IoT cloud. For SSD object detection, detection results such as bounding box coordinates of objects, class label, and class confidence are published.

    Follow the AWS instructions to view the output on the IoT cloud.

  2. Kinesis Streaming:

    This option streams inference output from the edge device to the cloud using Kinesis [3] streams when ‘enable_kinesis_output’ is set to True. The edge devices act as data producers and continually push processed data to the cloud. Users need to set up and specify the Kinesis stream name, Kinesis shard, and AWS region in the AWS Greengrass samples.

  3. Cloud Storage using AWS S3 Bucket:

    When the ‘enable_s3_jpeg_output’ variable is set to True, processed frames are uploaded and stored in JPEG format in an AWS S3 bucket. Users need to set up and specify the S3 bucket name in the AWS Greengrass samples to store the JPEG images. The images are named using the timestamp and uploaded to S3.

  4. Local Storage:

    When the ‘enable_local_jpeg_output’ variable is set to True, processed frames are stored in JPEG format on the edge device, in the directory specified by ‘PARAM_OUTPUT_DIRECTORY’. The images are named using the timestamp.
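
The four options above can be combined in a single reporting step. The following sketch uses the flag names from the samples; the region, stream name, partition key, bucket name, and the report() helper with its arguments are placeholders and assumptions, not the shipped code:

import json
import os
import time

import boto3
import greengrasssdk

# Output toggles, as described in options 1-4 above.
enable_iot_cloud_output = True
enable_kinesis_output = False
enable_s3_jpeg_output = False
enable_local_jpeg_output = False

iot_client = greengrasssdk.client('iot-data')
kinesis_client = boto3.client('kinesis', region_name='<aws_region>')
s3_client = boto3.client('s3')

def report(results, jpeg_bytes, output_dir):
    # Called once per reporting_interval with the latest inference
    # results and the current frame encoded as JPEG bytes.
    timestamp = time.strftime('%Y%m%d-%H%M%S')
    if enable_iot_cloud_output:
        iot_client.publish(topic='openvino/ssd', payload=json.dumps(results))
    if enable_kinesis_output:
        kinesis_client.put_record(StreamName='<stream_name>',
                                  Data=json.dumps(results),
                                  PartitionKey='<partition_key>')
    if enable_s3_jpeg_output:
        s3_client.put_object(Bucket='<bucket_name>',
                             Key=timestamp + '.jpeg', Body=jpeg_bytes)
    if enable_local_jpeg_output:
        with open(os.path.join(output_dir, timestamp + '.jpeg'), 'wb') as f:
            f.write(jpeg_bytes)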