Remote View your Computer Vision Models Running on AWS Panorama

Janos Tolgyesi

Posted on May 9, 2022

Developing real-time smart video analytics applications and deploying them to edge devices is a tricky task. In the last few years, industry players have built platforms to support this activity. Notable examples include NVIDIA Metropolis, Intel’s OpenVINO, and AWS Panorama. While these solutions make some aspects of video analytics application development more straightforward, there are still many issues to deal with before deploying a video analytics application in production. This post introduces Telescope, the first in a series of open-source tools that make developing AWS Panorama applications simpler.

AWS Panorama is a machine learning appliance and software framework that allows you to deploy video analytics applications at the edge. For a thorough introduction and a step-by-step tutorial on deploying a Panorama application, refer to Deploy an Object-Detector Model at the Edge on AWS Panorama.

The AWS Panorama framework eases many aspects of video analytics application development, including camera connection management, video decoding, frame extraction, model optimization and loading, display output management, and over-the-air deployment of your application. Nevertheless, some tasks are still challenging, in particular diagnosing an application whose model does not work as expected.

Introducing Telescope

The only way to get visual feedback on the correct functioning of a Panorama application is to physically connect a display to the HDMI port of the appliance. The display shows the output video stream of a single application deployed on the device. However, physically accessing the appliance is not always feasible. Telescope allows you to re-stream the output video of any Panorama application to an external service, for example, AWS Kinesis Video Streams. This can be very convenient for monitoring an application remotely.

How does Telescope work?

Telescope instantiates a GStreamer pipeline with an appsrc element at its head. An OpenCV VideoWriter is configured to write to the appsrc: instead of saving consecutive frames to a video file, it streams them to the output sink. When opening the VideoWriter instance, the user has to specify the frame width and height and the frame rate of the output stream. You can set these parameters manually or let Telescope infer them from the input dimensions and the frequency at which you feed new frames to it. If you use this auto-configuration feature, some frames (by default 100) are discarded at the beginning of the streaming because Telescope uses them to calculate frame rate statistics and measure the frame dimensions. This is the “warmup” phase of Telescope. If you send frames of different sizes, Telescope resizes the input, but this comes with a performance penalty. You are also expected to send new frames to Telescope at the frequency specified in the frames-per-second parameter. If frames are sent at a different frequency, the video fragments get out of sync in Kinesis Video Streams, and you won’t be able to replay the video smoothly.
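
Under the hood this boils down to plain OpenCV and GStreamer machinery. The following minimal sketch (not Telescope’s actual code; the pipeline and element properties are only indicative) illustrates the mechanism:

import cv2

# Frame size and rate: Telescope either measures these during warmup or takes them
# from your configuration. The values here are placeholders.
width, height, fps = 1280, 720, 15

# Illustrative pipeline: frames pushed into appsrc are converted, H.264-encoded,
# and handed to the KVS Producer GStreamer plugin (kvssink).
pipeline = (
    'appsrc ! videoconvert ! video/x-raw,format=I420 '
    '! x264enc bframes=0 key-int-max=45 '
    '! video/x-h264,stream-format=avc,alignment=au '
    '! kvssink stream-name=panorama-video aws-region=us-east-1'
)

writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, fps, (width, height))

# In the processing loop, push BGR frames of the configured size at roughly `fps`:
# writer.write(frame)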

Cogwheels of a complicated mechanism
Photo by Josh Redd on Unsplash

Telescope is an abstract base class that handles the state machine of the GStreamer pipeline. The concrete implementation KVSTelescope sends frames to the Amazon Kinesis Video Streams service. It is easy to extend Telescope to support other services, especially if a GStreamer plugin already exists for them.
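
To make the idea concrete, here is a purely hypothetical sketch (it does not reflect the real backpack API; see the API docs for the actual extension points) showing that a service-specific implementation essentially boils down to supplying a different GStreamer sink for the same appsrc-fed pipeline:

import cv2

class FileSinkTelescopeSketch:
    """Hypothetical example: streams frames to a local Matroska file instead of KVS."""

    # Swapping this sink description is what targets a different service
    # (e.g. kvssink for Kinesis Video Streams, or an RTSP sink for an RTSP server).
    SINK = 'x264enc ! matroskamux ! filesink location=/tmp/restream.mkv'

    def __init__(self, width, height, fps):
        pipeline = f'appsrc ! videoconvert ! {self.SINK}'
        self.writer = cv2.VideoWriter(
            pipeline, cv2.CAP_GSTREAMER, 0, fps, (width, height)
        )

    def put(self, frame):
        # Push one frame into the pipeline.
        self.writer.write(frame)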

Integrating Telescope into a Panorama application

Configuring the Docker container

Telescope depends on custom-compiled external libraries. All these libraries must be compiled and configured correctly in your application’s Docker container to make Telescope work. These libraries include:

  • GStreamer 1.0 installed with the standard plugins pack, libav, tools, and development libraries;
  • OpenCV 4.2.0, compiled with GStreamer support and Python bindings;
  • numpy (typically installed by the base Docker image of your Panorama application).

If you want to use KVSTelescope, the Telescope implementation that streams video to Amazon Kinesis Video Streams, you will also need the following:

  • Amazon Kinesis Video Streams (KVS) Producer SDK compiled with GStreamer plugin support;
  • the GST_PLUGIN_PATH environment variable configured to point to the directory where the compiled binaries of the KVS Producer SDK GStreamer plugin are placed;
  • the LD_LIBRARY_PATH environment variable including the open-source third-party dependencies compiled by the KVS Producer SDK;
  • boto3 (typically installed by the base Docker image of your Panorama application).

A sample Dockerfile is provided in the examples folder of the Telescope source code repository; it shows how to correctly install these libraries and Telescope in any container. In most cases, you just need to copy the relevant sections from the sample into your application’s Dockerfile. Please note that the first time you build the Docker container, it might take up to one hour to compile all the libraries correctly.
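
As a quick sanity check after building the container, you can verify from a Python shell inside it that OpenCV was indeed compiled with GStreamer support (a minimal sketch):

import re
import cv2

# The build information should contain a line like "GStreamer: YES (1.x.y)".
build_info = cv2.getBuildInformation()
match = re.search(r'GStreamer:.*', build_info)
print(match.group(0) if match else 'GStreamer not found in OpenCV build info')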

Setting up AWS IAM privileges for KVSTelescope

KVSTelescope uses the Amazon Kinesis Video Streams (KVS) Producer library, wrapped in a GStreamer sink element, to send the processed frames to the KVS service. The KVS Producer needs AWS credentials. It can use the credentials associated with the Panorama Application Role, but you must configure this explicitly. KVSTelescope needs privileges to execute the following actions:

kinesisvideo:DescribeStream
kinesisvideo:GetDataEndpoint
kinesisvideo:PutMedia

If KVSTelescope needs to automatically create the Kinesis Video Stream the first time it is used, you should also include the kinesisvideo:CreateStream action. An example policy allowing KVSTelescope to write data to Kinesis Video Streams could look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesisvideo:DescribeStream",
                "kinesisvideo:CreateStream",
                "kinesisvideo:GetDataEndpoint",
                "kinesisvideo:PutMedia"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
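
If you prefer not to grant kinesisvideo:CreateStream to the Application Role, you can create the stream ahead of time, for example with boto3 from your development machine (the stream name and retention below are just examples):

import boto3

# Create the Kinesis Video Stream up front, so the application only needs
# DescribeStream, GetDataEndpoint, and PutMedia at runtime.
kvs = boto3.client('kinesisvideo', region_name='us-east-1')
kvs.create_stream(
    StreamName='panorama-video',   # the same name you will pass to KVSTelescope
    DataRetentionInHours=24,       # example retention; 0 disables persistence
)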

Setting up credentials

There are two types of AWS security credentials: static and temporary. The former never expire; in case of a leak, you must invalidate them manually and reconfigure your applications. For this reason, their use is strongly discouraged in a production environment. Examples of static AWS credentials include an IAM user’s Access Key ID and Secret Access Key pair.

A photo of a keychain
Photo by Angela Merenkova on Unsplash

Temporary credentials expire after a predefined period. In case of a leak, they can be used only within their expiration time, typically on the order of several hours. Temporary credentials can be renewed before they expire to extend their lifespan, or they can be exchanged for new ones with a later expiration time. This process requires additional coordination from the application using this type of credential. A set of temporary credentials consists of an AWS Access Key ID, a Secret Access Key, and a Session Token.
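
For illustration, this is how a set of temporary credentials can be obtained with AWS STS (the role ARN below is a placeholder):

import boto3

# Assume an IAM role to obtain temporary credentials.
sts = boto3.client('sts')
response = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/ExampleKvsWriterRole',  # placeholder
    RoleSessionName='telescope-demo',
)
creds = response['Credentials']
# A temporary credential set consists of an access key ID, a secret access key,
# a session token, and an expiration timestamp.
print(creds['AccessKeyId'], creds['Expiration'])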

The KVSCredentialsHandler subclasses provided in the kvs module offer different ways to supply credentials. If you want to use static credentials for testing purposes, create an IAM user in your AWS account and attach a policy like the one above to it. Configure this user for programmatic access to AWS resources and get the user’s AWS Access Key and Secret Key pair. Then create a KVSInlineCredentialsHandler or KVSEnvironmentCredentialsHandler instance in your application’s code to pass these credentials to the KVS Producer plugin, either directly in the GStreamer pipeline definition or as environment variables. However, as these credentials do not expire, using this configuration in a production environment is not recommended. Even in a development and testing environment, you should take appropriate security measures to protect these credentials: never hardcode them in the source code. Instead, use AWS Secrets Manager or a similar service to provide these parameters to your application.
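
For example, you might store the IAM user’s key pair in AWS Secrets Manager and read it at startup. A minimal sketch follows; the secret name is a placeholder and the handler’s constructor arguments are an assumption, so check the backpack API docs for the exact signature:

import json
import boto3
from backpack.kvs import KVSEnvironmentCredentialsHandler

# Fetch the static key pair from AWS Secrets Manager instead of hardcoding it.
secrets = boto3.client('secretsmanager')
secret = json.loads(
    secrets.get_secret_value(SecretId='telescope/test-user')['SecretString']
)

# Assumed constructor arguments, shown for illustration only:
credentials_handler = KVSEnvironmentCredentialsHandler(
    aws_access_key_id=secret['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=secret['AWS_SECRET_ACCESS_KEY'],
)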

KVSTelescope can also use the Panorama Application Role to pass the application's credentials to the KVS Producer. These credentials are temporary, meaning that they expire within a couple of hours and must be renewed before expiration. The Producer library expects temporary credentials in a text file. KVSFileCredentialsHandler manages the renewal of the credentials and periodically updates this text file with the new values. If you want to use this method, attach a policy similar to the example above to your Application Role. Always test whether your Panorama application's KVS integration still works after KVSFileCredentialsHandler has refreshed the credentials: let your application run for several hours and periodically check that it continues to stream video to KVS. You will also find diagnostic information about credential renewals in the CloudWatch logs of your application.

Environment variables

KVSTelescope needs two environment variables so that GStreamer can find the KVS Producer plugin: GST_PLUGIN_PATH and LD_LIBRARY_PATH. They point to the folder of the KVS Producer GStreamer plugin and to its third-party dependencies, respectively. In the provided sample Dockerfile, the correct values of these variables are written to a small configuration file named /panorama/.env in the container. You should pass the path of this file to KVSTelescope or otherwise ensure that these environment variables contain the correct values.
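
A quick way to check inside the container whether the process sees these variables (a minimal sketch, independent of Telescope itself):

import os

# Verify that the GStreamer-related variables are visible to the process, either
# exported by the Dockerfile or loaded from /panorama/.env.
for name in ('GST_PLUGIN_PATH', 'LD_LIBRARY_PATH'):
    value = os.environ.get(name)
    print(f'{name} = {value!r}')
    if not value:
        print(f'Warning: {name} is not set; GStreamer may not find the kvssink plugin.')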

Usage example

When initializing a KVSTelescope instance, you pass the AWS region name where your stream is created, the name of the stream, and a credentials handler instance. If you want to configure the frame rate and the dimensions of the frames manually, you also set them here. When both parameters are specified, Telescope skips the warmup period and sends the first frame directly to KVS. When you are ready to send frames, call the start_streaming method, which opens the GStreamer pipeline. After this method is called, you are expected to send new frames to the stream by calling the put method periodically, at the frequency specified in the frame rate or inferred by KVSTelescope. You can stop and restart streaming on the same KVSTelescope instance.

The following example uses the temporary credentials of the IAM Application Role assumed by your Panorama application:

import panoramasdk
from backpack.kvs import KVSTelescope, KVSFileCredentialsHandler

# You might want to read these values from 
# Panorama application parameters
stream_region = 'us-east-1'
stream_name = 'panorama-video'

# The example Dockerfile writes static 
# configuration variables to this file
# If you change the .env file path in the 
# Dockerfile, you should change it also here
DOTENV_PATH = '/panorama/.env'

class Application(panoramasdk.node):

    def __init__(self):
        super().__init__()
        # ...
        credentials_handler = KVSFileCredentialsHandler()
        self.telescope = KVSTelescope(
            stream_region=stream_region,
            stream_name=stream_name,
            credentials_handler=credentials_handler,
            dotenv_path=DOTENV_PATH
        )
        # This call opens the streaming pipeline:
        self.telescope.start_streaming()

    # called from video processing loop:
    def process_streams(self):
        streams = self.inputs.video_in.get()

        for idx, stream in enumerate(streams):

            # Process the stream, for example with:
            # self.process_media(stream)

            # TODO: eventually multiplex streams 
            # to a single frame
            if idx == 0:
                self.telescope.put(stream.image)

If everything works well, you can watch the re-streamed video on the Kinesis Video Streams page of the AWS console. Of course, you can modify the image before sending it to Telescope, for example by drawing annotations on it based on the inference results of your deep learning model.

Annotations

Annotations and annotation drivers provide a unified way to draw annotations on different rendering backends. Currently, two annotation drivers are implemented:

  • PanoramaMediaAnnotationDriver allows you to draw on a panoramasdk.media object, and
  • OpenCVImageAnnotationDriver allows you to draw on an OpenCV image (numpy array) object.

A photo of colorful pencils
Photo by Lucas George Wendt on Unsplash

The library can draw two types of annotations: labels and boxes.

Using annotations

Depending on the available backends, you can create one or more annotation driver instances at the beginning of the video frame processing loop. While processing a single frame, you are expected to collect all annotations to be drawn on that frame in a Python collection (for example, a list). When the processing is finished, you call the render method on any number of drivers, passing the same collection of annotations. All coordinates used in annotations are normalized to the [0, 1) range.

Example usage:

import panoramasdk
from backpack.annotation import (
    Point, LabelAnnotation, RectAnnotation, TimestampAnnotation,
    OpenCVImageAnnotationDriver, 
    PanoramaMediaAnnotationDriver
)

class Application(panoramasdk.node):

    def __init__(self):
        super().__init__()
        # self.telescope = ... 
        self.panorama_driver = PanoramaMediaAnnotationDriver()
        self.cv2_driver = OpenCVImageAnnotationDriver()

    # called from video processing loop:
    def process_streams(self):
        streams = self.inputs.video_in.get()
        for idx, stream in enumerate(streams):
            annotations = [
                TimestampAnnotation(),
                RectAnnotation(
                    point1=Point(0.1, 0.1), 
                    point2=Point(0.9, 0.9)
                ),
                LabelAnnotation(
                    point=Point(0.5, 0.5), 
                    text='Hello World!'
                )
            ]
            self.panorama_driver.render(annotations, stream)

            # TODO: eventually multiplex streams to a 
            # single frame
            if idx == 0:
                self.cv2_driver.render(
                    annotations, 
                    stream.image
                )
                # self.telescope.put(stream.image)

Postface

Even if Telescope can be a helpful tool, its usage raises two concerns that you should consider carefully. We discourage using Telescope in a production environment: it is a development aid and a debugging tool.

A photo of a mountain path with a sign saying danger
Photo by Greg Rosenke on Unsplash

The first concern is of a technical nature. Currently, the application code in a Panorama app does not have direct access to the onboard GPU; thus, all video encoding done by Telescope runs on the device’s CPU. This could slow down the appliance: streaming a single output stream with Telescope could require anything between 10% and 30% of the CPU capacity of the device.

The second concern is related to data protection. The Panorama appliance is designed to protect the video streams being processed. It has two Ethernet interfaces to separate the network of the video cameras (typically a closed-circuit local area network) from the device’s Internet access. Using Telescope, you relay the video stream from the protected, closed-circuit camera network to the public Internet. You should carefully examine the data protection requirements of your application and of the camera network before using Telescope. Moreover, you are responsible for keeping all AWS credentials used by Telescope private, as required by the AWS Shared Responsibility Model.

Where to go from here?

Telescope is part of Backpack, a broader set of tools that aims to help software development on AWS Panorama. Other components will be presented in future posts, so stay tuned and follow us for updates.

Backpack is open source and available on GitHub. You can build and run the example Docker container on an ARM64-based system. For example, suppose you have already set up the Panorama Test Utility on a t4g-type EC2 instance. In this case, you can build the container and launch a shell in it with the following commands on the EC2 instance:

$ git clone https://github.com/Neosperience/backpack.git
$ cd backpack/examples
$ docker build . -t my_backpack_example:1.0
$ docker run -it my_backpack_example:1.0 bash

The container can also be built and executed on a MacBook Pro with the M1 ARM-based chip.

The Backpack library is extensively documented with docstrings. You can read the detailed API docs online or use the python3 interpreter in the container shell:

root@my_backpack_example:/# python3
>>> import backpack.telescope, backpack.kvs, backpack.annotation
>>> help(backpack.telescope)
>>> help(backpack.kvs)
>>> help(backpack.annotation)

You can also try other Telescope implementations found in Backpack. For example, RTSPTelescope allows you to host an RTSP server right in the application container, and connect directly to your Panorama appliance with an RTSP client to monitor your application.

Let me know how you use Telescope and which new features you would like to see, and stay tuned for future posts about the other components of Backpack.


About the author

Janos Tolgyesi is an AWS Community Builder working as a Machine Learning Solution Architect at Neosperience. He has worked with ML technologies for five years and with AWS infrastructure for eight years. He loves building things, be it a video analytics application on the edge or a user profiler based on clickstream events. You can find him here on dev.to, as well as on Twitter, Medium, and LinkedIn.

The open-source project of Backpack was supported by Neosperience.

I would like to express my special thanks to Luca Bianchi for proofreading this article.
