

End of support notice: On May 31, 2026, AWS will end support for AWS Panorama. After May 31, 2026, you will no longer be able to access the AWS Panorama console or AWS Panorama resources. For more information, see [AWS Panorama end of support](https://docs.aws.amazon.com/panorama/latest/dev/panorama-end-of-support.html). 

# Building AWS Panorama applications
<a name="panorama-development"></a>

Applications run on the AWS Panorama Appliance to perform computer vision tasks on video streams. You can build computer vision applications by combining Python code and machine learning models, and deploy them to the AWS Panorama Appliance over the internet. Applications can send video to a display, or use the AWS SDK to send results to AWS services.

A [model](applications-models.md) analyzes images to detect people, vehicles, and other objects. Based on images that it has seen during training, the model tells you what it thinks something is, and how confident it is in its guess. You can train models with your own image data or get started with a sample.

The application's [code](gettingstarted-sample.md) processes still images from a camera stream, sends them to a model, and processes the results. A model might detect multiple objects and return their shapes and locations. The code can use this information to add text or graphics to the video, or to send results to an AWS service for storage or further processing.

To get images from a stream, interact with a model, and output video, application code uses [the AWS Panorama Application SDK](applications-panoramasdk.md). The application SDK is a Python library that supports models generated with PyTorch, Apache MXNet, and TensorFlow.

**Topics**
+ [Computer vision models](applications-models.md)
+ [Building an application image](applications-image.md)
+ [Calling AWS services from your application code](applications-awssdk.md)
+ [The AWS Panorama Application SDK](applications-panoramasdk.md)
+ [Running multiple threads](applications-threading.md)
+ [Serving inbound traffic](applications-ports.md)
+ [Using the GPU](applications-gpuaccess.md)
+ [Setting up a development environment in Windows](applications-devenvwindows.md)

# Computer vision models
<a name="applications-models"></a>

A *computer vision model* is a software program that is trained to detect objects in images. A model learns to recognize a set of objects by first analyzing images of those objects through training. A computer vision model takes an image as input and outputs information about the objects that it detects, such as the type of object and its location. AWS Panorama supports computer vision models built with PyTorch, Apache MXNet, and TensorFlow.

**Note**  
For a list of pre-built models that have been tested with AWS Panorama, see [Model compatibility](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/resources/model-compatibility.md).

**Topics**
+ [Using models in code](#applications-models-using)
+ [Building a custom model](#applications-models-custom)
+ [Packaging a model](#applications-models-package)
+ [Training models](#applications-models-training)

## Using models in code
<a name="applications-models-using"></a>

A model returns one or more results, which can include probabilities for detected classes, location information, and other data. The following example shows how to run inference on an image from a video stream and send the model's output to a processing function.

**Example [application.py](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/aws-panorama-sample/packages/123456789012-SAMPLE_CODE-1.0/application.py) – Inference**  

```
    def process_media(self, stream):
        """Runs inference on a frame of video."""
        image_data = preprocess(stream.image,self.MODEL_DIM)
        logger.debug('Image data: {}'.format(image_data))
        # Run inference
        inference_start = time.time()
        inference_results = self.call({"data":image_data}, self.MODEL_NODE)
         # Log metrics
        inference_time = (time.time() - inference_start) * 1000
        if inference_time > self.inference_time_max:
            self.inference_time_max = inference_time
        self.inference_time_ms += inference_time
        # Process results (classification)
        self.process_results(inference_results, stream)
```

The following example shows a function that processes results from a basic classification model. The sample model returns an array of probabilities, which is the first and only value in the results array.

**Example [application.py](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/aws-panorama-sample/packages/123456789012-SAMPLE_CODE-1.0/application.py) – Processing results**  

```
    def process_results(self, inference_results, stream):
        """Processes output tensors from a computer vision model and annotates a video frame."""
        if inference_results is None:
            logger.warning("Inference results are None.")
            return
        max_results = 5
        logger.debug('Inference results: {}'.format(inference_results))
        class_tuple = inference_results[0]
        enum_vals = [(i, val) for i, val in enumerate(class_tuple[0])]
        sorted_vals = sorted(enum_vals, key=lambda tup: tup[1])
        top_k = sorted_vals[::-1][:max_results]
        indexes =  [tup[0] for tup in top_k]

        for j in range(max_results):
            label = 'Class [%s], with probability %.3f.'% (self.classes[indexes[j]], class_tuple[0][indexes[j]])
            stream.add_label(label, 0.1, 0.1 + 0.1*j)
```

The application code finds the values with the highest probabilities and maps them to labels in a resource file that's loaded during initialization.
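The labels file is typically a plain text file with one label per line. The following sketch shows one way to load it during initialization; the file name and the `load_classes` helper are illustrative, not part of the sample application.

```
def load_classes(path):
    """Loads class labels from a text file with one label per line."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Hypothetical usage during initialization:
# self.classes = load_classes('/panorama/classes.txt')
```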

## Building a custom model
<a name="applications-models-custom"></a>

You can use models that you build in PyTorch, Apache MXNet, and TensorFlow in AWS Panorama applications. As an alternative to building and training models in SageMaker AI, you can use a trained model, or build, train, and export your own model with a supported framework in a local environment or on Amazon EC2.

**Note**  
For details about the framework versions and file formats supported by SageMaker AI Neo, see [Supported Frameworks](https://docs.aws.amazon.com/sagemaker/latest/dg/neo-supported-devices-edge-frameworks.html) in the Amazon SageMaker AI Developer Guide.

The repository for this guide provides a sample application that demonstrates this workflow for a Keras model in TensorFlow `SavedModel` format. It uses TensorFlow 2 and can run locally in a virtual environment or in a Docker container. The sample app also includes templates and scripts for building the model on an Amazon EC2 instance.

****
+ [Custom model sample application](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/custom-model)

![\[\]](http://docs.aws.amazon.com/panorama/latest/dev/images/sample-custom-model.png)


AWS Panorama uses SageMaker AI Neo to compile models for use on the AWS Panorama Appliance. For each framework, use the [format that's supported by SageMaker AI Neo](https://docs.aws.amazon.com/sagemaker/latest/dg/neo-compilation-preparing-model.html), and package the model in a `.tar.gz` archive.
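For example, a PyTorch model exported as a single file can be packaged with `tar`. The file names below are placeholders; substitute the files that your export step produces.

```
# Create a placeholder model file, then package it the way
# SageMaker AI Neo expects (a .tar.gz with the model at the archive root).
mkdir -p model-build
touch model-build/model.pth        # stand-in for your exported model
tar -C model-build -czf model.tar.gz model.pth
```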

For more information, see [Compile and deploy models with Neo](https://docs.aws.amazon.com/sagemaker/latest/dg/neo.html) in the Amazon SageMaker AI Developer Guide.

## Packaging a model
<a name="applications-models-package"></a>

A model package comprises a descriptor, package configuration, and model archive. Like in an [application image package](applications-image.md), the package configuration tells the AWS Panorama service where the model and descriptor are stored in Amazon S3. 

**Example [packages/123456789012-SQUEEZENET\_PYTORCH-1.0/descriptor.json](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/aws-panorama-sample/packages/123456789012-SQUEEZENET_PYTORCH-1.0/descriptor.json)**  

```
{
    "mlModelDescriptor": {
        "envelopeVersion": "2021-01-01",
        "framework": "PYTORCH",
        "frameworkVersion": "1.8",
        "precisionMode": "FP16",
        "inputs": [
            {
                "name": "data",
                "shape": [
                    1,
                    3,
                    224,
                    224
                ]
            }
        ]
    }
}
```

**Note**  
Specify the framework version's major and minor version only. For a list of supported PyTorch, Apache MXNet, and TensorFlow versions, see [Supported frameworks](https://docs.aws.amazon.com/sagemaker/latest/dg/neo-supported-devices-edge-frameworks.html).

To import a model, use the AWS Panorama Application CLI `import-raw-model` command. If you make any changes to the model or its descriptor, you must rerun this command to update the application's assets. For more information, see [Changing the computer vision model](gettingstarted-sample.md#gettingstarted-sample-model).

For the descriptor file's JSON schema, see [assetDescriptor.schema.json](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/resources/manifest-schema/ver_2021-01-01/assetDescriptor.schema.json).

## Training models
<a name="applications-models-training"></a>

When you train a model, use images from the target environment, or from a test environment that closely resembles the target environment. Consider the following factors that can affect model performance:

****
+ **Lighting** – The amount of light that is reflected by a subject determines how much detail the model has to analyze. A model trained with images of well-lit subjects might not work well in a low-light or backlit environment.
+ **Resolution** – The input size of a model is typically fixed at a resolution between 224 and 512 pixels wide in a square aspect ratio. Before you pass a frame of video to the model, you can downscale or crop it to fit the required size.
+ **Image distortion** – A camera's focal length and lens shape can cause images to exhibit distortion away from the center of the frame. The position of a camera also determines which features of a subject are visible. For example, an overhead camera with a wide angle lens will show the top of a subject when it's in the center of the frame, and a skewed view of the subject's side as it moves farther away from center.

To address these issues, you can preprocess images before sending them to the model, and train the model on a wider variety of images that reflect variances in real-world environments. If a model needs to operate in a variety of lighting situations and with a variety of cameras, you need more data for training. In addition to gathering more images, you can get more training data by creating variations of your existing images that are skewed or have different lighting.

# Building an application image
<a name="applications-image"></a>

The AWS Panorama Appliance runs applications as container filesystems exported from an image that you build. You specify your application's dependencies and resources in a Dockerfile that uses the AWS Panorama application base image as a starting point.

To build an application image, you use Docker and the AWS Panorama Application CLI. The following Dockerfile from this guide's sample application demonstrates the process.

**Example [packages/123456789012-SAMPLE\_CODE-1.0/Dockerfile](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/aws-panorama-sample/packages/123456789012-SAMPLE_CODE-1.0/Dockerfile)**  

```
FROM public.ecr.aws/panorama/panorama-application
WORKDIR /panorama
COPY . .
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt
```

The following Dockerfile instructions are used.

****
+ `FROM` – Loads the application base image (`public.ecr.aws/panorama/panorama-application`). 
+ `WORKDIR` – Sets the working directory on the image. `/panorama` is used for application code and related files. This setting only persists during the build and does not affect the working directory for your application at runtime (`/`).
+ `COPY` – Copies files from a local path to a path on the image. `COPY . .` copies the files in the current directory (the package directory) to the working directory on the image. For example, the application code is copied from `packages/123456789012-SAMPLE_CODE-1.0/application.py` to `/panorama/application.py`.
+ `RUN` – Runs shell commands on the image during the build. A single `RUN` operation can run multiple commands in sequence by using `&&` between commands. This example updates the `pip` package manager and then installs the libraries listed in `requirements.txt`.

You can use other instructions, such as `ADD` and `ARG`, that are useful at build time. Instructions that add runtime information to the container, such as `ENV`, do not work with AWS Panorama. AWS Panorama does not run a container from the image. It only uses the image to export a filesystem, which is transferred to the appliance.

## Specifying dependencies
<a name="applications-image-dependencies"></a>

`requirements.txt` is a Python requirements file that specifies libraries used by the application. The sample application uses Open CV and the AWS SDK for Python (Boto3).

**Example [packages/123456789012-SAMPLE\_CODE-1.0/requirements.txt](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/aws-panorama-sample/packages/123456789012-SAMPLE_CODE-1.0/requirements.txt)**  

```
boto3==1.24.*
opencv-python==4.6.*
```

The `pip install` command in the Dockerfile installs these libraries to the Python `dist-packages` directory under `/usr/local/lib`, so that they can be imported by your application code.

## Local storage
<a name="applications-image-storage"></a>

AWS Panorama reserves the `/opt/aws/panorama/storage` directory for application storage. Your application can create and modify files at this path. Files created in the storage directory persist across reboots. Other temporary file locations are cleared on boot.
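For example, an application can persist a small state file under this path. The `save_state` helper is illustrative; the `base` parameter exists only so the sketch can be exercised outside the appliance.

```
import os

STORAGE_PATH = '/opt/aws/panorama/storage'

def save_state(filename, text, base=STORAGE_PATH):
    """Writes a small file to the persistent storage directory."""
    os.makedirs(base, exist_ok=True)
    path = os.path.join(base, filename)
    with open(path, 'w') as f:
        f.write(text)
    return path
```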

## Building image assets
<a name="applications-image-build"></a>

When you build an image for your application package with the AWS Panorama Application CLI, the CLI runs `docker build` in the package directory. This builds an application image that contains your application code. The CLI then creates a container, exports its filesystem, compresses it, and stores it in the `assets` folder.

```
$ panorama-cli build-container --container-asset-name code_asset --package-path packages/123456789012-SAMPLE_CODE-1.0
docker build -t code_asset packages/123456789012-SAMPLE_CODE-1.0 --pull
docker export --output=code_asset.tar $(docker create code_asset:latest)
gzip -1 code_asset.tar
{
    "name": "code_asset",
    "implementations": [
        {
            "type": "container",
            "assetUri": "6f67xmpl32743ed0e60c151a02f2f0da1bf70a4ab9d83fe236fa32a6f9b9f808.tar.gz",
            "descriptorUri": "1872xmpl129481ed053c52e66d6af8b030f9eb69b1168a29012f01c7034d7a8f.json"
        }
    ]
}
Container asset for the package has been succesfully built at  /home/user/aws-panorama-developer-guide/sample-apps/aws-panorama-sample/assets/6f67xmpl32743ed0e60c151a02f2f0da1bf70a4ab9d83fe236fa32a6f9b9f808.tar.gz
```

The JSON block in the output is an asset definition that the CLI adds to the package configuration (`package.json`) and registers with the AWS Panorama service. The CLI also copies the descriptor file, which specifies the path to the application script (the application's entry point).

**Example [packages/123456789012-SAMPLE\_CODE-1.0/descriptor.json](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/aws-panorama-sample/packages/123456789012-SAMPLE_CODE-1.0/descriptor.json)**  

```
{
    "runtimeDescriptor":
    {
        "envelopeVersion": "2021-01-01",
        "entry":
        {
            "path": "python3",
            "name": "/panorama/application.py"
        }
    }
}
```

In the assets folder, the descriptor and application image are named for their SHA-256 checksum. This name is used as a unique identifier for the asset when it is stored in Amazon S3.
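You can reproduce an asset's name locally by computing the checksum of the file. The following sketch is illustrative; `asset_name` is not part of the AWS Panorama tooling.

```
import hashlib

def asset_name(path):
    """Returns the SHA-256 hex digest of a file, computed in chunks."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            digest.update(chunk)
    return digest.hexdigest()
```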

# Calling AWS services from your application code
<a name="applications-awssdk"></a>

You can use the AWS SDK for Python (Boto) to call AWS services from your application code. For example, if your model detects something out of the ordinary, you could post metrics to Amazon CloudWatch, send a notification with Amazon SNS, save an image to Amazon S3, or invoke a Lambda function for further processing. Most AWS services have a public API that you can use with the AWS SDK.

The appliance does not have permission to access any AWS services by default. To grant it permission, [create a role for the application](permissions-application.md), and assign it to the application instance during deployment.

**Topics**
+ [Using Amazon S3](#applications-awssdk-s3)
+ [Using the AWS IoT MQTT topic](#monitoring-messagestream)

## Using Amazon S3
<a name="applications-awssdk-s3"></a>

You can use Amazon S3 to store processing results and other application data.

```
import os
import boto3
s3_client = boto3.client("s3")
s3_client.upload_file(data_file,
                      s3_bucket_name,
                      os.path.basename(data_file))
```

## Using the AWS IoT MQTT topic
<a name="monitoring-messagestream"></a>

You can use the SDK for Python (Boto3) to send messages to an [MQTT topic](https://docs.aws.amazon.com/iot/latest/developerguide/topics.html) in AWS IoT. In the following example, the application posts to a topic named after the appliance's *thing name*, which you can find in the [AWS IoT console](https://console.aws.amazon.com/iot/home#/thinghub).

```
import boto3
iot_client=boto3.client('iot-data')
topic = "panorama/panorama_my-appliance_Thing_a01e373b"
iot_client.publish(topic=topic, payload="my message")
```

Choose a name that indicates the device ID or other identifier of your choice. To publish messages, the application needs permission to call `iot:Publish`.
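For example, the application role's policy needs a statement like the following. The Region, account ID, and topic path are placeholders.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iot:Publish",
            "Resource": "arn:aws:iot:us-east-1:123456789012:topic/panorama/*"
        }
    ]
}
```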

**To monitor an MQTT queue**

1. Open the [AWS IoT console Test page](https://console.aws.amazon.com/iot/home?region=us-east-1#/test).

1. For **Subscription topic**, enter the name of the topic. For example, `panorama/panorama_my-appliance_Thing_a01e373b`.

1. Choose **Subscribe to topic**.

# The AWS Panorama Application SDK
<a name="applications-panoramasdk"></a>

The AWS Panorama Application SDK is a Python library for developing AWS Panorama applications. In your [application code](gettingstarted-sample.md), you use the AWS Panorama Application SDK to load a computer vision model, run inference, and output video to a monitor.

**Note**  
To ensure that you have access to the latest functionality of the AWS Panorama Application SDK, [upgrade the appliance software](appliance-manage.md#appliance-manage-software).

For details about the classes that the application SDK defines and their methods, see [Application SDK reference](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/resources/applicationsdk-reference.md).

**Topics**
+ [Adding text and boxes to output video](#applications-panoramasdk-overlays)

## Adding text and boxes to output video
<a name="applications-panoramasdk-overlays"></a>

With the AWS Panorama SDK, you can output a video stream to a display. The video can include text and boxes that show output from the model, the current state of the application, or other data.

Each object in the `video_in` array is an image from a camera stream that is connected to the appliance. The type of this object is `panoramasdk.media`. It has methods to add text and rectangular boxes to the image, which you can then assign to the `video_out` array.

In the following example, the sample application adds a label for each of the results. Each result is positioned at the same left position, but at different heights.

```
        for j in range(max_results):
            label = 'Class [%s], with probability %.3f.'% (self.classes[indexes[j]], class_tuple[0][indexes[j]])
            stream.add_label(label, 0.1, 0.1 + 0.1*j)
```

To add a box to the output image, use `add_rect`. This method takes four values between 0 and 1, indicating the positions of the top-left and bottom-right corners of the box.

```
        w,h,c = stream.image.shape
        stream.add_rect(x1/w, y1/h, x2/w, y2/h)
```

# Running multiple threads
<a name="applications-threading"></a>

You can run your application logic on a processing thread and use other threads for other background processes. For example, you can create a thread that [serves HTTP traffic](applications-ports.md) for debugging, or a thread that monitors inference results and sends data to AWS.

To run multiple threads, you use the [threading module](https://docs.python.org/3/library/threading.html) from the Python standard library to create a thread for each process. The following example shows the main loop of the debug server sample application, which creates an application object and uses it to run three threads.

**Example [packages/123456789012-DEBUG\_SERVER-1.0/application.py](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/packages/123456789012-DEBUG_SERVER-1.0/application.py) – Main loop**  

```
def main():
    panorama = panoramasdk.node()
    while True:
        try:
            # Instantiate application
            logger.info('INITIALIZING APPLICATION')
            app = Application(panorama)
            # Create threads for stream processing, debugger, and client
            app.run_thread = threading.Thread(target=app.run_cv)
            app.server_thread = threading.Thread(target=app.run_debugger)
            app.client_thread = threading.Thread(target=app.run_client)
            # Start threads
            logger.info('RUNNING APPLICATION')
            app.run_thread.start()
            logger.info('RUNNING SERVER')
            app.server_thread.start()
            logger.info('RUNNING CLIENT')
            app.client_thread.start()
            # Wait for threads to exit
            app.run_thread.join()
            app.server_thread.join()
            app.client_thread.join()
            logger.info('RESTARTING APPLICATION')
        except:
            logger.exception('Exception during processing loop.')
```

When all of the threads exit, the application restarts itself. The `run_cv` loop processes images from camera streams. If it receives a signal to stop, it shuts down the debugger process, which runs an HTTP server and can't shut itself down. Each thread must handle its own errors. If an error is not caught and logged, the thread exits silently.

**Example [packages/123456789012-DEBUG\_SERVER-1.0/application.py](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/packages/123456789012-DEBUG_SERVER-1.0/application.py) – Processing loop**  

```
    # Processing loop
    def run_cv(self):
        """Run computer vision workflow in a loop."""
        logger.info("PROCESSING STREAMS")
        while not self.terminate:
            try:
                self.process_streams()
                # turn off debug logging after 15 loops
                if logger.getEffectiveLevel() == logging.DEBUG and self.frame_num == 15:
                    logger.setLevel(logging.INFO)
            except:
                logger.exception('Exception on processing thread.')
        # Stop signal received
        logger.info("SHUTTING DOWN SERVER")
        self.server.shutdown()
        self.server.server_close()
        logger.info("EXITING RUN THREAD")
```

Threads communicate via the application's `self` object. To restart the application processing loop, the debugger thread calls the `stop` method. This method sets a `terminate` attribute, which signals the other threads to shut down.

**Example [packages/123456789012-DEBUG\_SERVER-1.0/application.py](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/packages/123456789012-DEBUG_SERVER-1.0/application.py) – Stop method**  

```
    # Interrupt processing loop
    def stop(self):
        """Signal application to stop processing."""
        logger.info("STOPPING APPLICATION")
        # Signal processes to stop
        self.terminate = True
    # HTTP debug server
    def run_debugger(self):
        """Process debug commands from local network."""
        class ServerHandler(SimpleHTTPRequestHandler):
            # Store reference to application
            application = self
            # Get status
            def do_GET(self):
                """Process GET requests."""
                logger.info('Get request to {}'.format(self.path))
                if self.path == "/status":
                    self.send_200('OK')
                else:
                    self.send_error(400)
            # Restart application
            def do_POST(self):
                """Process POST requests."""
                logger.info('Post request to {}'.format(self.path))
                if self.path == '/restart':
                    self.send_200('OK')
                    ServerHandler.application.stop()
                else:
                    self.send_error(400)
```



# Serving inbound traffic
<a name="applications-ports"></a>

You can monitor or debug applications locally by running an HTTP server alongside your application code. To serve external traffic, you map ports on the AWS Panorama Appliance to ports on your application container.

**Important**  
By default, the AWS Panorama Appliance does not accept incoming traffic on any ports. Opening ports on the appliance carries inherent security risks. When you use this feature, you must take additional steps to [secure your appliance from external traffic](appliance-network.md) and secure communications between authorized clients and the appliance.  
The sample code included with this guide is for demonstration purposes and does not implement authentication, authorization, or encryption.

You can open ports in the range 8000–9000 on the appliance. These ports, when opened, can receive traffic from any routable client. When you deploy your application, you specify which ports to open, and map ports on the appliance to ports on your application container. The appliance software forwards traffic to the container and sends responses back to the requestor. Requests are received on the appliance port that you specify; responses go out on a random ephemeral port.

## Configuring inbound ports
<a name="applications-ports-configuration"></a>

You specify port mappings in three places in your application configuration. In the code package's `package.json`, you specify the port that the code node listens on in a `network` block. The following example declares that the node listens on port 80.

**Example [packages/123456789012-DEBUG\_SERVER-1.0/package.json](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/packages/123456789012-DEBUG_SERVER-1.0/package.json)**  

```
                "outputs": [
                    {
                        "description": "Video stream output",
                        "name": "video_out",
                        "type": "media"
                    }
                ],
                "network": {
                    "inboundPorts": [
                        {
                            "port": 80,
                            "description": "http"
                        }
                    ]
                }
```

In the application manifest, you declare a routing rule that maps a port on the appliance to a port on the application's code container. The following example adds a rule that maps port 8080 on the device to port 80 on the `code_node` container.

**Example [graphs/my-app/graph.json](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/graphs/my-app/graph.json)**  

```
            {
                "producer": "model_input_width",
                "consumer": "code_node.model_input_width"
            },
            {
                "producer": "model_input_order",
                "consumer": "code_node.model_input_order"
            }
        ],
        "networkRoutingRules": [
            {
                "node": "code_node",
                "containerPort": 80,
                "hostPort": 8080,
                "decorator": {
                    "title": "Listener port 8080",
                    "description": "Container monitoring and debug."
                }
            }
        ]
```

When you deploy the application, you specify the same rules in the AWS Panorama console, or with an override document passed to the [CreateApplicationInstance](https://docs.aws.amazon.com/panorama/latest/api/API_CreateApplicationInstance.html) API. You must provide this configuration at deploy time to confirm that you want to open ports on the appliance.

**Example [graphs/my-app/override.json](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/graphs/my-app/override.json)**  

```
            {
                "replace": "camera_node",
                "with": [
                    {
                        "name": "exterior-north"
                    }
                ]
            }
        ],
        "networkRoutingRules":[
            {
                "node": "code_node",
                "containerPort": 80,
                "hostPort": 8080
            }
        ],
        "envelopeVersion": "2021-01-01"
    }
}
```

If the device port specified in the application manifest is in use by another application, you can use the override document to choose a different port.
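For example, to map device port 8081 to the container instead, change `hostPort` in the override document. The values below otherwise mirror the sample application.

```
        "networkRoutingRules": [
            {
                "node": "code_node",
                "containerPort": 80,
                "hostPort": 8081
            }
        ]
```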

## Serving traffic
<a name="applications-ports-serverthread"></a>

With ports open on the container, you can open a socket or run a server to handle incoming requests. The `debug-server` sample shows a basic implementation of an HTTP server running alongside computer vision application code.

**Important**  
The sample implementation is not secure for production use. To avoid making your appliance vulnerable to attacks, you must implement appropriate security controls in your code and network configuration.

**Example [packages/123456789012-DEBUG\_SERVER-1.0/application.py](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/packages/123456789012-DEBUG_SERVER-1.0/application.py) – HTTP server**  

```
    # HTTP debug server
    def run_debugger(self):
        """Process debug commands from local network."""
        class ServerHandler(SimpleHTTPRequestHandler):
            # Store reference to application
            application = self
            # Get status
            def do_GET(self):
                """Process GET requests."""
                logger.info('Get request to {}'.format(self.path))
                if self.path == '/status':
                    self.send_200('OK')
                else:
                    self.send_error(400)
            # Restart application
            def do_POST(self):
                """Process POST requests."""
                logger.info('Post request to {}'.format(self.path))
                if self.path == '/restart':
                    self.send_200('OK')
                    ServerHandler.application.stop()
                else:
                    self.send_error(400)
            # Send response
            def send_200(self, msg):
                """Send 200 (success) response with message."""
                self.send_response(200)
                self.send_header('Content-Type', 'text/plain')
                self.end_headers()
                self.wfile.write(msg.encode('utf-8'))
        try:
            # Run HTTP server
            self.server = HTTPServer(("", self.CONTAINER_PORT), ServerHandler)
            self.server.serve_forever(1)
            # Server shut down by run_cv loop
            logger.info("EXITING SERVER THREAD")
        except:
            logger.exception('Exception on server thread.')
```

The server responds to GET requests at the `/status` path to confirm that the application is running. It also accepts POST requests at the `/restart` path to restart the application.

To demonstrate this functionality, the sample application runs an HTTP client on a separate thread. The client calls the `/status` path over the local network shortly after startup, and restarts the application a few minutes later.

**Example [packages/123456789012-DEBUG\_SERVER-1.0/application.py](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/packages/123456789012-DEBUG_SERVER-1.0/application.py) – HTTP client**  

```
    # HTTP test client
    def run_client(self):
        """Send HTTP requests to device port to demonstrate debug server functions."""
        def client_get():
            """Get container status"""
            r = requests.get('http://{}:{}/status'.format(self.device_ip, self.DEVICE_PORT))
            logger.info('Response: {}'.format(r.text))
            return
        def client_post():
            """Restart application"""
            r = requests.post('http://{}:{}/restart'.format(self.device_ip, self.DEVICE_PORT))
            logger.info('Response: {}'.format(r.text))
            return
        # Call debug server
        while not self.terminate:
            try:
                time.sleep(30)
                client_get()
                time.sleep(300)
                client_post()
            except:
                logger.exception('Exception on client thread.')
        # stop signal received
        logger.info("EXITING CLIENT THREAD")
```
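The sample client uses the third-party `requests` library. If you prefer to avoid that dependency, the standard library's `urllib.request` can issue the same calls. A minimal sketch, with the host and port passed in as parameters:

```python
import urllib.request

def get_status(host, port):
    """GET the /status path and return the response body as text."""
    with urllib.request.urlopen('http://{}:{}/status'.format(host, port)) as r:
        return r.read().decode('utf-8')

def post_restart(host, port):
    """POST to the /restart path; an empty body is sufficient."""
    req = urllib.request.Request(
        'http://{}:{}/restart'.format(host, port), data=b'', method='POST')
    with urllib.request.urlopen(req) as r:
        return r.read().decode('utf-8')
```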

The main loop manages the threads and restarts the application when they exit.

**Example [packages/123456789012-DEBUG\_SERVER-1.0/application.py](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/packages/123456789012-DEBUG_SERVER-1.0/application.py) – Main loop**  

```
def main():
    panorama = panoramasdk.node()
    while True:
        try:
            # Instantiate application
            logger.info('INITIALIZING APPLICATION')
            app = Application(panorama)
            # Create threads for stream processing, debugger, and client
            app.run_thread = threading.Thread(target=app.run_cv)
            app.server_thread = threading.Thread(target=app.run_debugger)
            app.client_thread = threading.Thread(target=app.run_client)
            # Start threads
            logger.info('RUNNING APPLICATION')
            app.run_thread.start()
            logger.info('RUNNING SERVER')
            app.server_thread.start()
            logger.info('RUNNING CLIENT')
            app.client_thread.start()
            # Wait for threads to exit
            app.run_thread.join()
            app.server_thread.join()
            app.client_thread.join()
            logger.info('RESTARTING APPLICATION')
        except:
            logger.exception('Exception during processing loop.')
```
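The threads exit together because the application's `stop` method sets a terminate flag and shuts down the HTTP server. A minimal sketch of that cooperative-shutdown pattern using `threading.Event` (the class and method names here are illustrative, not the sample's actual implementation):

```python
import threading

class ShutdownExample:
    """Illustrates the cooperative-shutdown pattern the sample relies on."""
    def __init__(self):
        self._stop_event = threading.Event()

    @property
    def terminate(self):
        """Worker loops poll this flag to know when to exit."""
        return self._stop_event.is_set()

    def run_cv(self):
        # Stand-in for the frame-processing loop; exits when stop() is called.
        while not self.terminate:
            self._stop_event.wait(0.1)

    def stop(self):
        # In the sample, this would also call self.server.shutdown().
        self._stop_event.set()

app = ShutdownExample()
worker = threading.Thread(target=app.run_cv)
worker.start()
app.stop()
worker.join(timeout=5)
```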

To deploy the sample application, see the [instructions in this guide's GitHub repository.](https://github.com/awsdocs/aws-panorama-developer-guide/blob/main/sample-apps/debug-server/README.md)

# Using the GPU
<a name="applications-gpuaccess"></a>

You can access the graphics processing unit (GPU) on the AWS Panorama Appliance to use GPU-accelerated libraries or to run machine learning models in your application code. To turn on GPU access, add it as a requirement to the package configuration after you build your application code container.

**Important**  
If you enable GPU access, you can't run model nodes in any application on the appliance. For security purposes, GPU access is restricted when the appliance runs a model compiled with SageMaker AI Neo. With GPU access, you must run your models in application code nodes, and all applications on the device share access to the GPU.

To turn on GPU access for your application, update the [package configuration](applications-packages.md) after you build the package with the AWS Panorama Application CLI. The following example shows the `requirements` block that adds GPU access to the application code node.

**Example package.json with requirements block**  

```
{
    "nodePackage": {
        "envelopeVersion": "2021-01-01",
        "name": "SAMPLE_CODE",
        "version": "1.0",
        "description": "Computer vision application code.",
        "assets": [
            {
                "name": "code_asset",
                "implementations": [
                    {
                        "type": "container",
                        "assetUri": "eba3xmpl71aa387e8f89be9a8c396416cdb80a717bb32103c957a8bf41440b12.tar.gz",
                        "descriptorUri": "4abdxmpl5a6f047d2b3047adde44704759d13f0126c00ed9b4309726f6bb43400ba9.json",
                        "requirements": [
                            {
                                "type": "hardware_access",
                                "inferenceAccelerators": [
                                    {
                                        "deviceType": "nvhost_gpu",
                                        "sharedResourcePolicy": {
                                            "policy" : "allow_all"
                                        }
                                    }
                                ]
                            }
                        ]
                    }
                ]
            }
        ],
        "interfaces": [
        ...
```

Update the package configuration between the build and packaging steps in your development workflow.

**To deploy an application with GPU access**

1. To build the application container, use the `build-container` command.

   ```
   $ panorama-cli build-container --container-asset-name code_asset --package-path packages/123456789012-SAMPLE_CODE-1.0
   ```

1. Add the `requirements` block to the package configuration.

1. To upload the container asset and package configuration, use the `package-application` command.

   ```
   $ panorama-cli package-application
   ```

1. Deploy the application.
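Because the `build-container` command regenerates the package configuration, the `requirements` block must be re-added on each rebuild, which makes that step worth scripting. The following sketch patches a loaded `package.json` document with Python's standard `json` module; the document structure mirrors the example above, but treat the helper name and the commented file paths as illustrative.

```python
import json

# The requirements entry from the example package.json above.
GPU_REQUIREMENT = {
    "type": "hardware_access",
    "inferenceAccelerators": [
        {
            "deviceType": "nvhost_gpu",
            "sharedResourcePolicy": {"policy": "allow_all"},
        }
    ],
}

def add_gpu_requirement(package_config):
    """Add the GPU requirements block to every container implementation."""
    for asset in package_config["nodePackage"]["assets"]:
        for impl in asset["implementations"]:
            if impl["type"] == "container":
                impl["requirements"] = [GPU_REQUIREMENT]
    return package_config

# Typical use (paths are placeholders for your package directory):
# with open('packages/.../package.json') as f:
#     config = json.load(f)
# add_gpu_requirement(config)
# with open('packages/.../package.json', 'w') as f:
#     json.dump(config, f, indent=4)
```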

For sample applications that use GPU access, visit the [aws-panorama-samples](https://github.com/aws-samples/aws-panorama-samples) GitHub repository.

# Setting up a development environment in Windows
<a name="applications-devenvwindows"></a>

To build an AWS Panorama application, you use Docker, command-line tools, and Python. In Windows, you can set up a development environment by using Docker Desktop with Windows Subsystem for Linux and Ubuntu. This tutorial walks you through the setup process for a development environment that has been tested with AWS Panorama tools and sample applications.

**Topics**
+ [Prerequisites](#applications-devenvwindows-prerequisites)
+ [Install WSL 2 and Ubuntu](#applications-devenvwindows-wsl2)
+ [Install Docker](#applications-devenvwindows-docker)
+ [Configure Ubuntu](#applications-devenvwindows-ubuntu)
+ [Next steps](#applications-devenvwindows-nextsteps)

## Prerequisites
<a name="applications-devenvwindows-prerequisites"></a>

To follow this tutorial, you need a version of Windows that supports Windows Subsystem for Linux 2 (WSL 2).

+ Windows 10 version 1903 and higher (Build 18362 and higher) or Windows 11
+ Windows features
  + Windows Subsystem for Linux
  + Hyper-V
  + Virtual machine platform

This tutorial was developed with the following software versions.

+ Ubuntu 20.04
+ Python 3.8.5
+ Docker 20.10.8

## Install WSL 2 and Ubuntu
<a name="applications-devenvwindows-wsl2"></a>

If you have Windows 10 version 2004 or higher (Build 19041 or higher), you can install WSL 2 and Ubuntu 20.04 with the following PowerShell command.

```
> wsl --install -d Ubuntu-20.04
```

For older Windows versions, follow the instructions in the WSL 2 documentation: [Manual installation steps for older versions](https://docs.microsoft.com/en-us/windows/wsl/install-manual)

## Install Docker
<a name="applications-devenvwindows-docker"></a>

To install Docker Desktop, download and run the installer package from [hub.docker.com](https://hub.docker.com/editions/community/docker-ce-desktop-windows/). If you encounter issues, follow the instructions on the Docker website: [Docker Desktop WSL 2 backend](https://docs.docker.com/desktop/windows/wsl/).

Run Docker Desktop and follow the first-run tutorial to build an example container.

**Note**  
Docker Desktop enables Docker only in the default distribution. If you had other Linux distributions installed before starting this tutorial, enable Docker in the newly installed Ubuntu distribution in the Docker Desktop settings menu, under **Resources**, **WSL integration**.

## Configure Ubuntu
<a name="applications-devenvwindows-ubuntu"></a>

You can now run Docker commands in your Ubuntu virtual machine. To open a command-line terminal, run the distribution from the start menu. The first time you run it, you configure a username and password that you can use to run administrator commands.

To complete configuration of your development environment, update the virtual machine's software and install tools.

**To configure the virtual machine**

1. Update the software that comes with Ubuntu.

   ```
   $ sudo apt update && sudo apt upgrade -y && sudo apt autoremove
   ```

1. Install development tools with apt.

   ```
   $ sudo apt install unzip python3-pip
   ```

1. Install Python libraries with pip.

   ```
   $ pip3 install awscli panoramacli
   ```

1. Open a new terminal, and then run `aws configure` to configure the AWS CLI.

   ```
   $ aws configure
   ```

   If you don't have access keys, you can generate them in the [IAM console](https://console.aws.amazon.com/iamv2/home?#/users).

Finally, download and import the sample application.

**To get the sample application**

1. Download and extract the sample application.

   ```
   $ wget https://github.com/awsdocs/aws-panorama-developer-guide/releases/download/v1.0-ga/aws-panorama-sample.zip
   $ unzip aws-panorama-sample.zip
   $ cd aws-panorama-sample
   ```

1. Run the included scripts to test compilation, build the application container, and upload packages to AWS Panorama.

   ```
   aws-panorama-sample$ ./0-test-compile.sh
   aws-panorama-sample$ ./1-create-role.sh
   aws-panorama-sample$ ./2-import-app.sh
   aws-panorama-sample$ ./3-build-container.sh
   aws-panorama-sample$ ./4-package-app.sh
   ```

The AWS Panorama Application CLI uploads packages and registers them with the AWS Panorama service. You can now [deploy the sample app](gettingstarted-deploy.md#gettingstarted-deploy-deploy) with the AWS Panorama console.

## Next steps
<a name="applications-devenvwindows-nextsteps"></a>

To explore and edit the project files, you can use File Explorer or an integrated development environment (IDE) that supports WSL.

To access the virtual machine's file system, open File Explorer and enter `\\wsl$` in the navigation bar. This directory contains a link to the virtual machine's file system (`Ubuntu-20.04`) and the file systems for Docker's data. Under `Ubuntu-20.04`, your user directory is at `home\username`.

**Note**  
To access files in your Windows installation from within Ubuntu, navigate to the `/mnt/c` directory. For example, you can list files in your downloads directory by running `ls /mnt/c/Users/windows-username/Downloads`.

With Visual Studio Code, you can edit application code in your development environment and run commands with an integrated terminal. To install Visual Studio Code, visit [code.visualstudio.com](https://code.visualstudio.com/). After installation, add the [Remote WSL](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl) extension.

Windows Terminal is an alternative to the standard Ubuntu terminal that you’ve been running commands in. It supports multiple tabs and can run PowerShell, Command Prompt, and terminals for any other variety of Linux that you install. It supports copy and paste with Ctrl+C and Ctrl+V, clickable URLs, and other useful improvements. To install Windows Terminal, visit [microsoft.com](https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701).