

# Setting up Amazon Neptune
<a name="neptune-setup"></a>

Amazon Neptune is a fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. To help you get started with Neptune, this section of the documentation covers a wide range of topics, from choosing the right instance types and storage configurations to securely connecting to your Neptune cluster, loading data, and monitoring performance. The key areas covered in this section are:
+ Instance Types: The documentation outlines the various instance types available for Neptune, along with their specifications and considerations to help you select the most appropriate option for your workload.
+ Storage Types: It also covers the different storage types supported by Neptune and guidance on how to choose the right storage configuration.
+ Cluster Creation: The documentation provides step-by-step instructions for creating a new Neptune cluster, including how to configure the required VPC settings.
+ Connectivity and Security: It covers how to securely connect to your Neptune cluster, set up VPC access, and configure the necessary security measures to protect your data.
+ Data Loading: The documentation offers guidance on loading data into your Neptune cluster, including options for bulk loading and streaming data using various tools and APIs.
+ Monitoring and Troubleshooting: It also includes information on monitoring your Neptune environment, accessing logs, and troubleshooting common issues that may arise during operation.

**Note**  
For AWS graph database reference architectures and reference deployment architectures, see [Amazon Neptune Resources](https://aws.amazon.com/neptune/developer-resources/). These resources can help inform your choices about graph data models and query languages, and accelerate your development process.

# Choosing instance types for Amazon Neptune
<a name="instance-types"></a>

Amazon Neptune offers a number of different instance sizes and families that provide different capabilities suited to different graph workloads. This section is meant to help you choose the best instance type for your needs.

For the pricing of each instance type in these families, see the [Neptune pricing page](https://aws.amazon.com/neptune/pricing/).

## Overview of instance resource allocation
<a name="instance-resources"></a>

Each Amazon EC2 instance type and size used in Neptune offers a defined amount of compute (vCPUs) and system memory. The primary storage for Neptune is external to the DB instances in a cluster, which lets compute and storage capacity scale independently of each other.

This section focuses on how the compute resources can be scaled, and on the differences between each of the various instance families.

In all instance families, vCPU resources are allocated to support two (2) query execution threads per vCPU, so the number of available query execution threads scales with instance size. When determining the proper size of a given Neptune DB instance, you need to consider the possible concurrency of your application and the average latency of your queries. You can estimate the number of vCPUs needed as follows, where latency is measured as the average query latency in seconds and concurrency is measured as the target number of queries per second:

```
vCPUs = (latency x concurrency) / 2
```
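
For example, suppose your application targets 100 queries per second with an average query latency of 50 milliseconds. A quick calculation (the values here are illustrative):

```
# Illustrative values: 0.05 s average latency, 100 queries/sec target concurrency
awk 'BEGIN { printf "%.1f\n", (0.05 * 100) / 2 }'
# → 2.5, so round up to the next instance size with at least 4 vCPUs
```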

**Note**  
SPARQL queries, openCypher queries, and Gremlin read queries that use the DFE query engine can, under certain circumstances, use more than one execution thread per query. When initially sizing your DB cluster, start with the assumption that each query will consume a single execution thread, and scale up if you observe back pressure in the query queue. You can observe this using the `/gremlin/status`, `/oc/status`, or `/sparql/status` APIs, or using the `MainRequestsPendingRequestsQueue` CloudWatch metric.
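
As a sketch, you could poll one of the status endpoints with `curl` (the hostname below is a placeholder for your own cluster endpoint; the response includes counts of accepted and running queries):

```
# Check queued and running Gremlin queries on a Neptune cluster.
# Replace the hostname with your cluster endpoint.
curl -s https://your-neptune-endpoint:8182/gremlin/status
```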

System memory on each instance is divided into two primary allocations: buffer pool cache and query execution thread memory.

Approximately two-thirds of the available memory in an instance is allocated to the buffer-pool cache. The buffer-pool cache holds the most recently used components of the graph for faster access on queries that repeatedly access those components. Instances with a larger amount of system memory have larger buffer-pool caches that can store more of the graph locally. You can tune the amount of buffer-pool cache you need by monitoring the buffer cache hit and miss metrics available in CloudWatch.

You may want to increase the size of your instance if the cache hit rate drops below 99.9% for a consistent period of time. This suggests that the buffer pool is not big enough, and the engine is having to fetch data from the underlying storage volume more often than is efficient.
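
One way to watch this is with the Neptune `BufferCacheHitRatio` CloudWatch metric. For example, using the AWS CLI (the instance identifier is a placeholder, and the `date -d` syntax assumes GNU date):

```
# Average buffer-pool cache hit ratio over the last hour, in 5-minute periods
aws cloudwatch get-metric-statistics \
  --namespace AWS/Neptune \
  --metric-name BufferCacheHitRatio \
  --dimensions Name=DBInstanceIdentifier,Value=my-neptune-instance \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average
```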

The remaining third of system memory is distributed evenly across query execution threads, with some memory remaining for the operating system and a small dynamic pool for threads to use as needed. The memory available for each thread increases slightly from one instance size to the next up to an `8xl` instance type, at which size the memory allocated per thread reaches a maximum.

The time to add more thread memory is when you encounter an `OutOfMemoryException` (OOM). OOM exceptions occur when one thread needs more than the maximum memory allocated to it (this is not the same as the entire instance running out of memory).

## `t3` and `t4g` instance types
<a name="instance-type-t3-t4g"></a>

The `t3` and `t4g` family of instances offers a low-cost option for getting started using a graph database and also for initial development and testing. These instances are eligible for the Neptune [free-tier offer](https://aws.amazon.com/neptune/free-trial/), which lets new customers use Neptune at no cost for the first 750 instance hours used within a standalone AWS account or rolled up underneath an AWS Organization with Consolidated Billing (Payer Account).

The `t3` and `t4g` instances are only offered in the medium size configuration (`t3.medium` and `t4g.medium`).

They are not intended for use in a production environment.

Because these instances have very constrained resources, they are not recommended for testing query execution time or overall database performance. To assess query performance, upgrade to one of the other instance families.

## `r4` family of instance types
<a name="instance-type-r4"></a>

*DEPRECATED*   –   The `r4` family was offered when Neptune was launched in 2018, but now newer instance types offer much better price/performance. As of engine version [1.1.0.0](engine-releases-1.1.0.0.md), Neptune no longer supports `r4` instance types.

## `r5` family of instance types
<a name="instance-type-r5"></a>

The `r5` family contains memory-optimized instance types that work well for most graph use cases. The `r5` family contains instance types from `r5.large` up to `r5.24xlarge`. They scale linearly in compute performance as you increase in size. For example, an `r5.xlarge` (4 vCPUs and 32GiB of memory) has twice the vCPUs and memory of an `r5.large` (2 vCPUs and 16GiB of memory), and an `r5.2xlarge` (8 vCPUs and 64GiB of memory) has twice the vCPUs and memory of an `r5.xlarge`. You can expect query performance to scale directly with compute capacity up to the `r5.12xlarge` instance type.

The `r5` instance family has a 2-socket Intel CPU architecture. The `r5.12xlarge` and smaller types use a single socket and the system memory owned by that single-socket processor. The `r5.16xlarge` and `r5.24xlarge` types use both sockets and available memory. Because there's some memory-management overhead required between two physical processors in a 2-socket architecture, the performance gains scaling up from a `r5.12xlarge` to a `r5.16xlarge` or `r5.24xlarge` instance type are not as linear as you get scaling up at the smaller sizes.

## `r5d` family of instance types
<a name="instance-type-r5d"></a>

Neptune has a [lookup-cache feature](feature-overview-lookup-cache.md) that can be used to improve the performance of queries which need to fetch and return large numbers of property values and literals. This feature is used primarily by customers with queries that need to return many attributes. The lookup cache boosts performance of these queries by fetching these attribute values locally rather than looking up each one over and over in Neptune indexed storage.

The lookup cache is implemented using a NVMe-attached EBS volume on an `r5d` instance type. It is enabled using a cluster's parameter group. As data is fetched from Neptune indexed storage, property values and RDF literals are cached within this NVMe volume.
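
As a hedged sketch, enabling the feature through a cluster parameter group might look like the following. The parameter name `neptune_lookup_cache` and the value `1` are assumptions here; verify them against the lookup-cache documentation for your engine version:

```
# Hypothetical sketch: turn on the lookup cache in a cluster parameter group.
# The parameter name and value are assumptions -- check the lookup-cache docs.
aws neptune modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-neptune-cluster-params \
  --parameters "ParameterName=neptune_lookup_cache,ParameterValue=1,ApplyMethod=pending-reboot"
```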

If you don't need the lookup cache feature, use a standard `r5` instance type rather than an `r5d`, to avoid the higher cost of the `r5d`.

The `r5d` family has instance types in the same sizes as the `r5` family, from `r5d.large` to `r5d.24xlarge`.

## `r6g` family of instance types
<a name="instance-type-r6g"></a>

AWS has developed its own ARM-based processor, [Graviton](https://aws.amazon.com/ec2/graviton/), which delivers better price/performance than the Intel and AMD equivalents. The `r6g` family uses the Graviton2 processor. In our testing, the Graviton2 processor offers 10-20% better performance for OLTP-style (constrained) graph queries. Larger, OLAP-style queries, however, may be slightly slower on Graviton2 processors than on Intel ones, owing to somewhat lower memory-paging performance.

It's also important to note that the `r6g` family has a single-socket architecture, which means that performance scales linearly with compute capacity from an `r6g.large` to an `r6g.16xlarge` (the largest type in the family).

## `r6i` family of instance types
<a name="instance-type-r6i"></a>

[Amazon R6i instances](https://aws.amazon.com/ec2/instance-types/r6i/) are powered by 3rd-generation Intel Xeon Scalable processors (code named Ice Lake) and are an ideal fit for memory-intensive workloads. As a general rule, they offer up to 15% better compute price/performance and up to 20% higher memory bandwidth per vCPU than comparable R5 instance types.

## `x2g` family of instance types
<a name="instance-type-x2g"></a>

Some graph use cases see better performance when instances have larger buffer-pool caches. The `x2g` family was launched to better support those use cases. The `x2g` family has a larger memory-to-vCPU ratio than the `r5` or `r6g` family. The `x2g` instances also use the Graviton2 processor, and have many of the same performance characteristics as `r6g` instance types, as well as a larger buffer-pool cache.

If you're using `r5` or `r6g` instance types with low CPU utilization and a high buffer-pool cache miss rate, try using the `x2g` family instead. That way, you'll be getting the additional memory you need without paying for more CPU capacity.

## `x2iezn` family of instance types
<a name="instance-type-x2iezn"></a>

The `x2iezn` family provides memory-optimized instances powered by Intel Xeon Scalable processors with high-frequency performance. These instances offer a high memory-to-vCPU ratio (32 GiB per vCPU), making them well-suited for memory-intensive graph workloads that benefit from high single-threaded performance.

Key features include up to 4.5 GHz all-core turbo frequency and availability in sizes from 2xlarge to 12xlarge.

## `x2iedn` family of instance types
<a name="instance-type-x2iedn"></a>

The `x2iedn` family provides memory-optimized instances with local NVMe SSD storage. These instances combine high memory capacity (32 GiB per vCPU) with fast local storage, making them ideal for graph workloads that benefit from both large in-memory caches and high-performance local disk caching.

Powered by 3rd generation Intel Xeon Scalable processors, these instances are available in sizes from xlarge to 32xlarge and are optimized for large-scale graph databases requiring both memory and storage performance.

## `r8g` family of instance types
<a name="instance-type-r8g"></a>

The `r8g` family contains memory-optimized instance types powered by AWS Graviton4 processors. These instances offer significant performance improvements over previous generations, making them well-suited for memory-intensive graph workloads. The r8g instances provide approximately 15-20% better performance for graph queries compared to r7g instances.

The `r8g` family uses a dual-socket platform. Instance types from `r8g.large` to `r8g.24xlarge` run on a single socket, which means that performance scales linearly with compute capacity across that range. The `r8g.48xlarge` uses both sockets and is the largest instance type in the family; as with other dual-socket families, performance gains when scaling from `r8g.24xlarge` to `r8g.48xlarge` may not be perfectly linear due to cross-socket memory management overhead.

Key features of the `r8g` family include:
+ Powered by AWS Graviton4 ARM-based processors
+ Higher memory bandwidth per vCPU compared to previous generations
+ Excellent price/performance ratio for both OLTP-style (constrained) graph queries and OLAP-style analytical workloads
+ Improved memory management capabilities that benefit complex graph traversals

The `r8g` family is ideal for production workloads that require high memory capacity and consistent performance. They're particularly effective for applications with high query concurrency requirements.

## `r7g` family of instance types
<a name="instance-type-r7g"></a>

The `r7g` family uses the AWS Graviton3 processor, which delivers better price/performance than previous Graviton2-based instances. In testing, the Graviton3 processor offers 25-30% better performance for OLTP-style graph queries compared to r6g instances.

Like the `r6g` family, the `r7g` family has a single-socket architecture, which means that performance scales linearly with compute capacity from an `r7g.large` to an `r7g.16xlarge` (the largest type in the family).

Key features of the `r7g` family include:
+ Powered by AWS Graviton3 ARM-based processors
+ Improved memory-paging performance compared to r6g, benefiting both OLTP and OLAP workloads
+ Enhanced buffer-pool cache efficiency
+ Lower latency for memory-intensive operations

The `r7g` family is well-suited for production environments with varied query patterns and is particularly effective for workloads that benefit from improved memory bandwidth.

## `r7i` family of instance types
<a name="instance-type-r7i"></a>

The `r7i` family is powered by 4th-generation Intel Xeon Scalable processors (code named Sapphire Rapids) and offers significant improvements over r6i instances. These instances provide approximately 15% better compute price/performance and up to 20% higher memory bandwidth per vCPU than comparable r6i instance types.

The `r7i` instance family has a 2-socket Intel CPU architecture, similar to the `r5` family. The `r7i.12xlarge` and smaller types use a single socket and the system memory owned by that single-socket processor. The `r7i.16xlarge` and `r7i.24xlarge` types use both sockets and available memory. Because there's some memory-management overhead required between two physical processors in a 2-socket architecture, the performance gains scaling up from a `r7i.12xlarge` to a `r7i.16xlarge` or `r7i.24xlarge` instance type are not as linear as you get scaling up at the smaller sizes.

Key features of the `r7i` family include:
+ Powered by 4th-generation Intel Xeon Scalable processors
+ Performance scales linearly with compute capacity up to r7i.12xlarge
+ Enhanced memory management between physical processors in the 2-socket architecture
+ Improved performance for memory-intensive graph operations

For all of these instance families, you can estimate the number of vCPUs needed using the same formula mentioned previously:

```
vCPUs = (latency x concurrency) / 2
```

Where latency is measured as the average query latency in seconds and concurrency is measured as the target number of queries per second.

## `serverless` instance type
<a name="instance-type-serverless"></a>

The [Neptune Serverless](neptune-serverless.md) feature can scale instance size dynamically based on a workload's resource needs. Instead of calculating how many vCPUs are needed for your application, Neptune Serverless lets you [set lower and upper limits on compute capacity](neptune-serverless-capacity-scaling.md) (measured in Neptune Capacity Units) for the instances in your DB cluster. Workloads with varying utilization can be cost-optimized by using serverless rather than provisioned instances.

You can set up both provisioned and serverless instances in the same DB cluster to achieve an optimal cost-performance configuration.
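
For example, you might create a serverless cluster and instance with the AWS CLI like this (the identifiers and capacity limits are illustrative):

```
# Create a cluster with serverless capacity limits of 2.5-32 NCUs
aws neptune create-db-cluster \
  --db-cluster-identifier my-serverless-cluster \
  --engine neptune \
  --serverless-v2-scaling-configuration MinCapacity=2.5,MaxCapacity=32

# Add a serverless instance by using the db.serverless instance class
aws neptune create-db-instance \
  --db-cluster-identifier my-serverless-cluster \
  --db-instance-identifier my-serverless-instance \
  --engine neptune \
  --db-instance-class db.serverless
```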

# Choosing storage types for Amazon Neptune
<a name="storage-types"></a>

Neptune offers two types of storage with a different pricing model:
+ **Standard storage**   –   Standard storage provides cost-effective database storage for applications with moderate to low I/O usage.
+ **I/O–Optimized storage**   –   With I/O–Optimized storage, available from engine version 1.3.0.0, you pay only for the storage and instances you are using. The storage costs are higher than for standard storage, and instance costs are higher than for standard instances, but you pay nothing for the I/O that you use. If your I/O usage is high, I/O–Optimized storage can reduce costs significantly.

  I/O–Optimized storage is designed to meet the needs of I/O–intensive graph workloads at a predictable cost. You can only switch between I/O–Optimized and Standard storage types once per 30 days.

  For pricing information about I/O–Optimized storage, see the [Neptune pricing page](https://aws.amazon.com/neptune/pricing/). The following section describes how to set up I/O–Optimized storage for a Neptune DB cluster.

## Choosing I/O–Optimized storage for a Neptune DB cluster
<a name="provisioned-iops-storage"></a>

By default, Neptune DB clusters use standard storage. You can enable I/O–Optimized storage on a DB cluster at the time you create it.

Here is an example of how to enable I/O–Optimized storage when you create a cluster using the AWS CLI:

```
aws neptune create-db-cluster \
  --db-cluster-identifier (an ID for the cluster) \
  --engine neptune \
  --engine-version (the Neptune engine version) \
  --storage-type iopt1
```

Then, any instance you create automatically has I/O–Optimized storage enabled:

```
aws neptune create-db-instance \
  --db-cluster-identifier (the ID of the new cluster) \
  --db-instance-identifier (an ID for the new instance) \
  --engine neptune \
  --db-instance-class db.r5.large
```

You can also modify an existing DB cluster to enable I/O–Optimized storage on it, like this:

```
aws neptune modify-db-cluster \
  --db-cluster-identifier (the ID of a cluster without I/O–Optimized storage) \
  --storage-type iopt1 \
  --apply-immediately
```

You can restore a backup snapshot to a DB cluster with I/O–Optimized storage enabled:

```
aws neptune restore-db-cluster-from-snapshot \
  --db-cluster-identifier (an ID for the restored cluster) \
  --snapshot-identifier (the ID of the snapshot to restore from) \
  --engine neptune \
  --engine-version (the Neptune engine version) \
  --storage-type iopt1
```

You can determine whether a cluster is using I/O–Optimized storage using any `describe-` call. If I/O–Optimized storage is enabled, the call returns a storage-type field set to `iopt1`.
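
For example, with the AWS CLI (the cluster identifier is a placeholder):

```
# Returns "iopt1" when I/O-Optimized storage is enabled
aws neptune describe-db-clusters \
  --db-cluster-identifier my-neptune-cluster \
  --query 'DBClusters[0].StorageType' \
  --output text
```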

# Creating an Amazon Neptune cluster
<a name="get-started-create-cluster"></a>

The easiest way to create a new Amazon Neptune DB cluster is to use a CloudFormation template that creates all the required resources for you, without having to do everything by hand. The CloudFormation template performs much of the setup for you, including creating an Amazon Elastic Compute Cloud (Amazon EC2) instance:

**To launch a new Neptune DB cluster using a CloudFormation template**

1. Create a new IAM user with the permissions you will need for working with your Neptune DB cluster, as explained in [IAM user permissions](manage-console-iam-user.md).

1. Set up additional prerequisites needed to use the CloudFormation template, as explained in [Prerequisites for setting up Amazon Neptune using AWS CloudFormation](get-started-prereqs.md).

1. Invoke the CloudFormation stack, as described in [Creating an Amazon Neptune cluster using AWS CloudFormation](get-started-cfn-create.md).

You can also create a [Neptune global database](neptune-global-database.md) that spans multiple AWS Regions, enabling low-latency global reads and providing fast recovery in the rare case where an outage affects an entire AWS Region.

For information about creating an Amazon Neptune cluster manually using the AWS Management Console, see [Launching a Neptune DB cluster using the AWS Management Console](manage-console-launch-console.md).

You can also use a CloudFormation template to create a Lambda function to use with Neptune (see [Using CloudFormation to Create a Lambda Function to Use in Neptune](get-started-cfn-lambda.md)).

For general information about managing clusters and instances in Neptune, see [Managing Your Amazon Neptune Database](manage-console.md).

# Prerequisites for setting up Amazon Neptune using AWS CloudFormation
<a name="get-started-prereqs"></a>

Before you create an Amazon Neptune cluster using a CloudFormation template, you need to have the following:
+ An Amazon EC2 key pair.
+ The permissions required for using CloudFormation.

## Create an Amazon EC2 Key Pair to use for launching a Neptune cluster using CloudFormation
<a name="cfn-ec2-key-pair"></a>

In order to launch a Neptune DB cluster using a CloudFormation template, you must have an Amazon EC2 key pair (and its associated PEM file) available in the region where you create the CloudFormation stack.

If you need to create the key pair, see [Creating a Key Pair Using Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair) in the Amazon EC2 User Guide for Linux Instances, or [Creating a Key Pair Using Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair) in the Amazon EC2 User Guide for Windows Instances for instructions.
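
Alternatively, you can create a key pair from the AWS CLI, saving the private key to a local PEM file (the key name is illustrative):

```
# Create the key pair in the current region and save the private key
aws ec2 create-key-pair \
  --key-name my-neptune-cfn-key \
  --query 'KeyMaterial' \
  --output text > my-neptune-cfn-key.pem

# Restrict permissions so SSH will accept the key file
chmod 400 my-neptune-cfn-key.pem
```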

## Add IAM policies to grant permissions needed to use the CloudFormation template
<a name="cfn-iam-perms"></a>

First, you need to have an IAM user set up with permissions needed for working with Neptune, as described in [Creating an IAM user with permissions for Neptune](manage-console-iam-user.md).

Then you need to add the AWS managed policy, `AWSCloudFormationReadOnlyAccess`, to that user.

Finally, you need to create the following customer-managed policy and add it to that user:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/*",
            "Condition": {
                "StringEquals": {
                    "iam:passedToService": "rds.amazonaws.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws:iam::*:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS",
            "Condition": {
                "StringLike": {
                    "iam:AWSServiceName": "rds.amazonaws.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "sns:ListTopics",
                "sns:ListSubscriptions",
                "sns:Publish"
            ],
            "Resource": "arn:aws:sns:*:111122223333:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:ListRetirableGrants",
                "kms:ListKeys",
                "kms:ListAliases",
                "kms:ListKeyPolicies"
            ],
            "Resource": "arn:aws:kms:*:111122223333:key/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics"
            ],
            "Resource": "arn:aws:cloudwatch:*:111122223333:service/*-*",
            "Condition": {
                "StringLike": {
                    "cloudwatch:namespace": "AWS/Neptune"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeVpcs",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:vpc/*",
                "arn:aws:ec2:*:111122223333:subnet/*",
                "arn:aws:ec2:*:111122223333:security-group/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "rds:CreateDBCluster",
                "rds:CreateDBInstance",
                "rds:AddTagsToResource",
                "rds:ListTagsForResource",
                "rds:RemoveTagsFromResource",
                "rds:RemoveRoleFromDBCluster",
                "rds:ResetDBParameterGroup",
                "rds:CreateDBSubnetGroup",
                "rds:ModifyDBParameterGroup",
                "rds:DownloadDBLogFilePortion",
                "rds:CopyDBParameterGroup",
                "rds:AddRoleToDBCluster",
                "rds:ModifyDBInstance",
                "rds:ModifyDBClusterParameterGroup",
                "rds:ModifyDBClusterSnapshotAttribute",
                "rds:DeleteDBInstance",
                "rds:CopyDBClusterParameterGroup",
                "rds:CreateDBParameterGroup",
                "rds:DescribeDBSecurityGroups",
                "rds:DeleteDBSubnetGroup",
                "rds:DescribeValidDBInstanceModifications",
                "rds:ModifyDBCluster",
                "rds:CreateDBClusterSnapshot",
                "rds:DeleteDBParameterGroup",
                "rds:CreateDBClusterParameterGroup",
                "rds:RemoveTagsFromResource",
                "rds:PromoteReadReplicaDBCluster",
                "rds:RestoreDBClusterFromSnapshot",
                "rds:DescribeDBSubnetGroups",
                "rds:DescribePendingMaintenanceActions",
                "rds:DescribeDBParameterGroups",
                "rds:FailoverDBCluster",
                "rds:DescribeDBInstances",
                "rds:DescribeDBParameters",
                "rds:DeleteDBCluster",
                "rds:ResetDBClusterParameterGroup",
                "rds:RestoreDBClusterToPointInTime",
                "rds:DescribeDBClusterSnapshotAttributes",
                "rds:AddTagsToResource",
                "rds:DescribeDBClusterParameters",
                "rds:CopyDBClusterSnapshot",
                "rds:DescribeDBLogFiles",
                "rds:DeleteDBClusterSnapshot",
                "rds:ListTagsForResource",
                "rds:RebootDBInstance",
                "rds:DescribeDBClusterSnapshots",
                "rds:DeleteDBClusterParameterGroup",
                "rds:ApplyPendingMaintenanceAction",
                "rds:DescribeDBClusters",
                "rds:DescribeDBClusterParameterGroups",
                "rds:ModifyDBSubnetGroup"
            ],
            "Resource": [
                "arn:aws:rds:*:111122223333:cluster-snapshot:*",
                "arn:aws:rds:*:111122223333:cluster:*",
                "arn:aws:rds:*:111122223333:pg:*",
                "arn:aws:rds:*:111122223333:cluster-pg:*",
                "arn:aws:rds:*:111122223333:secgrp:*",
                "arn:aws:rds:*:111122223333:db:*",
                "arn:aws:rds:*:111122223333:subgrp:*"
            ],
            "Condition": {
                "StringEquals": {
                    "rds:DatabaseEngine": [
                        "graphdb",
                        "neptune"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:GetLogEvents",
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:*:111122223333:log-group:*:log-stream:*",
                "arn:aws:logs:*:111122223333:log-group:*"
            ]
        }
    ]
}
```

------

**Note**  
The following permissions are only required to delete a stack: `iam:DeleteRole`, `iam:RemoveRoleFromInstanceProfile`, `iam:DeleteRolePolicy`, `iam:DeleteInstanceProfile`, and `ec2:DeleteVpcEndpoints`.   
Also note that `ec2:*Vpc` grants `ec2:DeleteVpc` permissions.

# Creating an Amazon Neptune cluster using AWS CloudFormation
<a name="get-started-cfn-create"></a>

You can use a CloudFormation template to set up a Neptune DB cluster.

1. To launch the CloudFormation stack on the CloudFormation console, choose one of the **Launch Stack** buttons in the following table.     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/neptune/latest/userguide/get-started-cfn-create.html)

1.  On the **Select Template** page, choose **Next**.

1. On the **Specify Details** page, choose a key pair for the **EC2SSHKeyPairName**.

   This key pair is required to access the EC2 instance. Ensure that you have the PEM file for the key pair that you choose.

1. Choose **Next**.

1. On the **Options** page, choose **Next**.

1. On the **Review** page, select the first check box to acknowledge that CloudFormation will create IAM resources. Select the second check box to acknowledge `CAPABILITY_AUTO_EXPAND` for the new stack. 
**Note**  
`CAPABILITY_AUTO_EXPAND` explicitly acknowledges that macros will be expanded when creating the stack, without prior review. Users often create a change set from a processed template so that the changes made by macros can be reviewed before actually creating the stack. For more information, see the CloudFormation [CreateStack](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html) API.

   Then choose **Create**.
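
If you prefer the AWS CLI, the equivalent stack creation looks roughly like the following. The stack name, template URL, and parameter value are placeholders for the ones you would use with the template from the table above:

```
# The capability flags acknowledge IAM resource creation and macro expansion
aws cloudformation create-stack \
  --stack-name neptune-quickstart \
  --template-url https://example.com/neptune-stack.json \
  --parameters ParameterKey=EC2SSHKeyPairName,ParameterValue=my-key-pair \
  --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND
```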

**Note**  
You can also use your CloudFormation template to [upgrade your DB cluster's engine version](cfn-engine-update.md).

# Set up the Amazon VPC where your Amazon Neptune DB cluster is located
<a name="get-started-vpc"></a>

An Amazon Neptune DB cluster can *only* be created in an Amazon Virtual Private Cloud (Amazon VPC). Its endpoints are accessible within that VPC, and if [Neptune public endpoints](neptune-public-endpoints.md) are enabled, they can also be accessed outside of the VPC and over the internet.

There are a number of different ways to set up the VPC, depending on how you want to access your DB cluster.

Here are some things to keep in mind when configuring the VPC where your Neptune DB cluster is located:
+ Your VPC must have at least two [subnets](#security-vpc-add-subnets). These subnets must be in two different Availability Zones (AZs). By distributing your cluster instances across at least two AZs, Neptune helps ensure that there are always instances available in your DB cluster even in the unlikely event of an availability zone failure. The cluster volume for your Neptune DB cluster always spans three AZs to provide durable storage with extremely low likelihood of data loss.
+ The CIDR blocks in each subnet must be large enough to provide IP addresses that Neptune may need during maintenance activities, failover, and scaling.
+ The VPC must have a DB subnet group that contains subnets that you have created. Neptune chooses one of the subnets in the subnet group and an IP address within that subnet to associate with each DB instance in the DB cluster. The DB instance is then located in the same AZ as the subnet.
+ The VPC should have [DNS enabled](#get-started-vpc-dns) (both DNS hostnames and DNS resolution).
+ The VPC must have a [VPC security group](#security-vpc-security-group) that allows access to your DB cluster.
+ Tenancy in a Neptune VPC should be set to **Default**.

## Adding subnets to the VPC where your Neptune DB cluster is located
<a name="security-vpc-add-subnets"></a>

A subnet is a range of IP addresses in your VPC. You can launch resources such as a Neptune DB cluster or an EC2 instance into a specific subnet. When you create a subnet, you specify the IPv4 CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone (AZ) and cannot span zones. By launching instances in separate Availability Zones, you can protect your applications from a failure in one of the zones. See [VPC subnet documentation](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html) for more information.

A Neptune DB cluster requires at least two VPC subnets.

**To add subnets to a VPC**

1. Sign in to the AWS Management Console and open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Subnets**, and then choose **Create subnet**.

1. On the **Create subnet** page, choose the VPC where you want to create the subnet.

1. Under **Subnet settings**, make the following choices:

   1. Enter a name for the new subnet under **Subnet name**.

   1. Choose an Availability Zone (AZ) for the subnet, or leave the choice at **No preference**.

   1. Enter the subnet's IP address block under **IPv4 CIDR block**.

   1. Add tags to the subnet if you need to.


1. If you want to create another subnet at the same time, choose **Add new subnet**.

1. Choose **Create subnet** to create the new subnet(s).
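If you prefer the AWS CLI, the two subnets can also be created with `aws ec2 create-subnet`. This is a minimal sketch; the VPC ID, CIDR blocks, and Availability Zones shown are placeholders for your own values:

```
# Create two subnets in different Availability Zones (IDs and CIDRs are examples)
aws ec2 create-subnet \
  --vpc-id vpc-0abcd1234example \
  --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a

aws ec2 create-subnet \
  --vpc-id vpc-0abcd1234example \
  --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1b
```

Each command returns the new subnet's ID, which you will need when creating the DB subnet group.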

## Configuring VPC in Amazon Neptune
<a name="security-vpc-add-subnet-group"></a>

Next, create a DB subnet group that tells Neptune which subnets in your VPC it can use for DB instances.

**To create a Neptune subnet group**

1. Sign in to the AWS Management Console, and open the Amazon Neptune console at [https://console.aws.amazon.com/neptune/home](https://console.aws.amazon.com/neptune/home).

1. Choose **Subnet groups**, and then choose **Create DB Subnet Group**.

1. Enter a name and description for the new subnet group (the description is required).

1. Under **VPC**, choose the VPC where you want this subnet group to be located.

1. Under **Availability zone**, choose the AZ where you want this subnet group to be located.

1. Under **Subnet**, add one or more of the subnets in this AZ to this subnet group.

1. Choose **Create** to create the new subnet group.
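You can also create the subnet group with the AWS CLI using `aws neptune create-db-subnet-group`. The group name and subnet IDs below are placeholders:

```
# Group a pair of subnets (in different AZs) for use by Neptune DB instances
aws neptune create-db-subnet-group \
  --db-subnet-group-name my-neptune-subnet-group \
  --db-subnet-group-description "Subnets for my Neptune cluster" \
  --subnet-ids subnet-0aaaa1111example subnet-0bbbb2222example
```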

## Create a security group using the VPC console
<a name="security-vpc-security-group"></a>

Security groups provide access to your Neptune DB cluster in the VPC. They act as a firewall for the associated DB cluster, controlling both inbound and outbound traffic at the instance level. By default, a DB instance is created with a firewall and a default security group that prevents any access to it. To enable access, you must have a VPC security group with additional rules. 

The following procedure shows how to add a custom TCP rule specifying the port range and IP addresses that your Amazon EC2 instance can use to access your Neptune DB cluster. Instead of an IP address, you can use the VPC security group assigned to the EC2 instance as the source.

**To create a VPC security group for Neptune on the console**

1. Sign in to the AWS Management Console and open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the upper-right corner of the console, choose the AWS region where you want to create a VPC security group for Neptune. In the list of Amazon VPC resources for that region, it should show that you have at least one VPC and several subnets. If it does not, you don't have a default VPC in that Region.

1. In the navigation pane under **Security**, choose **Security Groups**.

1. Choose **Create security group**. In the **Create security group** window, enter the **Security group name**, a **Description**, and the identifier of the VPC where your Neptune DB cluster will reside.

1. Add an inbound rule for the security group of an Amazon EC2 instance that you want connected to your Neptune DB cluster:

   1. In the **Inbound rules** area, choose **Add rule**.

   1. In the **Type** list, leave **Custom TCP** selected.

   1. In the **Port range** box, enter **8182**, the default port value for Neptune.

   1. Under **Source**, enter the IP address range (CIDR value) from which you will access Neptune, or choose an existing security group name.

   1. If you need to add more IP addresses or different port ranges, choose **Add rule** again.

1. In the Outbound rules area, you can also add one or more outbound rules if you need to.

1. When you finish, choose **Create security group**.

You can use this new VPC security group when you create a new Neptune DB cluster.
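The same security group can be created from the AWS CLI. This sketch uses example names and placeholder IDs; the ingress rule opens the default Neptune port 8182 to a sample CIDR range:

```
# Create the security group (name, description, and VPC ID are examples)
aws ec2 create-security-group \
  --group-name neptune-db-sg \
  --description "Access to Neptune on port 8182" \
  --vpc-id vpc-0abcd1234example

# Allow inbound TCP on the Neptune port from a specific CIDR range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789example \
  --protocol tcp \
  --port 8182 \
  --cidr 203.0.113.0/24
```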

If you use a default VPC, a default subnet group spanning all of the VPC's subnets is already created for you. When you choose **Create database** in the Neptune console, the default VPC is used unless you specify a different one.

## Make sure that you have DNS support in your VPC
<a name="get-started-vpc-dns"></a>

Domain Name System (DNS) is a standard by which names used on the internet are resolved to their corresponding IP addresses. A DNS hostname uniquely names a computer and consists of a host name and a domain name. DNS servers resolve DNS hostnames to their corresponding IP addresses.

Check to make sure that DNS hostnames and DNS resolution are both enabled in your VPC. The VPC network attributes `enableDnsHostnames` and `enableDnsSupport` must be set to `true`. To view and modify these attributes, go to the VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

For more information, see [Using DNS with your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html).
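You can also check and set these attributes from the AWS CLI. This is a sketch with a placeholder VPC ID; note that `modify-vpc-attribute` accepts only one attribute per call:

```
# Check the current DNS settings for the VPC
aws ec2 describe-vpc-attribute --vpc-id vpc-0abcd1234example --attribute enableDnsHostnames
aws ec2 describe-vpc-attribute --vpc-id vpc-0abcd1234example --attribute enableDnsSupport

# Enable them if needed (one attribute per call)
aws ec2 modify-vpc-attribute --vpc-id vpc-0abcd1234example --enable-dns-hostnames "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0abcd1234example --enable-dns-support "{\"Value\":true}"
```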

**Note**  
If you are using Route 53, confirm that your configuration does not override DNS network attributes in your VPC.

# Connecting to an Amazon Neptune cluster
<a name="get-started-connecting"></a>

After creating a Neptune cluster, you can configure the connection methods to access it.

## Setting up `curl` or awscurl to communicate with your Neptune endpoint
<a name="get-started-access-graph-curl"></a>

Having a command-line tool for submitting queries to your Neptune DB cluster is very handy, as illustrated in many of the examples in this documentation. The [curl](https://curl.haxx.se/) command-line tool is an excellent option for communicating with Neptune endpoints when IAM authentication is not enabled. `curl` versions starting with 7.75.0 also support the `--aws-sigv4` option for signing requests when IAM authentication is enabled.

For endpoints where IAM authentication *is* enabled, you can also use [awscurl](https://github.com/okigan/awscurl), which uses almost exactly the same syntax as `curl` but supports signing requests as required for IAM authentication. Because of the added security that IAM authentication provides, it is generally a good idea to enable it.

 For information about how to use `curl` (or `awscurl`), see the [curl man page](https://curl.haxx.se/docs/manpage.html), and the book *[Everything curl](https://ec.haxx.se/)*.

To connect using HTTPS (which Neptune requires), `curl` needs access to appropriate certificates. As long as `curl` can locate the appropriate certificates, it handles HTTPS connections just like HTTP connections, without extra parameters. The same is true for `awscurl`. Examples in this documentation are based on that scenario.

To learn how to obtain such certificates and how to format them properly into a certificate authority (CA) certificate store that `curl` can use, see [SSL Certificate Verification](https://curl.haxx.se/docs/sslcerts.html) in the `curl` documentation.

You can then specify the location of this CA certificate store using the `CURL_CA_BUNDLE` environment variable. On Windows, `curl` automatically looks for it in a file named `curl-ca-bundle.crt`. It looks first in the same directory as `curl.exe` and then elsewhere on the path. For more information, see [SSL Certificate Verification](https://curl.haxx.se/docs/sslcerts.html).
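For example, a Gremlin query can be submitted to the HTTPS endpoint like this (the endpoint hostname is a placeholder for your own):

```
# Without IAM authentication, curl alone is enough:
curl -X POST https://your-neptune-endpoint:8182/gremlin \
  -d '{"gremlin": "g.V().limit(1)"}'

# With IAM authentication enabled, awscurl signs the request for you:
awscurl --service neptune-db --region us-east-1 -X POST \
  https://your-neptune-endpoint:8182/gremlin \
  -d '{"gremlin": "g.V().limit(1)"}'
```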

## Different ways to connect to a Neptune DB cluster
<a name="get-started-connect-ways"></a>

An Amazon Neptune DB cluster can *only* be created in an Amazon Virtual Private Cloud (Amazon VPC). Its endpoints are accessible only within that VPC unless you enable and set up [Neptune public endpoints](neptune-public-endpoints.md) for the DB cluster.

There are several different ways to set up access to your Neptune DB cluster in its VPC:
+ [Connecting from an Amazon EC2 instance in the same VPC](get-started-connect-ec2-same-vpc.md)
+ [Connecting from an Amazon EC2 instance in another VPC](get-started-connect-ec2-other-vpc.md)
+ [Connecting from a private network](get-started-connect-private-net.md)
+ [Connecting from a public endpoint](neptune-public-endpoints.md)

# Connecting an Amazon EC2 instance to Amazon Neptune cluster in the same VPC
<a name="get-started-connect-ec2-same-vpc"></a>

One of the most common ways to connect to a Neptune database is from an Amazon EC2 instance in the same VPC as your Neptune DB cluster. For example, the EC2 instance might be running a web server that connects with the internet. In this case, only the EC2 instance has access to the Neptune DB cluster, and the internet only has access to the EC2 instance:

![\[Diagram of accessing a Neptune cluster from an EC2 instance in the same VPC.\]](http://docs.aws.amazon.com/neptune/latest/userguide/images/VPC-connection-01.png)


To enable this configuration, you need to have the right VPC security groups and subnet groups set up. The web server is hosted in a public subnet, so that it can reach the public internet, and your Neptune cluster instance is hosted in a private subnet to keep it secure. See [Set up the Amazon VPC where your Amazon Neptune DB cluster is located](get-started-vpc.md).

For the Amazon EC2 instance to connect to your Neptune endpoint on port `8182`, for example, you need to set up a security group that allows the connection. If your Amazon EC2 instance uses a security group named `ec2-sg1`, for example, create another security group (say, `db-sg1`) that has an inbound rule for port `8182` with `ec2-sg1` as its source. Then attach `db-sg1` to your Neptune cluster to allow the connection.
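Sketched as an AWS CLI call, the `db-sg1` rule described above might look like the following (the group IDs are placeholders for the IDs of `db-sg1` and `ec2-sg1`):

```
# Allow inbound traffic on the Neptune port from the EC2 instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db1111111example \
  --protocol tcp \
  --port 8182 \
  --source-group sg-0ec2222222example
```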

After creating the Amazon EC2 instance, you can log into it using SSH and connect to your Neptune DB cluster. For information about connecting to an EC2 instance using SSH, see [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) in the *Amazon EC2 User Guide*.

If you are using a Linux or macOS command line to connect to the EC2 instance, you can paste the SSH command from the **SSHAccess** item in the **Outputs** section of the CloudFormation stack. You must have the PEM file in the current directory and the PEM file permissions must be set to 400 (`chmod 400 keypair.pem`).

**To create a VPC with both private and public subnets**

1. Sign in to the AWS Management Console and open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the top-right corner of the AWS Management Console, choose the Region to create your VPC in.

1. In the **VPC Dashboard**, choose **Launch VPC Wizard**.

1. Complete the **VPC Settings** area of the **Create VPC** page:

   1. Under **Resources to create**, choose **VPC, subnets, etc.**.

   1. Leave the default name tag as is, enter a name of your choosing, or clear the **Auto-generate** check box to disable name tag generation.

   1. Leave the IPv4 CIDR block value at `10.0.0.0/16`.

   1. Leave the IPv6 CIDR block value at **No IPv6 CIDR block**.

   1. Leave the **Tenancy** at **Default**.

   1. Leave the number of **Availability Zones (AZs)** at **2**.

   1. Leave **NAT gateways (\$1)** at **None**, unless you need one or more NAT gateways.

   1. Set **VPC endpoints** to **None**, unless you plan to use Amazon S3.

   1. Both **Enable DNS hostnames** and **Enable DNS resolution** should be checked.

1. Choose **Create VPC**.

# Connecting an Amazon EC2 instance to an Amazon Neptune cluster in a different VPC
<a name="get-started-connect-ec2-other-vpc"></a>

An Amazon Neptune DB cluster can *only* be created in an Amazon Virtual Private Cloud (Amazon VPC), and its endpoints are accessible within that VPC, usually from an Amazon Elastic Compute Cloud (Amazon EC2) instance running in that VPC. Alternatively, it can be accessed using a public endpoint. For more information on public endpoints, see [Neptune public endpoints](neptune-public-endpoints.md). 

When your DB cluster is in a different VPC from the EC2 instance you are using to access it, you can use [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) to make the connection:

![\[Diagram of accessing a Neptune cluster from a different VPC.\]](http://docs.aws.amazon.com/neptune/latest/userguide/images/VPC-connection-02.png)


A VPC peering connection is a networking connection between two VPCs that routes traffic between them privately, so that instances in either VPC can communicate as if they are within the same network. You can create a VPC peering connection between VPCs in your account, between a VPC in your AWS account and a VPC in another AWS account, or with a VPC in a different AWS Region.

AWS uses the existing infrastructure of a VPC to create a VPC peering connection. It is neither a gateway nor an AWS Site-to-Site VPN connection, and it does not rely on a separate piece of physical hardware. It has no single point of failure for communication and no bandwidth bottleneck.

See the [Amazon VPC Peering Guide](https://docs.aws.amazon.com/vpc/latest/peering/) for more information about how to use VPC peering.
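As a rough sketch, setting up peering with the AWS CLI involves creating and accepting the connection, then adding routes in both VPCs (all IDs and CIDR blocks here are placeholders):

```
# Request a peering connection between the two VPCs
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaaa1111example \
  --peer-vpc-id vpc-0bbbb2222example

# Accept it (run by the owner of the peer VPC)
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0ccc3333example

# Add a route in each VPC's route table pointing at the other VPC's CIDR
aws ec2 create-route \
  --route-table-id rtb-0ddd4444example \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0ccc3333example
```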

# Connecting to an Amazon Neptune cluster over a private network
<a name="get-started-connect-private-net"></a>

You can access a Neptune DB cluster from a private network in two different ways:
+ Using an [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) connection.
+ Using an [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/) connection.

The links above provide information about these connection methods and how to set them up. The configuration of an AWS Site-to-Site VPN connection might look like this:

![\[Diagram of accessing a Neptune cluster from a private network.\]](http://docs.aws.amazon.com/neptune/latest/userguide/images/VPC-connection-04.png)


# Neptune Public Endpoints
<a name="neptune-public-endpoints"></a>

## Overview
<a name="neptune-public-endpoints-overview"></a>

Amazon Neptune clusters are typically deployed within your VPC and can only be accessed from within that VPC. This requires configuring applications and development environments within the VPC or using proxy services to connect to the VPC, which increases setup time and costs.

Public endpoints simplify this experience by allowing direct connections to Neptune over the internet, making it easier to get started with graph databases without specialized networking knowledge.

## When to use public endpoints
<a name="neptune-public-endpoints-when-to-use"></a>

Consider using public endpoints in the following scenarios:
+ You want to quickly test Neptune in a development or test environment without complex network configuration
+ You don't have specialized AWS networking knowledge
+ Your application's security posture doesn't require a private VPC
+ You need to connect to Neptune from your local development environment

## Security considerations
<a name="neptune-public-endpoints-security"></a>

When using public endpoints, keep these security considerations in mind:
+ IAM authentication is required for clusters with public endpoints enabled.
+ Access to the database is controlled by the security group it uses.
+ You can restrict which IP addresses can connect to your cluster.
+ You can use IAM policies to control who can create or modify clusters with public access. See: [Restricting public access creation](#neptune-public-endpoints-restrict-access)

## Enabling public endpoints
<a name="neptune-public-endpoints-enabling"></a>

By default, new Neptune databases are created with public endpoints disabled. You must explicitly enable public access when creating or modifying a cluster.

Public endpoints are supported starting with Neptune engine release version 1.4.6.x. You must upgrade existing clusters to at least this version before you can use this feature.

The public endpoint setting applies to each Neptune instance, not to the Neptune cluster as a whole. As a result, a cluster can contain a mix of instances with and without public endpoints, although we don't recommend such a mixed configuration. For more information, see [How public endpoints work](#neptune-public-endpoints-how-they-work).

## Prerequisites
<a name="neptune-public-endpoints-prerequisites"></a>

### IAM authentication setting on the Neptune cluster
<a name="neptune-public-endpoints-iam-auth"></a>

Before enabling public endpoints on a Neptune instance, ensure your cluster supports IAM authentication. If not, enable it using the following command:

```
aws neptune modify-db-cluster \
  --region us-west-2 \
  --db-cluster-identifier neptune-public-endpoint \
  --enable-iam-database-authentication
```

### Network settings
<a name="neptune-public-endpoints-network-settings"></a>

1. Ensure your VPC has subnets that enable public routing (has an entry for an internet gateway in the route table of the subnets). If you don't provide a `db-subnet-group-name` parameter while creating the cluster, the default subnet group is picked for the cluster creation.

1. Make sure the security group attached to the cluster allows inbound traffic for the allowed IP ranges and the allowed ports. For example, if you want to allow TCP traffic from all IPs to connect to the Neptune instance running on port 8182, the inbound rule should have:

   1. Type: Custom TCP

   1. Protocol: TCP

   1. Port range: 8182

   1. CIDR block: 0.0.0.0/0

**Note**  
Although you can set the CIDR block range to 0.0.0.0/0, we recommend reducing this to a specific IP range of your client application for a better security posture.

## Creating a new instance with public endpoints
<a name="neptune-public-endpoints-creating-instance"></a>

You can create a new Neptune instance with public endpoints using the AWS Management Console, AWS CLI, or AWS SDK.

Using the AWS CLI:

```
# The instance identifier and instance class are example values
aws neptune create-db-instance \
  --region us-west-2 \
  --engine neptune \
  --db-instance-identifier neptune-public-instance \
  --db-instance-class db.r6g.large \
  --db-cluster-identifier neptune-public-endpoint \
  --publicly-accessible
```

## Modifying an existing instance for public access
<a name="neptune-public-endpoints-modifying-instance"></a>

To modify an existing Neptune instance to enable public access:

```
aws neptune modify-db-instance \
  --region us-west-2 \
  --db-instance-identifier neptune-public-endpoint \
  --publicly-accessible
```

**Note**  
Public access is enabled at the instance level, not the cluster level. To ensure your cluster is always accessible via public endpoints, all instances in the cluster must have public access enabled.

## Using the public endpoints
<a name="neptune-public-endpoints-using"></a>

To check whether your database is reachable, query its status using the AWS CLI `neptunedata` API:

```
aws neptunedata get-engine-status \
  --endpoint-url https://my-cluster-name.cluster-abcdefgh1234.us-east-1.neptune.amazonaws.com:8182
```

If the database is accessible, the response looks similar to the following:

```
{
    "status": "healthy",
    "startTime": "Sun Aug 10 06:54:15 UTC 2025",
    "dbEngineVersion": "1.4.6.0.R1",
    "role": "writer",
    "dfeQueryEngine": "viaQueryHint",
    "gremlin": {
        "version": "tinkerpop-3.7.1"
    },
    "sparql": {
        "version": "sparql-1.1"
    },
    "opencypher": {
        "version": "Neptune-9.0.20190305-1.0"
    },
    "labMode": {
        "ObjectIndex": "disabled",
        "ReadWriteConflictDetection": "enabled"
    },
    "features": {
        "SlowQueryLogs": "disabled",
        "InlineServerGeneratedEdgeId": "disabled",
        "ResultCache": {
            "status": "disabled"
        },
        "IAMAuthentication": "disabled",
        "Streams": "disabled",
        "AuditLog": "disabled"
    },
    "settings": {
        "StrictTimeoutValidation": "true",
        "clusterQueryTimeoutInMs": "120000",
        "SlowQueryLogsThreshold": "5000"
    }
}
```

## Examples of how to query the database
<a name="neptune-public-endpoints-examples"></a>

### AWS CLI
<a name="neptune-public-endpoints-aws-cli"></a>

```
aws neptunedata execute-open-cypher-query \
--open-cypher-query "MATCH (n) RETURN n LIMIT 10" \
--endpoint-url https://my-cluster-name.cluster-abcdefgh1234.us-east-1.neptune.amazonaws.com:8182
```

### Python
<a name="neptune-public-endpoints-python"></a>

```
import boto3
import json
from botocore.config import Config

# Configuration - Replace with your actual Neptune cluster details
cluster_endpoint = "my-cluster-name.cluster-abcdefgh1234.my-region.neptune.amazonaws.com"
port = 8182
region = "my-region"

# Configure Neptune client
# This disables retries and sets the client timeout to infinite 
#     (relying on Neptune's query timeout)
endpoint_url = f"https://{cluster_endpoint}:{port}"
config = Config(
    region_name=region,
    retries={'max_attempts': 1},
    read_timeout=None
)

client = boto3.client("neptunedata", config=config, endpoint_url=endpoint_url)

cypher_query = "MATCH (n) RETURN n LIMIT 5"
try:
    response = client.execute_open_cypher_query(openCypherQuery=cypher_query)
    print("openCypher Results:")
    for item in response.get('results', []):
        print(f"  {item}")
except Exception as e:
    print(f"openCypher query failed: {e}")
```

### JavaScript
<a name="neptune-public-endpoints-javascript"></a>

```
import {
    NeptunedataClient,
    GetPropertygraphSummaryCommand
} from "@aws-sdk/client-neptunedata";
import { inspect } from "util";
import { NodeHttpHandler } from "@smithy/node-http-handler";

/**
 * Main execution function
 */
async function main() {
    // Configuration - Replace with your actual Neptune cluster details
    const clusterEndpoint = 'my-cluster-name.cluster-abcdefgh1234.my-region.neptune.amazonaws.com';
    const port = 8182;
    const region = 'my-region';

    // Configure Neptune client
    // This disables retries and sets the client timeout to infinite 
    //     (relying on Neptune's query timeout)
    const endpoint = `https://${clusterEndpoint}:${port}`;
    const clientConfig = {
        endpoint: endpoint,
        sslEnabled: true,
        region: region,
        maxAttempts: 1,  // do not retry
        requestHandler: new NodeHttpHandler({
            requestTimeout: 0  // no client timeout
        })
    };

    const client = new NeptunedataClient(clientConfig);
    try {
        try {
            const command = new GetPropertygraphSummaryCommand({ mode: "basic" });
            const response = await client.send(command);
            console.log("Graph Summary:", inspect(response.payload, { depth: null }));
        } catch (error) {
            console.log("Property graph summary failed:", error.message);
        }    
    } catch (error) {
        console.error("Error in main execution:", error);
    }
}

// Run the main function
main().catch(console.error);
```

### Go
<a name="neptune-public-endpoints-go"></a>

```
package main
import (
    "context"
    "fmt"
    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/neptunedata"
    "os"
    "encoding/json"
    "net/http"
)

func main() {    
    // Configuration - Replace with your actual Neptune cluster details
    clusterEndpoint := "my-cluster-name.cluster-abcdefgh1234.my-region.neptune.amazonaws.com"
    port := 8182
    region := "my-region"
    
    // Configure Neptune client
    // Configure HTTP client with no timeout
    //    (relying on Neptune's query timeout)
    endpoint := fmt.Sprintf("https://%s:%d", clusterEndpoint, port)
    // Load AWS SDK configuration
    sdkConfig, _ := config.LoadDefaultConfig(
        context.TODO(),
        config.WithRegion(region),
        config.WithHTTPClient(&http.Client{Timeout: 0}),
    )
    
    // Create Neptune client with custom endpoint
    client := neptunedata.NewFromConfig(sdkConfig, func(o *neptunedata.Options) {
        o.BaseEndpoint = aws.String(endpoint)
        o.Retryer = aws.NopRetryer{} // Do not retry calls if they fail
    })

    gremlinQuery := "g.addV('person').property('name','charlie').property(id,'charlie-1')"
    serializer := "application/vnd.gremlin-v1.0+json;types=false"
    
    gremlinInput := &neptunedata.ExecuteGremlinQueryInput{
        GremlinQuery: &gremlinQuery,
        Serializer:   &serializer,
    }
    gremlinResult, err := client.ExecuteGremlinQuery(context.TODO(), gremlinInput)
    if err != nil {
        fmt.Printf("Gremlin query failed: %v\n", err)
    } else {
        var resultMap map[string]interface{}
        err = gremlinResult.Result.UnmarshalSmithyDocument(&resultMap)
        if err != nil {
            fmt.Printf("Error unmarshaling Gremlin result: %v\n", err)
        } else {
            resultJSON, _ := json.MarshalIndent(resultMap, "", "  ")
            fmt.Printf("Gremlin Result: %s\n", string(resultJSON))
        }
    }
}
```

## How public endpoints work
<a name="neptune-public-endpoints-how-they-work"></a>

When a Neptune instance is publicly accessible:
+ Its DNS endpoint resolves to the private IP address from within the DB cluster's VPC.
+ It resolves to the public IP address from outside of the cluster's VPC.
+ Access is controlled by the security group assigned to the cluster.
+ Only instances that are publicly accessible can be accessed via the internet.
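One way to observe this split DNS behavior is to resolve the instance endpoint from inside and outside the VPC (the hostname below is a placeholder for your instance endpoint):

```
# From outside the VPC, this should return the instance's public IP address;
# from inside the VPC, the same lookup returns the private IP address.
dig +short my-instance.abcdefgh1234.us-east-1.neptune.amazonaws.com
```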

### Reader endpoint behavior
<a name="neptune-public-endpoints-reader-behavior"></a>
+ If all reader instances are publicly accessible, the reader endpoint will always resolve over the public internet.
+ If only some reader instances are publicly accessible, the reader endpoint will resolve publicly only if it selects a publicly accessible instance to serve the read query.

### Cluster endpoint behavior
<a name="neptune-public-endpoints-cluster-behavior"></a>
+ The DB cluster endpoint always resolves to the instance endpoint of the writer.
+ If the public endpoint is enabled on the writer instance, the cluster endpoint is publicly accessible; otherwise it is not.

### Behavior after cluster failover
<a name="neptune-public-endpoints-failover-behavior"></a>
+ A Neptune cluster can contain instances with different public accessibility settings.
+ If a cluster has a public writer and a non-public reader, then after a cluster failover, the new writer (the previous reader) is non-public and the new reader (the previous writer) is public.

## Network configuration requirements
<a name="neptune-public-endpoints-network-requirements"></a>

For public endpoints to work properly:

1. The Neptune instances must be in public subnets within your VPC.

1. The route tables associated with these subnets must have a route to an internet gateway for 0.0.0.0/0.

1. The security group must allow access from the public IP addresses or CIDR ranges you want to grant access to.
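The route-table requirement above can be sketched with the AWS CLI as follows (all resource IDs are placeholders):

```
# Create an internet gateway and attach it to the VPC (IDs are examples)
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway \
  --internet-gateway-id igw-0abc1234example \
  --vpc-id vpc-0abcd1234example

# Route all internet-bound traffic in the subnet's route table through the gateway
aws ec2 create-route \
  --route-table-id rtb-0def5678example \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id igw-0abc1234example
```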

## Restricting public access creation
<a name="neptune-public-endpoints-restrict-access"></a>

You can use IAM policies to restrict who can create or modify Neptune clusters with public access. The following example policy denies the creation of Neptune instances with public access:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "rds:CreateDBInstance",
        "rds:ModifyDBInstance",
        "rds:RestoreDBInstanceFromDBSnapshot",
        "rds:RestoreDBInstanceToPointInTime"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": {
          "rds:PubliclyAccessible": "true"
        }
      }
    }
  ]
}
```


For more information about the `rds:PubliclyAccessible` IAM condition key, see the [Amazon RDS Service Authorization Reference](https://docs.aws.amazon.com//service-authorization/latest/reference/list_amazonrds.html#amazonrds-rds_PubliclyAccessible).

## CloudFormation support
<a name="neptune-public-endpoints-cloudformation"></a>

You can use CloudFormation to launch Neptune clusters with public endpoints enabled by specifying the `PubliclyAccessible` parameter in your CloudFormation template.
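A minimal sketch of such a template fragment, assuming `PubliclyAccessible` is supported as a property of the Neptune DB instance resource (the logical names and instance class are placeholders):

```
{
  "Resources": {
    "NeptuneInstance": {
      "Type": "AWS::Neptune::DBInstance",
      "Properties": {
        "DBClusterIdentifier": { "Ref": "NeptuneCluster" },
        "DBInstanceClass": "db.r6g.large",
        "PubliclyAccessible": true
      }
    }
  }
}
```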

## Compatibility with Neptune features
<a name="neptune-public-endpoints-compatibility"></a>

A cluster with public endpoints enabled supports all Neptune features that a VPC-only cluster supports, including:
+ Neptune workbench
+ Full text search integration
+ Neptune Streams
+ Custom endpoints
+ Neptune Serverless
+ Graph Explorer

## Pricing
<a name="neptune-public-endpoints-pricing"></a>

Public endpoints are available at no additional cost beyond standard Neptune pricing. However, connecting from your local environment to Neptune over a public IP might incur increased data transfer costs.

# Securing access to an Amazon Neptune cluster
<a name="get-started-security"></a>

There are multiple ways for you to secure your Amazon Neptune clusters.

## Using IAM policies to restrict access to a Neptune DB cluster
<a name="get-started-security-iam-policies"></a>

To control who can perform Neptune management actions on Neptune DB clusters and DB instances, use AWS Identity and Access Management (IAM).

When you use an IAM account to access the Neptune console, you must first sign in to the AWS Management Console using your IAM account before opening the Neptune console at [https://console.aws.amazon.com/neptune/home](https://console.aws.amazon.com/neptune/home).

When you connect to AWS using IAM credentials, your IAM account must have IAM policies that grant the permissions required to perform Neptune management operations. For more information, see [Using different kinds of IAM policies for controlling access to Neptune](security-iam-access-manage.md#iam-auth-policy).

## Using VPC security groups to restrict access to a Neptune DB cluster
<a name="get-started-security-groups"></a>

Neptune DB clusters must be created in an Amazon Virtual Private Cloud (Amazon VPC). To control which devices and EC2 instances can open connections to the endpoint and port of the DB instance for Neptune DB clusters in a VPC, you use a VPC security group. For more information about VPCs, see [Create a security group using the VPC console](get-started-vpc.md#security-vpc-security-group).

**Note**  
To connect to your Neptune cluster, you must open the cluster's database port (8182 by default) in both the inbound and outbound rules to allow proper connectivity.

## Using IAM authentication to restrict access to a Neptune DB cluster
<a name="get-started-security-iam-auth"></a>

If you enable AWS Identity and Access Management (IAM) authentication in a Neptune DB cluster, anyone accessing the DB cluster must first be authenticated. See [Authenticating your Amazon Neptune database with AWS Identity and Access Management](iam-auth.md) for information about setting up IAM authentication.

For information about using temporary credentials to authenticate, including examples for the AWS CLI, AWS Lambda, and Amazon EC2, see [Using temporary credentials to connect to Amazon Neptune](iam-auth-temporary-credentials.md). 

The following links provide additional information about connecting to Neptune using IAM authentication with the individual query languages:

**Using Gremlin with IAM authentication**
+ [Connecting to Amazon Neptune databases using IAM authentication with Gremlin console](iam-auth-connecting-gremlin-console.md)
+ [Connecting to Amazon Neptune databases using IAM with Gremlin Java](iam-auth-connecting-gremlin-java.md)
+ [Connecting to Amazon Neptune databases using IAM authentication with Python](iam-auth-connecting-python.md)
**Note**  
This example applies to both Gremlin and SPARQL.

**Using openCypher with IAM authentication**
+ [Connecting to Amazon Neptune databases using IAM authentication with Python](iam-auth-connecting-python.md)

**Using SPARQL with IAM authentication**
+ [Connecting to Amazon Neptune databases using IAM authentication with Java and SPARQL](iam-auth-connecting-sparql-java.md)
+ [Connecting to Amazon Neptune databases using IAM authentication with Python](iam-auth-connecting-python.md)
**Note**  
This example applies to both Gremlin and SPARQL.

# Accessing graph data in Amazon Neptune
<a name="get-started-access-graph"></a>

After you create a connection to an Amazon Neptune DB cluster, you can interact with it by loading data, running queries, and performing other operations. The `curl` and `awscurl` command-line tools are a common way to communicate with the cluster: they let you send requests, load data, and retrieve results from the graph database.

## Setting up `curl` to communicate with your Neptune endpoint
<a name="get-started-access-graph-curl"></a>

As illustrated in many of the examples in this documentation, the [curl](https://curl.haxx.se/) command line tool is a handy option for communicating with your Neptune endpoint. For information about the tool, see the [curl man page](https://curl.haxx.se/docs/manpage.html), and the book *[Everything curl](https://ec.haxx.se/)*.

To connect using HTTPS (as we recommend and as Neptune requires in most Regions), `curl` needs access to appropriate certificates. To learn how to obtain these certificates and how to format them properly into a certificate authority (CA) certificate store that `curl` can use, see [SSL Certificate Verification](https://curl.haxx.se/docs/sslcerts.html) in the `curl` documentation.

You can then specify the location of this CA certificate store using the `CURL_CA_BUNDLE` environment variable. On Windows, `curl` automatically looks for it in a file named `curl-ca-bundle.crt`. It looks first in the same directory as `curl.exe` and then elsewhere on the path. For more information, see [SSL Certificate Verification](https://curl.haxx.se/docs/sslcerts.html).

As long as `curl` can locate the appropriate certificates, it handles HTTPS connections just like HTTP connections, without extra parameters. Examples in this documentation are based on that scenario.
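The same certificate-verification concerns apply if you call your endpoint from code instead of `curl`. As a sketch, this stdlib-only Python shows how an HTTPS client picks up a CA store explicitly; the bundle path and endpoint in the comments are placeholders.

```python
import ssl
import urllib.request

def make_https_opener(ca_bundle_path=None):
    """Build a urllib opener that verifies server certificates.
    If ca_bundle_path is None, the system default CA store is used,
    which is the equivalent of curl finding its default CA bundle."""
    ctx = ssl.create_default_context(cafile=ca_bundle_path)
    return urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))

# opener = make_https_opener()  # system CAs, like curl with CURL_CA_BUNDLE unset
# opener = make_https_opener("/path/to/curl-ca-bundle.crt")  # placeholder path
# opener.open("https://your-neptune-endpoint:8182/status")   # placeholder endpoint
```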

## Using a query language to access graph data in your Neptune DB cluster
<a name="get-started-access-graph-query-langs"></a>

Once you are connected, you can use the Gremlin and openCypher query languages to create and query a property graph, or the SPARQL query language to create and query a graph containing RDF data.

**Graph query languages supported by Neptune**
+ [Gremlin](access-graph-gremlin.md) is a graph traversal language for property graphs. A query in Gremlin is a traversal made up of discrete steps, each of which follows an edge to a node. See Gremlin documentation at [Apache TinkerPop](https://tinkerpop.apache.org/docs/current/reference/) for more information.

  The Neptune implementation of Gremlin has some differences from other implementations, especially when you are using Gremlin-Groovy (Gremlin queries sent as serialized text). For more information, see [Gremlin standards compliance in Amazon Neptune](access-graph-gremlin-differences.md).
+ [openCypher](access-graph-opencypher.md) is a declarative query language for property graphs that was originally developed by Neo4j, then open-sourced in 2015, and contributed to the [openCypher](http://www.opencypher.org/) project under an Apache 2 open-source license. Its syntax is documented in the [Cypher Query Language Reference, Version 9](https://s3.amazonaws.com/artifacts.opencypher.org/openCypher9.pdf).
+ [SPARQL](access-graph-sparql.md) is a declarative query language for [RDF](https://www.w3.org/2001/sw/wiki/RDF) data, based on graph pattern matching. It is standardized by the World Wide Web Consortium (W3C) and described in the [SPARQL 1.1 Overview](https://www.w3.org/TR/sparql11-overview/) and the [SPARQL 1.1 Query Language](https://www.w3.org/TR/sparql11-query/) specification.

**Note**  
You can access property graph data in Neptune using both Gremlin and openCypher, but not using SPARQL. Similarly, you can only access RDF data using SPARQL, not Gremlin or openCypher.

# Using Gremlin to access graph data in Amazon Neptune
<a name="get-started-graph-gremlin"></a>

You can use the Gremlin Console to experiment with TinkerPop graphs and queries in a REPL (read-eval-print loop) environment. 

The following tutorial walks you through using the Gremlin console to add vertices, edges, properties, and more to a Neptune graph, and highlights some differences in the Neptune-specific Gremlin implementation.

**Note**  
This example assumes that you have completed the following:  
You have connected using SSH to an Amazon EC2 instance.
You have created a Neptune cluster as detailed in [Create Neptune cluster](get-started-create-cluster.md).
You have installed the Gremlin console as described in [Installing the Gremlin console](access-graph-gremlin-console.md).

**Using the Gremlin Console**

1. Change directories into the folder where the Gremlin console files are unzipped.

   ```
   cd apache-tinkerpop-gremlin-console-3.7.2
   ```

1. Enter the following command to run the Gremlin Console.

   ```
   bin/gremlin.sh
   ```

   You should see the following output:

   ```
            \,,,/
            (o o)
   -----oOOo-(3)-oOOo-----
   plugin activated: tinkerpop.server
   plugin activated: tinkerpop.utilities
   plugin activated: tinkerpop.tinkergraph
   gremlin>
   ```

   You are now at the `gremlin>` prompt. You enter the remaining steps at this prompt.

1. At the `gremlin>` prompt, enter the following to connect to the Neptune DB instance.

   ```
   :remote connect tinkerpop.server conf/neptune-remote.yaml
   ```

1. At the `gremlin>` prompt, enter the following to switch to remote mode. This sends all Gremlin queries to the remote connection.

   ```
   :remote console
   ```

1. **Add vertex with label and property.**

   ```
   g.addV('person').property('name', 'justin')
   ```

   The vertex is assigned a `string` ID containing a GUID. All vertex IDs are strings in Neptune.

1. **Add a vertex with custom id.**

   ```
   g.addV('person').property(id, '1').property('name', 'martin')
   ```

   The `id` property is not quoted. It is a keyword for the ID of the vertex. The vertex ID here is a string with the number `1` in it.

   Normal property names must be contained in quotation marks.

1. **Change property or add property if it doesn't exist.**

   ```
   g.V('1').property(single, 'name', 'marko')
   ```

   Here you are changing the `name` property for the vertex from the previous step. This removes all existing values from the `name` property.

   If you don't specify `single`, the value is instead appended to the `name` property (if it isn't already present), because Neptune uses set cardinality by default.

1. **Add property, but append property if property already has a value.**

   ```
   g.V('1').property('age', 29)
   ```

   Neptune uses set cardinality as the default action.

   This command adds the `age` property with the value `29`, but it does not replace any existing values. 

   If the `age` property already had a value, this command appends `29` to the property. For example, if the `age` property was `27`, the new value would be `[ 27, 29 ]`.

1. **Add multiple vertices.**

   ```
   g.addV('person').property(id, '2').property('name', 'vadas').property('age', 27).iterate()
   g.addV('software').property(id, '3').property('name', 'lop').property('lang', 'java').iterate()
   g.addV('person').property(id, '4').property('name', 'josh').property('age', 32).iterate()
   g.addV('software').property(id, '5').property('name', 'ripple').property('lang', 'java').iterate()
   g.addV('person').property(id, '6').property('name', 'peter').property('age', 35)
   ```

   You can send multiple statements at the same time to Neptune.

   Statements can be separated by a newline (`'\n'`), spaces (`' '`), a semicolon (`';'`), or nothing (for example, `g.addV('person').iterate()g.V()` is valid). 
**Note**  
The Gremlin Console sends a separate command at every newline (`'\n'`), so they are each a separate transaction in that case. This example has all the commands on separate lines for readability. Remove the newline (`'\n'`) characters to send it as a single command via the Gremlin Console.

   All statements other than the last statement must end in a terminating step, such as `.next()` or `.iterate()`, or they will not run. The Gremlin Console does not require these terminating steps. Use `.iterate()` whenever you don't need the results to be serialized.

   All statements that are sent together are included in a single transaction and succeed or fail together.

1. **Add edges.**

   ```
   g.V('1').addE('knows').to(__.V('2')).property('weight', 0.5).iterate()
   g.addE('knows').from(__.V('1')).to(__.V('4')).property('weight', 1.0)
   ```

   Here are two different ways to add an edge.

1. **Add the rest of the Modern graph.**

   ```
   g.V('1').addE('created').to(__.V('3')).property('weight', 0.4).iterate()
   g.V('4').addE('created').to(__.V('5')).property('weight', 1.0).iterate()
   g.V('4').addE('knows').to(__.V('3')).property('weight', 0.4).iterate()
   g.V('6').addE('created').to(__.V('3')).property('weight', 0.2)
   ```

1. **Delete a vertex.**

   ```
   g.V().has('name', 'justin').drop()
   ```

   Removes the vertex with the `name` property equal to `justin`.
**Important**  
*Stop here, and you have the full Apache TinkerPop Modern graph. The examples in the [Traversal section](https://tinkerpop.apache.org/docs/current/reference/#graph-traversal-steps) of the TinkerPop documentation use the Modern graph.*

1. **Run a traversal.**

   ```
   g.V().hasLabel('person')
   ```

   Returns all `person` vertices.

1. **Run a Traversal with values (valueMap()).**

   ```
   g.V().has('name', 'marko').out('knows').valueMap()
   ```

   Returns the key-value pairs for all vertices that `marko` knows.

1. **Specify multiple labels.**

   ```
   g.addV("Label1::Label2::Label3") 
   ```

   Neptune supports multiple labels for a vertex. When you create a label, you can specify multiple labels by separating them with `::`.

   This example adds a vertex with three different labels. 

   The `hasLabel` step matches this vertex with any of those three labels: `hasLabel("Label1")`, `hasLabel("Label2")`, and `hasLabel("Label3")`. 

   The `::` delimiter is reserved for this use only. 

   You cannot specify multiple labels in the `hasLabel` step. For example, `hasLabel("Label1::Label2")` does not match anything. 

1. **Specify Time/date**.

   ```
   g.V().property(single, 'lastUpdate', datetime('2018-01-01T00:00:00'))
   ```

   Neptune does not support Java `Date`. Use the `datetime()` function instead. `datetime()` accepts an ISO 8601-compliant `datetime` string.

   It supports the following formats: `YYYY-MM-DD`, `YYYY-MM-DDTHH:mm`, `YYYY-MM-DDTHH:mm:SS`, and `YYYY-MM-DDTHH:mm:SSZ`.

1. **Delete vertices, properties, or edges.**

   ```
   g.V().hasLabel('person').properties('age').drop().iterate()
   g.V('1').drop().iterate()
   g.V().outE().hasLabel('created').drop()
   ```

   Here are several drop examples.
**Note**  
 The `.next()` step does not work with `.drop()`. Use `.iterate()` instead.

1. When you are finished, enter the following to exit the Gremlin Console.

   ```
   :exit
   ```

**Note**  
Use a semicolon (`;`) or a newline character (`\n`) to separate each statement.   
Each traversal preceding the final traversal must end in `iterate()` to be executed. Only the data from the final traversal is returned.
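The multi-statement rules above also apply when you send Gremlin-Groovy to Neptune from code rather than from the console. As a sketch (not tied to any particular Gremlin client), the following Python helper joins several statements into a single request string so that they run in one transaction, adding the terminating `.iterate()` step that every statement except the last requires.

```python
def as_single_transaction(statements):
    """Join Gremlin statements into one request string.
    Every statement except the last gets a terminating .iterate()
    so that it actually runs, per the rule described above."""
    parts = []
    for i, stmt in enumerate(statements):
        stmt = stmt.strip().rstrip(";")
        if i < len(statements) - 1 and not stmt.endswith((".iterate()", ".next()")):
            stmt += ".iterate()"
        parts.append(stmt)
    # Semicolon-separated with no newlines, so a client sends it as one command
    return ";".join(parts)

payload = as_single_transaction([
    "g.addV('person').property(id, '2').property('name', 'vadas')",
    "g.addV('person').property(id, '6').property('name', 'peter')",
])
```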

# Using openCypher to access graph data in Amazon Neptune
<a name="get-started-graph-opencypher"></a>

To get started using openCypher, see [openCypher](access-graph-opencypher.md), or use the openCypher notebooks in the GitHub [Neptune graph-notebook repository](https://github.com/aws/graph-notebook/tree/main/src/graph_notebook/notebooks).

# Using SPARQL to access graph data in Amazon Neptune
<a name="get-started-graph-sparql"></a>

SPARQL is a query language for the Resource Description Framework (RDF), which is a graph data format designed for the web. Amazon Neptune is compatible with SPARQL 1.1. This means that you can connect to a Neptune DB instance and query the graph using the query language described in the [SPARQL 1.1 Query Language](https://www.w3.org/TR/sparql11-query/) specification.

 A query in SPARQL consists of a `SELECT` clause to specify the variables to return and a `WHERE` clause to specify which data to match in the graph. If you are unfamiliar with SPARQL queries, see [Writing Simple Queries](https://www.w3.org/TR/sparql11-query/#WritingSimpleQueries) in the [SPARQL 1.1 Query Language](https://www.w3.org/TR/sparql11-query/). 

The HTTP endpoint for SPARQL queries to a Neptune DB instance is `https://your-neptune-endpoint:port/sparql`.
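A SPARQL query or update is submitted as an HTTP `POST` with a form-encoded `query=` or `update=` parameter, as the curl examples in this section show. For clients written in code, this stdlib-only sketch builds the same kind of request; the endpoint URL is a placeholder.

```python
import urllib.request
from urllib.parse import urlencode

def build_sparql_request(endpoint_url, query=None, update=None):
    """Build an HTTP POST request carrying a SPARQL query or update
    as a form-encoded body, matching what the curl examples send."""
    if (query is None) == (update is None):
        raise ValueError("pass exactly one of query or update")
    field = "query" if query is not None else "update"
    body = urlencode({field: query if query is not None else update}).encode()
    return urllib.request.Request(
        endpoint_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# Placeholder endpoint, same shape as the curl examples:
req = build_sparql_request("https://your-neptune-endpoint:8182/sparql",
                           query="select ?s ?p ?o where {?s ?p ?o} limit 10")
```

Note that this sketch handles only the HTTP shape of the request; if IAM authentication is enabled on the cluster, the request must also be signed (for example, by using `awscurl` instead).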

**To connect to SPARQL**

1. Get the SPARQL endpoint for your Neptune cluster. If you created the cluster with an AWS CloudFormation template, the endpoint is shown as the **SparqlEndpoint** item in the **Outputs** section of the CloudFormation stack. 

1. Enter the following to submit a SPARQL **`UPDATE`** using HTTP `POST` and the **curl** command.

   ```
   curl -X POST --data-binary 'update=INSERT DATA { <https://test.com/s> <https://test.com/p> <https://test.com/o> . }' https://your-neptune-endpoint:port/sparql
   ```

   The preceding example inserts the following triple into the SPARQL default graph: `<https://test.com/s> <https://test.com/p> <https://test.com/o>`

1. Enter the following to submit a SPARQL **`QUERY`** using HTTP `POST` and the **curl** command.

   ```
   curl -X POST --data-binary 'query=select ?s ?p ?o where {?s ?p ?o} limit 10' https://your-neptune-endpoint:port/sparql
   ```

   The preceding example returns up to 10 of the triples (subject-predicate-object) in the graph by using the `?s ?p ?o` query with a limit of 10. To query for something else, replace it with another SPARQL query.
**Note**  
The default MIME type of a response is `application/sparql-results+json` for `SELECT` and `ASK` queries.  
The default MIME type of a response is `application/n-quads` for `CONSTRUCT` and `DESCRIBE` queries.  
For a list of all available MIME types, see [SPARQL HTTP API](sparql-api-reference.md).
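Because `SELECT` results come back as `application/sparql-results+json`, you can decode them with any JSON parser. The following sketch flattens the bindings from a response shaped like the W3C SPARQL 1.1 JSON results format; the sample document is illustrative.

```python
import json

def bindings_to_rows(results_json):
    """Flatten a SPARQL 1.1 JSON results document into a list of dicts
    mapping each variable name to its bound value."""
    doc = json.loads(results_json)
    return [
        {var: binding[var]["value"] for var in binding}
        for binding in doc["results"]["bindings"]
    ]

# Illustrative response for the query above:
sample = '''{
  "head": {"vars": ["s", "p", "o"]},
  "results": {"bindings": [
    {"s": {"type": "uri", "value": "https://test.com/s"},
     "p": {"type": "uri", "value": "https://test.com/p"},
     "o": {"type": "uri", "value": "https://test.com/o"}}
  ]}
}'''
rows = bindings_to_rows(sample)
```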

# Loading Data into an Amazon Neptune cluster
<a name="get-started-loading"></a>

Amazon Neptune provides a process for loading data from external files directly into a Neptune DB instance. You can use this process instead of executing a large number of `INSERT` statements, `addV` and `addE` steps, or other API calls.
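For context, a bulk load is started by sending a JSON request to the cluster's `/loader` endpoint. The sketch below builds such a request body; check the parameter names shown (`source`, `format`, `iamRoleArn`, `region`, `failOnError`) against the loading reference linked below, and treat the S3 path and role ARN as placeholders.

```python
import json

def build_loader_request(s3_source, data_format, iam_role_arn, region):
    """Build the JSON body for a Neptune bulk-load request.
    Verify parameter names against the Neptune loader reference."""
    body = {
        "source": s3_source,        # e.g. s3://bucket/prefix (placeholder)
        "format": data_format,      # e.g. "csv" for Gremlin, "ntriples" for RDF
        "iamRoleArn": iam_role_arn, # role that lets Neptune read the bucket
        "region": region,
        "failOnError": "TRUE",      # stop the load on the first error
    }
    return json.dumps(body)

# POST this body to https://your-neptune-endpoint:8182/loader (placeholder):
req = build_loader_request("s3://example-bucket/data/", "csv",
                           "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
                           "us-east-1")
```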

The following are links to additional loading information.
+ **Methods for loading data** – [Loading data into Amazon Neptune](load-data.md)
+ **Data formats supported by the bulk loader** – [Load Data Formats](bulk-load-tutorial-format.md)
+ **Loading example** – [Example: Loading Data into a Neptune DB Instance](bulk-load-data.md)



# Monitoring an Amazon Neptune cluster
<a name="get-started-monitoring"></a>

Amazon Neptune supports the following monitoring methods.
+ **Amazon CloudWatch** – Amazon Neptune automatically sends metrics to CloudWatch and also supports CloudWatch Alarms. For more information, see [Monitoring Neptune Using Amazon CloudWatch](cloudwatch.md).
+ **AWS CloudTrail** – Amazon Neptune supports API logging using CloudTrail. For more information, see [Logging Amazon Neptune API Calls with AWS CloudTrail](cloudtrail.md).
+ **Tagging** – Use tags to add metadata to your Neptune resources and track usage based on tags. For more information, see [Tagging Amazon Neptune resources](tagging.md).
+ **Audit log files** – View, download, or watch database log files using the Neptune console. For more information, see [Using Audit Logs with Amazon Neptune Clusters](auditing.md).
+ **Instance status** – Check the health of a Neptune instance's graph database engine, find out what version of the engine is installed, and obtain other engine status information using the [instance status API](access-graph-status.md).
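The instance status API answers at the `/status` path of an instance endpoint with a JSON document. As a sketch of consuming it, the helper below reads a few fields from such a response; the sample payload is illustrative, and field names other than `status` should be checked against the instance status reference.

```python
import json

def summarize_status(status_json):
    """Summarize an instance status response into one line.
    Fields are read defensively in case they are absent."""
    doc = json.loads(status_json)
    engine = doc.get("dbEngineVersion", "unknown engine version")
    return f"{doc.get('status', 'unknown')} ({engine})"

# Illustrative response shape:
sample = '{"status": "healthy", "dbEngineVersion": "1.3.2.1"}'
summary = summarize_status(sample)
# summary == "healthy (1.3.2.1)"
```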

# Troubleshooting an Amazon Neptune cluster
<a name="get-started-troubleshooting"></a>

The following links might be helpful for resolving issues with Amazon Neptune.
+ **Best practices** – For solutions to common issues and performance suggestions, see [Best practices: getting the most out of Neptune](best-practices.md).
+ **Service errors** – For a list of errors for both management APIs and graph database connections, see [Neptune Service Errors](errors.md).
+ **Service limits** – For information about Neptune limits, see [Amazon Neptune Limits](limits.md).
+ **Engine releases** – For information about graph engine releases, including release notes, see [Engine releases for Amazon Neptune](engine-releases.md).
+ **Support forums** – To join a discussion about Neptune, see the [Amazon Neptune Forum](https://repost.aws/tags/TAxVAEdWg1SrS0lClUSX-m_Q?forumID=253).
+ **Pricing** – For information about the costs of using Amazon Neptune, see [Amazon Neptune Pricing](https://aws.amazon.com/neptune/pricing/).
+ **AWS Support** – For help and guidance from the experts, see [AWS Support](https://aws.amazon.com/premiumsupport/).