

For similar capabilities to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com//timestream/latest/developerguide/timestream-for-influxdb.html).

# What is Timestream for InfluxDB?
<a name="timestream-for-influxdb"></a>

Amazon Timestream for InfluxDB is a managed time series database engine that makes it easy for application developers and DevOps teams to run InfluxDB databases on AWS for real-time time series applications using open-source APIs. With Amazon Timestream for InfluxDB, it is easy to set up, operate, and scale time series workloads that can answer queries with single-digit millisecond response times.

Amazon Timestream for InfluxDB gives you access to the capabilities of the familiar open source version of InfluxDB on its 2.x branch. This means that the code, applications, and tools you already use today with your existing InfluxDB open-source databases should work seamlessly with Amazon Timestream for InfluxDB. Amazon Timestream for InfluxDB can automatically back up your database and keep your database software up to date with the latest version. In addition, Amazon Timestream for InfluxDB makes it easy to use replication to enhance database availability, and improve data durability. As with all AWS services, there are no upfront investments required, and you pay only for the resources you use.

## DB instances
<a name="timestream-for-influx-db-instances"></a>

A DB instance is an isolated database environment running in the cloud. It is the basic building block of Amazon Timestream for InfluxDB. A DB instance can contain multiple user-created databases (or organizations and buckets in the case of InfluxDB 2.x databases), and can be accessed using the same client tools and applications you might use to access a standalone self-managed InfluxDB instance. DB instances are simple to create and modify with the AWS command line tools, Amazon Timestream InfluxDB API operations, or the AWS Management Console.

**Note**  
Amazon Timestream for InfluxDB supports access to databases using the Influx API operations and Influx UI. Amazon Timestream for InfluxDB does not allow direct host access.

You can have up to 40 Amazon Timestream for InfluxDB instances.

Each DB instance has a DB instance ID. This service-generated name uniquely identifies the DB instance when you interact with the Amazon Timestream for InfluxDB API and AWS CLI commands. The DB instance ID is unique for a customer in an AWS Region.

The DB instance ID forms part of the DNS hostname allocated to your instance by Timestream for InfluxDB. For example, if you specify influxdb1 as the DB instance name and the service generates an instance ID of *c5vasdqn0b*, then Timestream will automatically allocate a DNS endpoint for your instance. An example endpoint is `c5vasdqn0b-3ksj4dla5nfjhi.timestream-influxdb.us-east-1.on.aws`, where `c5vasdqn0b` is your instance ID. All instances created before 12/09/2024 will maintain the old structure, with an endpoint similar to `influxdb1-3ksj4dla5nfjhi.us-east-1.timestream-influxdb.amazonaws.com`, where `influxdb1` is your instance name.

In the example endpoint `c5vasdqn0b-3ksj4dla5nfjhi.timestream-influxdb.us-east-1.on.aws`, the string `3ksj4dla5nfjhi` is a unique account identifier generated by AWS. This identifier doesn't change for a given account in a given Region, so all DB instances created by that account in the Region share the same fixed identifier. Consider the following features of the fixed identifier:
+ Currently, Timestream for InfluxDB does not support DB instance renaming.
+ For all instances created after 12/09/2024, if you delete and re-create your DB instance with the same DB instance name, the endpoint changes because a new instance ID is assigned to the instance. Instances created before the aforementioned date are assigned the same endpoint based on the instance name.
+ If you use the same account to create a DB instance in a different Region, the internally generated identifier is different because the Region is different, as in `zxlasoonhvd.4a3j5du5ks7md2.timestream-influxdb.us-east-1.on.aws`.
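To make the endpoint structure concrete, the following illustrative Python helper (not part of any AWS SDK) splits an endpoint of the post-12/09/2024 form into its components:

```python
# Illustrative only: splits a Timestream for InfluxDB endpoint of the form
# <instance-id>-<account-identifier>.timestream-influxdb.<region>.on.aws
def parse_endpoint(endpoint: str) -> dict:
    labels = endpoint.split(".")
    instance_id, account_identifier = labels[0].split("-", 1)
    return {
        "instance_id": instance_id,                # e.g. "c5vasdqn0b"
        "account_identifier": account_identifier,  # e.g. "3ksj4dla5nfjhi"
        "region": labels[2],                       # e.g. "us-east-1"
    }

parts = parse_endpoint("c5vasdqn0b-3ksj4dla5nfjhi.timestream-influxdb.us-east-1.on.aws")
```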

Each DB instance supports only one Timestream for InfluxDB database engine.

When you create a DB instance, InfluxDB requires that you specify an organization name. A DB instance can host multiple organizations, and multiple buckets associated with each organization.

Amazon Timestream for InfluxDB allows you to create a master user account and password for your DB instance as part of the creation process. This master user has permissions to create organizations and buckets, and to perform read, write, delete, and upsert operations on your data. You can also access the Influx UI and retrieve your operator token on your first login. From there you can manage all your access tokens as well. You must set the master user password when you create a DB instance, but you can change it at any time using the Influx API, Influx CLI, or the Influx UI.
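As a sketch of how an access token is presented to the InfluxDB 2.x API, the following snippet builds (but does not send) a request that would list authorizations; the endpoint and token values are placeholders, not real credentials:

```python
# Sketch: constructs an InfluxDB 2.x API request without sending it.
# Endpoint and token below are placeholders.
from urllib.request import Request

def build_list_tokens_request(endpoint: str, token: str) -> Request:
    url = f"https://{endpoint}:8086/api/v2/authorizations"  # Influx v2 API path
    # InfluxDB 2.x expects the "Token <value>" authorization scheme
    return Request(url, headers={"Authorization": f"Token {token}"})

req = build_list_tokens_request(
    "c5vasdqn0b-3ksj4dla5nfjhi.timestream-influxdb.us-east-1.on.aws",
    "EXAMPLE_OPERATOR_TOKEN",
)
```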

## DB instance classes
<a name="timestream-for-influx-dbi-classes"></a>

The DB instance class determines the computation and memory capacity of an Amazon Timestream for InfluxDB DB instance. The DB instance class that you need depends on your processing power and memory requirements.

A DB instance class consists of both the DB instance class type and the size. For example, `db.influx` is a memory-optimized DB instance class type suitable for the high-performance memory requirements of running InfluxDB workloads. Within the `db.influx` instance class type, `db.influx.2xlarge` is a DB instance class. The size of this class is *2xlarge*.
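The naming convention above can be split programmatically; a trivial illustrative helper:

```python
def parse_instance_class(instance_class: str) -> tuple:
    # "db.influx.2xlarge" -> ("db.influx", "2xlarge")
    prefix, engine, size = instance_class.split(".")
    return f"{prefix}.{engine}", size
```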

For more information about instance class pricing, see [Amazon Timestream for InfluxDB pricing](https://aws.amazon.com/timestream/pricing/).

## DB instance class types
<a name="timestream-for-influx-dbi-classtypes"></a>

Amazon Timestream for InfluxDB supports the following DB instance class type, optimized for InfluxDB use cases:
+ **`db.influx`** — These instance classes are ideal for running memory-intensive workloads in open-source InfluxDB databases.

## Hardware specifications for DB instance classes
<a name="timestream-for-influx-dbi-classt-hw"></a>

The following terminology describes the hardware specifications for DB instance classes:
+ **vCPU**

  The number of virtual central processing units (CPUs). A virtual CPU is a unit of capacity that you can use to compare DB instance classes. 
+ **Memory (GiB)**

  The RAM, in gibibytes, allocated to the DB instance. There is often a consistent ratio between memory and vCPU. For example, the db.influx instance class has a memory-to-vCPU ratio similar to the EC2 r7g instance class.
+ **Influx-Optimized**

  The DB instance uses an optimized configuration stack and provides additional, dedicated capacity for I/O. This optimization provides the best performance by minimizing contention between I/O and other traffic from your instance. 
+ **Network bandwidth**

  The network speed relative to other DB instance classes.

In the following table, you can find hardware details about the Amazon Timestream for InfluxDB instance classes.


****  

| Instance Class | vCPU | Memory (GiB) | Storage Type | Network bandwidth (Gbps) | 
| --- | --- | --- | --- | --- | 
| db.influx.medium | 1 | 8 | Influx IOPS Included | 10 | 
| db.influx.large | 2 | 16 | Influx IOPS Included | 10 | 
| db.influx.xlarge | 4 | 32 | Influx IOPS Included | 10 | 
| db.influx.2xlarge | 8 | 64 | Influx IOPS Included | 10 | 
| db.influx.4xlarge | 16 | 128 | Influx IOPS Included | 10 | 
| db.influx.8xlarge | 32 | 256 | Influx IOPS Included | 12 | 
| db.influx.12xlarge | 48 | 384 | Influx IOPS Included | 20 | 
| db.influx.16xlarge | 64 | 512 | Influx IOPS Included | 25 | 
| db.influx.24xlarge | 96 | 768 | Influx IOPS Included | 40 | 
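Using the vCPU and memory columns above, a rough capacity-planning helper might pick the smallest class that meets a requirement. This is a sketch only; always validate an instance class against your actual workload:

```python
# (class, vCPU, memory GiB) values taken from the table above
DB_INFLUX_CLASSES = [
    ("db.influx.medium", 1, 8),
    ("db.influx.large", 2, 16),
    ("db.influx.xlarge", 4, 32),
    ("db.influx.2xlarge", 8, 64),
    ("db.influx.4xlarge", 16, 128),
    ("db.influx.8xlarge", 32, 256),
    ("db.influx.12xlarge", 48, 384),
    ("db.influx.16xlarge", 64, 512),
    ("db.influx.24xlarge", 96, 768),
]

def smallest_class(vcpu_needed: int, mem_gib_needed: int) -> str:
    """Return the smallest class satisfying both requirements."""
    for name, vcpu, mem in DB_INFLUX_CLASSES:
        if vcpu >= vcpu_needed and mem >= mem_gib_needed:
            return name
    raise ValueError("no single instance class satisfies the requirement")
```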

## InfluxDB instance storage
<a name="timestream-for-influx-dbi-storage"></a>

DB instances for Amazon Timestream for InfluxDB use Influx IOPS Included volumes for databases and log storage.

In some cases, your database workload might not be able to achieve 100 percent of the IOPS that you have provisioned. For more information, see [Factors that affect storage performance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#CHAP_Storage.Other.Factors). For more information about Timestream for InfluxDB storage pricing, see [Amazon Timestream pricing](https://aws.amazon.com/timestream/pricing/).

### Amazon Timestream for InfluxDB storage types
<a name="timestream-for-influx-dbi-storage-types"></a>

Amazon Timestream for InfluxDB provides support for one storage type, Influx IOPS Included. You can create Timestream for InfluxDB instances with up to 16 tebibytes (TiB) of storage. 

Here is a brief description of the available storage type:
+ **Influx IOPS Included storage**: Storage performance is the combination of I/O operations per second (IOPS) and how fast the storage volume can perform reads and writes (storage throughput). On Influx IOPS Included storage volumes, Amazon Timestream for InfluxDB provides three storage tiers that come preconfigured with the optimal IOPS and throughput required for different types of workloads.

### InfluxDB instance sizing
<a name="timestream-for-influx-dbi-sizing"></a>

The optimal configuration of a Timestream for InfluxDB instance depends on various factors, including ingestion rate, batch sizes, time series cardinality, concurrent queries, and query types. To provide sizing recommendations, let's consider an exemplary workload with the following characteristics: 
+ Data is collected and written by a fleet of Telegraf agents gathering system, CPU, memory, disk, and I/O metrics from a data center.

  Each write request contains 5000 lines.
+ The queries executed on the system are categorized as “moderate complexity” queries, exhibiting the following characteristics:
  + They have multiple functions and one or two regular expressions.
  + They may include group-by clauses or sample a time range of multiple weeks.
  + They typically take a few hundred milliseconds to a couple of thousand milliseconds to execute.
  + Their performance is primarily CPU-bound.


****  

| Max # of series | Writes (lines per second) | Reads (queries per second) | Instance class | Storage Type | 
| --- | --- | --- | --- | --- | 
| <100K | <50,000 | <10 | db.influx.large | Influx IO Included 3K | 
| <1MM | <150,000 | <25 | db.influx.2xlarge | Influx IO Included 3K | 
| <1MM | <200,000 | <25 | db.influx.4xlarge | Influx IO Included 3K | 
| <5MM | <250,000 | <35 | db.influx.4xlarge | Influx IO Included 12K | 
| <10MM | <500,000 | <50 | db.influx.8xlarge | Influx IO Included 12K | 
| <10MM | <750,000 | <100 | db.influx.12xlarge | Influx IO Included 12K | 
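To make the write side of this workload concrete, the following sketch assembles one 5,000-line write request in InfluxDB line protocol. The measurement, tag, field, and timestamp values are hypothetical:

```python
def make_batch(lines_per_request: int = 5000, host: str = "dc1-host-001") -> str:
    """Build one write request body in InfluxDB line protocol:
    measurement,tag=value field=value timestamp(ns), one line per point."""
    base_ts = 1_700_000_000_000_000_000  # example nanosecond timestamp
    return "\n".join(
        f"cpu,host={host},core={i % 8} usage_user={50.0 + (i % 10)} {base_ts + i}"
        for i in range(lines_per_request)
    )

batch = make_batch()
```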

## AWS Regions and Availability Zones
<a name="timestream-for-influx-dbi-regions"></a>

Amazon cloud computing resources are hosted in multiple locations worldwide. These locations are composed of AWS Regions and Availability Zones. Each AWS Region is a separate geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones.

**Note**  
For information about finding the Availability Zones for an AWS Region, see [Regions and Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) in the *Amazon EC2 User Guide*. 

Amazon Timestream for InfluxDB enables you to place resources, such as DB instances, and data in multiple locations. 

Amazon operates state-of-the-art, highly-available data centers. Although rare, failures can occur that affect the availability of DB instances that are in the same location. If you host all your DB instances in one location that is affected by such a failure, none of your DB instances will be available.

![\[Diagram showing a region with three availability zones and InfluxDB in zone C.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/AvailabilityZone.png)


It is important to remember that each AWS Region is completely independent. Any Amazon Timestream for InfluxDB activity you initiate (for example, creating database instances or listing available database instances) runs only in your current default AWS Region. The default AWS Region can be changed in the console, or by setting the `AWS_DEFAULT_REGION` environment variable. Or it can be overridden by using the `--region` parameter with the AWS Command Line Interface (AWS CLI). For more information, see [Configuring the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html), specifically the sections about environment variables and command line options. 

To create or work with an Amazon Timestream for InfluxDB DB instance in a specific AWS Region, use the corresponding regional service endpoint.

### AWS Region availability
<a name="timestream-for-influx-dbi-regions-availability"></a>

The following table shows the AWS Regions where Amazon Timestream for InfluxDB is currently available and the endpoint for each Region.


****  

| AWS Region name | Region | Endpoint | Protocol | 
| --- | --- | --- | --- | 
| US East (N. Virginia) | us-east-1 | timestream-influxdb.us-east-1.amazonaws.com  | HTTPS | 
| US East (Ohio) | us-east-2 | timestream-influxdb.us-east-2.amazonaws.com  | HTTPS | 
| US West (Oregon) | us-west-2 | timestream-influxdb.us-west-2.amazonaws.com | HTTPS | 
| Asia Pacific (Mumbai) | ap-south-1 | timestream-influxdb.ap-south-1.amazonaws.com  | HTTPS | 
| Asia Pacific (Singapore) | ap-southeast-1 | timestream-influxdb.ap-southeast-1.amazonaws.com  | HTTPS | 
| Asia Pacific (Sydney) | ap-southeast-2 | timestream-influxdb.ap-southeast-2.amazonaws.com  | HTTPS | 
| Asia Pacific (Tokyo) | ap-northeast-1 | timestream-influxdb.ap-northeast-1.amazonaws.com  | HTTPS | 
| Europe (Frankfurt) | eu-central-1 | timestream-influxdb.eu-central-1.amazonaws.com  | HTTPS | 
| Europe (Ireland) | eu-west-1 | timestream-influxdb.eu-west-1.amazonaws.com  | HTTPS | 
| Europe (Stockholm) | eu-north-1 | timestream-influxdb.eu-north-1.amazonaws.com  | HTTPS | 
| Canada (Central) | ca-central-1 | timestream-influxdb.ca-central-1.amazonaws.com | HTTPS | 
| Europe (London) | eu-west-2 | timestream-influxdb.eu-west-2.amazonaws.com | HTTPS | 
| Europe (Paris) | eu-west-3 | timestream-influxdb.eu-west-3.amazonaws.com | HTTPS | 
| Asia Pacific (Jakarta) | ap-southeast-3 | timestream-influxdb.ap-southeast-3.amazonaws.com | HTTPS | 
| Europe (Milan) | eu-south-1 | timestream-influxdb.eu-south-1.amazonaws.com | HTTPS | 
| Europe (Spain) | eu-south-2 | timestream-influxdb.eu-south-2.amazonaws.com | HTTPS | 
| Middle East (UAE) | me-central-1 | timestream-influxdb.me-central-1.amazonaws.com | HTTPS | 
| China (Beijing) | cn-north-1 | timestream-influxdb---cn-north-1---on.amazonwebservices.com.rproxy.govskope.ca.cn | HTTPS | 
| China (Ningxia) | cn-northwest-1 | timestream-influxdb---cn-northwest-1---on.amazonwebservices.com.rproxy.govskope.ca.cn | HTTPS | 

For more information on AWS Regions where Amazon Timestream for InfluxDB is currently available and the endpoint for each Region, see [Amazon Timestream endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/timestream.html).

### AWS Regions design
<a name="timestream-for-influx-dbi-regions-design"></a>

Each AWS Region is designed to be isolated from the other AWS Regions. This design achieves the greatest possible fault tolerance and stability.

When you view your resources, you see only the resources that are tied to the AWS Region that you specified. This is because AWS Regions are isolated from each other, and we don't automatically replicate resources across AWS Regions.

### AWS Availability Zones
<a name="timestream-for-influx-dbi-availability-design"></a>

When you create a DB instance, Amazon Timestream for InfluxDB chooses an Availability Zone for you randomly based on your subnet configuration. An *Availability Zone* is represented by an AWS Region code followed by a letter identifier (for example, `us-east-1a`).

Use the `describe-availability-zones` Amazon EC2 command as follows to describe the Availability Zones within the specified Region that are enabled for your account.

```
aws ec2 describe-availability-zones --region region-name
```

For example, to describe the Availability Zones within the *US East (N. Virginia) Region (us-east-1)* that are enabled for your account, run the following command:

```
aws ec2 describe-availability-zones --region us-east-1
```
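The command returns JSON. The snippet below shows, with an abridged and illustrative response, how you might pull out the enabled zone names:

```python
import json

# Abridged, illustrative shape of a describe-availability-zones response
sample_response = json.loads("""
{"AvailabilityZones": [
  {"ZoneName": "us-east-1a", "State": "available"},
  {"ZoneName": "us-east-1b", "State": "available"}
]}
""")

zone_names = [
    zone["ZoneName"]
    for zone in sample_response["AvailabilityZones"]
    if zone["State"] == "available"
]
```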

You can't choose the Availability Zones for the primary and secondary DB instances in a *Multi-AZ DB deployment*. Amazon Timestream for InfluxDB chooses them for you randomly. For more information about Multi-AZ deployments, see [Configuring and managing a multi-AZ deployment](timestream-for-influx-managing-multi-az.md).

## DB Instance billing for Amazon Timestream for InfluxDB
<a name="timestream-for-influx-dbi-billing"></a>

Amazon Timestream for InfluxDB instances are billed based on the following components:
+ **DB instance hours (per hour)** — Based on the DB instance class of the DB instance, for example, db.influx.large. Pricing is listed on a per-hour basis, but bills are calculated down to the second and show times in decimal form. Amazon Timestream for InfluxDB usage is billed in 1-second increments, with a minimum of 10 minutes. For more information, see [DB instance classes](#timestream-for-influx-dbi-classes). 
+ **Storage (per GiB per month)** — Storage capacity that you have provisioned to your DB instance. For more information, see [InfluxDB instance storage](#timestream-for-influx-dbi-storage).
+ **Data transfer (per GB)** — Data transfer in and out of your DB instance from or to the internet and other AWS Regions.
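As a sketch of the per-second billing described above (the hourly rate used here is a made-up placeholder, not an actual price):

```python
def instance_hours_cost(seconds_used: float, hourly_rate: float) -> float:
    """Per-second billing with a 10-minute minimum, as described above."""
    billable_seconds = max(seconds_used, 600)  # 10-minute minimum
    return round(billable_seconds / 3600 * hourly_rate, 6)

# A 5-minute run is billed the same as a 10-minute run
assert instance_hours_cost(300, 1.0) == instance_hours_cost(600, 1.0)
```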

For Amazon Timestream for InfluxDB pricing information, see the [Amazon Timestream for InfluxDB pricing page](https://aws.amazon.com/timestream/pricing/).

## Setting up Amazon Timestream for InfluxDB
<a name="timestream-for-influx-dbi-setting-up"></a>

Before you use Amazon Timestream for InfluxDB for the first time, complete the following tasks:

If you already have an AWS account, know your Amazon Timestream for InfluxDB requirements, and prefer to use the defaults for IAM and Amazon VPC, skip ahead to [Getting started with Timestream for InfluxDB](timestream-for-influx-getting-started.md).

### Sign up for an AWS account
<a name="timestream-for-influx-dbi-setting-up-aws"></a>

If you do not have an AWS account, complete the following steps to create one.

***To sign up for an AWS account***
+ Go to the [AWS sign in](https://portal.aws.amazon.com/billing/signup) page.
+ Choose **Create a new account** and then follow the instructions.
**Note**  
Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.

When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to an administrative user, and use only the root user to perform tasks that require root user access.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to [https://aws.amazon.com/](https://aws.amazon.com/) and choosing *My Account*.

### User management
<a name="timestream-for-influx-dbi-setting-up-user-managemennt"></a>

***Create an administrative user***

After you sign up for an AWS account, create an administrative user so that you don't use the root user for everyday tasks.

***Secure your AWS account root user***



Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using the root user, see [Signing in as the root user](https://docs.aws.amazon.com/signin/latest/userguide/console-sign-in-tutorials.html#introduction-to-root-user-sign-in-tutorial) in the *AWS Sign-In User Guide*.

Turn on multi-factor authentication (MFA) for your root user. For instructions, see [Enable a virtual MFA device for your AWS account root user (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root) in the *IAM User Guide*.

***Grant programmatic access***

Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS.

To grant users programmatic access, choose one of the following options:


****  

| Which user needs programmatic access? | To | By | 
| --- | --- | --- | 
| Workforce identity (users managed in IAM Identity Center) | Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. |  Following the instructions for the interface that you want to use. For the AWS CLI, see [Configuring IAM Identity Center authentication with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html) in the *AWS Command Line Interface User Guide*. For AWS SDKs, tools, and AWS APIs, see [Using IAM Identity Center to authenticate AWS SDK and tools](https://docs.aws.amazon.com/sdkref/latest/guide/access-sso.html) in the *AWS SDKs and Tools Reference Guide*.  | 
| IAM | Use temporary credentials to sign programmatic requests to the AWS CLI, SDKs, and APIs. | Following the instructions in [Use temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html) in the AWS Identity and Access Management User Guide. | 
| IAM | (Not recommended) Use long-term credentials to sign programmatic requests to the AWS CLI, SDKs, and APIs. |  Following the instructions for the interface that you want to use. For the AWS CLI, see [Authenticating using IAM user credentials for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-authentication-user.html) in the *AWS Command Line Interface User Guide*. For AWS SDKs and tools, see [Using long-term credentials to authenticate AWS SDKs and tools](https://docs.aws.amazon.com/sdkref/latest/guide/access-iam-users.html) in the *AWS SDKs and Tools Reference Guide*. For AWS APIs, see [Managing access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) in the *AWS Identity and Access Management User Guide*.  | 

### Determine requirements
<a name="timestream-for-influx-dbi-setting-up-determine-requirements"></a>

The basic building block of Amazon Timestream for InfluxDB is the DB instance. In a DB instance, you create your buckets. A DB instance provides a network address called an endpoint. Your applications use this endpoint to connect to your DB instance. You will also access your InfluxUI using this same endpoint from your browser. When you create a DB instance, you specify details like storage, memory, database engine and version, network configuration, and security. You control network access to a DB instance through a security group.

Before you create a DB instance and a security group, you must know your DB instance and network needs. Here are some important things to consider:
+ **Resource requirements** — What are the memory and processor requirements for your application or service? You use these settings to help you determine what DB instance class to use. For specifications about DB instance classes, see [DB instance classes](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html).
+ **VPC and security group** — Your DB instance will most likely be in a *virtual private cloud (VPC)*. To connect to your DB instance, you need to set up security group rules. These rules are set up differently depending on what kind of VPC you use and how you use it. For example, you can use a default VPC or a user-defined VPC.

  The following list describes the rules for each VPC option:
  + **Default VPC** — If your AWS account has a default VPC in the current AWS Region, that VPC is configured to support DB instances. If you specify the default VPC when you create the DB instance, make sure to create a *VPC security group* that authorizes connections from the application or service to the Amazon Timestream for InfluxDB DB instance. Use the **Security Group** option on the VPC console or the AWS CLI to create VPC security groups. For more information, see [Step 3: Create a VPC security group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.CreateVPCSecurityGroup).
+ **User-defined VPC** — If you want to specify a user-defined VPC when you create a DB instance, be aware of the following:
  + Make sure to create a *VPC security group* that authorizes connections from the application or service to the Amazon Timestream for InfluxDB DB instance. Use the **Security Group** option on the VPC console or the AWS CLI to create VPC security groups. For information, see [Step 3: Create a VPC security group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.CreateVPCSecurityGroup).
  + The VPC must meet certain requirements in order to host DB instances, such as having at least two subnets, each in a separate Availability Zone. For information, see [Amazon VPC and Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html).
+ **High availability** — Do you need failover support? On Amazon Timestream for InfluxDB, a Multi-AZ deployment creates a primary DB instance and a secondary standby DB instance in another Availability Zone for failover support. We recommend Multi-AZ deployments for production workloads to maintain high availability. For development and test purposes, you can use a deployment that isn't Multi-AZ. For more information, see [Multi-AZ DB instance deployments](timestream-for-influx-managing-multi-az-instance-deployments.md).
+ **IAM policies** — Does your AWS account have policies that grant the permissions needed to perform Amazon Timestream for InfluxDB operations? If you are connecting to AWS using IAM credentials, your IAM account must have IAM policies that grant the permissions required to perform Amazon Timestream for InfluxDB control plane operations. For more information, see [Identity and Access Management for Amazon Timestream for InfluxDB](security-iam-for-influxdb.md).
+ **Open ports** — What TCP/IP port does your database listen on? The firewalls at some companies might block connections to the default port for your database engine. The default for Timestream for InfluxDB is 8086.
+ **AWS Region** — What AWS Region do you want your database in? Having your database in close proximity to your application or web service can reduce network latency. For more information, see [AWS Regions and Availability Zones](#timestream-for-influx-dbi-regions). 
+ **DB disk subsystem** — What are your storage requirements? Amazon Timestream for InfluxDB provides three configurations for its Influx IOPS Included storage type:
  + Influx IO Included 3K IOPS (SSD)
  + Influx IO Included 12K IOPS (SSD)
  + Influx IO Included 16K IOPS (SSD)

  For more information on Amazon Timestream for InfluxDB storage, see [InfluxDB instance storage](#timestream-for-influx-dbi-storage).

When you have the information you need to create the security group and the DB instance, continue to the next step.
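Related to the open-ports item above: InfluxDB 2.x serves a health endpoint on the database port (8086 by default). A trivial helper to build that URL; the endpoint value is a placeholder:

```python
def health_url(endpoint: str, port: int = 8086) -> str:
    # InfluxDB 2.x serves GET /health on the database port
    return f"https://{endpoint}:{port}/health"

url = health_url("c5vasdqn0b-3ksj4dla5nfjhi.timestream-influxdb.us-east-1.on.aws")
```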

### Provide access to your DB instance in your VPC by creating a security group
<a name="timestream-for-influx-dbi-setting-up-vpc-access"></a>

VPC security groups provide access to DB instances in a VPC. They act as a firewall for the associated DB instance, controlling both inbound and outbound traffic at the DB instance level. DB instances are created by default with a firewall and a default security group that protect the DB instance.

Before you can connect to your DB instance, you must add rules to a security group that enable you to connect. Use your network and configuration information to create rules to allow access to your DB instance.

For example, suppose that you have an application that accesses a database on your DB instance in a VPC. In this case, you must add a custom TCP rule that specifies the port range and IP addresses that your application uses to access the database. If you have an application on an Amazon EC2 instance, you can use the security group that you set up for the Amazon EC2 instance.
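When entering the CIDR value for such a rule, you can sanity-check it locally first; here is a sketch using Python's standard `ipaddress` module:

```python
import ipaddress

def valid_cidr(cidr: str) -> bool:
    """True if the string is a well-formed network CIDR (no host bits set)."""
    try:
        ipaddress.ip_network(cidr, strict=True)
        return True
    except ValueError:
        return False

assert valid_cidr("203.0.113.0/24")       # valid network
assert not valid_cidr("203.0.113.5/24")   # host bits set
```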

#### Creating a security group for VPC access
<a name="timestream-for-influx-dbi-setting-up-vpc-access-create-sg"></a>

To create a VPC security group, sign in to the AWS Management Console and open the [VPC console](https://console.aws.amazon.com/vpc).

**Note**  
Make sure you are in the VPC console, not the Amazon Timestream for InfluxDB console.
+ In the upper-right corner of the AWS Management Console, choose the **AWS Region** where you want to create your VPC security group and DB instance. In the list of Amazon VPC resources for that AWS Region, you should see at least one VPC and several subnets. If you don't, you don't have a default VPC in that AWS Region. 
+ In the navigation pane, choose **Security Groups**.
+ Choose **Create security group**.
+ In the **Basic details** section of the security group page, enter the **Security group name** and **Description**. For **VPC**, choose the VPC that you want to create your DB instance in. 
+ In **Inbound rules**, choose **Add rule**.
  + For **Type**, choose **Custom TCP**.
  + For **Source**, choose a **Security group name** or enter the **IP address range (CIDR value)** from where you access the DB instance. If you choose **My IP**, this allows access to the DB instance from the IP address detected in your browser.

+ (Optional) In **Outbound rules**, add rules for outbound traffic. By default, all outbound traffic is allowed.
+ Choose **Create security group**.

You can use this *VPC security group* as the security group for your DB instance when you create it.

**Note**  
If you use a *default VPC*, a default subnet group spanning all of the VPC's subnets is created for you. When you create a DB instance, you can choose the *default VPC* and choose *default* for DB Subnet Group.

After you have completed the setup requirements, you can create a DB instance using your requirements and security group. To do so, follow the instructions in [Creating a DB instance](timestream-for-influx-configuring.md#timestream-for-influx-configuring-create-db).

# Getting started with Timestream for InfluxDB
<a name="timestream-for-influx-getting-started"></a>

In the following examples, you can find out how to create and connect to a DB instance using Amazon Timestream for InfluxDB.

**Note**  
Before you can create or connect to a DB instance, make sure to complete the tasks in [Setting up Amazon Timestream for InfluxDB](timestream-for-influxdb.md#timestream-for-influx-dbi-setting-up).

**Topics**
+ [Creating and connecting to a Timestream for InfluxDB instance](timestream-for-influx-getting-started-creating-db-instance.md)
+ [Creating a new operator token for your InfluxDB instance](timestream-for-influx-getting-started-operator-token.md)

# Creating and connecting to a Timestream for InfluxDB instance
<a name="timestream-for-influx-getting-started-creating-db-instance"></a>

This tutorial creates an Amazon EC2 instance and an Amazon Timestream for InfluxDB DB instance. The tutorial shows you how to write data to the DB instance from the EC2 instance using the Telegraf client. As a best practice, this tutorial creates a private DB instance in a virtual private cloud (VPC). In most cases, other resources in the same VPC, such as EC2 instances, can access the DB instance, but resources outside of the VPC can't access it.

After you complete the tutorial, there will be a public and private subnet in each Availability Zone in your VPC. In one Availability Zone, the EC2 instance will be in the public subnet, and the DB instance will be in the private subnet.

**Note**  
There's no charge for creating an AWS account. However, by completing this tutorial, you might incur costs for the AWS resources you use. You can delete these resources after you complete the tutorial if they are no longer needed.

The following diagram shows the configuration when accessibility is public.

![\[Network diagram showing VPC with public subnet, internet gateway, ENI, and Timestream-InfluxDB database.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/public.png)


**Warning**  
We don't recommend using 0.0.0.0/0 for HTTP access, since you would make it possible for all IP addresses to access your public InfluxDB instance via HTTP. This approach is not acceptable even for a short time in a test environment. Authorize only a specific IP address or range of addresses to access your InfluxDB instances using HTTP for web UI or API access.

This tutorial creates a DB instance running InfluxDB with the AWS Management Console. We will focus only on the DB instance size and DB instance identifier. We will use the default settings for the other configuration options. The DB instance created by this example will be private.

Other settings that you could configure include availability, security, and logging. To create a public DB instance, you must choose to make your instance **Publicly accessible** on the **Connectivity configuration** section. For information about creating DB instances, see [Creating a DB instance](timestream-for-influx-configuring.md#timestream-for-influx-configuring-create-db).

If your instance is not publicly accessible, do the following:
+ Create a host on the VPC of the instance through which you can tunnel traffic.
+ Set up SSH tunneling to the instance. For more information, see [Amazon EC2 instance port forwarding with AWS Systems Manager](https://aws.amazon.com/blogs/mt/amazon-ec2-instance-port-forwarding-with-aws-systems-manager/).
+ In order for the certificate to work, add the following line to the `/etc/hosts` file of your client machine: `127.0.0.1 <DNS>`, where `<DNS>` is the DNS address of your instance.
+ Connect to your instance using the fully qualified domain name, for example, *https://<DNS>:8086*. 
**Note**  
Localhost is unable to validate the certificate because localhost is not part of the certificate SAN.

The following diagram shows the configuration when accessibility is private:

![\[Network diagram showing public and private subnets, security groups, and connections to external services.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/private.png)


## Prerequisites
<a name="timestream-for-influx-getting-started-creating-db-instance-prereq"></a>

Before you begin, complete the steps in the following sections: 
+  Sign up for an AWS account.
+ Create an administrative user.

## Step 1: Create an Amazon EC2 instance
<a name="timestream-for-influx-getting-started-creating-db-instance-step1"></a>

Create an Amazon EC2 instance that you will use to connect to your database.

1. Sign in to the AWS Management Console and open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you want to create the EC2 instance.

1. Choose **EC2 Dashboard**, and then choose **Launch instance**.

1. When the **Launch an instance** page opens, choose the following settings:

   1. Under **Name and tags**, enter `ec2-database-connect` for **Name**.

   1. Under **Application and OS Images (Amazon Machine Image)**, choose **Amazon Linux**, and then select **Amazon Linux 2023 AMI**. Keep the default selections for the other choices.

   1. Under **Instance type**, choose **t2.micro**.

   1. Under **Key pair (login)**, choose a **Key pair name** to use an existing key pair. To create a new key pair for the Amazon EC2 instance, choose **Create new key pair** and then use the **Create key pair** window to create it. For more information about creating a new key pair, see [Create a key pair for your Amazon EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-key-pairs.html) in the *Amazon Elastic Compute Cloud User Guide*.

   1. For **Allow SSH traffic from** in **Network settings**, choose the source of SSH connections to the EC2 instance. You can choose **My IP** if the displayed IP address is correct for SSH connections. Otherwise, you can determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell (SSH). To determine your public IP address, in a different browser window or tab, you can use the service at [checkip.amazonaws.com/](https://checkip.amazonaws.com). An example of an IP address is `192.0.2.1/32`. In many cases, you might connect through an internet service provider (ISP) or from behind your firewall without a static IP address. If so, make sure to determine the range of IP addresses used by client computers.
**Warning**  
We do not recommend using 0.0.0.0/0 for SSH access, since you would make it possible for all IP addresses to access your public EC2 instances using SSH. This approach is not acceptable even for a short time in a test environment. Authorize only a specific IP address or a specific range of addresses to access your EC2 instances using SSH.
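To turn the address reported by checkip.amazonaws.com into a `/32` CIDR value that you can paste into the rule, you can do something like the following (192.0.2.1 is a documentation placeholder; substitute the address you actually saw):

```shell
# Fetch your public IPv4 address (uncomment to use the live service):
# MY_IP=$(curl -s https://checkip.amazonaws.com)
MY_IP="192.0.2.1"   # placeholder documentation address

# A /32 suffix restricts the rule to exactly this one address.
echo "${MY_IP}/32"
```

If you connect from behind an ISP or firewall without a static address, use the range of addresses used by your client computers instead of a single `/32`.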

## Step 2: Create an InfluxDB DB instance
<a name="timestream-for-influx-getting-started-creating-db-instance-step2"></a>

The basic building block of Amazon Timestream for InfluxDB is the DB instance. This environment is where you run your InfluxDB databases.

In this example, you will create a DB instance running the InfluxDB database engine with a db.influx.large DB instance class.

1. Sign in to the AWS Management Console and open the Amazon Timestream for InfluxDB console at [https://console.aws.amazon.com/timestream/](https://console.aws.amazon.com/timestream/).

1. In the upper-right corner of the Amazon Timestream for InfluxDB console, choose the AWS Region in which you want to create the DB instance.

1. In the navigation pane, choose **InfluxDB Databases**.

1. Choose **Create InfluxDB database**.  
![\[Empty InfluxDB databases interface with option to create a new database.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/CreateInfluxDatabase.png)

1. In the **Deployment settings** section, select **Cluster with read replicas**. Choose **View subscription options** to start a subscription for the read replica add-on. For more information, see [Read replica licensing through AWS Marketplace](timestream-for-influx-rr-licensing.md).

1. In the **Database credentials** section, enter `KronosTest-1` for **DB cluster name**.

1. Provide the InfluxDB basic configuration parameters: **Initial username**, **Initial organization name**, **Initial bucket name** and **Password**.
**Important**  
You won't be able to view the user password again. Without your password, you can't access your instance to obtain an operator token. If you don't record it, you might have to change it. See [Creating a new operator token for your InfluxDB instance](timestream-for-influx-getting-started-operator-token.md).  
If you need to change the user password after the DB instance is available, you can modify the DB instance to do so. For more information about modifying a DB instance, see [Updating DB instances](timestream-for-influx-managing-modifying-db.md).  

![\[InfluxDB database creation interface with deployment settings and credentials input fields.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/CreateInfluxDatabaseDetails.png)


1. In the **Instance configuration** section, select the **db.influx.large** DB instance class.

1. In the **Storage configuration** section, select **Influx IO Included (3K)** for **Storage type**.

1. In the **Connectivity configuration** section, select **IPv4** for the **Network type**. Make sure your InfluxDB instance is in the same VPC as your newly created EC2 instance. Under **Public access**, select **Not publicly accessible** to make your DB instance private.  
![\[Connectivity configuration settings for database access, including network type, VPC, subnets, and security options.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/ConnectivityConfiguration.png)

1. In the **Failover settings** and **Parameter group settings** sections, keep the default values.

1. Configure your logs in **Log delivery settings** and create tags (optional). For more information about logs, see [Setup to view InfluxDB logs on Timestream Influxdb Instances](timestream-for-influx-managing-view-influx-logs.md). For more details about adding tags, see [Adding tags and labels to resources](tagging-keyspaces-influxdb.md).

1. Choose **Create InfluxDB database**.

1. In the **Databases** list, choose the name of your new InfluxDB instance to show its details. The DB instance has a status of **Creating** until it is ready to use.

You can connect to the DB instance when the status changes to **Available**. Depending on the DB instance class and the amount of storage, it can take up to 20 minutes before the new instance is available.

**Important**  
At this time, you can't modify compute (instance types) and storage (storage types) configurations of existing instances.
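You can also watch the instance status from a terminal. The following is a sketch using the AWS CLI; the `timestream-influxdb` command namespace, the `--identifier` parameter, and the `status` output field reflect the service's `GetDbInstance` API, but verify the exact names against your installed CLI version:

```shell
# List all Timestream for InfluxDB instances in the current Region.
aws timestream-influxdb list-db-instances

# Poll one instance's status; repeat until it reports "available".
aws timestream-influxdb get-db-instance \
    --identifier <db-instance-id> \
    --query 'status'
```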

## Step 3: Access the InfluxDB UI
<a name="timestream-for-influx-getting-started-creating-db-instance-step-3"></a>

To access the InfluxDB UI from a private Timestream for InfluxDB DB instance, you must connect from within the same subnet and security group. One way to facilitate this connection is to create a bastion host within the private subnet.

A bastion host is a special-purpose server that acts as a secure entry point to critical systems, protecting your network from external access. It serves as a gateway between your secure internal network and the outside world.

**Note**  
For publicly accessible Timestream for InfluxDB DB instances, you can access the InfluxDB UI via the **InfluxDB UI** button on the instance details page in the console. Note that this button will be disabled for instances that are not publicly accessible.  
If you have a public DB instance, connect to the InfluxDB UI via the console and proceed to [Step 4: Send Telegraf data to your InfluxDB instance](#timestream-for-influx-getting-started-creating-db-instance-step4).

![\[Summary interface showing details of a private InfluxDB database. The InfluxDB UI button is disabled.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/InfluxDB-database-summary.png)


Follow these steps to create and configure your bastion host: 

1. **Create a bastion host:** To create a bastion host, you can launch a new EC2 instance or use an existing one. Ensure that the instance has the necessary network setup to access the security group you used to create the private Timestream for InfluxDB instance you are trying to access.

1. **Connect to the InfluxDB UI:** Once you have created a bastion host, you can use the endpoint displayed in the console to connect to the InfluxDB UI. The endpoint will be in the format `<db-identifier>-<*>.timestream-influxdb.<region>.on.aws`. In China, it will be `<db-identifier>-<*>.timestream-influxdb.<region>.on.amazonwebservices.com.rproxy.govskope.ca.cn`.

1. **Configure your bastion host for local forwarding:** To set up local forwarding, use AWS Systems Manager (SSM) Session Manager. Run the following command, replacing *bastion-ec2-instance-id* with the ID of your bastion host instance, *endpoint* with the endpoint displayed in the console, and *port-number* with the port number you want to use:

   ```
   aws ssm start-session --target bastion-ec2-instance-id \
   --document-name AWS-StartPortForwardingSessionToRemoteHost \
   --parameters '{"host":["endpoint"], "portNumber":["port-number"], "localPortNumber":["port-number"]}'
   ```

   You may be prompted to install the SessionManagerPlugin. For more details, see [Install the Session Manager plugin for the AWS CLI](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html).

1. **Access the InfluxDB UI:** After completing the above steps, you can access the InfluxDB UI at http://localhost:*port-number*. You will need to acknowledge the "not secure" message.

1. **Enable domain name validation:** To enable domain name validation, add the following line to your hosts file: `/etc/hosts` (Linux), `/private/etc/hosts` (macOS), or `C:\Windows\System32\drivers\etc\hosts` (Windows).

   ```
   127.0.0.1    endpoint
   ```

1. You can now access the InfluxDB UI using https://*endpoint*:*port-number*.
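The endpoint format shown earlier is regular enough to script against. As a small illustration with standard shell tools (the endpoint value below is made up), you can extract the AWS Region and build the hosts-file line used in the last two steps:

```shell
ENDPOINT="mydb-abc123xyz.timestream-influxdb.us-west-2.on.aws"

# The third dot-separated field of the endpoint is the AWS Region.
REGION=$(echo "$ENDPOINT" | cut -d. -f3)
echo "$REGION"

# Hosts-file entry that maps the endpoint to the forwarded local port:
echo "127.0.0.1    $ENDPOINT"
```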

## Step 4: Send Telegraf data to your InfluxDB instance
<a name="timestream-for-influx-getting-started-creating-db-instance-step4"></a>

You can now start sending telemetry data to your InfluxDB DB instance using the Telegraf agent. In this example, you'll install and configure a Telegraf agent to send performance metrics to your InfluxDB DB instance.

1. After you connect to the InfluxDB UI, you should see a new browser window with a login prompt. Enter the credentials you used earlier to create your InfluxDB DB instance.

1. In the left navigation pane, choose the arrow icon and select **API Tokens**.

1. For this test, choose **Generate API Token**. Select **All Access API Token** from the dropdown list.
**Note**  
For production scenarios, we recommend creating tokens with specific access to the required buckets that are built for specific Telegraf needs.  
![\[Dialog for generating an all-access API token with a warning and description field.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/AllAccessAPIToken.png)

1. Your token will appear on the screen.
**Important**  
Make sure to copy and save the token since it will not be displayed again.

1. Connect to the EC2 instance that you created earlier by following the steps in [Connect to your Linux instance using SSH](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) in the *Amazon Elastic Compute Cloud User Guide*.

   We recommend that you connect to your EC2 instance using SSH. If the SSH client utility is installed on Windows, Linux, or Mac, you can connect to the instance using the following command format:

   ```
   ssh -i location_of_pem_file ec2-user@ec2-instance-public-dns-name
   ```

   For example, assume that `ec2-database-connect-key-pair.pem` is stored in `/dir1` on Linux, and the public IPv4 DNS for your EC2 instance is `ec2-12-345-678-90.compute-1.amazonaws.com`. Your SSH command would look as follows:

   ```
   ssh -i /dir1/ec2-database-connect-key-pair.pem ec2-user@ec2-12-345-678-90.compute-1.amazonaws.com
   ```

1. Get the latest version of Telegraf installed on your instance. To do this, use the following command:

   ```
   cat <<EOF | sudo tee /etc/yum.repos.d/influxdata.repo
   [influxdata]
   name = InfluxData Repository - Stable
   baseurl = https://repos.influxdata.com/stable/\$basearch/main
   enabled = 1
   gpgcheck = 1
   gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key
   EOF
   
   sudo yum install telegraf
   ```

1. Configure your Telegraf instance.
**Note**  
If telegraf.conf does not exist or it does not contain a `timestream` section, you can generate one with:  

   ```
   telegraf --section-filter agent:inputs:outputs --input-filter cpu:mem --output-filter timestream config > telegraf.conf
   ```

   1. Edit the configuration file usually located at `/etc/telegraf`.

      ```
      sudo nano /etc/telegraf/telegraf.conf
      ```

   1. Configure the input plugins for CPUs, memory metrics, and disk usage.

      ```
      [[inputs.cpu]]
        percpu = true
        totalcpu = true
        collect_cpu_time = false
        report_active = false
      
      [[inputs.mem]]
      
      [[inputs.disk]]
        ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
      ```

   1. Configure the output plugin to send data to your InfluxDB DB instance and save your changes.

      ```
      [[outputs.influxdb_v2]]
         urls = ["https://<endpoint>:8086"]
         token = "<your_telegraf_token>"
         organization = "your_org"
         bucket = "your_bucket"
         timeout = "5s"
      ```

   1. Configure the Timestream target.

      ```
      # Configuration for sending metrics to Amazon Timestream.
      [[outputs.timestream]]
      
        ## Amazon Region and credentials
        region = "us-east-1"
        access_key = "<AWS key here>"
        secret_key = "<AWS secret key here>"
        database_name = "<timestream database name>" # needs to exist
      
        ## Specifies if the plugin should describe on start.
        describe_database_on_start = false
        mapping_mode = "multi-table" # allows multiple tables for each input metrics
      
        create_table_if_not_exists = true
        create_table_magnetic_store_retention_period_in_days = 365
        create_table_memory_store_retention_period_in_hours = 24
      
        use_multi_measure_records = true # Important to use multi-measure records
        measure_name_for_multi_measure_records = "telegraf_measure"
        max_write_go_routines = 25
      ```

1. Enable and start the Telegraf service.

   ```
   $ sudo systemctl enable telegraf
   $ sudo systemctl start telegraf
   ```
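Before relying on the service, you can sanity-check the setup. Telegraf's `--test` flag runs the configured inputs once and prints the collected metrics to stdout without writing them to any output, and `journalctl` shows whether the running service is delivering data successfully:

```shell
# Run the inputs once and print collected metrics without sending them:
telegraf --config /etc/telegraf/telegraf.conf --test

# Confirm the service is active and follow its logs:
sudo systemctl status telegraf
journalctl -u telegraf -f
```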

## Step 5: Delete the Amazon EC2 instance and the InfluxDB DB instance
<a name="timestream-for-influx-getting-started-creating-db-instance-step5"></a>

After you explore the Telegraf-generated data using your InfluxDB DB instance with the InfluxDB UI, delete both your EC2 and your InfluxDB DB instances so you are no longer charged for them.

**To delete the EC2 instance:**

1. Sign in to the AWS Management Console and open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**.

1. Select the checkbox next to the EC2 instance's name, and then select **Instance state**. Choose **Terminate (delete) instance**.

1. Choose **Terminate (delete)** when prompted for confirmation.

For more information about deleting an EC2 instance, see [Terminate Amazon EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html) in the *Amazon Elastic Compute Cloud User Guide*.

**To delete the DB instance with no final DB snapshot:**

1. Sign in to the AWS Management Console and open the Amazon Timestream for InfluxDB console at [https://console.aws.amazon.com/timestream/](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **InfluxDB databases**.

1. Select the DB instance that you want to delete, and then choose **Delete**.

1. Confirm the deletion and choose **Delete**.

# Creating a new operator token for your InfluxDB instance
<a name="timestream-for-influx-getting-started-operator-token"></a>

If you need to get the Operator Token for your new InfluxDB instance, perform the following steps:

1. To change your operator token, we recommend using the Influx CLI. For instructions, see [Install and use the Influx CLI](https://docs.influxdata.com/influxdb/v2/tools/influx-cli/).

1. Configure your CLI to use `--username-password` to be able to create the operator:

   ```
   influx config create --config-name CONFIG_NAME1  --host-url "https://yourinstanceid.eu-central-1.timestream-influxdb.amazonaws.com:8086" --org [YOURORG]  --username-password [YOURUSERNAME] --active
   ```

1. Create your new operator token. You will be asked for your password to confirm this step.

   ```
   influx auth create --org [YOURORG] --operator
   ```

**Important**  
Once a new operator token has been created, you will need to update any client that is currently using the old one.

# Migrating data from self-managed InfluxDB to Timestream for InfluxDB
<a name="timestream-for-influx-getting-started-migrating-data"></a>

The [Influx migration script](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/tools/python/influx-migration) is a Python script that migrates data between InfluxDB OSS instances, whether those instances are managed by AWS or not.

InfluxDB is a time series database. InfluxDB contains *points*, which contain a number of key-value pairs and a timestamp. When points are grouped by key-value pairs, they form a series. A series is grouped by a string identifier called a *measurement*. InfluxDB is often used for operations monitoring, IoT data, and analytics. A *bucket* is a kind of container within InfluxDB to store data. AWS-managed InfluxDB is InfluxDB within the AWS ecosystem. InfluxDB provides the InfluxDB v2 API for accessing data and making changes to the database. The InfluxDB v2 API is what the Influx migration script uses to migrate data.
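For reference, a single point in InfluxDB's line protocol ties these terms together. In this illustrative example, `cpu` is the measurement, `host=server01` is a tag key-value pair, `usage_user` is a field, and the trailing integer is a nanosecond timestamp:

```
cpu,host=server01 usage_user=12.5 1700000000000000000
```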
+ The Influx migration script can migrate buckets and their metadata, migrate all buckets from all organizations, or do a full migration, which replaces all data on the destination instance.
+ The script backs up data from the source instance locally, on whatever system executes the script, then restores the data to the destination instance. The data is kept in `influxdb-backup-<timestamp>` directories, one for each migration.
+ The script provides a number of options and configurations including mounting S3 buckets to limit local storage usage during migration and choosing which organizations to use during migration.

**Topics**
+ [Preparation](timestream-for-influx-getting-started-migrating-data-prepare.md)
+ [How to use scripts](timestream-for-influx-getting-started-migrating-data-using-script.md)
+ [Migration Overview](timestream-for-influx-getting-started-migrating-data-overview.md)

# Preparation
<a name="timestream-for-influx-getting-started-migrating-data-prepare"></a>

Data migration for InfluxDB is accomplished with a Python script that utilizes InfluxDB CLI features and the InfluxDB v2 API. Execution of the migration script requires the following environment configuration:
+ **Supported Versions:** InfluxDB and the Influx CLI version 2.3 or later are supported.
+ **Token Environment Variables**
  + Create the environment variable `INFLUX_SRC_TOKEN` containing the token for your source InfluxDB instance.
  + Create the environment variable `INFLUX_DEST_TOKEN` containing the token for your destination InfluxDB instance.
+ **Python 3**
  + Check installation: `python3 --version`.
  + If not installed, install from the Python website. Version 3.7 or later is required. On Windows, the default Python 3 alias is simply `python`.
  + The Python module `requests` is required. Install it with `python3 -m pip install requests`.
  + The Python module `influxdb_client` is required. Install it with `python3 -m pip install influxdb_client`.
+ **InfluxDB CLI**
  + Confirm installation: `influx version`.
  + If not installed, follow the installation guide in the [InfluxDB documentation](https://docs.influxdata.com/influxdb/cloud/tools/influx-cli/#install-the-influx-cli). 

    Add `influx` to your `PATH`.
+ **S3 Mounting Tools (Optional)**

  When S3 mounting is used, all backup files are stored in a user-defined S3 bucket. S3 mounting can be useful to save space on the executing machine or when backup files need to be shared. If S3 mounting isn't used (by omitting the `--s3-bucket` option), a local `influxdb-backup-<millisecond timestamp>` directory is created to store backup files in the same directory from which the script was run.

  For Linux: [mountpoint-s3](https://github.com/awslabs/mountpoint-s3). 

  For Windows: [rclone](https://rclone.org/) (Prior rclone configuration is needed).
+ **Disk Space**
  + The migration process automatically creates unique directories to store sets of backup files and retains these backup directories in either S3 or on the local filesystem, depending on the program arguments provided.
  + Ensure there is enough disk space for database backup, ideally double the size of the existing InfluxDB database if you choose to omit the `--s3-bucket` option and use local storage for backup and restoration.
  + Check space with `df -h` (UNIX/Linux) or by checking drive properties on Windows.
+ **Direct Connection**

  Ensure a direct network connection exists between the system running the migration script and the source and destination systems. `influx ping --host <host>` is one way to verify a direct connection.

# How to use scripts
<a name="timestream-for-influx-getting-started-migrating-data-using-script"></a>

A simple example of running the script, which migrates a single bucket, is the following command:

```
python3 influx_migration.py --src-host <source host> --src-bucket <source bucket> --dest-host <destination host>
```

All options can be viewed by running:

```
python3 influx_migration.py -h
```

**Usage**

```
influx_migration.py [-h] [--src-bucket SRC_BUCKET] [--dest-bucket DEST_BUCKET] [--src-host SRC_HOST] --dest-host DEST_HOST [--full] [--confirm-full] [--src-org SRC_ORG] [--dest-org DEST_ORG] [--csv] [--retry-restore-dir RETRY_RESTORE_DIR] [--dir-name DIR_NAME] [--log-level LOG_LEVEL] [--skip-verify] [--s3-bucket S3_BUCKET]
```

**Options**
+ **--confirm-full** (optional): Using `--full` without `--csv` will replace all tokens, users, buckets, dashboards, and any other key-value data in the destination database with the tokens, users, buckets, dashboards, and any other key-value data in the source database. `--full` with `--csv` only migrates all buckets and bucket metadata, including bucket organizations. This option (`--confirm-full`) will confirm a full migration and proceed without user input. If this option is not provided, and `--full` has been provided and `--csv` not provided, then the script will pause for execution and wait for user confirmation. This is a critical action; proceed with caution. Defaults to false.
+ **--csv** (optional): Whether to use csv files for backing up and restoring. If `--full` is passed as well, then all user-defined buckets in all organizations will be migrated, not system buckets, users, tokens, or dashboards. If a singular organization is desired for all buckets in the destination server instead of their already-existing source organizations, use `--dest-org`.
+ **--dest-bucket DEST_BUCKET** (optional): The name of the InfluxDB bucket in the destination server; must not be an already existing bucket. Defaults to the value of `--src-bucket`, or `None` if `--src-bucket` is not provided.
+ **--dest-host DEST_HOST**: The host for the destination server. Example: http://localhost:8086.
+ **--dest-org DEST_ORG** (optional): The name of the organization to restore buckets to in the destination server. If this is omitted, then all migrated buckets from the source server will retain their original organization, and migrated buckets may not be visible in the destination server without creating and switching organizations. This value will be used in all forms of restoration, whether a single bucket, a full migration, or any migration using csv files for backup and restoration.
+ **--dir-name DIR_NAME** (optional): The name of the backup directory to create. Defaults to `influxdb-backup-<timestamp>`. Must not already exist.
+ **--full** (optional): Whether to perform a full restore, replacing all data on the destination server with all data from the source server from all organizations, including all key-value data such as tokens, dashboards, and users. Overrides `--src-bucket` and `--dest-bucket`. If used with `--csv`, only migrates data and metadata of buckets. Defaults to false.
+ **-h, --help**: Shows the help message and exits.
+ **--log-level LOG_LEVEL** (optional): The log level to be used during execution. Options are debug, error, and info. Defaults to info.
+ **--retry-restore-dir RETRY_RESTORE_DIR** (optional): Directory to use for restoration when a previous restore failed. Skips backup and directory creation, fails if the directory doesn't exist, and can be a directory within an S3 bucket. If a restoration fails, the backup directory path that can be used for restoration will be indicated relative to the current directory. S3 buckets will be in the form `influxdb-backups/<s3 bucket>/<backup directory>`. The default backup directory name is `influxdb-backup-<timestamp>`.
+ **--s3-bucket S3_BUCKET** (optional): The name of the S3 bucket to use to store backup files. On Linux, this is simply the name of the S3 bucket, such as `amzn-s3-demo-bucket1`, given the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables have been set or `${HOME}/.aws/credentials` exists. On Windows, this is the `rclone` configured remote and bucket name, such as `my-remote:amzn-s3-demo-bucket1`. All backup files will be left in the S3 bucket after migration in a created `influxdb-backups-<timestamp>` directory. A temporary mount directory named `influx-backups` will be created in the directory from which this script is run. If not provided, then all backup files will be stored locally in a created `influxdb-backups-<timestamp>` directory from which this script is run.
+ **--skip-verify** (optional): Skip TLS certificate verification.
+ **--src-bucket SRC_BUCKET** (optional): The name of the InfluxDB bucket in the source server. If not provided, then `--full` must be provided.
+ **--src-host SRC_HOST** (optional): The host for the source server. Defaults to http://localhost:8086.

As noted previously, `mountpoint-s3` and `rclone` are needed if `--s3-bucket` is to be used, but can be ignored if the user doesn't provide a value for `--s3-bucket`, in which case backup files will be stored in a unique directory locally.
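Combining the options above, a full migration that backs up to csv files and consolidates every bucket under one destination organization might look like the following (hosts, organization name, and destination endpoint are placeholders):

```shell
python3 influx_migration.py --full --csv \
    --src-host http://localhost:8086 \
    --dest-host https://<destination endpoint>:8086 \
    --dest-org my-dest-org \
    --log-level debug
```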

# Migration Overview
<a name="timestream-for-influx-getting-started-migrating-data-overview"></a>

After meeting the prerequisites:

1. **Run the migration script:** Using a terminal app of your choice, run the Python script to transfer data from the source InfluxDB instance to the destination InfluxDB instance.

1. **Provide credentials:** Provide host addresses and ports as CLI options.

1. **Verify data:** Ensure the data is correctly transferred by:

   1. Using the InfluxDB UI and inspecting buckets.

   1. Listing buckets with `influx bucket list -t <destination token> --host <destination host address> --skip-verify`.

   1. Using `influx v1 shell -t <destination token> --host <destination host address> --skip-verify` and running `SELECT * FROM <migrated bucket>.<retention period>.<measurement name> LIMIT 100` to view the contents of a bucket, or `SELECT COUNT(*) FROM <migrated bucket>.<retention period>.<measurement name>` to verify the correct number of records have been migrated.
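The verification queries above can be assembled programmatically. The following is a minimal sketch; the bucket, retention period, and measurement names passed in are placeholders:

```python
def count_query(bucket, retention, measurement):
    # InfluxQL query that counts migrated records, for comparison against
    # the same count on the source server.
    return f'SELECT COUNT(*) FROM "{bucket}"."{retention}"."{measurement}"'

def preview_query(bucket, retention, measurement, limit=100):
    # InfluxQL query that previews a sample of migrated records.
    return f'SELECT * FROM "{bucket}"."{retention}"."{measurement}" LIMIT {limit}'
```

Run both against the source and destination servers and compare the counts.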

**Example run**  

1. Open a terminal app of your choice and make sure the required prerequisites are properly installed:  
![\[Script prerequisites.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/script-pre-reqs.png)

1. Navigate to the migration script:  
![\[Script location\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/script-navigate.png)

1. Prepare the following information:

   1. Name of the source bucket to be migrated.

   1. (Optional) Choose a new bucket name for the migrated bucket in the destination server.

   1. Root token for source and destination influx instances.

   1. Host address of source and destination influx instances.

   1. (Optional) S3 bucket name and credentials; AWS Command Line Interface credentials should be set in the OS environment variables.

      ```
      # AWS credentials (for Timestream testing)
      export AWS_ACCESS_KEY_ID="xxx"
      export AWS_SECRET_ACCESS_KEY="xxx"
      ```

   1. Construct the command as:

      ```
      python3 influx_migration.py --src-bucket [amzn-s3-demo-source-bucket]  --dest-bucket [amzn-s3-demo-destination-bucket] --src-host [source host] --dest-host [dest host] --s3-bucket [amzn-s3-demo-bucket2](optional) --log-level debug
      ```

   1. Execute the script:  
![\[Script execution\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/script-execution.png)

   1. Wait for the script to finish executing.

   1. Check the newly migrated bucket for data integrity and review `performance.txt`. This file, located in the directory where the script was run, contains basic information on how long each step took.
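The constructed command above can also be assembled programmatically. The following is a minimal sketch; the option names match the script's CLI, and the host and bucket values you pass in are placeholders:

```python
def build_migration_command(src_bucket, src_host, dest_host,
                            dest_bucket=None, s3_bucket=None, log_level=None):
    # Assemble the influx_migration.py invocation; optional flags are
    # appended only when a value is supplied.
    cmd = ["python3", "influx_migration.py",
           "--src-bucket", src_bucket,
           "--src-host", src_host,
           "--dest-host", dest_host]
    if dest_bucket:
        cmd += ["--dest-bucket", dest_bucket]
    if s3_bucket:
        cmd += ["--s3-bucket", s3_bucket]
    if log_level:
        cmd += ["--log-level", log_level]
    return cmd
```

Passing the resulting list to `subprocess.run(cmd)` avoids the quoting issues that can arise from joining it into a single shell string.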

## Migration scenarios
<a name="timestream-for-influx-migration-scenarios"></a>

**Example 1: Simple Migration Using Local Storage**  
You want to migrate a single bucket, `amzn-s3-demo-primary-bucket`, from the source server (`http://localhost:8086`) to a destination server (`http://dest-server-address:8086`).  
Ensure that you have TCP access (for HTTP) to both machines hosting the InfluxDB instances on port 8086, and that you have stored the source and destination tokens in the environment variables `INFLUX_SRC_TOKEN` and `INFLUX_DEST_TOKEN`, respectively, for added security. Then run:  

```
python3 influx_migration.py --src-bucket amzn-s3-demo-primary-bucket --src-host http://localhost:8086 --dest-host http://dest-server-address:8086
```
The output should look similar to the following:  

```
INFO: influx_migration.py: Backing up bucket data and metadata using the InfluxDB CLI
2023/10/26 10:47:15 INFO: Downloading metadata snapshot
2023/10/26 10:47:15 INFO: Backing up TSM for shard 1
2023/10/26 10:47:15 INFO: Backing up TSM for shard 8245
2023/10/26 10:47:15 INFO: Backing up TSM for shard 8263
[More shard backups . . .]
2023/10/26 10:47:20 INFO: Backing up TSM for shard 8240
2023/10/26 10:47:20 INFO: Backing up TSM for shard 8268
2023/10/26 10:47:20 INFO: Backing up TSM for shard 2
INFO: influx_migration.py: Restoring bucket data and metadata using the InfluxDB CLI
2023/10/26 10:47:20 INFO: Restoring bucket "96c11c8876b3c016" as "amzn-s3-demo-primary-bucket"
2023/10/26 10:47:21 INFO: Restoring TSM snapshot for shard 12772
2023/10/26 10:47:22 INFO: Restoring TSM snapshot for shard 12773
[More shard restores . . .]
2023/10/26 10:47:28 INFO: Restoring TSM snapshot for shard 12825
2023/10/26 10:47:28 INFO: Restoring TSM snapshot for shard 12826
INFO: influx_migration.py: Migration complete
```
A directory named `influxdb-backup-<timestamp>`, containing the backup files, will be created in the directory from where the script was run.



**Example 2: Full Migration Using Local Storage and Debug Logging**  
Same as above, except you want to migrate all buckets, tokens, users, and dashboards, deleting the buckets in the destination server and proceeding without user confirmation by using the `--confirm-full` option. You also want to see the performance measurements, so you enable debug logging.  

```
python3 influx_migration.py --full --confirm-full --src-host http://localhost:8086 --dest-host http://dest-server-address:8086 --log-level debug
```
The output should look similar to the following:  

```
INFO: influx_migration.py: Backing up bucket data and metadata using the InfluxDB CLI
2023/10/26 10:55:27 INFO: Downloading metadata snapshot
2023/10/26 10:55:27 INFO: Backing up TSM for shard 6952
2023/10/26 10:55:27 INFO: Backing up TSM for shard 6953
[More shard backups . . .]
2023/10/26 10:55:36 INFO: Backing up TSM for shard 8268
2023/10/26 10:55:36 INFO: Backing up TSM for shard 2
DEBUG: influx_migration.py: backup started at 2023-10-26 10:55:27 and took 9.41 seconds to run.
INFO: influx_migration.py: Restoring bucket data and metadata using the InfluxDB CLI
2023/10/26 10:55:36 INFO: Restoring KV snapshot
2023/10/26 10:55:38 WARN: Restoring KV snapshot overwrote the operator token, ensure following commands use the correct token
2023/10/26 10:55:38 INFO: Restoring SQL snapshot
2023/10/26 10:55:39 INFO: Restoring TSM snapshot for shard 6952
2023/10/26 10:55:39 INFO: Restoring TSM snapshot for shard 6953
[More shard restores . . .]
2023/10/26 10:55:49 INFO: Restoring TSM snapshot for shard 8268
2023/10/26 10:55:49 INFO: Restoring TSM snapshot for shard 2
DEBUG: influx_migration.py: restore started at 2023-10-26 10:55:36 and took 13.51 seconds to run.
INFO: influx_migration.py: Migration complete
```



**Example 3: Full Migration Using CSV, Destination Organization, and S3 Bucket**  
Same as the previous example, but run on Linux or macOS and storing the backup files in the S3 bucket `amzn-s3-demo-bucket`. This avoids backup files exhausting local storage capacity.  

```
python3 influx_migration.py --full --src-host http://localhost:8086 --dest-host http://dest-server-address:8086 --csv --dest-org MyOrg --s3-bucket amzn-s3-demo-bucket
```
The output should look similar to the following:  

```
INFO: influx_migration.py: Creating directory influxdb-backups
INFO: influx_migration.py: Mounting amzn-s3-demo-influxdb-migration-bucket
INFO: influx_migration.py: Creating directory influxdb-backups/amzn-s3-demo-bucket/influxdb-backup-1698352128323
INFO: influx_migration.py: Backing up bucket data and metadata using the InfluxDB v2 API
INFO: influx_migration.py: Restoring bucket data and metadata from csv
INFO: influx_migration.py: Restoring bucket amzn-s3-demo-some-bucket
INFO: influx_migration.py: Restoring bucket amzn-s3-demo-another-bucket
INFO: influx_migration.py: Restoring bucket amzn-s3-demo-primary-bucket
INFO: influx_migration.py: Migration complete
INFO: influx_migration.py: Unmounting influxdb-backups
INFO: influx_migration.py: Removing temporary mount directory
```
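The backup paths shown in the output follow the `influxdb-backups/<s3 bucket>/influxdb-backup-<timestamp>` convention. A minimal sketch of reproducing that layout, assuming (based on the sample output) that the timestamp is in milliseconds:

```python
import time

def s3_backup_path(s3_bucket, timestamp_ms=None):
    # Path under the temporary mount directory where backup files land, e.g.
    # influxdb-backups/amzn-s3-demo-bucket/influxdb-backup-1698352128323
    ts = timestamp_ms if timestamp_ms is not None else int(time.time() * 1000)
    return f"influxdb-backups/{s3_bucket}/influxdb-backup-{ts}"
```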

# Configuring a DB instance
<a name="timestream-for-influx-configuring"></a>

This section shows how to set up your Amazon Timestream for InfluxDB DB instance. Before creating a DB instance, decide on the DB instance class that will run the DB instance. Also, decide where the DB instance will run by choosing an AWS Region. Next, create the DB instance.

You can configure a DB instance with a DB parameter group. A DB parameter group acts as a container for engine configuration values that are applied to one or more DB instances.

The parameters that are available depend on the DB engine and DB engine version. You can specify a DB parameter group when you create a DB instance. You can also modify a DB instance to specify them. 

**Important**  
At this time, you can't modify the compute (instance type) and storage (storage type) configuration of existing instances.

## Creating a DB instance
<a name="timestream-for-influx-configuring-create-db"></a>

**Using the console**

1. Sign in to the AWS Management Console and open [Amazon Timestream for InfluxDB](https://console.aws.amazon.com/timestream/). 

1. In the upper-right corner of the Amazon Timestream for InfluxDB console, choose the AWS Region in which you want to create the DB instance.

1. In the navigation pane, choose **InfluxDB Databases**.

1. Choose **Create Influx database**.

1. For **DB Instance Identifier**, enter a name that will identify your instance.

1. Provide the InfluxDB basic configuration parameters **User Name, Organization, Bucket Name and Password**.
**Important**  
Your user name, organization, bucket name and password will be stored as a secret in AWS Secrets Manager that will be created for your account.

   If you need to change the user password after the DB instance is available, you can modify using the [Influx CLI](https://docs.influxdata.com/influxdb/v2/admin/users/change-password/).

1. For **DB Instance Class**, select an instance size that best fits your workload needs. 

1. For **DB Storage Class**, select a storage class that fits your needs. In all cases, you only need to configure the allocated storage. 

1. In the **Connectivity configuration** section, make sure your InfluxDB instance is in the same subnet as the clients that require connectivity to your Timestream for InfluxDB DB instance. You can also choose to make your DB instance publicly accessible. 

1. Choose **Create Influx database**. 

1. In the **Databases** list, choose the name of your new InfluxDB instance to show its details. The DB instance has a status of **Creating** until it is ready to use. 

1. When the status changes to **Available**, you can connect to the DB instance. Depending on the DB instance class and the amount of storage, it can take up to 20 minutes before the new instance is available.

 **Using the CLI**

To create a DB instance by using the AWS Command Line Interface, call the `create-db-instance` command with the following parameters:

```
--name
--vpc-subnet-ids
--vpc-security-group-ids
--db-instance-type
--db-storage-type
--username
--organization
--password
--allocated-storage
```

For information about each setting, see [Settings for DB instances](#timestream-for-influx-configuring-create-db-settings).

**Example: Using default engine configs**  

For Linux, macOS, or Unix:

```
aws timestream-influxdb create-db-instance \
    --name myinfluxDbinstance \
    --allocated-storage 400 \
    --db-instance-type db.influx.4xlarge \
    --vpc-subnet-ids subnetid1 subnetid2 \
    --vpc-security-group-ids mysecuritygroup \
    --username masterawsuser \
    --password mymasterpassword \
    --db-storage-type InfluxIOIncludedT2
```

For Windows:

```
aws timestream-influxdb create-db-instance ^
    --name myinfluxDbinstance ^
    --allocated-storage 400 ^
    --db-instance-type db.influx.4xlarge ^
    --vpc-subnet-ids subnetid1 subnetid2 ^
    --vpc-security-group-ids mysecuritygroup ^
    --username masterawsuser ^
    --password mymasterpassword ^
    --db-storage-type InfluxIOIncludedT2
```

 **Using the API**

To create a DB instance by using the Timestream for InfluxDB API, call the `CreateDbInstance` operation with the same parameters as the CLI command.

For information about each setting, see [Settings for DB instances](#timestream-for-influx-configuring-create-db-settings).

**Important**  
As part of the `DbInstance` response object, you receive an `influxAuthParametersSecretArn`. This holds the ARN of a Secrets Manager secret in your account. It is only populated after your InfluxDB DB instance is available. The secret contains the Influx authentication parameters provided during the `CreateDbInstance` process. This is a read-only copy: any updates, modifications, or deletions to this secret don't affect the created DB instance. If you delete this secret, the API response will still refer to the deleted secret ARN.

Once you have finished creating your Timestream for InfluxDB DB instance, we recommend that you download, install, and configure the influx CLI.

The influx CLI provides a simple way to interact with InfluxDB from a command line. For detailed installation and setup instructions, see [Use the Influx CLI](https://docs.influxdata.com/influxdb/v2/tools/influx-cli/).

## Settings for DB instances
<a name="timestream-for-influx-configuring-create-db-settings"></a>

You can create a DB instance using the console, the `create-db-instance` CLI command, or the `CreateDBInstance` Timestream for InfluxDB API operation.

The following table provides details about settings that you choose when you create a DB instance. 


| Console Setting | Description | CLI option and Timestream API parameter | 
| --- | --- | --- | 
| Allocated storage | The amount of storage to allocate for your DB instance (in gibibytes). In some cases, allocating a higher amount of storage for your DB instance than the size of your database can improve I/O performance. For more information, see [InfluxDB instance storage](timestream-for-influxdb.md#timestream-for-influx-dbi-storage). | CLI: `allocated-storage` API: `allocatedstorage` | 
| Bucket Name | A name for the bucket to initialize the InfluxDB instance with.  | CLI: `bucket` API: `bucket` | 
| DB instance type | The configuration for your DB instance. For example, a db.influx.large DB instance class has 16 GiB memory, 2 vCPUs, memory optimized. If possible, choose a DB instance type large enough that a typical query working set can be held in memory. When working sets are held in memory, the system can avoid writing to disk, which improves performance. For more information, see [DB instance class types](timestream-for-influxdb.md#timestream-for-influx-dbi-classtypes).   | CLI: `db-instance-type` API: `Dbinstancetype` | 
| DB instance identifier |  The name for your DB instance. Name your DB instances in the same way that you name your on-premises servers. Your DB instance identifier can contain up to 63 alphanumeric characters, and must be unique for your account in the AWS Region you chose.  | CLI: `db-instance-identifier` API: `Dbinstanceidentifier` | 
| DB parameter group | A parameter group for your DB instance. You can choose the default parameter group, or you can create a custom parameter group. For more information, see [Working with DB parameter groups](timestream-for-influx-db-connecting.md#timestream-for-influx-working-with-parameter-groups)..  | CLI: `db-parameter-group-name` API: `DBParameterGroupName` | 
| Log Delivery Setting | The name of the S3 bucket where the InfluxDB logs will be stored.  | CLI: `log-delivery-configuration` API: `LogDeliveryConfiguration` | 
| Multi-AZ deployment | Create a standby instance to create a passive secondary replica of your DB instance in another Availability Zone for failover support. We recommend Multi-AZ for production workloads to maintain high availability. For development and testing, you can choose Do not create a standby instance. For more information, see [Configuring and managing a multi-AZ deployment](timestream-for-influx-managing-multi-az.md).  |  CLI: `multi-az` API: `MultiAz`  | 
| Network Type |  The IP addressing protocols supported by the DB instance. IPv4 (the default) to specify that resources can communicate with the DB instance only over the Internet Protocol version 4 (IPv4) addressing protocol. Dual-stack mode to specify that resources can communicate with the DB instance over IPv4, Internet Protocol version 6 (IPv6), or both. Use dual-stack mode if you have any resources that must communicate with your DB instance over the IPv6 addressing protocol. Also, make sure that you associate an IPv6 CIDR block with all subnets in the DB subnet group that you specify. While IPv6 is public by default, private IPv6 endpoints are supported. Keep in mind that this is a one-way door, because the *Publicly Accessible* flag can't be changed after instance creation.  |  CLI: `network-type` API: `NetworkType`  | 
| Password | This is the master user password used to initialize your InfluxDB DB instance. You use this password to log in to the InfluxDB UI to obtain your operator token.  | CLI: `password` API: `password` | 
| Public Access | Yes to give the DB instance a public IP address, meaning that it's accessible outside the VPC. To be publicly accessible, the DB instance also has to be in a public subnet in the VPC. No to make the DB instance accessible only from inside the VPC. To connect to a DB instance from outside of its VPC, the DB instance must be publicly accessible. Also, access must be granted using the inbound rules of the DB instance's security group. In addition, other requirements must be met.   | CLI: `publicly-accessible` API: `PubliclyAccessible` | 
| Storage Type |  The storage type for your DB instance. You can choose between three Influx IOPS Included storage types according to your workload requirements: Influx IOPS Included 3,000 IOPS, Influx IOPS Included 12,000 IOPS, or Influx IOPS Included 16,000 IOPS. For more information, see [InfluxDB instance storage](timestream-for-influxdb.md#timestream-for-influx-dbi-storage).  | CLI: `db-storage-type` API: `DbStorageType` | 
| Initial username | This is the master username used to initialize your InfluxDB DB instance. You use this username to log in to the InfluxDB UI to obtain your operator token.  | CLI: `username` API: `Username` | 
| Subnets | The VPC subnets to associate with this DB instance.   | CLI: `vpc-subnet-ids` API: `VPCSubnetIds` | 
| VPC Security Group (firewall) | The security group to associate with the DB instance.   | CLI: `vpc-security-group-ids` API: `VPCSecurityGroupIds` | 
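As a sketch of how these settings map to an API call, the following assembles a parameter set for a hypothetical instance; the camelCase names assume the boto3 `timestream-influxdb` client, and every value is a placeholder:

```python
# Placeholder values only; the keys mirror the CLI options in the table above.
params = {
    "name": "myinfluxdbinstance",
    "username": "masterawsuser",
    "password": "mymasterpassword",
    "organization": "myorg",
    "bucket": "mybucket",
    "dbInstanceType": "db.influx.large",
    "dbStorageType": "InfluxIOIncludedT1",
    "allocatedStorage": 100,
    "vpcSubnetIds": ["subnetid1"],
    "vpcSecurityGroupIds": ["mysecuritygroup"],
}

# With AWS credentials configured, the call would look like:
# import boto3
# boto3.client("timestream-influxdb").create_db_instance(**params)
```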

# Connecting to an Amazon Timestream for InfluxDB DB instance
<a name="timestream-for-influx-db-connecting"></a>

Before you can connect to a DB instance, you must create the DB instance. For information, see [Creating a DB instance](timestream-for-influx-configuring.md#timestream-for-influx-configuring-create-db). After Amazon Timestream provisions your DB instance, use the InfluxDB API, influx CLI, or any compatible client or utility for InfluxDB to connect to the DB instance. 

**Topics**
+ [Finding the connection information for an Amazon Timestream for InfluxDB DB instance](#timestream-for-influx-db-connecting-finding-connection-info)
+ [Database authentication options](#timestream-for-influx-db-connecting-authentication-options)
+ [Working with parameter groups](#timestream-for-influx-parameter-groups)

## Finding the connection information for an Amazon Timestream for InfluxDB DB instance
<a name="timestream-for-influx-db-connecting-finding-connection-info"></a>

The connection information for a DB instance includes its endpoint, port, username, password, and a valid access token, such as the operator or all-access token. For example, for a Timestream for InfluxDB DB instance, suppose that the endpoint value is `c5vasdqn0b-3ksj4dla5nfjhi.timestream-influxdb.us-east-1.on.aws`. In this case, the port value is 8086, and the database user is *admin*. Given this information, to access the instance you will use:
+ The endpoint of your instance, `c5vasdqn0b-3ksj4dla5nfjhi.timestream-influxdb.us-east-1.on.aws:8086`.
+ Either the username and password supplied when creating the instance or valid access token.

Instances created before December 9, 2024 will have an endpoint that contains the instance name instead of the instance ID. For example: `influxdb1-123456789.us-east-1.timestream-influxdb.amazonaws.com`.
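A small sketch of combining the endpoint and port into a connection URL (the endpoint value is the sample one from above):

```python
def connection_url(endpoint, port=8086, scheme="https"):
    # Timestream for InfluxDB instances listen on port 8086 by default.
    return f"{scheme}://{endpoint}:{port}"
```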

**Important**  
As part of the DB instance response object, you will receive an `influxAuthParametersSecretArn`. This holds the ARN of a Secrets Manager secret in your account. It will only be populated after your InfluxDB DB instance is available. The secret contains the Influx authentication parameters provided during the `CreateDbInstance` process. This is a **read-only** copy: any updates, modifications, or deletions to this secret don't affect the created DB instance. If you delete this secret, the API response will still refer to the deleted secret ARN.

The endpoint is unique for each DB instance, and the values of the port and user can vary. To connect to a DB instance, you can use the influx CLI, InfluxDB API, or any client compatible with InfluxDB. 

To find the connection information for a DB instance, use the AWS Management Console. You can also use the AWS Command Line Interface (AWS CLI) `describe-db-instances` command or the Timestream for InfluxDB API `GetDBInstance` operation.

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **InfluxDB Databases** to display a list of your DB instances.

1. Choose the name of the DB instance to display its details.

1. In the **Summary** section, copy the endpoint. Also, note the port number. You will need both the endpoint and the port number to connect to the DB instance.

If you need to find the username and password information, choose the **Configuration Details** tab and choose the `influxAuthParametersSecretArn` to access your Secrets Manager.

**Using the CLI**
+ To find the connection information for an InfluxDB DB instance by using the AWS CLI, call the `get-db-instance` command. In the call, query for the DB instance name, endpoint, and `influxAuthParametersSecretArn`.

  For Linux, macOS, or Unix:

  ```
  aws timestream-influxdb get-db-instance --identifier id \
   --query "[name,endpoint,influxAuthParametersSecretArn]"
  ```

  For Windows:

  ```
  aws timestream-influxdb get-db-instance --identifier id ^
   --query "[name,endpoint,influxAuthParametersSecretArn]"
  ```

  Your output should be similar to the following. To access the username information, you will need to check the `InfluxAuthParameterSecret`.

  ```
  [
      [
          "mydb",
          "mydbid-123456789012.timestream-influxdb.us-east-1.on.aws",
        8086
      ]
  ]
  ```

### Creating access tokens
<a name="timestream-for-influx-db-connecting-creating-access-tokens"></a>

With this information, you are going to be able to connect to your instance to retrieve or create your access tokens. There are several ways to achieve this:

**Using the CLI**

1. If you haven’t already, download, install, and configure the [influx CLI](https://docs.influxdata.com/influxdb/v2/tools/influx-cli/). 

1. When configuring your influx CLI config, use `--username-password` to authenticate.

   ```
   influx config create --config-name YOUR_CONFIG_NAME --host-url "https://yourinstanceid-accountidentifier.timestream-influxdb.us-east-1.on.aws:8086" --org yourorg --username-password admin --active
   ```

1. Use the [influx auth create](https://docs.influxdata.com/influxdb/v2/reference/cli/influx/auth/create/) command to re-create your operator token. Take into account that this process will invalidate the old operator token.

   ```
   influx auth create --org yourorg --operator
   ```

1. Once you have the operator token, you can use the [influx auth list](https://docs.influxdata.com/influxdb/v2/reference/cli/influx/auth/list) command to view all your tokens. You can use the [influx auth create](https://docs.influxdata.com/influxdb/v2/reference/cli/influx/auth/create/) command to create an all-access token.

**Important**  
You will need to perform this step to obtain your operator token first. Then you will be able to create new tokens using the InfluxDB API or CLI.

**Using the InfluxDB UI**

1. Browse to your Timestream for InfluxDB instance using the created endpoint to log in and access the InfluxDB UI. You will need to use the username and password used to create your InfluxDB DB instance. You can retrieve this information from the secret referenced by the `influxAuthParametersSecretArn` returned in the response object of `CreateDbInstance`.

   Alternatively you can open the InfluxDB UI from the Amazon Timestream for InfluxDB console:

   1.  Sign in to the AWS Management Console and open the Timestream for InfluxDB console at [https://console.aws.amazon.com/timestream/.](https://console.aws.amazon.com/timestream/) 

   1. In the upper-right corner of the Amazon Timestream for InfluxDB console, choose the AWS Region in which you created the DB instance.

   1. In the **Databases** list, choose the name of your InfluxDB instance to show its details. In the upper right corner, choose **InfluxDB UI**.

1. Once logged in to your InfluxDB UI, navigate to **Load Data** and then **API Tokens** using the left navigation bar.

1. Choose **Generate API Token** and select **All Access API Token**.

1. Enter a description for the API token and choose **SAVE**.

1. Copy the generated token and store it for safe keeping.

**Important**  
When creating tokens from the InfluxDB UI, each newly created token is only shown once. Make sure you copy it; otherwise, you will need to re-create it.

**Using the InfluxDB API**
+ Send a request to the InfluxDB API `/api/v2/authorizations` endpoint using the POST request method.

  Include the following with your request:

  1. Headers:

     1. Authorization: Token <INFLUX_OPERATOR_TOKEN>

     1. Content-Type: application/json

  1. Request body: JSON body with the following properties:

     1. status: "active"

     1. description: API token description

     1. orgID: InfluxDB organization ID

     1. permissions: Array of objects where each object represents permissions for an InfluxDB resource type or a specific resource. Each permission contains the following properties:

        1. action: "read" or "write"

        1. resource: JSON object that represents the InfluxDB resource to grant permission to. Each resource contains at least the following property: orgID: InfluxDB organization ID

        1. type: Resource type. For information about what InfluxDB resource types exist, use the /api/v2/resources endpoint.

The following example uses `curl` and the InfluxDB API to generate an all-access token:

```
export INFLUX_HOST=https://instanceid-123456789.timestream-influxdb.us-east-1.on.aws
export INFLUX_ORG_ID=<YOUR_INFLUXDB_ORG_ID>
export INFLUX_TOKEN=<YOUR_INFLUXDB_OPERATOR_TOKEN>

curl --request POST \
"$INFLUX_HOST/api/v2/authorizations" \
  --header "Authorization: Token $INFLUX_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "status": "active",
    "description": "All access token for get started tutorial",
    "orgID": "'"$INFLUX_ORG_ID"'",
    "permissions": [
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "authorizations"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "authorizations"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "buckets"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "buckets"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "dashboards"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "dashboards"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "orgs"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "orgs"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "sources"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "sources"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "tasks"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "tasks"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "telegrafs"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "telegrafs"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "users"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "users"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "variables"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "variables"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "scrapers"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "scrapers"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "secrets"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "secrets"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "labels"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "labels"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "views"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "views"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "documents"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "documents"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notificationRules"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notificationRules"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notificationEndpoints"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notificationEndpoints"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "checks"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "checks"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "dbrp"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "dbrp"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notebooks"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "notebooks"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "annotations"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "annotations"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "remotes"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "remotes"}},
      {"action": "read", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "replications"}},
      {"action": "write", "resource": {"orgID": "'"$INFLUX_ORG_ID"'", "type": "replications"}}
    ]
  }
'
```
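The long permissions array above is mechanical: one read and one write entry per resource type. A minimal sketch of generating it, with the resource-type list copied from the example above:

```python
# Resource types taken from the curl example above.
RESOURCE_TYPES = [
    "authorizations", "buckets", "dashboards", "orgs", "sources", "tasks",
    "telegrafs", "users", "variables", "scrapers", "secrets", "labels",
    "views", "documents", "notificationRules", "notificationEndpoints",
    "checks", "dbrp", "notebooks", "annotations", "remotes", "replications",
]

def all_access_permissions(org_id):
    # One "read" and one "write" permission per resource type, scoped to the org.
    return [
        {"action": action, "resource": {"orgID": org_id, "type": rtype}}
        for rtype in RESOURCE_TYPES
        for action in ("read", "write")
    ]
```

Serialize the result with `json.dumps` into the `permissions` property of the request body.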

## Database authentication options
<a name="timestream-for-influx-db-connecting-authentication-options"></a>

Amazon Timestream for InfluxDB supports the following ways to authenticate database users:
+ **Password authentication** – Your DB instance performs all administration of user accounts. You create users, specify passwords, and administer tokens using the InfluxDB UI, influx CLI, or InfluxDB API.
+ **Token authentication** – Your DB instance performs all administration of user accounts. You can create users, specify passwords, and administer tokens using your operator token via the influx CLI and InfluxDB API.

### Encrypted connections
<a name="timestream-for-influx-db-connecting-authentication-options-encrypted"></a>

You can use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) from your application to encrypt a connection to a DB instance. The certificates needed for the TLS handshake between InfluxDB and your applications are created and managed by the Timestream for InfluxDB service. When a certificate is renewed, the instance is automatically updated with the latest version without requiring any user intervention.

## Working with parameter groups
<a name="timestream-for-influx-parameter-groups"></a>

Database parameters specify how the database is configured. For example, database parameters can specify the amount of resources, such as memory, to allocate to a database.

You manage your database configuration by associating your DB instances with parameter groups. Amazon Timestream for InfluxDB defines parameter groups with default settings. You can also define your own parameter groups with customized settings.

### Overview of parameter groups
<a name="timestream-for-influx-parameter-groups-overview"></a>

A DB parameter group acts as a container for engine configuration values that are applied to one or more DB instances.

**Topics**
+ [Default and custom parameter groups](#timestream-for-influx-parameter-groups-overview-default-custom-parameter-groups)
+ [Creating a DB parameter group](#timestream-for-influx-parameter-groups-creating)
+ [Static and dynamic DB instance parameters](#timestream-for-influx-parameter-groups-static-dynamic-parameters)
+ [Supported parameters and parameter values](#timestream-for-influx-parameter-groups-overview-supported-parameters)

#### Default and custom parameter groups
<a name="timestream-for-influx-parameter-groups-overview-default-custom-parameter-groups"></a>

DB instances use DB parameter groups. The following sections describe configuring and managing DB instance parameter groups.

#### Creating a DB parameter group
<a name="timestream-for-influx-parameter-groups-creating"></a>

You can create a new DB parameter group using the AWS Management Console, the AWS Command Line Interface, or the Timestream API.

The following limitations apply to the DB parameter group name:
+ The name must be 1 to 255 letters, numbers, or hyphens.
+ Default parameter group names can include a period, such as `default.InfluxDB.2.7`. However, custom parameter group names can't include a period.
+ The first character must be a letter.
+ The name can't start with "dbpg-".
+ The name can't end with a hyphen or contain two consecutive hyphens.

If you create a DB instance without specifying a DB parameter group, the DB instance uses the InfluxDB engine defaults.

You can't modify the parameter settings of a default parameter group. Instead, you can do the following:

1. Create a new parameter group.

1. Change the settings of your desired parameters. Not all DB engine parameters in a parameter group are eligible to be modified.

1. Update your DB instance to use the custom parameter group. For information about updating a DB instance, see [Updating DB instances](timestream-for-influx-managing-modifying-db.md). 

**Note**  
If you have modified your DB instance to use a custom parameter group and you start the DB instance, Amazon Timestream for InfluxDB automatically reboots the DB instance as part of the startup process.  
Currently, you can't modify a custom parameter group after it has been created. If you need to change a parameter, create a new custom parameter group and assign it to the instances that require the configuration change. When you update an existing DB instance to assign a new parameter group, the change is always applied immediately and reboots your instance.
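
The create-and-reassign workflow above can be sketched with the AWS CLI; the group name, description, parameter values, and instance identifier below are placeholders.

```shell
# 1. Create a new custom parameter group with the desired settings.
aws timestream-influxdb create-db-parameter-group \
    --db-parameter-group-name my-custom-params \
    --description "custom settings with debug logging" \
    --parameters "InfluxDBv2={logLevel=debug}"

# 2. Assign the new group to the instance, using the group identifier
#    returned by the create call. The change is applied immediately
#    and reboots the instance.
aws timestream-influxdb update-db-instance \
    --identifier YOUR_DB_INSTANCE_ID \
    --db-parameter-group-identifier YOUR_PARAM_GROUP_ID
```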

#### Static and dynamic DB instance parameters
<a name="timestream-for-influx-parameter-groups-static-dynamic-parameters"></a>

InfluxDB DB instance parameters are always static. They behave as follows:

When you change a static parameter, save the DB parameter group, and assign it to an instance, the parameter change takes effect automatically after the instance is rebooted.

When you associate a new DB parameter group with a DB instance, Timestream applies the modified static parameters only after the DB instance is rebooted. Currently, the only supported apply method is apply immediately.

 For more information about changing the DB parameter group, see [Updating DB instances](timestream-for-influx-managing-modifying-db.md).

#### Supported parameters and parameter values
<a name="timestream-for-influx-parameter-groups-overview-supported-parameters"></a>

To determine the supported parameters for your DB instance, view the parameters in the DB parameter group used by the DB instance. For more information, see [Viewing parameter values for a DB parameter group](#timestream-for-influx-working-with-parameter-groups-viewing).

For more information about all parameters supported by the open-source version of InfluxDB, see [InfluxDB configuration options](https://docs.influxdata.com/influxdb/v2/reference/config-options/?t=JSON). Currently, you can modify only the following InfluxDB parameters:



| Parameter | Description | Default value | Value | Valid range | Note | 
| --- | --- | --- | --- | --- | --- | 
| [flux-log-enabled](https://docs.influxdata.com/influxdb/v2/reference/config-options/?t=JSON) | Include option to show detailed logs for Flux queries | FALSE | Boolean | N/A |  | 
| [log-level](https://docs.influxdata.com/influxdb/v2/reference/config-options/#log-level) | Log output level. InfluxDB outputs log entries with severity levels greater than or equal to the level specified. | info | debug, info, error | N/A |  | 
| [no-tasks](https://docs.influxdata.com/influxdb/v2/reference/config-options/#no-tasks) | Disable the task scheduler. If problematic tasks prevent InfluxDB from starting, use this option to start InfluxDB without scheduling or executing tasks. | FALSE | Boolean | N/A |  | 
| [query-concurrency](https://docs.influxdata.com/influxdb/v2/reference/config-options/#query-concurrency) | Number of queries allowed to execute concurrently. Setting to 0 allows an unlimited number of concurrent queries. | 0 |  | 0 to 256 |  | 
| [query-queue-size](https://docs.influxdata.com/influxdb/v2/reference/config-options/#query-queue-size) | Maximum number of queries allowed in execution queue. When queue limit is reached, new queries are rejected. Setting to 0 allows an unlimited number of queries in the queue. | 1,024 |  | N/A |  | 
| [tracing-type](https://docs.influxdata.com/influxdb/v2/reference/config-options/#tracing-type) | Enable tracing in InfluxDB and specifies the tracing type. Tracing is disabled by default. | "" | log, jaeger | N/A |  | 
| [metrics-disabled](https://docs.influxdata.com/influxdb/v2/reference/config-options/#metrics-disabled) | Disable the HTTP /metrics endpoint which exposes [internal InfluxDB metrics](https://docs.influxdata.com/influxdb/v2/reference/internals/metrics/). | FALSE |  | N/A |  | 
| [http-idle-timeout](https://docs.influxdata.com/influxdb/v2/reference/config-options/#http-idle-timeout) | Maximum duration the server should keep established connections alive while waiting for new requests. Set to `0` for no timeout. | 3m0s | Duration with unit hours, minutes, seconds, milliseconds. Example: durationType=minutes,value=10 | Hours: 0 to 256,205; Minutes: 0 to 15,372,286; Seconds: 0 to 922,337,203; Milliseconds: 0 to 922,337,203,685 |  | 
| [http-read-header-timeout](https://docs.influxdata.com/influxdb/v2/reference/config-options/#http-read-header-timeout) | Maximum duration the server should try to read HTTP headers for new requests. Set to `0` for no timeout. | 10s | Duration with unit hours, minutes, seconds, milliseconds. Example: durationType=minutes,value=10 | Hours: 0 to 256,205; Minutes: 0 to 15,372,286; Seconds: 0 to 922,337,203; Milliseconds: 0 to 922,337,203,685 |  | 
| [http-read-timeout](https://docs.influxdata.com/influxdb/v2/reference/config-options/#http-read-timeout) | Maximum duration the server should try to read the entirety of new requests. Set to `0` for no timeout. | 0 | Duration with unit hours, minutes, seconds, milliseconds. Example: durationType=minutes,value=10 | Hours: 0 to 256,205; Minutes: 0 to 15,372,286; Seconds: 0 to 922,337,203; Milliseconds: 0 to 922,337,203,685 |  | 
| [http-write-timeout](https://docs.influxdata.com/influxdb/v2/reference/config-options/#http-write-timeout) | Maximum duration the server should spend processing and responding to write requests. Set to `0` for no timeout. | 0 | Duration with unit hours, minutes, seconds, milliseconds. Example: durationType=minutes,value=10 | Hours: 0 to 256,205; Minutes: 0 to 15,372,286; Seconds: 0 to 922,337,203; Milliseconds: 0 to 922,337,203,685 |  | 
| [influxql-max-select-buckets](https://docs.influxdata.com/influxdb/v2/reference/config-options/#influxql-max-select-buckets) | Maximum number of group by time buckets a `SELECT` statement can create. `0` allows an unlimited number of buckets. | 0 | Long | Minimum: 0; Maximum: 9,223,372,036,854,775,807 |  | 
| [influxql-max-select-point](https://docs.influxdata.com/influxdb/v2/reference/config-options/#influxql-max-select-point) | Maximum number of points a `SELECT` statement can process. `0` allows an unlimited number of points. InfluxDB checks the point count every second (so queries exceeding the maximum aren’t immediately aborted). | 0 | Long | Minimum: 0; Maximum: 9,223,372,036,854,775,807 |  | 
| [influxql-max-select-series](https://docs.influxdata.com/influxdb/v2/reference/config-options/#influxql-max-select-series) | Maximum number of series a `SELECT` statement can return. `0` allows an unlimited number of series. | 0 | Long | Minimum: 0; Maximum: 9,223,372,036,854,775,807 |  | 
| [pprof-disabled](https://docs.influxdata.com/influxdb/v2/reference/config-options/#pprof-disabled) | Disable the `/debug/pprof` HTTP endpoint. This endpoint provides runtime profiling data and can be helpful when debugging. | TRUE | Boolean |  N/A | While InfluxDB sets pprof-disabled as false by default, AWS sets it as true by default. | 
| [query-initial-memory-bytes](https://docs.influxdata.com/influxdb/v2/reference/config-options/#query-initial-memory-bytes) | Initial bytes of memory allocated for a query. | 0 | Long | Minimum: 0; Maximum: query-memory-bytes |  | 
| [query-max-memory-bytes](https://docs.influxdata.com/influxdb/v2/reference/config-options/#query-max-memory-bytes) | Maximum total bytes of memory allowed for queries. | 0 | Long | Minimum: 0; Maximum: 9,223,372,036,854,775,807 |  | 
| [query-memory-bytes](https://docs.influxdata.com/influxdb/v2/reference/config-options/#query-memory-bytes) | Maximum bytes of memory allowed for a single query. | 0 | Long | Minimum: 0; Maximum: 2,147,483,647 | Must be greater than or equal to query-initial-memory-bytes. | 
| [session-length](https://docs.influxdata.com/influxdb/v2/reference/config-options/#session-length) | Specifies the Time to Live (TTL) in minutes for newly created user sessions. | 60 | Integer | Minimum: 0; Maximum: 2,880 |  | 
| [session-renew-disabled](https://docs.influxdata.com/influxdb/v2/reference/config-options/#session-renew-disabled) | Disables automatically extending a user’s session TTL on each request. By default, every request sets the session’s expiration time to 5 minutes from now. When disabled, sessions expire after the specified [session length](https://docs.influxdata.com/influxdb/v2/reference/config-options/#session-length) and the user is redirected to the login page, even if recently active. | FALSE | Boolean | N/A |  | 
| [storage-cache-max-memory-size](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-cache-max-memory-size) | Maximum size (in bytes) a shard’s cache can reach before it starts rejecting writes. | 1,073,741,824 | Long | Minimum: 0; Maximum: 549,755,813,888 | Must be lower than the instance's total memory capacity. We recommend setting it to below 15 percent of the total memory capacity. | 
| [storage-cache-snapshot-memory-size](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-cache-snapshot-memory-size) | Size (in bytes) at which the storage engine will snapshot the cache and write it to a TSM file to make more memory available. | 26,214,400 | Long | Minimum: 0; Maximum: 549,755,813,888 | Must be lower than storage-cache-max-memory-size. | 
| [storage-cache-snapshot-write-cold-duration](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-cache-snapshot-write-cold-duration) | Duration at which the storage engine will snapshot the cache and write it to a new TSM file if the shard hasn’t received writes or deletes. | 10m0s | Duration with unit hours, minutes, seconds, milliseconds. Example: durationType=minutes,value=10 | Hours: 0 to 256,205; Minutes: 0 to 15,372,286; Seconds: 0 to 922,337,203; Milliseconds: 0 to 922,337,203,685 |  | 
| [storage-compact-full-write-cold-duration](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-compact-full-write-cold-duration) | Duration at which the storage engine will compact all TSM files in a shard if it hasn’t received writes or deletes. | 4h0m0s | Duration with unit hours, minutes, seconds, milliseconds. Example: durationType=minutes,value=10 | Hours: 0 to 256,205; Minutes: 0 to 15,372,286; Seconds: 0 to 922,337,203; Milliseconds: 0 to 922,337,203,685 |  | 
| [storage-compact-throughput-burst](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-compact-throughput-burst) | Rate limit (in bytes per second) that TSM compactions can write to disk. | 50,331,648 | Long | Minimum: 0; Maximum: 9,223,372,036,854,775,807 |  | 
| [storage-max-concurrent-compactions](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-max-concurrent-compactions) | Maximum number of full and level compactions that can run concurrently. A value of `0` results in 50 percent of `runtime.GOMAXPROCS(0)` used at runtime. Any number greater than zero limits compactions to that value. This setting does not apply to cache snapshotting. | 0 | Integer | Minimum: 0; Maximum: 64 |  | 
| [storage-max-index-log-file-size](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-max-index-log-file-size) | Size (in bytes) at which an index write-ahead log (WAL) file will compact into an index file. Lower sizes will cause log files to be compacted more quickly and result in lower heap usage at the expense of write throughput. | 1,048,576 | Long | Minimum: 0; Maximum: 9,223,372,036,854,775,807 |  | 
| [storage-no-validate-field-size](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-no-validate-field-size) | Skip field size validation on incoming write requests. | FALSE | Boolean | N/A |  | 
| [storage-retention-check-interval](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-retention-check-interval) | Interval of retention policy enforcement checks. | 30m0s | Duration with unit hours, minutes, seconds, milliseconds. Example: durationType=minutes,value=10 | Hours: 0 to 256,205; Minutes: 0 to 15,372,286; Seconds: 0 to 922,337,203; Milliseconds: 0 to 922,337,203,685 |  | 
| [storage-series-file-max-concurrent-snapshot-compactions](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-series-file-max-concurrent-snapshot-compactions) | Maximum number of snapshot compactions that can run concurrently across all series partitions in a database. | 0 | Integer | Minimum: 0; Maximum: 64 |  | 
| [storage-series-id-set-cache-size](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-series-id-set-cache-size) | Size of the internal cache used in the TSI index to store previously calculated series results. Cached results are returned quickly rather than needing to be recalculated when a subsequent query with the same tag key/value predicate is executed. Setting this value to `0` will disable the cache and may decrease query performance. | 100 | Long | Minimum: 0; Maximum: 9,223,372,036,854,775,807 |  | 
| [storage-wal-max-concurrent-writes](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-wal-max-concurrent-writes) | Maximum number of writes to the WAL directory to attempt at the same time. | 0 | Integer | Minimum: 0; Maximum: 256 |  | 
| [storage-wal-max-write-delay](https://docs.influxdata.com/influxdb/v2/reference/config-options/#storage-wal-max-write-delay) | Maximum amount of time a write request to the WAL directory will wait when the maximum number of concurrent active writes to the WAL directory has been met. Set to `0` to disable the timeout. | 10m | Duration with unit hours, minutes, seconds, milliseconds. Example: durationType=minutes,value=10 | Hours: 0 to 256,205; Minutes: 0 to 15,372,286; Seconds: 0 to 922,337,203; Milliseconds: 0 to 922,337,203,685 |  | 
| [ui-disabled](https://docs.influxdata.com/influxdb/v2/reference/config-options/#ui-disabled) | Disable the InfluxDB user interface (UI). The UI is enabled by default. | FALSE | Boolean | N/A |  | 

Improperly setting parameters in a parameter group can have unintended adverse effects, including degraded performance and system instability. Always be cautious when modifying database parameters. Test parameter group setting changes on a test DB instance before applying those parameter group changes to a production DB instance.

### Working with DB parameter groups
<a name="timestream-for-influx-working-with-parameter-groups"></a>

DB instances use DB parameter groups. The following sections describe configuring and managing DB instance parameter groups.

**Topics**
+ [Creating a DB parameter group](#timestream-for-influx-working-with-parameter-groups-creating)
+ [Associating a DB parameter group with a DB instance](#timestream-for-influx-working-with-parameter-groups-associating)
+ [Listing DB parameter groups](#timestream-for-influx-working-with-parameter-groups-listing)
+ [Viewing parameter values for a DB parameter group](#timestream-for-influx-working-with-parameter-groups-viewing)

#### Creating a DB parameter group
<a name="timestream-for-influx-working-with-parameter-groups-creating"></a>

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream for InfluxDB console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose **Create parameter group**.

1. In the **Parameter group name** box, enter the name of the new DB parameter group.

1. In the **Description** box, enter a description for the new DB parameter group.

1. Choose the parameters to modify and apply the desired values. For more information on supported parameters, see [Supported parameters and parameter values](#timestream-for-influx-parameter-groups-overview-supported-parameters).

1. Choose **Create parameter group**.

**Using the AWS Command Line Interface**
+ To create a DB parameter group by using the AWS CLI, call the `create-db-parameter-group` command with the following parameters:

  ```
  --db-parameter-group-name <value>
  --description <value>
  --endpoint-url <value>
  --region <value>
  --parameters (list) (string)
  ```  
**Example**  

  For information about each setting, see [Settings for DB instances](timestream-for-influx-configuring.md#timestream-for-influx-configuring-create-db-settings). This example uses default engine configs. 

  ```
  aws timestream-influxdb create-db-parameter-group \
      --db-parameter-group-name YOUR_PARAM_GROUP_NAME \
      --endpoint-url YOUR_ENDPOINT \
      --region YOUR_REGION \
      --parameters "InfluxDBv2={logLevel=debug,queryConcurrency=10,metricsDisabled=true}" \
      --debug
  ```

#### Associating a DB parameter group with a DB instance
<a name="timestream-for-influx-working-with-parameter-groups-associating"></a>

You can create your own DB parameter groups with customized settings. You can associate a DB parameter group with a DB instance using the AWS Management Console, the AWS Command Line Interface, or the Timestream for InfluxDB API. You can do so when you create or modify a DB instance.

For information about creating a DB parameter group, see [Creating a DB parameter group](#timestream-for-influx-working-with-parameter-groups-creating). For information about creating a DB instance, see [Creating a DB instance](timestream-for-influx-configuring.md#timestream-for-influx-configuring-create-db). For information about modifying a DB instance, see [Updating DB instances](timestream-for-influx-managing-modifying-db.md).

**Note**  
When you associate a new DB parameter group with a DB instance, the modified static parameters are applied only after the DB instance is rebooted. Currently, only apply immediately is supported. Timestream for InfluxDB supports only static parameters.

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream for InfluxDB console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **InfluxDB Databases**, and then choose the DB instance that you want to modify.

1. Choose **Update**. The **Update DB instance** page appears.

1. Change the **DB parameter group** setting.

1. Choose **Continue** and check the summary of modifications.

1. Currently only **Apply immediately** is supported. This option can cause an outage in some cases since it will reboot your DB instance. 

1. On the confirmation page, review your changes. If they are correct, choose **Update DB instance** to save your changes and apply them. Or choose **Back** to edit your changes or **Cancel** to cancel your changes.

**Using the AWS Command Line Interface**

For Linux, macOS, or Unix:

```
aws timestream-influxdb update-db-instance \
--identifier YOUR_DB_INSTANCE_ID \
--region YOUR_REGION \
--db-parameter-group-identifier YOUR_PARAM_GROUP_ID \
--log-delivery-configuration "{\"s3Configuration\": {\"bucketName\": \"${LOGGING_BUCKET}\", \"enabled\": false }}"
```

For Windows:

```
aws timestream-influxdb update-db-instance ^
--identifier YOUR_DB_INSTANCE_ID ^
--region YOUR_REGION ^
--db-parameter-group-identifier YOUR_PARAM_GROUP_ID ^
--log-delivery-configuration "{\"s3Configuration\": {\"bucketName\": \"${LOGGING_BUCKET}\", \"enabled\": false }}"
```

#### Listing DB parameter groups
<a name="timestream-for-influx-working-with-parameter-groups-listing"></a>

You can list the DB parameter groups you've created for your AWS account.

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream for InfluxDB console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **Parameter groups**.

1. The DB parameter groups appear in a list.

**Using the AWS Command Line Interface**

To list all DB parameter groups for an AWS account, use the AWS Command Line Interface `list-db-parameter-groups` command.

```
aws timestream-influxdb list-db-parameter-groups --region region
```

To return a specific DB parameter group for an AWS account, use the AWS Command Line Interface `get-db-parameter-group` command.

```
aws timestream-influxdb get-db-parameter-group --region region --identifier identifier
```

#### Viewing parameter values for a DB parameter group
<a name="timestream-for-influx-working-with-parameter-groups-viewing"></a>

You can get a list of all parameters in a DB parameter group and their values.

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream for InfluxDB console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **Parameter groups**.

1. The DB parameter groups appear in a list.

1. Choose the name of the parameter group to see its list of parameters.

**Using the AWS Command Line Interface**

To view the parameter values for a DB parameter group, use the AWS Command Line Interface `get-db-parameter-group` command. Replace *parameter-group-identifier* with your own information.

```
aws timestream-influxdb get-db-parameter-group --identifier parameter-group-identifier
```

**Using the API**

To view the parameter values for a DB parameter group, use the Timestream for InfluxDB API `GetDbParameterGroup` action. Replace *parameter-group-identifier* with your own information.

```
GetDbParameterGroup parameter-group-identifier
```

# Working with Multi-AZ read replica clusters for Amazon Timestream for InfluxDB
<a name="timestream-for-influx-working-read-replica"></a>

A read replica cluster deployment is an asynchronous deployment mode of Amazon Timestream for InfluxDB that allows you to configure read replicas attached to a primary DB instance. A read replica cluster has a writer DB instance and a reader DB instance in separate Availability Zones within the same AWS Region. Read replica clusters provide high availability and increased capacity for read workloads when compared to Multi-AZ DB instance deployments.

## Instance class availability for read replica clusters
<a name="timestream-for-influx-instance-class-rr"></a>

Read replica cluster deployments are supported for the same instance types as regular Timestream for InfluxDB instances.



| Instance class | vCPU | Memory (GiB) | Storage type | Network bandwidth (Gbps) | 
| --- | --- | --- | --- | --- | 
| db.influx.medium | 1 | 8 | Influx IOPS Included | 10 | 
| db.influx.large | 2 | 16 | Influx IOPS Included | 10 | 
| db.influx.xlarge | 4 | 32 | Influx IOPS Included | 10 | 
| db.influx.2xlarge | 8 | 64 | Influx IOPS Included | 10 | 
| db.influx.4xlarge | 16 | 128 | Influx IOPS Included | 10 | 
| db.influx.8xlarge | 32 | 256 | Influx IOPS Included | 12 | 
| db.influx.12xlarge | 48 | 384 | Influx IOPS Included | 20 | 
| db.influx.16xlarge | 64 | 512 | Influx IOPS Included | 25 | 
| db.influx.24xlarge | 96 | 768 | Influx IOPS Included | 40 | 

## Read replica cluster architecture
<a name="timestream-for-influx-rr-cluster-architecture"></a>

With a read replica cluster, Amazon Timestream for InfluxDB automatically replicates all writes made to the writer DB instance to all reader DB instances using InfluxData’s licensed read replica add-on. This replication is asynchronous: writes are acknowledged as soon as they are committed by the writer node and do not require acknowledgment from reader nodes to be considered successful. After data is committed by the writer DB instance, it is replicated to the read replica instances almost instantaneously. In case of an unrecoverable writer failure, any data that has not been replicated to at least one of the readers is lost.

A read replica instance is a read-only copy of a writer DB instance. You can reduce the load on your writer DB instance by routing some or all of the queries from your applications to the read replica. In this way, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

The following diagram shows a primary DB instance replicating to a read replica in a different Availability Zone. Clients have read/write access to the primary DB instance and read-only access to the replica.

![\[A primary DB instance in Availability Zone A asynchronously replicates to a read replica instance in Availability Zone C.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/rr_azs_diagram.png)


## Parameter groups for read replica clusters
<a name="timestream-for-influx-rr-param-groups"></a>

In a read replica cluster, a *DB parameter group* acts as a container for engine configuration values that are applied to every DB instance in the read replica cluster. A default DB parameter group is set based on the DB engine and DB engine version. The settings in the DB parameter group are used for all of the DB instances in the cluster.

When passing a specific DB parameter group using [CreateDbCluster](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/API_CreateDbCluster.html) or [UpdateDbCluster](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/API_UpdateDbCluster.html) for Multi-AZ DB read replica, ensure the `storage-wal-max-write-delay` is set to a duration of 1 hour minimum. If no DB parameter group is specified, `storage-wal-max-write-delay` will default to 1 hour.
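
For example, a parameter group intended for a read replica cluster might set `storage-wal-max-write-delay` to one hour explicitly. This is a sketch: the group name is a placeholder, and the JSON shape for the duration value is an assumption based on the `durationType`/`value` convention shown in the parameter table earlier in this guide.

```shell
# Create a parameter group for a read replica cluster with the
# minimum recommended WAL write delay of 1 hour (names are placeholders).
aws timestream-influxdb create-db-parameter-group \
    --db-parameter-group-name rr-cluster-params \
    --description "parameter group for read replica clusters" \
    --parameters '{"InfluxDBv2": {"storageWalMaxWriteDelay": {"durationType": "hours", "value": 1}}}'
```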

## Replica lag in read replica clusters
<a name="timestream-for-influx-replica-lag"></a>

Although Timestream for InfluxDB read replica clusters allow for high write performance, replica lag can still occur due to the nature of engine-based asynchronous replication. This lag can lead to potential data loss in the event of a failover, making it essential to monitor.

You can track the replica lag from CloudWatch by selecting **All metrics** in the AWS Management Console navigation pane. Choose **Timestream/InfluxDB**, then **By DbCluster**. Select your **DbClusterName** and then your **DbReaderInstanceName**. Here, besides the normal set of metrics tracked for all Timestream for InfluxDB instances (see the following list), you will also see ReplicaLag, expressed in milliseconds.
+ CPUUtilization
+ MemoryUtilization
+ DiskUtilization
+ ReplicaLag (only for replica instance mode DB instances)
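
You can also retrieve ReplicaLag programmatically. The following sketch uses the namespace and dimension names from the console path described above; the cluster and reader instance names are placeholders, and the `date -d` syntax is GNU-specific.

```shell
# Average ReplicaLag (milliseconds) over the last hour, in 5-minute periods.
aws cloudwatch get-metric-statistics \
    --namespace "Timestream/InfluxDB" \
    --metric-name ReplicaLag \
    --dimensions Name=DbClusterName,Value=YOUR_CLUSTER_NAME \
                 Name=DbReaderInstanceName,Value=YOUR_READER_INSTANCE_NAME \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Average
```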

### Common causes of replica lag
<a name="timestream-for-influx-lag-causes"></a>

In general, replica lag occurs when the write and read workloads are too high for the reader DB instances to apply the transactions efficiently. Various workloads can incur temporary or continuous replica lag. Some examples of common causes are the following:
+ High write concurrency or heavy batch updating on the writer DB instance, causing the apply process on the reader DB instances to fall behind.
+ Heavy read workload that is using resources on one or more reader DB instances. Running slow or large queries can affect the apply process and can cause replica lag.
+ Transactions that modify large amounts of data or DDL statements can sometimes cause a temporary increase in replica lag because the database must preserve commit order.

For a tutorial that shows you how to create a CloudWatch alarm when replica lag exceeds a set amount of time, see [Tutorial: Create an Amazon CloudWatch alarm for Multi-AZ cluster replica lag for Amazon Timestream for InfluxDB](timestream-for-influx-creating-cw-alarms.md#timestream-for-influx-tutorial-alarm).

### Mitigating replica lag
<a name="timestream-for-influx-mitigating-lag"></a>

For Timestream for InfluxDB read replica clusters, you can mitigate replica lag by reducing the load on your writer DB instance.

## Availability and durability
<a name="timestream-for-influx-availability"></a>

Read replica clusters can be configured to either automatically fail over to one of the reader instances in case of writer failure to prioritize write availability or to avoid failing over to minimize tip data loss. Tip data refers to the replication gap of data not yet replicated to at least one of the reader nodes (see [Replica lag in read replica clusters](#timestream-for-influx-replica-lag)). The default and recommended behavior for read replica clusters is to automatically fail over in case of writer failures. However, if tip data loss is more important than write availability for your use cases, you can override the default by updating the cluster.

Read replica clusters ensure that all DB instances of the cluster are distributed across at least two Availability Zones to ensure increased write availability and data durability in case of an Availability Zone outage.

**Topics**
+ [Instance class availability for read replica clusters](#timestream-for-influx-instance-class-rr)
+ [Read replica cluster architecture](#timestream-for-influx-rr-cluster-architecture)
+ [Parameter groups for read replica clusters](#timestream-for-influx-rr-param-groups)
+ [Replica lag in read replica clusters](#timestream-for-influx-replica-lag)
+ [Availability and durability](#timestream-for-influx-availability)
+ [Overview of Amazon Timestream for InfluxDB read replica clusters](timestream-for-influx-read-replica-overview.md)
+ [Creating a Timestream for InfluxDB read replica cluster](timestream-for-influx-create-rr-cluster.md)
+ [Connecting to a Timestream for InfluxDB read replica DB cluster](timestream-for-influx-connecting-cluster.md)
+ [Modifying a read replica cluster for Amazon Timestream for InfluxDB](timestream-for-influx-modifying-rr-cluster.md)
+ [Rebooting a read replica cluster in Amazon Timestream for InfluxDB](timestream-for-influx-rebooting-rr-cluster.md)
+ [Creating CloudWatch alarms to monitor Amazon Timestream for InfluxDB](timestream-for-influx-creating-cw-alarms.md)
+ [Read replica licensing through AWS Marketplace](timestream-for-influx-rr-licensing.md)

# Overview of Amazon Timestream for InfluxDB read replica clusters
<a name="timestream-for-influx-read-replica-overview"></a>

The following sections discuss Timestream for InfluxDB read replica clusters:

**Topics**
+ [Use cases for read replicas](#timestream-for-influx-rr-use-cases)
+ [How read replicas work](#timestream-for-influx-how-rr-work)
+ [Characteristics of Timestream for InfluxDB read replicas](#timestream-for-influx-rr-characteristics)
+ [Read replica instance and storage types](#timestream-for-influx-rr-instance-storage-types)
+ [Considerations when deleting replicas](#timestream-for-influx-rr-deletion)

## Use cases for read replicas
<a name="timestream-for-influx-rr-use-cases"></a>

Using a read replica cluster might make sense in a variety of scenarios, including the following:
+ Scaling beyond the compute or I/O capacity of a single DB instance for read-heavy database workloads. You can direct this excess read traffic to one or more read replicas.
+ Serving read traffic while the primary writer instance is unavailable. In some cases, your primary DB instance might not be able to take I/O requests, for example, due to I/O suspension for backups or scheduled maintenance. In these cases, you can direct read traffic to your read replica. For this use case, keep in mind that the data on the read replica might be "stale" because the primary DB instance is unavailable. Also, keep in mind that you will need to turn off automatic failover for these scenarios to work.
+ Business reporting or data warehousing scenarios where you might want business reporting queries to run against a read replica, rather than your production DB instance.
+ Implementing disaster recovery. You can promote a read replica to primary as a disaster recovery solution if the primary DB instance fails.
+ Faster failover for scenarios where availability is more important than durability. Since read replicas use asynchronous replication, there is a chance that some data that was committed by the primary writer instance was not replicated before a failover. However, for applications where uptime is paramount, this trade-off is acceptable. Depending on your workload characteristics, a failover to a read replica could be significantly faster than a failover to a standby DB instance that uses synchronous replication, as the replica instance is already running and does not need to start the engine. This can be particularly beneficial in use cases where every minute counts.

## How read replicas work
<a name="timestream-for-influx-how-rr-work"></a>

To create a read replica cluster, Amazon Timestream for InfluxDB uses InfluxData’s licensed read replica add-ons. The add-on subscription is activated via the AWS Marketplace, directly from the Amazon Timestream management console. For more details, see [Read replica licensing through AWS Marketplace](timestream-for-influx-rr-licensing.md).

Read replicas are billed as standard DB instances at the same rates as the DB instance type used for each node in your cluster, plus the cost of InfluxData’s licensed add-on. The cost of the add-on is billed in instance-hours via the AWS Marketplace. You aren't charged for the data transfer incurred in replicating data between the source DB instance and a read replica within the same AWS Region.

Once you have created and configured your read replica cluster and it starts accepting writes, Amazon Timestream for InfluxDB uses asynchronous replication to update the read replica whenever there is a change to the primary DB instance.

The read replica functions as a dedicated DB instance, exclusively accepting read-only connections. Applications can connect to a read replica in the same manner as they would to any other DB instance, providing a seamless and familiar experience. Amazon Timestream for InfluxDB automatically replicates all data from the primary DB instance to the read replica, ensuring data consistency and accuracy. Note that updates are done at the cluster level and applied at the same time to both the primary and replica.

## Characteristics of Timestream for InfluxDB read replicas
<a name="timestream-for-influx-rr-characteristics"></a>



| Feature or behavior | Timestream for InfluxDB | 
| --- | --- | 
| What is the replication method? | Logical replication. | 
| Can a replica be made writable? | No, Timestream for InfluxDB read replicas are designed to be read-only and cannot be made writable. While a read replica can be promoted to primary in the event of a failover, thereby accepting writes, at any given time, there can only be one writer DB instance in a Timestream for InfluxDB read replica cluster. This ensures data consistency and prevents conflicts that could arise from multiple writable instances. The read replica's role is to provide a redundant, read-only copy of the data, and it will automatically reject write requests to maintain data integrity. | 
| Can backups be performed on the replica? | Yes, you can use the built-in engine capabilities to create backups using the Influx CLI. | 
| Can you use parallel replication? | No, Timestream for InfluxDB has a single process handling replication. | 

## Read replica instance and storage types
<a name="timestream-for-influx-rr-instance-storage-types"></a>

A read replica is created with the same instance and storage type as the primary DB instance. Any changes to the configuration must be made at the cluster level and will apply to all instances within the cluster. All instance and storage configurations available for Timestream for InfluxDB DB instances are available for Timestream for InfluxDB read replica clusters.

**Instance types**



| Instance class | vCPU | Memory (GiB) | Storage type | Network bandwidth (Gbps) | 
| --- | --- | --- | --- | --- | 
| db.influx.medium | 1 | 8 | Influx IOPS Included | 10 | 
| db.influx.large | 2 | 16 | Influx IOPS Included | 10 | 
| db.influx.xlarge | 4 | 32 | Influx IOPS Included | 10 | 
| db.influx.2xlarge | 8 | 64 | Influx IOPS Included | 10 | 
| db.influx.4xlarge | 16 | 128 | Influx IOPS Included | 10 | 
| db.influx.8xlarge | 32 | 256 | Influx IOPS Included | 12 | 
| db.influx.12xlarge | 48 | 384 | Influx IOPS Included | 20 | 
| db.influx.16xlarge | 64 | 512 | Influx IOPS Included | 25 | 
| db.influx.24xlarge | 96 | 768 | Influx IOPS Included | 40 | 

**Storage options**



| Timestream for InfluxDB DB cluster storage | Source DB instance storage allocation | Included IOPS | 
| --- | --- | --- | 
| Influx IO Included (3K) | 20 GiB to 16 TiB | 3,000 IOPS | 
| Influx IO Included (12K) | 400 GiB to 16 TiB | 12,000 IOPS | 
| Influx IO Included (16K) | 400 GiB to 16 TiB | 16,000 IOPS | 
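
As a minimal sketch, the storage options table above can be encoded as a small validation helper. This is an illustration, not an official client; the dictionary below simply restates the minimum and maximum allocations and included IOPS from the table.

```python
# Sketch (not an official client): validate an allocated-storage request
# against the storage options in the table above.
STORAGE_OPTIONS = {
    # storage type: (min GiB, max GiB, included IOPS)
    "Influx IO Included (3K)":  (20,  16 * 1024, 3_000),
    "Influx IO Included (12K)": (400, 16 * 1024, 12_000),
    "Influx IO Included (16K)": (400, 16 * 1024, 16_000),
}

def validate_allocated_storage(storage_type, allocated_gib):
    """Raise ValueError if the allocation is outside the allowed range;
    otherwise return the included IOPS for that storage type."""
    lo, hi, iops = STORAGE_OPTIONS[storage_type]
    if not lo <= allocated_gib <= hi:
        raise ValueError(
            f"{storage_type} requires {lo}-{hi} GiB, got {allocated_gib}"
        )
    return iops

print(validate_allocated_storage("Influx IO Included (12K)", 400))  # prints 12000
```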

## Considerations when deleting replicas
<a name="timestream-for-influx-rr-deletion"></a>

If you no longer require read replicas, you can explicitly delete the cluster by calling the `delete-db-cluster` API. In the following example, replace each *user input placeholder* with your own information. Keep in mind that you cannot remove a single node from your cluster at this time.

```
aws timestream-influxdb delete-db-cluster \
            --region region \
            --endpoint endpoint \
            --db-cluster-id cluster-id
```

# Creating a Timestream for InfluxDB read replica cluster
<a name="timestream-for-influx-create-rr-cluster"></a>

A Timestream for InfluxDB read replica cluster has a writer DB instance and a reader DB instance in separate Availability Zones. Timestream for InfluxDB read replica clusters provide high availability, increased capacity for read workloads, and faster failover when failover to replica is configured.

## DB cluster prerequisites
<a name="timestream-for-influx-create-prereq"></a>

**Important**  
Complete the following prerequisites before creating a read replica cluster.

**Topics**
+ [Configure the network for the DB cluster](#timestream-for-influx-config-network)
+ [Additional prerequisites](#timestream-for-influx-addl-prereqs)

### Configure the network for the DB cluster
<a name="timestream-for-influx-config-network"></a>

You can only create a Timestream for InfluxDB read replica DB cluster in a virtual private cloud (VPC) based on the Amazon VPC service. It must be in an AWS Region that has at least three Availability Zones. The DB subnet group that you choose for the DB cluster must cover at least three Availability Zones. This configuration ensures that each DB instance in the DB cluster is in a different Availability Zone.

To connect to your DB cluster from resources other than EC2 instances in the same VPC, configure the network connections manually.

### Additional prerequisites
<a name="timestream-for-influx-addl-prereqs"></a>

**Before you create your read replica cluster, consider the following additional prerequisites:**

To tailor the configuration parameters for your DB cluster, specify a DB cluster parameter group with the required parameter settings. For information about creating or modifying a DB cluster parameter group, see [Parameter groups for read replica clusters](timestream-for-influx-working-read-replica.md#timestream-for-influx-rr-param-groups).

Determine the TCP/IP port number to specify for your DB cluster. The firewalls at some companies block connections to the default ports. If your company firewall blocks the default port, choose another port for your DB cluster. All DB instances in a DB cluster use the same port.

## Create a DB cluster
<a name="timestream-for-influx-create-cluster"></a>

You can create a Timestream for InfluxDB read replica DB cluster using the AWS Management Console, the AWS CLI, or the Amazon Timestream for InfluxDB API.

------
#### [ Using the AWS Management Console ]

You can create a Timestream for InfluxDB read replica DB cluster by choosing **Cluster with read replicas** in the **Deployment settings** section.

To create a read replica DB cluster using the console:

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/timestream) and open the Amazon Timestream console.

1. In the upper-right corner of the AWS Management Console, choose the AWS Region in which you want to create the read replica DB cluster.

1. In the navigation pane, choose **InfluxDB databases**.

1. Choose **Create InfluxDB database**.

1. In **Deployment settings**, choose **Cluster with read replicas**.

   Once you select that option, a message appears indicating that you need to activate your subscription via the AWS Marketplace widget. Choose **View subscription options**. Note that it can take 1–2 minutes for the subscription to become active.  
![\[The Create InfluxDB database interface that shows the different deployment settings available for the new database. The cluster with read replicas option is selected.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/deployment_settings_rr.jpg)  
![\[The Deployment settings interface showing a message that the subscription is in progress.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/subscription_in_progress.jpg)

1. Once the subscription is active, choose **View subscription**.  
![\[The Deployment settings interface showing a message that the subscription is now active.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/subscription_success_message.jpg)

1. A window appears presenting the cost per vCPU per instance-hour for each Region. This follows the same compute pricing model: you are charged for the number of hours your instance is active, based on the instance type you selected. You only need to subscribe to the add-on once, and that subscription allows you to create instances in all Regions where Timestream for InfluxDB is available.  
![\[Subscription options form showing pricing details on the cost per vCPU per instance hour for each Region.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/purchase_subscription.png)
**Important**  
To subscribe to the offer, you will need to have either AWSMarketplaceManageSubscriptions or AWSMarketplaceFullAccess permissions. For more information about these permissions, check [Controlling access to AWS Marketplace subscriptions](https://docs.aws.amazon.com/marketplace/latest/buyerguide/buyer-iam-users-groups-policies.html).

1. Once you confirm your subscription, the service automatically selects the Region based on the Region of your instance.

1. In **Database credentials**, complete the following fields:

   1. For **DB cluster name**, enter the identifier for your DB cluster.

   1. Provide the InfluxDB basic initial configuration parameters: **username**, **organization name**, **bucket name**, and **password**.

1. In **Instance configuration**, specify the **DB instance class**. Select an instance size that best fits your workload needs. Keep in mind that this instance type will be used for all instances in your read replica DB cluster.

1. In **Storage configuration**, select a **Storage type** that fits your needs. In all cases, you will only need to configure the allocated storage. Keep in mind that this storage type will be used for all instances in your read replica DB cluster.

1. In the **Connectivity configuration** section, make sure your InfluxDB cluster is in the same subnet as the clients that require connectivity to your Timestream for InfluxDB DB instance. You could also choose to make your DB instance publicly available in the **Public access** subsection.

1. Choose **Create InfluxDB database**.

1. In the **InfluxDB databases** list, choose the name of your new InfluxDB cluster to show its details. The DB cluster will have a status of **Creating** until it is ready to use.

1. When the status changes to **Available**, you can connect to the DB cluster. Depending on the DB instance class and the amount of storage, it can take up to 20 minutes before the new instance is available.  
![\[DB cluster summary page showing two instances with the status "Available".\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/cluster_details_page.png)

1. Once created, you can choose your DB cluster identifier to retrieve information about your newly created cluster. The endpoint with an instance mode of **PRIMARY** is the one to use for writes and engine administration.

------
#### [ Using the AWS CLI ]

To create a DB instance using the AWS Command Line Interface, call the `create-db-cluster` command with the following parameters. Replace each *user input placeholder* with your own information.

```
aws timestream-influxdb create-db-cluster \
      --region region \
      --vpc-subnet-ids subnet-ids \
      --vpc-security-group-ids security-group-ids \
      --db-instance-type db.influx.large \
      --db-storage-type InfluxIOIncludedT2 \
      --allocated-storage 400 \
      --password password \
      --name cluster-name \
      --deployment-type MULTI_NODE_READ_REPLICAS \
      --publicly-accessible
```

The `--failover-mode` option is optional and defaults to `AUTOMATIC`.

------
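
If you prefer an SDK, the same request can be sketched as a parameter dictionary mirroring the CLI example above (for example, for a `timestream-influxdb` client in an AWS SDK). The helper below is a hypothetical illustration: the keys follow the API parameter names listed in the settings table, and the subnet and security group IDs are placeholders.

```python
# Hypothetical sketch: assemble CreateDbCluster parameters mirroring the
# CLI example above. Keys follow the API parameter names in the settings
# table; IDs are placeholders.
def build_create_db_cluster_params(name, password, subnet_ids, sg_ids,
                                   failover_mode="AUTOMATIC"):
    params = {
        "name": name,
        "password": password,
        "vpcSubnetIds": subnet_ids,
        "vpcSecurityGroupIds": sg_ids,
        "dbInstanceType": "db.influx.large",
        "dbStorageType": "InfluxIOIncludedT2",
        "allocatedStorage": 400,
        "deploymentType": "MULTI_NODE_READ_REPLICAS",
        "publiclyAccessible": True,
    }
    if failover_mode != "AUTOMATIC":   # AUTOMATIC is the default
        params["failoverMode"] = failover_mode
    return params

p = build_create_db_cluster_params("my-cluster", "secret-password",
                                   ["subnet-a", "subnet-b"], ["sg-1"])
```

A dictionary like this could then be passed to the SDK's create call once you have verified the parameter names against the [CreateDbCluster](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/API_CreateDbCluster.html) reference.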

### Settings for creating read replica clusters
<a name="timestream-for-influx-rr-create-settings"></a>

For details about settings that you choose when you create a read replica cluster, see the following table. For more information about the AWS CLI options, see [create-db-cluster](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/timestream-influxdb/create-db-cluster.html). For more information about the Amazon Timestream for InfluxDB API parameters, see [CreateDbCluster](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/API_CreateDbCluster.html).



| Console setting | Setting description | CLI option and Timestream for InfluxDB API parameter | 
| --- | --- | --- | 
| Allocated storage | The amount of storage to allocate for each DB instance in your DB cluster (in gibibytes). For more information, see [InfluxDB instance storage](timestream-for-influxdb.md#timestream-for-influx-dbi-storage). |  **CLI option: ** `--allocated-storage` **API parameter: **`allocatedStorage`  | 
| Database port | The port number on which InfluxDB accepts connections. Valid Values: 1024-65535 Default: 8086 Constraints: The value can't be 2375-2376, 7788-7799, 8090, or 51678-51680.  |  **CLI option: ** `--port` **API parameter: **`port`  | 
| DB cluster name | The name that uniquely identifies the DB cluster. DB instance names must be unique per customer and per region. |  **CLI option: ** `--name` **API parameter: **`name`  | 
| DB instance type | The compute and memory capacity of each DB instance in your Timestream for InfluxDB DB cluster, for example db.influx.xlarge. If possible, choose a DB instance class large enough that a typical query working set can be held in memory. When working sets are held in memory, the system can avoid writing to disk, which improves performance.  |  **CLI option: ** `--db-instance-type` **API parameter: **`dbInstanceType`  | 
| DB cluster parameter group |  The ID of the DB parameter group to assign to your DB cluster. DB parameter groups specify how the database is configured. For example, DB parameter groups can specify the limit for query concurrency. |  **CLI option: ** `--db-parameter-group-identifier` **API parameter: **`dbParameterGroupIdentifier`  | 
| Deployment type |  Specifies whether the DB cluster will be deployed as a multinode read replica or a Multi-AZ multinode read replica. Possible values: `MULTI_NODE_READ_REPLICAS`  |  **CLI option: ** `--deployment-type` **API parameter: **`deploymentType`  | 
| VPC subnet ID | The DB subnet ID you want to use for the DB cluster. Select Choose existing to use an existing DB subnet group, then choose the required subnet group from the Existing DB subnet groups dropdown list. Choose Automatic setup to let Timestream for InfluxDB select a compatible DB subnet group. |  **CLI option: ** `--vpc-subnet-ids` **API parameter: **`vpcSubnetIds`  | 
| Organization | The name of the initial organization for the initial admin user in InfluxDB. An InfluxDB organization is a workspace for a group of users. |  **CLI option: ** `--organization` **API parameter: **`organization`  | 
| Bucket | The name of the initial InfluxDB bucket. All InfluxDB data is stored in a bucket. A bucket combines the concept of a database and a retention period (the duration of time that each data point persists). A bucket belongs to an organization. |  **CLI option: ** `--bucket` **API parameter: **`bucket`  | 
| Log exports |  Configuration for sending InfluxDB engine logs to a specified S3 bucket. Configuration for S3 bucket log delivery: `s3Configuration -> (structure)` The name of the S3 bucket to deliver logs to: `bucketName -> (string)` Indicates whether log delivery to the S3 bucket is enabled: `enabled -> (boolean)` Shorthand syntax: `s3Configuration={bucketName=string, enabled=boolean}`  |  **CLI option: ** `--log-delivery-configuration` **API parameter: **`logDeliveryConfiguration`  | 
| Password | The password of the initial admin user you created in InfluxDB. This password will allow you to access the InfluxDB UI to perform various administrative tasks and also use the InfluxDB CLI to create an operator token. These attributes will be stored in a secret created in AWS Secrets Manager in your account. |  **CLI option: ** `--password` **API parameter: **`password`  | 
| Username | The username of the initial admin user created in InfluxDB. Must start with a letter and can't end with a hyphen or contain two consecutive hyphens. For example, my-user1. This username will allow you to access the InfluxDB UI to perform various administrative tasks and also use the InfluxDB CLI to create an operator token. These attributes will be stored in a secret created in AWS Secrets Manager in your account. |  **CLI option: ** `--username` **API parameter: **`username`  | 
| Public access | Indicates whether the DB cluster is accessible from outside the VPC. **Publicly accessible** gives the DB cluster a public IP address, meaning it's accessible outside the VPC. To be publicly accessible, the DB cluster also has to be in a public subnet in the VPC. **Not publicly accessible** makes the DB cluster accessible only from inside the VPC.  |  **CLI options: ** `--publicly-accessible` `--no-publicly-accessible` **API parameter: **`publiclyAccessible`  | 
| DB storage type | The storage type for InfluxDB data. You can choose between three different types of provisioned Influx IOPS Included storage according to your workload's requirements. Possible values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-create-rr-cluster.html)  |  **CLI option: ** `--db-storage-type` **API parameter: **`dbStorageType`  | 
| VPC security group | A list of VPC security group IDs to associate with the DB instance. |  **CLI option: ** `--vpc-security-group-ids` **API parameter: **`vpcSecurityGroupIds`  | 
| VPC subnet IDs | A list of VPC subnet IDs to associate with the DB instance. Provide at least two VPC subnet IDs in different Availability Zones when deploying with a Timestream for InfluxDB DB cluster. |  **CLI option: ** `--vpc-subnet-ids` **API parameter: **`vpcSubnetIds`  | 
| Failover mode | How your cluster responds to a primary instance failure. You can configure this with the following options: `AUTOMATIC`: If the primary instance fails, the system automatically promotes a read replica to become the new primary instance. `NO_FAILOVER`: If the primary instance fails, the system attempts to restore the primary instance without promoting a read replica. The cluster remains unavailable until the primary instance is restored.  |  **CLI option: ** `--failover-mode` **API parameter: **`failoverMode`  | 

**Important**  
As part of the DB cluster response object, you will receive an `influxAuthParametersSecretArn`, which holds the ARN of a Secrets Manager secret in your account. It is only populated after your InfluxDB DB instances are available. The secret contains the Influx authentication parameters provided during the `CreateDbInstance` process. This is a **read-only** copy: any updates, modifications, or deletions to this secret don't impact the created DB instance. If you delete this secret, the API response will still refer to the deleted secret ARN.
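
The database-port constraint in the settings table above (valid range 1024–65535, with several reserved ranges excluded) can be sketched as a small validation helper. This is an illustration of the documented rule, not an official client check.

```python
# Sketch of the database-port constraint from the settings table: the port
# must be in 1024-65535 and must not fall in the reserved ranges.
RESERVED_RANGES = [(2375, 2376), (7788, 7799), (8090, 8090), (51678, 51680)]

def is_valid_influxdb_port(port):
    if not 1024 <= port <= 65535:
        return False
    return not any(lo <= port <= hi for lo, hi in RESERVED_RANGES)

assert is_valid_influxdb_port(8086)      # the default port
assert not is_valid_influxdb_port(8090)  # reserved
assert not is_valid_influxdb_port(80)    # below the allowed range
```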

# Connecting to a Timestream for InfluxDB read replica DB cluster
<a name="timestream-for-influx-connecting-cluster"></a>

A Timestream for InfluxDB read replica DB cluster has two reachable DB instances instead of a single DB instance. Each connection is handled by a specific DB instance. When you connect to a read replica DB cluster, the hostname and port that you specify point to a fully qualified domain name called an *endpoint*.

The primary (writer) endpoint connects to the writer DB instance of the read replica DB cluster, which supports both read and write operations. The reader endpoint connects to the reader DB instance, which supports only read operations.

Using endpoints, you can map each connection to the appropriate DB instance based on your use case. For example, to perform administrative or write statements, you can connect to whichever DB instance is the writer DB instance. To perform queries, you can connect to the reader endpoint. For diagnosis or tuning, you can connect to a specific DB instance endpoint and query its `/metrics` path to examine details about that DB instance.

For information about connecting to a DB instance, see [Connecting to an Amazon Timestream for InfluxDB DB instance](timestream-for-influx-db-connecting.md). For more information about connecting to read replica clusters, see the following topics.

## Types of read replica cluster endpoints
<a name="timestream-for-influx-rr-cluster-endpoint-types"></a>

An endpoint is represented by a unique identifier that contains a host address. Each Timestream for InfluxDB cluster has:
+ A cluster endpoint.
+ A cluster read-only endpoint.
+ An instance endpoint for each instance in the cluster.

### Cluster endpoint
<a name="timestream-for-influx-rr-cluster-endpoints"></a>

A *cluster endpoint* (or *writer endpoint*) for a read replica cluster connects to the current writer DB instance for that DB cluster. This endpoint is the only one that can perform write operations such as:
+ InfluxDB-specific administrative commands, such as creating, modifying, or deleting organizations, users, buckets, and tasks.
+ Writing data to your database cluster.

You use the cluster endpoint for all write operations on the DB cluster, including writes, upserts, deletes, and all configuration and administrative changes.

In addition, you can use the cluster endpoint for read operations, such as queries.

If the current writer DB instance of a DB cluster fails, the read replica cluster automatically fails over to one of its replicas, promoting it to the new writer DB instance. During a failover, the DB cluster continues to serve connection requests to the cluster endpoint from the new writer DB instance, with minimal interruption of service. The read replica that was promoted to writer stops serving read-only traffic until a new replica is deployed.

The following example illustrates a cluster endpoint for a read replica cluster:

```
ipvtdwa5se-wmyjrrjko.us-west-2.timestream-influxdb.amazonaws.com
```

### Read-only endpoint
<a name="timestream-for-influx-rr-readonly-endpoints"></a>

The *read-only endpoint* connects to any one of the read replica instances in the cluster. Read replicas support only read operations, such as Flux or InfluxQL queries; in other words, all operations executed against the `/api/v2/query` endpoint for Flux queries or the `/api/query` endpoint for InfluxQL v1-compatible queries. By processing those statements on the reader DB instances, this endpoint reduces the overhead on the writer DB instance. It also helps the cluster handle a higher number of simultaneous queries.

The following example illustrates a reader endpoint for a read replica cluster. The read-only intent of a reader endpoint is denoted by the `-ro` within the cluster endpoint name.

```
ipvtdwa5se-wmyjrrjko-ro.us-west-2.timestream-influxdb.amazonaws.com
```
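
As a minimal sketch, the `-ro` naming convention shown in the examples above can be used to derive the reader endpoint from a cluster endpoint and to build the Flux query URL that the reader instances serve. The helper functions and endpoint string below are illustrative assumptions, and port 8086 is the default from the settings table.

```python
# Sketch: derive the read-only endpoint from a cluster (writer) endpoint by
# appending "-ro" to the cluster identifier, following the examples above,
# then build the Flux query URL served by the reader instances.
def reader_endpoint(cluster_endpoint):
    host, rest = cluster_endpoint.split(".", 1)
    return f"{host}-ro.{rest}"

def flux_query_url(endpoint, port=8086):
    # /api/v2/query serves Flux queries; /api/query serves InfluxQL v1.
    return f"https://{endpoint}:{port}/api/v2/query"

writer = "ipvtdwa5se-wmyjrrjko.us-west-2.timestream-influxdb.amazonaws.com"
reader = reader_endpoint(writer)
print(flux_query_url(reader))
```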

### Instance endpoint
<a name="timestream-for-influx-rr-instance-endpoints"></a>

An *instance endpoint* connects to a specific DB instance within a read replica cluster. Each DB instance in a DB cluster has its own unique instance endpoint. Therefore, there is one instance endpoint for the current writer DB instance of the DB cluster (the primary), and there is one instance endpoint for each of the reader DB instances in the DB cluster.

The instance endpoint provides direct control over connections to the DB cluster. This control can help you address scenarios where using the cluster endpoint or reader endpoint might not be appropriate. For example, your client application might require more fine-grained load balancing based on workload type. In this case, you can configure multiple clients to connect to different reader DB instances in a DB cluster to distribute read workloads.

The following example illustrates an instance endpoint for a DB instance in a read replica cluster:

```
mydbinstance-123456789012.us-east-1.timestream-influxdb.amazonaws.com
```

# Modifying a read replica cluster for Amazon Timestream for InfluxDB
<a name="timestream-for-influx-modifying-rr-cluster"></a>

A read replica cluster has a writer DB instance and a reader DB instance in separate Availability Zones. Read replica clusters provide high availability, increased capacity for read workloads, and faster failover when compared to Multi-AZ deployments. For more information about read replica clusters, see [Overview of Amazon Timestream for InfluxDB read replica clusters](timestream-for-influx-read-replica-overview.md).

You can modify a read replica cluster to change its settings.

**Important**  
You can't modify the DB instances within a read replica cluster. All modifications must be done at the DB cluster level.  
You can modify a read replica cluster using the AWS Management Console, the AWS CLI, or the Amazon Timestream for InfluxDB API.

## Modify a read replica cluster for Amazon Timestream for InfluxDB
<a name="timestream-for-influx-modify-rr-db-cluster"></a>

------
#### [ Using the AWS Management Console ]

To modify a read replica DB cluster using the console:

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/timestream) and open the Amazon Timestream console.

1. In the navigation pane, choose **InfluxDB databases** and then choose the read replica cluster you want to modify.

1. Choose **Modify**. The **Modify DB cluster** page appears.

1. Choose any of the settings that you want. For information about each setting, see [Settings for modifying read replica clusters](#timestream-for-influx-rr-modify-settings).

1. After you have made your changes, choose **Continue** and check the summary of modifications.

1. On the confirmation page, review your changes. If they're correct, choose **Modify DB cluster** to save your changes. Alternatively, choose **Back** to edit your changes or **Cancel** to cancel your changes.

**Important**  
Currently, Amazon Timestream for InfluxDB supports only **Apply Immediately** updates for read replica clusters. If you confirm the changes, your DB cluster will incur downtime while the changes are being applied.

------
#### [ Using the AWS CLI ]

To modify a DB cluster using the AWS Command Line Interface, use the `update-db-cluster` command with the following parameters. Replace each *user input placeholder* with your own information.

```
aws timestream-influxdb update-db-cluster \
      --region region \
      --db-cluster-id db-cluster-id \
      --db-instance-type db.influx.4xlarge \
      --port 10000 \
      --failover-mode NO_FAILOVER
```

------

## Settings for modifying read replica clusters
<a name="timestream-for-influx-rr-modify-settings"></a>

For details about settings that you can use to modify a read replica cluster, see the following table. For more information about the AWS CLI options, see [update-db-cluster](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/timestream-influxdb/update-db-cluster.html).



| Console setting | Setting description | CLI option and Timestream for InfluxDB API parameter | 
| --- | --- | --- | 
| Database port | The port number on which InfluxDB accepts connections. Valid Values: 1024-65535 Default: 8086 Constraints: The value can't be 2375-2376, 7788-7799, 8090, or 51678-51680.  |  **CLI option: ** `--port` **API parameter: **`port`  | 
| DB instance type | The compute and memory capacity of each DB instance in your Timestream for InfluxDB DB cluster, for example db.influx.xlarge. If possible, choose a DB instance class large enough that a typical query working set can be held in memory. When working sets are held in memory, the system can avoid writing to disk, which improves performance. |  **CLI option: ** `--db-instance-type` **API parameter: **`dbInstanceType`  | 
| DB cluster parameter group |  The ID of the DB parameter group to assign to your DB cluster. DB parameter groups specify how the database is configured. For example, DB parameter groups can specify the limit for query concurrency. |  **CLI option: ** `--db-parameter-group-identifier` **API parameter: **`dbParameterGroupIdentifier`  | 
| Log exports |  Configuration for sending InfluxDB engine logs to a specified S3 bucket. Configuration for S3 bucket log delivery: `s3Configuration -> (structure)` The name of the S3 bucket to deliver logs to: `bucketName -> (string)` Indicate whether log delivery to the S3 bucket is enabled: `enabled -> (boolean)` Shorthand syntax: `s3Configuration={bucketName=string, enabled=boolean}`  |  **CLI option: ** `--log-delivery-configuration` **API parameter: **`logDeliveryConfiguration`  | 
| Failover mode | Configure how your cluster responds to a primary instance failure using the following options: `AUTOMATIC`: If the primary instance fails, the system automatically promotes a read replica to become the new primary instance. `NO_FAILOVER`: If the primary instance fails, the system attempts to restore the primary instance without promoting a read replica. The cluster remains unavailable until the primary instance is restored.  | **CLI option: ** `--failover-mode` **API parameter: **`failoverMode` | 
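
As an illustration of the shorthand syntax in the table above, the following sketch enables S3 log delivery for a read replica cluster. The cluster ID and bucket name are placeholders; replace them with your own values.

```
aws timestream-influxdb update-db-cluster \
      --db-cluster-id db-cluster-id \
      --log-delivery-configuration 's3Configuration={bucketName=amzn-s3-demo-bucket,enabled=true}'
```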

# Rebooting a read replica cluster in Amazon Timestream for InfluxDB
<a name="timestream-for-influx-rebooting-rr-cluster"></a>

You can reboot a read replica cluster in the event of any health issues.

## Rebooting a read replica cluster for Amazon Timestream for InfluxDB
<a name="timestream-for-influx-rebooting-rr-db-cluster"></a>

------
#### [ Using the AWS Management Console ]

To reboot a read replica DB cluster using the console:

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/timestream) and open the Amazon Timestream console.

1. In the navigation pane, choose **InfluxDB databases** and then choose the read replica cluster you want to reboot.

1. Choose **Restart database**.

1. Choose **Confirm and Restart**.

------
#### [ Using the AWS CLI ]

To reboot a read replica DB cluster using the AWS Command Line Interface, use the `reboot-db-cluster` command with the following parameters. Replace each *user input placeholder* with your own information.

```
aws timestream-influxdb reboot-db-cluster \
      --region region \
      --db-cluster-id db-cluster-id
```

------

# Creating CloudWatch alarms to monitor Amazon Timestream for InfluxDB
<a name="timestream-for-influx-creating-cw-alarms"></a>

You can create a CloudWatch alarm that sends an Amazon SNS message when the alarm changes state. An alarm watches a single metric over a time period that you specify. The alarm can also perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon SNS topic or Amazon EC2 Auto Scaling policy.

Alarms invoke actions for sustained state changes only. CloudWatch alarms don't invoke actions simply because they are in a particular state. The state must have changed and have been maintained for a specified number of time periods.

You can set CloudWatch alarms on any of the available metrics for Timestream for InfluxDB, including `CPUUtilization`, `MemoryUtilization`, `DiskUtilization`, and `ReplicaLag`.
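
To see which metrics are currently being published for your databases, you can list them with the CloudWatch CLI. The `Timestream/InfluxDB` namespace matches what the console shows; this is a sketch, and the command returns nothing if no data points have been published yet.

```
aws cloudwatch list-metrics \
    --namespace Timestream/InfluxDB
```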

We recommend starting with `DiskUtilization` alarms for your Timestream for InfluxDB databases, because running out of storage space can be particularly problematic for InfluxDB. Set alerts to be sent whenever `DiskUtilization` exceeds approximately 75–80 percent.

## To set an alarm using the AWS CLI
<a name="timestream-for-influx-alarm-cli"></a>

Call `put-metric-alarm`. For more information, see [put-metric-alarm](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudwatch/put-metric-alarm.html) in the *AWS CLI Command Reference*.
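
For example, the following sketch creates the recommended `DiskUtilization` alarm at an 80 percent threshold. The dimension name, instance identifier, and SNS topic ARN are placeholder assumptions; adjust them to match your own resources.

```
aws cloudwatch put-metric-alarm \
    --alarm-name my-influxdb-disk-alarm \
    --namespace Timestream/InfluxDB \
    --metric-name DiskUtilization \
    --dimensions Name=DbInstanceId,Value=my-db-instance \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-west-2:123456789012:my-alarm-topic
```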

## To set an alarm using the CloudWatch API
<a name="timestream-for-influx-alarm-api"></a>

Call `PutMetricAlarm`. For more information, see [PutMetricAlarm](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricAlarm.html) in the *Amazon CloudWatch API Reference*. For more information about setting up Amazon SNS topics and creating alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).

## Tutorial: Create an Amazon CloudWatch alarm for Multi-AZ cluster replica lag for Amazon Timestream for InfluxDB
<a name="timestream-for-influx-tutorial-alarm"></a>

You can create an Amazon CloudWatch alarm that sends an Amazon SNS message when replica lag for a Multi-AZ DB cluster has exceeded a threshold. An alarm watches the `ReplicaLag` metric over a time period that you specify. The action is a notification sent to an Amazon SNS topic or Amazon EC2 Auto Scaling policy.

### To set a CloudWatch alarm for Multi-AZ DB cluster replica lag
<a name="timestream-for-influx-alarm-tutorial-steps"></a>

1. Sign in to the AWS Management Console and open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch).

1. In the navigation pane, choose **Alarms**, then **All alarms**.

1. Choose **Create alarm**.

1. On the **Specify metric and conditions** page, choose **Select metric**.

1. In the search box, enter the name of your DB cluster, select **Timestream/InfluxDB**, **By DbCluster**, and then select your cluster.  
![\[The Select metric page showing an empty CloudWatch graph and two Timestream for InfluxDB sort options to choose from.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/select_metric_page.png)

1. Choose the metric that you want to create an alarm for, in this case `ReplicaLag`, and then choose **Select metric**. The following image shows the **Select metric** page with a read replica cluster named `inframonitoringcluster` selected.  
![\[The Select metric page showing an empty CloudWatch graph and seven CloudWatch metrics to choose from.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/select_metric_cluster_selected.png)

1. On the **Specify metric and conditions** page, customize the following fields:  
![\[The Specify metric and conditions page showing settings selected for the inframonitoringcluster cluster.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/replica_lag_metrics_conditions.png)

   1. Select a period of time for your calculations in the **Period** section.

   1. Set up the conditions related to your alarm. For **Threshold type**, you can choose between **Static** and **Anomaly detection**.

      In this case, we will use **Static** since we know how our workload behaves. Each workload might have different requirements when it comes to what is considered "healthy."

   1. Select your threshold value. In the case of **Static** threshold values, these will be in milliseconds.

   1. Choose **Next**.

1. On the **Configure actions** page, in the **Notification** section, customize the following settings:  
![\[The Configure actions page showing a list of six different actions. The Notification section is completed.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/configure_actions.png)

   1. For **Alarm state trigger**, select **In alarm**.

   1. In **Send a notification to the following SNS topic**, choose **Create new topic**.

   1. Enter a unique topic name and a valid email address that will receive the notification.

   1. Choose **Create topic**. Scroll down and choose **Next**.

1. On the **Add name and description** page, enter an **Alarm name** and **Alarm description**. Choose **Next**.  
![\[The Add name and description page showing fields for alarm name and alarm description.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/add_name_desc.png)

1. Review your alarm settings on the **Preview and create** page, and then choose **Create alarm**.

**Important**  
To keep your Timestream for InfluxDB cluster in a healthy state, we also recommend creating alarms for `CPUUtilization` and `MemoryUtilization` values that consistently exceed 85 percent, and for `DiskUtilization` that exceeds 75 percent.

# Read replica licensing through AWS Marketplace
<a name="timestream-for-influx-rr-licensing"></a>

To use Timestream for InfluxDB read replicas, you will need to activate the Timestream for InfluxDB read replicas add-on license through AWS Marketplace. Once the license is active, you will pay an hourly rate to use read replica clusters. You will only pay for the hours your read replica cluster is active. If you subscribe to the license but have no active Timestream for InfluxDB read replica clusters, you will not be charged.

**Topics**
+ [Read replica licensing terminology](#timestream-for-influx-rr-licensing-terminology)
+ [Payments and billing](#timestream-for-influx-rr-license-billing)
+ [Subscribing to the InfluxDB read replica add-on on Marketplace listings](#timestream-for-influx-subscribe-rr-add-on)

## Read replica licensing terminology
<a name="timestream-for-influx-rr-licensing-terminology"></a>

This page uses the following terminology when discussing the Amazon Timestream for InfluxDB integration with AWS Marketplace.

**SaaS subscription**  <a name="saassub"></a>
In AWS Marketplace, software-as-a-service (SaaS) products with a pay-as-you-go license model adopt a usage-based subscription model. InfluxData, the software seller for the read replica add-on, tracks your usage, and you pay only for what you use.

**InfluxData Marketplace fees**  <a name="influxdatafees"></a>
Fees charged for the InfluxDB read replica add-on software license usage by InfluxData. These service fees are metered through AWS Marketplace and appear on your AWS bill under the AWS Marketplace section.

**Amazon Timestream for InfluxDB fees**  <a name="timestreamfees"></a>
Fees that AWS charges for the Amazon Timestream for InfluxDB service. These fees exclude the license fees that apply when you use Timestream for InfluxDB read replica clusters. Fees are metered through the Amazon Timestream for InfluxDB service being used and appear on your AWS bill.

## Payments and billing
<a name="timestream-for-influx-rr-license-billing"></a>

Timestream for InfluxDB integrates with AWS Marketplace to offer hourly, pay-as-you-go licenses for the read replica add-on. The read replica Marketplace fees cover the license costs of the read replica add-on software, and the Amazon Timestream fees cover the costs of your Timestream for InfluxDB read replica cluster usage. For information about pricing, see [Amazon Timestream pricing](https://aws.amazon.com/timestream/pricing).

To stop these fees, you must delete any Timestream for InfluxDB read replica clusters. In addition, you can remove your subscriptions to AWS Marketplace for read replica add-on license. If you remove your subscriptions without deleting your read replica clusters, Amazon Timestream will continue to bill you for the use of the read replica clusters. For more information, see [Considerations when deleting replicas](timestream-for-influx-read-replica-overview.md#timestream-for-influx-rr-deletion).
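
Assuming the `delete-db-cluster` command follows the same shape as the other cluster commands in this guide, deleting a read replica cluster from the CLI looks like the following sketch. Replace each placeholder with your own values.

```
aws timestream-influxdb delete-db-cluster \
      --region region \
      --db-cluster-id db-cluster-id
```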

You can view bills and manage payments for your Timestream for InfluxDB read replica cluster in the AWS Billing console. Your bill includes two charges: one for your usage of InfluxData's licensed add-on through AWS Marketplace, and one for your usage of Amazon Timestream. For more information about billing, see [Understanding your bill](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/getting-viewing-bill.html) in the *AWS Billing and Cost Management User Guide*.

## Subscribing to the InfluxDB read replica add-on on Marketplace listings
<a name="timestream-for-influx-subscribe-rr-add-on"></a>

To use the read replica add-on license through AWS Marketplace, you must use the Amazon Timestream AWS Management Console to subscribe to the InfluxDB read replica add-on. You cannot complete these tasks through the AWS CLI or the Timestream for InfluxDB API.

**Topics**
+ [Subscribe from Amazon Timestream AWS Management Console](#timestream-for-influx-subscribe-console)
+ [Subscribe to the InfluxDB read replica add-on in AWS Marketplace](#timestream-for-influx-subscribe-marketplace)

**Note**  
If you want to create your read replica cluster by using the AWS CLI or the Timestream for InfluxDB API, you must complete this step first.

### Subscribe from Amazon Timestream AWS Management Console
<a name="timestream-for-influx-subscribe-console"></a>

You can subscribe to the InfluxDB read replica add-on using the Timestream Management Console. Start the **Create InfluxDB Database** flow and follow the steps. For more information, see [Creating a Timestream for InfluxDB read replica cluster](timestream-for-influx-create-rr-cluster.md).

### Subscribe to the InfluxDB read replica add-on in AWS Marketplace
<a name="timestream-for-influx-subscribe-marketplace"></a>

To use the InfluxDB add-on license with AWS Marketplace, you must have an active AWS Marketplace subscription for the InfluxDB read replica add-on. Subscribing to a single add-on offer allows you to create any instance type you need in any of the available Regions. For information about AWS Marketplace subscriptions, see [SaaS products through AWS Marketplace](https://docs.aws.amazon.com/marketplace/latest/buyerguide/buyer-saas-products.html#saas-pricing-models) in the *AWS Marketplace Buyer Guide*.

We recommend that you subscribe to InfluxDB in AWS Marketplace *before* you start creating a DB instance.

1. Navigate to the [AWS Marketplace](https://console.aws.amazon.com/marketplace) and search for InfluxData.  
![\[Timestream for InfluxDB read replicas add-on appearing in AWS Marketplace search.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/search_mkt_influxdb.png)

1. Select **Timestream for InfluxDB Read Replicas (Add-On)**.

1. Select **View purchase options**.

1. Review the End User License Agreement and choose **Subscribe**.  
![\[Offer and pricing details for Timestream for InfluxDB read replicas add-on.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/kronos/addon_details.png)

1. You can now create your Timestream for InfluxDB read replica cluster using the Timestream Management Console, CLI, or API.

# Managing DB instances
<a name="timestream-for-influx-managing"></a>

This section covers various aspects of managing Amazon Timestream for InfluxDB instances to ensure optimal performance, availability, and monitoring capabilities. It provides guidance on updating the configuration of your database instances, handling Multi-AZ deployments, and failover processes. It also explains how to delete database instances and set up log viewing for your InfluxDB instances.

**Topics**
+ [Updating DB instances](timestream-for-influx-managing-modifying-db.md)
+ [Maintenance windows](timestream-for-influx-managing-maintaining-db.md)
+ [Deleting a DB instance](timestream-for-influx-managing-deleting-db.md)
+ [Rebooting a DB instance](timestream-for-influx-managing-rebooting-db.md)
+ [Multi-AZ DB instance deployments](timestream-for-influx-managing-multi-az-instance-deployments.md)
+ [Setup to view InfluxDB logs on Timestream for InfluxDB instances](timestream-for-influx-managing-view-influx-logs.md)
+ [Monitoring and Configuration Optimization for Timestream for InfluxDB 2](timestream-for-influx-monitoring-configuration-optimization.md)

# Updating DB instances
<a name="timestream-for-influx-managing-modifying-db"></a>

 You can update the following configuration parameters of your Timestream for InfluxDB instance:
+ Instance class
+ Storage type
+ Allocated storage (increase only)
+ Deployment type
+ Parameter group
+ Log delivery configuration

**Important**  
We recommend you test all changes on a test instance before modifying the production instance to understand their impact, especially when upgrading database versions. Review the impact on your database and applications before updating settings. Some modifications require a DB instance reboot, resulting in downtime.

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream for InfluxDB console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **InfluxDB Databases**, and then choose the DB instance that you want to modify.

1. Choose **Modify**. 

1. On the **Modify DB instance** page, make the desired changes.

1. Choose **Continue** and check the summary of modifications.

1. Choose **Next**.

1. Review your changes.

1. Choose **Modify instance** to apply your changes.

**Note**  
These modifications require a reboot of the InfluxDB instance and can cause an outage in some cases.

**Using the AWS Command Line Interface**

To update a DB instance by using the AWS Command Line Interface, call the `update-db-instance` command. Specify the DB instance identifier and the values for the options that you want to modify. For information about each option, see [Settings for DB instances](timestream-for-influx-configuring.md#timestream-for-influx-configuring-create-db-settings).

**Example**  
 The following example updates *my-db-instance*, including setting a different `db-parameter-group-name`. Replace each *user input placeholder* with your own information. The changes are applied immediately.  
For Linux, macOS, or Unix:  

```
aws timestream-influxdb update-db-instance \
    --identifier my-db-instance \
    --db-storage-type desired-storage-type \
    --allocated-storage desired-allocated-storage \
    --db-instance-type desired-instance-type \
    --deployment-type desired-deployment-type \
    --db-parameter-group-name new-param-group \
    --port 8086
```
For Windows:  

```
aws timestream-influxdb update-db-instance ^
    --identifier my-db-instance ^
    --db-storage-type desired-storage-type ^
    --allocated-storage desired-allocated-storage ^
    --db-instance-type desired-instance-type ^
    --deployment-type desired-deployment-type ^
    --db-parameter-group-name new-param-group ^
    --port 8086
```

# Maintenance windows
<a name="timestream-for-influx-managing-maintaining-db"></a>

Periodically, Amazon Timestream for InfluxDB performs maintenance on Amazon Timestream for InfluxDB resources. Maintenance most often involves updates to the following resources in your DB instance:
+ Underlying hardware
+ Underlying operating system (OS)
+ Database engine version

Updates to the operating system most often occur for security issues. 

Some maintenance items require that Amazon Timestream for InfluxDB take your DB instance offline for a short time. Maintenance items that require a resource to be offline include required operating system or database patching. Required patching is automatically scheduled only for patches that are related to security and instance reliability. Such patching occurs infrequently, typically once every few months. It seldom requires more than a fraction of your maintenance window.

**Maintenance window**

Every Amazon Timestream for InfluxDB DB instance has a weekly maintenance window during which maintenance is performed. You can configure your maintenance window in two ways:
+ **Service managed (default)**: Amazon Timestream for InfluxDB determines the optimal maintenance window for your resource.
+ **Customer managed**: You specify a preferred maintenance window using the format `ddd:HH:MM-ddd:HH:MM` (for example, `Sun:02:00-Sun:04:00`). The window must be at least 2 hours and no more than 24 hours. Cross-midnight windows are supported.

You can set your preferred maintenance window when creating a DB instance or change it later using the `update-db-instance` API.

**Timezone**

You can specify a timezone for your maintenance window using the `timezone` field. When set, window times are interpreted in the specified timezone. The `timezone` field is required. Use IANA timezone identifiers such as `America/New_York` or `Asia/Tokyo`. The system handles Daylight Saving Time transitions automatically.

**CLI examples**

Create a DB instance with a custom maintenance window:

```
aws timestream-influxdb create-db-instance \
  --name "my-influxdb" \
  --db-instance-type db.influx.medium \
  --allocated-storage 50 \
  --vpc-subnet-ids subnet-12345abc subnet-67890def \
  --vpc-security-group-ids sg-12345abc \
  --maintenance-schedule '{
    "timezone": "America/New_York",
    "preferredMaintenanceWindow": "Sun:02:00-Sun:04:00"
  }' \
  --region us-west-2
```

Update the maintenance window on an existing DB instance:

```
aws timestream-influxdb update-db-instance \
  --identifier <instance-identifier> \
  --maintenance-schedule '{
    "timezone": "Asia/Tokyo",
    "preferredMaintenanceWindow": "Wed:03:00-Wed:06:00"
  }' \
  --region us-west-2
```

Revert to service managed:

```
aws timestream-influxdb update-db-instance \
  --identifier <instance-identifier> \
  --maintenance-schedule '{
    "timezone": "UTC",
    "preferredMaintenanceWindow": ""
  }' \
  --region us-west-2
```

**Important**  
If a required maintenance action has been deferred for more than 25 days, the service may apply maintenance outside of your preferred window to ensure the security and reliability of your resource.

During maintenance, the DB instance status changes to `MAINTENANCE`. After completion, the status returns to `AVAILABLE`.
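
To check whether a DB instance is currently in maintenance, you can query its status from the CLI. This is a sketch assuming the `get-db-instance` command; replace the placeholder identifier with your own.

```
aws timestream-influxdb get-db-instance \
    --identifier <instance-identifier> \
    --query 'status'
```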

**Supported timezones**

Use IANA timezone identifiers. Timezone abbreviations such as `EST`, `PST`, and `GMT+5` are not supported.


| **Timezone** | **Description** | 
| --- | --- | 
| UTC | Coordinated Universal Time (default) | 
| America/New_York | US Eastern | 
| America/Chicago | US Central | 
| America/Denver | US Mountain | 
| America/Los_Angeles | US Pacific | 
| America/Sao_Paulo | Brazil | 
| Europe/London | UK | 
| Europe/Paris | Central Europe | 
| Europe/Berlin | Germany | 
| Asia/Tokyo | Japan | 
| Asia/Shanghai | China | 
| Asia/Singapore | Singapore | 
| Asia/Mumbai | India | 
| Asia/Dubai | UAE | 
| Australia/Sydney | Australia Eastern | 
| Pacific/Auckland | New Zealand | 

**Considerations**
+ Maintenance windows define when maintenance *can* occur, not when it *will* occur. Maintenance is performed as needed, typically no more than once per week.
+ Maintenance is required at least once per month for security and reliability patching.
+ For Multi-AZ deployments, maintenance is performed on the standby first, then a failover occurs, minimizing downtime.
+ If you use a timezone with DST transitions, avoid scheduling maintenance between 1:00 AM and 3:00 AM to prevent skipped windows during spring forward.

# Deleting a DB instance
<a name="timestream-for-influx-managing-deleting-db"></a>



Deleting a DB instance has an effect on instance recoverability and snapshot availability. Consider the following issues:
+ If you want to delete all Timestream for InfluxDB resources, note that DB instance resources continue to incur billing charges until they are deleted.
+ When the status for a DB instance is deleting, its CA certificate value doesn't appear in the Timestream for InfluxDB console or in output for AWS Command Line Interface commands or Timestream API operations. 
+ The time required to delete a DB instance varies depending on how much data is deleted, and whether a final snapshot is taken.

You can delete a DB instance using the AWS Management Console, the AWS Command Line Interface, or the Timestream API. You must provide the name of the DB instance: 

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream for InfluxDB console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **InfluxDB Databases**, and then choose the DB instance that you want to delete.

1. Choose **Delete**.

1. Enter *confirm* in the box.

1. Choose **Delete**.

**Using the AWS Command Line Interface**

To find the instance IDs of the DB instances in your account, call the `list-db-instances` command:

```
aws timestream-influxdb list-db-instances \
--endpoint-url YOUR_ENDPOINT \
--region YOUR_REGION
```

To delete a DB instance by using the AWS CLI, call the `delete-db-instance` command with the following options:

```
aws timestream-influxdb delete-db-instance \
--identifier YOUR_DB_INSTANCE
```

**Example**  

For Linux, macOS, or Unix:

```
aws timestream-influxdb delete-db-instance \
    --identifier mydbinstance
```

For Windows:

```
aws timestream-influxdb delete-db-instance ^
    --identifier mydbinstance
```

# Rebooting a DB instance
<a name="timestream-for-influx-managing-rebooting-db"></a>



You can reboot a DB instance using the AWS Management Console, the AWS Command Line Interface, or the Timestream API. You must provide the ID of the DB instance: 

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream for InfluxDB console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **InfluxDB Databases**, and then choose the DB instance that you want to reboot.

1. Choose **Restart database**.

1. Choose **Confirm and Restart**.

**Using the AWS Command Line Interface**

To reboot a DB instance by using the AWS CLI, call the `reboot-db-instance` command with the following options:

**Example Commands**  

For Linux, macOS, or Unix:

```
aws timestream-influxdb reboot-db-instance \
    --region YOUR_REGION \
    --identifier YOUR_INSTANCE_ID
```

For Windows:

```
aws timestream-influxdb reboot-db-instance ^
    --region YOUR_REGION ^
    --identifier YOUR_INSTANCE_ID
```

# Multi-AZ DB instance deployments
<a name="timestream-for-influx-managing-multi-az-instance-deployments"></a>

Amazon Timestream for InfluxDB provides high availability and failover support for DB instances using Multi-AZ deployments with a single standby DB instance. This type of deployment is called a Multi-AZ DB instance deployment. Amazon Timestream for InfluxDB uses Amazon failover technology.

In a Multi-AZ DB instance deployment, Amazon Timestream automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy. Running a DB instance with high availability can enhance availability during DB instance failure and Availability Zone disruption. For more information, see [AWS Regions and Availability Zones](timestream-for-influxdb.md#timestream-for-influx-dbi-regions).

**Note**  
The high availability option isn't a scaling solution for read-only scenarios. You can't use a standby replica to serve read traffic. 

Using the Amazon Timestream console, you can create a Multi-AZ DB instance deployment by selecting the **Create a standby instance** option in the **Availability and durability configuration** section when creating a DB instance. You can also specify a Multi-AZ DB instance deployment with the AWS Command Line Interface or Amazon Timestream API. Use the `create-db-instance` CLI command or the `CreateDBInstance` API operation.

DB instances using Multi-AZ DB instance deployments can have increased write and commit latency compared to a Single-AZ deployment because of the synchronous data replication that occurs. You might also see a change in latency if your deployment fails over to the standby replica, although AWS is engineered with low-latency network connectivity between Availability Zones. For production workloads, we recommend that you use IOPS Included storage with 12K or 16K IOPS for fast, consistent performance. For more information about DB instance classes, see [DB instance classes](timestream-for-influxdb.md#timestream-for-influx-dbi-classes).

# Configuring and managing a multi-AZ deployment
<a name="timestream-for-influx-managing-multi-az"></a>

Timestream for InfluxDB Multi-AZ deployments can only have one standby. When the deployment has one standby DB instance, it's called a Multi-AZ DB instance deployment. A Multi-AZ DB instance deployment has one standby DB instance that provides failover support, but doesn't serve read traffic. 

**Important**  
Your instance must have at least two subnets associated with it to perform Single-AZ to Multi-AZ updates. Once the instance is created, you can't modify its deployment mode from Multi-AZ to Single-AZ.

You can use the AWS Management Console to determine whether your DB instance is a Single-AZ or Multi-AZ deployment.

**Using the AWS Management Console**

1. Sign in to the AWS Management Console and open the [Amazon Timestream for InfluxDB console](https://console.aws.amazon.com/timestream/).

1. In the navigation pane, choose **InfluxDB databases**, and then choose **DB identifier**.

A Multi-AZ DB instance deployment has the following characteristics:
+ There is only one row for the DB instance.
+ The value of Role is Instance or Primary.
+ The value of Multi-AZ is Yes.

# Failover process for Amazon Timestream
<a name="timestream-for-influx-managing-multi-az-failover"></a>

If a planned or unplanned outage of your DB instance results from an infrastructure defect, Amazon Timestream for InfluxDB automatically switches to a standby replica in another Availability Zone if you have turned on Multi-AZ. The time that it takes for the failover to complete depends on the database activity and other conditions at the time the primary DB instance became unavailable. Failover times are typically 60–120 seconds. However, large transactions or a lengthy recovery process can increase failover time. When the failover is complete, it can take additional time for the Timestream console to reflect the new Availability Zone.
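
After a failover completes, you can confirm that client-side DNS now resolves the instance endpoint to the new primary. The hostname below is a placeholder for your own instance endpoint.

```
# Resolve the instance endpoint; repeat after a failover to see the IP address change
dig +short your-instance-endpoint
```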

**Note**  
Amazon Timestream handles failovers automatically so you can resume database operations as quickly as possible without administrative intervention. The primary DB instance switches over automatically to the standby replica if any of the conditions described in the following table occurs. 



| Failover reason | Description | 
| --- | --- | 
| The operating system underlying the Timestream database instance is being patched in an offline operation.  |  A failover was triggered during the maintenance window for an OS patch or a security update.  | 
| The primary host of the Timestream Multi-AZ instance is unhealthy.  |  The Multi-AZ DB instance deployment detected an impaired primary DB instance and failed over.  | 
| The primary host of the Timestream Multi-AZ instance is unreachable due to loss of network connectivity.  |  Timestream monitoring detected a network reachability failure to the primary DB instance and triggered a failover.  | 
| The Timestream instance was modified by the customer.  |  A Timestream for InfluxDB DB instance modification triggered a failover. For more information, see [Updating DB instances](timestream-for-influx-managing-modifying-db.md).  | 
| The Timestream Multi-AZ primary instance is busy and unresponsive.  |  The primary DB instance is unresponsive. We recommend that you do the following: (1) Examine the event for excessive CPU, memory, or swap space usage. (2) Evaluate your workload to determine whether you're using the appropriate DB instance class. For more information, see DB instance classes.  | 
| The storage volume underlying the primary host of the Timestream Multi-AZ instance experienced a failure.  |  The Multi-AZ DB instance deployment detected a storage issue on the primary DB instance and failed over.  | 

# Setting the JVM TTL for DNS name lookups
<a name="timestream-for-influx-managing-jvm"></a>

The failover mechanism automatically changes the Domain Name System (DNS) record of the DB instance to point to the standby DB instance. As a result, you need to re-establish any existing connections to your DB instance. In a Java virtual machine (JVM) environment, due to how the Java DNS caching mechanism works, you might need to reconfigure JVM settings.

The JVM caches DNS name lookups. When the JVM resolves a host name to an IP address, it caches the IP address for a specified period of time, known as the *time-to-live* (TTL).

Because AWS resources use DNS name entries that occasionally change, we recommend that you configure your JVM with a TTL value of no more than 60 seconds. Doing this makes sure that when a resource's IP address changes, your application can receive and use the resource's new IP address by requerying the DNS.

On some Java configurations, the JVM default TTL is set so that it never refreshes DNS entries until the JVM is restarted. Thus, if the IP address for an AWS resource changes while your application is still running, it can't use that resource until you manually restart the JVM and the cached IP information is refreshed. In this case, it's crucial to set the JVM's TTL so that it periodically refreshes its cached IP information.

You can get the JVM default TTL by retrieving the `networkaddress.cache.ttl` property value:

```
String ttl = java.security.Security.getProperty("networkaddress.cache.ttl");
```

**Note**  
The default TTL can vary according to the version of your JVM and whether a security manager is installed. Many JVMs provide a default TTL less than 60 seconds. If you're using such a JVM and not using a security manager, you can ignore the rest of this topic.   
To modify the JVM's TTL, set the `networkaddress.cache.ttl` property value. Use one of the following methods, depending on your needs:  
To set the property value globally for all applications that use the JVM, set `networkaddress.cache.ttl` in the `$JAVA_HOME/jre/lib/security/java.security` file.  

  ```
  networkaddress.cache.ttl=60 
  ```
To set the property locally for your application only, set `networkaddress.cache.ttl` in your application's initialization code before any network connections are established.  

  ```
  java.security.Security.setProperty("networkaddress.cache.ttl" , "60");
  ```

# Setup to view InfluxDB logs on Timestream for InfluxDB instances
<a name="timestream-for-influx-managing-view-influx-logs"></a>

By default, InfluxDB generates logs that go to stdout. For more information, see [Manage InfluxDB logs](https://docs.influxdata.com/influxdb/v2/admin/logs).

To view InfluxDB logs generated by an instance you have created through Timestream for InfluxDB, you can enable hourly log delivery. These logs are sent to an Amazon S3 bucket that you must create before creating your instance. 
+ Before creating the instance, the provided Amazon S3 bucket must also give Timestream for InfluxDB permission to send logs to it. Attach a bucket policy with the Timestream for InfluxDB service principal as follows (replace *BUCKET_NAME* with the actual name of your Amazon S3 bucket):

------
#### [ JSON ]


  ```
  {
      "Version": "2012-10-17",
      "Id": "PolicyForInfluxLogs",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": {
                  "Service": "timestream-influxdb.amazonaws.com"
              },
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::{BUCKET_NAME}/InfluxLogs/*"
          }
      ]
  }
  ```

------
+ The bucket provided must be in the same account and the same Region as your Timestream for InfluxDB instance.

  Here is an example command you can run to create an instance that receives InfluxDB logs:

  ```
  aws timestream-influxdb create-db-instance \
      --name myinfluxDbinstance \
      --allocated-storage 400 \
      --db-instance-type db.influx.4xlarge \
      --vpc-subnet-ids subnetid1 subnetid2 \
      --vpc-security-group-ids mysecuritygroup \
      --username masterawsuser \
      --password mymasterpassword \
      --db-storage-type InfluxIOIncludedT2 \
      --log-delivery-configuration '{"S3Configuration": {"BucketName": "mybucketname", "Enabled": true}}'
  ```

  Here is the format of the `--log-delivery-configuration` parameter.

  ```
  --log-delivery-configuration
  {
      "S3Configuration": {
        "BucketName": "string",
        "Enabled": true|false
      }
  }
  ```
+ This field is not required, and logging is not enabled by default. Not setting this field is the same as not having logs enabled.
+ Logs are sent to the specified bucket with a prefix of `InfluxLogs/`.
+ After creating the instance, you can modify the log delivery configuration with the `update-db-instance` API command.
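
As an illustrative sketch (the function names and the `mybucketname` placeholder are ours, not part of the service), the bucket policy and the log delivery configuration shown above can be generated programmatically:

```python
import json

def influx_log_bucket_policy(bucket_name):
    """Bucket policy allowing Timestream for InfluxDB to write logs
    under the InfluxLogs/ prefix of the given bucket."""
    return {
        "Version": "2012-10-17",
        "Id": "PolicyForInfluxLogs",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "timestream-influxdb.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/InfluxLogs/*",
            }
        ],
    }

def log_delivery_configuration(bucket_name, enabled=True):
    """Value for the --log-delivery-configuration parameter."""
    return {"S3Configuration": {"BucketName": bucket_name, "Enabled": enabled}}

# Render both as JSON for use with the AWS CLI or the console.
print(json.dumps(influx_log_bucket_policy("mybucketname"), indent=4))
print(json.dumps(log_delivery_configuration("mybucketname")))
```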

InfluxDB offers different types of logs, which can be configured by setting InfluxDB parameters. Use the `flux-log-enabled` and `log-level` parameters to configure the types of logs that are emitted from the instance. For more information, see [Supported parameters and parameter values](timestream-for-influx-db-connecting.md#timestream-for-influx-parameter-groups-overview-supported-parameters). 

# Monitoring and Configuration Optimization for Timestream for InfluxDB 2
<a name="timestream-for-influx-monitoring-configuration-optimization"></a>

## Overview
<a name="monitoring-overview"></a>

Effective monitoring and configuration optimization are critical for maintaining optimal performance, reliability, and cost-efficiency in your Timestream for InfluxDB deployment. This guide provides comprehensive guidance on CloudWatch metrics, performance thresholds, and configuration tuning strategies to help you proactively manage your InfluxDB instances.

## CloudWatch Metrics Reference
<a name="cloudwatch-metrics-reference"></a>

Amazon CloudWatch provides detailed metrics for monitoring your Timestream for InfluxDB instances. Understanding these metrics and their thresholds is essential for maintaining system health and performance.

### Resource Utilization Metrics
<a name="resource-utilization-metrics"></a>


| CloudWatch Metric Name | Dimensions | Description | Unit | Recommended Thresholds | 
| --- | --- | --- | --- | --- | 
| CPUUtilization | DbInstanceName | Percentage of CPU being used | Percent |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 
| MemoryUtilization | DbInstanceName | Percentage of memory being used | Percent |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 
| HeapMemoryUsage | DbInstanceName | Amount of heap memory in use | Bytes |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 
| ActiveMemoryAllocation | DbInstanceName | Current active memory allocation | Bytes |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 
| DiskUtilization | DbInstanceName | Percentage of disk space being used | Percent |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 

### I/O Operations Metrics
<a name="io-operations-metrics"></a>


| CloudWatch Metric Name | Dimensions | Description | Unit | Recommended Thresholds | 
| --- | --- | --- | --- | --- | 
| ReadOpsPerSec | DbInstanceName | Number of read operations per second | Count/Second | Maintain ≥ 30% headroom below provisioned IOPS. Example: 12K IOPS → keep < 8,400 IOPS total | 
| WriteOpsPerSec | DbInstanceName | Number of write operations per second | Count/Second | Maintain ≥ 30% headroom below provisioned IOPS. Example: 12K IOPS → keep < 8,400 IOPS total | 
| TotalIOpsPerSec | DbInstanceName | Total I/O operations per second (read + write) | Count/Second | Maintain ≥ 30% headroom below provisioned IOPS. Monitor against instance class capabilities | 

### Throughput Metrics
<a name="throughput-metrics"></a>


| CloudWatch Metric Name | Dimensions | Description | Unit | Recommended Thresholds | 
| --- | --- | --- | --- | --- | 
| ReadThroughput | DbInstanceName | Data read throughput | Bytes/Second | Monitor against storage throughput limits | 
| WriteThroughput | DbInstanceName | Data write throughput | Bytes/Second | Monitor against storage throughput limits | 

### API Performance Metrics
<a name="api-performance-metrics"></a>


| CloudWatch Metric Name | Dimensions | Description | Unit | Recommended Thresholds | 
| --- | --- | --- | --- | --- | 
| APIRequestRate | DbInstanceName, Endpoint, Status | Rate of API requests to specific endpoints with status codes (2xx, 4xx, 5xx) | Count/Second |  Error rates: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 
| QueryResponseVolume | DbInstanceName, Endpoint, Status | Volume of query responses by endpoint and status code | Bytes |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 

### Query Execution Metrics
<a name="query-execution-metrics"></a>


| CloudWatch Metric Name | Dimensions | Description | Unit | Recommended Thresholds | 
| --- | --- | --- | --- | --- | 
| QueryRequestsTotal | DbInstanceName, Result | Total count of query requests by result type (success, runtime_error, compile_error, queue_error) | Count |  Success rate: > 99%. Error rates: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 

### Data Organization Metrics
<a name="data-organization-metrics"></a>


| CloudWatch Metric Name | Dimensions | Description | Unit | Critical Thresholds | 
| --- | --- | --- | --- | --- | 
| SeriesCardinality | DbInstanceName, Bucket | Number of unique time series in a bucket | Count |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 
| TotalBuckets | DbInstanceName | Total number of buckets in the instance | Count |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-monitoring-configuration-optimization.html)  | 

### System Health Metrics
<a name="system-health-metrics"></a>


| CloudWatch Metric Name | Dimensions | Description | Unit | Recommended Thresholds | 
| --- | --- | --- | --- | --- | 
| EngineUptime | DbInstanceName | Time the InfluxDB engine has been running | Seconds | Monitor for unexpected restarts. Alert: uptime resets unexpectedly | 
| WriteTimeouts | DbInstanceName | Number of write operations that timed out | Count | Alert: > 0.1% of write operations. Critical: increasing trend | 

### Task Management Metrics
<a name="task-management-metrics"></a>


| CloudWatch Metric Name | Dimensions | Description | Unit | Recommended Thresholds | 
| --- | --- | --- | --- | --- | 
| ActiveTaskWorkers | DbInstanceName | Number of active task workers | Count | Monitor against the configured task worker limit. Alert: consistently at maximum | 
| TaskExecutionFailures | DbInstanceName | Number of failed task executions | Count | Alert: > 1% of task executions. Critical: increasing failure rate | 

### Understanding Key Metric Relationships
<a name="understanding-key-metric-relationships"></a>

#### IOPS and Throughput Relationship
<a name="iops-throughput-relationship"></a>

**The 30% Headroom Rule:** Always maintain at least **30% headroom** between your sustained operations per second and your provisioned IOPS. This provides buffer for:
+ Compaction operations (can spike IOPS significantly)
+ Database restarts, so they run smoothly
+ Query bursts during peak usage
+ Write spikes from batch ingestion
+ Index maintenance operations

**Example Calculation:**
+ Provisioned IOPS: 12,000
+ Target Maximum Sustained IOPS (TotalIOpsPerSec): 8,400 (70% utilization)
+ Reserved Headroom: 3,600 IOPS (30%)

If TotalIOpsPerSec consistently exceeds 8,400: → Upgrade storage tier or optimize workload

**Monitoring Formula:**

IOPS Utilization % = (ReadOpsPerSec + WriteOpsPerSec) / Provisioned IOPS × 100
+ Target: Keep IOPS Utilization < 70%
+ Warning: IOPS Utilization > 70%
+ Critical: IOPS Utilization > 90%
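
The formula and thresholds above can be expressed as a small helper (a sketch; the function names are ours, not CloudWatch's):

```python
def iops_utilization_pct(read_ops_per_sec, write_ops_per_sec, provisioned_iops):
    """IOPS Utilization % = (ReadOpsPerSec + WriteOpsPerSec) / Provisioned IOPS x 100."""
    return (read_ops_per_sec + write_ops_per_sec) / provisioned_iops * 100

def iops_status(utilization_pct):
    """Classify utilization against the target/warning/critical thresholds above."""
    if utilization_pct > 90:
        return "critical"
    if utilization_pct > 70:
        return "warning"
    return "ok"

# Example from the text: 12,000 provisioned IOPS with 8,400 sustained -> 70% (at target).
print(iops_status(iops_utilization_pct(5000, 3400, 12000)))
```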

### Understanding Series Cardinality Performance Impact
<a name="series-cardinality-performance-impact"></a>

Series cardinality has a multiplicative effect on system resources:


| **Series Count** | **Memory Impact** | **Query Performance Impact** | **Index Size Impact** | **Recommendation** | 
| --- | --- | --- | --- | --- | 
| < 100K | Minimal | Negligible | Small | Standard configuration | 
| 100K - 1M | Moderate | 10-20% slower | Medium | Tune cache settings | 
| 1M - 5M | Significant | 30-50% slower | Large | Aggressive optimization required | 
| 5M - 10M | High | 50-70% slower | Very Large | Maximum tuning, consider redesign | 
| > 10M | Severe | 70%+ slower | Excessive | Migrate to InfluxDB 3.0 | 

**Why 10M is the Critical Threshold:**
+ InfluxDB 2.x architecture uses in-memory indexing
+ Beyond 10M series, index operations become prohibitively expensive
+ Memory requirements grow non-linearly
+ Query planning overhead increases dramatically
+ InfluxDB 3.0 uses a columnar storage engine designed for high cardinality
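
The cardinality table above reduces to a simple lookup; this sketch (our own helper, using the table's Recommendation labels) shows the boundaries explicitly:

```python
def cardinality_recommendation(series_count):
    """Return the recommendation tier for a given series cardinality,
    following the thresholds in the table above."""
    if series_count < 100_000:
        return "Standard configuration"
    if series_count < 1_000_000:
        return "Tune cache settings"
    if series_count < 5_000_000:
        return "Aggressive optimization required"
    if series_count <= 10_000_000:
        return "Maximum tuning, consider redesign"
    return "Migrate to InfluxDB 3.0"
```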

## Instance Sizing and Performance Guidelines
<a name="instance-sizing-guidelines"></a>

The following table provides guidance on appropriate instance sizing based on your series cardinality and workload characteristics:


| **Max Series Count** | **Writes (lines/sec)** | **Reads (queries/sec)** | **Recommended Instance** | **Storage Type** | **Use Case** | 
| --- | --- | --- | --- | --- | --- | 
| < 100K | ≤ 50,000 | < 10 | db.influx.large | Influx IO Included 3K | Small deployments, development, testing | 
| < 1M | ≤ 150,000 | < 25 | db.influx.2xlarge | Influx IO Included 3K | Small to medium production workloads | 
| ≤ 1M | ≤ 200,000 | ≤ 25 | db.influx.4xlarge | Influx IO Included 3K | Medium production workloads | 
| < 5M | ≤ 250,000 | ≤ 35 | db.influx.4xlarge | Influx IO Included 12K | Large production workloads | 
| < 10M | ≤ 500,000 | ≤ 50 | db.influx.8xlarge | Influx IO Included 12K | Very large production workloads | 
| ≤ 10M | < 750,000 | < 100 | db.influx.12xlarge | Influx IO Included 12K | Maximum InfluxDB 2.x capacity | 
| > 10M | N/A | N/A | Migrate to InfluxDB 3.0 | N/A | Beyond InfluxDB 2.x optimal range | 

## Configuration Optimization by Metric
<a name="configuration-optimization-by-metric"></a>

### High CPU Utilization (CPUUtilization > 70%)
<a name="high-cpu-utilization"></a>

**Symptoms:**
+ **CPUUtilization** > 70% sustained
+ **QueryRequestsTotal** (high volume or slow queries)
+ **ActiveTaskWorkers** (high task load)

**Configuration Adjustments:**

**Priority 1: Control Query Concurrency**
+ query-concurrency: Set to 50-75% of vCPU count
+ Example: 8 vCPU instance → query-concurrency = 4-6

**Priority 2: Limit Query Complexity**
+ influxql-max-select-series: 10000 (prevent unbounded queries)
+ influxql-max-select-point: 100000000
+ query-queue-size: 2048 (prevent queue buildup)

**Priority 3: Enable Query Analysis**
+ flux-log-enabled: TRUE (temporarily for debugging)
+ log-level: info (or debug for detailed analysis)
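
The Priority 1 rule (query-concurrency at 50-75% of the vCPU count) can be sketched as a helper we made up for illustration:

```python
def query_concurrency_range(vcpus):
    """Suggested query-concurrency bounds: 50-75% of the vCPU count, at least 1."""
    low = max(1, vcpus // 2)
    high = max(low, vcpus * 3 // 4)
    return low, high

# Example from the text: an 8 vCPU instance -> query-concurrency between 4 and 6.
print(query_concurrency_range(8))
```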

**Important Considerations:**

Reducing `query-concurrency` will limit the number of queries that can execute simultaneously, which may increase queued queries and lead to higher query latency during peak periods. Users may experience slower dashboard loads or report timeouts if query demand exceeds the reduced concurrency limit.

Setting protective limits (`influxql-max-select-series`, `influxql-max-select-point`) will cause queries that exceed these thresholds to fail with **compile_error** or **runtime_error** in **QueryRequestsTotal**. While this protects the system from resource exhaustion, it may break existing queries that previously worked.

**Best Practice:** Before applying these changes, analyze your query patterns using **QueryResponseVolume** and **QueryRequestsTotal** metrics. Identify and optimize the most expensive queries first: look for queries without time range filters, queries spanning high-cardinality series, or queries requesting excessive data points. Optimizing queries at the application level is always preferable to imposing hard limits that may break functionality.

**Hardware Actions:**
+ Scale to next instance class with more vCPUs
+ Review query patterns for optimization opportunities

### High Memory Utilization (MemoryUtilization > 70%)
<a name="high-memory-utilization"></a>

**Symptoms:**
+ **MemoryUtilization** > 70% sustained
+ **HeapMemoryUsage** trending upward
+ **ActiveMemoryAllocation** showing spikes
+ **SeriesCardinality** (high cardinality increases memory usage)

**Configuration Adjustments:**

**Priority 1: Reduce Cache Memory**
+ storage-cache-max-memory-size: Set to 10-15% of total RAM
+ Example: 32GB RAM → 3,355,443,200 to 5,033,164,800 bytes
+ storage-cache-snapshot-memory-size: 26,214,400 (25MB)

**Priority 2: Limit Query Memory**
+ query-memory-bytes: Set to 60-70% of total RAM
+ query-max-memory-bytes: Same as query-memory-bytes
+ query-initial-memory-bytes: 10% of query-memory-bytes

**Priority 3: Optimize Series Cache**
+ storage-series-id-set-cache-size: Reduce if high cardinality
+ High memory: 100-200
+ Normal: 500-1000
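
As a sketch of the arithmetic above (assuming RAM is measured in GiB; the 65% query-memory figure is a midpoint we chose within the 60-70% band, and the worked example in the text rounds to MiB multiples, so its byte values differ slightly):

```python
GIB = 1024 ** 3

def pct_of_ram_bytes(total_ram_gib, pct):
    """pct% of total RAM, in whole bytes."""
    return total_ram_gib * GIB * pct // 100

def memory_tuning_starting_points(total_ram_gib):
    """Starting values per the priorities above; exact splits are judgment calls."""
    query_mem = pct_of_ram_bytes(total_ram_gib, 65)
    return {
        "storage-cache-max-memory-size": (pct_of_ram_bytes(total_ram_gib, 10),
                                          pct_of_ram_bytes(total_ram_gib, 15)),
        "query-memory-bytes": query_mem,
        "query-max-memory-bytes": query_mem,            # same as query-memory-bytes
        "query-initial-memory-bytes": query_mem // 10,  # 10% of query-memory-bytes
    }

print(memory_tuning_starting_points(32)["storage-cache-max-memory-size"])
```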

**Important Considerations:**

While these changes will reduce memory pressure, they will have a direct negative impact on application performance. Reducing `storage-cache-max-memory-size` means less data is cached in memory, forcing more disk reads and increasing query latency; you'll likely see **ReadOpsPerSec** increase and **QueryResponseVolume** response times degrade.

Limiting `query-memory-bytes` will cause memory-intensive queries to fail with **runtime_error** in **QueryRequestsTotal**, particularly queries that aggregate large datasets or return substantial result sets. Users may encounter "out of memory" errors for queries that previously succeeded.

Reducing `storage-series-id-set-cache-size` degrades performance for queries against high-cardinality data, as the system must recalculate series results more frequently instead of retrieving them from cache. This particularly impacts dashboards that repeatedly query the same series combinations.

**Best Practice:** Before applying these restrictive changes, analyze your query patterns and optimize them first:
+ Review **QueryResponseVolume** to identify queries returning excessive data
+ Use **QueryRequestsTotal** to find frequently executed queries that could benefit from optimization
+ Add time range filters to reduce data scanning to what's necessary for your workload
+ Implement query result caching at the application level
+ Consider pre-aggregating data using downsampling tasks
+ Review **SeriesCardinality** and optimize your data model to reduce unnecessary tags

Query optimization should always be your first approach - configuration restrictions should be a last resort when optimization isn't sufficient.

**Hardware Actions:**
+ Increase instance size for more RAM

### High Storage Utilization (DiskUtilization > 70%)
<a name="high-storage-utilization"></a>

**CloudWatch Metrics to Monitor:**
+ **DiskUtilization** > 70%
+ **WriteThroughput** patterns
+ **TotalBuckets** (many buckets increase overhead)

**Configuration Adjustments:**

**Priority 1: Check Logging Configuration**
+ log-level: Ensure set to "info" (not "debug")
+ flux-log-enabled: Set to FALSE unless actively debugging

**Priority 2: Aggressive Retention**
+ storage-retention-check-interval: 15m0s (more frequent cleanup)

**Priority 3: Optimize Compaction**
+ storage-compact-full-write-cold-duration: 2h0m0s (more frequent)
+ storage-cache-snapshot-write-cold-duration: 5m0s

**Priority 4: Reduce Index Size**
+ storage-max-index-log-file-size: 524,288 (512KB for faster compaction)

**Important Considerations:**

**Critical First Step - Check Your Logging Configuration:** Before making any other changes, verify your logging settings. **Debug logging and Flux query logs can consume as much or more disk space than your actual time-series data**, and this is one of the most common causes of unexpected storage exhaustion.

**Logging Impact:**
+ `log-level: debug` generates extremely verbose logs, potentially hundreds of MB per hour
+ `flux-log-enabled: TRUE` logs every Flux query execution with full details, creating massive log files
+ These logs accumulate rapidly and are often overlooked during capacity planning
+ Log files can fill disk space faster than data ingestion, especially on smaller instances
+ Unlike time-series data, logs are kept in local storage for 24 hours before deletion

**Immediate Actions if Logs are Large:**

1. Set `log-level: info` (from debug)

1. Set `flux-log-enabled: FALSE`

1. Monitor **DiskUtilization** for immediate improvement

**Compaction Configuration Trade-offs:**

These configuration changes are specifically designed for workloads with **high ingestion throughput and short retention windows** where disk usage fluctuates substantially. They force the compaction engine to work more aggressively, which is only beneficial in specific scenarios.

**Critical Trade-offs:** Increasing compaction frequency will significantly increase resource consumption:
+ **CPUUtilization** will rise as compaction operations consume CPU cycles
+ **MemoryUtilization** will increase during compaction as data is loaded and processed
+ **WriteOpsPerSec** and **WriteThroughput** will spike during compaction windows, potentially exceeding your 30% IOPS headroom
+ **WriteTimeouts** may increase if compaction I/O competes with application writes

These changes can create a cascading performance problem where aggressive compaction consumes resources needed for query and write operations, degrading overall system performance even while reducing disk usage.

**Best Practice:** Before adjusting compaction settings, focus on data and logging management:

1. **Check Logging First (Most Common Issue):** Verify log-level is "info" and flux-log-enabled is FALSE

1. **Review Your Data Model:** Are you writing data you don't actually need? Can you reduce measurement or field granularity?

1. **Optimize Retention Policies:** Check **TotalBuckets** and review retention settings for each bucket

1. **Monitor Compaction Impact:** Baseline your **CPUUtilization**, **MemoryUtilization**, and **WriteOpsPerSec** before changes

**Alternative Approaches:**
+ Increase storage capacity (often simpler and more cost-effective)
+ Implement data downsampling or aggregation strategies
+ Consolidate buckets (reduce **TotalBuckets**) to decrease overhead
+ Review and enforce retention policies more strictly

Only apply aggressive compaction settings if you've optimized data management and confirmed your instance has sufficient CPU, memory, and IOPS headroom to handle the increased load.

**Hardware Actions:**
+ Increase storage capacity

### High IOPS Utilization (ReadOpsPerSec/WriteOpsPerSec/TotalIOpsPerSec > 70% of provisioned)
<a name="high-iops-utilization"></a>

**CloudWatch Metrics to Monitor:**
+ **ReadOpsPerSec** + **WriteOpsPerSec** = **TotalIOpsPerSec**
+ **ReadThroughput** and **WriteThroughput**
+ Compare against provisioned IOPS (3K, 12K, or 16K)

**Configuration Adjustments:**

**Priority 1: Control Compaction I/O**
+ storage-max-concurrent-compactions: 2-3 (limit concurrent compactions)
+ storage-compact-throughput-burst: Adjust based on disk capability
+ 3K IOPS: 25,165,824 (24MB/s)
+ 12K IOPS: 50,331,648 (48MB/s)

**Priority 2: Optimize Write Operations**
+ storage-wal-max-concurrent-writes: 8-12
+ storage-wal-max-write-delay: 5m0s

**Priority 3: Adjust Snapshot Timing**
+ storage-cache-snapshot-write-cold-duration: 15m0s (less frequent)
+ storage-compact-full-write-cold-duration: 6h0m0s (less frequent)
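
The Priority 1 burst values are MiB multiples; a lookup (our own sketch, covering only the two tiers listed above) makes the mapping explicit:

```python
MIB = 1024 ** 2

# storage-compact-throughput-burst suggestions (bytes/second) by provisioned IOPS,
# mirroring the Priority 1 guidance above.
COMPACT_THROUGHPUT_BURST = {
    3_000: 24 * MIB,   # 24 MB/s
    12_000: 48 * MIB,  # 48 MB/s
}

def compact_throughput_burst(provisioned_iops):
    """Suggested storage-compact-throughput-burst for a provisioned IOPS tier."""
    return COMPACT_THROUGHPUT_BURST[provisioned_iops]
```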

**Important Considerations:**

These changes create significant trade-offs between I/O utilization and system performance:

**Limiting Compaction I/O:**
+ Reducing `storage-max-concurrent-compactions` will slow down compaction operations, causing TSM files to accumulate and **DiskUtilization** to increase more rapidly
+ Lower `storage-compact-throughput-burst` extends compaction duration, keeping the compactor active longer and potentially blocking other operations
+ Slower compaction means query performance degrades over time as the storage engine must read from more, smaller TSM files instead of consolidated ones
+ You may see **QueryRequestsTotal** runtime_error rates increase as queries time out while waiting for I/O

**Reducing Snapshot Frequency:**
+ Increasing `storage-cache-snapshot-write-cold-duration` and `storage-compact-full-write-cold-duration` means data stays in the write-ahead log (WAL) longer
+ This increases **MemoryUtilization** as more data is held in cache before being flushed to disk
+ Risk of data loss increases slightly if the instance crashes before cached data is persisted
+ Recovery time after a restart increases as more WAL data must be replayed

**Write Operation Tuning:**
+ Reducing `storage-wal-max-concurrent-writes` will serialize write operations more, potentially increasing **WriteTimeouts** during high-throughput periods
+ Increasing `storage-wal-max-write-delay` means writes may wait longer before being rejected, which can mask capacity problems but frustrate users with slow responses

**Best Practice:** High IOPS utilization usually indicates that you've outgrown your storage tier rather than a configuration problem. Analyze your I/O patterns and optimize the workload before restricting I/O.

**Hardware Actions:**
+ Upgrade to higher IOPS storage tier (3K → 12K)
+ Ensure 30% IOPS headroom is maintained

### High Series Cardinality (SeriesCardinality > 1M)
<a name="high-series-cardinality"></a>

**CloudWatch Metrics to Monitor:**
+ **SeriesCardinality** per bucket and total
+ **MemoryUtilization** (increases with cardinality)
+ **CPUUtilization** (query planning overhead)
+ **QueryRequestsTotal** (runtime_error rate may increase)

**Configuration Adjustments:**

**Priority 1: Optimize Series Handling**
+ storage-series-id-set-cache-size: 1000-2000 (increase cache)
+ storage-series-file-max-concurrent-snapshot-compactions: 4-8

**Priority 2: Set Protective Limits**
+ influxql-max-select-series: 10000 (prevent runaway queries)
+ influxql-max-select-buckets: 1000

**Priority 3: Optimize Index Operations**
+ storage-max-index-log-file-size: 2,097,152 (2MB)

**Important Considerations:**

High series cardinality is fundamentally a data modeling problem, not a configuration problem. Configuration changes can only mitigate symptoms; they cannot solve the underlying issue.

**Configuration Trade-offs:**

Increasing `storage-series-id-set-cache-size` will improve query performance by caching series lookups, but at the cost of increased **MemoryUtilization**. Each cache entry consumes memory, and with millions of series, this can be substantial. Monitor **HeapMemoryUsage** and **ActiveMemoryAllocation** after making this change.

Setting protective limits (`influxql-max-select-series`, `influxql-max-select-buckets`) will cause legitimate queries to fail with **compile_error** in **QueryRequestsTotal** if they exceed these thresholds. Dashboards that previously worked may break, and users will need to modify their queries. This is particularly problematic for:
+ Monitoring dashboards that aggregate across many hosts/services
+ Analytics queries that need to compare multiple entities
+ Alerting queries that evaluate fleet-wide conditions

Adjusting `storage-max-index-log-file-size` to smaller values increases index compaction frequency, which raises **CPUUtilization** and **WriteOpsPerSec** as the system performs more frequent index maintenance.

**Critical Understanding:**

When **SeriesCardinality** exceeds 5M, you're approaching the architectural limits of InfluxDB 2.x. At 10M+ series, performance degrades exponentially regardless of configuration:
+ Query planning becomes prohibitively expensive (high **CPUUtilization**)
+ Memory requirements grow non-linearly (high **MemoryUtilization**)
+ Index operations dominate I/O (**ReadOpsPerSec**, **WriteOpsPerSec**)
+ **QueryRequestsTotal** runtime_error rates increase as queries time out or exhaust memory

**Best Practice:** Configuration changes are temporary band-aids. You must address the root cause:

1. **Analyze Your Data Model:**
   + Review **SeriesCardinality** per bucket to identify problem areas
   + Identify which tags have high unique value counts
   + Look for unbounded tag values (UUIDs, timestamps, user IDs, session IDs)
   + Find tags that should be fields instead

**Data Model Actions:**
+ Review tag design to reduce unnecessary cardinality
+ Consider consolidating similar series
+ **If > 10M series:** Plan migration to InfluxDB 3.0

### Query Performance Issues
<a name="query-performance-issues"></a>

**CloudWatch Metrics to Monitor:**
+ **QueryRequestsTotal** by result type (success, runtime_error, compile_error, queue_error)
+ **APIRequestRate** with Status=500 or Status=499
+ **QueryResponseVolume** (large responses indicate expensive queries)

**Configuration Adjustments:**

**Priority 1: Increase Query Resources**
+ query-concurrency: Increase to 75% of vCPUs
+ query-memory-bytes: Allocate 70% of total RAM
+ query-queue-size: 4096

**Priority 2: Optimize Query Execution**
+ storage-series-id-set-cache-size: 1000 (increase for better caching)
+ http-read-timeout: 60s (prevent premature timeouts)

**Priority 3: Set Reasonable Limits**
+ influxql-max-select-point: 100000000
+ influxql-max-select-series: 10000
+ influxql-max-select-buckets: 1000

**Important Considerations:**

Increasing query resources creates resource competition and potential system instability:

**Resource Allocation Trade-offs:**

Increasing `query-concurrency` allows more queries to run simultaneously, but each query competes for CPU and memory:
+ **CPUUtilization** will increase, potentially reaching saturation during peak query periods
+ **MemoryUtilization** will rise as more queries allocate memory simultaneously
+ If you increase concurrency without adequate resources, all queries slow down instead of just some queuing
+ Risk of cascading failure if concurrent queries exhaust available resources

Allocating more `query-memory-bytes` means less memory available for caching and other operations:
+ **HeapMemoryUsage** will increase
+ `storage-cache-max-memory-size` may need to be reduced to compensate
+ Fewer cache hits means higher **ReadOpsPerSec** and slower query performance
+ System becomes more vulnerable to memory exhaustion if queries use their full allocation

Increasing `query-queue-size` only delays the problem; it doesn't solve capacity issues:
+ Queries wait longer in queue, increasing end-to-end latency
+ Users perceive the system as slower even though throughput may be unchanged
+ Large queues can mask underlying capacity problems
+ **QueryRequestsTotal** queue_error rate decreases, but user experience may not improve

Increasing `http-read-timeout` prevents premature query cancellation, but:
+ Long-running queries consume resources longer, reducing capacity for other queries
+ Users wait longer before receiving timeout errors
+ Can hide inefficient queries that should be optimized
+ May lead to resource exhaustion if many slow queries accumulate

**Best Practice:** Query performance problems are usually caused by inefficient queries, not insufficient resources. Before increasing resource allocation:

1. **Analyze Query Patterns:**
   + Review **QueryResponseVolume** to identify queries returning excessive data (> 1MB)
   + Check **QueryRequestsTotal** runtime_error patterns - what's causing failures?
   + Look for **APIRequestRate** with Status=499 (client timeouts) - queries are too slow
   + Identify frequently executed expensive queries

1. **Optimize Queries First:**

   Common Query Anti-patterns:
   + Missing time range filters → Add explicit time bounds
   + Querying all series → Add specific tag filters
   + Excessive aggregation windows → Use appropriate intervals
   + Unnecessary fields in SELECT → Request only needed data
   + No LIMIT clauses → Add reasonable limits

1. **Application-Level Solutions:**
   + Implement query result caching (Redis, Memcached)
   + Use tasks to pre-aggregate common patterns
   + Add pagination for large result sets
   + Implement query rate limiting per user/dashboard
   + Use downsampled data for historical queries

1. **Verify Resource Availability:**
   + Check **CPUUtilization** - if already > 70%, increasing concurrency will make things worse
   + Check **MemoryUtilization** - if already > 70%, allocating more query memory will cause OOM
   + Verify **TotalIOpsPerSec** has 30% headroom before increasing query load
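The query result caching suggested in step 3 can be sketched as a small TTL cache. This is a minimal in-process stand-in for Redis or Memcached, and `run_query` is a placeholder for your InfluxDB client's query call:

```python
import time

class QueryResultCache:
    """Minimal TTL cache for query results (in-process stand-in for Redis/Memcached)."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # query string -> (expiry timestamp, result)

    def get(self, query):
        entry = self._store.get(query)
        if entry is None:
            return None
        expires_at, result = entry
        if time.monotonic() >= expires_at:
            del self._store[query]  # expired: force a fresh fetch
            return None
        return result

    def put(self, query, result):
        self._store[query] = (time.monotonic() + self.ttl, result)

def cached_query(cache, run_query, flux):
    """Return a cached result when fresh; otherwise execute the query and cache it."""
    result = cache.get(flux)
    if result is None:
        result = run_query(flux)
        cache.put(flux, result)
    return result
```

Set the TTL based on data freshness requirements, and monitor cache hit rates as noted above.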

**Recommended Approach:**

1. Start by optimizing the top 10 most expensive queries (by **QueryResponseVolume**)

1. Implement query result caching at the application level

1. Only increase resource allocation if queries are optimized and metrics show headroom

1. Scale to a larger instance class if workload has outgrown current capacity

**Hardware Actions:**
+ Scale your compute capacity; queries benefit from extra processing power (vCPUs)

#### RegEx Performance Pitfalls in Flux Queries
<a name="regex-performance-pitfalls"></a>

When filtering data in Flux, avoid using regular expressions for exact matches or simple pattern matching, as this introduces significant performance penalties. RegEx operations in Flux are **single-threaded** and **bypass the underlying TSM index entirely**. Instead of leveraging InfluxDB's optimized tag indexes for fast lookups, RegEx filters force the query engine to retrieve all matching series from storage and perform text comparisons sequentially against each value. This becomes particularly problematic when:
+ **Filtering on exact tag values** - Use the equality operator (`==`) or the `contains()` function instead of RegEx patterns like `/^exact_value$/`
+ **Matching multiple specific values** - Use the `contains()` function with a set of values rather than alternation patterns like `/(value1|value2|value3)/`
+ **Simple prefix or suffix matching** - Consider using `strings.hasPrefix()` or `strings.hasSuffix()` functions, which are more efficient than RegEx anchors

For scenarios requiring multiple pattern matches, restructure your query to use multiple filter predicates combined with logical operators, or pre-filter using tag equality before applying more complex string operations. Reserve RegEx exclusively for cases requiring true pattern matching that cannot be expressed through simpler comparison operators.

### Write Performance Issues
<a name="write-performance-issues"></a>

**CloudWatch Metrics to Monitor:**
+ **WriteTimeouts** (increasing count)
+ **WriteOpsPerSec** and **WriteThroughput**
+ **APIRequestRate** with Status=500 for write endpoints
+ **QueryRequestsTotal** with result=runtime_error during writes

**Configuration Adjustments:**

**Priority 1: Optimize WAL Writes**
+ storage-wal-max-concurrent-writes: 12-16
+ storage-wal-max-write-delay: 10m0s
+ http-write-timeout: 60s

**Priority 2: Optimize Cache Snapshots**
+ storage-cache-snapshot-memory-size: 52,428,800 (50MB)
+ storage-cache-snapshot-write-cold-duration: 10m0s

**Priority 3: Control Field Validation**
+ storage-no-validate-field-size: TRUE (if data source is trusted)

**Important Considerations:**

Write performance tuning involves careful trade-offs between throughput, reliability, and resource consumption:

**WAL Configuration Trade-offs:**

Increasing `storage-wal-max-concurrent-writes` allows more parallel write operations, but:
+ **CPUUtilization** increases as more write threads compete for CPU
+ **MemoryUtilization** rises as more data is buffered in memory before WAL flush
+ **WriteOpsPerSec** will spike, potentially exceeding your 30% IOPS headroom
+ Increased contention for disk I/O may actually slow down individual writes
+ If you exceed disk I/O capacity, **WriteTimeouts** may increase rather than decrease

Increasing `storage-wal-max-write-delay` means writes wait longer before timing out:
+ Masks capacity problems by making writes wait instead of failing quickly
+ Users experience slower write response times even when writes eventually succeed
+ Can lead to write queue buildup and memory pressure
+ Doesn't actually increase capacity - just delays the timeout

Increasing `http-write-timeout` similarly delays timeout errors:
+ Allows larger batch writes to complete
+ But also allows slow writes to consume resources longer
+ Can hide underlying performance problems
+ May lead to resource exhaustion if many slow writes accumulate

**Cache Snapshot Trade-offs:**

Increasing `storage-cache-snapshot-memory-size` means more data accumulates in memory before flushing:
+ **MemoryUtilization** increases significantly
+ Risk of data loss increases if instance crashes before snapshot
+ Larger snapshots take longer to write, creating bigger **WriteOpsPerSec** spikes
+ Can improve write throughput by batching more data, but at cost of memory and reliability

Increasing `storage-cache-snapshot-write-cold-duration` delays snapshots:
+ Further increases **MemoryUtilization** as data stays in cache longer
+ Increases data loss risk window
+ Reduces **WriteOpsPerSec** frequency but creates larger spikes when snapshots occur
+ Recovery time after restart increases as more WAL must be replayed

**Field Validation Trade-off:**

Setting `storage-no-validate-field-size: TRUE` disables field size validation:
+ Improves write throughput by skipping validation checks
+ **Critical Risk:** Allows malformed or malicious data to be written
+ Can lead to data corruption if writes contain invalid field sizes
+ Makes debugging data problems much harder
+ **Only use if you have complete control and trust of your data source**

**Best Practice:** Write performance problems usually indicate capacity limits or inefficient write patterns. Before tuning configuration:

1. **Analyze Write Patterns:**
   + Review **WriteThroughput** and **WriteOpsPerSec** trends
   + Check **WriteTimeouts** correlation with write load
   + Monitor **APIRequestRate** for write endpoints by status code
   + Identify write batch sizes and frequency

1. **Optimize Write Operations First:**

   Common Write Anti-patterns:
   + Writing individual points → Batch writes (5,000-10,000 points)
   + Too-frequent writes → Buffer and batch
   + Synchronous writes → Implement async write queues
   + Unbounded write bursts → Implement rate limiting
   + Writing unnecessary precision → Round timestamps appropriately

1. **Verify I/O Capacity:**
   + Check **TotalIOpsPerSec** - if already > 70%, increasing WAL concurrency will make things worse
   + Review **WriteOpsPerSec** during peak periods
   + Ensure 30% IOPS headroom exists before tuning write settings
   + Consider whether 3K IOPS is sufficient or if 12K IOPS tier is needed

1. **Application-Level Improvements:**
   + Implement write buffering with configurable batch sizes
   + Add write retry logic with exponential backoff
   + Use asynchronous write operations
   + Implement write rate limiting during peak periods
   + Monitor write queue depth and apply backpressure
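The batching fix above (5,000-10,000 points per batch) can be sketched as a small buffer that flushes full batches in a single request. `flush_fn` is a placeholder for your InfluxDB client's write call:

```python
class BatchingWriter:
    """Buffer line-protocol points and flush them in batches via flush_fn."""

    def __init__(self, flush_fn, batch_size=5000):
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self._buffer = []

    def add(self, line):
        self._buffer.append(line)
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self._buffer:
            # One request per batch: lines joined by newlines, per the line protocol
            self.flush_fn("\n".join(self._buffer))
            self._buffer = []
```

In production you would also flush on a timer so partially filled batches don't sit in memory indefinitely.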

**Recommended Approach:**

1. Start by optimizing write batch sizes at the application level (aim for 5,000-10,000 points per batch)

1. Implement write buffering and async operations

1. Verify **TotalIOpsPerSec** has adequate headroom

1. Upgrade to the next storage tier (3K IOPS → 12K IOPS → 16K IOPS) if consistently above 70% utilization

1. Only tune WAL settings if writes are optimized and I/O capacity is adequate

1. **Never** disable field validation unless you have complete control of data sources
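The retry logic with exponential backoff recommended in the application-level improvements can be sketched as follows. `write_fn` is a placeholder for your client's write call, and the delay parameters are illustrative:

```python
import random
import time

def write_with_backoff(write_fn, payload, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry a failing write, doubling the delay each attempt with jitter."""
    for attempt in range(max_attempts):
        try:
            return write_fn(payload)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Exponential growth plus jitter avoids synchronized retry storms
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            sleep(delay)
```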

**Hardware Actions:**
+ Upgrade to higher IOPS storage (3K → 12K → 16K)
+ Ensure I/O headroom is adequate
+ Scale to larger instance class if CPU or memory constrained
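The headroom checks above reduce to simple arithmetic. A minimal sketch, assuming the 30% headroom guideline and the 3K/12K/16K storage tiers named in this section:

```python
def has_iops_headroom(total_iops_per_sec, provisioned_iops, headroom=0.30):
    """True when current I/O leaves the recommended 30% headroom."""
    return total_iops_per_sec <= provisioned_iops * (1.0 - headroom)

def next_storage_tier(provisioned_iops, tiers=(3000, 12000, 16000)):
    """Return the next tier above the current one, or None if already at the top."""
    for tier in tiers:
        if tier > provisioned_iops:
            return tier
    return None
```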

## Monitoring Best Practices
<a name="monitoring-best-practices"></a>

### CloudWatch Alarms Configuration
<a name="cloudwatch-alarms-configuration"></a>

**Critical Alarms (Immediate Action Required):**

**CPUUtilization:**
+ Threshold: > 90% for 5 minutes
+ Action: Implement traffic remediation measures or Compute Scaling

**MemoryUtilization:**
+ Threshold: > 90% for 5 minutes
+ Action: Implement traffic remediation measures or Compute Scaling

**DiskUtilization:**
+ Threshold: > 85%
+ Action: Try to free up space by deleting old buckets or updating retention configurations, or scale storage

**TotalIOpsPerSec:**
+ Threshold: > 90% of provisioned for 10 minutes
+ Action: Implement traffic remediation measures or Increase IOPS

**SeriesCardinality:**
+ Threshold: > 10,000,000
+ Action: Review your data model; if no changes are possible, explore migrating to InfluxDB 3 or sharding your data

**EngineUptime:**
+ Threshold: Unexpected reset (< 300 seconds)
+ Action: Check whether it coincides with a maintenance window; if not, open a ticket with Timestream support.

**Warning Alarms (Investigation Required):**

**CPUUtilization:**
+ Threshold: > 70% for 15 minutes
+ Action: review changes in workload or traffic

**MemoryUtilization:**
+ Threshold: > 70% for 15 minutes
+ Action: review changes in workload or traffic

**DiskUtilization:**
+ Threshold: > 70%
+ Action: Review retention policies

**TotalIOpsPerSec:**
+ Threshold: > 70% of provisioned for 15 minutes
+ Action: review changes in workload or traffic

**QueryRequestsTotal (runtime_error):**
+ Threshold: > 1% of total queries
+ Action: review changes in workload or traffic

**WriteTimeouts:**
+ Threshold: > 1% of write operations
+ Action: review changes in workload or traffic

**SeriesCardinality:**
+ Threshold: > 5,000,000
+ Action: Review data model optimization
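As a sketch of wiring one of the warning alarms above into CloudWatch with boto3: the threshold and periods come from the table above, but the metric namespace and dimension name here are assumptions - verify them against the metrics your instance actually publishes in the CloudWatch console.

```python
# Build the parameters for the 15-minute CPU warning alarm described above.
# NOTE: the namespace and dimension name are assumptions, not confirmed values.
def build_cpu_warning_alarm(instance_id, namespace="AWS/Timestream/InfluxDB"):
    return {
        "AlarmName": f"{instance_id}-cpu-warning",
        "Namespace": namespace,                 # assumed namespace
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "DbInstanceIdentifier", "Value": instance_id}],  # assumed dimension
        "Statistic": "Average",
        "Period": 300,                          # 5-minute evaluation periods
        "EvaluationPeriods": 3,                 # 3 x 5 minutes = "> 70% for 15 minutes"
        "Threshold": 70.0,
        "ComparisonOperator": "GreaterThanThreshold",
    }

# To create the alarm (requires AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**build_cpu_warning_alarm("my-influxdb"))
```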

### Proactive Monitoring Checklist
<a name="proactive-monitoring-checklist"></a>

**Daily:**
+ Review APIRequestRate for error spikes (400, 404, 499, 500)
+ Check QueryRequestsTotal for runtime_error and queue_error rates
+ Verify WriteTimeouts count is minimal
+ Check for any critical alarms
+ Verify EngineUptime (no unexpected restarts)

**Weekly:**
+ Review CPUUtilization, MemoryUtilization, and DiskUtilization trends
+ Analyze QueryRequestsTotal patterns by result type
+ Check SeriesCardinality growth rate per bucket
+ Review TotalIOpsPerSec utilization trends
+ Verify configuration parameters are optimal
+ Review TaskExecutionFailures patterns

**Monthly:**
+ Capacity planning review (project 3-6 months ahead)
+ Compare current metrics against sizing table
+ Review and optimize retention policies
+ Analyze query patterns from APIRequestRate and QueryResponseVolume
+ Review SeriesCardinality and data model efficiency
+ Assess need for instance scaling or configuration changes
+ Review TotalBuckets and consolidation opportunities

## Troubleshooting Guide
<a name="troubleshooting-guide"></a>

### Scenario: Sudden Performance Degradation
<a name="sudden-performance-degradation"></a>

**Investigation Steps:**

**Check Recent Changes:**
+ Configuration parameter modifications in the AWS Management Console
+ Application deployment changes
+ Query pattern changes
+ Data model modifications
+ Infrastructure changes (instance type, storage)

**Review CloudWatch Metrics:**
+ **CPU spike?** → Check CPUUtilization, QueryRequestsTotal
+ **Memory pressure?** → Check MemoryUtilization, HeapMemoryUsage, ActiveMemoryAllocation
+ **IOPS saturation?** → Check TotalIOpsPerSec, ReadOpsPerSec, WriteOpsPerSec
+ **Series cardinality jump?** → Check SeriesCardinality growth
+ **Error rate increase?** → Check QueryRequestsTotal (runtime_error), APIRequestRate (Status=500)
+ **Unexpected restart?** → Check EngineUptime

**Enable Detailed Logging:**

Configuration changes:
+ log-level: debug
+ flux-log-enabled: TRUE

Monitor for 1-2 hours, then review logs

Return to log-level: info after investigation

**Resolution Steps:**
+ Apply appropriate configuration changes based on findings
+ Scale resources if limits are reached
+ Optimize queries or data model if needed
+ Implement rate limiting if sudden load increase

### Scenario: Memory Exhaustion
<a name="memory-exhaustion"></a>

**Symptoms:**
+ MemoryUtilization > 90%
+ HeapMemoryUsage approaching maximum
+ QueryRequestsTotal showing runtime_error (out of memory)
+ APIRequestRate showing Status=500

**Resolution Steps:**

Immediate Actions (if critical):

1. Restart instance to clear memory (if safe to do so)

1. Reduce query-concurrency temporarily

1. Eliminate long-running queries if possible

Configuration Changes:

**Priority 1: Reduce Cache Memory**
+ storage-cache-max-memory-size: Reduce to 10% of RAM
+ Example: 32 GiB RAM → 3,355,443,200 bytes (3,200 MiB, roughly 10%)
+ storage-cache-snapshot-memory-size: 26,214,400 (25MB)

**Priority 2: Limit Query Memory**
+ query-memory-bytes: Set to 60% of total RAM
+ query-max-memory-bytes: Match query-memory-bytes
+ query-initial-memory-bytes: 10% of query-memory-bytes

**Priority 3: Set Protective Limits**
+ influxql-max-select-series: 10000
+ influxql-max-select-point: 100000000
+ query-concurrency: Reduce to 50% of vCPUs

**Long-Term Solutions:**
+ Optimize data model to reduce **SeriesCardinality**
+ Implement query result size limits at application level
+ Add query timeout enforcement
+ Review your most common queries to ensure they follow the best practices in [Query Performance Issues](#query-performance-issues)

### Scenario: High Series Cardinality Impact
<a name="high-series-cardinality-impact"></a>

**Review CloudWatch metrics:**
+ **SeriesCardinality** > 5M
+ **MemoryUtilization** high
+ **QueryRequestsTotal** showing increased runtime_error
+ **CPUUtilization** elevated due to query planning overhead

**Investigation Steps:**

**Analyze Cardinality Growth:**
+ SeriesCardinality growth rate (daily/weekly)
+ Projection to 10M threshold
+ Identify sources of high cardinality
+ Review tag design and usage

**Assess Performance Impact:**
+ Compare **QueryRequestsTotal** success rate before/after cardinality increase
+ Review **MemoryUtilization** correlation
+ Check **CPUUtilization** patterns
+ Analyze **QueryResponseVolume** trends

**Identify Cardinality Sources:**

Review data model:
+ Which buckets have highest SeriesCardinality?
+ Which tags have high unique value counts?
+ Are there unnecessary tags?
+ Are tag values unbounded (UUIDs, timestamps, etc.)?

**Review Current Configuration:**

Check optimization parameters:
+ storage-series-id-set-cache-size: Current value?
+ influxql-max-select-series: Is it limiting runaway queries?
+ storage-max-index-log-file-size: Appropriate for cardinality?

**Resolution Steps:**

Immediate Configuration Changes:

**Priority 1: Optimize Series Handling**
+ storage-series-id-set-cache-size: 1500-2000
+ storage-series-file-max-concurrent-snapshot-compactions: 6-8
+ storage-max-index-log-file-size: 2,097,152 (2MB)

**Priority 2: Set Protective Limits**
+ influxql-max-select-series: 10000
+ influxql-max-select-buckets: 1000
+ query-concurrency: Reduce if memory constrained

**Priority 3: Increase Resources**
+ Scale to next instance tier
+ Increase memory allocation
+ Consider 12K IOPS storage tier

**Migration Planning (if > 10M series):**
+ **InfluxDB 3.0 offers superior high-cardinality performance**
+ Plan migration timeline (2-3 months)
+ Test with subset of data first
+ Prepare application for migration
+ InfluxDB 3.0 uses columnar storage optimized for billions of series

### Scenario: Query Queue Buildup
<a name="query-queue-buildup"></a>

**Review CloudWatch metrics:**
+ **QueryRequestsTotal** with result=queue_error increasing (queries being rejected)
+ **APIRequestRate** with Status=429 or Status=503 (service unavailable/too many requests)
+ **CPUUtilization** may be elevated (> 70%) indicating resource saturation
+ **MemoryUtilization** may be high (> 70%) limiting query capacity
+ **QueryResponseVolume** showing large response sizes (queries taking excessive resources)

**Investigation Steps:**

**Analyze Queue and Concurrency Metrics:**
+ Review **QueryRequestsTotal** breakdown by result type:
  + High queue_error count indicates queries are being rejected
  + Compare success rate to baseline - is it dropping?
  + Check for runtime_error increases (queries failing after starting)
+ Monitor **APIRequestRate** patterns:
  + Look for Status=429 (too many requests) or Status=503 (service unavailable)
  + Identify which endpoints are experiencing rejections
  + Check request rate trends over time

**Review Resource Utilization:**
+ **CPUUtilization** during high queue periods:
  + If > 70%, queries are CPU-bound and can't execute faster
  + If < 50%, queue limits may be too restrictive
+ **MemoryUtilization** correlation:
  + High memory may be limiting query concurrency
  + Check **HeapMemoryUsage** and **ActiveMemoryAllocation** for memory pressure
+ **TotalIOpsPerSec** patterns:
  + High I/O may be slowing query execution
  + Check if queries are I/O bound

**Identify Query Patterns:**
+ Review **QueryResponseVolume**:
  + Are queries returning excessive data (> 1MB)?
  + Identify endpoints with largest response volumes
  + Look for patterns in expensive queries
+ Analyze **QueryRequestsTotal** rate:
  + What's the queries per second rate?
  + Are there burst patterns or sustained high load?
  + Compare to instance capacity from sizing table
+ Check **APIRequestRate** by endpoint:
  + Which query endpoints have highest traffic?
  + Are there duplicate or redundant queries?

**Check Resource Availability:**
+ Compare current metrics to sizing table recommendations:
  + **SeriesCardinality** vs. instance class capacity
  + Query rate vs. recommended queries per second
  + **CPUUtilization** and **MemoryUtilization** headroom
+ Verify IOPS capacity:
  + **TotalIOpsPerSec** should have 30% headroom
  + Check if queries are waiting on disk I/O

**Resolution Steps:**

Configuration Changes:

**Priority 1: Increase Queue Capacity**
+ query-queue-size: 4096 (from default 1024)

**Priority 2: Increase Concurrency (if resources allow)**
+ query-concurrency: Increase to 75% of vCPUs
+ Example: 16 vCPU → query-concurrency = 12
+ Verify CPUUtilization stays < 80% after change
+ Verify MemoryUtilization stays < 80% after change

**Priority 3: Optimize Query Execution**
+ query-memory-bytes: Ensure adequate allocation
+ storage-series-id-set-cache-size: 1000-1500
+ http-read-timeout: 120s (prevent premature timeouts)

**Priority 4: Set Protective Limits**
+ influxql-max-select-series: 10000
+ influxql-max-select-point: 100000000

**Application-Level Solutions:**
+ **Implement query result caching** (Redis, Memcached)
  + Cache results for frequently executed queries
  + Set appropriate TTLs based on data freshness requirements
  + Monitor cache hit rates
+ **Use continuous queries** to pre-aggregate common patterns
  + Pre-calculate common aggregations
  + Query pre-aggregated data instead of raw data
+ **Add pagination** for large result sets
  + Limit initial query size
  + Load additional data on demand
+ **Implement query rate limiting** per user/dashboard
  + Prevent single users from overwhelming the system
  + Set fair-use quotas
+ **Use downsampled data** for historical queries
  + Query lower-resolution data for older time ranges
  + Reserve full-resolution queries for recent data
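The per-user rate limiting suggested above is commonly implemented as a token bucket. A minimal in-process sketch with illustrative parameters:

```python
import time

class TokenBucket:
    """Per-user query rate limiter: allow() returns False once the budget is spent."""

    def __init__(self, rate_per_sec, burst, clock=time.monotonic):
        self.rate = rate_per_sec    # sustained queries per second
        self.burst = burst          # short-term burst allowance
        self.clock = clock
        self.tokens = float(burst)
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Keep one bucket per user or dashboard; when `allow()` returns False, reject or queue the query instead of forwarding it to the database.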

**Scaling Decision:**
+ If CPUUtilization > 70% sustained: Scale to larger instance
+ If MemoryUtilization > 70% sustained: Scale to memory-optimized instance
+ If query rate exceeds instance capacity: Scale to next tier per sizing table

# Adding tags and labels to resources
<a name="tagging-keyspaces-influxdb"></a>

 You can label Amazon Timestream for InfluxDB resources using *tags*. Tags let you categorize your resources in different ways—for example, by purpose, owner, environment, or other criteria. Tags can help you do the following: 
+  Quickly identify a resource based on the tags that you assigned to it. 
+  See AWS bills broken down by tags. 

 Tagging is supported by AWS services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Timestream for InfluxDB, and more. Efficient tagging can provide cost insights by enabling you to create reports across services that carry a specific tag. 

 Finally, it is good practice to follow optimal tagging strategies. For information, see [AWS Tagging Strategies](https://d0.awsstatic.com/aws-answers/AWS_Tagging_Strategies.pdf). 

# Tagging restrictions
<a name="TagRestrictions-influxdb"></a>

 Each tag consists of a key and a value, both of which you define. The following restrictions apply: 
+  Each Timestream for InfluxDB DB instance can have only one tag with the same key. If you try to add an existing tag, the existing tag value is updated to the new value. 
+ A value acts as a descriptor within a tag category. In Timestream for InfluxDB the value cannot be empty or null.
+  Tag keys and values are case sensitive. 
+  The maximum key length is 128 Unicode characters. 
+ The maximum value length is 256 Unicode characters. 
+  The allowed characters are letters, white space, and numbers, plus the following special characters: `+ - = . _ : /` 
+  The maximum number of tags per resource is 50.
+  AWS-assigned tag names and values are automatically assigned the `aws:` prefix, which you can't assign. AWS-assigned tag names don't count toward the tag limit of 50. User-assigned tag names have the prefix `user:` in the cost allocation report. 
+  You can't backdate the application of a tag. 

## Security best practices for Timestream for InfluxDB
<a name="timestream-for-influx-getting-started-security-best-practices"></a>

### Optimize writes to InfluxDB
<a name="timestream-for-influx-getting-started-security-best-practices-optimize-writes"></a>

Like any other time series database, InfluxDB is built to ingest and process data in real time. To keep the system performing at its best, we recommend the following optimizations when writing data to InfluxDB:
+ **Batch Writes:** When writing data to InfluxDB, write data in batches to minimize the network overhead of each write request. The optimal batch size is 5,000 lines of line protocol per write request. To write multiple lines in one request, each line of line protocol must be delimited by a newline character (`\n`).
+ **Sort tags by key:** Before writing data points to InfluxDB, sort tags by key in lexicographic order. 

  ```
  # Unoptimized line protocol example with unsorted tags
  measurement,tagC=therefore,tagE=am,tagA=i,tagD=i,tagB=think fieldKey=fieldValue 1562020262
  
  # Optimized line protocol example with tags sorted by key
  measurement,tagA=i,tagB=think,tagC=therefore,tagD=i,tagE=am fieldKey=fieldValue 1562020262
  ```
+ **Use the coarsest time precision possible:** InfluxDB writes data with nanosecond precision; however, if your data isn't collected in nanoseconds, there is no need to write at that precision. For better performance, use the coarsest precision possible for timestamps. You can specify the write precision in the following ways:
  + When using an SDK, specify the `WritePrecision` when setting the time attribute of your point. For more information on InfluxDB client libraries, see the [InfluxDB Documentation](https://docs.influxdata.com/influxdb/v2/api-guide/client-libraries/).
  + When using Telegraf, configure the time precision in the Telegraf agent configuration. Precision is specified as an interval with an integer and a unit (for example, `0s`, `10ms`, `2us`, `4s`). Valid time units are "ns", "us", "ms", and "s".

    ```
    [agent]
     interval = "10s"
     metric_batch_size = 5000
     precision = "0s"
    ```
+ **Use gzip compression:** Use gzip compression to speed up writes to InfluxDB and reduce network bandwidth. Benchmarks have shown up to a 5x speed improvement when data is compressed.
  + When using Telegraf, in the `influxdb_v2` output plugin configuration in your `telegraf.conf`, set the `content_encoding` option to `gzip`:

    ```
    [[outputs.influxdb_v2]]
      urls = ["http://localhost:8086"]
      # ...
      content_encoding = "gzip"
    ```
  + When using client libraries, each [InfluxDB client library](https://docs.influxdata.com/influxdb/v2/api-guide/client-libraries/) provides options for compressing write requests or enforces compression by default. The method for enabling compression is different for each library. For specific instructions, see the [InfluxDB Documentation](https://docs.influxdata.com/influxdb/v2/api-guide/client-libraries/).
  + When using the InfluxDB API `/api/v2/write` endpoint to write data, compress the data with gzip and set the Content-Encoding header to gzip.
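The raw-API option above can be sketched in Python with the standard library. The compression step is the substance; the request itself is commented out because the URL, token, org, and bucket are placeholders:

```python
import gzip

def compress_lines(lines):
    """Gzip a batch of line-protocol points for a single write request."""
    payload = "\n".join(lines).encode("utf-8")
    return gzip.compress(payload)

# Sketch of the raw /api/v2/write call (endpoint and headers per the InfluxDB v2 API;
# URL, token, org, and bucket below are placeholders):
#
# import urllib.request
# body = compress_lines(["cpu,host=server01 usage=0.5 1562020262"])
# req = urllib.request.Request(
#     "http://localhost:8086/api/v2/write?org=my-org&bucket=my-bucket&precision=s",
#     data=body,
#     headers={
#         "Authorization": "Token MY_TOKEN",
#         "Content-Encoding": "gzip",          # tells the server the body is gzipped
#         "Content-Type": "text/plain; charset=utf-8",
#     },
# )
# urllib.request.urlopen(req)
```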

### Design for performance
<a name="timestream-for-influx-getting-started-security-best-practices-design-for-performance"></a>

Design your schema for simpler and more performant queries. The following guidelines will help ensure that your schema is easy to query and maximizes query performance:
+ **Design to query:** Choose [measurements](https://docs.influxdata.com/influxdb/v2/reference/glossary/#measurement), [tag keys](https://docs.influxdata.com/influxdb/v2/reference/glossary/#tag-key), and [field keys](https://docs.influxdata.com/influxdb/v2/reference/glossary/#field-key) that are easy to query. To achieve this goal, follow these principles:
  + Use measurements that have a simple name and accurately describe the schema. 
  + Avoid using the same name for a [tag key](https://docs.influxdata.com/influxdb/v2/reference/glossary/#tag-key) and [field key](https://docs.influxdata.com/influxdb/v2/reference/glossary/#field-key) within the same schema.
  + Avoid using reserved [Flux keywords](https://docs.influxdata.com/flux/v0/spec/lexical-elements/#keywords) and special characters in tag and field keys. 
  + Tags store metadata that describe the fields and are common across many data points. 
  + Fields store unique or highly variable data, usually numeric data points. 
  + Measurements and keys should not contain data; they should be used to aggregate or describe data. Data is stored in tag and field values.
+ **Keep your time series cardinality under control:** High series cardinality is one of the main causes of decreased write and read performance in InfluxDB. In the context of InfluxDB, high cardinality refers to the presence of a very large number of unique tag values. Tag values are indexed in InfluxDB, which means that a very high number of unique values generates a larger index, which can slow down data ingestion and query performance.

  To better understand and resolve potential high cardinality related issues you can follow these steps:
  + Understand the causes of high cardinality
  + Measure the cardinality of your buckets
  + Take action to resolve high cardinality
+ **Causes of high series cardinality:** InfluxDB indexes the data based on measurements and tags to speed up data reads. Each set of indexed data elements forms a [series key](https://docs.influxdata.com/influxdb/v2/reference/glossary/#series-key). [Tags](https://docs.influxdata.com/influxdb/v2/reference/glossary/#tag) containing highly variable information like unique IDs, hashes, and random strings lead to a large number of [series](https://docs.influxdata.com/influxdb/v2/reference/glossary/#series), also known as high [series cardinality](https://docs.influxdata.com/influxdb/v2/reference/glossary/#series-cardinality). High series cardinality is the primary driver of high memory usage in InfluxDB.
+ **Measuring series cardinality:** If you experience performance slowdowns or see ever-increasing memory usage in your Timestream for InfluxDB instance, we recommend measuring the series cardinality of your buckets.

  InfluxDB provides functions that allow you to measure series cardinality in both Flux and InfluxQL.
  + In Flux, use the `influxdb.cardinality()` function
  + In InfluxQL, use the `SHOW SERIES CARDINALITY` command

  In both cases the engine returns the number of unique series keys in your data. Keep in mind that it is not recommended to have more than 10 million series keys on any of your Timestream for InfluxDB instances.
+ **Resolving high series cardinality:** If you find that any of your buckets have high cardinality, there are a few corrective steps you can take:
  + **Review your tags:** Ensure that your workloads don't generate cases where tags have unique values for most entries. This can happen when the number of unique tag values grows continually over time, or when log-type messages are written to the database, where every message has a unique combination of timestamp, tags, and so on. You can use the following Flux code to help you determine which tags contribute most to your high-cardinality issues:

    ```
    // Count unique values for each tag in a bucket
    import "influxdata/influxdb/schema"
    
    cardinalityByTag = (bucket) => schema.tagKeys(bucket: bucket)
        |> map(
            fn: (r) => ({
                tag: r._value,
                _value: if contains(set: ["_stop", "_start"], value: r._value) then
                    0
                else
                    (schema.tagValues(bucket: bucket, tag: r._value)
                        |> count()
                        |> findRecord(fn: (key) => true, idx: 0))._value,
            }),
        )
        |> group(columns: ["tag"])
        |> sum()
    
    cardinalityByTag(bucket: "amzn-s3-demo-bucket")
    ```

    If you're experiencing very high cardinality, the query above may time out. If you experience a timeout, run the queries below, one at a time.

    Generate a list of tags:

    ```
    // Generate a list of tags
    import "influxdata/influxdb/schema"
    
    schema.tagKeys(bucket: "amzn-s3-demo-bucket")
    ```

    Count unique tag values for each tag:

    ```
    // Run the following for each tag to count the number of unique tag values
    import "influxdata/influxdb/schema"
    
    tag = "example-tag-key"
    
    schema.tagValues(bucket: "amzn-s3-demo-bucket1", tag: tag)
        |> count()
    ```

    We recommend that you run these at different points in time to identify which tag is growing faster.
  + **Improve your schema:** Follow the schema modeling recommendations discussed in [Design for performance](#timestream-for-influx-getting-started-security-best-practices-design-for-performance).
  + **Remove or aggregate older data to reduce cardinality:** Consider whether your use case needs all the data that is causing your high-cardinality issues. If this data is no longer needed or accessed frequently, you can aggregate it, delete it, or export it to another engine such as Timestream for LiveAnalytics for long-term storage and analysis.

# Troubleshooting
<a name="timestream-for-influx-troubleshooting"></a>

## Warning of "dev" version not recognized
<a name="timestream-for-influx-getting-started-security-troubleshooting-dev-not-recognized"></a>

The warning 'WARN: Couldn't parse version "dev" reported by server, assuming latest backup/restore APIs are supported' may be displayed during migration. This warning can be ignored.

## Migration failed during restoration stage
<a name="timestream-for-influx-getting-started-security-troubleshooting-migration-failed"></a>

In the event of a failed migration during the restoration stage, you can use the `--retry-restore-dir` flag to re-attempt the restoration. Pass the `--retry-restore-dir` flag a path to a previously backed-up directory to skip the backup stage and retry the restoration stage. If a migration fails during restoration, the output indicates the backup directory that was created for that migration.

Possible reasons for a restore failing include:
+ An invalid InfluxDB destination token.
+ A bucket existing in the destination instance with the same name as a bucket in the source instance. For individual bucket migrations, use the `--dest-bucket` option to set a unique name for the migrated bucket.
+ Connectivity failure, either with the source or destination hosts or with an optional S3 bucket.

## Amazon Timestream for InfluxDB basic operational guidelines
<a name="timestream-for-influx-getting-started-security-best-practices-operational-guidelines"></a>

Following are basic operational guidelines that everyone should follow when working with Amazon Timestream for InfluxDB. Note that the Amazon Timestream for InfluxDB Service Level Agreement requires that you follow these guidelines:
+ Use metrics to monitor your memory, CPU, and storage usage. You can set up Amazon CloudWatch to notify you when usage patterns change or when you approach the capacity of your deployment. This way, you can maintain system performance and availability.
+ Scale up your DB instance when you are approaching storage capacity limits. You should have some buffer in storage and memory to accommodate unforeseen increases in demand from your applications. Keep in mind that at this time, you will need to create a new instance and migrate your data to achieve this.
+ If your database workload requires more I/O than you have provisioned, recovery after a failover or database failure will be slow. To increase the I/O capacity of a DB instance, do any or all of the following:
  + Migrate to a different DB instance with higher I/O capacity.
  + If you are already using Influx IOPS Included storage, provision a storage type with higher IOPS Included.
+ If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of a DB instance can change after a failover. Caching the DNS data for an extended time can thus lead to connection failures. Your application might try to connect to an IP address that's no longer in service.
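The effect of a short TTL can be sketched at the application layer. The class below is a hypothetical Python helper (not an AWS or InfluxDB API) that re-resolves a hostname after a configurable TTL, so a failed-over instance's new IP address is picked up quickly:

```python
import socket
import time

class ShortTtlResolver:
    """Cache DNS lookups for a short TTL (hypothetical helper, not an AWS API).

    Re-resolving after `ttl_seconds` ensures that a DB instance's new IP
    address is picked up shortly after a failover.
    """

    def __init__(self, ttl_seconds=30, resolver=None):
        self.ttl = ttl_seconds
        # The resolver is injectable so the class can be tested without network access.
        self.resolver = resolver or (lambda host: socket.getaddrinfo(host, None)[0][4][0])
        self._cache = {}  # host -> (ip, resolved_at)

    def resolve(self, host):
        entry = self._cache.get(host)
        now = time.monotonic()
        if entry is None or now - entry[1] >= self.ttl:
            self._cache[host] = (self.resolver(host), now)
        return self._cache[host][0]
```

Pass the freshly resolved address to your client instead of letting a long-lived process cache the hostname indefinitely.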

## DB instance RAM recommendations
<a name="timestream-for-influx-getting-started-security-best-practices-ram-recommendations"></a>

An Amazon Timestream for InfluxDB performance best practice is to allocate enough RAM so that your working set resides almost completely in memory. The working set is the data and indexes that are frequently in use on your instance. The more you use the DB instance, the more the working set will grow.

# Security in Timestream for InfluxDB
<a name="security-timestream-for-influxdb"></a>

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center and network architecture that is built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) describes this as security *of* the cloud and security *in* the cloud:
+ **Security of the cloud** – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. The effectiveness of our security is regularly tested and verified by third-party auditors as part of the [AWS compliance programs](https://aws.amazon.com/compliance/programs/). To learn about the compliance programs that apply to Timestream for InfluxDB, see [AWS Services in Scope by Compliance Program](https://aws.amazon.com/compliance/services-in-scope/).
+ **Security in the cloud** – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your organization's requirements, and applicable laws and regulations. 

This documentation will help you understand how to apply the shared responsibility model when using Timestream for InfluxDB. The following topics show you how to configure Timestream for InfluxDB to meet your security and compliance objectives. You'll also learn how to use other AWS services that can help you to monitor and secure your Timestream for InfluxDB resources. 

**Topics**
+ [Overview](timestream-for-influx-security.md)
+ [Database authentication with Amazon Timestream for InfluxDB](timestream-for-influx-security-db-authentication.md)
+ [How Amazon Timestream for InfluxDB uses secrets](timestream-for-influx-security-db-secrets.md)
+ [Data protection in Timestream for InfluxDB](data-protection-for-influx-db.md)
+ [Identity and Access Management for Amazon Timestream for InfluxDB](security-iam-for-influxdb.md)
+ [Logging and monitoring in Timestream for InfluxDB](monitoring-influxdb.md)
+ [Compliance validation for Amazon Timestream for InfluxDB](timestream-compliance.md)
+ [Resilience in Amazon Timestream for InfluxDB](disaster-recovery-resiliency-influxdb.md)
+ [Infrastructure security in Amazon Timestream for InfluxDB](infrastructure-security-influxdb.md)
+ [Configuration and vulnerability analysis in Timestream for InfluxDB](ConfigAndVulnerability-timestream-for-influxdb.md)
+ [Incident response in Timestream for InfluxDB](IncidentResponse-timestream-for-influxdb.md)
+ [Amazon Timestream for InfluxDB API and interface VPC endpoints (AWS PrivateLink)](timestream-influxb-privatelink.md)
+ [Security best practices for Timestream for InfluxDB](security-best-practices.md)

# Overview
<a name="timestream-for-influx-security"></a>

This documentation helps you understand how to apply the [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) when using Amazon Timestream for InfluxDB. The following topics show you how to configure Amazon Timestream for InfluxDB to meet your security and compliance objectives. You also learn how to use other AWS services that help you monitor and secure your Amazon Timestream for InfluxDB resources. 

You can manage access to your Amazon Timestream for InfluxDB resources and your databases on a DB instance. The method you use to manage access depends on what type of task the user needs to perform with Amazon Timestream for InfluxDB:
+ Run your DB instance in a Virtual Private Cloud (VPC) based on the Amazon VPC service for network access control.
+ Use AWS Identity and Access Management (IAM) policies to assign permissions that determine who is allowed to manage Amazon Timestream for InfluxDB resources. For example, you can use IAM to determine who is allowed to create, describe, modify, and delete DB instances, tag resources, or modify security groups.
+ Use security groups to control what IP addresses or Amazon EC2 instances can connect to your databases on a DB instance. When you first create a DB instance, it's only accessible through rules specified by an associated security group.
+ Use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) connections with your DB instances.
+ Use the security features of your InfluxDB engine to control who can log in to the databases on a DB instance. These features work just as if the database was on your local network. For more information, see [Security in Timestream for InfluxDB](security-timestream-for-influxdb.md).

**Note**  
You have to configure security only for your use cases. You don't have to configure security access for processes that Amazon Timestream for InfluxDB manages. These include creating backups, replicating data between a primary DB instance and a read replica, and other processes.

**Topics**
+ [General security](timestream-for-influx-getting-started-security.md)

# General security
<a name="timestream-for-influx-getting-started-security"></a>

**Topics**
+ [Permissions](#timestream-for-influx-getting-started-security-permissions)
+ [Network access](#timestream-for-influx-getting-started-security-network-access)
+ [Dependencies](#timestream-for-influx-getting-started-security-dependencies)
+ [S3 buckets](#timestream-for-influx-getting-started-security-s3-buckets)

## Permissions
<a name="timestream-for-influx-getting-started-security-permissions"></a>

InfluxDB users should be granted least-privilege permissions. Only tokens granted to specific users, instead of operator tokens, should be used during migration.

Timestream for InfluxDB uses IAM permissions to control user permissions. We recommend users be granted access to the specific actions and resources that they require. For more information, see [Grant least privilege access](https://docs.aws.amazon.com/wellarchitected/2022-03-31/framework/sec_permissions_least_privileges.html). 

## Network access
<a name="timestream-for-influx-getting-started-security-network-access"></a>

The Influx migration script can function locally, migrating data between two InfluxDB instances on the same system, but the primary use case for migrations is migrating data across a network, whether local or public. This brings security considerations. By default, the Influx migration script verifies TLS certificates for instances with TLS enabled. We recommend that you enable TLS in your InfluxDB instances and do not use the script's `--skip-verify` option.

We recommend you use an allow-list to restrict network traffic to be from sources you are expecting. You can do this by limiting network traffic to the InfluxDB instances only from known IPs.

## Dependencies
<a name="timestream-for-influx-getting-started-security-dependencies"></a>

The latest major versions of all dependencies should be used, including Influx CLI, InfluxDB, Python, the Requests module, and optional dependencies such as `mountpoint-s3` and `rclone`.

## S3 buckets
<a name="timestream-for-influx-getting-started-security-s3-buckets"></a>

If S3 buckets are used as temporary storage for migration, we recommend enabling TLS and versioning, and disabling public access.

**Using S3 buckets for migration**

1. Open the AWS Management Console, navigate to **Amazon Simple Storage Service** and then choose **Buckets**.

1. Choose the bucket you wish to use.

1. Choose the **Permissions** tab.

1. Under **Block public access (bucket settings)**, choose **Edit**.

1. Check **Block all public access**.

1. Choose **Save changes**.

1. Under **Bucket policy**, choose **Edit**.

1. Enter the following, replacing *<example-bucket>* with your bucket name, to enforce the use of TLS version 1.2 or later for connections:

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "EnforceTLSv12orHigher",
               "Principal": {
                   "AWS": "*"
               },
               "Action": [
                   "s3:*"
               ],
               "Effect": "Deny",
               "Resource": [
                   "arn:aws:s3:::<example-bucket>/*",
                   "arn:aws:s3:::<example-bucket>"
               ],
               "Condition": {
                   "NumericLessThan": {
                       "s3:TlsVersion": 1.2
                   }
               }
           }
       ]
   }
   ```

------

1. Choose **Save changes**.

1. Choose the **Properties** tab.

1. Under **Bucket Versioning**, choose **Edit**.

1. Check **Enable**.

1. Choose **Save changes**.

For information about Amazon S3 bucket best security practices, see [Security best practices for Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html).
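If you prefer to script the bucket setup, the policy above can be generated programmatically. A minimal sketch in Python (the bucket name is a placeholder; pass the resulting JSON to the S3 `PutBucketPolicy` API, for example with `boto3`):

```python
import json

def tls_enforcement_policy(bucket_name):
    """Build a bucket policy that denies S3 access over TLS older than 1.2."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "EnforceTLSv12orHigher",
                "Principal": {"AWS": "*"},
                "Action": ["s3:*"],
                "Effect": "Deny",
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}/*",
                    f"arn:aws:s3:::{bucket_name}",
                ],
                "Condition": {"NumericLessThan": {"s3:TlsVersion": 1.2}},
            }
        ],
    }

# Placeholder bucket name; with boto3 you would call
# s3.put_bucket_policy(Bucket=..., Policy=policy_json).
policy_json = json.dumps(tls_enforcement_policy("amzn-s3-demo-bucket"), indent=4)
```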

# Database authentication with Amazon Timestream for InfluxDB
<a name="timestream-for-influx-security-db-authentication"></a>

Amazon Timestream for InfluxDB supports two ways to authenticate database users.

Password authentication and API token authentication use different methods of authenticating to the database, so a specific user can log in to a database using only one authentication method. In both cases, InfluxDB performs all administration of user accounts and API tokens.

## Password authentication
<a name="timestream-for-influx-security-db-authentication-password"></a>

During the InfluxDB DB instance creation process, you created an organization, a user, and a password. That user has permissions to manage everything in your Timestream for InfluxDB DB instance. With this username and password combination, you can log in to your instance using the InfluxDB UI, and you can use the Influx CLI to generate an operator token.

An operator token is required to create users, delete buckets and organizations, and perform similar administrative tasks. For more information, see [Database authentication options](timestream-for-influx-db-connecting.md#timestream-for-influx-db-connecting-authentication-options).

## API tokens
<a name="timestream-for-influx-security-db-authentication-api-token"></a>

InfluxDB API tokens ensure secure interaction between InfluxDB and external tools such as clients or applications. An API token belongs to a specific user and identifies InfluxDB permissions within the user’s organization.

There are three types of API tokens in InfluxDB: 
+ Operator Token: Grants full read and write access to all organizations and all organization resources in InfluxDB OSS 2.x. Some operations, for example, retrieving the server configuration, require operator permissions. To create an operator token manually with the InfluxDB UI, `api/v2` API, or Influx CLI after the setup process is completed, you must use an existing operator token or your username and password. To create a new operator token without using an existing one, see the [influxd recovery auth](https://docs.influxdata.com/influxdb/v2/reference/cli/influxd/recovery/auth/) CLI.
**Important**  
Because operator tokens have full read and write access to all organizations in the database, we recommend [creating an All-Access token](https://docs.influxdata.com/influxdb/v2/admin/tokens/create-token/) for each organization and using those to manage InfluxDB. This helps to prevent accidental interactions across organizations.
+ All-Access API Token: Grants full read and write access to all resources in an organization.
+ Read/Write Tokens: Grant read access, write access, or both to specific buckets in an organization.

All InfluxDB tokens are long-lived, with no set expiration date. We therefore recommend that you do not use your operator or all-access tokens to send monitoring data from your clients or Telegraf agents, or embed them in your dashboard applications. For these applications, create read/write tokens with only the permissions necessary to get the job done. For more information on how to create InfluxDB tokens, see [Create a token](https://docs.influxdata.com/influxdb/v2/admin/tokens/create-token/).
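As a sketch of the least-privilege recommendation, the helper below (hypothetical; the org and bucket IDs are placeholders) builds a request body for the InfluxDB v2 `POST /api/v2/authorizations` endpoint that grants access only to specific buckets:

```python
import json

def read_write_token_request(org_id, read_bucket_ids, write_bucket_ids, description=""):
    """Request body for InfluxDB v2 POST /api/v2/authorizations.

    Grants read/write access only to the listed buckets instead of
    organization-wide all-access permissions.
    """
    permissions = []
    for bucket_id in read_bucket_ids:
        permissions.append({"action": "read",
                            "resource": {"type": "buckets", "id": bucket_id, "orgID": org_id}})
    for bucket_id in write_bucket_ids:
        permissions.append({"action": "write",
                            "resource": {"type": "buckets", "id": bucket_id, "orgID": org_id}})
    return {"orgID": org_id, "description": description, "permissions": permissions}

# Placeholder IDs; POST this body (authenticated with an existing token)
# to /api/v2/authorizations on your instance.
body = json.dumps(read_write_token_request("example-org-id", ["bucket-id-1"], ["bucket-id-1"],
                                           description="telegraf-agent"))
```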

## Secrets
<a name="timestream-for-influx-getting-started-security-secrets"></a>

InfluxDB operator tokens are generated on instance setup; other kinds of tokens, such as all-access and read/write tokens, can be created using the [Influx CLI](https://docs.influxdata.com/influxdb/v2/tools/influx-cli/), Influx v2 API, or the Timestream for InfluxDB Multi-user rotation function. See [Manage API tokens](https://docs.influxdata.com/influxdb/v2/admin/tokens/) for how to generate, view, assign, and delete tokens.

We recommend that you rotate Timestream for InfluxDB tokens often using AWS Secrets Manager and store tokens via environment variables. See [Use Tokens](https://docs.influxdata.com/influxdb/cloud/admin/tokens/use-tokens/#add-a-token-to-a-cli-request) for token usage in environment variables and [Rotating the secret](timestream-for-influx-security-db-secrets.md#timestream-for-influx-security-db-secrets-rotation) for how to rotate Timestream for InfluxDB users and tokens.

**See also:**
+ [Infrastructure security in Amazon Timestream for InfluxDB](infrastructure-security-influxdb.md)
+ [Security best practices for Timestream for InfluxDB](security-best-practices.md)

# How Amazon Timestream for InfluxDB uses secrets
<a name="timestream-for-influx-security-db-secrets"></a>

Timestream for InfluxDB supports username and password authentication through the user interface, and token credentials for least-privilege client and application connections. Timestream for InfluxDB users have `allAccess` permissions within their organization, while tokens can have any set of permissions. Following best practices for secure API token management, create users to manage tokens for fine-grained access within an organization. Additional information on admin best practices with Timestream for InfluxDB can be found in the [InfluxData documentation](https://docs.influxdata.com/influxdb/v2/admin/tokens/create-token/).

AWS Secrets Manager is a secret storage service that you can use to protect database credentials, API keys, and other secret information. Then in your code, you can replace hardcoded credentials with an API call to Secrets Manager. This helps ensure that the secret can't be compromised by someone examining your code, because the secret isn't there. For an overview of Secrets Manager, see [What is AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html).
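As a concrete sketch of that pattern, the function below (a hypothetical helper, not part of the service) fetches and parses a credential secret. The Secrets Manager client is passed in, so the same code works with `boto3.client("secretsmanager")` in production and with a stub in tests:

```python
import json

def load_influx_credentials(secretsmanager_client, secret_arn):
    """Fetch and parse a Timestream for InfluxDB credential secret.

    In production, pass boto3.client("secretsmanager"); any object with a
    compatible get_secret_value method works.
    """
    response = secretsmanager_client.get_secret_value(SecretId=secret_arn)
    return json.loads(response["SecretString"])
```

Calling this at startup keeps the username, password, and tokens out of your source code entirely.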

When you create a database instance, Timestream for InfluxDB automatically creates an admin secret for you to use with the multi-user rotation AWS Lambda function. In order to rotate Timestream for InfluxDB users and tokens, you must create a new secret by hand for each user or token you wish to rotate. Each secret can be configured to rotate on a schedule with the use of a Lambda function. The process to set up a new rotating secret consists of uploading the Lambda function code, configuring the Lambda role, defining the new secret, and configuring the secret rotation schedule.

## What's in the secret
<a name="timestream-for-influx-security-db-secrets-definition"></a>

When you store Timestream for InfluxDB user credentials in the secret, use the following format.

Single-user:

```
{
  "engine": "<required: must be set to 'timestream-influxdb'>",
  "username": "<required: username>",
  "password": "<required: password>",
  "dbIdentifier": "<required: DB identifier>"
}
```

When you create a Timestream for InfluxDB instance, an admin secret is automatically stored in Secrets Manager with credentials to be used with the multi-user Lambda function. Set the `adminSecretArn` to the `Authentication Properties Secret Manager ARN` value found on the DB instance summary page or to the ARN of an admin secret. To create a new admin secret you must already have the associated credentials and the credentials must have admin privileges.

When you store Timestream for InfluxDB token credentials in the secret, use the following format.

Multi-user:

```
{
  "engine": "<required: must be set to 'timestream-influxdb'>",
  "org": "<required: organization to associate token with>",
  "adminSecretArn": "<required: ARN of the admin secret>",
  "type": "<required: allAccess or operator or custom>",
  "dbIdentifier": "<required: DB identifier>",
  "token": "<required unless generating a new token: token being rotated>",
  "writeBuckets": "<optional: list of bucketIDs for custom type token, must be input within plaintext panel, for example ['id1','id2']>",
  "readBuckets": "<optional: list of bucketIDs for custom type token, must be input within plaintext panel, for example ['id1','id2']>",
  "permissions": "<optional: list of permissions for custom type token, must be input within plaintext panel, for example ['write-tasks','read-tasks']>"
}
```

When you store Timestream for InfluxDB admin credentials in the secret, use the following format:

Admin secret:

```
{
  "engine": "<required: must be set to 'timestream-influxdb'>",
  "username": "<required: username>",
  "password": "<required: password>",
  "dbIdentifier": "<required: DB identifier>",
  "organization": "<optional: initial organization>",
  "bucket": "<optional: initial bucket>"
}
```

To turn on automatic rotation for the secret, the secret must be in the correct JSON structure. See [Rotating the secret](#timestream-for-influx-security-db-secrets-rotation) for how to rotate Timestream for InfluxDB secrets.
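Before enabling rotation, you can sanity-check a secret against these formats. A minimal sketch (hypothetical helper, not part of the rotation functions) that covers the required keys listed above:

```python
# Required keys per documented secret format; optional keys are omitted
# (for example, "token" is only required when not generating a new token).
REQUIRED_KEYS = {
    "single-user": {"engine", "username", "password", "dbIdentifier"},
    "multi-user": {"engine", "org", "adminSecretArn", "type", "dbIdentifier"},
    "admin": {"engine", "username", "password", "dbIdentifier"},
}

def validate_secret(secret, kind):
    """Return a list of problems found in a secret dict for the given format."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS[kind] - secret.keys())]
    if secret.get("engine") != "timestream-influxdb":
        problems.append("engine must be 'timestream-influxdb'")
    return problems
```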

## Modifying the secret
<a name="timestream-for-influx-security-db-secrets-modification"></a>

The credentials generated during the Timestream for InfluxDB instance creation process are stored in a Secrets Manager secret in your account. The [GetDbInstance](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/API_GetDbInstance.html) response object contains an `influxAuthParametersSecretArn` field that holds the Amazon Resource Name (ARN) of that secret. The secret is only populated after your Timestream for InfluxDB instance is available. It is a read-only copy: updates, modifications, or deletions to this secret don't affect the created DB instance. If you delete this secret, the [API response](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/API_GetDbInstance.html#API_GetDbInstance_ResponseSyntax) will still refer to the deleted secret ARN.

To create a new token in the Timestream for InfluxDB instance rather than store existing token credentials, you can create non-operator tokens by leaving the `token` value blank in the secret and using the multi-user rotation function with the `AUTHENTICATION_CREATION_ENABLED` Lambda environment variable set to `true`. If you create a new token, the permissions defined in the secret are assigned to the token and cannot be altered after the first successful rotation. For more information on rotating secrets, see [Rotating AWS Secrets Manager Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html).

If a secret is deleted, the associated user or token in the Timestream for InfluxDB instance will not be deleted.

## Rotating the secret
<a name="timestream-for-influx-security-db-secrets-rotation"></a>

You use the Timestream for InfluxDB single- and multi-user rotation Lambda functions to rotate Timestream for InfluxDB user and token credentials. Use the single-user Lambda function to rotate user credentials for your Timestream for InfluxDB instance, and use the multi-user Lambda function to rotate token credentials for your Timestream for InfluxDB instance.

Rotating users and tokens with the single- and multi-user Lambda functions is optional. Timestream for InfluxDB credentials never expire, and any exposed credentials pose a risk of malicious actions against your DB instance. The advantage of rotating Timestream for InfluxDB credentials with Secrets Manager is an added security layer that limits the attack vector of exposed credentials to the window of time until the next rotation cycle. If no rotation mechanism is in place for your DB instance, any exposed credentials remain valid until they are manually deleted.

You can configure Secrets Manager to automatically rotate secrets for you according to a schedule that you specify. This enables you to replace long-term secrets with short-term ones, which helps to significantly reduce the risk of compromise. For more information on rotating secrets with Secrets Manager, see [Rotate AWS Secrets Manager Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html).
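With `boto3`, scheduling rotation is a single `rotate_secret` call. A hedged sketch (the ARNs are placeholders, and `rotation_request` is a hypothetical helper that builds the keyword arguments):

```python
def rotation_request(secret_arn, rotation_lambda_arn, days=30):
    """Keyword arguments for secretsmanager.rotate_secret (boto3).

    Usage (requires AWS credentials):
        import boto3
        boto3.client("secretsmanager").rotate_secret(**rotation_request(
            "arn:aws:secretsmanager:us-east-2:111122223333:secret:MySecret",
            "arn:aws:lambda:us-east-2:111122223333:function:MyRotationFunction"))
    """
    return {
        "SecretId": secret_arn,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }
```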

### Rotating users
<a name="timestream-for-influx-security-db-user-rotation"></a>

When you rotate users with the single-user Lambda function, a new random password will be assigned to the user after each rotation. For more information on how to enable automatic rotation, see [Set up automatic rotation for non-database AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-other.html).

#### Rotating admin secrets
<a name="timestream-for-influx-security-db-admin-rotation"></a>

To rotate an admin secret you use the single-user rotation function. You need to add the `engine` and `dbIdentifier` values to the secret since those values are not automatically populated on DB initialization. See [What's in the secret](#timestream-for-influx-security-db-secrets-definition) for the complete secret template.

To locate an admin secret for a Timestream for InfluxDB instance you use the admin secret ARN from the Timestream for InfluxDB instance summary page. It is recommended that you rotate all Timestream for InfluxDB admin secrets since admin users have elevated permissions for the Timestream for InfluxDB instance.

#### Lambda rotation function
<a name="timestream-for-influx-security-db-user-lambda-function"></a>

You can rotate a Timestream for InfluxDB user with the single-user rotation function by creating a new secret that follows the format in [What's in the secret](#timestream-for-influx-security-db-secrets-definition) and adding the required fields for your Timestream for InfluxDB user. For more information on secret rotation Lambda functions, see [Rotation by Lambda function](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_lambda.html).


The single user rotation function authenticates with the Timestream for InfluxDB DB instance using the credentials defined in the secret, then generates a new random password and sets the new password for the user. For more information on secret rotation Lambda functions, see [Rotation by Lambda function](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_lambda.html).

#### Lambda function execution role permissions
<a name="timestream-for-influx-security-db-user-lambda-function-permissions"></a>

Use the following IAM policy as the role for the single-user Lambda function. The policy gives the Lambda function the required permissions to perform a secret rotation for Timestream for InfluxDB users.

Replace all items listed below in the IAM policy with values from your AWS account:
+ **rotating-secret-arn** — The ARN for the secret being rotated can be found in the Secrets Manager secret details.
+ **db-instance-arn** — The Timestream for InfluxDB instance ARN can be found on the Timestream for InfluxDB instance summary page.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:DescribeSecret",
                "secretsmanager:GetSecretValue",
                "secretsmanager:PutSecretValue",
                "secretsmanager:UpdateSecretVersionStage"
            ],
            "Resource": "arn:aws:secretsmanager:us-east-2:111122223333:secret:MySecret"
        },
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetRandomPassword"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "timestream-influxdb:GetDbInstance"
            ],
            "Resource": "arn:aws:timestream-influxdb:us-east-2:111122223333:db-instance/MyDbInstance"
        }
    ]
}
```

------

### Rotating tokens
<a name="timestream-for-influx-security-db-token-rotation"></a>

You can rotate a Timestream for InfluxDB token with the multi-user rotation function by creating a new secret that follows the format in [What's in the secret](#timestream-for-influx-security-db-secrets-definition) and adding the required fields for your Timestream for InfluxDB token. For more information on secret rotation Lambda functions, see [Rotation by Lambda function](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_lambda.html).

You can rotate a Timestream for InfluxDB token by using the Timestream for InfluxDB multi-user Lambda function. Set the `AUTHENTICATION_CREATION_ENABLED` environment variable to `true` in the Lambda configuration to enable token creation. To create a new token, use the multi-user format in [What's in the secret](#timestream-for-influx-security-db-secrets-definition) for your secret value. Omit the `token` key-value pair in the new secret and either set the `type` to `allAccess`, or define the specific permissions and set the `type` to `custom`. The rotation function will create a new token during the first rotation cycle. You can't change the token permissions by editing the secret after rotation, and any subsequent rotations use the permissions that are set in the DB instance.

#### Lambda rotation function
<a name="timestream-for-influx-security-db-token-lambda-function"></a>

The multi-user rotation function rotates token credentials by creating a new token with identical permissions, using the admin credentials in the admin secret. The Lambda function validates the token value in the secret before creating the replacement token, storing the new token value in the secret, and deleting the old token. If the Lambda function is creating a new token, it first validates that the `AUTHENTICATION_CREATION_ENABLED` environment variable is set to `true`, that there is no token value in the secret, and that the token type is not `operator`.

#### Lambda function execution role permissions
<a name="timestream-for-influx-security-db-token-lambda-function-permissions"></a>

Use the following IAM policy as the role for the multi-user Lambda function. The policy gives the Lambda function the required permissions to perform a secret rotation for Timestream for InfluxDB tokens.

Replace all items listed below in the IAM policy with values from your AWS account:
+ **rotating-secret-arn** — The ARN for the secret being rotated can be found in the Secrets Manager secret details.
+ **authentication-properties-admin-secret-arn** — The Timestream for InfluxDB admin secret ARN can be found on the Timestream for InfluxDB instance summary page.
+ **db-instance-arn** — The Timestream for InfluxDB instance ARN can be found on the Timestream for InfluxDB instance summary page.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:DescribeSecret",
                "secretsmanager:GetSecretValue",
                "secretsmanager:PutSecretValue",
                "secretsmanager:UpdateSecretVersionStage"
            ],
            "Resource": "arn:aws:secretsmanager:us-east-2:111122223333:secret:MySecret"
        },
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue"
            ],
            "Resource": "arn:aws:secretsmanager:us-east-2:111122223333:secret:MyAdminSecret"
        },
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetRandomPassword"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "timestream-influxdb:GetDbInstance"
            ],
            "Resource": "arn:aws:timestream-influxdb:us-east-2:111122223333:db-instance/MyDbInstance"
        }
    ]
}
```

------

# Data protection in Timestream for InfluxDB
<a name="data-protection-for-influx-db"></a>

The AWS [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) applies to data protection in Amazon Timestream for InfluxDB. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the [Data Privacy FAQ](https://aws.amazon.com/compliance/data-privacy-faq/). For information about data protection in Europe, see the [AWS Shared Responsibility Model and GDPR](https://aws.amazon.com/blogs/security/the-aws-shared-responsibility-model-and-gdpr/) blog post on the *AWS Security Blog*.

For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:
+ Use multi-factor authentication (MFA) with each account.
+ Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
+ Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see [Working with CloudTrail trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-trails.html) in the *AWS CloudTrail User Guide*.
+ Use AWS encryption solutions, along with all default security controls within AWS services.
+ Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
+ If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see [Federal Information Processing Standard (FIPS) 140-3](https://aws.amazon.com/compliance/fips/).

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a **Name** field. This includes when you work with Timestream for InfluxDB or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server.

For more detailed information on Timestream for InfluxDB data protection topics like Encryption at Rest and Key Management, select any of the available topics below.

**Topics**
+ [Encryption at rest](EncryptionAtRest-InfluxDB.md)
+ [Encryption in transit](EncryptionInTransit-for-influx-db.md)

# Encryption at rest
<a name="EncryptionAtRest-InfluxDB"></a>

Timestream for InfluxDB encryption at rest provides enhanced security by encrypting all your data at rest using encryption keys stored in [AWS Key Management Service (AWS KMS)](https://aws.amazon.com/kms/). This functionality helps reduce the operational burden and complexity involved in protecting sensitive data. With encryption at rest, you can build security-sensitive applications that meet strict encryption compliance and regulatory requirements. 
+ Encryption is turned on by default on your Timestream for InfluxDB DB instance, and cannot be turned off. The industry standard AES-256 encryption algorithm is the default encryption algorithm used.
+ AWS KMS is used for encryption at rest in Timestream for InfluxDB.
+ You don't need to modify your DB instance client applications to use encryption.

# Encryption in transit
<a name="EncryptionInTransit-for-influx-db"></a>

All your Timestream for InfluxDB data is encrypted in transit. By default, all communications to and from Timestream for InfluxDB are protected by using Transport Layer Security (TLS) encryption. 

Traffic to and from Amazon Timestream for InfluxDB is secured using supported TLS versions 1.2 or 1.3.

# Identity and Access Management for Amazon Timestream for InfluxDB
<a name="security-iam-for-influxdb"></a>





AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be *authenticated* (signed in) and *authorized* (have permissions) to use Timestream for InfluxDB resources. IAM is an AWS service that you can use with no additional charge.

**Topics**
+ [Authenticating with identities](#security_iam_authentication)
+ [Managing access using policies](#security_iam_access-manage)
+ [How Amazon Timestream for InfluxDB works with IAM](security_iam_service-with-iam-influxb.md)
+ [Identity-based policy examples for Amazon Timestream for InfluxDB](security_iam_id-based-policy-examples-influxb.md)
+ [Troubleshooting Amazon Timestream for InfluxDB identity and access](security_iam_troubleshoot-influxdb.md)
+ [Controlling access to a DB instance in a VPC](timestream-for-influxdb-controlling-access.md)
+ [Using service-linked roles for Amazon Timestream for InfluxDB](using-service-linked-roles.md)
+ [AWS managed policies for Amazon Timestream for InfluxDB](security-iam-awsmanpol-influxdb.md)
+ [Connecting to Timestream for InfluxDB through a VPC endpoint](timestream-influxdb-vpc-endpoint.md)

## Authenticating with identities
<a name="security_iam_authentication"></a>

Authentication is how you sign in to AWS using your identity credentials. You must be authenticated as the AWS account root user, an IAM user, or by assuming an IAM role.

You can sign in as a federated identity using credentials from an identity source like AWS IAM Identity Center (IAM Identity Center), single sign-on authentication, or Google/Facebook credentials. For more information about signing in, see [How to sign in to your AWS account](https://docs.aws.amazon.com/signin/latest/userguide/how-to-sign-in.html) in the *AWS Sign-In User Guide*.

For programmatic access, AWS provides an SDK and CLI to cryptographically sign requests. For more information, see [AWS Signature Version 4 for API requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_sigv.html) in the *IAM User Guide*.

### IAM users and groups
<a name="security_iam_authentication-iamuser"></a>

An *[IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)* is an identity with specific permissions for a single person or application. We recommend using temporary credentials instead of IAM users with long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

An *[IAM group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html)* specifies a collection of IAM users and makes permissions easier to manage for large sets of users. For more information, see [Use cases for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/gs-identities-iam-users.html) in the *IAM User Guide*.

### IAM roles
<a name="security_iam_authentication-iamrole"></a>

An *[IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)* is an identity with specific permissions that provides temporary credentials. You can assume a role by [switching from a user to an IAM role (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html) or by calling an AWS CLI or AWS API operation. For more information, see [Methods to assume a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage-assume.html) in the *IAM User Guide*.

IAM roles are useful for federated user access, temporary IAM user permissions, cross-account access, cross-service access, and applications running on Amazon EC2. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

## Managing access using policies
<a name="security_iam_access-manage"></a>

You control access in AWS by creating policies and attaching them to AWS identities or resources. A policy defines permissions when associated with an identity or resource. AWS evaluates these policies when a principal makes a request. Most policies are stored in AWS as JSON documents. For more information about JSON policy documents, see [Overview of JSON policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#access_policies-json) in the *IAM User Guide*.

Using policies, administrators specify who has access to what by defining which **principal** can perform **actions** on what **resources**, and under what **conditions**.

By default, users and roles have no permissions. An IAM administrator creates IAM policies and adds them to roles, which users can then assume. IAM policies define permissions regardless of the method used to perform the operation.

### Identity-based policies
<a name="security_iam_access-manage-id-based-policies"></a>

Identity-based policies are JSON permissions policy documents that you attach to an identity (user, group, or role). These policies control what actions identities can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

Identity-based policies can be *inline policies* (embedded directly into a single identity) or *managed policies* (standalone policies attached to multiple identities). To learn how to choose between managed and inline policies, see [Choose between managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-choosing-managed-or-inline.html) in the *IAM User Guide*.

### Resource-based policies
<a name="security_iam_access-manage-resource-based-policies"></a>

Resource-based policies are JSON policy documents that you attach to a resource. Examples include IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy.

Resource-based policies are inline policies that are located in that service. You can't use AWS managed policies from IAM in a resource-based policy.

### Access control lists (ACLs)
<a name="security_iam_access-manage-acl"></a>

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

Amazon S3, AWS WAF, and Amazon VPC are examples of services that support ACLs. To learn more about ACLs, see [Access control list (ACL) overview](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html) in the *Amazon Simple Storage Service Developer Guide*.

### Other policy types
<a name="security_iam_access-manage-other-policies"></a>

AWS supports additional policy types that can set the maximum permissions granted by more common policy types:
+ **Permissions boundaries** – Set the maximum permissions that an identity-based policy can grant to an IAM entity. For more information, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) in the *IAM User Guide*.
+ **Service control policies (SCPs)** – Specify the maximum permissions for an organization or organizational unit in AWS Organizations. For more information, see [Service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in the *AWS Organizations User Guide*.
+ **Resource control policies (RCPs)** – Set the maximum available permissions for resources in your accounts. For more information, see [Resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) in the *AWS Organizations User Guide*.
+ **Session policies** – Advanced policies passed as a parameter when creating a temporary session for a role or federated user. For more information, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) in the *IAM User Guide*.

### Multiple policy types
<a name="security_iam_access-manage-multiple-policies"></a>

When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*.
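The core of that evaluation logic is: an explicit deny always wins, then an explicit allow, and otherwise the request is implicitly denied. A minimal sketch, assuming statements are plain dictionaries; real IAM evaluation additionally handles wildcard patterns, condition keys, permissions boundaries, SCPs, and session policies:

```python
def matches(patterns, value):
    """Crude matcher: a bare "*" matches anything, otherwise exact match only."""
    return "*" in patterns or value in patterns

def evaluate(statements, action, resource):
    """Simplified IAM-style decision: explicit Deny > explicit Allow > implicit deny."""
    decision = "ImplicitDeny"
    for stmt in statements:
        if matches(stmt["Action"], action) and matches(stmt["Resource"], resource):
            if stmt["Effect"] == "Deny":
                return "Deny"      # an explicit deny short-circuits everything
            decision = "Allow"     # remember the allow, keep scanning for denies
    return decision
```

For example, a statement allowing all actions combined with a statement denying `timestream-influxdb:GetDbInstance` results in a `Deny` for that action, because the explicit deny takes precedence.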

# How Amazon Timestream for InfluxDB works with IAM
<a name="security_iam_service-with-iam-influxb"></a>






**IAM features you can use with Amazon Timestream for InfluxDB**  

| IAM feature | Timestream for InfluxDB support | 
| --- | --- | 
|  [Identity-based policies](security_iam_service-with-iam.md#security_iam_service-with-iam-id-based-policies)  |   Yes  | 
|  [Resource-based policies](#security_iam_service-with-iam-resource-based-policies-influxb)  |  No  | 
|  [Policy actions](#security_iam_service-with-iam-id-based-policies-actions-influxb)  |   Yes  | 
|  [Policy resources](#security_iam_service-with-iam-id-based-policies-resources-influxb)  |   Yes  | 
|  [Policy condition keys](#security_iam_service-with-iam-id-based-policies-conditionkeys-influxb)  |  No  | 
|  [ACLs](#security_iam_service-with-iam-acls-influxb)  |  No  | 
|  [ABAC (tags in policies)](#security_iam_service-with-iam-tags-influxb)  |   Yes  | 
|  [Temporary credentials](#security_iam_service-with-iam-roles-tempcreds-influxb)  |   Yes  | 
|  [Principal permissions](#security_iam_service-with-iam-principal-permissions-influxb)  |   Yes  | 
|  [Service roles](#security_iam_service-with-iam-roles-service-influxb)  |  No  | 
|  [Service-linked roles](#security_iam_service-with-iam-roles-service-linked-influxb)  |  Yes  | 

To get a high-level view of how Timestream for InfluxDB and other AWS services work with most IAM features, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Identity-based policies for Timestream for InfluxDB
<a name="security_iam_service-with-iam-id-based-policies-influxb"></a>

**Supports identity-based policies:** Yes

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. To learn about all of the elements that you can use in a JSON policy, see [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*.

### Identity-based policy examples for Timestream for InfluxDB
<a name="security_iam_service-with-iam-id-based-policies-examples-influxb"></a>



To view examples of Timestream for InfluxDB identity-based policies, see [Identity-based policy examples for Amazon Timestream for InfluxDB](security_iam_id-based-policy-examples-influxb.md).

## Resource-based policies within Timestream for InfluxDB
<a name="security_iam_service-with-iam-resource-based-policies-influxb"></a>

**Supports resource-based policies:** No 

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services.

To enable cross-account access, you can specify an entire account or IAM entities in another account as the principal in a resource-based policy. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

## Policy actions for Timestream for InfluxDB
<a name="security_iam_service-with-iam-id-based-policies-actions-influxb"></a>

**Supports policy actions:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Action` element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Include actions in a policy to grant permissions to perform the associated operation.



To see a list of Timestream for InfluxDB actions, see [Actions, resources and condition keys for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazontimestreaminfluxdb.html) in the *Service Authorization Reference*.

Policy actions in Timestream for InfluxDB use the following prefix before the action:

```
timestream-influxdb
```

To specify multiple actions in a single statement, separate them with commas.

```
"Action": [
    "timestream-influxdb:action1",
    "timestream-influxdb:action2"
]
```





You can specify multiple actions using wildcards (\*). For example, to specify all actions that begin with the word `Describe`, include the following action:

```
"Action": "timestream-influxdb:Describe*"
```
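The effect of such a wildcard can be illustrated with ordinary glob matching (a rough analogy; IAM's own matcher is not identical to shell globbing). The subset of action names below is assumed for illustration only:

```python
from fnmatch import fnmatchcase

# Hypothetical subset of Timestream for InfluxDB action names, for illustration.
actions = [
    "timestream-influxdb:GetDbInstance",
    "timestream-influxdb:ListDbInstances",
    "timestream-influxdb:CreateDbInstance",
]

# A policy action like "timestream-influxdb:Get*" selects every action
# whose name (after the service prefix) begins with "Get".
matched = [a for a in actions if fnmatchcase(a.split(":", 1)[1], "Get*")]
```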

## Policy resources for Timestream for InfluxDB
<a name="security_iam_service-with-iam-id-based-policies-resources-influxb"></a>

**Supports policy resources:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Resource` JSON policy element specifies the object or objects to which the action applies. As a best practice, specify a resource using its [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html). For actions that don't support resource-level permissions, use a wildcard (\*) to indicate that the statement applies to all resources.

```
"Resource": "*"
```

To see a list of Timestream for InfluxDB resource types and their ARNs, see [Resource types defined by Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazontimestreaminfluxdb.html#amazontimestreaminfluxdb-resources-for-iam-policies) in the *Service Authorization Reference*. To learn with which actions you can specify the ARN of each resource, see [Actions, resources and condition keys for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazontimestreaminfluxdb.html).
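As a quick illustration of the ARN format used in the policy examples above, a Timestream for InfluxDB instance ARN splits into the standard colon-delimited ARN fields (the account ID and instance name below are placeholders):

```python
arn = "arn:aws:timestream-influxdb:us-east-2:111122223333:db-instance/MyDbInstance"

# Standard ARN layout: arn:partition:service:region:account-id:resource
partition, service, region, account, resource = arn.split(":")[1:6]
```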





## Policy condition keys for Timestream for InfluxDB
<a name="security_iam_service-with-iam-id-based-policies-conditionkeys-influxb"></a>

**Supports service-specific policy condition keys:** No 

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Condition` element specifies when statements execute based on defined criteria. You can create conditional expressions that use [condition operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html), such as equals or less than, to match the condition in the policy with values in the request. To see all AWS global condition keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.

## Access control lists (ACLs) in Timestream for InfluxDB
<a name="security_iam_service-with-iam-acls-influxb"></a>

**Supports ACLs:** No 

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

## Attribute-based access control (ABAC) with Timestream for InfluxDB
<a name="security_iam_service-with-iam-tags-influxb"></a>

**Supports ABAC (tags in policies):** Yes

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes called tags. You can attach tags to IAM entities and AWS resources, then design ABAC policies to allow operations when the principal's tag matches the tag on the resource.

To control access based on tags, you provide tag information in the [condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys` condition keys.

If a service supports all three condition keys for every resource type, then the value is **Yes** for the service. If a service supports all three condition keys for only some resource types, then the value is **Partial**.

For more information about ABAC, see [Define permissions with ABAC authorization](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) in the *IAM User Guide*. To view a tutorial with steps for setting up ABAC, see [Use attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html) in the *IAM User Guide*.
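The core ABAC idea, allowing an operation when the principal's tag matches the tag on the resource, can be sketched as follows. This is an illustrative simplification that treats tags as plain dictionaries; real evaluation happens through the condition keys listed above.

```python
def abac_allows(principal_tags: dict, resource_tags: dict, key: str) -> bool:
    """Allow only when both principal and resource carry the tag key
    with equal values; a missing tag on either side denies access."""
    return (
        key in principal_tags
        and key in resource_tags
        and principal_tags[key] == resource_tags[key]
    )
```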

## Using temporary credentials with Timestream for InfluxDB
<a name="security_iam_service-with-iam-roles-tempcreds-influxb"></a>

**Supports temporary credentials:** Yes

Temporary credentials provide short-term access to AWS resources and are automatically created when you use federation or switch roles. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see [Temporary security credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) and [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Cross-service principal permissions for Timestream for InfluxDB
<a name="security_iam_service-with-iam-principal-permissions-influxb"></a>

**Supports forward access sessions (FAS):** Yes

 Forward access sessions (FAS) use the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. For policy details when making FAS requests, see [Forward access sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_forward_access_sessions.html). 

## Service roles for Timestream for InfluxDB
<a name="security_iam_service-with-iam-roles-service-influxb"></a>

**Supports service roles:** No 

 A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see [Create a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 

**Warning**  
Changing the permissions for a service role might break Timestream for InfluxDB functionality. Edit service roles only when Timestream for InfluxDB provides guidance to do so.

## Service-linked roles for Timestream for InfluxDB
<a name="security_iam_service-with-iam-roles-service-linked-influxb"></a>

**Supports service-linked roles:** Yes

 A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. 

For details about creating or managing service-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html). Find a service in the table that includes a `Yes` in the **Service-linked role** column. Choose the **Yes** link to view the service-linked role documentation for that service.

# Identity-based policy examples for Amazon Timestream for InfluxDB
<a name="security_iam_id-based-policy-examples-influxb"></a>

By default, users and roles don't have permission to create or modify Timestream for InfluxDB resources. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies.

To learn how to create an IAM identity-based policy by using these example JSON policy documents, see [Create IAM policies (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) in the *IAM User Guide*.

For details about actions and resource types defined by Timestream for InfluxDB, including the format of the ARNs for each of the resource types, see [Actions, resources, and condition Keys for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazontimestreaminfluxdb.html) in the *Service Authorization Reference*.

**Topics**
+ [Policy best practices](#security_iam_service-with-iam-policy-best-practices-influxb)
+ [Using the Timestream for InfluxDB console](#security_iam_id-based-policy-examples-console-influxb)
+ [Allow users to view their own permissions](#security_iam_id-based-policy-examples-view-own-permissions-influxb)
+ [Accessing one Amazon S3 bucket](#security_iam_id-based-policy-examples-access-one-bucket)
+ [Allowing all operations](#security_iam_id-based-policy-examples-common-operations.all-influxdb)
+ [Create, describe, delete and update a DB instance](#security_iam_id-based-policy-examples-common-operations.cddd-influxdb)

## Policy best practices
<a name="security_iam_service-with-iam-policy-best-practices-influxb"></a>

Identity-based policies determine whether someone can create, access, or delete Timestream for InfluxDB resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
+ **Get started with AWS managed policies and move toward least-privilege permissions** – To get started granting permissions to your users and workloads, use the *AWS managed policies* that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) or [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.
+ **Apply least-privilege permissions** – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as *least-privilege permissions*. For more information about using IAM to apply permissions, see [ Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in the *IAM User Guide*.
+ **Use conditions in IAM policies to further restrict access** – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as CloudFormation. For more information, see [ IAM JSON policy elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.
+ **Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions** – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see [Validate policies with IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*.
+ **Require multi-factor authentication (MFA)** – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see [ Secure API access with MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*.

For more information about best practices in IAM, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

## Using the Timestream for InfluxDB console
<a name="security_iam_id-based-policy-examples-console-influxb"></a>

To access the Amazon Timestream for InfluxDB console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the Timestream for InfluxDB resources in your AWS account. If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (users or roles) with that policy.

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the AWS API. Instead, allow access to only the actions that match the API operation that they're trying to perform.

To ensure that users and roles can still use the Timestream for InfluxDB console, also attach the Timestream for InfluxDB `ConsoleAccess` or `ReadOnly` AWS managed policy to the entities. For more information, see [Adding permissions to a user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

## Allow users to view their own permissions
<a name="security_iam_id-based-policy-examples-view-own-permissions-influxb"></a>

This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "ViewOwnUserInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetUserPolicy",
                "iam:ListGroupsForUser",
                "iam:ListAttachedUserPolicies",
                "iam:ListUserPolicies",
                "iam:GetUser"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        },
        {
            "Sid": "NavigateInConsole",
            "Effect": "Allow",
            "Action": [
                "iam:GetGroupPolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListAttachedGroupPolicies",
                "iam:ListGroupPolicies",
                "iam:ListPolicyVersions",
                "iam:ListPolicies",
                "iam:ListUsers"
            ],
            "Resource": "*"
        }
    ]
}
```
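
The `${aws:username}` entry in the `Resource` element is an IAM policy variable that IAM resolves at request time to the name of the requesting user, so a single policy scopes each user to their own identity. The following sketch (not IAM's actual implementation; the account ID `111122223333` is a placeholder) illustrates the substitution:

```python
# Sketch of how IAM resolves the ${aws:username} policy variable at
# request time: the placeholder in the Resource ARN is replaced with
# the name of the user making the request.
def resolve_policy_variables(resource_arn: str, context: dict) -> str:
    for key, value in context.items():
        resource_arn = resource_arn.replace("${" + key + "}", value)
    return resource_arn

resource = "arn:aws:iam::111122223333:user/${aws:username}"
resolved = resolve_policy_variables(resource, {"aws:username": "mateojackson"})
print(resolved)  # arn:aws:iam::111122223333:user/mateojackson
```

Because the variable resolves per request, the same `ViewOwnUserInfo` statement grants each user access only to their own user resource.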

## Accessing one Amazon S3 bucket
<a name="security_iam_id-based-policy-examples-access-one-bucket"></a>

In this example, you want to grant an IAM user in your AWS account access to one of your Amazon S3 buckets, `amzn-s3-demo-bucket`. You also want to allow the user to add, update, and delete objects.

In addition to granting the `s3:PutObject`, `s3:GetObject`, and `s3:DeleteObject` permissions to the user, the policy also grants the `s3:ListAllMyBuckets`, `s3:GetBucketLocation`, and `s3:ListBucket` permissions. These are the additional permissions required by the console. Also, the `s3:PutObjectAcl` and the `s3:GetObjectAcl` actions are required to be able to copy, cut, and paste objects in the console. For an example walkthrough that grants permissions to users and tests them using the console, see [An example walkthrough: Using user policies to control access to your bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/walkthrough1.html).

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Sid":"ListBucketsInConsole",
         "Effect":"Allow",
         "Action":[
            "s3:ListAllMyBuckets"
         ],
         "Resource":"arn:aws:s3:::*"
      },
      {
         "Sid":"ViewSpecificBucketInfo",
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket"
      },
      {
         "Sid":"ManageBucketContents",
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:GetObject",
            "s3:GetObjectAcl",
            "s3:DeleteObject"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-bucket/*"
      }
   ]
}
```

------
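
A detail worth noting in the policy above: bucket-level actions such as `s3:ListBucket` are granted on the bucket ARN itself, while object-level actions are granted on `amzn-s3-demo-bucket/*`. The following simplified matcher (wildcard matching of `Allow` statements only; real IAM evaluation also handles `Deny` statements, conditions, and policy variables) illustrates why both ARN forms are needed:

```python
import fnmatch

# The Allow statements from the example policy, reduced to actions and resources.
policy_statements = [
    {"Action": ["s3:ListAllMyBuckets"], "Resource": ["arn:aws:s3:::*"]},
    {"Action": ["s3:ListBucket", "s3:GetBucketLocation"],
     "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket"]},
    {"Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject",
                "s3:GetObjectAcl", "s3:DeleteObject"],
     "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket/*"]},
]

def is_allowed(action: str, resource: str) -> bool:
    # Simplified: checks whether any Allow statement matches both the
    # action and the resource, using shell-style wildcards.
    return any(
        any(fnmatch.fnmatch(action, a) for a in stmt["Action"])
        and any(fnmatch.fnmatch(resource, r) for r in stmt["Resource"])
        for stmt in policy_statements
    )

# Object-level action on an object ARN: matched by the /* resource.
print(is_allowed("s3:PutObject", "arn:aws:s3:::amzn-s3-demo-bucket/report.csv"))  # True
# The same action on the bucket ARN itself: not matched by any statement.
print(is_allowed("s3:PutObject", "arn:aws:s3:::amzn-s3-demo-bucket"))  # False
```

If the `/*` suffix were omitted, object uploads would be denied even though the bucket itself is listable.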

## Allowing all operations
<a name="security_iam_id-based-policy-examples-common-operations.all-influxdb"></a>

The following is a sample policy that allows all operations in Timestream for InfluxDB.

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "timestream-influxdb:*"
            ],
            "Resource": "*"
        }
    ]
}
```

------
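
The `timestream-influxdb:*` wildcard matches every current and future action in the service's namespace, while leaving actions of other services untouched. A quick sketch of the matching (action names beyond those shown in this guide are illustrative):

```python
import fnmatch

# The Action value from the allow-all policy above.
allowed_pattern = "timestream-influxdb:*"

# The wildcard covers any action in the timestream-influxdb namespace...
print(fnmatch.fnmatch("timestream-influxdb:CreateDbInstance", allowed_pattern))  # True
print(fnmatch.fnmatch("timestream-influxdb:DeleteDbInstance", allowed_pattern))  # True
# ...but grants nothing in other services' namespaces.
print(fnmatch.fnmatch("s3:GetObject", allowed_pattern))  # False
```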

## Create, describe, delete, and update a DB instance
<a name="security_iam_id-based-policy-examples-common-operations.cddd-influxdb"></a>

The following sample policy allows a user to create, describe, delete, and update the DB instance `MyDbInstance`:

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "timestream-influxdb:CreateDbInstance",
                "timestream-influxdb:GetDbInstance",
                "timestream-influxdb:DeleteDbInstance",
                "timestream-influxdb:UpdateDbInstance"
            ],
            "Resource": "arn:aws:timestream-influxdb:us-east-2:111122223333:db-instance/MyDbInstance"
        }
    ]
}
```

------
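
The `Resource` ARN above follows the pattern `arn:aws:timestream-influxdb:region:account-id:db-instance/name`. A small helper to build such ARNs (the region, account ID, and instance name are placeholders mirroring the example policy; in practice the resource segment may be a system-assigned DB instance identifier rather than the display name):

```python
def db_instance_arn(region: str, account_id: str, instance_name: str) -> str:
    # Builds a Timestream for InfluxDB DB instance ARN in the pattern used
    # by the Resource element of the policy above.
    return (f"arn:aws:timestream-influxdb:{region}:{account_id}"
            f":db-instance/{instance_name}")

print(db_instance_arn("us-east-2", "111122223333", "MyDbInstance"))
# arn:aws:timestream-influxdb:us-east-2:111122223333:db-instance/MyDbInstance
```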

# Troubleshooting Amazon Timestream for InfluxDB identity and access
<a name="security_iam_troubleshoot-influxdb"></a>

Use the following information to help you diagnose and fix common issues that you might encounter when working with Timestream for InfluxDB and IAM.

**Topics**
+ [I am not authorized to perform an action in Timestream for InfluxDB](#security_iam_troubleshoot-no-permissions-influxdb)
+ [I want to allow people outside of my AWS account to access my Timestream for InfluxDB resources](#security_iam_troubleshoot-cross-account-access-influxdb)

## I am not authorized to perform an action in Timestream for InfluxDB
<a name="security_iam_troubleshoot-no-permissions-influxdb"></a>

If the AWS Management Console tells you that you're not authorized to perform an action, then you must contact your administrator for assistance. Your administrator is the person who provided you with your user name and password.

The following example error occurs when the `mateojackson` user tries to use the console to view details about a fictional `my-example-widget` resource but does not have the fictional `timestream-influxdb:GetWidget` permission.

```
User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: timestream-influxdb:GetWidget on resource: my-example-widget
```

In this case, Mateo asks his administrator to update his policies to allow him to access the `my-example-widget` resource using the `timestream-influxdb:GetWidget` action.

## I want to allow people outside of my AWS account to access my Timestream for InfluxDB resources
<a name="security_iam_troubleshoot-cross-account-access-influxdb"></a>

You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources.

To learn more, consult the following:
+ [Controlling access to a DB instance in a VPC](timestream-for-influxdb-controlling-access.md)
+ To learn whether Timestream for InfluxDB supports these features, see [How Amazon Timestream for InfluxDB works with IAM](https://docs.aws.amazon.com/timestream/latest/developerguide/security_iam_service-with-iam-influxb.html).
+ To learn how to provide access to your resources across AWS accounts that you own, see [Providing access to an IAM user in another AWS account that you own](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html) in the *IAM User Guide*. 
+ To learn how to provide access to your resources to third-party AWS accounts, see [Providing access to AWS accounts owned by third parties](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html) in the *IAM User Guide*. 
+ To learn how to provide access through identity federation, see [Providing access to externally authenticated users (identity federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html) in the *IAM User Guide*. 
+ To learn the difference between using roles and resource-based policies for cross-account access, see [How IAM roles differ from resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*. 

# Controlling access to a DB instance in a VPC
<a name="timestream-for-influxdb-controlling-access"></a>

Using Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources, such as Amazon Timestream for InfluxDB DB instances, into a virtual private cloud (VPC). When you use Amazon VPC, you have control over your virtual networking environment. You can choose your own IP address range, create subnets, and configure routing and access control lists.

A VPC security group controls access to DB instances inside a VPC. Each VPC security group rule enables a specific source to access a DB instance in a VPC that is associated with that VPC security group. The source can be a range of addresses (for example, 203.0.113.0/24), or another VPC security group. By specifying a VPC security group as the source, you allow incoming traffic from all instances (typically application servers) that use the source VPC security group. Before attempting to connect to your DB instance, configure your VPC for your use case. The following are common scenarios for accessing a DB instance in a VPC: 

**A DB instance in a VPC accessed by an Amazon EC2 instance in the same VPC**  
A common use of a DB instance in a VPC is to share data with an application server that is running in an EC2 instance in the same VPC. The EC2 instance might run a web server with an application that interacts with the DB instance.

**A DB instance in a VPC accessed by an EC2 instance in a different VPC**  
In some cases, your DB instance is in a different VPC from the EC2 instance that you're using to access it. If so, you can use VPC peering to access the DB instance.

**A DB instance in a VPC accessed by a client application through the internet**  
To access a DB instance in a VPC from a client application through the internet, you configure a VPC with a single public subnet and use the public subnets to create the DB instance. You also configure an internet gateway in the VPC to enable communication over the internet. To connect to a DB instance from outside of its VPC, the DB instance must be publicly accessible. Also, access must be granted using the inbound rules of the DB instance's security group, and other requirements must be met.

For more information on VPC security groups, see [Control traffic to your AWS resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) in the *Amazon Virtual Private Cloud User Guide*. 

For details on how to connect to a Timestream for InfluxDB DB instance, see [Connecting to an Amazon Timestream for InfluxDB DB instance](timestream-for-influx-db-connecting.md). 

## Security group scenario
<a name="Overview.SecurityGroups.Scenarios"></a>

A common use of a DB instance in a VPC is to share data with an application server running in an Amazon EC2 instance in the same VPC, which is accessed by a client application outside the VPC. For this scenario, you use the Timestream for InfluxDB and VPC pages on the AWS Management Console or the Timestream for InfluxDB and EC2 API operations to create the necessary instances and security groups: 

1. Create a VPC security group (for example, `sg-0123ec2example`) and define inbound rules that use the IP addresses of the client application as the source. This security group allows your client application to connect to EC2 instances in a VPC that uses this security group.

1. Create an EC2 instance for the application and add the EC2 instance to the VPC security group (`sg-0123ec2example`) that you created in the previous step.

1. Create a second VPC security group (for example, `sg-6789rdsexample`) and create a new rule by specifying the VPC security group that you created in step 1 (`sg-0123ec2example`) as the source.

1. Create a new DB instance and add the DB instance to the VPC security group (`sg-6789rdsexample`) that you created in the previous step. When you create the DB, use the same port number as the one specified for the VPC security group (`sg-6789rdsexample`) rule that you created in step 3.
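
Step 3 above is the key step: instead of an IP range, the DB security group's inbound rule names the application security group as its source. As a rough sketch, the two rule specifications look like this in the shape that EC2's `AuthorizeSecurityGroupIngress` operation expects (group IDs are the placeholders from the scenario; 8086 is InfluxDB's default port, and the 443 client port is an assumption for a web application):

```python
# Inbound rule specifications for the scenario above.
client_cidr = "203.0.113.0/24"

# Step 1: client IP range -> application security group.
app_sg_rule = {
    "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
    "IpRanges": [{"CidrIp": client_cidr}],
}

# Step 3: application security group -> DB security group.
# The source is another security group, not an IP range, so any instance
# that uses sg-0123ec2example can reach the DB instance on this port.
db_sg_rule = {
    "IpProtocol": "tcp", "FromPort": 8086, "ToPort": 8086,
    "UserIdGroupPairs": [{"GroupId": "sg-0123ec2example"}],
}

print(db_sg_rule["UserIdGroupPairs"][0]["GroupId"])  # sg-0123ec2example
```

Using a security group as the source means you never have to update the DB rule when application servers are added, replaced, or change IP addresses.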

## Creating a VPC security group
<a name="Overview.SecurityGroups.Create"></a>

You can create a VPC security group for a DB instance by using the VPC console. For information about creating a security group, see [Create a security group for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/creating-security-groups.html) in the *Amazon Virtual Private Cloud User Guide*.

## Associating a security group with a DB instance
<a name="Overview.SecurityGroups.Associate"></a>

After a Timestream for InfluxDB DB instance has been created, you cannot associate it with new security groups, because changes to these configurations are not currently supported.

# Using service-linked roles for Amazon Timestream for InfluxDB
<a name="using-service-linked-roles"></a>

Amazon Timestream for InfluxDB uses AWS Identity and Access Management (IAM) [service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role). A service-linked role is a unique type of IAM role that is linked directly to an AWS service, such as Amazon Timestream for InfluxDB. Amazon Timestream for InfluxDB service-linked roles are predefined by Amazon Timestream for InfluxDB. They include all the permissions that the service requires to call AWS services on behalf of your DB instances.

A service-linked role makes setting up Amazon Timestream for InfluxDB easier because you don’t have to manually add the necessary permissions. The roles already exist within your AWS account but are linked to Amazon Timestream for InfluxDB use cases and have predefined permissions. Only Amazon Timestream for InfluxDB can assume these roles, and only these roles can use the predefined permissions policy. You can delete the roles only after first deleting their related resources. This protects your Amazon Timestream for InfluxDB resources because you can't inadvertently remove necessary permissions to access the resources.

For information about other services that support service-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service-Linked Role** column. Choose a **Yes** with a link to view the service-linked role documentation for that service.

**Contents**
+ [Service-Linked Role Permissions](#service-linked-role-permissions)
+ [Creating a Service-Linked Role (IAM)](#create-service-linked-role-iam)
+ [Editing a Service-Linked Role Description](#edit-service-linked-role)
  + [Using the IAM Console](#edit-service-linked-role-iam-console)
  + [Using the IAM CLI](#edit-service-linked-role-iam-cli)
  + [Using the IAM API](#edit-service-linked-role-iam-api)
+ [Deleting a Service-Linked Role for Amazon Timestream for InfluxDB](#delete-service-linked-role)
  + [Cleaning Up a Service-Linked Role](#service-linked-role-review-before-delete)
  + [Deleting a Service-Linked Role (IAM Console)](#delete-service-linked-role-iam-console)
  + [Deleting a Service-Linked Role (IAM CLI)](#delete-service-linked-role-iam-cli)
  + [Deleting a Service-Linked Role (IAM API)](#delete-service-linked-role-iam-api)
+ [Supported Regions for Amazon Timestream for InfluxDB Service-Linked Roles](#supported-regions)

## Service-Linked Role Permissions for Amazon Timestream for InfluxDB
<a name="service-linked-role-permissions"></a>

Amazon Timestream for InfluxDB uses the service-linked role policy named **AmazonTimestreamInfluxDBServiceRolePolicy**. This policy allows Timestream for InfluxDB to manage AWS resources on your behalf as necessary for managing your clusters.

The AmazonTimestreamInfluxDBServiceRolePolicy service-linked role permissions policy allows Amazon Timestream for InfluxDB to complete the following actions on the specified resources:

------
#### [ JSON ]


```
{
	"Version":"2012-10-17",		 	 	 
	"Statement": [
		{
			"Sid": "DescribeNetworkStatement",
			"Effect": "Allow",
			"Action": [
				"ec2:DescribeSubnets",
				"ec2:DescribeVpcs",
				"ec2:DescribeNetworkInterfaces"
			],
			"Resource": "*"
		},
		{
			"Sid": "CreateEniInSubnetStatement",
			"Effect": "Allow",
			"Action": [
				"ec2:CreateNetworkInterface"
			],
			"Resource": [
				"arn:aws:ec2:*:*:subnet/*",
				"arn:aws:ec2:*:*:security-group/*"
			]
		},
		{
			"Sid": "CreateEniStatement",
			"Effect": "Allow",
			"Action": [
				"ec2:CreateNetworkInterface"
			],
			"Resource": "arn:aws:ec2:*:*:network-interface/*",
			"Condition": {
				"Null": {
					"aws:RequestTag/AmazonTimestreamInfluxDBManaged": "false"
				}
			}
		},
		{
			"Sid": "CreateTagWithEniStatement",
			"Effect": "Allow",
			"Action": [
				"ec2:CreateTags"
			],
			"Resource": "arn:aws:ec2:*:*:network-interface/*",
			"Condition": {
				"Null": {
					"aws:RequestTag/AmazonTimestreamInfluxDBManaged": "false"
				},
				"StringEquals": {
					"ec2:CreateAction": [
						"CreateNetworkInterface"
					]
				}
			}
		},
		{
			"Sid": "ManageEniStatement",
			"Effect": "Allow",
			"Action": [
				"ec2:CreateNetworkInterfacePermission",
				"ec2:DeleteNetworkInterface"
			],
			"Resource": "arn:aws:ec2:*:*:network-interface/*",
			"Condition": {
				"Null": {
					"aws:ResourceTag/AmazonTimestreamInfluxDBManaged": "false"
				}
			}
		},
		{
			"Sid": "PutCloudWatchMetricsStatement",
			"Effect": "Allow",
			"Action": [
				"cloudwatch:PutMetricData"
			],
			"Condition": {
				"StringEquals": {
					"cloudwatch:namespace": [
						"AWS/Timestream/InfluxDB",
						"AWS/Usage"
					]
				}
			},
			"Resource": [
				"*"
			]
		},
		{
			"Sid": "ManageSecretStatement",
			"Effect": "Allow",
			"Action": [
				"secretsmanager:CreateSecret",
				"secretsmanager:DeleteSecret"
			],
			"Resource": [
				"arn:aws:secretsmanager:*:*:secret:READONLY-InfluxDB-auth-parameters-*"
			],
			"Condition": {
				"StringEquals": {
					"aws:ResourceAccount": "${aws:PrincipalAccount}"
				}
			}
		}
	]
}
```

------
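
The `"Null": {"aws:RequestTag/AmazonTimestreamInfluxDBManaged": "false"}` conditions in the policy above are easy to misread: a value of `"false"` means the key is *not* null, that is, the tag must be present on the request for the statement to apply. A sketch of the evaluation rule:

```python
def null_condition_passes(condition_value: str, key_present: bool) -> bool:
    # IAM's Null condition operator: "true" matches when the key is absent
    # from the request, "false" matches when the key is present.
    if condition_value == "true":
        return not key_present
    return key_present

# The policy uses "false", so the AmazonTimestreamInfluxDBManaged tag must
# be present for network-interface creation and tagging to be allowed.
print(null_condition_passes("false", key_present=True))   # True
print(null_condition_passes("false", key_present=False))  # False
```

This restricts the service role to network interfaces that carry the service's own management tag, rather than arbitrary interfaces in your account.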

**To allow an IAM entity to create AmazonTimestreamInfluxDBServiceRolePolicy service-linked roles**

Add the following policy statement to the permissions for that IAM entity:

```
{
    "Effect": "Allow",
    "Action": [
        "iam:CreateServiceLinkedRole",
        "iam:PutRolePolicy"
    ],
    "Resource": "arn:aws:iam::*:role/aws-service-role/timestreamforinfluxdb.amazonaws.com/AmazonTimestreamInfluxDBServiceRolePolicy*",
    "Condition": {"StringLike": {"iam:AWSServiceName": "timestreamforinfluxdb.amazonaws.com"}}
}
```

**To allow an IAM entity to delete AmazonTimestreamInfluxDBServiceRolePolicy service-linked roles**

Add the following policy statement to the permissions for that IAM entity:

```
{
    "Effect": "Allow",
    "Action": [
        "iam:DeleteServiceLinkedRole",
        "iam:GetServiceLinkedRoleDeletionStatus"
    ],
    "Resource": "arn:aws:iam::*:role/aws-service-role/timestreamforinfluxdb.amazonaws.com/AmazonTimestreamInfluxDBServiceRolePolicy*",
    "Condition": {"StringLike": {"iam:AWSServiceName": "timestreamforinfluxdb.amazonaws.com"}}
}
```

Alternatively, you can use an AWS managed policy to provide full access to Amazon Timestream for InfluxDB.

## Creating a Service-Linked Role (IAM)
<a name="create-service-linked-role-iam"></a>

You don't need to manually create a service-linked role. When you create a DB instance, Amazon Timestream for InfluxDB creates the service-linked role for you.

If you delete this service-linked role, and then need to create it again, you can use the same process to recreate the role in your account. When you create a DB instance, Amazon Timestream for InfluxDB creates the service-linked role for you again.

## Editing the Description of a Service-Linked Role for Amazon Timestream for InfluxDB
<a name="edit-service-linked-role"></a>

Amazon Timestream for InfluxDB does not allow you to edit the AmazonTimestreamInfluxDBServiceRolePolicy service-linked role. After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM.

### Editing a Service-Linked Role Description (IAM Console)
<a name="edit-service-linked-role-iam-console"></a>

You can use the IAM console to edit a service-linked role description.

**To edit the description of a service-linked role (console)**

1. In the left navigation pane of the IAM console, choose **Roles**.

1. Choose the name of the role to modify.

1. To the far right of **Role description**, choose **Edit**. 

1. Enter a new description in the box and choose **Save**.

### Editing a Service-Linked Role Description (IAM CLI)
<a name="edit-service-linked-role-iam-cli"></a>

You can use IAM operations from the AWS Command Line Interface to edit a service-linked role description.

**To change the description of a service-linked role (CLI)**

1. (Optional) To view the current description for a role, use the [get-role](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/get-role.html) AWS CLI command for IAM.  
**Example**  

   ```
   $ aws iam get-role --role-name AmazonTimestreamInfluxDBServiceRolePolicy
   ```

   Use the role name, not the ARN, to refer to roles with the CLI operations. For example, if a role has the following ARN: `arn:aws:iam::123456789012:role/myrole`, refer to the role as **myrole**.

1. To update a service-linked role's description, use the [update-role-description](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/update-role-description.html) AWS CLI command for IAM.

   **Linux and MacOS**

   ```
   $ aws iam update-role-description \
       --role-name AmazonTimestreamInfluxDBServiceRolePolicy \
       --description "new description"
   ```

   **Windows**

   ```
   $ aws iam update-role-description ^
       --role-name AmazonTimestreamInfluxDBServiceRolePolicy ^
       --description "new description"
   ```

### Editing a Service-Linked Role Description (IAM API)
<a name="edit-service-linked-role-iam-api"></a>

You can use the IAM API to edit a service-linked role description.

**To change the description of a service-linked role (API)**

1. (Optional) To view the current description for a role, use the IAM API operation [GetRole](https://docs.aws.amazon.com/IAM/latest/APIReference/API_GetRole.html).  
**Example**  

   ```
   https://iam.amazonaws.com/
      ?Action=GetRole
      &RoleName=AmazonTimestreamInfluxDBServiceRolePolicy
      &Version=2010-05-08
      &AUTHPARAMS
   ```

1. To update a role's description, use the IAM API operation [UpdateRoleDescription](https://docs.aws.amazon.com/IAM/latest/APIReference/API_UpdateRoleDescription.html).  
**Example**  

   ```
   https://iam.amazonaws.com/
      ?Action=UpdateRoleDescription
      &RoleName=AmazonTimestreamInfluxDBServiceRolePolicy
      &Version=2010-05-08
      &Description="New description"
   ```

## Deleting a Service-Linked Role for Amazon Timestream for InfluxDB
<a name="delete-service-linked-role"></a>

If you no longer need to use a feature or service that requires a service-linked role, we recommend that you delete that role. That way you don’t have an unused entity that is not actively monitored or maintained. However, you must clean up your service-linked role before you can delete it.

Amazon Timestream for InfluxDB does not delete the service-linked role for you.

### Cleaning Up a Service-Linked Role
<a name="service-linked-role-review-before-delete"></a>

Before you can use IAM to delete a service-linked role, first confirm that the role has no resources (DB instances) associated with it.

**To check whether the service-linked role has an active session in the IAM console**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane of the IAM console, choose **Roles**. Then choose the name (not the check box) of the AmazonTimestreamInfluxDBServiceRolePolicy role.

1. On the **Summary** page for the selected role, choose the **Access Advisor** tab.

1. On the **Access Advisor** tab, review recent activity for the service-linked role.

### Deleting a Service-Linked Role (IAM Console)
<a name="delete-service-linked-role-iam-console"></a>

You can use the IAM console to delete a service-linked role.

**To delete a service-linked role (console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane of the IAM console, choose **Roles**. Then select the check box next to the role name that you want to delete, not the name or row itself. 

1. For **Role actions** at the top of the page, choose **Delete role**.

1. In the confirmation page, review the service last accessed data, which shows when each of the selected roles last accessed an AWS service. This helps you to confirm whether the role is currently active. If you want to proceed, choose **Yes, Delete** to submit the service-linked role for deletion.

1. Watch the IAM console notifications to monitor the progress of the service-linked role deletion. Because the IAM service-linked role deletion is asynchronous, after you submit the role for deletion, the deletion task can succeed or fail. If the task fails, you can choose **View details** or **View Resources** from the notifications to learn why the deletion failed.

### Deleting a Service-Linked Role (IAM CLI)
<a name="delete-service-linked-role-iam-cli"></a>

You can use IAM operations from the AWS Command Line Interface to delete a service-linked role.

**To delete a service-linked role (CLI)**

1. To view the details of the service-linked role that you want to delete, including its Amazon Resource Name (ARN), enter the following command.

   ```
   $ aws iam get-role --role-name role-name
   ```

   Use the role name, not the ARN, to refer to roles with the CLI operations. For example, if a role has the ARN `arn:aws:iam::123456789012:role/myrole`, you refer to the role as **myrole**.

1. Because a service-linked role cannot be deleted if it is being used or has associated resources, you must submit a deletion request with the [delete-service-linked-role](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/delete-service-linked-role.html) command. That request can be denied if these conditions are not met. You must capture the `deletion-task-id` from the response to check the status of the deletion task. Enter the following to submit a service-linked role deletion request.

   ```
   $ aws iam delete-service-linked-role --role-name role-name
   ```

1. Run the [get-service-linked-role-deletion-status](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/get-service-linked-role-deletion-status.html) command to check the status of the deletion task.

   ```
   $ aws iam get-service-linked-role-deletion-status --deletion-task-id deletion-task-id
   ```

   The status of the deletion task can be `NOT_STARTED`, `IN_PROGRESS`, `SUCCEEDED`, or `FAILED`. If the deletion fails, the call returns the reason that it failed so that you can troubleshoot.

### Deleting a Service-Linked Role (IAM API)
<a name="delete-service-linked-role-iam-api"></a>

You can use the IAM API to delete a service-linked role.

**To delete a service-linked role (API)**

1. To submit a deletion request for a service-linked role, call [DeleteServiceLinkedRole](https://docs.aws.amazon.com/IAM/latest/APIReference/API_DeleteServiceLinkedRole.html). In the request, specify a role name.

   Because a service-linked role cannot be deleted if it is being used or has associated resources, you must submit a deletion request. That request can be denied if these conditions are not met. You must capture the `DeletionTaskId` from the response to check the status of the deletion task.

1. To check the status of the deletion, call [GetServiceLinkedRoleDeletionStatus](https://docs.aws.amazon.com/IAM/latest/APIReference/API_GetServiceLinkedRoleDeletionStatus.html). In the request, specify the `DeletionTaskId`.

   The status of the deletion task can be `NOT_STARTED`, `IN_PROGRESS`, `SUCCEEDED`, or `FAILED`. If the deletion fails, the call returns the reason that it failed so that you can troubleshoot.

## Supported Regions for Amazon Timestream for InfluxDB Service-Linked Roles
<a name="supported-regions"></a>

Amazon Timestream for InfluxDB supports using service-linked roles in all of the Regions where the service is available. For more information, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).

# AWS managed policies for Amazon Timestream for InfluxDB
<a name="security-iam-awsmanpol-influxdb"></a>

To add permissions to users, groups, and roles, it is easier to use AWS managed policies than to write policies yourself. It takes time and expertise to [create IAM customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) that provide your team with only the permissions they need. To get started quickly, you can use our AWS managed policies. These policies cover common use cases and are available in your AWS account. For more information about AWS managed policies, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add additional permissions to an AWS managed policy to support new features. This type of update affects all identities (users, groups, and roles) where the policy is attached. Services are most likely to update an AWS managed policy when a new feature is launched or when new operations become available. Services do not remove permissions from an AWS managed policy, so policy updates won't break your existing permissions.

Additionally, AWS supports managed policies for job functions that span multiple services. For example, the **ReadOnlyAccess** AWS managed policy provides read-only access to all AWS services and resources. When a service launches a new feature, AWS adds read-only permissions for new operations and resources. For a list and descriptions of job function policies, see [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.


## AWS managed policy: AmazonTimestreamInfluxDBServiceRolePolicy
<a name="security-iam-awsmanpol-timestreamforinfluxdbServiceRolePolicy"></a>


You cannot attach the `AmazonTimestreamInfluxDBServiceRolePolicy` AWS managed policy to identities in your account. This policy is part of the Amazon Timestream for InfluxDB service-linked role, which allows the service to manage network interfaces and security groups in your account. 



Timestream for InfluxDB uses the permissions in this policy to manage EC2 security groups and network interfaces. This is required to manage Timestream for InfluxDB DB instances.


To review this policy in JSON format, see [AmazonTimestreamInfluxDBServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonTimestreamInfluxDBServiceRolePolicy.html).

## AWS managed policies for Amazon Timestream for InfluxDB
<a name="iam.identitybasedpolicies.predefinedpolicies"></a>

AWS addresses many common use cases by providing standalone IAM policies that are created and administered by AWS. Managed policies grant necessary permissions for common use cases so you can avoid having to investigate what permissions are needed. For more information, see [AWS Managed Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*. 

The following AWS managed policies, which you can attach to users in your account, are specific to Timestream for InfluxDB:

### AmazonTimestreamInfluxDBFullAccess
<a name="iam.identitybasedpolicies.predefinedpolicies-fullaccess"></a>

You can attach the `AmazonTimestreamInfluxDBFullAccess` policy to your IAM identities. This policy grants administrative permissions that allow full access to all Timestream for InfluxDB resources. 

You can also create your own custom IAM policies to allow permissions for Amazon Timestream for InfluxDB API actions. You can attach these custom policies to the IAM users or groups that require those permissions. 

To review this policy in JSON format, see [AmazonTimestreamInfluxDBFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonTimestreamInfluxDBFullAccess.html).

### AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess
<a name="iam.identitybasedpolicies.predefinedpolicies-fullaccess-without-marketplace-access"></a>

You can attach the `AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess` policy to your IAM identities. This policy grants administrative permissions that allow full access to all Timestream for InfluxDB resources, excluding any marketplace-related actions.

You can also create your own custom IAM policies to allow permissions for Timestream for InfluxDB API actions. You can attach these custom policies to the IAM users or groups that require those permissions.

To review this policy in JSON format, see [AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess.html).


## Timestream for InfluxDB updates to AWS managed policies
<a name="security-iam-awsmanpol-updates"></a>



View details about updates to AWS managed policies for Timestream for InfluxDB since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the Timestream for InfluxDB Document history page.

| Change | Description | Date | 
| --- | --- | --- | 
|  [AmazonTimestreamInfluxDBFullAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess) – Update to an existing policy  |  Amazon Timestream for InfluxDB has added the `RebootDbInstance` and `RebootDbCluster` actions to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy for rebooting Amazon Timestream InfluxDB resources.  | 12/17/2025 | 
|  [AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess-without-marketplace-access) – Update to an existing policy  |  Amazon Timestream for InfluxDB has added the `RebootDbInstance` and `RebootDbCluster` actions to the existing `AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess` managed policy for rebooting Amazon Timestream InfluxDB resources.  | 12/17/2025 | 
|  [AmazonTimestreamInfluxDBFullAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess) – Update to an existing policy  |  Amazon Timestream for InfluxDB has added the `ec2:DescribeVpcEndpoints` action to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy for describing the VPC endpoints.  | 11/13/2025 | 
|  [AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess-without-marketplace-access) – Update to an existing policy  |  Amazon Timestream for InfluxDB has added the `ec2:DescribeVpcEndpoints` action to the existing `AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess` managed policy for describing the VPC endpoints.  | 11/13/2025 | 
|  [AmazonTimestreamInfluxDBFullAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess) – Update to an existing policy  |  Amazon Timestream for InfluxDB updated the existing `AmazonTimestreamInfluxDBFullAccess` managed policy to add a marketplace product ID that supports subscription to InfluxDB enterprise marketplace offerings for Timestream for InfluxDB cluster resources.  | 10/17/2025 | 
|  [AmazonTimestreamInfluxDBFullAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess) – Update to an existing policy  |  Amazon Timestream for InfluxDB updated the existing `AmazonTimestreamInfluxDBFullAccess` managed policy to add the permissions needed to access Marketplace APIs for managing the subscription required to create and update Timestream for InfluxDB cluster resources.  | 4/16/2025 | 
|  [AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess-without-marketplace-access) – New policy  |  Amazon Timestream for InfluxDB added a new policy to provide administrative access to manage Amazon Timestream for InfluxDB instances and parameter groups except marketplace operations.  | 04/16/2025 | 
|  [AmazonTimestreamInfluxDBFullAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess) – Update to an existing policy  |  Amazon Timestream for InfluxDB updated the existing managed policy `AmazonTimestreamInfluxDBFullAccess` to also provide full administrative access to create, update, delete, and list Amazon Timestream InfluxDB clusters.  | 2/17/2025 | 
|  [AmazonTimestreamInfluxDBFullAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess) – Update to an existing policy  |  Added the `ec2:DescribeRouteTables` action to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy. This action is used for describing your route tables.  | 10/08/2024 | 
|  [AWS managed policy: AmazonTimestreamInfluxDBServiceRolePolicy](#security-iam-awsmanpol-timestreamforinfluxdbServiceRolePolicy) – New policy  |  Amazon Timestream for InfluxDB added a new policy that allows the service to manage network interfaces and security groups in your account.  | 03/14/2024 | 
|  [AmazonTimestreamInfluxDBFullAccess](#iam.identitybasedpolicies.predefinedpolicies-fullaccess) – New policy  |  Amazon Timestream for InfluxDB added a new policy to provide full administrative access to create, update, delete and list Amazon Timestream InfluxDB instances and create and list parameter groups.  | 03/14/2024 | 

# Connecting to Timestream for InfluxDB through a VPC endpoint
<a name="timestream-influxdb-vpc-endpoint"></a>

You can connect directly to Timestream for InfluxDB through a private interface endpoint in your virtual private cloud (VPC). When you use an interface VPC endpoint, communication between your VPC and Timestream for InfluxDB is conducted entirely within the AWS network.

Timestream for InfluxDB supports Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/). Each VPC endpoint is represented by one or more [Elastic Network Interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) (ENIs) with private IP addresses in your VPC subnets. 

The interface VPC endpoint connects your VPC directly to Timestream for InfluxDB without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. The instances in your VPC do not need public IP addresses to communicate with Timestream for InfluxDB. <a name="vpc-regions"></a>

**Regions**  
Timestream for InfluxDB supports VPC endpoints and VPC endpoint policies in all AWS Regions in which Timestream for InfluxDB is supported.

**Topics**
+ [Considerations for Timestream for InfluxDB VPC endpoints](#vpce-considerations)
+ [Creating a VPC endpoint for Timestream for InfluxDB](#vpce-create-endpoint)
+ [Connecting to a Timestream for InfluxDB VPC endpoint](#vpce-connect)
+ [Controlling access to a VPC endpoint](#vpce-policy)
+ [Using a VPC endpoint in a policy statement](#vpce-policy-condition)
+ [Logging your VPC endpoint](#vpce-logging)

## Considerations for Timestream for InfluxDB VPC endpoints
<a name="vpce-considerations"></a>

Before you set up an interface VPC endpoint for Timestream for InfluxDB, review the [Interface endpoint properties and limitations](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#vpce-interface-limitations) topic in the *AWS PrivateLink Guide*.

Timestream for InfluxDB support for a VPC endpoint includes the following:
+ You can use your VPC endpoint to call all [Timestream for InfluxDB API operations](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/API_Operations.html) from your VPC.
+ You can use AWS CloudTrail logs to audit your use of Timestream for InfluxDB resources through the VPC endpoint. For details, see [Logging your VPC endpoint](#vpce-logging).

## Creating a VPC endpoint for Timestream for InfluxDB
<a name="vpce-create-endpoint"></a>

You can create a VPC endpoint for Timestream for InfluxDB by using the Amazon VPC console or the Amazon VPC API. For more information, see [Create an interface endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#create-interface-endpoint) in the *AWS PrivateLink Guide*.
+ To create a VPC endpoint for Timestream for InfluxDB, use the following service name: 

  ```
  com.amazonaws.region.timestream-influxdb
  ```

  For example, in the US West (Oregon) Region (`us-west-2`), the service name would be:

  ```
  com.amazonaws.us-west-2.timestream-influxdb
  ```

To make it easier to use the VPC endpoint, you can enable a [private DNS name](https://docs.aws.amazon.com/vpc/latest/privatelink/verify-domains.html) for your VPC endpoint. If you select the **Enable DNS Name** option, the standard Timestream for InfluxDB DNS hostname resolves to your VPC endpoint. For example, `https://timestream-influxdb.us-west-2.amazonaws.com` would resolve to a VPC endpoint connected to service name `com.amazonaws.us-west-2.timestream-influxdb`.

This option makes it easier to use the VPC endpoint. The AWS SDKs and AWS CLI use the standard Timestream for InfluxDB DNS hostname by default, so you do not need to specify the VPC endpoint URL in applications and commands.

For more information, see [Accessing a service through an interface endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#access-service-though-endpoint) in the *AWS PrivateLink Guide*.
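As a minimal sketch, the same interface endpoint can also be created from the AWS CLI. The VPC, subnet, and security group IDs below are placeholder values; substitute identifiers from your own account:

```
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-1a2b3c4d \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-west-2.timestream-influxdb \
    --subnet-ids subnet-0e1f2a3b \
    --security-group-ids sg-0a1b2c3d \
    --private-dns-enabled
```

The `--private-dns-enabled` flag corresponds to the **Enable DNS Name** console option described above.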

## Connecting to a Timestream for InfluxDB VPC endpoint
<a name="vpce-connect"></a>

You can connect to Timestream for InfluxDB through the VPC endpoint by using an AWS SDK, the AWS CLI, or AWS Tools for PowerShell. To specify the VPC endpoint, use its DNS name. 

If you enabled private hostnames when you created your VPC endpoint, you do not need to specify the VPC endpoint URL in your CLI commands or application configuration. The standard Timestream for InfluxDB DNS hostname resolves to your VPC endpoint. The AWS CLI and SDKs use this hostname by default, so you can begin using the VPC endpoint to connect to a Timestream for InfluxDB regional endpoint without changing anything in your scripts and applications. 

To use private hostnames, the `enableDnsHostnames` and `enableDnsSupport` attributes of your VPC must be set to `true`. To set these attributes, use the [ModifyVpcAttribute](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyVpcAttribute.html) operation. For details, see [View and update DNS attributes for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating) in the *Amazon VPC User Guide*.
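If these attributes are not already enabled, you can set them from the AWS CLI. The following sketch uses a placeholder VPC ID; note that `modify-vpc-attribute` accepts only one attribute per call, so two commands are needed:

```
aws ec2 modify-vpc-attribute \
    --vpc-id vpc-1a2b3c4d \
    --enable-dns-support "{\"Value\":true}"

aws ec2 modify-vpc-attribute \
    --vpc-id vpc-1a2b3c4d \
    --enable-dns-hostnames "{\"Value\":true}"
```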

## Controlling access to a VPC endpoint
<a name="vpce-policy"></a>

To control access to your VPC endpoint for Timestream for InfluxDB, attach a *VPC endpoint policy* to your VPC endpoint. The endpoint policy determines whether principals can use the VPC endpoint to call Timestream for InfluxDB operations on Timestream for InfluxDB resources.

You can create a VPC endpoint policy when you create your endpoint, and you can change the VPC endpoint policy at any time. Use the VPC management console, or the [CreateVpcEndpoint](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateVpcEndpoint.html) or [ModifyVpcEndpoint](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyVpcEndpoint.html) operations. You can also create and change a VPC endpoint policy by [using an AWS CloudFormation template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpcendpoint.html). For help using the VPC management console, see [Create an interface endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#create-interface-endpoint) and [Modifying an interface endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#modify-interface-endpoint) in the *AWS PrivateLink Guide*.

**Note**  
Timestream for InfluxDB supports VPC endpoint policies beginning in July 2020. VPC endpoints for Timestream for InfluxDB that were created before that date have the [default VPC endpoint policy](#vpce-default-policy), but you can change it at any time.

**Topics**
+ [About VPC endpoint policies](#vpce-policy-about)
+ [Default VPC endpoint policy](#vpce-default-policy)
+ [Creating a VPC endpoint policy](#vpce-policy-create)
+ [Viewing a VPC endpoint policy](#vpce-policy-get)

### About VPC endpoint policies
<a name="vpce-policy-about"></a>

For a Timestream for InfluxDB request that uses a VPC endpoint to be successful, the principal requires permissions from two sources:
+ An [IAM policy](security-iam-for-influxdb.md) must give the principal permission to call the operation on the resource.
+ A VPC endpoint policy must give the principal permission to use the endpoint to make the request.

### Default VPC endpoint policy
<a name="vpce-default-policy"></a>

Every VPC endpoint has a VPC endpoint policy, but you are not required to specify the policy. If you don't specify a policy, the default endpoint policy allows all operations by all principals on all resources over the endpoint. 

However, for Timestream for InfluxDB resources, the principal must also have permission to call the operation from an [IAM policy](security-iam-for-influxdb.md). Therefore, in practice, the default policy says that if a principal has permission to call an operation on a resource, they can also call it by using the endpoint.

```
{
  "Statement": [
    {
      "Action": "*", 
      "Effect": "Allow", 
      "Principal": "*", 
      "Resource": "*"
    }
  ]
}
```

 To allow principals to use the VPC endpoint for only a subset of their permitted operations, [create or update the VPC endpoint policy](#vpce-policy-create).

### Creating a VPC endpoint policy
<a name="vpce-policy-create"></a>

A VPC endpoint policy determines whether a principal has permission to use the VPC endpoint to perform operations on a resource. For Timestream for InfluxDB resources, the principal must also have permission to perform the operations from an [IAM policy](security-iam-for-influxdb.md).

Each VPC endpoint policy statement requires the following elements:
+ The principal that can perform actions
+ The actions that can be performed
+ The resources on which actions can be performed

The policy statement doesn't specify the VPC endpoint. Instead, it applies to any VPC endpoint to which the policy is attached. For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *Amazon VPC User Guide*. 
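For example, the following endpoint policy sketch allows all principals to perform only two read-oriented operations through the endpoint. The action names assume the `timestream-influxdb` IAM service prefix; verify them against the service's API reference before use:

```
{
  "Statement": [
    {
      "Sid": "AllowReadOnlyThroughEndpoint",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "timestream-influxdb:GetDbInstance",
        "timestream-influxdb:ListDbInstances"
      ],
      "Resource": "*"
    }
  ]
}
```

Because VPC endpoint policies only restrict what the endpoint will carry, principals still need matching permissions in their IAM policies for these calls to succeed.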

AWS CloudTrail logs all operations that use the VPC endpoint. 

### Viewing a VPC endpoint policy
<a name="vpce-policy-get"></a>

To view the VPC endpoint policy for an endpoint, use the [VPC management console](https://console.aws.amazon.com/vpc/) or the [DescribeVpcEndpoints](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcEndpoints.html) operation.

The following AWS CLI command gets the policy for the endpoint with the specified VPC endpoint ID. 

Before using this command, replace the example endpoint ID with a valid one from your account.

```
$ aws ec2 describe-vpc-endpoints \
    --query 'VpcEndpoints[?VpcEndpointId==`vpc-endpoint-id`].[PolicyDocument]' \
    --output text
```

## Using a VPC endpoint in a policy statement
<a name="vpce-policy-condition"></a>

You can control access to Timestream for InfluxDB resources and operations when the request comes from a VPC or uses a VPC endpoint. To do so, use one of the following [global condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#AvailableKeys) in an [IAM policy](security-iam-for-influxdb.md).
+ Use the `aws:sourceVpce` condition key to grant or restrict access based on the VPC endpoint.
+ Use the `aws:sourceVpc` condition key to grant or restrict access based on the VPC that hosts the private endpoint.

**Note**  
Use caution when creating IAM policies based on your VPC endpoint. If a policy statement requires that requests come from a particular VPC or VPC endpoint, requests from integrated AWS services that use a Timestream for InfluxDB resource on your behalf might fail.   
Also, the `aws:sourceIP` condition key is not effective when the request comes from an [Amazon VPC endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html). To restrict requests to a VPC endpoint, use the `aws:sourceVpce` or `aws:sourceVpc` condition keys. For more information, see [Identity and access management for VPC endpoints and VPC endpoint services](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-iam.html) in the *AWS PrivateLink Guide*. 

You can use these global condition keys to control access to operations like [CreateDbInstance](https://docs.aws.amazon.com//ts-influxdb/latest/ts-influxdb-api/API_CreateDbInstance.html) that don't depend on any particular resource.
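For example, the following identity-based policy statement is a sketch that denies Timestream for InfluxDB actions unless the request arrives through a specific VPC endpoint; the endpoint ID is a placeholder, and the `timestream-influxdb` action prefix is assumed:

```
{
  "Sid": "DenyOutsideVpcEndpoint",
  "Effect": "Deny",
  "Action": "timestream-influxdb:*",
  "Resource": "*",
  "Condition": {
    "StringNotEquals": {
      "aws:sourceVpce": "vpce-1a2b3c4d"
    }
  }
}
```

A Deny with `StringNotEquals` is a common pattern here because it blocks every path except the named endpoint, even when other statements grant broad access.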

## Logging your VPC endpoint
<a name="vpce-logging"></a>

AWS CloudTrail logs all operations that use the VPC endpoint. When a request to Timestream for InfluxDB uses a VPC endpoint, the VPC endpoint ID appears in the [AWS CloudTrail log](logging-using-cloudtrail.md) entry that records the request. You can use the endpoint ID to audit the use of your Timestream for InfluxDB VPC endpoint.

However, your CloudTrail logs don't include operations requested by principals in other accounts or requests for Timestream for InfluxDB operations on Timestream for InfluxDB resources and aliases in other accounts. Also, to protect your VPC, requests that are denied by a [VPC endpoint policy](#vpce-policy), but otherwise would have been allowed, are not recorded in [AWS CloudTrail](logging-using-cloudtrail.md).

# Logging and monitoring in Timestream for InfluxDB
<a name="monitoring-influxdb"></a>

Monitoring is an important part of maintaining the reliability, availability, and performance of Timestream for InfluxDB and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs. However, before you start monitoring Timestream for InfluxDB, you should create a monitoring plan that includes answers to the following questions:
+ What are your monitoring goals?
+ What resources will you monitor?
+ How often will you monitor these resources?
+ What monitoring tools will you use?
+ Who will perform the monitoring tasks?
+ Who should be notified when something goes wrong?

The next step is to establish a baseline for normal Timestream for InfluxDB performance in your environment, by measuring performance at various times and under different load conditions. As you monitor Timestream for InfluxDB, store historical monitoring data so that you can compare it with current performance data, identify normal performance patterns and performance anomalies, and devise methods to address issues.

To establish a baseline, you should, at a minimum, monitor the following items:
+ System errors, so that you can determine whether any requests resulted in an error.

**Topics**
+ [Monitoring tools](monitoring-automated-manual-influxdb.md)
+ [Logging Timestream for InfluxDB API calls with AWS CloudTrail](logging-using-cloudtrail-influxdb.md)

# Monitoring tools
<a name="monitoring-automated-manual-influxdb"></a>

AWS provides various tools that you can use to monitor Timestream for InfluxDB. You can configure some of these tools to do the monitoring for you, while some of the tools require manual intervention. We recommend that you automate monitoring tasks as much as possible.

**Topics**
+ [Automated monitoring tools](#monitoring-automated_tools-influxdb)
+ [Manual monitoring tools](#monitoring-manual-tools-influxdb)

## Automated monitoring tools
<a name="monitoring-automated_tools-influxdb"></a>

You can use the following automated monitoring tools to watch Timestream for InfluxDB and report when something is wrong:
+ **Amazon CloudWatch Alarms** – Watch a single metric over a time period that you specify, and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS) topic or Amazon EC2 Auto Scaling policy. CloudWatch alarms do not invoke actions simply because they are in a particular state; the state must have changed and been maintained for a specified number of periods. For more information, see [Monitoring with Amazon CloudWatch](monitoring-cloudwatch.md).
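The alarm described above can be sketched with the AWS CLI. The namespace, metric name, and SNS topic ARN below are illustrative assumptions; check the CloudWatch console for the exact metrics your DB instances publish:

```
aws cloudwatch put-metric-alarm \
    --alarm-name influxdb-high-cpu \
    --namespace "AWS/Timestream/InfluxDB" \
    --metric-name CPUUtilization \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-west-2:111122223333:my-alerts-topic
```

With `--evaluation-periods 3` and `--period 300`, the alarm fires only after the metric stays above the threshold for three consecutive 5-minute periods, matching the state-maintenance behavior described above.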

## Manual monitoring tools
<a name="monitoring-manual-tools-influxdb"></a>

Another important part of monitoring Timestream for InfluxDB involves manually monitoring those items that the CloudWatch alarms don't cover. The Timestream for InfluxDB, CloudWatch, Trusted Advisor, and other AWS Management Console dashboards provide an at-a-glance view of the state of your AWS environment.
+ The CloudWatch home page shows the following:
  + Current alarms and status
  + Graphs of alarms and resources
  + Service health status

  In addition, you can use CloudWatch to do the following: 
  + Create [customized dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CloudWatch_Dashboards.html) to monitor the services you care about
  + Graph metric data to troubleshoot issues and discover trends
  + Search and browse all your AWS resource metrics
  + Create and edit alarms to be notified of problems

# Logging Timestream for InfluxDB API calls with AWS CloudTrail
<a name="logging-using-cloudtrail-influxdb"></a>



Timestream for InfluxDB is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Timestream for InfluxDB. CloudTrail captures Data Definition Language (DDL) API calls for Timestream for InfluxDB as events. The calls that are captured include calls from the Timestream for InfluxDB console and code calls to the Timestream for InfluxDB API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon Simple Storage Service (Amazon S3) bucket, including events for Timestream for InfluxDB. If you don't configure a trail, you can still view the most recent events on the CloudTrail console in **Event history**. Using the information collected by CloudTrail, you can determine the request that was made to Timestream for InfluxDB, the IP address from which the request was made, who made the request, when it was made, and additional details. 

To learn more about CloudTrail, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/).

## Timestream for InfluxDB information in CloudTrail
<a name="service-name-info-in-cloudtrail"></a>

CloudTrail is enabled on your AWS account when you create the account. When activity occurs in Timestream for InfluxDB, that activity is recorded in a CloudTrail event along with other AWS service events in **Event history**. You can view, search, and download recent events in your AWS account. For more information, see [Viewing Events with CloudTrail Event History](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html). 

For an ongoing record of events in your AWS account, including events for Timestream for InfluxDB, create a trail. A *trail* enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
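As a hedged example, recent Timestream for InfluxDB management events can be pulled from **Event history** with the AWS CLI. The event source value is an assumption based on the usual `<service-prefix>.amazonaws.com` convention; confirm it against an actual log entry:

```
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=timestream-influxdb.amazonaws.com \
    --max-results 10
```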

For more information, see the following topics in the *AWS CloudTrail User Guide*: 
+ [Overview for Creating a Trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
+ [CloudTrail Supported Services and Integrations](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html#cloudtrail-aws-service-specific-topics-integrations)
+ [Configuring Amazon SNS Notifications for CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/getting_notifications_top_level.html)
+ [Receiving CloudTrail Log Files from Multiple Regions](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html)
+ [Receiving CloudTrail Log Files from Multiple Accounts](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html)
+ [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html)

Every event or log entry contains information about who generated the request. The identity information helps you determine the following: 
+ Whether the request was made with root or AWS Identity and Access Management (IAM) user credentials
+ Whether the request was made with temporary security credentials for a role or federated user
+ Whether the request was made by another AWS service

For more information, see the [CloudTrail userIdentity Element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html).

# Compliance validation for Amazon Timestream for InfluxDB
<a name="timestream-compliance"></a>

Third-party auditors assess the security and compliance of Amazon Timestream for InfluxDB as part of multiple AWS compliance programs. These include the following:
+ GDPR
+ HIPAA
+ PCI
+ SOC

# Resilience in Amazon Timestream for InfluxDB
<a name="disaster-recovery-resiliency-influxdb"></a>

The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. 

For more information about AWS Regions and Availability Zones, see [AWS Global Infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/).

Amazon Timestream for InfluxDB periodically takes internal backups and retains them for 24 hours to support availability and durability. Snapshots are taken during deletes and retained for 30 days to support restores. To access or use these backups, file a ticket with [AWS Support](https://support.console.aws.amazon.com/support/home?nc2=h_ql_cu#/).

You can create your instance with Multi-AZ recovery capabilities. For more information, see [Multi-AZ DB instance deployments](https://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-managing.html#timestream-for-influx-managing-multi-az-instance-deployments.html).

# Infrastructure security in Amazon Timestream for InfluxDB
<a name="infrastructure-security-influxdb"></a>

As a managed service, Amazon Timestream for InfluxDB is protected by the AWS global network security procedures that are described in the [Amazon Web Services: Overview of Security Processes](https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf) whitepaper.

You use AWS published control plane API calls to access Timestream for InfluxDB through the network. For more information, see [Control planes and data planes](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/control-planes-and-data-planes.html). Clients must support Transport Layer Security (TLS) 1.2 or later. We recommend TLS 1.2 or 1.3. Clients must also support cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes.

Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) (AWS STS) to generate temporary security credentials to sign requests.

Timestream for InfluxDB is architected so that your traffic is isolated to the specific AWS Region that your Timestream for InfluxDB instance resides in.

## Security groups
<a name="infrastructure-security-influxdb-security-groups"></a>

 Security groups control the access that traffic has in and out of a DB instance. By default, network access is turned off to a DB instance. You can specify rules in a security group that allow access from an IP address range, port, or security group. After ingress rules are configured, the same rules apply to all DB instances that are associated with that security group.

For more information, see [Controlling access to a DB instance in a VPC](timestream-for-influxdb-controlling-access.md).
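As an illustration of such an ingress rule, traffic on the default InfluxDB port (8086) from a single CIDR range can be written down in the request shape used by the EC2 `AuthorizeSecurityGroupIngress` API. The group ID and CIDR below are placeholders.

```python
# Ingress rule for a DB instance security group, in the request shape used by
# the EC2 AuthorizeSecurityGroupIngress API (all values are placeholders).
ingress_rule = {
    "GroupId": "sg-0123456789abcdef0",  # placeholder security group ID
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 8086,  # default InfluxDB port
            "ToPort": 8086,
            "IpRanges": [
                {
                    "CidrIp": "203.0.113.0/24",  # placeholder client range
                    "Description": "office network",
                }
            ],
        }
    ],
}
```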

# Configuration and vulnerability analysis in Timestream for InfluxDB
<a name="ConfigAndVulnerability-timestream-for-influxdb"></a>

Configuration and IT controls are a shared responsibility between AWS and you, our customer. For more information, see the AWS [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/). In addition to the shared responsibility model, Timestream for InfluxDB users should be aware of the following:
+ It is the customer's responsibility to patch client applications with the relevant client-side dependencies.
+ Customers should consider penetration testing where appropriate. For more information, see [https://aws.amazon.com/security/penetration-testing/](https://aws.amazon.com/security/penetration-testing/).

# Incident response in Timestream for InfluxDB
<a name="IncidentResponse-timestream-for-influxdb"></a>

Amazon Timestream for InfluxDB service incidents are reported in the [Personal Health Dashboard](https://phd.aws.amazon.com/phd/home#/). You can learn more about the dashboard and AWS Health [here](https://docs.aws.amazon.com//health/latest/ug/what-is-aws-health.html).

Timestream for InfluxDB supports reporting using AWS CloudTrail. For more information, see [Logging Timestream for InfluxDB API calls with AWS CloudTrail](logging-using-cloudtrail-influxdb.md). 

# Amazon Timestream for InfluxDB API and interface VPC endpoints (AWS PrivateLink)
<a name="timestream-influxb-privatelink"></a>

You can establish a private connection between your VPC and Amazon Timestream for InfluxDB control plane API endpoints by creating an *interface VPC endpoint*. Interface endpoints are powered by [AWS PrivateLink](https://aws.amazon.com/privatelink). AWS PrivateLink allows you to privately access Amazon Timestream for InfluxDB API operations without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Instances in your VPC don't need public IP addresses to communicate with Amazon Timestream for InfluxDB API endpoints. Your instances also don't need public IP addresses to use any of the available Timestream for InfluxDB API operations. Traffic between your VPC and Amazon Timestream for InfluxDB doesn't leave the Amazon network. Each interface endpoint is represented by one or more elastic network interfaces in your subnets. For more information on elastic network interfaces, see [Elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) in the *Amazon EC2 User Guide*. 
+ For more information about VPC endpoints, see [Interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html) in the *Amazon VPC User Guide*.
+ For more information about Timestream for InfluxDB API operations, see [Timestream for InfluxDB API operations](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/Welcome.html). 

After you create an interface VPC endpoint, if you enable [private DNS](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-private-dns) hostnames for the endpoint, the default Timestream for InfluxDB endpoint (https://timestream-influxdb.*Region*.amazonaws.com) resolves to your VPC endpoint. If you do not enable private DNS hostnames, Amazon VPC provides a DNS endpoint name that you can use in the following format:

```
VPC_Endpoint_ID.timestream-influxdb.Region.vpce.amazonaws.com
```

For more information, see [Interface VPC Endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html) in the *Amazon VPC User Guide*. Timestream for InfluxDB supports making calls to all of its [API Actions](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/Welcome.html) inside your VPC. 
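The endpoint-specific DNS name can be assembled from the VPC endpoint ID and the Region. A minimal sketch, with a placeholder endpoint ID:

```python
def vpce_dns_name(vpc_endpoint_id: str, region: str) -> str:
    """Build the endpoint-specific DNS name for a Timestream for InfluxDB
    interface VPC endpoint (service segment: timestream-influxdb)."""
    return f"{vpc_endpoint_id}.timestream-influxdb.{region}.vpce.amazonaws.com"

# Placeholder endpoint ID, for illustration only.
name = vpce_dns_name("vpce-0abc1234def567890", "us-east-1")
```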

**Note**  
Private DNS hostnames can be enabled for only one VPC endpoint in a VPC. If you create an additional VPC endpoint, disable private DNS hostnames for it.

## Considerations for VPC endpoints
<a name="timestream-influxb-privatelink-considerations"></a>

Before you set up an interface VPC endpoint for Amazon Timestream for InfluxDB API endpoints, ensure that you review [Interface endpoint properties and limitations](https://docs.aws.amazon.com/vpc/latest/privatelink/endpoint-services-overview.html) in the *Amazon VPC User Guide*. All Timestream for InfluxDB API operations that are relevant to managing Amazon Timestream for InfluxDB resources are available from your VPC using AWS PrivateLink. VPC endpoint policies are supported for Timestream for InfluxDB API endpoints. By default, full access to Timestream for InfluxDB API operations is allowed through the endpoint. For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *Amazon VPC User Guide*. 

### Creating an interface VPC endpoint for the Timestream for InfluxDB API
<a name="timestream-influxb-privatelink-create-vpc-endpoint"></a>

You can create a VPC endpoint for the Amazon Timestream for InfluxDB API using either the Amazon VPC console or the AWS CLI. For more information, see [Creating an interface endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html) in the *Amazon VPC User Guide*.

After you create an interface VPC endpoint, you can enable private DNS hostnames for the endpoint. When you do, the default Amazon Timestream for InfluxDB endpoint (https://timestream-influxdb.*Region*.amazonaws.com) resolves to your VPC endpoint. For more information, see [Accessing a service through an interface endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#access-service-though-endpoint) in the *Amazon VPC User Guide*.

### Creating a VPC endpoint policy for the Amazon Timestream for InfluxDB API
<a name="timestream-influxb-privatelink-policy"></a>

You can attach an endpoint policy to your VPC endpoint that controls access to the Timestream for InfluxDB API. The policy specifies the following:
+ The principal that can perform actions.
+ The actions that can be performed.
+ The resources on which actions can be performed.

For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *Amazon VPC User Guide*.

**Example VPC endpoint policy for Timestream for InfluxDB API actions**  
The following is an example of an endpoint policy for the Timestream for InfluxDB API. When attached to an endpoint, this policy grants access to the listed Timestream for InfluxDB API actions for all principals on all resources.  

```
{
	"Statement": [{
		"Principal": "*",
		"Effect": "Allow",
		"Action": [
			"timestream-influxb:CreateDbInstance",
			"timestream-influxb:UpdateDbInstance"		
		],
		"Resource": "*"
	}]
}
```
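Before attaching an endpoint policy, it can help to confirm that the document parses as JSON and lists only the actions you intend. A stdlib-only check, mirroring the example policy above:

```python
import json

# Endpoint policy text, mirroring the example above.
policy_text = """
{
    "Statement": [{
        "Principal": "*",
        "Effect": "Allow",
        "Action": [
            "timestream-influxdb:CreateDbInstance",
            "timestream-influxdb:UpdateDbInstance"
        ],
        "Resource": "*"
    }]
}
"""

policy = json.loads(policy_text)  # raises ValueError if the JSON is malformed
actions = policy["Statement"][0]["Action"]
```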

**Example VPC endpoint policy that denies all access from a specified AWS account**  
The following VPC endpoint policy denies AWS account *123456789012* all access to resources using the endpoint. The policy allows all actions from other accounts.  

```
{
	"Statement": [{
			"Action": "*",
			"Effect": "Allow",
			"Resource": "*",
			"Principal": "*"
		},
		{
			"Action": "*",
			"Effect": "Deny",
			"Resource": "*",
			"Principal": {
				"AWS": [
					"123456789012"
				]
			}
		}
	]
}
```

# Security best practices for Timestream for InfluxDB
<a name="security-best-practices"></a>

Amazon Timestream for InfluxDB provides a number of security features to consider as you develop and implement your own security policies. The following best practices are general guidelines and don’t represent a complete security solution. Because these best practices might not be appropriate or sufficient for your environment, treat them as helpful considerations rather than prescriptions. 

## Implement least privilege access
<a name="security-best-practices-privileges"></a>

When granting permissions, you decide who is getting what permissions to which Timestream for InfluxDB resources. You enable specific actions that you want to allow on those resources. Therefore you should grant only the permissions that are required to perform a task. Implementing least privilege access is fundamental in reducing security risk and the impact that could result from errors or malicious intent. 

## Use IAM roles
<a name="security-best-practices-roles"></a>

Producer and client applications must have valid credentials to access Timestream for InfluxDB DB instances. You should not store AWS credentials directly in a client application or in an Amazon S3 bucket. These are long-term credentials that are not automatically rotated and could have a significant business impact if they are compromised. 

Instead, you should use an IAM role to manage temporary credentials for your producer and client applications to access Timestream for InfluxDB DB instances. When you use a role, you don't have to use long-term credentials (such as a user name and password or access keys) to access other resources.

For more information, see the following topics in the *IAM User Guide*:
+ [IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)
+ [Common Scenarios for Roles: Users, Applications, and Services](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios.html)

Use AWS Identity and Access Management (IAM) identities to control access to Amazon Timestream for InfluxDB API operations, especially operations that create, modify, or delete Amazon Timestream for InfluxDB resources. Such resources include DB instances, security groups, and parameter groups.
+ Create an individual user for each person who manages Amazon Timestream for InfluxDB resources, including yourself. Don't use AWS root credentials to manage Amazon Timestream for InfluxDB resources.
+ Grant each user the minimum set of permissions required to perform their duties.
+ Use IAM groups to effectively manage permissions for multiple users.
+ Rotate your IAM credentials regularly.
+ Configure AWS Secrets Manager to automatically rotate the secrets for Amazon Timestream for InfluxDB. For more information, see [Rotating your AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the *AWS Secrets Manager User Guide*. You can also retrieve the credential from AWS Secrets Manager programmatically. For more information, see [Retrieving the secret value](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_retrieve-secret.html) in the *AWS Secrets Manager User Guide*.
+ Secure your Timestream for InfluxDB API tokens as described in [API tokens](timestream-for-influx-security-db-authentication.md#timestream-for-influx-security-db-authentication-api-token).

## Implement server-side encryption in dependent resources
<a name="security-best-practices-sse"></a>

Data at rest and data in transit can be encrypted in Timestream for InfluxDB. For more information, see [Encryption in transit](EncryptionInTransit-for-influx-db.md).

## Use CloudTrail to monitor API calls
<a name="security-best-practices-cloudtrail"></a>

Timestream for InfluxDB is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Timestream for InfluxDB.

Using the information collected by CloudTrail, you can determine the request that was made to Timestream for InfluxDB, the IP address from which the request was made, who made the request, when it was made, and additional details.

For more information, see [Logging Timestream for LiveAnalytics API calls with AWS CloudTrail](logging-using-cloudtrail.md).

Amazon Timestream for InfluxDB supports control plane CloudTrail events, but not data plane. For more information, see [Control planes and data planes](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/control-planes-and-data-planes.html). 
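CloudTrail control plane records are JSON documents, so the details called out above (who, what, when, and from where) can be pulled from a record directly. The record below is fabricated for illustration only; real records contain many more fields.

```python
import json

# Minimal CloudTrail-style control plane record; all values are fabricated
# for illustration only.
record = json.loads("""
{
    "eventSource": "timestream-influxdb.amazonaws.com",
    "eventName": "CreateDbInstance",
    "eventTime": "2024-03-14T12:00:00Z",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"type": "IAMUser", "userName": "example-admin"}
}
""")

summary = (
    f'{record["userIdentity"]["userName"]} called {record["eventName"]} '
    f'from {record["sourceIPAddress"]} at {record["eventTime"]}'
)
```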

## Public accessibility
<a name="timestream-for-influx-security-public-accessibility"></a>

When you launch a DB instance inside a virtual private cloud (VPC) based on the Amazon VPC service, you can turn public accessibility on or off for that DB instance. The Public accessibility parameter designates whether the DB instance has a DNS name that resolves to a public IP address, and therefore whether there is public access to the DB instance.

If your DB instance is in a VPC but isn't publicly accessible, you can also use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. 

If your DB instance is publicly accessible, be sure to take steps to prevent or help mitigate denial of service related threats. For more information, see [Introduction to denial of service attacks](https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/introduction-denial-of-service-attacks.html) and [Protecting networks](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/protecting-networks.html).

# Working with other services
<a name="other-services-influxdb"></a>

Amazon Timestream for InfluxDB integrates with a variety of AWS services and popular third-party tools. All services and tools compatible with open-source InfluxDB should work seamlessly with Timestream for InfluxDB. The following are of particular note:

**Topics**
+ [DBeaver](other-services-influxdb-dbeaver.md)
+ [Grafana](other-services-influxdb-grafana.md)

# DBeaver
<a name="other-services-influxdb-dbeaver"></a>

DBeaver is a free universal SQL client that can be used to manage any database that has a JDBC driver. It is widely used among developers and database administrators because of its robust data viewing, editing, and management capabilities. Using DBeaver's cloud connectivity options, you can connect DBeaver to Amazon Timestream for InfluxDB natively. DBeaver provides a comprehensive and intuitive interface for working with time series data directly from within the DBeaver application. With your credentials, it also gives you full access to any query that you could execute from another query interface, and it lets you create graphs for better understanding and visualization of query results.

 To configure your DBeaver client to connect to your Timestream for InfluxDB DB instance or cluster, refer to [DBeaver's guide to InfluxDB configuration](https://dbeaver.com/docs/dbeaver/InfluxDB/). 

# Grafana
<a name="other-services-influxdb-grafana"></a>

 Use [Amazon Managed Grafana](https://aws.amazon.com/grafana/), Grafana, or Grafana Cloud to visualize data from your Timestream for InfluxDB instance. 

## Connect to Grafana
<a name="other-services-influxdb-grafana-steps"></a>

**Important**  
 The instructions in this guide require Grafana Cloud or Grafana 10.3 or later. 

1.  Create your [Timestream for InfluxDB DB instance](https://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-getting-started-creating-db-instance.html) or [Timestream for InfluxDB DB cluster](https://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influx-create-rr-cluster.html). 

1.  Create an [Amazon Managed Grafana workspace](https://console.aws.amazon.com/grafana), sign up for [Grafana Cloud](https://grafana.com/products/cloud/), or download and install [Grafana](https://grafana.com/grafana/download). 

1.  Open the Amazon Managed Grafana or Grafana Cloud user interface (UI) or, if running Grafana locally, start Grafana and visit http://localhost:3000 in your browser. 

1.  In the left navigation of the Grafana UI, open the **Connections** section and select **Add new connection**. 

1.  Select **InfluxDB** from the list of available data sources and click **Add new data source**. 

1.  On the Data Source configuration page, enter a name for your InfluxDB data source. 

1.  In the **Query Language** dropdown list, select one of the query languages supported by InfluxDB 2.7 (**Flux** or **InfluxQL**). 

**Important**  
 SQL is only supported in InfluxDB 3. 

### Configure Grafana to use Flux
<a name="other-services-influxdb-grafana-flux"></a>

 With Flux selected as the query language in your InfluxDB data source, configure your InfluxDB connection: 

1.  In the **HTTP** section, enter your InfluxDB URL in the **URL** field. 

   ```
   https://your-timestream-for-influxdb-endpoint:8086
   ```

1.  In the **InfluxDB Details** section, enter the following: 
   +  In **Organization**: Your InfluxDB [organization name or ID](https://docs.influxdata.com/influxdb/v2/admin/organizations/view-orgs/). 
   +  In **Token**: Your [InfluxDB API token](https://docs.influxdata.com/influxdb/v2/admin/tokens/). 
   +  In **Default Bucket**: The default [bucket](https://docs.influxdata.com/influxdb/v2/admin/buckets/) to use in Flux queries. 
   +  In **Min time interval**: The Grafana minimum time interval. The default is 10 seconds. 
   +  In **Max series**: The maximum number of series or tables Grafana will process. The default is 1,000.   
![\[Settings form for configuring an InfluxDB connection using Flux as the query language.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/grafana-flux-config.png)

1.  Click **Save & test**. Grafana attempts to connect to the InfluxDB 2.7 data source and returns the results of the test. 

### Configure Grafana to use InfluxQL
<a name="other-services-influxdb-grafana-influxql"></a>

 To query InfluxDB 2.7 with InfluxQL, find your use case below and then complete the instructions to configure Grafana. 

#### New install of InfluxDB 2.7:
<a name="other-services-influxdb-grafana-influxql-new-install"></a>

 To configure Grafana to use InfluxQL with a new install of InfluxDB 2.7, do the following: 

1.  Authenticate with [InfluxDB 2.7 tokens](https://docs.influxdata.com/influxdb/v2/admin/tokens/). 

1.  Manually create [DBRP mappings](https://docs.influxdata.com/influxdb/v2/tools/grafana/?t=InfluxQL#view-and-create-influxdb-dbrp-mappings). 

#### Manual migration from InfluxDB 1.x to 2.7:
<a name="other-services-influxdb-grafana-influxql-manual-migration"></a>

To configure Grafana to use InfluxQL when you have manually migrated from InfluxDB 1.x to InfluxDB 2.7, do the following: 

1.  If your InfluxDB 1.x instance required authentication, [create v1-compatible authentication credentials](https://docs.influxdata.com/influxdb/v2/tools/grafana/?t=InfluxQL#view-and-create-influxdb-v1-authorizations) to match your previous 1.x username and password. Otherwise, use [InfluxDB v2 token authentication](https://docs.influxdata.com/influxdb/v2/admin/tokens/). 

1.  Manually create [DBRP mappings](https://docs.influxdata.com/influxdb/v2/tools/grafana/?t=InfluxQL#view-and-create-influxdb-dbrp-mappings). 

 With InfluxQL selected as the query language in your InfluxDB data source, configure your InfluxDB connection: 

1.  In the **HTTP** section, enter your InfluxDB URL in the **URL** field. 

   ```
   https://your-timestream-for-influxdb-endpoint:8086
   ```

1. In the **Custom HTTP Headers** section, enter the following:
   +  Select **Add header**. Provide your InfluxDB API token: 
     +  In **Header**, enter **Authorization**. 
     +  In **Value**, use the `Token` schema and provide your InfluxDB API token. For example, `Token y0uR5uP3rSecr3tT0k3n`. 

1.  In the **InfluxDB Details** section, enter the following: 
   +  In **Database**: The database name [mapped to your InfluxDB 2.7 bucket](https://docs.influxdata.com/influxdb/v2/tools/grafana/?t=InfluxQL#view-and-create-influxdb-dbrp-mappings). 
   +  In **User** and **Password**: The username and password associated with your [InfluxDB 1.x compatibility authorization](https://docs.influxdata.com/influxdb/v2/tools/grafana/?t=InfluxQL#view-and-create-influxdb-v1-authorizations). 
   +  In **HTTP Method**: Select **GET**.   
![\[Settings form for configuring an InfluxDB connection using InfluxQL as the query language.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/grafana-influxql-config.png)

1.  Click **Save & test**. Grafana attempts to connect to the InfluxDB 2.7 data source and returns the results of the test. 
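The custom `Authorization` header configured above uses the InfluxDB `Token` scheme. A sketch of how a client would build it, with the placeholder token from the example:

```python
def influxdb_auth_header(api_token: str) -> dict:
    """Build the Authorization header InfluxDB 2.x expects (Token scheme)."""
    return {"Authorization": f"Token {api_token}"}

# Placeholder token from the example above; never hard-code real tokens.
headers = influxdb_auth_header("y0uR5uP3rSecr3tT0k3n")
```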

### Query and visualize data
<a name="other-services-influxdb-grafana-query-visualize"></a>

 After configuring your InfluxDB connection, you can use Grafana and Flux to query and visualize time series data stored in your InfluxDB instance. 

 For more information about using Grafana, see the Grafana [technical documentation](https://grafana.com/docs/). If you are just learning Flux, see [Get started with Flux](https://docs.influxdata.com/flux/v0/get-started/). 

# API reference
<a name="Influx_API_Reference"></a>

For a complete list and details of Amazon Timestream for InfluxDB APIs, see [Amazon Timestream for InfluxDB APIs](https://docs.aws.amazon.com/ts-influxdb/latest/ts-influxdb-api/Welcome.html). 

For error codes common to all AWS services, see the [AWS Support section](https://docs.aws.amazon.com/awssupport/latest/APIReference/CommonErrors.html). 

# Document history
<a name="doc-history-influxdb"></a>

| Change | Description | Date | 
| --- |--- |--- |
| [`AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess) | Amazon Timestream for InfluxDB has added the `RebootDbInstance` and `RebootDbCluster` actions to the existing `AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess` managed policy for rebooting Amazon Timestream InfluxDB resources. For more information, see [AWS managed policies for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html).  | December 17, 2025 | 
| [`AmazonTimestreamInfluxDBFullAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamInfluxDBFullAccess) | Amazon Timestream for InfluxDB has added the `RebootDbInstance` and `RebootDbCluster` actions to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy for rebooting Amazon Timestream InfluxDB resources. For more information, see [AWS managed policies for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html).  | December 17, 2025 | 
| [`AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess) | Amazon Timestream for InfluxDB has added the `ec2:DescribeVpcEndpoints` action to the existing `AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess` managed policy for describing the VPC endpoints. For more information, see [AWS managed policies for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html).  | November 13, 2025 | 
| [`AmazonTimestreamInfluxDBFullAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamInfluxDBFullAccess) | Amazon Timestream for InfluxDB has added the `ec2:DescribeVpcEndpoints` action to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy for describing the VPC endpoints. For more information, see [AWS managed policies for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html).  | November 13, 2025 | 
| [`AmazonTimestreamInfluxDBFullAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamInfluxDBFullAccess) | Amazon Timestream for InfluxDB has added Influx Enterprise marketplace product ID to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy to support subscription to enterprise marketplace offerings. See [AmazonTimestreamInfluxDBFullAccess](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html#iam.identitybasedpolicies.predefinedpolicies). | October 17, 2025 | 
| [`AmazonTimestreamConsoleFullAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamConsoleFullAccess) | Timestream for LiveAnalytics has added the AWS Marketplace permissions to the `AmazonTimestreamConsoleFullAccess` managed policy to access marketplace resources and create agreements for InfluxDB Cluster with Read Replicas creation. | August 20, 2025 | 
| [`AmazonTimestreamConsoleFullAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamConsoleFullAccess) | Timestream for InfluxDB has added the AWS Marketplace permissions to the `AmazonTimestreamConsoleFullAccess` managed policy to access marketplace resources and create agreements for InfluxDB Cluster with Read Replicas creation. | August 20, 2025 | 
| [`AmazonTimestreamConsoleFullAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamConsoleFullAccess) | Amazon Timestream for InfluxDB has added the `pricing:GetProducts` permission to the `AmazonTimestreamConsoleFullAccess` managed policy to provide pricing estimations for InfluxDB resource configurations during creation. | June 10, 2025 | 
| [Amazon Timestream for LiveAnalytics will no longer be open to new customers starting June 20, 2025.](AmazonTimestreamForLiveAnalytics-availability-change.md) | For similar capabilities to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com//timestream/latest/developerguide/timestream-for-influxdb.html). | May 20, 2025 | 
| [`AmazonTimestreamInfluxDBFullAccessWithoutMarketplaceAccess` – New policy](#doc-history-influxdb) | This policy grants administrative permissions that allow full access to all Timestream for InfluxDB resources, excluding any marketplace-related actions. For more information see [AWS managed policies for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html). | April 16, 2025 | 
| [`AmazonTimestreamInfluxDBFullAccess` – Update to an existing policy](#doc-history-influxdb) | Amazon Timestream for InfluxDB has added permissions to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy. For more information, see [AWS managed policies for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html). | April 16, 2025 | 
| [`AmazonTimestreamInfluxDBFullAccess` – Update to an existing policy](#doc-history-influxdb) | Amazon Timestream for InfluxDB has added access to create, update, delete, and list Amazon Timestream InfluxDB clusters to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy. For more information, see [AWS managed policies for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html). | February 17, 2025 | 
| [Documentation-only update](https://docs.aws.amazon.com/timestream/latest/developerguide/ts-limits.html) | Updated the Quotas topic to segregate the default quotas and system limits. | October 22, 2024 | 
| [Amazon Timestream now supports query insights](https://docs.aws.amazon.com/timestream/latest/developerguide/using-query-insights.html) | Timestream now includes support for the query insights feature that helps you optimize your queries, improve their performance, and reduce costs. | October 22, 2024 | 
| [Amazon Timestream for InfluxDB update to an existing policy.](#doc-history-influxdb) | Amazon Timestream for InfluxDB has added the `ec2:DescribeRouteTables` action to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy for describing your route tables. For more information, see [AWS managed policies for Amazon Timestream for InfluxDB](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html).  | October 8, 2024 | 
| [`AmazonTimestreamInfluxDBFullAccess` – Update to an existing policy](#doc-history-influxdb) | Amazon Timestream for InfluxDB has added the `ec2:DescribeRouteTables` action to the existing `AmazonTimestreamInfluxDBFullAccess` managed policy. This action is used for describing your route tables. See [AmazonTimestreamInfluxDBFullAccess](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html#iam.identitybasedpolicies.predefinedpolicies). | September 12, 2024 | 
| [`AmazonTimestreamReadOnlyAccess` – Update to an existing policy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonTimestreamReadOnlyAccess) | Timestream for LiveAnalytics has added the `DescribeAccountSettings` permission to the `AmazonTimestreamReadOnlyAccess` managed policy for describing AWS account settings. | June 3, 2024 | 
| [Amazon Timestream for LiveAnalytics now supports Timestream Compute Units (TCUs)](https://docs.aws.amazon.com/timestream/latest/developerguide/tcu.html) | Amazon Timestream for LiveAnalytics now includes support for Timestream Compute Units (TCUs) to measure the compute capacity allocated for your query needs. | April 29, 2024 | 
| [New policies added](#doc-history-influxdb) | Amazon Timestream for InfluxDB added two new policies: one that allows the service to manage network interfaces and security groups in your account. For more information, see [AmazonTimestreamInfluxDBServiceRolePolicy](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html#security-iam-awsmanpol-timestreamforinfluxdbServiceRolePolicy). Another that provides full administrative access to create, update, delete, and list Amazon Timestream for InfluxDB instances, and to create and list parameter groups. For more information, see [AmazonTimestreamInfluxDBFullAccess](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol-influxdb.html#iam.identitybasedpolicies.predefinedpolicies). | March 14, 2024 | 
| [Amazon Timestream for InfluxDB is now generally available.](https://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influxdb.html) | This documentation covers the initial release of Amazon Timestream for InfluxDB. | March 14, 2024 | 
| [Amazon Timestream for LiveAnalytics Query events are available in AWS CloudTrail](https://docs.aws.amazon.com/timestream/latest/developerguide/logging-using-cloudtrail.html) | Amazon Timestream for LiveAnalytics now publishes Query API data events to AWS CloudTrail. Customers can audit all Query API requests made in their AWS accounts, and see information such as which IAM user or role made the request, when the request was made, which databases and tables were queried, and the request's Query ID. | September 12, 2023 | 
| [Amazon Timestream for LiveAnalytics UNLOAD](https://docs.aws.amazon.com/timestream/latest/developerguide/export-unload.html) | Amazon Timestream for LiveAnalytics now supports UNLOAD to export query results to S3. | May 12, 2023 | 
| [Amazon Timestream for LiveAnalytics update to an existing policy.](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html) | Batch load permissions added to a managed policy. | February 24, 2023 | 
| [Amazon Timestream for LiveAnalytics batch load.](https://docs.aws.amazon.com/timestream/latest/developerguide/batch-load.html) | Amazon Timestream for LiveAnalytics now supports batch load functionality. | February 24, 2023 | 
| [Amazon Timestream for LiveAnalytics now supports AWS Backup.](https://docs.aws.amazon.com/timestream/latest/developerguide/backups.html) | Amazon Timestream for LiveAnalytics now supports AWS Backup. | December 14, 2022 | 
| [Amazon Timestream for LiveAnalytics updates to AWS managed policies](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html) | New information about AWS managed policies and Amazon Timestream for LiveAnalytics, including updates to existing managed policies. | November 29, 2021 | 
| [Amazon Timestream for LiveAnalytics supports scheduled queries](https://docs.aws.amazon.com/timestream/latest/developerguide/scheduledqueries.html) | Amazon Timestream for LiveAnalytics now supports running a query on your behalf, based on a schedule. | November 29, 2021 | 
| [Amazon Timestream for LiveAnalytics supports magnetic store.](https://docs.aws.amazon.com/timestream/latest/developerguide/writes.html) | Amazon Timestream for LiveAnalytics now supports using magnetic storage for your table writes. | November 29, 2021 | 
| [Amazon Timestream for LiveAnalytics multi-measure records.](https://docs.aws.amazon.com/timestream/latest/developerguide/writes.html#writes.writing-data-multi-measure) | Amazon Timestream for LiveAnalytics now supports a more compact format for storing your time-series data. | November 29, 2021 | 
| [Amazon Timestream for LiveAnalytics updates to AWS managed policies](https://docs.aws.amazon.com/timestream/latest/developerguide/security-iam-awsmanpol.html) | New information about AWS managed policies and Amazon Timestream for LiveAnalytics, including updates to existing managed policies. | May 24, 2021 | 
| [Amazon Timestream for LiveAnalytics is now available in the Europe (Frankfurt) region.](https://docs.aws.amazon.com/timestream/latest/developerguide/what-is-timestream.html) | Amazon Timestream for LiveAnalytics is now generally available in the Europe (Frankfurt) region (`eu-central-1`). | April 23, 2021 | 
| [Amazon Timestream for LiveAnalytics now supports VPC endpoints (AWS PrivateLink).](https://docs.aws.amazon.com/timestream/latest/developerguide/VPCEndpoints.html) | Amazon Timestream for LiveAnalytics now supports the use of VPC endpoints (AWS PrivateLink). | March 23, 2021 | 
| [Amazon Timestream now supports cross-table queries.](https://docs.aws.amazon.com/timestream/latest/developerguide/supported-sql-constructs.SELECT.html) | You can use Amazon Timestream for LiveAnalytics to run cross-table queries. | February 10, 2021 | 
| [Amazon Timestream for LiveAnalytics now supports enhanced query execution statistics.](https://docs.aws.amazon.com/timestream/latest/developerguide/API_query_Query.html) | Amazon Timestream for LiveAnalytics now supports enhanced query execution statistics, such as amount of data scanned.  | February 10, 2021 | 
| [Amazon Timestream for LiveAnalytics now supports advanced time series functions.](https://docs.aws.amazon.com/timestream/latest/developerguide/timeseries-specific-constructs.functions.html) | You can use Amazon Timestream for LiveAnalytics to run SQL queries with advanced time series functions, such as derivatives, integrals, and correlations.  | February 10, 2021 | 
| [Amazon Timestream for LiveAnalytics is now HIPAA, ISO, and PCI compliant.](https://docs.aws.amazon.com/timestream/latest/developerguide/what-is-timestream.html) | You can now use Amazon Timestream for LiveAnalytics for workloads that require HIPAA, ISO, and PCI-compliant infrastructure.  | January 27, 2021 | 
| [Amazon Timestream for LiveAnalytics now supports open-source Telegraf and Grafana.](https://docs.aws.amazon.com/timestream/latest/developerguide/OtherServices.html) | You can now use Telegraf, the open-source, plugin-driven server agent for collecting and reporting metrics, and Grafana, the open-source analytics and monitoring platform for databases, with Amazon Timestream for LiveAnalytics.  | November 25, 2020 | 
| [Amazon Timestream for LiveAnalytics is now generally available.](https://docs.aws.amazon.com/timestream/latest/developerguide/what-is-timestream.html) | This documentation covers the initial release of Amazon Timestream for LiveAnalytics. | September 30, 2020 | 