

For similar capabilities to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influxdb.html).

# Writing data to your Timestream for InfluxDB 3 cluster
<a name="writing-data-to-your-influxdb-3-cluster"></a>

 Amazon Timestream for InfluxDB 3 provides robust capabilities for ingesting time-series data efficiently. Understanding the proper methods for writing data is essential for maximizing performance and ensuring data integrity. 

 Timestream for InfluxDB 3 provides multiple HTTP API endpoints for writing time-series data, offering flexibility for different integration methods and compatibility with existing InfluxDB workloads. 

## Line protocol overview
<a name="line-protocol-overview"></a>

 InfluxDB 3 is designed for high write-throughput and uses an efficient, human-readable write syntax called [line protocol](https://docs.influxdata.com/influxdb3/core/reference/line-protocol/). As a schema-on-write database, InfluxDB automatically creates the logical database, tables, and their schemas when you start writing data, without requiring any manual setup. Once the schema is created, InfluxDB validates future write requests against it before accepting new data, while still allowing schema evolution as your needs change. 

### Line protocol structure
<a name="line-protocol-structure"></a>

 Line protocol consists of the following essential elements: 
+  **Table**: A string identifier for the table where data will be stored. 
+  (Optional) **Tag set**: Comma-delimited key-value pairs representing metadata (indexed). 
+  **Field set**: Comma-delimited key-value pairs representing the actual measurements. 
+  (Optional) **Timestamp**: Unix timestamp associated with the data point, at up to nanosecond precision. 

 Field values can be one of the following data types: 
+  Strings (must be quoted) 
+  Floats (for example, 23.4) 
+  Integers (for example, 10i) 
+  Unsigned integers (for example, 10u) 
+  Booleans (true/false) 

Line protocol follows this general syntax:

```
myTable,tag1=val1,tag2=val2 field1="v1",field2=1i 0000000000000000000
```

 Example data point using line protocol: 

```
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1735545600
```

 This creates a point in the "home" table with: 
+  Tag: room="Living Room" 
+  Fields: temp=21.1 (float), hum=35.9 (float), co=0 (integer) 
+  Timestamp: 1735545600 (Unix seconds) 
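The formatting rules above can be sketched in a few lines of Python. This is an illustrative helper, not part of any client library; it applies the escaping and type-suffix rules described in this section (and sorts tags by key, as recommended later in this topic):

```python
# Illustrative helper (not part of any client library) that builds a
# line protocol string from a table name, tags, and fields.
def to_line_protocol(table, tags, fields, timestamp=None):
    def escape(value):
        # Commas, equals signs, and spaces in table names, tag keys/values,
        # and field keys must be backslash-escaped.
        return str(value).replace(",", "\\,").replace("=", "\\=").replace(" ", "\\ ")

    def format_field(value):
        if isinstance(value, str):
            return '"' + value.replace('"', '\\"') + '"'  # strings are quoted
        if isinstance(value, bool):                        # check bool before int
            return "true" if value else "false"
        if isinstance(value, int):
            return f"{value}i"                             # integers carry an "i" suffix
        return repr(value)                                 # floats are written as-is

    tag_part = "".join(f",{escape(k)}={escape(v)}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{escape(k)}={format_field(v)}" for k, v in fields.items())
    line = f"{escape(table)}{tag_part} {field_part}"
    return f"{line} {timestamp}" if timestamp is not None else line

print(to_line_protocol("home", {"room": "Living Room"},
                       {"temp": 21.1, "hum": 35.9, "co": 0}, 1735545600))
# home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1735545600
```

The output matches the example data point above.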

## API endpoints overview
<a name="api-endpoints-overview"></a>

 InfluxDB 3 supports three primary write endpoints: 

1.  **Native v3 API** (`/api/v3/write_lp`): The recommended endpoint for new implementations. 

1.  **v2 Compatibility API** (`/api/v2/write`): For migrating InfluxDB v2.x workloads. 

1.  **v1 Compatibility API** (`/write`): For migrating InfluxDB v1.x workloads. 

### Using the Native v3 write API
<a name="using-the-native-v3-write-api"></a>

 The `/api/v3/write_lp` endpoint is the native InfluxDB 3 API for writing line protocol data. 

 Request format: 

```
POST /api/v3/write_lp?db=DATABASE_NAME&precision=PRECISION&accept_partial=BOOLEAN&no_sync=BOOLEAN
```

 Query parameters: 


|  **Parameter**  |  **Description**  |  **Default**  | 
| --- | --- | --- | 
|  db  |  Database name (required)  |  -  | 
|  precision  |  Timestamp precision (ns, us, ms, s)  |  Auto-detected  | 
|  accept\_partial  |  Accept partial writes on errors  |  true  | 
|  no\_sync  |  Acknowledge before WAL persistence  |  false  | 


 Example write request: 

```
curl -v "https://your-cluster-endpoint:8086/api/v3/write_lp?db=sensors&precision=s" \
  --header "Authorization: Bearer YOUR_TOKEN" \
  --data-raw "home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1735545600
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1735545600"
```

### Write response modes
<a name="write-response-modes"></a>

 **Standard mode** (`no_sync=false`) 
+  Waits for data to be written to the WAL (Write-Ahead Log) before acknowledging. 
+  Provides durability guarantees. 
+  Higher latency due to WAL persistence wait. 
+  Recommended for critical data where durability is essential. 

 **Fast mode** (`no_sync=true`) 
+  Acknowledges immediately without waiting for WAL persistence. 
+  Lowest possible write latency. 
+  Risk of data loss if system crashes before WAL write completes. 
+  Ideal for high-throughput scenarios where speed is prioritized over absolute durability. 
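Selecting a response mode is just a matter of setting the query parameter on the write URL. The following sketch builds a fast-mode write URL for the v3 endpoint; the endpoint and database names are placeholders, not real values:

```python
from urllib.parse import urlencode

# Placeholder endpoint; substitute your cluster's writer endpoint.
endpoint = "https://your-cluster-endpoint:8086"
params = {
    "db": "sensors",
    "precision": "s",
    "no_sync": "true",  # fast mode: acknowledge before WAL persistence
}
write_url = f"{endpoint}/api/v3/write_lp?{urlencode(params)}"
print(write_url)
```

Omit `no_sync` (or set it to `false`) to keep the default durability guarantees of standard mode.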

### Partial write handling
<a name="partial-write-handling"></a>

 The `accept_partial` parameter controls behavior when write batches contain errors: 

 When `accept_partial` is `true` (default): 
+  Valid lines are written successfully. 
+  Invalid lines are rejected. 
+  Returns 400 status with details about failed lines. 
+  Useful for large batch operations where some failures are acceptable. 

 When `accept_partial` is `false`: 
+  Entire batch is rejected if any line fails. 
+  No data is written. 
+  Returns 400 status with error details. 
+  Ensures all-or-nothing write semantics. 
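A client-side pre-check can reduce how often partial rejections occur in the first place. The sketch below uses a deliberately naive validity test (far simpler than the server's real schema and type validation, and it ignores escaped spaces) to mirror the `accept_partial=true` behavior of keeping valid lines and reporting the rest:

```python
# Naive client-side pre-check that mirrors accept_partial=true: keep the
# valid lines and report the rejected ones instead of failing the batch.
# This is only a sanity check, much simpler than the server's validation.
def split_batch(lines):
    valid, rejected = [], []
    for n, line in enumerate(lines, start=1):
        # Minimal check: a line needs a table/tag section and a field
        # section separated by a space (escaped spaces are not handled).
        parts = line.split(" ")
        if len(parts) >= 2 and "=" in parts[1]:
            valid.append(line)
        else:
            rejected.append((n, line))
    return valid, rejected

batch = [
    "home,room=Kitchen temp=21.0 1735545600",
    "home,room=Kitchen",  # no field set: invalid
]
valid, rejected = split_batch(batch)
print(valid)     # the line with a field set
print(rejected)  # [(2, 'home,room=Kitchen')]
```

Even with a pre-check, still inspect the 400 response body for lines the server rejects.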

### Compatibility APIs
<a name="compatibility-apis"></a>

 Compatibility APIs enable seamless migration of existing InfluxDB v1 or v2 workloads to InfluxDB 3. These endpoints work with existing InfluxDB client libraries, Telegraf, and third-party integrations. 

 **Important differences:** 
+  Tags in a table (measurement) are immutable once created. 
+  A tag and a field cannot have the same name within a table. 
+  Schema validation is enforced on write. 

#### InfluxDB v2 compatibility
<a name="influxdb-v2-compatibility"></a>

 The `/api/v2/write` endpoint provides backwards compatibility for v2 clients: 

```
curl -i "https://your-cluster-endpoint:8086/api/v2/write?bucket=DATABASE_NAME&precision=s" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --header "Content-type: text/plain; charset=utf-8" \
  --data-binary 'home,room=kitchen temp=72 1641024000'
```

 V2 API parameters: 


|  **Parameter**  |  **Location**  |  **Description**  | 
| --- | --- | --- | 
|  bucket  |  Query string  |  Maps to database name  | 
|  precision  |  Query string  |  Timestamp precision (ns, us, ms, s, m, h)  | 
|  Authorization  |  Header  |  Bearer or Token scheme  | 
|  Content-Encoding  |  Header  |  gzip or identity  | 

#### InfluxDB v1 compatibility
<a name="v1-compatability"></a>

 The `/write` endpoint provides backwards compatibility for v1 clients: 

```
curl -i "https://your-cluster-endpoint:8086/write?db=DATABASE_NAME&precision=s" \
  --user "any:DATABASE_TOKEN" \
  --header "Content-type: text/plain; charset=utf-8" \
  --data-binary 'home,room=kitchen temp=72 1641024000'
```

V1 authentication options:
+  Basic authentication: Token as password (`--user "any:TOKEN"`). 
+  Query parameter: `p=TOKEN` in URL. 
+  Bearer/Token header: Standard authorization header. 

 V1 API parameters: 


|  **Parameter**  |  **Location**  |  **Description**  | 
| --- | --- | --- | 
|  db  |  Query string  |  Database name  | 
|  precision  |  Query string  |  Timestamp precision  | 
|  p  |  Query string  |  Token for query auth  | 
|  u  |  Query string  |  Username (ignored)  | 
|  Authorization  |  Header  |  Multiple schemes supported  | 
|  Content-Encoding  |  Header  |  gzip or identity  | 

## Client libraries and integrations
<a name="client-libraries-and-integrations"></a>

### Official InfluxDB 3 client libraries
<a name="official-influxdb-3-client-libraries"></a>

 InfluxDB 3 client libraries provide native language interfaces for constructing and writing time-series data: 
+  **Python**: `influxdb3-python` 
+  **Go**: `influxdb3-go` 
+  **JavaScript/Node.js**: `influxdb3-js` 
+  **Java**: `influxdb3-java` 
+  **C#**: `InfluxDB3.Client` 

 Example: Python client 

```
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(
    host="your-cluster-endpoint:8086",
    token="YOUR_TOKEN",
    database="DATABASE_NAME"
)

# Write using line protocol
client.write("home,room=Living\\ Room temp=21.1,hum=35.9,co=0i")

# Write using Point objects
from influxdb_client_3 import Point
point = Point("home") \
    .tag("room", "Living Room") \
    .field("temp", 21.1) \
    .field("hum", 35.9) \
    .field("co", 0)
    
client.write(point)
```

 Example: Go client 

```
import (
    "context"
    "time"

    "github.com/InfluxCommunity/influxdb3-go/v2/influxdb3"
)

client, err := influxdb3.New(influxdb3.ClientConfig{
    Host: "your-cluster-endpoint:8086",
    Token: "YOUR_TOKEN",
    Database: "DATABASE_NAME",
})

point := influxdb3.NewPoint("home",
    map[string]string{"room": "Living Room"},
    map[string]any{
        "temp": 24.5,
        "hum":  40.5,
        "co":   15,
    },
    time.Now(),
)

err = client.WritePoints(context.Background(), []*influxdb3.Point{point})
```

### Legacy client libraries
<a name="legacy-client-libraries"></a>

 For existing v1 and v2 workloads, you can continue using legacy client libraries with the compatibility endpoints: 

 Example: Node.js v1 client: 

```
const Influx = require('influx')

const client = new Influx.InfluxDB({
  host: 'your-cluster-endpoint',
  port: 8086,
  protocol: 'https',
  database: 'DATABASE_NAME',
  username: 'ignored',
  password: 'DATABASE_TOKEN'
})
```

# Telegraf integration with Timestream for InfluxDB 3
<a name="telegraf-integration"></a>

 Telegraf is a plugin-based data collection agent with over 300 input plugins for collecting metrics from various sources and output plugins for writing data to different destinations. Its "plug-and-play" architecture makes it ideal for quickly collecting and reporting metrics to InfluxDB 3. 

## Requirements
<a name="requirements"></a>
+  Telegraf 1.9.2 or greater – For installation instructions, see the [Telegraf Installation documentation.](https://docs.influxdata.com/telegraf/latest/install/) 
+  InfluxDB 3 cluster endpoint and credentials. 
+  Network connectivity to your InfluxDB 3 cluster. 

## Telegraf configuration options
<a name="telegraf-configuration-options"></a>

 Telegraf provides two output plugins compatible with InfluxDB 3: 

1.  `outputs.influxdb_v2` - Recommended for new deployments. 

1.  `outputs.influxdb` (v1) - For existing v1 configurations. 

### Using the v2 output plugin
<a name="using-the-v2-output-plugin-recommended"></a>

 We recommend that you use the `outputs.influxdb_v2` plugin to connect to the InfluxDB v2 compatibility API: 

```
[[outputs.influxdb_v2]]
  urls = ["https://your-cluster-endpoint:8086"]
  token = "${INFLUX_TOKEN}"  # Use environment variable for security
  organization = ""           # Can be left empty for InfluxDB 3
  bucket = "DATABASE_NAME"
  
  ## Optional: Enable gzip compression
  content_encoding = "gzip"
  
  ## Optional: Increase timeout for high-latency networks
  timeout = "10s"
  
  ## Optional: Configure batching
  metric_batch_size = 5000
  metric_buffer_limit = 50000
```

### Using the legacy v1 output plugin
<a name="using-the-v1-output-plugin-legacy-support"></a>

 For existing Telegraf configurations using the v1 plugin: 

```
[[outputs.influxdb]]
  urls = ["https://your-cluster-endpoint:8086"]
  database = "DATABASE_NAME"
  skip_database_creation = true
  username = "ignored"           # Required but ignored
  password = "${INFLUX_TOKEN}"   # Use environment variable
  content_encoding = "gzip"
  
  ## Optional: Configure write parameters
  timeout = "10s"
  metric_batch_size = 5000
  metric_buffer_limit = 50000
```

## Basic Telegraf configuration example
<a name="basic-telegraf-configuration-example"></a>

 The following is a complete example that collects system metrics and writes them to InfluxDB 3: 

```
# Global Agent Configuration
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 5000
  metric_buffer_limit = 50000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = "s"
  hostname = ""
  omit_hostname = false

# Input Plugins - Collect system metrics
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

[[inputs.mem]]

[[inputs.net]]
  interfaces = ["eth*", "en*"]

[[inputs.system]]

# Output Plugin - Write to InfluxDB 3
[[outputs.influxdb_v2]]
  urls = ["https://your-cluster-endpoint:8086"]
  token = "${INFLUX_TOKEN}"
  organization = ""
  bucket = "telegraf_metrics"
  content_encoding = "gzip"
```

## Best practices for Telegraf with InfluxDB 3
<a name="best-practices-for-telegraf-with-influxdb-3"></a>
+  **Security** 
  +  Store tokens in environment variables or secret stores. 
  +  Never hardcode tokens in configuration files. 
  +  Use HTTPS endpoints for production deployments. 
+  **Performance optimization** 
  +  Enable gzip compression with `content_encoding = "gzip"`. 
  +  Configure appropriate batch sizes (5000-10000 metrics). 
  +  Set buffer limits based on available memory. 
  +  Use precision appropriate for your use case (seconds often sufficient). 
+  **Network configuration** 
  +  For private clusters, run Telegraf within the same VPC. 
  +  Configure appropriate timeouts for your network latency. 
  +  Use the writer/reader endpoint for write operations. 
+  **Monitoring** 
  +  Enable Telegraf's internal metrics plugin to monitor agent performance. 
  +  Monitor write errors and retries. 
  +  Set up alerts for buffer overflow conditions. 
+  **Data organization** 
  +  Use consistent tag naming across input plugins. 
  +  Leverage Telegraf's processor plugins to normalize data. 
  +  Apply tag filtering to control cardinality. 

## Running Telegraf
<a name="running-telegraf"></a>

 To start Telegraf with your configuration, do the following: 

```
# Test configuration
telegraf --config telegraf.conf --test

# Run Telegraf
telegraf --config telegraf.conf

# Run as a service (systemd)
sudo systemctl start telegraf
```

### Common Telegraf plugins for time series data
<a name="common-telegraf-plugins-for-time-series-data"></a>

 **Popular input plugins:** 
+  `inputs.cpu`, `inputs.mem`, `inputs.disk` - System metrics. 
+  `inputs.docker`, `inputs.kubernetes` - Container metrics. 
+  `inputs.prometheus` - Scrape Prometheus endpoints. 
+  `inputs.snmp` - Network device monitoring. 
+  `inputs.mqtt_consumer` - IoT data collection. 
+  `inputs.http_listener_v2` - HTTP webhook receiver. 

 **Useful processor plugins:** 
+  `processors.regex` - Transform tag/field names. 
+  `processors.converter` - Change field data types. 
+  `processors.aggregator` - Aggregate metrics. 
+  `processors.filter` - Filter metrics based on conditions. 

 By leveraging Telegraf's extensive plugin ecosystem with InfluxDB 3, you can build comprehensive monitoring solutions that collect data from diverse sources and efficiently write it to your time-series database. 

## Best practices for writing data
<a name="best-practices-for-writing-data"></a>

When writing data, we recommend the following:
+ Batch optimization
  +  Optimal batch size: 5,000-10,000 lines or 10MB per request. 
  +  Use compression (gzip) for large payloads. 
  + Sort tags by key in lexicographic order for better performance. 
+ Timestamp precision
  +  Use the coarsest precision that meets your needs. 
  +  Explicitly specify precision to avoid ambiguity. 
  +  Maintain consistent precision across your application. 
+ Error handling
  +  Implement retry logic for transient failures. 
  +  Use `accept_partial=true` for resilient batch operations. 
  +  Monitor write errors through CloudWatch metrics. 
+ Performance tuning
  +  Use `no_sync=true` for high-throughput scenarios. 
  +  Distribute writes across multiple connections. 
  +  Use the writer/reader endpoint for all write operations. 
+ Schema considerations
  +  Tags are immutable once created. 
  +  Fields and tags cannot share the same name. 
  +  Design schemas with query patterns in mind. 
  +  Keep tag cardinality under control. 
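The retry recommendation above can be sketched as a small wrapper with exponential backoff. `write_fn` stands in for any client write call (for example, `client.write`); which exceptions count as transient depends on your client library, so `ConnectionError` here is only a stand-in:

```python
import time

# Retry sketch for transient write failures. write_fn stands in for any
# client write call; ConnectionError is a stand-in for whatever transient
# errors your client library raises.
def write_with_retry(write_fn, lines, retries=3, backoff=0.5):
    for attempt in range(retries + 1):
        try:
            return write_fn(lines)
        except ConnectionError:
            if attempt == retries:
                raise  # exhausted retries: surface the error
            time.sleep(backoff * (2 ** attempt))  # exponential backoff

# Usage with a fake writer that fails twice before succeeding
# (small backoff keeps the demo fast):
calls = {"n": 0}
def flaky_write(lines):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

result = write_with_retry(flaky_write, ["home,room=Kitchen temp=21.0"], backoff=0.01)
print(result, calls["n"])  # succeeds on the third call
```

In production, also cap total retry time and emit a metric on each failure so CloudWatch alarms can catch sustained write errors.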

Important differences from previous versions:
+  Immutable tags: Once a tag is created in a table, its type cannot be changed. 
+  No tag/field name conflicts: A tag and field cannot have the same name within a table. 
+  Schema-on-write: InfluxDB 3 validates data types on write. 
+  Automatic table creation: Tables are created automatically on first write. 
+  Strict type checking: Field types must remain consistent across all writes. 

 By leveraging the appropriate write API and following these best practices, you can efficiently ingest time-series data into your Timestream for InfluxDB 3 instance while maintaining high performance and data integrity. 