InfluxDBv3CoreParameters
All the customer-modifiable InfluxDB v3 Core parameters in Timestream for InfluxDB.
Contents
- dataFusionConfig
-
Provides custom configuration to DataFusion as a comma-separated list of key:value pairs.
Type: String
Pattern:
[a-zA-Z0-9_]+=[^,\s]+(?:,[a-zA-Z0-9_]+=[^,\s]+)*
Required: No
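For illustration, the following value satisfies the documented pattern; the setting names are placeholders rather than a list of supported DataFusion options.

```python
import re

# Documented pattern for dataFusionConfig: comma-separated key=value pairs.
DATAFUSION_CONFIG_PATTERN = r"[a-zA-Z0-9_]+=[^,\s]+(?:,[a-zA-Z0-9_]+=[^,\s]+)*"

# Placeholder setting names, used only to show the expected format.
candidate = "target_partitions=4,batch_size=8192"

assert re.fullmatch(DATAFUSION_CONFIG_PATTERN, candidate) is not None
```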
- dataFusionMaxParquetFanout
-
Specifies the maximum fanout when multiple Parquet files must be read in sorted order (for example, for deduplication).
Default: 1000
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 1000000.
Required: No
- dataFusionNumThreads
-
Sets the maximum number of DataFusion runtime threads to use.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 2048.
Required: No
- dataFusionRuntimeDisableLifoSlot
-
Disables the LIFO slot of the DataFusion runtime.
Type: Boolean
Required: No
- dataFusionRuntimeEventInterval
-
Sets the number of scheduler ticks after which the scheduler of the DataFusion tokio runtime polls for external events (for example, timers and I/O).
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 128.
Required: No
- dataFusionRuntimeGlobalQueueInterval
-
Sets the number of scheduler ticks after which the scheduler of the DataFusion runtime polls the global task queue.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 128.
Required: No
- dataFusionRuntimeMaxBlockingThreads
-
Specifies the limit for additional threads spawned by the DataFusion runtime.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 1024.
Required: No
- dataFusionRuntimeMaxIoEventsPerTick
-
Configures the maximum number of events processed per tick by the tokio DataFusion runtime.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 4096.
Required: No
- dataFusionRuntimeThreadKeepAlive
-
Sets a custom timeout for a thread in the blocking pool of the tokio DataFusion runtime.
Type: Duration object
Required: No
- dataFusionRuntimeThreadPriority
-
Sets the thread priority for tokio DataFusion runtime workers.
Default: 10
Type: Integer
Valid Range: Minimum value of -20. Maximum value of 19.
Required: No
- dataFusionRuntimeType
-
Specifies the DataFusion tokio runtime type.
Default: multi-thread
Type: String
Valid Values:
multi-thread | multi-thread-alt
Required: No
- dataFusionUseCachedParquetLoader
-
Uses a cached Parquet loader when reading Parquet files from the object store.
Type: Boolean
Required: No
- deleteGracePeriod
-
Specifies the grace period before permanently deleting data.
Default: 24h
Type: Duration object
Required: No
- disableParquetMemCache
-
Disables the in-memory Parquet cache. By default, the cache is enabled.
Type: Boolean
Required: No
- distinctCacheEvictionInterval
-
Specifies the interval to evict expired entries from the distinct value cache, expressed as a human-readable duration (for example, 20s, 1m, 1h).
Default: 10s
Type: Duration object
Required: No
- execMemPoolBytes
-
Specifies the size of the memory pool used during query execution. Can be given as an absolute value in bytes or as a percentage of the total available memory (for example, 8000000000 or 10%).
Default: 20%
Type: PercentOrAbsoluteLong object
Note: This object is a Union. Only one member of this object can be specified or returned.
Required: No
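As a rough illustration of how a percent-or-absolute value resolves, the sketch below converts either form into a byte count for a given amount of total memory. The helper is hypothetical and not part of the API; note that execMemPoolBytes takes its absolute form in bytes, while forceSnapshotMemThreshold (next) takes its absolute form in MB.

```python
def resolve_percent_or_absolute(value: str, total_memory_bytes: int) -> int:
    """Hypothetical helper: resolve "10%" against total memory, or pass bytes through."""
    if value.endswith("%"):
        return total_memory_bytes * int(value[:-1]) // 100
    return int(value)

# The execMemPoolBytes default of 20% on a 16 GiB instance:
total = 16 * 1024**3
print(resolve_percent_or_absolute("20%", total))         # 3435973836 (about 3.2 GiB)
print(resolve_percent_or_absolute("8000000000", total))  # 8000000000
```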
- forceSnapshotMemThreshold
-
Specifies the threshold for the internal memory buffer. Supports either a percentage (a portion of available memory) or an absolute value in MB (for example, 70% or 100).
Default: 70%
Type: PercentOrAbsoluteLong object
Note: This object is a Union. Only one member of this object can be specified or returned.
Required: No
- gen1Duration
-
Specifies the time range that each Parquet file covers; each row is written to the file whose range contains its data timestamp. Supported durations are 1m, 5m, and 10m. These files are known as “generation 1” files, which the compactor in InfluxDB 3 Enterprise can merge into larger generations.
Default: 10m
Type: Duration object
Required: No
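The sketch below illustrates how a row's timestamp selects a gen1 file window. The assumption that windows align to Unix epoch boundaries is made here for illustration and is not stated on this page.

```python
from datetime import datetime, timedelta, timezone

def gen1_window(ts: datetime, gen1_duration: timedelta) -> tuple:
    """Return the (start, end) of the gen1 window containing ts (illustration only)."""
    seconds = int(gen1_duration.total_seconds())
    start_epoch = int(ts.timestamp()) // seconds * seconds
    start = datetime.fromtimestamp(start_epoch, tz=timezone.utc)
    return start, start + gen1_duration

# With the default gen1Duration of 10m, a row stamped 12:34:56 UTC lands in
# the file covering 12:30:00-12:40:00.
print(gen1_window(datetime(2025, 1, 1, 12, 34, 56, tzinfo=timezone.utc),
                  timedelta(minutes=10)))
```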
- gen1LookbackDuration
-
Specifies how far back to look when creating generation 1 Parquet files.
Default: 24h
Type: Duration object
Required: No
- hardDeleteDefaultDuration
-
Sets the default duration for hard deletion of data.
Default: 90d
Type: Duration object
Required: No
- lastCacheEvictionInterval
-
Specifies the interval to evict expired entries from the Last-N-Value cache, expressed as a human-readable duration (for example, 20s, 1m, 1h).
Default: 10s
Type: Duration object
Required: No
- logFilter
-
Sets the filter directive for logs.
Type: String
Length Constraints: Minimum length of 0. Maximum length of 1024.
Required: No
- logFormat
-
Defines the message format for logs.
Default: full
Type: String
Valid Values:
full
Required: No
- maxHttpRequestSize
-
Specifies the maximum size of HTTP requests, in bytes.
Default: 10485760
Type: Long
Valid Range: Minimum value of 1024. Maximum value of 16777216.
Required: No
- parquetMemCachePruneInterval
-
Sets the interval to check if the in-memory Parquet cache needs to be pruned.
Default: 1s
Type: Duration object
Required: No
- parquetMemCachePrunePercentage
-
Specifies the percentage of entries to prune during a prune operation on the in-memory Parquet cache.
Default: 0.1
Type: Float
Valid Range: Minimum value of 0. Maximum value of 1.
Required: No
- parquetMemCacheQueryPathDuration
-
Specifies the time window for caching recent Parquet files in memory.
Default: 5h
Type: Duration object
Required: No
- parquetMemCacheSize
-
Specifies the size of the in-memory Parquet cache, in megabytes or as a percentage of total available memory.
Default: 20%
Type: PercentOrAbsoluteLong object
Note: This object is a Union. Only one member of this object can be specified or returned.
Required: No
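Taken together, parquetMemCachePruneInterval, parquetMemCachePrunePercentage, and parquetMemCacheSize describe a periodic pruning loop. The sketch below shows that interaction as implied by the descriptions on this page; it is not InfluxDB's implementation.

```python
import time
from collections import OrderedDict

def prune_loop(cache: OrderedDict, max_size_bytes: int,
               prune_interval_s: float = 1.0,    # parquetMemCachePruneInterval default: 1s
               prune_percentage: float = 0.1):   # parquetMemCachePrunePercentage default: 0.1
    """Illustrative prune loop over an OrderedDict of name -> bytes entries."""
    while True:
        time.sleep(prune_interval_s)
        used = sum(len(v) for v in cache.values())
        if used > max_size_bytes:
            # Evict the configured fraction of entries, oldest first.
            for _ in range(max(1, int(len(cache) * prune_percentage))):
                cache.popitem(last=False)
```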
- preemptiveCacheAge
-
Specifies the interval to prefetch into the Parquet cache during compaction.
Default: 3d
Type: Duration object
Required: No
- queryFileLimit
-
Limits the number of Parquet files a query can access. If a query attempts to read more than this limit, InfluxDB 3 returns an error.
Default: 432
Type: Integer
Valid Range: Minimum value of 0. Maximum value of 1024.
Required: No
- queryLogSize
-
Defines the size of the query log. Up to this many queries remain in the log before older queries are evicted to make room for new ones.
Default: 1000
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 10000.
Required: No
- retentionCheckInterval
-
The interval at which retention policies are checked and enforced, entered as a human-readable duration (for example, 30m or 1h).
Default: 30m
Type: Duration object
Required: No
- snapshottedWalFilesToKeep
-
Specifies the number of snapshotted WAL files to retain in the object store. Flushing does not clear WAL files immediately; they are deleted once the number of snapshotted WAL files exceeds this value.
Default: 300
Type: Integer
Valid Range: Minimum value of 0. Maximum value of 10000.
Required: No
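A minimal sketch of the retention rule described above: once the number of snapshotted WAL files exceeds the configured count, the oldest files become eligible for deletion. The oldest-to-newest sort by file name is an assumption made for illustration.

```python
def wal_files_to_delete(snapshotted_files, files_to_keep=300):
    """Return snapshotted WAL files eligible for deletion (illustration only)."""
    ordered = sorted(snapshotted_files)  # assumes names sort oldest to newest
    excess = len(ordered) - files_to_keep
    return ordered[:excess] if excess > 0 else []
```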
- tableIndexCacheConcurrencyLimit
-
Limits the concurrency level for table index cache operations.
Default: 8
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 100.
Required: No
- tableIndexCacheMaxEntries
-
Specifies the maximum number of entries in the table index cache.
Default: 1000
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 1000.
Required: No
- walMaxWriteBufferSize
-
Specifies the maximum number of write requests that can be buffered before a flush must be executed and succeed.
Default: 100000
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 1000000.
Required: No
- walReplayConcurrencyLimit
-
Specifies the concurrency limit during WAL replay. Setting this value too high can lead to out-of-memory (OOM) errors. The default is determined dynamically.
Default: max(num_cpus, 10)
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 100.
Required: No
- walReplayFailOnError
-
Determines whether WAL replay should fail when encountering errors.
Default: false
Type: Boolean
Required: No
- walSnapshotSize
-
Defines the number of WAL files to attempt to remove in a snapshot. This value, multiplied by the WAL flush interval, determines how often snapshots are taken.
Default: 600
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 10000.
Required: No
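To make the relationship concrete, the arithmetic below assumes a WAL flush interval of 1 second; the flush interval is not among the parameters on this page, so that value is an assumption used only for illustration.

```python
wal_flush_interval_s = 1   # assumed; not a parameter documented on this page
wal_snapshot_size = 600    # walSnapshotSize default

snapshot_every_s = wal_snapshot_size * wal_flush_interval_s
print(f"snapshot roughly every {snapshot_every_s // 60} minutes")  # -> 10 minutes
```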
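These parameters are typically supplied through a DB parameter group. The sketch below uses the AWS SDK for Python (Boto3); the InfluxDBv3Core member name and the exact request shape are assumptions based on this data type's name, so confirm them against the CreateDbParameterGroup request syntax before use.

```python
import boto3

client = boto3.client("timestream-influxdb")

response = client.create_db_parameter_group(
    Name="influxdb-v3-core-params",
    Description="Example parameter group for InfluxDB v3 Core",
    Parameters={
        "InfluxDBv3Core": {                  # assumed member name for this data type
            "queryFileLimit": 432,           # Integer
            "walSnapshotSize": 600,          # Integer
            "maxHttpRequestSize": 10485760,  # Long (bytes)
            "logFormat": "full",             # String
            # Duration and PercentOrAbsoluteLong parameters (for example,
            # gen1Duration or execMemPoolBytes) take their own object shapes;
            # see those type references for the exact members.
        }
    },
)
print(response)
```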