Package software.amazon.awscdk.services.dynamodb
Amazon DynamoDB Construct Library
The DynamoDB construct library has two table constructs - Table and TableV2. TableV2 is the preferred construct for all use cases, including creating a single table or a table with multiple replicas.
Here is a minimal deployable DynamoDB table using TableV2:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.build();
By default, TableV2 will create a single table in the main deployment region referred to as the primary table. The properties of the primary table are configurable via TableV2 properties. For example, consider the following DynamoDB table created using the TableV2 construct defined in a Stack being deployed to us-west-2:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.contributorInsightsSpecification(ContributorInsightsSpecification.builder()
.enabled(true)
.build())
.tableClass(TableClass.STANDARD_INFREQUENT_ACCESS)
.pointInTimeRecoverySpecification(PointInTimeRecoverySpecification.builder()
.pointInTimeRecoveryEnabled(true)
.build())
.build();
The above TableV2 definition will result in the provisioning of a single table in us-west-2 with properties that match the properties set on the TableV2 instance.
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
Replicas
The TableV2 construct can be configured with replica tables. This will enable you to work with your table as a global table. To do this, the TableV2 construct must be defined in a Stack with a defined region. The main deployment region must not be given as a replica because this is created by default with the TableV2 construct. The following is a minimal example of defining TableV2 with replicas. This TableV2 definition will provision three copies of the table - one in us-west-2 (primary deployment region), one in us-east-1, and one in us-east-2.
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
Alternatively, you can add new replicas to an instance of the TableV2 construct using the addReplica method:
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build()))
.build();
globalTable.addReplica(ReplicaTableProps.builder().region("us-east-2").deletionProtection(true).build());
The following properties are configurable on a per-replica basis, but will be inherited from the TableV2 properties if not specified:
- contributorInsightsSpecification
- deletionProtection
- pointInTimeRecoverySpecification
- tableClass
- readCapacity (only configurable if the TableV2 billing mode is PROVISIONED)
- globalSecondaryIndexes (only contributorInsightsSpecification and readCapacity)
The following example shows how to define properties on a per-replica basis:
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.contributorInsightsSpecification(ContributorInsightsSpecification.builder()
.enabled(true)
.build())
.pointInTimeRecoverySpecification(PointInTimeRecoverySpecification.builder()
.pointInTimeRecoveryEnabled(true)
.build())
.replicas(List.of(ReplicaTableProps.builder()
.region("us-east-1")
.tableClass(TableClass.STANDARD_INFREQUENT_ACCESS)
.pointInTimeRecoverySpecification(PointInTimeRecoverySpecification.builder()
.pointInTimeRecoveryEnabled(false)
.build())
.build(), ReplicaTableProps.builder()
.region("us-east-2")
.contributorInsightsSpecification(ContributorInsightsSpecification.builder()
.enabled(false)
.build())
.build()))
.build();
To obtain an ITableV2 reference to a specific replica table, call the replica method on an instance of the TableV2 construct and pass the replica region as an argument:
import software.amazon.awscdk.*;
User user;
public class FooStack extends Stack {
public final TableV2 globalTable;
public FooStack(Construct scope, String id, StackProps props) {
super(scope, id, props);
this.globalTable = TableV2.Builder.create(this, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
}
}
public class BarStackProps extends StackProps {
private ITableV2 replicaTable;
public ITableV2 getReplicaTable() {
return this.replicaTable;
}
public BarStackProps replicaTable(ITableV2 replicaTable) {
this.replicaTable = replicaTable;
return this;
}
}
public class BarStack extends Stack {
public BarStack(Construct scope, String id, BarStackProps props) {
super(scope, id, props);
// user is given grantWriteData permissions to replica in us-east-1
props.replicaTable.grantWriteData(user);
}
}
App app = new App();
FooStack fooStack = FooStack.Builder.create(app, "FooStack").env(Environment.builder().region("us-west-2").build()).build();
BarStack barStack = new BarStack(app, "BarStack", new BarStackProps()
.replicaTable(fooStack.globalTable.replica("us-east-1"))
.env(Environment.builder().region("us-east-1").build())
);
Note: You can create an instance of the TableV2 construct with as many replicas as needed as long as there is only one replica per region. After table creation you can add or remove replicas, but you can only add or remove a single replica in each update.
Multi-Region Strong Consistency (MRSC)
By default, DynamoDB global tables provide eventual consistency across regions. For applications requiring strong consistency across regions, you can configure Multi-Region Strong Consistency (MRSC) using the multiRegionConsistency property.
MRSC global tables can be configured in two ways:
- Three replicas: Deploy your table across three regions within the same region set
- Two replicas + one witness: Deploy your table across two regions with a witness region for consensus
Region Sets
MRSC global tables must be deployed within the same region set. The supported region sets are:
- US Region set: us-east-1, us-east-2, us-west-2
- EU Region set: eu-west-1, eu-west-2, eu-west-3, eu-central-1
- AP Region set: ap-northeast-1, ap-northeast-2, ap-northeast-3
Three Replicas Configuration
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 mrscTable = TableV2.Builder.create(stack, "MRSCTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.multiRegionConsistency(MultiRegionConsistency.STRONG)
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
Two Replicas + Witness Configuration
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 mrscTable = TableV2.Builder.create(stack, "MRSCTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.multiRegionConsistency(MultiRegionConsistency.STRONG)
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build()))
.witnessRegion("us-east-2")
.build();
Important Considerations
- Witness regions can only be used with MultiRegionConsistency.STRONG. Attempting to specify a witness region with eventual consistency will result in a validation error.
- Region validation: All regions (primary, replicas, and witness) must be within the same region set.
- Replica count: When using a witness region, you must have exactly 2 replicas (including the primary). Without a witness region, you must have exactly 3 replicas.
- Performance: MRSC provides strong consistency but may have higher latency compared to eventual consistency.
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/V2globaltables_HowItWorks.html#V2globaltables_HowItWorks.consistency-modes-mrsc
Billing
The TableV2 construct can be configured with on-demand or provisioned billing:
- On-demand - The default option. This is a flexible billing option capable of serving requests without capacity planning. The billing mode will be PAY_PER_REQUEST.
  - You can optionally specify maxReadRequestUnits or maxWriteRequestUnits on individual tables and associated global secondary indexes (GSIs). When you configure maximum throughput for an on-demand table, throughput requests that exceed the maximum amount specified will be throttled.
- Provisioned - Specify the readCapacity and writeCapacity that you need for your application. The billing mode will be PROVISIONED. Capacity can be configured using one of the following modes:
  - Fixed - provisioned throughput capacity is configured with a fixed number of I/O operations per second.
  - Autoscaled - provisioned throughput capacity is dynamically adjusted on your behalf in response to actual traffic patterns.
Note: writeCapacity can only be configured using autoscaled capacity.
The following example shows how to configure TableV2 with on-demand billing:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.billing(Billing.onDemand())
.build();
The following example shows how to configure TableV2 with on-demand billing with optional maximum throughput configured:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.billing(Billing.onDemand(MaxThroughputProps.builder()
.maxReadRequestUnits(100)
.maxWriteRequestUnits(115)
.build()))
.build();
When using provisioned billing, you must also specify readCapacity and writeCapacity. You can choose to configure readCapacity with fixed capacity or autoscaled capacity, but writeCapacity can only be configured with autoscaled capacity. The following example shows how to configure TableV2 with provisioned billing:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.billing(Billing.provisioned(ThroughputProps.builder()
.readCapacity(Capacity.fixed(10))
.writeCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().maxCapacity(15).build()))
.build()))
.build();
When using provisioned billing, you can configure the readCapacity on a per-replica basis:
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.billing(Billing.provisioned(ThroughputProps.builder()
.readCapacity(Capacity.fixed(10))
.writeCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().maxCapacity(15).build()))
.build()))
.replicas(List.of(ReplicaTableProps.builder()
.region("us-east-1")
.build(), ReplicaTableProps.builder()
.region("us-east-2")
.readCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().maxCapacity(20).targetUtilizationPercent(50).build()))
.build()))
.build();
When changing the billing for a table from provisioned to on-demand or from on-demand to provisioned, seedCapacity must be configured for each autoscaled resource:
TableV2 globalTable = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.billing(Billing.provisioned(ThroughputProps.builder()
.readCapacity(Capacity.fixed(10))
.writeCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().maxCapacity(10).seedCapacity(20).build()))
.build()))
.build();
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
Warm Throughput
Warm throughput refers to the number of read and write operations your DynamoDB table can instantaneously support.
This optional configuration allows you to pre-warm your table or index to handle anticipated throughput, ensuring optimal performance under expected load.
The Warm Throughput configuration settings are automatically replicated across all Global Table replicas.
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("id").type(AttributeType.STRING).build())
.warmThroughput(WarmThroughput.builder()
.readUnitsPerSecond(15000)
.writeUnitsPerSecond(20000)
.build())
.build();
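The same pre-warming can be applied to an individual global secondary index. A minimal sketch, assuming GlobalSecondaryIndexPropsV2 accepts a warmThroughput option and using a hypothetical index name:
TableV2 table = TableV2.Builder.create(this, "Table")
    .partitionKey(Attribute.builder().name("id").type(AttributeType.STRING).build())
    .globalSecondaryIndexes(List.of(GlobalSecondaryIndexPropsV2.builder()
        .indexName("gsi") // hypothetical index name
        .partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
        // pre-warm only this index to the anticipated read/write load
        .warmThroughput(WarmThroughput.builder()
            .readUnitsPerSecond(15000)
            .writeUnitsPerSecond(20000)
            .build())
        .build()))
    .build();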
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/warm-throughput.html
Encryption
All user data stored in a DynamoDB table is fully encrypted at rest. When creating an instance of the TableV2 construct, you can select the following table encryption options:
- AWS owned keys - Default encryption type. The keys are owned by DynamoDB (no additional charge).
- AWS managed keys - The keys are stored in your account and are managed by AWS KMS (AWS KMS charges apply).
- Customer managed keys - The keys are stored in your account and are created, owned, and managed by you. You have full control over the KMS keys (AWS KMS charges apply).
The following is an example of how to configure TableV2 with encryption using an AWS owned key:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.encryption(TableEncryptionV2.dynamoOwnedKey())
.build();
The following is an example of how to configure TableV2 with encryption using an AWS managed key:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.encryption(TableEncryptionV2.awsManagedKey())
.build();
When configuring TableV2 with encryption using customer managed keys, you must specify the KMS key for the primary table as the tableKey. A map of replicaKeyArns must be provided containing each replica region and the associated KMS key ARN:
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.kms.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
Key tableKey = new Key(stack, "Key");
Map<String, String> replicaKeyArns = Map.of(
"us-east-1", "arn:aws:kms:us-east-1:123456789012:key/g24efbna-az9b-42ro-m3bp-cq249l94fca6",
"us-east-2", "arn:aws:kms:us-east-2:123456789012:key/h90bkasj-bs1j-92wp-s2ka-bh857d60bkj8");
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.encryption(TableEncryptionV2.customerManagedKey(tableKey, replicaKeyArns))
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
Note: When encryption is configured with customer managed keys, you must have a key already created in each replica region.
Further reading: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt
Secondary Indexes
Secondary indexes allow efficient access to data with attributes other than the primaryKey. DynamoDB supports two types of secondary indexes:
- Global secondary index - An index with partition key(s) and optional sort key(s) that can be different from those on the base table. A globalSecondaryIndex is considered "global" because queries on the index can span all of the data in the base table, across all partitions. A globalSecondaryIndex is stored in its own partition space away from the base table and scales separately from the base table.
- Local secondary index - An index that has the same partitionKey as the base table, but a different sortKey. A localSecondaryIndex is "local" in the sense that every partition of a localSecondaryIndex is scoped to a base table partition that has the same partitionKey value.
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html
Global Secondary Indexes
TableV2 can be configured with globalSecondaryIndexes by providing them as a TableV2 property:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.globalSecondaryIndexes(List.of(GlobalSecondaryIndexPropsV2.builder()
.indexName("gsi")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.build()))
.build();
Compound Keys
Global secondary indexes support compound keys, allowing you to specify multiple partition keys and/or multiple sort keys. This enables more flexible query patterns for complex data models.
Key Constraints:
- You can specify up to 4 partition keys per global secondary index
- You can specify up to 4 sort keys per global secondary index
- Use either partitionKey (singular) or partitionKeys (plural), but not both
- Use either sortKey (singular) or sortKeys (plural), but not both
- At least one partition key must be specified (either partitionKey or partitionKeys)
- For multiple keys, you must use the plural parameters (partitionKeys and/or sortKeys)
- Keys cannot be added or modified after index creation - attempting to add additional keys to an existing index will result in an error
Example with compound partition and sort keys:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.globalSecondaryIndexes(List.of(GlobalSecondaryIndexPropsV2.builder()
.indexName("compound-gsi")
.partitionKeys(List.of(Attribute.builder().name("gsi_pk1").type(AttributeType.STRING).build(), Attribute.builder().name("gsi_pk2").type(AttributeType.NUMBER).build()))
.sortKeys(List.of(Attribute.builder().name("gsi_sk1").type(AttributeType.STRING).build(), Attribute.builder().name("gsi_sk2").type(AttributeType.BINARY).build()))
.build()))
.build();
You can also add a globalSecondaryIndex using the addGlobalSecondaryIndex method:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.globalSecondaryIndexes(List.of(GlobalSecondaryIndexPropsV2.builder()
.indexName("gsi1")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.build()))
.build();
table.addGlobalSecondaryIndex(GlobalSecondaryIndexPropsV2.builder()
.indexName("gsi2")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.build());
// Add a GSI with compound keys
table.addGlobalSecondaryIndex(GlobalSecondaryIndexPropsV2.builder()
.indexName("compound-gsi2")
.partitionKeys(List.of(Attribute.builder().name("compound_pk1").type(AttributeType.STRING).build(), Attribute.builder().name("compound_pk2").type(AttributeType.NUMBER).build()))
.sortKey(Attribute.builder().name("sk").type(AttributeType.STRING).build())
.build());
You can configure readCapacity and writeCapacity on a globalSecondaryIndex when a TableV2 is configured with provisioned billing. If TableV2 is configured with provisioned billing but readCapacity or writeCapacity are not configured on a globalSecondaryIndex, then they will be inherited from the capacity settings specified with the billing configuration:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.billing(Billing.provisioned(ThroughputProps.builder()
.readCapacity(Capacity.fixed(10))
.writeCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().maxCapacity(10).build()))
.build()))
.globalSecondaryIndexes(List.of(GlobalSecondaryIndexPropsV2.builder()
.indexName("gsi1")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.readCapacity(Capacity.fixed(15))
.build(), GlobalSecondaryIndexPropsV2.builder()
.indexName("gsi2")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.writeCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().minCapacity(5).maxCapacity(20).build()))
.build()))
.build();
All globalSecondaryIndexes for replica tables are inherited from the primary table. You can configure contributorInsightsSpecification and readCapacity for each globalSecondaryIndex on a per-replica basis:
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.contributorInsightsSpecification(ContributorInsightsSpecification.builder()
.enabled(true)
.build())
.billing(Billing.provisioned(ThroughputProps.builder()
.readCapacity(Capacity.fixed(10))
.writeCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().maxCapacity(10).build()))
.build()))
// each global secondary index will inherit contributor insights as true
.globalSecondaryIndexes(List.of(GlobalSecondaryIndexPropsV2.builder()
.indexName("gsi1")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.readCapacity(Capacity.fixed(15))
.build(), GlobalSecondaryIndexPropsV2.builder()
.indexName("gsi2")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.writeCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().minCapacity(5).maxCapacity(20).build()))
.build()))
.replicas(List.of(ReplicaTableProps.builder()
.region("us-east-1")
.globalSecondaryIndexOptions(Map.of(
"gsi1", ReplicaGlobalSecondaryIndexOptions.builder()
.readCapacity(Capacity.autoscaled(AutoscaledCapacityOptions.builder().minCapacity(1).maxCapacity(10).build()))
.build()))
.build(), ReplicaTableProps.builder()
.region("us-east-2")
.globalSecondaryIndexOptions(Map.of(
"gsi2", ReplicaGlobalSecondaryIndexOptions.builder()
.contributorInsightsSpecification(ContributorInsightsSpecification.builder()
.enabled(false)
.build())
.build()))
.build()))
.build();
Local Secondary Indexes
TableV2 can only be configured with localSecondaryIndexes when a sortKey is defined as a TableV2 property.
You can provide localSecondaryIndexes as a TableV2 property:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.sortKey(Attribute.builder().name("sk").type(AttributeType.NUMBER).build())
.localSecondaryIndexes(List.of(LocalSecondaryIndexProps.builder()
.indexName("lsi")
.sortKey(Attribute.builder().name("sk").type(AttributeType.NUMBER).build())
.build()))
.build();
Alternatively, you can add a localSecondaryIndex using the addLocalSecondaryIndex method:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.sortKey(Attribute.builder().name("sk").type(AttributeType.NUMBER).build())
.localSecondaryIndexes(List.of(LocalSecondaryIndexProps.builder()
.indexName("lsi1")
.sortKey(Attribute.builder().name("sk").type(AttributeType.NUMBER).build())
.build()))
.build();
table.addLocalSecondaryIndex(LocalSecondaryIndexProps.builder()
.indexName("lsi2")
.sortKey(Attribute.builder().name("sk").type(AttributeType.NUMBER).build())
.build());
Streams
Each DynamoDB table produces an independent stream based on all its writes, regardless of the origination point for those writes. DynamoDB supports two stream types:
- DynamoDB streams - Capture item-level changes in your table, and push the changes to a DynamoDB stream. You then can access the change information through the DynamoDB Streams API.
- Kinesis streams - Amazon Kinesis Data Streams for DynamoDB captures item-level changes in your table, and replicates the changes to a Kinesis data stream. You then can consume and manage the change information from Kinesis.
DynamoDB Streams
A dynamoStream can be configured as a TableV2 property. If the TableV2 instance has replica tables, then all replica tables will inherit the dynamoStream setting from the primary table. If replicas are configured, but dynamoStream is not configured, then the primary table and all replicas will be automatically configured with the NEW_AND_OLD_IMAGES stream view type.
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.kinesis.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("id").type(AttributeType.STRING).build())
.dynamoStream(StreamViewType.OLD_IMAGE)
// tables in us-west-2, us-east-1, and us-east-2 all have a dynamo stream type of OLD_IMAGE
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
Kinesis Streams
A kinesisStream can be configured as a TableV2 property. Replica tables will not inherit the kinesisStream configured for the primary table, so it should be added on a per-replica basis.
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.kinesis.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
Stream stream1 = new Stream(stack, "Stream1");
IStream stream2 = Stream.fromStreamArn(stack, "Stream2", "arn:aws:kinesis:us-east-2:123456789012:stream/my-stream");
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("id").type(AttributeType.STRING).build())
.kinesisStream(stream1) // for table in us-west-2
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder()
.region("us-east-2")
.kinesisStream(stream2)
.build()))
.build();
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html
Keys
When an instance of the TableV2 construct is defined, you must define its schema using the partitionKey (required) and sortKey (optional) properties.
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.sortKey(Attribute.builder().name("sk").type(AttributeType.NUMBER).build())
.build();
Contributor Insights
Enabling contributorInsightsSpecification for TableV2 will provide information about the most accessed and throttled items (or only the most throttled items, depending on the configured mode) in a table or globalSecondaryIndex. DynamoDB delivers this information to you via CloudWatch Contributor Insights rules, reports, and graphs of report data.
By default, Contributor Insights for DynamoDB monitors all requests, including both the most accessed and most throttled items.
To limit the scope to only the most accessed or only the most throttled items, use the optional mode parameter.
- To monitor all traffic on a table or index, set mode to ContributorInsightsMode.ACCESSED_AND_THROTTLED_KEYS.
- To monitor only throttled traffic on a table or index, set mode to ContributorInsightsMode.THROTTLED_KEYS.
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.contributorInsightsSpecification(ContributorInsightsSpecification.builder()
.enabled(true)
.mode(ContributorInsightsMode.ACCESSED_AND_THROTTLED_KEYS)
.build())
.build();
When you use Table, you can enable contributor insights for the table or a specific global secondary index by setting the enabled property of contributorInsightsSpecification to true.
Table table = Table.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.contributorInsightsSpecification(ContributorInsightsSpecification.builder() // for a table
.enabled(true)
.mode(ContributorInsightsMode.THROTTLED_KEYS).build())
.build();
table.addGlobalSecondaryIndex(GlobalSecondaryIndexProps.builder()
.contributorInsightsSpecification(ContributorInsightsSpecification.builder() // for a specific global secondary index
.enabled(true).build())
.indexName("gsi")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.build());
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/contributorinsights_HowItWorks.html
Deletion Protection
deletionProtection determines if your DynamoDB table is protected from deletion and is configurable as a TableV2 property. When enabled, the table cannot be deleted by any user or process.
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.deletionProtection(true)
.build();
You can also specify the removalPolicy as a property of the TableV2 construct. This property allows you to control what happens to tables provisioned using TableV2 during stack deletion. By default, the removalPolicy is RETAIN which will cause all tables provisioned using TableV2 to be retained in the account, but orphaned from the stack they were created in. You can also set the removalPolicy to DESTROY which will delete all tables created using TableV2 during stack deletion:
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
// applies to all replicas, i.e., us-west-2, us-east-1, us-east-2
.removalPolicy(RemovalPolicy.DESTROY)
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
deletionProtection is configurable on a per-replica basis. If the removalPolicy is set to DESTROY, but some replicas have deletionProtection enabled, then only the replicas without deletionProtection will be deleted during stack deletion:
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.removalPolicy(RemovalPolicy.DESTROY)
.deletionProtection(true)
// only the replica in us-east-1 will be deleted during stack deletion
.replicas(List.of(ReplicaTableProps.builder()
.region("us-east-1")
.deletionProtection(false)
.build(), ReplicaTableProps.builder()
.region("us-east-2")
.deletionProtection(true)
.build()))
.build();
Point-in-Time Recovery
pointInTimeRecoverySpecification provides automatic backups of your DynamoDB table data, which helps protect your tables from accidental write or delete operations.
You can also choose to set recoveryPeriodInDays to a value between 1 and 35 which dictates how many days of recoverable data is stored. If no value is provided, the recovery period defaults to 35 days.
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.pointInTimeRecoverySpecification(PointInTimeRecoverySpecification.builder()
.pointInTimeRecoveryEnabled(true)
.recoveryPeriodInDays(4)
.build())
.build();
Table Class
You can configure a TableV2 instance with one of two table classes:
- STANDARD - the default table class, recommended for the vast majority of workloads.
- STANDARD_INFREQUENT_ACCESS - optimized for tables where storage is the dominant cost.
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.tableClass(TableClass.STANDARD_INFREQUENT_ACCESS)
.build();
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.TableClasses.html
Tags
You can add tags to a TableV2 in several ways. Adding the tags to the construct itself applies them to the primary table.
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.tags(List.of(CfnTag.builder().key("primaryTableTagKey").value("primaryTableTagValue").build()))
.build();
You can also add tags to replica tables by specifying them within the replica table properties.
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.replicas(List.of(ReplicaTableProps.builder()
.region("us-west-1")
.tags(List.of(CfnTag.builder().key("replicaTableTagKey").value("replicaTableTagValue").build()))
.build()))
.build();
Referencing Existing Global Tables
To reference an existing DynamoDB table in your CDK application, use the TableV2.fromTableName, TableV2.fromTableArn, or TableV2.fromTableAttributes
factory methods:
User user;

ITableV2 table = TableV2.fromTableArn(this, "ImportedTable", "arn:aws:dynamodb:us-east-1:123456789012:table/my-table");

// now you can call methods on the referenced table
table.grantReadWriteData(user);
If you intend to use the tableStreamArn (including indirectly, for example by creating an
aws-cdk-lib/aws-lambda-event-sources.DynamoEventSource on the referenced table), you must use the
TableV2.fromTableAttributes method and the tableStreamArn property must be populated.
To grant permissions to indexes for a referenced table you can either set grantIndexPermissions to true, or you can provide the indexes via the globalIndexes or localIndexes properties. This will enable grant* methods to also grant permissions to all table indexes.
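A minimal sketch of that pattern, assuming TableAttributesV2 exposes tableStreamArn and globalIndexes; the ARNs and index name below are hypothetical placeholders:
User user;

ITableV2 importedTable = TableV2.fromTableAttributes(this, "ImportedTableWithStream", TableAttributesV2.builder()
    .tableArn("arn:aws:dynamodb:us-east-1:123456789012:table/my-table")
    // populate tableStreamArn if the stream will be consumed, e.g. by a DynamoEventSource
    .tableStreamArn("arn:aws:dynamodb:us-east-1:123456789012:table/my-table/stream/2024-01-01T00:00:00.000")
    // let grant* methods cover this index too (alternatively, set grantIndexPermissions to true)
    .globalIndexes(List.of("gsi"))
    .build());

// grants now cover the table and the listed indexes
importedTable.grantReadWriteData(user);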
Resource Policy
Using resourcePolicy you can add a resource policy to a table in the form of a PolicyDocument:
// resource policy document
PolicyDocument policy = PolicyDocument.Builder.create()
    .statements(List.of(PolicyStatement.Builder.create()
        .actions(List.of("dynamodb:GetItem"))
        .principals(List.of(new AccountRootPrincipal()))
        .resources(List.of("*"))
        .build()))
    .build();

// table with resource policy
TableV2.Builder.create(this, "TableTestV2-1")
    .partitionKey(Attribute.builder().name("id").type(AttributeType.STRING).build())
    .removalPolicy(RemovalPolicy.DESTROY)
    .resourcePolicy(policy)
    .build();
Adding Resource Policy Statements Dynamically
You can also add resource policy statements to a table after it's created using the addToResourcePolicy method. Following the same pattern as KMS, resource policies use wildcard resources to avoid circular dependencies:
TableV2 table = TableV2.Builder.create(this, "Table")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.build();
// Standard resource policy (recommended approach)
table.addToResourcePolicy(PolicyStatement.Builder.create()
.actions(List.of("dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"))
.principals(List.of(new AccountRootPrincipal()))
.resources(List.of("*"))
.build());
// Allow specific service access
table.addToResourcePolicy(PolicyStatement.Builder.create()
.actions(List.of("dynamodb:Query"))
.principals(List.of(new ServicePrincipal("lambda.amazonaws.com")))
.resources(List.of("*"))
.build());
Scoped Resource Policies (Advanced)
For scoped resource policies that reference specific table ARNs, you must specify an explicit table name:
import software.amazon.awscdk.Fn;
// Table with explicit name enables scoped resource policies
TableV2 table = TableV2.Builder.create(this, "Table")
.tableName("my-explicit-table-name") // Required for scoped resources
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.build();
// Now you can use scoped resources
table.addToResourcePolicy(PolicyStatement.Builder.create()
.actions(List.of("dynamodb:GetItem"))
.principals(List.of(new AccountRootPrincipal()))
.resources(List.of(Fn.sub("arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/my-explicit-table-name"), Fn.sub("arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/my-explicit-table-name/index/*")))
.build());
Important Limitations:
- Auto-generated table names: Must use a wildcard resource ("*") to avoid circular dependencies
- Explicit table names: Enable scoped resources but lose CDK's automatic naming benefits
- CloudFormation constraint: Resource policies cannot reference the resource they're attached to during creation
TableV2 doesn’t support creating a replica and adding a resource-based policy to that replica in the same stack update in Regions other than the Region where you deploy the stack update. To incorporate a resource-based policy into a replica, you'll need to initially deploy the replica without the policy, followed by a subsequent update to include the desired policy.
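A minimal sketch of the second deployment in that flow, assuming ReplicaTableProps exposes a resourcePolicy property; the policy document and regions are placeholders, and the replica itself would have been created by an earlier deployment without the policy:
import software.amazon.awscdk.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();

// placeholder resource-based policy to attach to the existing us-east-1 replica
PolicyDocument replicaPolicy = PolicyDocument.Builder.create()
    .statements(List.of(PolicyStatement.Builder.create()
        .actions(List.of("dynamodb:GetItem"))
        .principals(List.of(new AccountRootPrincipal()))
        .resources(List.of("*"))
        .build()))
    .build();

TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
    .partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
    .replicas(List.of(ReplicaTableProps.builder()
        .region("us-east-1") // replica created without a policy in the previous deployment
        .resourcePolicy(replicaPolicy)
        .build()))
    .build();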
Grant Methods and Resource Policies
Grant methods like grantReadData(), grantWriteData(), and grantReadWriteData() automatically add permissions to resource policies when used with same-account principals (like AccountRootPrincipal). This happens transparently:
User user;

TableV2 table = TableV2.Builder.create(this, "Table")
    .partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
    .build();

// Automatically adds to the table's resource policy (same account)
table.grantReadData(new AccountRootPrincipal());

// Adds to the IAM user's identity policy (not the resource policy)
table.grantReadData(user);
How it works:
- Same-account principals (AccountRootPrincipal, AccountPrincipal): Grant adds statement to table's resource policy
- IAM identities (User, Role, Group): Grant adds statement to the identity's IAM policy
- Resource policy statements: Automatically use wildcard resources (*) to avoid circular dependencies
This behavior follows the same pattern as other AWS services like KMS and S3, where grants intelligently choose between resource policies and identity policies based on the principal type.
To avoid wildcards in resource policies: If you need scoped resource ARNs instead of wildcards, use addToResourcePolicy() directly with an explicit table name instead of grant methods. See the "Scoped Resource Policies (Advanced)" section above for details.
Grants
Using any of the grant* methods on an instance of the TableV2 construct will only apply to the primary table, its indexes, and any associated encryptionKey. As an example, grantReadData used below will only apply to the table in us-west-2:
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.kms.*;
User user;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
Key tableKey = new Key(stack, "Key");
Map<String, String> replicaKeyArns = Map.of(
"us-east-1", "arn:aws:kms:us-east-1:123456789012:key/g24efbna-az9b-42ro-m3bp-cq249l94fca6",
"us-east-2", "arn:aws:kms:us-east-2:123456789012:key/g24efbna-az9b-42ro-m3bp-cq249l94fca6");
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.encryption(TableEncryptionV2.customerManagedKey(tableKey, replicaKeyArns))
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
// grantReadData only applies to the table in us-west-2 and the tableKey
globalTable.grantReadData(user);
The replica method can be used to grant to a specific replica table:
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.kms.*;
User user;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
Key tableKey = new Key(stack, "Key");
Map<String, String> replicaKeyArns = Map.of(
"us-east-1", "arn:aws:kms:us-east-1:123456789012:key/g24efbna-az9b-42ro-m3bp-cq249l94fca6",
"us-east-2", "arn:aws:kms:us-east-2:123456789012:key/g24efbna-az9b-42ro-m3bp-cq249l94fca6");
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.encryption(TableEncryptionV2.customerManagedKey(tableKey, replicaKeyArns))
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
// grantReadData applies to the table in us-east-2 and the key arn for the key in us-east-2
globalTable.replica("us-east-2").grantReadData(user);
Metrics
You can use the metric* methods to generate metrics for a table that can be used when configuring Alarms or Graphs. The metric* methods only apply to the primary table provisioned using the TableV2 construct. As an example, metricConsumedReadCapacityUnits used below is only for the table in us-west-2:
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.cloudwatch.*;
App app = new App();
Stack stack = Stack.Builder.create(app, "Stack").env(Environment.builder().region("us-west-2").build()).build();
TableV2 globalTable = TableV2.Builder.create(stack, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
// metric is only for the table in us-west-2
Metric metric = globalTable.metricConsumedReadCapacityUnits();
Alarm.Builder.create(this, "Alarm")
.metric(metric)
.evaluationPeriods(1)
.threshold(1)
.build();
The replica method can be used to generate a metric for a specific replica table:
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.cloudwatch.*;
public class FooStack extends Stack {
public final TableV2 globalTable;
public FooStack(Construct scope, String id, StackProps props) {
super(scope, id, props);
this.globalTable = TableV2.Builder.create(this, "GlobalTable")
.partitionKey(Attribute.builder().name("pk").type(AttributeType.STRING).build())
.replicas(List.of(ReplicaTableProps.builder().region("us-east-1").build(), ReplicaTableProps.builder().region("us-east-2").build()))
.build();
}
}
public class BarStackProps extends StackProps {
private ITableV2 replicaTable;
public ITableV2 getReplicaTable() {
return this.replicaTable;
}
public BarStackProps replicaTable(ITableV2 replicaTable) {
this.replicaTable = replicaTable;
return this;
}
}
public class BarStack extends Stack {
public BarStack(Construct scope, String id, BarStackProps props) {
super(scope, id, props);
// metric is only for the table in us-east-1
Metric metric = props.replicaTable.metricConsumedReadCapacityUnits();
Alarm.Builder.create(this, "Alarm")
.metric(metric)
.evaluationPeriods(1)
.threshold(1)
.build();
}
}
App app = new App();
FooStack fooStack = FooStack.Builder.create(app, "FooStack").env(Environment.builder().region("us-west-2").build()).build();
BarStack barStack = new BarStack(app, "BarStack", new BarStackProps()
.replicaTable(fooStack.globalTable.replica("us-east-1"))
.env(Environment.builder().region("us-east-1").build())
);
import from S3 Bucket
You can import data in S3 when creating a Table using the Table construct.
To import data into DynamoDB, it is required that your data is in a CSV, DynamoDB JSON, or Amazon Ion format within an Amazon S3 bucket.
The data may be compressed using ZSTD or GZIP formats, or you may choose to import it without compression.
The data source can be a single S3 object or multiple S3 objects sharing a common prefix.
Further reading: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/S3DataImport.HowItWorks.html
use CSV format
The InputFormat.csv method accepts delimiter and headerList options as arguments.
If delimiter is not specified, a comma (,) is used by default. If headerList is specified, the first line of the CSV is treated as data rather than a header.
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.s3.*;
IBucket bucket;
App app = new App();
Stack stack = new Stack(app, "Stack");
Table.Builder.create(stack, "Table")
.partitionKey(Attribute.builder()
.name("id")
.type(AttributeType.STRING)
.build())
.importSource(ImportSourceSpecification.builder()
.compressionType(InputCompressionType.GZIP)
.inputFormat(InputFormat.csv(CsvOptions.builder()
.delimiter(",")
.headerList(List.of("id", "name"))
.build()))
.bucket(bucket)
.keyPrefix("prefix")
.build())
.build();
use DynamoDB JSON format
Use the InputFormat.dynamoDBJson() method to specify the inputFormat property.
There are currently no options available.
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.s3.*;
IBucket bucket;
App app = new App();
Stack stack = new Stack(app, "Stack");
Table.Builder.create(stack, "Table")
.partitionKey(Attribute.builder()
.name("id")
.type(AttributeType.STRING)
.build())
.importSource(ImportSourceSpecification.builder()
.compressionType(InputCompressionType.GZIP)
.inputFormat(InputFormat.dynamoDBJson())
.bucket(bucket)
.keyPrefix("prefix")
.build())
.build();
use Amazon Ion format
Use the InputFormat.ion() method to specify the inputFormat property.
There are currently no options available.
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.s3.*;
IBucket bucket;
App app = new App();
Stack stack = new Stack(app, "Stack");
Table.Builder.create(stack, "Table")
.partitionKey(Attribute.builder()
.name("id")
.type(AttributeType.STRING)
.build())
.importSource(ImportSourceSpecification.builder()
.compressionType(InputCompressionType.GZIP)
.inputFormat(InputFormat.ion())
.bucket(bucket)
.keyPrefix("prefix")
.build())
.build();