

# Map classes to DynamoDB items by using the DynamoDB Mapper (Developer Preview)
<a name="ddb-mapper"></a>

****  
**DynamoDB Mapper is a Developer Preview release. It is not feature complete and is subject to change.**

DynamoDB Mapper is a high-level library that offers mechanisms to map Kotlin classes to DynamoDB tables and indices, similar to the AWS SDK for Java's [DynamoDB Enhanced Client](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/dynamodb-enhanced-client.html) or the AWS SDK for .NET's [Object Persistence Model](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DotNetSDKHighLevel.html).

You define schemas that describe your data objects and how to convert them to DynamoDB items. After you define the schema, DynamoDB Mapper provides an intuitive interface to use your objects in create, read, update, or delete (CRUD) operations on your tables and indices.

**Topics**
+ [Get started with DynamoDB Mapper](ddb-mapper-get-started.md)
+ [Configure DynamoDB Mapper](ddb-mapper-configuration.md)
+ [Generate a schema from annotations](ddb-mapper-anno-schema-gen.md)
+ [Manually define schemas](ddb-mapper-code-schemas.md)
+ [Use secondary indices with DynamoDB Mapper](ddb-mapper-secondary-indices.md)
+ [Use expressions](ddb-mapper-expressions.md)

# Get started with DynamoDB Mapper
<a name="ddb-mapper-get-started"></a>

****  
**DynamoDB Mapper is a Developer Preview release. It is not feature complete and is subject to change.**

The following tutorial introduces the basic components of DynamoDB Mapper and shows how to use it in your code.

## Add dependencies
<a name="ddb-mapper-get-started-deps"></a>

To begin working with DynamoDB Mapper in your Gradle project, add a plugin and two dependencies to your `build.gradle.kts` file.


```
// build.gradle.kts
val sdkVersion: String = "<Version>" // Replace <Version> with the latest SDK version.

plugins {
    id("aws.sdk.kotlin.hll.dynamodbmapper.schema.generator") version "$sdkVersion-beta" // For the Developer Preview, use the beta version of the latest SDK.
}

dependencies {
    implementation("aws.sdk.kotlin:dynamodb-mapper:$sdkVersion-beta")
    implementation("aws.sdk.kotlin:dynamodb-mapper-annotations:$sdkVersion-beta")
}
```

Replace *<Version>* with the latest release of the SDK. To find the latest version of the SDK, check the [latest release on GitHub](https://github.com/awslabs/aws-sdk-kotlin/releases/latest).

**Note**  
Some of these dependencies are optional if you plan to define schemas manually. See [Manually define schemas](ddb-mapper-code-schemas.md) for more information and the reduced set of dependencies.

## Create and use a mapper
<a name="ddb-mapper-get-started-mapper"></a>

DynamoDB Mapper uses the AWS SDK for Kotlin's DynamoDB client to interact with DynamoDB. You need to provide a fully configured `[DynamoDbClient](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb/aws.sdk.kotlin.services.dynamodb/-dynamo-db-client/index.html)` instance when you create a mapper instance, as shown in the following code snippet:

```
import aws.sdk.kotlin.hll.dynamodbmapper.DynamoDbMapper
import aws.sdk.kotlin.services.dynamodb.DynamoDbClient

val client = DynamoDbClient.fromEnvironment()
val mapper = DynamoDbMapper(client)
```

**Note**  
`DynamoDbMapper` doesn't support table creation operations. Use the `DynamoDbClient` to create tables.
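Because the mapper can't create tables, the table you map to must already exist. As a sketch of what that setup might look like, the following code creates a `cars` table with the low-level client; the key attribute names and types mirror the `Car` class used in this tutorial, and the on-demand billing mode is an assumption for this example:

```kotlin
import aws.sdk.kotlin.services.dynamodb.DynamoDbClient
import aws.sdk.kotlin.services.dynamodb.model.AttributeDefinition
import aws.sdk.kotlin.services.dynamodb.model.BillingMode
import aws.sdk.kotlin.services.dynamodb.model.KeySchemaElement
import aws.sdk.kotlin.services.dynamodb.model.KeyType
import aws.sdk.kotlin.services.dynamodb.model.ScalarAttributeType

suspend fun createCarsTable(client: DynamoDbClient) {
    client.createTable {
        tableName = "cars"
        // Declare the types of the key attributes.
        attributeDefinitions = listOf(
            AttributeDefinition {
                attributeName = "make"
                attributeType = ScalarAttributeType.S
            },
            AttributeDefinition {
                attributeName = "model"
                attributeType = ScalarAttributeType.S
            }
        )
        // "make" is the partition key; "model" is the sort key.
        keySchema = listOf(
            KeySchemaElement {
                attributeName = "make"
                keyType = KeyType.Hash
            },
            KeySchemaElement {
                attributeName = "model"
                keyType = KeyType.Range
            }
        )
        billingMode = BillingMode.PayPerRequest // Assumed on-demand capacity.
    }
}
```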

After you have created the mapper instance, you can use it to get the table instance as shown next:

```
val carsTable = mapper.getTable("cars", CarSchema)
```

The previous code gets a reference to a table in DynamoDB named `cars` with a schema defined by `CarSchema` (we discuss schemas below). After you create a table instance, you can perform operations against it. The following code snippet shows two example operations against the `cars` table:

```
carsTable.putItem {
    item = Car(make = "Ford", model = "Model T", ...)
}

carsTable
    .queryPaginated {
        keyCondition = KeyFilter(partitionKey = "Peugeot")
    }
    .items()
    .collect { car -> println(car) }
```

The previous code creates a new item in the `cars` table. The code creates a `Car` instance inline using the `Car` class, whose definition is shown below. Next, the code queries the `cars` table for items whose partition key is `Peugeot` and prints them. Operations are [described in more detail below](#ddb-mapper-gs-invoke-ops).

## Define a schema with class annotations
<a name="ddb-mapper-gs-anno-schema-def"></a>

For a variety of Kotlin classes, the SDK can automatically generate schemas at build time by using the DynamoDB Mapper Schema Generator plugin for Gradle. When you use the schema generator, the SDK inspects your classes to infer the schema, which alleviates some of the boilerplate involved in manually defining schemas. You can customize the schema that is generated by using additional [annotations](ddb-mapper-anno-schema-gen.md#ddb-mapper-anno-schema-gen-annotate) and [configuration](ddb-mapper-anno-schema-gen.md#ddb-mapper-anno-schema-gen-conf-plugin).

To generate a schema from annotations, first annotate your classes with `@[DynamoDbItem](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-item/index.html)` and any keys with `@[DynamoDbPartitionKey](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-partition-key/index.html)` and `@[DynamoDbSortKey](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-sort-key/index.html)`. The following code shows the annotated `Car` class:

```
// The annotations used in the Car class are used by the plugin to generate a schema.
@DynamoDbItem
data class Car(
    @DynamoDbPartitionKey
    val make: String,
    
    @DynamoDbSortKey
    val model: String,
    
    val initialYear: Int
)
```

After you build your project, you can refer to the automatically generated `CarSchema`. You can pass the reference to the mapper's `getTable` method to get a table instance, as shown in the following:

```
import aws.sdk.kotlin.hll.dynamodbmapper.generatedschemas.CarSchema

// `CarSchema` is generated at build time.
val carsTable = mapper.getTable("cars", CarSchema)
```

Alternatively, you can get the table instance by taking advantage of an extension method on `[DynamoDbMapper](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-mapper/index.html)` that is automatically generated at build time. By using this approach, you don't need to refer to the schema by name. As shown in the following, the automatically generated `getCarsTable` extension method returns a reference to the table instance:

```
val carsTable = mapper.getCarsTable("cars")
```

See [Generate a schema from annotations](ddb-mapper-anno-schema-gen.md) for more details and examples.

## Invoke operations
<a name="ddb-mapper-gs-invoke-ops"></a>

DynamoDB Mapper supports a subset of the operations available on the SDK's `DynamoDbClient`. Mapper operations are named the same as their counterparts on the SDK client. Many mapper request/response members are the same as their SDK client counterparts, although some have been renamed, re-typed, or dropped altogether.

You invoke an operation on a table instance using a DSL syntax as shown in the following:

```
import aws.sdk.kotlin.hll.dynamodbmapper.operations.putItem
import aws.sdk.kotlin.services.dynamodb.model.ReturnConsumedCapacity

val putResponse = carsTable.putItem {
    item = Car(make = "Ford", model = "Model T", ...)
    returnConsumedCapacity = ReturnConsumedCapacity.Total
}

println(putResponse.consumedCapacity)
```

You can also invoke an operation by using an explicit request object:

```
import aws.sdk.kotlin.hll.dynamodbmapper.operations.PutItemRequest
import aws.sdk.kotlin.services.dynamodb.model.ReturnConsumedCapacity

val putRequest = PutItemRequest<Car> {
    item = Car(make = "Ford", model = "Model T", ...)
    returnConsumedCapacity = ReturnConsumedCapacity.Total
}

val putResponse = carsTable.putItem(putRequest)
println(putResponse.consumedCapacity)
```

The previous two code examples are equivalent.

### Work with paginated responses
<a name="ddb-mapper-gs-pagination"></a>

Some operations like `query` and `scan` can return data collections that might be too large to return in a single response. To ensure that all objects are processed, DynamoDB Mapper provides paginating methods, which do not call DynamoDB immediately, but instead return a `[Flow](https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-flow/)` of the operation response type, such as `Flow<ScanResponse<Car>>` shown in the following:

```
import aws.sdk.kotlin.hll.dynamodbmapper.operations.scanPaginated

val scanResponseFlow = carsTable.scanPaginated { }

scanResponseFlow.collect { response ->
    val items = response.items.orEmpty()
    println("Found page with ${items.size} items:")
    
    items.forEach { car -> println(car) }
}
```

Often, a flow of objects is more useful to business logic than a flow of responses *containing* objects. The mapper provides an extension method on paginated responses to access the flow of objects. For example, the following code returns a `Flow<Car>` rather than a `Flow<ScanResponse<Car>>` as shown previously:

```
import aws.sdk.kotlin.hll.dynamodbmapper.operations.items
import aws.sdk.kotlin.hll.dynamodbmapper.operations.scanPaginated

val carFlow = carsTable
    .scanPaginated { }
    .items()

carFlow.collect { car -> println(car) }
```

# Configure DynamoDB Mapper
<a name="ddb-mapper-configuration"></a>

****  
**DynamoDB Mapper is a Developer Preview release. It is not feature complete and is subject to change.**

DynamoDB Mapper offers configuration options that you can use to customize the behavior of the library to fit your application.

## Use interceptors
<a name="ddb-mapper-interceptors"></a>

The DynamoDB Mapper library defines hooks that you can tap into at critical stages of the mapper's request pipeline. You can implement the `[Interceptor](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.pipeline/-interceptor/index.html)` interface to create hooks that observe or modify the mapper process.

You can register one or more interceptors on a single DynamoDB Mapper as a configuration option. See the [example](#ddb-mapper-interceptors-hooks-example-conf) at the end of this section for how you register an interceptor.

### Understand the request pipeline
<a name="ddb-mapper-interceptors-pipeline"></a>

The mapper’s request pipeline consists of the following five steps:

1. **Initialization:** Set up the operation and gather initial context.

1. **Serialization:** Convert high-level request objects into low-level request objects. This step converts high-level Kotlin objects into DynamoDB items that consist of attribute names and values.

1. **Low-level invocation:** Execute a request on the underlying DynamoDB client.

1. **Deserialization:** Convert low-level response objects into high-level response objects. This step includes converting DynamoDB items that consist of attribute names and values into high-level Kotlin objects.

1. **Completion:** Finalize the high-level response to return to the caller. If an exception was thrown during the execution of the pipeline, this step finalizes the exception that is thrown to the caller.

### Hooks
<a name="ddb-mapper-interceptors-hooks"></a>

Hooks are interceptor methods that the mapper invokes before or after specific steps in the pipeline. There are two variants of hooks: *read-only* and *modify* (or read-write). For example, `readBeforeInvocation` is a read-only hook that the mapper executes in the phase before the low-level invocation step.

#### Read-only hooks
<a name="ddb-mapper-interceptors-hooks-ro"></a>

The mapper invokes read-only hooks before and after each step in the pipeline (except before the *Initialization* step and after the *Completion* step). Read-only hooks offer a read-only view of a high-level operation in progress. They provide a mechanism to examine the state of an operation, such as for logging, debugging, or collecting metrics. Each read-only hook receives a context argument and returns `[Unit](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-unit/)`.

The mapper catches any exception that is thrown during a read-only hook and adds it to the context. It then passes the context with the exception to subsequent interceptor hooks in the same phase. The mapper throws any exception to the caller only after it calls the last interceptor's read-only hook for the same phase. For example, if a mapper is configured with two interceptors `A` and `B`, and `A`'s `readAfterSerialization` hook throws an exception, the mapper adds the exception to the context passed to `B`'s `readAfterSerialization` hook. After `B`'s `readAfterSerialization` hook has completed, the mapper throws the exception back to the caller. 

#### Modify hooks
<a name="ddb-mapper-interceptors-hooks-modify"></a>

The mapper invokes modify hooks before each step in the pipeline (except before *Initialization*). Modify hooks offer the ability to see and modify a high-level operation in progress. They can be used to customize behavior and data in ways that mapper configuration and item schemas do not. Each modify hook receives a context argument and returns some subset of that context as a result, either modified by the hook or passed through from the input context.

If the mapper catches an exception while it executes a modify hook, it doesn't execute any other interceptors' modify hooks in the same phase. The mapper adds the exception to the context and passes it to the next read-only hook. The mapper throws the exception to the caller only after it calls the last interceptor's read-only hook for the same phase. For example, if a mapper is configured with two interceptors `A` and `B`, and `A`'s `modifyBeforeSerialization` hook throws an exception, `B`'s `modifyBeforeSerialization` hook is not invoked. Both `A`'s and `B`'s `readAfterSerialization` hooks execute, after which the exception is thrown back to the caller.

#### Execution order
<a name="ddb-mapper-interceptors-hooks-ex-order"></a>

The order in which interceptors are defined in a mapper’s configuration determines the order that the mapper calls the hooks:
+ For phases *before* the *Low-level invocation* step, it executes hooks in the *same order* that they were added in the configuration.
+ For phases *after* the *Low-level invocation* step, it executes hooks in the *reverse order* from the order they were added in the configuration.

The following diagram shows the execution order of hook methods:

![\[Flowchart of interceptor hook methods.\]](http://docs.aws.amazon.com/sdk-for-kotlin/latest/developer-guide/images/KotlinDevGuide-dynamodbMapper-hook-flowchart.png)


##### Text description of the execution order of hook methods
<a name="ixg_ypr_ddc"></a>

A mapper executes an interceptor's hooks in the following order:

1. DynamoDB Mapper invokes a high-level request

1. Read before execution

1. Modify before serialization

1. Read before serialization

1. DynamoDB Mapper converts objects to items

1. Read after serialization

1. Modify before invocation

1. Read before invocation

1. DynamoDB Mapper invokes the low-level operation

1. Read after invocation

1. Modify before deserialization

1. Read before deserialization

1. DynamoDB Mapper converts items to objects

1. Read after deserialization

1. Modify before completion

1. Read after execution

1. DynamoDB Mapper returns a high-level response

#### Example configuration
<a name="ddb-mapper-interceptors-hooks-example-conf"></a>

The following example shows how to configure an interceptor on a `DynamoDbMapper` instance:

```
import aws.sdk.kotlin.hll.dynamodbmapper.DynamoDbMapper
import aws.sdk.kotlin.hll.dynamodbmapper.operations.ScanRequest
import aws.sdk.kotlin.hll.dynamodbmapper.operations.ScanResponse
import aws.sdk.kotlin.hll.dynamodbmapper.pipeline.Interceptor
import aws.sdk.kotlin.hll.dynamodbmapper.pipeline.LResContext
import aws.sdk.kotlin.services.dynamodb.DynamoDbClient
import aws.sdk.kotlin.services.dynamodb.model.ScanRequest as LowLevelScanRequest
import aws.sdk.kotlin.services.dynamodb.model.ScanResponse as LowLevelScanResponse

// Assumes a User class annotated with @DynamoDbItem and a generated schema.
val printingInterceptor = object : Interceptor<User, ScanRequest<User>, LowLevelScanRequest, LowLevelScanResponse, ScanResponse<User>> {
    override fun readBeforeDeserialization(ctx: LResContext<User, ScanRequest<User>, LowLevelScanRequest, LowLevelScanResponse>) {
        println("Scan response contains ${ctx.lowLevelResponse.count} items.")
    }
}

val client = DynamoDbClient.fromEnvironment()

val mapper = DynamoDbMapper(client) {
    interceptors += printingInterceptor
}
```

# Generate a schema from annotations
<a name="ddb-mapper-anno-schema-gen"></a>

****  
**DynamoDB Mapper is a Developer Preview release. It is not feature complete and is subject to change.**

DynamoDB Mapper relies on schemas that define the mapping between your Kotlin classes and DynamoDB items. Your Kotlin classes can drive the creation of schemas by using the schema generator Gradle plugin. 

## Apply the plugin
<a name="ddb-mapper-anno-schema-gen-plugin"></a>

To start generating schemas for your classes, apply the plugin in your application’s build script and add a dependency on the annotations module. The following Gradle script snippet shows the necessary setup for code generation.

Replace *<Version>* in the following snippet with the latest release of the SDK, which you can find in the [latest release on GitHub](https://github.com/awslabs/aws-sdk-kotlin/releases/latest).

```
// build.gradle.kts
val sdkVersion: String = "<Version>" // Replace <Version> with the latest SDK version.

plugins {
    id("aws.sdk.kotlin.hll.dynamodbmapper.schema.generator") version "$sdkVersion-beta" // For the Developer Preview, use the beta version of the latest SDK.
}

dependencies {
    implementation("aws.sdk.kotlin:dynamodb-mapper:$sdkVersion-beta")
    implementation("aws.sdk.kotlin:dynamodb-mapper-annotations:$sdkVersion-beta")
}
```

## Configure the plugin
<a name="ddb-mapper-anno-schema-gen-conf-plugin"></a>

The plugin offers a number of configuration options that you can apply by using the `dynamoDbMapper { ... }` plugin extension in your build script: 


| Option | Option description | Values | 
| --- | --- | --- | 
| generateBuilderClasses |  Controls whether DSL-style builder classes will be generated for classes annotated with `@DynamoDbItem`  |  `WHEN_REQUIRED` (default): Builder classes will not be generated for classes which consist of only public mutable members and have a zero-arg constructor `ALWAYS`: Builder classes will always be generated  | 
| visibility | Controls the visibility of generated classes |  `PUBLIC` (default) `INTERNAL`  | 
| destinationPackage | Specifies the package name for generated classes |  `RELATIVE` (default): Schema classes will be generated in a sub-package relative to your annotated class. By default, the sub-package is named `dynamodbmapper.generatedschemas`, and this is configurable by passing a string parameter. `ABSOLUTE`: Schema classes will be generated in an absolute package starting from the root of your application. By default, the package is named `aws.sdk.kotlin.hll.dynamodbmapper.generatedschemas`, and this is configurable by passing a string parameter.  | 
| generateGetTableExtension |  Controls whether a `DynamoDbMapper.get${CLASS_NAME}Table` extension method will be generated  |  `true` (default) `false`  | 

**Example of code-generation plugin configuration**  
This following example configures the destination package and visibility of the generated schema:  

```
// build.gradle.kts

import aws.sdk.kotlin.hll.dynamodbmapper.codegen.annotations.DestinationPackage
import aws.sdk.kotlin.hll.dynamodbmapper.codegen.annotations.Visibility
import aws.smithy.kotlin.runtime.ExperimentalApi

@OptIn(ExperimentalApi::class)
dynamoDbMapper {
    destinationPackage = DestinationPackage.RELATIVE("my.configured.package")
    visibility = Visibility.INTERNAL
}
```

## Annotate classes
<a name="ddb-mapper-anno-schema-gen-annotate"></a>

The schema generator looks for class annotations to determine which classes to generate schemas for. To opt in to generating schemas, annotate your classes with `@[DynamoDbItem](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-item/index.html)`. You must also annotate a class property that serves as the item's partition key with the `@[DynamoDbPartitionKey](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-partition-key/index.html)` annotation.

The following class definition shows the minimally required annotations for schema generation:

**Example**  

```
@DynamoDbItem
data class Employee(
    @DynamoDbPartitionKey
    val id: Int,
    
    val name: String,
    val role: String,
)
```

### Class annotations
<a name="ddb-mapper-anno-schema-gen-class-annos"></a>

The following annotations are applied to classes to control schema generation:
+ `@DynamoDbItem`: Specifies that this class/interface describes an item type in a table. All public properties of this type will be mapped to attributes unless they are explicitly ignored. When present, a schema will be generated for this class.
  + `converterName`: An optional parameter which indicates a custom schema should be used rather than the one created by the schema generator plugin. This is the fully qualified name of the custom `ItemConverter` class. The [Define a custom item converter](#ddb-mapper-anno-schema-custom) section shows an example of creating and using a custom schema.

### Property annotations
<a name="ddb-mapper-anno-schema-gen-prop-annos"></a>

You can apply the following annotations to class properties to control schema generation:
+ `@[DynamoDbPartitionKey](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-partition-key/index.html)`: Specifies the partition key for the item.
+ `@[DynamoDbSortKey](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-sort-key/index.html)`: Specifies an optional sort key for the item.
+ `@[DynamoDbIgnore](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-ignore/index.html)`: Specifies that this class property should not be converted to/from an item attribute by the DynamoDB Mapper.
+ `@[DynamoDbAttribute](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-attribute/index.html)`: Specifies an optional custom attribute name for this class property.
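As an illustration of the property annotations working together, the following hypothetical `User` class (the class name, properties, and attribute names are assumptions for this sketch) renames the stored attribute for `email` and keeps `sessionToken` out of the table entirely:

```kotlin
import aws.sdk.kotlin.hll.dynamodbmapper.DynamoDbAttribute
import aws.sdk.kotlin.hll.dynamodbmapper.DynamoDbIgnore
import aws.sdk.kotlin.hll.dynamodbmapper.DynamoDbItem
import aws.sdk.kotlin.hll.dynamodbmapper.DynamoDbPartitionKey
import aws.sdk.kotlin.hll.dynamodbmapper.DynamoDbSortKey

@DynamoDbItem
data class User(
    @DynamoDbPartitionKey
    val accountId: Int,          // Mapped to the partition key attribute "accountId"

    @DynamoDbSortKey
    val userName: String,        // Mapped to the sort key attribute "userName"

    @DynamoDbAttribute("emailAddress")
    val email: String,           // Stored under the custom attribute name "emailAddress"

    @DynamoDbIgnore
    val sessionToken: String? = null // Never converted to or from an item attribute
)
```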

## Define a custom item converter
<a name="ddb-mapper-anno-schema-custom"></a>

In some cases, you may want to define a custom item converter for your class. One reason for this would be if your class uses a type that’s not supported by the schema generator plugin. We use the following version of the `Employee` class as an example:

```
import kotlin.uuid.ExperimentalUuidApi
import kotlin.uuid.Uuid

// Default property values give the class a zero-arg constructor, which lets
// it serve as its own builder in the custom item converter defined later.
@OptIn(ExperimentalUuidApi::class)
@DynamoDbItem
data class Employee(
    @DynamoDbPartitionKey
    var id: Int = 0,

    var name: String = "",
    var role: String = "",
    var workstationId: Uuid = Uuid.NIL
)
```

The `Employee` class now uses a `kotlin.uuid.Uuid` type, which is not currently supported by the schema generator. Schema generation fails with an error: `Unsupported attribute type TypeRef(pkg=kotlin.uuid, shortName=Uuid, genericArgs=[], nullable=false)`. This error indicates that the plugin cannot generate an item converter for this class. Therefore, we need to write our own.

To do this, we implement an `[ItemConverter](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.items/-item-converter/index.html)` for the class, then modify the `@DynamoDbItem` class annotation by specifying the fully qualified name of the new item converter.

First, we implement a `[ValueConverter](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.values/-value-converter/index.html)` for the `kotlin.uuid.Uuid` class:

```
import aws.sdk.kotlin.hll.dynamodbmapper.values.ValueConverter
import aws.sdk.kotlin.services.dynamodb.model.AttributeValue
import kotlin.uuid.ExperimentalUuidApi
import kotlin.uuid.Uuid

@OptIn(ExperimentalUuidApi::class)
public val UuidValueConverter = object : ValueConverter<Uuid> {
    // Deserialize a DynamoDB string attribute into a Uuid.
    override fun convertFrom(to: AttributeValue): Uuid =
        Uuid.parseHex(to.asS())

    // Serialize a Uuid into a DynamoDB string attribute.
    override fun convertTo(from: Uuid): AttributeValue =
        AttributeValue.S(from.toHexString())
}
```

Then, we implement an `ItemConverter` for our `Employee` class. The `ItemConverter` uses this new value converter in the attribute descriptor for `workstationId`:

```
import aws.sdk.kotlin.hll.dynamodbmapper.items.AttributeDescriptor
import aws.sdk.kotlin.hll.dynamodbmapper.items.ItemConverter
import aws.sdk.kotlin.hll.dynamodbmapper.items.SimpleItemConverter
import aws.sdk.kotlin.hll.dynamodbmapper.values.scalars.IntConverter
import aws.sdk.kotlin.hll.dynamodbmapper.values.scalars.StringConverter
import kotlin.uuid.ExperimentalUuidApi

@OptIn(ExperimentalUuidApi::class)
public object MyEmployeeConverter : ItemConverter<Employee> by SimpleItemConverter(
    // Employee serves as its own builder; this requires a zero-arg constructor.
    builderFactory = { Employee() },
    build = { this },
    descriptors = arrayOf(
        AttributeDescriptor(
            "id",
            Employee::id,
            Employee::id::set,
            IntConverter,
        ),
        AttributeDescriptor(
            "name",
            Employee::name,
            Employee::name::set,
            StringConverter,
        ),
        AttributeDescriptor(
            "role",
            Employee::role,
            Employee::role::set,
            StringConverter
        ),
        AttributeDescriptor(
            "workstationId",
            Employee::workstationId,
            Employee::workstationId::set,
            UuidValueConverter
        )
    ),
)
```

Now that we have defined the item converter, we can apply it to our class. We update the `@[DynamoDbItem](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper-annotations/aws.sdk.kotlin.hll.dynamodbmapper/-dynamo-db-item/index.html)` annotation to reference the item converter by providing the fully qualified class name, as shown in the following:

```
import kotlin.uuid.ExperimentalUuidApi
import kotlin.uuid.Uuid

@OptIn(ExperimentalUuidApi::class)
@DynamoDbItem("my.custom.item.converter.MyEmployeeConverter")
data class Employee(
    @DynamoDbPartitionKey
    var id: Int = 0,

    var name: String = "",
    var role: String = "",
    var workstationId: Uuid = Uuid.NIL
)
```

Finally, we can begin using the class with DynamoDB Mapper.

# Manually define schemas
<a name="ddb-mapper-code-schemas"></a>

****  
**DynamoDB Mapper is a Developer Preview release. It is not feature complete and is subject to change.**

## Define a schema in code
<a name="ddb-mapper-gs-manual-schema-def"></a>

For maximum control and customizability, you can manually define and customize schemas in code. 

As shown in the following snippet, you need to include fewer dependencies in your `build.gradle.kts` file compared to using annotation-driven schema creation. 

Replace *<Version>* in the following snippet with the latest release of the SDK, which you can find in the [latest release on GitHub](https://github.com/awslabs/aws-sdk-kotlin/releases/latest).

```
// build.gradle.kts
val sdkVersion: String = "<Version>" // Replace <Version> with the latest SDK version.

dependencies {
    implementation("aws.sdk.kotlin:dynamodb-mapper:$sdkVersion-beta") // For the Developer Preview, use the beta version of the latest SDK.
}
```

Note that you don't need the schema generator plugin or the annotations module.

The mapping between a Kotlin class and a DynamoDB item requires an `[ItemSchema<T>](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.items/-item-schema/index.html)` implementation, where `T` is the type of the Kotlin class. A schema consists of the following elements:
+ An item converter, which defines how to convert between Kotlin object instances and DynamoDB items.
+ A partition key specification, which defines the name and type of the partition key attribute.
+ Optionally, a sort key specification, which defines the name and type of the sort key attribute.

In the following code we manually create a `CarSchema` instance:

```
import aws.sdk.kotlin.hll.dynamodbmapper.items.ItemConverter
import aws.sdk.kotlin.hll.dynamodbmapper.items.ItemSchema
import aws.sdk.kotlin.hll.dynamodbmapper.items.KeySpec
import aws.sdk.kotlin.hll.dynamodbmapper.model.Item
import aws.sdk.kotlin.hll.dynamodbmapper.model.itemOf
import aws.sdk.kotlin.services.dynamodb.model.AttributeValue

// We define a schema for this data class.
data class Car(val make: String, val model: String, val initialYear: Int)

// First, define an item converter.
val carConverter = object : ItemConverter<Car> {
    override fun convertTo(from: Car, onlyAttributes: Set<String>?): Item  = itemOf(
        "make" to AttributeValue.S(from.make),
        "model" to AttributeValue.S(from.model),
        "initialYear" to AttributeValue.N(from.initialYear.toString()),
    )

    override fun convertFrom(to: Item): Car = Car(
        make = to["make"]?.asSOrNull() ?: error("Invalid attribute `make`"),
        model = to["model"]?.asSOrNull() ?: error("Invalid attribute `model`"),
        initialYear = to["initialYear"]?.asNOrNull()?.toIntOrNull()
            ?: error("Invalid attribute `initialYear`"),
    )
}

// Next, define the specifications for the partition key and sort key.
val makeKey = KeySpec.String("make")
val modelKey = KeySpec.String("model")

// Finally, create the schema from the converter and key specifications.
// Note that the KeySpec for the partition key comes first in the ItemSchema constructor.
val CarSchema = ItemSchema(carConverter, makeKey, modelKey)
```

The previous code creates a converter named `carConverter`, which is defined as an anonymous implementation of `ItemConverter<Car>`. The converter’s `convertTo` method accepts a `Car` argument and returns an `Item` instance representing the literal keys and values of DynamoDB item attributes. The converter’s `convertFrom` method accepts an `Item` argument and returns a `Car` instance from the attribute values of the `Item` argument.

Next, the code creates two key specifications: one for the partition key and one for the sort key. Every DynamoDB table or index must have exactly one partition key and, correspondingly, so must every DynamoDB Mapper schema definition. A schema may also have one optional sort key.

In the last statement, the code creates a schema for the `cars` DynamoDB table from the converter and key specifications.
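Once defined, you use a manual schema exactly as you would a generated one. The following brief sketch passes the manual `CarSchema` to `getTable` and writes an item; the `1908` value is illustrative:

```kotlin
import aws.sdk.kotlin.hll.dynamodbmapper.DynamoDbMapper
import aws.sdk.kotlin.hll.dynamodbmapper.operations.putItem
import aws.sdk.kotlin.services.dynamodb.DynamoDbClient

suspend fun main() {
    val client = DynamoDbClient.fromEnvironment()
    val mapper = DynamoDbMapper(client)

    // Pass the manually defined schema to getTable, as with a generated schema.
    val carsTable = mapper.getTable("cars", CarSchema)

    carsTable.putItem {
        item = Car(make = "Ford", model = "Model T", initialYear = 1908)
    }
}
```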

The resulting schema is equivalent to the annotation-driven schema that we generated in the [Define a schema with class annotations](ddb-mapper-get-started.md#ddb-mapper-gs-anno-schema-def) section. For reference, the following is the annotated class we used:

### Car class with DynamoDB Mapper annotations
<a name="ejd_mxz_ddc"></a>

```
@DynamoDbItem
data class Car(
    @DynamoDbPartitionKey
    val make: String,
    
    @DynamoDbSortKey
    val model: String,
    
    val initialYear: Int
)
```

In addition to implementing your own `ItemConverter`, DynamoDB Mapper includes several ready-made implementations:
+ `[SimpleItemConverter](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.items/-simple-item-converter/index.html)`: provides simple conversion logic by using a builder class and attribute descriptors. See the example in the [Define a custom item converter](ddb-mapper-anno-schema-gen.md#ddb-mapper-anno-schema-custom) section for how you can use this implementation.
+ `[HeterogeneousItemConverter](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.items/-heterogeneous-item-converter/index.html)`: provides polymorphic type conversion logic by using a discriminator attribute and delegate `ItemConverter` instances for subtypes.
+ `[DocumentConverter](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.items/-document-converter/index.html)`: provides conversion logic for unstructured data in `[Document](https://docs.aws.amazon.com/smithy-kotlin/api/latest/runtime-core/aws.smithy.kotlin.runtime.content/-document.html)` objects.

# Use secondary indices with DynamoDB Mapper
<a name="ddb-mapper-secondary-indices"></a>

****  
**DynamoDB Mapper is a Developer Preview release. It is not feature complete and is subject to change.**

## Define a schema for a secondary index
<a name="ddb-mapper-secondary-indices-schema"></a>

DynamoDB tables support secondary indices, which provide access to data by using keys different from those defined on the base table. As with base tables, DynamoDB Mapper interacts with indices by using the `[ItemSchema](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.items/-item-schema/index.html)` type.

DynamoDB secondary indices are not required to contain every attribute from the base table. Accordingly, the Kotlin class that maps to an index may differ from the Kotlin class that maps to that index’s base table. When that is the case, a separate schema must be declared for the index class.

The following code manually creates an index schema for the DynamoDB `cars` table.

```
import aws.sdk.kotlin.hll.dynamodbmapper.items.ItemConverter
import aws.sdk.kotlin.hll.dynamodbmapper.items.ItemSchema
import aws.sdk.kotlin.hll.dynamodbmapper.items.KeySpec
import aws.sdk.kotlin.hll.dynamodbmapper.model.Item
import aws.sdk.kotlin.hll.dynamodbmapper.model.itemOf
import aws.sdk.kotlin.services.dynamodb.model.AttributeValue

// This is a data class for modelling the index of the Car table. Note
// that it contains a subset of the fields from the Car class and also 
// uses different names for them.
data class Model(val name: String, val manufacturer: String)

// We define an item converter.
val modelConverter = object : ItemConverter<Model> {
    override fun convertTo(from: Model, onlyAttributes: Set<String>?): Item = itemOf(
        "model" to AttributeValue.S(from.name),
        "make" to AttributeValue.S(from.manufacturer),
    )

    override fun convertFrom(to: Item): Model = Model(
        name = to["model"]?.asSOrNull() ?: error("Invalid attribute `model`"),
        manufacturer = to["make"]?.asSOrNull() ?: error("Invalid attribute `make`"),
    )
}
val modelKey = KeySpec.String("model")
val makeKey = KeySpec.String("make")

val modelSchema = ItemSchema(modelConverter, modelKey, makeKey) // The partition key specification comes first, right after the converter.

/* Note that the `Model` index's partition key is `model` and its sort key is `make`,
   whereas the `Car` base table uses `make` as the partition key and `model` as the sort key:

        @DynamoDbItem
        data class Car(
            @DynamoDbPartitionKey
            val make: String,
    
            @DynamoDbSortKey
            val model: String,
    
            val initialYear: Int
        )
*/
```

We can now use `Model` instances in operations.

## Use secondary indices in operations
<a name="ddb-mapper-gs-index-ops"></a>

DynamoDB Mapper supports a subset of operations on indices, namely `queryPaginated` and `scanPaginated`. To invoke these operations on an index, you must first obtain a reference to an index from the table object. In the following sample, we use the `modelSchema` that we created previously for the `cars-by-model` index (creation not shown here):

```
val table = mapper.getTable("cars", CarSchema)
val index = table.getIndex("cars-by-model", modelSchema)

val modelFlow = index
    .scanPaginated { }
    .items()

modelFlow.collect { model -> println(model) }
```

# Use expressions
<a name="ddb-mapper-expressions"></a>

****  
**DynamoDB Mapper is a Developer Preview release. It is not feature complete and is subject to change.**

Certain DynamoDB operations accept [expressions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.html) that you can use to specify constraints or conditions. DynamoDB Mapper provides an idiomatic Kotlin DSL to create expressions. The DSL brings greater structure and readability to your code and also makes it easier to write expressions. 

This section describes the DSL syntax and provides various examples. 

## Use expressions in operations
<a name="ddb-mapper-expressions-basic-usage"></a>

You use expressions in operations like `scan`, where they filter the returned items based on criteria that you define. To use expressions with DynamoDB Mapper, add the expression component in the operation request. 

The following snippet shows an example of a filter expression used in a `scan` operation. A lambda argument describes the filter criteria, which limit the returned items to those with a `year` attribute value of 2001:

```
val table = mapper.getTable("cars", CarSchema) // A table instance.

table.scanPaginated {
    filter {
        attr("year") eq 2001
    }
}
```

The following example shows a `query` operation that supports expressions in two places—sort key filtering and non-key filtering:

```
table.queryPaginated {
    keyCondition = KeyFilter(partitionKey = 1000) { sortKey startsWith "M" }
    filter {
        attr("year") eq 2001
    }
}
```

The previous code filters results to those that meet all three criteria:
+ Partition key attribute value is 1000 *-AND-*
+ Sort key attribute value starts with the letter *M* *-AND-*
+ `year` attribute value is 2001
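Conceptually, the key condition and filter combine with AND. As an in-memory analogy only (DynamoDB evaluates these conditions on the service side, not in your code), the combined criteria behave like the following plain Kotlin predicate over a hypothetical record type:

```kotlin
// Hypothetical record type standing in for one DynamoDB item.
data class Record(val partitionKey: Int, val sortKey: String, val year: Int)

// All three criteria from the query above must hold.
fun matches(r: Record): Boolean =
    r.partitionKey == 1000 &&
        r.sortKey.startsWith("M") &&
        r.year == 2001

fun main() {
    check(matches(Record(1000, "Model 3", 2001)))
    check(!matches(Record(1000, "Civic", 2001)))   // sort key doesn't start with M
    check(!matches(Record(2000, "Mustang", 2001))) // wrong partition key value
    println("ok")
}
```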

## DSL components
<a name="ddb-mapper-expressions-dsl"></a>

The DSL syntax exposes several types of components—described below—that you use to build expressions.

### Attributes
<a name="ddb-mapper-expressions-dsl-attrs"></a>

Most conditions reference attributes, which are identified by their key or document path. With the DSL, you create all attribute references by using the `attr` function and optionally make additional modifications.

The following code shows a range of example attribute references, from simple to complex, such as list dereferencing by index and map dereferencing by key:

```
attr("foo")           // Refers to the value of top-level attribute `foo`.

attr("foo")[3]        // Refers to the value at index 3 in the list value of
                      // attribute `foo`.

attr("foo")[3]["bar"] // Refers to the value of key `bar` in the map value at
                      // index 3 of the list value of attribute `foo`.
```

### Equalities and inequalities
<a name="ddb-mapper-expressions-dsl-eq-and-ineq"></a>

You can compare attribute values in an expression by using equalities and inequalities. You can compare attribute values to literal values or to other attribute values. The functions that you use to specify the conditions are:
+ `eq`: is equal to (equivalent to `==`)
+ `neq`: is not equal to (equivalent to `!=`)
+ `gt`: is greater than (equivalent to `>`)
+ `gte`: is greater than or equal to (equivalent to `>=`)
+ `lt`: is less than (equivalent to `<`)
+ `lte`: is less than or equal to (equivalent to `<=`)

You combine the comparison function with arguments by using infix notation as shown in the following examples:

```
attr("foo") eq 42           // Uses a literal. Specifies that the attribute value `foo` must be
                            // equal to 42.

attr("bar") gte attr("baz") // Uses another attribute value. Specifies that the attribute 
                            // value `bar` must be greater than or equal to the
                            // attribute value of `baz`.
```

### Ranges and sets
<a name="ddb-mapper-expressions-dsl-ranges-sets"></a>

In addition to single values, you can compare attribute values to multiple values in ranges or sets. You use the infix `[isIn](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.expressions/-filter/is-in.html)` function to do the comparison as shown in the following examples:

```
attr("foo") isIn 0..99  // Specifies that the attribute value `foo` must be
                        // in the range of `0` to `99` (inclusive).

attr("foo") isIn setOf( // Specifies that the attribute value `foo` must be
    "apple",            // one of `apple`, `banana`, or `cherry`.
    "banana",
    "cherry",
)
```

The `isIn` function provides overloads for collections (such as `Set<String>`) and for bounds that you can express as a Kotlin `[ClosedRange<T>](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.ranges/-closed-range/)` (such as `[IntRange](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.ranges/-int-range/)`). For bounds that you cannot express as a `ClosedRange<T>` (such as byte arrays or other attribute references), you can use the `[isBetween](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.expressions/-filter/is-between.html)` function:

```
val lowerBytes = byteArrayOf(0x48, 0x65, 0x6c)  // Specifies that the attribute value
val upperBytes = byteArrayOf(0x6c, 0x6f, 0x21)  // `foo` is between the values
attr("foo").isBetween(lowerBytes, upperBytes)   // `0x48656c` and `0x6c6f21`

attr("foo").isBetween(attr("bar"), attr("baz")) // Specifies that the attribute value
                                                // `foo` is between the values of
                                                // attributes `bar` and `baz`.
```
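Both the range form of `isIn` and `isBetween` use inclusive bounds, mirroring DynamoDB's `BETWEEN` operator. The following stand-alone snippet shows the inclusivity that the range form inherits from Kotlin's `ClosedRange`:

```kotlin
fun main() {
    val range = 0..99    // an IntRange is a ClosedRange<Int>
    check(0 in range)    // the lower bound is included
    check(99 in range)   // the upper bound is included too
    check(100 !in range) // values past either bound are excluded
    println("bounds are inclusive")
}
```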

### Boolean logic
<a name="ddb-mapper-expressions-dsl-boolean"></a>

You can combine or negate individual conditions by using boolean logic with the following functions:
+ `and`: every condition must be true (equivalent to `&&`)
+ `or`: at least one condition must be true (equivalent to `||`)
+ `not`: the given condition must be false (equivalent to `!`)

The following examples show each function:

```
and(                           // Both conditions must be met:
    attr("foo") eq "banana",   // * attribute value `foo` must equal `banana`
    attr("bar") isIn 0..99,    // * attribute value `bar` must be between
)                              //   0 and 99 (inclusive)

or(                            // At least one condition must be met:
    attr("foo") eq "cherry",   // * attribute value `foo` must equal `cherry`
    attr("bar") isIn 100..199, // * attribute value `bar` must be between
)                              //   100 and 199 (inclusive)

not(                           // The attribute value `foo` must *not* be
    attr("baz") isIn setOf(    // one of `apple`, `banana`, or `cherry`.
        "apple",               // Stated another way, the attribute value
        "banana",              // must be *anything except* `apple`, `banana`,
        "cherry",              // or `cherry`--including potentially a
    ),                         // non-string value or no value at all.
)
```

You can further nest boolean conditions inside other boolean functions to create compound logic, as shown in the following expression:

```
or(
    and(
        attr("foo") eq 123,
        attr("bar") eq "abc",
    ),
    and(
        attr("foo") eq 234,
        attr("bar") eq "bcd",
    ),
)
```

The previous expression filters results to those that meet either of these conditions: 
+ Both of these conditions are true:
  + `foo` attribute value is 123 *-AND-*
  + `bar` attribute value is "abc" 
+ Both of these conditions are true:
  + `foo` attribute value is 234 *-AND-*
  + `bar` attribute value is "bcd" 

This is equivalent to the following Kotlin boolean expression:

```
(foo == 123 && bar == "abc") || (foo == 234 && bar == "bcd")
```
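You can check the equivalence with ordinary Kotlin values. The following stand-alone snippet evaluates the same boolean expression for a few sample `foo` and `bar` pairs:

```kotlin
// The nested or/and expression from above, written as a plain function.
fun matches(foo: Int, bar: String): Boolean =
    (foo == 123 && bar == "abc") || (foo == 234 && bar == "bcd")

fun main() {
    check(matches(123, "abc"))  // satisfies the first and(...) branch
    check(matches(234, "bcd"))  // satisfies the second and(...) branch
    check(!matches(123, "bcd")) // a mixed pair matches neither branch
    println("ok")
}
```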

### Functions and properties
<a name="ddb-mapper-expressions-dsl-functions"></a>

The following functions and properties provide additional expression capabilities:
+ `[contains](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.expressions/-filter/contains.html)`: checks if a string/list attribute value contains a given value
+ `[exists](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.expressions/-filter/exists.html)`: checks if an attribute is defined and holds any value (including `null`)
+ `[notExists](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.expressions/-filter/not-exists.html)`: checks if an attribute is undefined
+ `[isOfType](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.expressions/-filter/is-of-type.html)`: checks if an attribute value is of a given type, such as string, number, boolean, and so on
+ `[size](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.expressions/-filter/size.html)`: gets the size of an attribute, such as the number of elements in a collection or the length of a string
+ `[startsWith](https://docs.aws.amazon.com/sdk-for-kotlin/api/latest/dynamodb-mapper/aws.sdk.kotlin.hll.dynamodbmapper.expressions/-filter/starts-with.html)`: checks if a string attribute value starts with a given substring

The following examples show these functions and properties in use:

```
attr("foo") contains "apple" // Specifies that the attribute value `foo` must be
                             // a list that contains an `apple` element or a string
                             // which contains the substring `apple`.

attr("bar").exists()         // Specifies that attribute `bar` must exist and
                             // have a value (including potentially `null`).

attr("baz").size lt 100      // Specifies that the attribute value `baz` must have
                             // a size of less than 100.

attr("qux") isOfType AttributeType.String // Specifies that the attribute `qux`
                                          // must have a string value.
```

### Sort key filters
<a name="ddb-mapper-expressions-dsl-sort-key"></a>

Filter expressions on sort keys (such as in the `query` operation’s `keyCondition` parameter) do not use named attribute values. To use a sort key in a filter, you must use the keyword `sortKey` in all comparisons. The `sortKey` keyword replaces `attr("<sort key name>")` as shown in the following examples:

```
sortKey startsWith "abc" // The sort key attribute value must begin with the
                         // substring `abc`.

sortKey isIn 0..99       // The sort key attribute value must be between 0
                         // and 99 (inclusive).
```

You cannot combine sort key filters with boolean logic, and they support only a subset of the comparisons described above:
+ [Equalities and inequalities](#ddb-mapper-expressions-dsl-eq-and-ineq): all comparisons supported
+ [Ranges and sets](#ddb-mapper-expressions-dsl-ranges-sets): all comparisons supported
+ [Boolean logic](#ddb-mapper-expressions-dsl-boolean): not supported
+ [Functions and properties](#ddb-mapper-expressions-dsl-functions): only `startsWith` is supported