

After careful consideration, we decided to end support for Amazon FinSpace, effective October 7, 2026. Amazon FinSpace will no longer accept new customers beginning October 7, 2025. As an existing customer with an Amazon FinSpace environment created before October 7, 2025, you can continue to use the service as normal. After October 7, 2026, you will no longer be able to use Amazon FinSpace. For more information, see [Amazon FinSpace end of support](https://docs.aws.amazon.com/finspace/latest/userguide/amazon-finspace-end-of-support.html). 

# Creating a Managed kdb Insights cluster
<a name="create-kdb-clusters"></a>

You can use either the console or the [CreateKxCluster](https://docs.aws.amazon.com/finspace/latest/management-api/API_CreateKxCluster) API to create a cluster. When you create a cluster from the console, you choose one of the following cluster types available in FinSpace – [General purpose](kdb-cluster-types.md#kdb-clusters-gp), [Tickerplant](kdb-cluster-types.md#kdb-clusters-tp), [HDB](kdb-cluster-types.md#kdb-clusters-hdb), [RDB](kdb-cluster-types.md#kdb-clusters-rdb), or [Gateway](kdb-cluster-types.md#kdb-clusters-gw). The create cluster workflow is a step-wise wizard, where you add various details based on the cluster type you choose. The fields on each page can differ based on your selections throughout the cluster creation process. 
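As an illustration of how the wizard maps to the API, the following is a minimal sketch of a `CreateKxCluster` request. The field names follow the CreateKxCluster API reference linked above; the environment ID, cluster name, and release label are placeholder values, not real resources.

```python
# Minimal sketch of a CreateKxCluster request; all values are placeholders.
request = {
    "environmentId": "myKxEnvironmentId",  # your Managed kdb environment
    "clusterName": "my-hdb-cluster",
    "clusterType": "HDB",                  # HDB, RDB, GATEWAY, GP, or TICKERPLANT
    "releaseLabel": "1.0",
}
# With boto3, this would be passed as:
#   boto3.client("finspace").create_kx_cluster(**request)
```

The remaining wizard steps add more fields (capacity, VPC, data, and storage) to this same request.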

## Prerequisites
<a name="create-cluster-prereq"></a>

Before you proceed, complete the following prerequisites: 
+ If you want to run clusters on a scaling group, [create a scaling group](create-scaling-groups.md).
+ If you want to run a TP, GP, or RDB cluster on a scaling group, [create a volume](create-volumes.md).
+ If you want to run an HDB type cluster on a scaling group, [create a dataview](managing-kdb-dataviews.md#create-kdb-dataview).

**Topics**
+ [Prerequisites](#create-cluster-prereq)
+ [Opening the cluster wizard](create-cluster-tab.md)
+ [Step 1: Add cluster details](create-cluster-step1.md)
+ [Step 2: Add code](create-cluster-step2.md)
+ [Step 3: Configure VPC settings](create-cluster-step3.md)
+ [Step 4: Configure data and storage](create-cluster-step4.md)
+ [Step 5: Review and create](create-cluster-step5.md)

# Opening the cluster wizard
<a name="create-cluster-tab"></a>

**To open the create cluster wizard**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. In the left pane, under **Managed kdb Insights**, choose **Kdb environments**.

1. In the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Clusters** tab.

1. Choose **Create cluster**. A step-wise wizard to create a cluster opens.

# Step 1: Add cluster details
<a name="create-cluster-step1"></a>

Specify details for each of the following sections on the **Add cluster details** page.

## Cluster details
<a name="create-cluster-step1-clusterdetails"></a>

1. Choose from one of the following types of clusters that you want to add.
   + **(HDB) Historical Database**
   + **(RDB) Realtime Database**
   + **Gateway**
   + **General purpose**
   + **Tickerplant** 

   For more information about cluster types, see [Managed kdb Insights clusters](finspace-managed-kdb-clusters.md).
**Note**  
Currently, from the console you can directly create only dedicated HDB clusters and clusters of any type that run on a scaling group. To create other types of clusters, first create a [Support case](https://aws.amazon.com/contact-us/), and then proceed with the steps in this tutorial.
The parameters that are displayed on the **Step 4: Configure data and storage** page change based on the cluster type and running mode that you select in this step.

1. Add a unique name and a brief description for your cluster.

1. For **Release label**, choose the package version to run in the cluster.

1. (Optional) Choose the IAM role that defines a set of permissions associated with this cluster. This is an execution role that will be associated with the cluster. You can use this role to control access to other clusters in your Managed kdb environment.

## Cluster running mode
<a name="create-cluster-step1-runmode"></a>

1. Choose if you want to add this cluster as a dedicated cluster or as a part of scaling groups. 
   + **Run on kdb scaling group** – Allows you to share a single set of compute with multiple clusters.
   + **Run as a dedicated cluster** – Allows you to run each process on its own compute host.

1. If you choose **Run as a dedicated cluster**, you also need to provide the Availability Zones where you want to create a cluster.

   1. Choose **AZ mode** to specify the number of Availability Zones where you want to create a cluster. You can choose from one of the following options:
      + **Single** – Allows you to create a cluster in one Availability Zone that you select. If you choose this option, you must specify exactly one Availability Zone value and only one subnet in the next step. The subnet must reside in one of the three Availability Zones that your kdb environment uses, and the Availability Zone you specify must match that subnet's zone.
      + **Multiple** – Allows you to create a cluster with nodes automatically allocated across all the Availability Zones that your Managed kdb environment uses. This option provides resiliency against node or cache failures in a single Availability Zone. If you choose this option, you must specify three subnets, one in each of the three Availability Zones that your kdb environment uses.
**Note**  
For the **General purpose** and **Tickerplant** cluster types, you can only choose the **Single** AZ mode.

   1. Choose the Availability Zone IDs that include the subnets you want to add.
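In API terms, the AZ choices above correspond to the `azMode` and `availabilityZoneId` fields of the CreateKxCluster request. The following sketch uses a placeholder zone ID.

```python
# Sketch of the AZ fields for a dedicated cluster; the zone ID is a placeholder.
single_az = {"azMode": "SINGLE", "availabilityZoneId": "use1-az1"}

# In MULTI mode, nodes are spread across the environment's Availability Zones,
# so no single zone ID is specified.
multi_az = {"azMode": "MULTI"}
```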

## Scaling group details
<a name="create-cluster-step1-sgdetails"></a>

**Note**  
This section is only available when you choose to add the cluster as part of a scaling group.

Choose the name of the scaling group where you want to create this cluster. The dropdown shows the metadata for each scaling group along with its name to help you decide which one to pick. If a scaling group is not available, choose **Create kdb scaling group** to add a new one. For more information, see [Creating a Managed kdb scaling group](create-scaling-groups.md).

## Node details
<a name="create-cluster-step1-nodedetails"></a>

In this section, you can choose the capacity configuration for your clusters. The fields in this section vary for dedicated and scaling group clusters.

------
#### [ Scaling group cluster ]

You can specify how much of the scaling group's shared memory and CPU your cluster uses by providing the following information. 

1. Under **Node details**, for **Node count**, enter the number of instances in a cluster. 
**Note**  
For a **General purpose** and **Tickerplant** type cluster, the node count is fixed at 1.

1. Enter the memory reservation and limits per node. Specifying the memory limit is optional. The memory limit should be equal to or greater than the memory reservation.

1. (Optional) Enter the number of vCPUs that you want to reserve for each node of this scaling group cluster.
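   For reference, these choices correspond to the `scalingGroupConfiguration` structure in the CreateKxCluster API. The scaling group name and the memory and CPU figures below are illustrative, and the final check mirrors the rule that the memory limit must be at least the reservation.

   ```python
   # Sketch of scaling group capacity fields; name and sizes are illustrative.
   scaling_group_configuration = {
       "scalingGroupName": "my-scaling-group",  # placeholder
       "nodeCount": 2,
       "memoryReservation": 6144,  # memory reserved per node
       "memoryLimit": 8192,        # optional; must be >= the reservation
       "cpu": 2.0,                 # optional vCPUs reserved per node
   }
   assert scaling_group_configuration["memoryLimit"] >= scaling_group_configuration["memoryReservation"]
   ```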

------
#### [ Dedicated cluster ]

For a dedicated cluster you can provide an initial node count and choose the capacity configuration from a pre-defined list of node types. For example, the node type `kx.s.large` allows you to use two vCPUs and 12 GiB of memory for your instance.

1. Under **Node details**, for **Initial node count**, enter the number of instances in a cluster. 
**Note**  
For a **General purpose** and **Tickerplant** type cluster, the node count is fixed at 1.

1. For **Node type**, choose the memory and storage capabilities for your cluster instance. You can choose from one of the following options:
   + `kx.s.large` – The node type with a configuration of 12 GiB memory and 2 vCPUs.
   + `kx.s.xlarge` – The node type with a configuration of 27 GiB memory and 4 vCPUs.
   + `kx.s.2xlarge` – The node type with a configuration of 54 GiB memory and 8 vCPUs.
   + `kx.s.4xlarge` – The node type with a configuration of 108 GiB memory and 16 vCPUs.
   + `kx.s.8xlarge` – The node type with a configuration of 216 GiB memory and 32 vCPUs.
   + `kx.s.16xlarge` – The node type with a configuration of 432 GiB memory and 64 vCPUs.
   + `kx.s.32xlarge` – The node type with a configuration of 864 GiB memory and 128 vCPUs.
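   For reference, the node type and count map to the `capacityConfiguration` structure in the CreateKxCluster API; the lookup table below simply restates the list above, and the node count is illustrative.

   ```python
   # Node types and their (vCPUs, memory in GiB), restating the list above.
   node_specs = {
       "kx.s.large": (2, 12),     "kx.s.xlarge": (4, 27),
       "kx.s.2xlarge": (8, 54),   "kx.s.4xlarge": (16, 108),
       "kx.s.8xlarge": (32, 216), "kx.s.16xlarge": (64, 432),
       "kx.s.32xlarge": (128, 864),
   }

   # Sketch of the capacity fields; the node count is illustrative.
   capacity_configuration = {"nodeType": "kx.s.large", "nodeCount": 3}
   vcpus, memory_gib = node_specs[capacity_configuration["nodeType"]]
   ```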

------

## Auto-scaling
<a name="create-cluster-step1-autoscalingdetails"></a>

**Note**  
This section is only available when you add an HDB cluster type as a dedicated cluster.

Specify details to scale your cluster in or out based on service utilization. For more information, see [Auto scaling](kdb-cluster-types.md#kdb-cluster-hdb-autoscaling).

1. Enter a minimum node count. Valid numbers: 1–5.

1. Enter a maximum node count. Valid numbers: 1–5.

1. Choose the metrics to auto scale your cluster. Currently, FinSpace only supports CPU utilization.

1. Enter the cooldown time before initiating another scaling event.
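These settings correspond to the `autoScalingConfiguration` structure in the CreateKxCluster API; the target and cooldown values below are illustrative.

```python
# Sketch of auto scaling fields for a dedicated HDB cluster.
auto_scaling_configuration = {
    "minNodeCount": 1,  # valid range 1-5
    "maxNodeCount": 5,  # valid range 1-5
    "autoScalingMetric": "CPU_UTILIZATION_PERCENTAGE",
    "metricTarget": 70.0,             # illustrative CPU utilization target
    "scaleInCooldownSeconds": 300.0,  # illustrative cooldown values
    "scaleOutCooldownSeconds": 300.0,
}
```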

## Tags
<a name="create-cluster-step1-tags"></a>

1. (Optional) Add a new tag to assign to your kdb cluster. For more information, see [AWS tags](https://docs.aws.amazon.com/finspace/latest/userguide/create-an-amazon-finspace-environment.html#aws-tags). 
**Note**  
You can only add up to 50 tags to your cluster.

1. Choose **Next** to go to the next step of the wizard.

# Step 2: Add code
<a name="create-cluster-step2"></a>

You can load q code onto your kdb cluster so that you can run it on the cluster. Additionally, you can configure your cluster to automatically run a particular q command script on cluster startup. By default, q writes files uncompressed. You can pass command line arguments to set the compression defaults (`.z.zd`) when you create a cluster from the console or through the AWS CLI, and update them later.

**Note**  
This step is required for the **Gateway** and **Tickerplant** cluster type.

On the **Add code** page, add the following details about the custom code that you want to use to analyze the data in the cluster. 

1. (Optional) Specify the **S3 URI** and the **Object version**. You can choose the *.zip* file that contains code that should be available on the cluster.

1. (Optional) For **Initialization script**, enter the relative path that contains a q program script that will run at the launch of a cluster. If you choose to load the database by using the initialization script, it will autoload on startup. If you add a changeset that has a missing sym file, the cluster creation fails. 
**Note**  
This step is optional. If you choose to enter the initialization script, you must also provide the S3 URI.

1. (Optional) Enter key-value pairs as command-line arguments to configure the behavior of clusters. You can use the command-line arguments to set [zip defaults](https://code.kx.com/q/ref/dotz/#zzd-zip-defaults) for your clusters. For this, pass the following key-value pair:
   + **Key**: `AWS_ZIP_DEFAULT` 
   + **Value**: `17,2,6`

     The value consists of three comma-separated numbers that represent the logical block size, algorithm, and compression level, respectively. For more information, see [compression parameters](https://code.kx.com/q/kb/file-compression/#compression-parameters). You can also add the key-value pair when you [update code configuration on a cluster](update-cluster-code.md).
**Note**  
You can only add up to 50 key-value pairs.

   To set compression default using AWS CLI, use the following command:

   ```
   aws finspace create-kx-cluster \
       ...
       --command-line-arguments '[{"key": "AWS_ZIP_DEFAULT", "value":"17,2,6"}]' \
       ...
   ```

1. Choose **Next**.

**Note**  
To stop cluster creation from an initialization script in case of failure, use the `.aws.stop_current_kx_cluster_creation` function in the script.
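To make the `AWS_ZIP_DEFAULT` value concrete, the following sketch decodes the `17,2,6` triple used in the example above. The logical block size is a power-of-two exponent, and per the kdb+ compression parameters page linked earlier, algorithm `2` selects gzip.

```python
# Decode the AWS_ZIP_DEFAULT triple "17,2,6" (.z.zd semantics).
block_log2, algorithm, level = (int(x) for x in "17,2,6".split(","))

logical_block_bytes = 2 ** block_log2  # logical block size is 2^17 bytes
# algorithm 2 selects gzip; level 6 is the gzip compression level
```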

# Step 3: Configure VPC settings
<a name="create-cluster-step3"></a>

You connect to your cluster using q IPC through an AWS PrivateLink VPC endpoint. The endpoint resides in a subnet that you specify in the AWS account where you created your Managed kdb environment. Each cluster that you create has its own AWS PrivateLink endpoint, with an elastic network interface that resides in the subnet you specify. You can specify a security group to be applied to the VPC endpoint.

Connect a cluster to a VPC in your account. On the **Configure VPC settings** page, do the following: 

1. Choose the VPC that you want to access.

1. Choose the VPC subnets that the cluster will use to set up your VPC configuration.

1. Choose the security group.

1. Choose **Next**.
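These selections correspond to the `vpcConfiguration` structure in the CreateKxCluster API; the resource IDs below are placeholders for resources in your own account.

```python
# Sketch of the VPC fields; all IDs are placeholders.
vpc_configuration = {
    "vpcId": "vpc-0123456789abcdef0",
    "securityGroupIds": ["sg-0123456789abcdef0"],
    "subnetIds": ["subnet-0123456789abcdef0"],  # one subnet in Single AZ mode
    "ipAddressType": "IP_V4",
}
```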

# Step 4: Configure data and storage
<a name="create-cluster-step4"></a>

Choose data and storage configurations that will be used for the cluster. 

The parameters on this page are displayed according to the cluster type that you selected in *Step 1: Add cluster details*.

**Note**  
If you choose to add both the **Read data configuration** and **Savedown storage configuration**, the database name must be the same for both the configurations.

## For HDB cluster
<a name="create-cluster-step4-hdb"></a>

**Note**  
When you create a cluster with a database that has a changeset, it will autoload the database when you launch a cluster.

If you choose **Cluster type** as *HDB*, you can specify the database and cache configurations as follows:

------
#### [ Scaling group cluster ]

1. Choose the name of the database.

1. Choose a dataview for the database you selected.
**Note**  
If a dataview is not available in the list, either choose **Create dataview** to create a new one for the database you selected, or try changing the Availability Zone.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. Choose the name of the database. This database must have a changeset added to it.

1. Choose the changeset that you want to use. By default, this field displays the most recent changeset.

1. Choose whether you want to cache your data from your database to this cluster. If you choose to enable caching, provide the following information: 

   1. Choose the cache type, which is a type of read-only storage for storing a subset of your database content for faster read performance. You can choose from one of the following options:
      + **CACHE_1000** – Provides a throughput of 1000 MB/s per unit storage (TiB).
      + **CACHE_250** – Provides a throughput of 250 MB/s per unit storage (TiB).
      + **CACHE_12** – Provides a throughput of 12 MB/s per unit storage (TiB).

   1. Choose the size of the cache. For cache types **CACHE_1000** and **CACHE_250**, you can select a cache size of 1200 GB or increments of 2400 GB. For cache type **CACHE_12**, you can select the cache size in increments of 6000 GB.

1. Choose **Next**. The **Review and create** page opens.

------
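For a dedicated HDB cluster, the database, changeset, and cache selections above correspond to the `databases` and `cacheStorageConfigurations` fields of the CreateKxCluster API. The database and changeset identifiers below are placeholders.

```python
# Sketch of read-data and cache fields for a dedicated HDB cluster.
databases = [{
    "databaseName": "my-database",    # placeholder
    "changesetId": "myChangesetId",   # placeholder; defaults to the most recent
    "cacheConfigurations": [
        {"cacheType": "CACHE_1000", "dbPaths": ["/"]},  # cache the whole database
    ],
}]
cache_storage_configurations = [{"type": "CACHE_1000", "size": 1200}]  # size in GB
```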

## For RDB cluster
<a name="create-cluster-step4-rdb"></a>

If you choose **Cluster type** as *RDB*, you can specify the savedown storage configurations for your cluster as follows:

------
#### [ Scaling group cluster ]

1. **Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   Choose the name of the storage volume for your savedown files that you created in advance. If a volume name is not available, choose **Create volume** to create it.

1. **(Optional) Tickerplant log configuration**

   Choose a **Volume name** from which to read the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. **Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   1. Choose the writeable storage space type for temporarily storing your savedown data. Currently, only the **SDS01** storage type is available. This type represents 3000 IOPS and the Amazon EBS volume type `io2`. 

   1. Enter the size of the savedown storage that will be available to the cluster in GiB.

1. **Tickerplant log configuration**

   Choose one or more volume names from which to read the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------
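For a dedicated RDB cluster, the savedown choices above correspond to the `savedownStorageConfiguration` field of the CreateKxCluster API; the size below is illustrative.

```python
# Sketch of the savedown storage fields; the size (in GiB) is illustrative.
savedown_storage_configuration = {"type": "SDS01", "size": 500}
```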

## For Gateway cluster
<a name="create-cluster-step4-gw"></a>

If you choose **Cluster type** as **Gateway**, you do not need to attach databases, cache configurations, or local storage in this step.

## For General purpose cluster
<a name="create-cluster-step4-gp"></a>

If you choose **Cluster type** as *General purpose*, you can specify the database and cache configurations and savedown storage configurations as follows:

------
#### [ Scaling group cluster ]

1. **(Optional) Read data configuration**

   1. Choose the name of the database.

   1. Choose a dataview for the database you selected.
**Note**  
If a dataview is not available in the list, either choose **Create dataview** to create a new one for the database you selected, or try changing the Availability Zone.

1. **(Optional) Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   Choose the name of the storage volume for your savedown files that you created in advance. If a volume name is not available, choose **Create volume** to create it.

1. **(Optional) Tickerplant log configuration**

   Choose a **Volume name** from which to read the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. **(Optional) Read data configuration**

   1. Choose the name of the database. This database must have a changeset added to it.

   1. Choose the changeset that you want to use. By default, this field displays the most recent changeset.

   1. Choose whether you want to cache your data from your database to this cluster. If you choose to enable caching, provide the following information: 

      1. Specify paths within the database directory where you want to cache data.

      1. Choose the cache type, which is a type of read-only storage for storing a subset of your database content for faster read performance. You can choose from one of the following options:
         + **CACHE_1000** – Provides a throughput of 1000 MB/s per unit storage (TiB).
         + **CACHE_250** – Provides a throughput of 250 MB/s per unit storage (TiB).
         + **CACHE_12** – Provides a throughput of 12 MB/s per unit storage (TiB).

      1. Choose the size of the cache. For cache types **CACHE_1000** and **CACHE_250**, you can select a cache size of 1200 GB or increments of 2400 GB. For cache type **CACHE_12**, you can select the cache size in increments of 6000 GB.

1. **(Optional) Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   1. Choose the writeable storage space type for temporarily storing your savedown data. Currently, only the **SDS01** storage type is available. This type represents 3000 IOPS and the Amazon EBS volume type `io2`. 

   1. Enter the size of the savedown storage that will be available to the cluster in GiB.

1. **Tickerplant log configuration**

   Choose one or more volume names from which to read the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------

## For Tickerplant cluster
<a name="create-cluster-step4-tp"></a>

For both scaling group clusters and dedicated clusters, you can choose a volume where you want to store the tickerplant data.

1. **Tickerplant log configuration**

   Choose a **Volume name** to store the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.
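The tickerplant log selection corresponds to the `tickerplantLogConfiguration` field of the CreateKxCluster API; the volume name below is a placeholder for a volume created in advance.

```python
# Sketch of the tickerplant log fields; the volume name is a placeholder.
tickerplant_log_configuration = {"tickerplantLogVolumes": ["my-volume"]}
```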

# Step 5: Review and create
<a name="create-cluster-step5"></a>

1. On the **Review and create** page, review the details that you provided. You can modify details for any step when you choose **Edit** on this page.

1. Choose **Create cluster**. The cluster details page opens where you can view the status of cluster creation.