

After careful consideration, we decided to end support for Amazon FinSpace, effective October 7, 2026. Amazon FinSpace will no longer accept new customers beginning October 7, 2025. As an existing customer with an Amazon FinSpace environment created before October 7, 2025, you can continue to use the service as normal. After October 7, 2026, you will no longer be able to use Amazon FinSpace. For more information, see [Amazon FinSpace end of support](https://docs.aws.amazon.com/finspace/latest/userguide/amazon-finspace-end-of-support.html). 

# Managing kdb clusters


The following sections provide a detailed overview of the operations that you can perform by using Managed kdb clusters.

**Topics**
+ [Activating your Managed kdb Insights license](kdb-licensing.md)
+ [Managed kdb Insights cluster software bundles](kdb-software-bundles.md)
+ [Maintaining a Managed kdb Insights cluster](maintaining-kdb-clusters.md)
+ [Creating a Managed kdb Insights cluster](create-kdb-clusters.md)
+ [Viewing kdb cluster detail](view-kdb-clusters.md)
+ [Updating code configurations on a running cluster](update-cluster-code.md)
+ [Updating a kdb cluster database](update-kdb-clusters-databases.md)
+ [Deleting a kdb cluster](delete-kdb-clusters.md)

# Activating your Managed kdb Insights license

To run Managed kdb Insights clusters, you must first have an existing kdb Insights license from KX. That kdb Insights license needs to be activated for your Managed kdb Insights environment(s). You're responsible for working directly with KX (KX Systems, Inc., a subsidiary of FD Technologies plc) to obtain this.

To activate an existing kdb Insights license for your Managed kdb Insights environment(s), do the following: 
+ Contact your KX account manager or KX sales representative and provide them with the AWS account number for all the accounts where you want to use your Managed kdb Insights environment(s).
+ After you make arrangements with KX, the kdb Insights license is automatically applied to your Managed kdb Insights environment(s).

**Note**  
If you do not have an existing kdb Insights license, you can request a 30-day trial license from KX [here](https://kx.com/amazon-finspace-with-managed-kdb-insights/). KX will then activate a 30-day trial license for you. You will receive an activation email, and the trial license will be automatically applied to your Managed kdb Insights environment.

**Note**  
If KX has already enabled a kdb license for use with Managed kdb Insights in your AWS account and the license has not expired, you can start using clusters in your environment as soon as it is created. You do not need to request a new license.

# Managed kdb Insights cluster software bundles

When you launch a cluster, you can choose the software versions that run on your cluster. This allows you to test and use application versions that fit your compatibility requirements. 

You specify the release version by using the `Release Label`. Release labels are in the form `x.x`.

The following table lists the software versions that each release label includes. Currently, each release label includes only the kdb Insights core.


| Managed kdb Insights release label | kdb Insights core version | 
| --- | --- | 
| 1.0 | 4.0.3 | 

# Maintaining a Managed kdb Insights cluster


Maintaining a kdb cluster involves updates to the cluster's underlying operating system or to the container hosting the Managed kdb Insights software. FinSpace manages and applies all such updates.

Some maintenance may require FinSpace to take your Managed kdb cluster offline for a short time. This includes installing or upgrading required operating system or database patches. This maintenance is automatically scheduled for patches that are related to security and instance reliability.

The maintenance window determines when pending operations start, but it doesn't limit the total execution time of these operations. Maintenance operations that don't finish before the maintenance window ends can continue beyond the specified end time.

## Managed kdb Insights maintenance window


Every Managed kdb environment has a weekly maintenance window during which system changes are applied. Modifications and software patches occur during this window. If a maintenance event is scheduled for a given week, it is initiated during the maintenance window. 


| AWS Region name | Time block | 
| --- | --- | 
| Canada (Central) | 15:00–16:30 UTC | 
| US West (N. California) | 18:00–19:30 UTC | 
| US West (Oregon) | 18:00–19:30 UTC | 
| US East (N. Virginia) | 15:00–16:30 UTC | 
| US East (Ohio) | 15:00–16:30 UTC | 
| Europe (Ireland) | 10:00–11:30 UTC | 
|  Europe (London)  | 09:00–10:30 UTC | 
|  Europe (Frankfurt)  | 08:00–09:30 UTC | 
|  Asia Pacific (Singapore)  | 02:00–03:30 UTC | 
|  Asia Pacific (Sydney)  | 23:00–00:30 UTC | 
|  Asia Pacific (Tokyo)  |  01:00–02:30 UTC  | 

# Creating a Managed kdb Insights cluster

You can either use the console or the [CreateKxCluster](https://docs.aws.amazon.com/finspace/latest/management-api/API_CreateKxCluster) API to create a cluster. When you create a cluster from the console, you choose one of the following cluster types available in FinSpace – [General purpose](kdb-cluster-types.md#kdb-clusters-gp), [Tickerplant](kdb-cluster-types.md#kdb-clusters-tp), [HDB](kdb-cluster-types.md#kdb-clusters-hdb), [RDB](kdb-cluster-types.md#kdb-clusters-rdb), and [Gateway](kdb-cluster-types.md#kdb-clusters-gw). The create cluster workflow includes a step-wise wizard, where you will add various details based on the cluster type you choose. The fields on each page can differ based on various selections throughout the cluster creation process. 
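The console wizard maps to the `CreateKxCluster` API. The following is a rough sketch of the CLI equivalent for a dedicated HDB cluster; the IDs and names are placeholders, the exact set of required parameters depends on the cluster type you choose, and an HDB cluster also needs database details as described in *Step 4: Configure data and storage*.

```
aws finspace create-kx-cluster \
    --environment-id <your-environment-id> \
    --cluster-name my-hdb-cluster \
    --cluster-type HDB \
    --release-label "1.0" \
    --az-mode SINGLE \
    --availability-zone-id <az-id> \
    --capacity-configuration nodeType=kx.s.large,nodeCount=2 \
    --vpc-configuration vpcId=<vpc-id>,securityGroupIds=<sg-id>,subnetIds=<subnet-id>,ipAddressType=IP_V4
```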

## Prerequisites


Before you proceed, complete the following prerequisites: 
+ If you want to run clusters on a scaling group, [create a scaling group](create-scaling-groups.md).
+ If you want to run a TP, GP, or RDB cluster on a scaling group, [create a volume](create-volumes.md).
+ If you want to run an HDB type cluster on a scaling group, [create a dataview](managing-kdb-dataviews.md#create-kdb-dataview).

**Topics**
+ [Prerequisites](#create-cluster-prereq)
+ [Opening the cluster wizard](create-cluster-tab.md)
+ [Step 1: Add cluster details](create-cluster-step1.md)
+ [Step 2: Add code](create-cluster-step2.md)
+ [Step 3: Configure VPC settings](create-cluster-step3.md)
+ [Step 4: Configure data and storage](create-cluster-step4.md)
+ [Step 5: Review and create](create-cluster-step5.md)

# Opening the cluster wizard


**To open the create cluster wizard**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. In the left pane, under **Managed kdb Insights**, choose **Kdb environments**.

1. In the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Clusters** tab.

1. Choose **Create cluster**. A step-wise wizard to create a cluster opens.

# Step 1: Add cluster details


Specify details for each of the following sections on the **Add cluster details** page.

## Cluster details


1. Choose the type of cluster that you want to add.
   + **(HDB) Historical Database**
   + **(RDB) Realtime Database**
   + **Gateway**
   + **General purpose**
   + **Tickerplant** 

   For more information about cluster types, see [Managed kdb Insights clusters](finspace-managed-kdb-clusters.md).
**Note**  
Currently, from the console you can directly create only dedicated HDB clusters and any cluster type that runs on a scaling group. To create other types of clusters, you first need to create a [Support case](https://aws.amazon.com/contact-us/), and then proceed with the steps in this tutorial.
The parameters that are displayed on the **Step 4: Configure data and storage** page will change based on the cluster type and running mode that you select in this step.

1. Add a unique name and a brief description for your cluster.

1. For `Release label`, choose the package version to run in the cluster.

1. (Optional) Choose the IAM role that defines a set of permissions associated with this cluster. This is an execution role that will be associated with the cluster. You can use this role to control access to other clusters in your Managed kdb environment.

## Cluster running mode


1. Choose whether you want to add this cluster as a dedicated cluster or as part of a scaling group. 
   + **Run on kdb scaling group** – Allows you to share a single set of compute with multiple clusters.
   + **Run as a dedicated cluster** – Allows you to run each process on its own compute host.

1. If you choose **Run as a dedicated cluster**, you also need to provide the Availability Zones where you want to create a cluster.

   1. Choose **AZ mode** to specify the number of Availability Zones where you want to create a cluster. You can choose from one of the following options:
      + **Single** – Allows you to create a cluster in one Availability Zone that you select. If you choose this option, you must specify exactly one Availability Zone value and one subnet in the next step. The subnet must reside in one of the three AZs that your kdb environment uses, and the Availability Zone that you select must be one of those three AZs.
      + **Multiple** – Allows you to create a cluster with nodes automatically allocated across all the Availability Zones that are used by your Managed kdb environment. This option provides resiliency for node or cache failures in a Single-AZ. If you choose this option, you must specify three subnets, one in each of the three AZs that your kdb environment uses.
**Note**  
For the **General purpose** and **Tickerplant** type cluster, you can only choose Single-AZ.

   1. Choose the Availability Zone IDs that include the subnets you want to add.

## Scaling group details


**Note**  
This section is only available when you choose to add the cluster as part of a scaling group.

Choose the name of the scaling group where you want to create this cluster. The drop-down list shows the metadata for each scaling group along with its name to help you decide which one to pick. If a scaling group is not available, choose **Create kdb scaling group** to add a new one. For more information, see [Creating a Managed kdb scaling group](create-scaling-groups.md).

## Node details


In this section, you can choose the capacity configuration for your clusters. The fields in this section vary for dedicated and scaling group clusters.

------
#### [ Scaling group cluster ]

You can specify how much of the scaling group's memory and CPU the nodes of your scaling group clusters use by providing the following information. 

1. Under **Node details**, for **Node count**, enter the number of instances in a cluster. 
**Note**  
For a **General purpose** and **Tickerplant** type cluster, the node count is fixed at 1.

1. Enter the memory reservation and limits per node. Specifying the memory limit is optional. The memory limit should be equal to or greater than the memory reservation.

1. (Optional) Enter the number of vCPUs that you want to reserve for each node of this scaling group cluster.

------
#### [ Dedicated cluster ]

For a dedicated cluster you can provide an initial node count and choose the capacity configuration from a pre-defined list of node types. For example, the node type `kx.s.large` allows you to use two vCPUs and 12 GiB of memory for your instance.

1. Under **Node details**, for **Initial node count**, enter the number of instances in a cluster. 
**Note**  
For a **General purpose** and **Tickerplant** type cluster, the node count is fixed at 1.

1. For **Node type**, choose the memory and storage capabilities for your cluster instance. You can choose from one of the following options:
   + `kx.s.large` – The node type with a configuration of 12 GiB memory and 2 vCPUs.
   + `kx.s.xlarge` – The node type with a configuration of 27 GiB memory and 4 vCPUs.
   + `kx.s.2xlarge` – The node type with a configuration of 54 GiB memory and 8 vCPUs.
   + `kx.s.4xlarge` – The node type with a configuration of 108 GiB memory and 16 vCPUs.
   + `kx.s.8xlarge` – The node type with a configuration of 216 GiB memory and 32 vCPUs.
   + `kx.s.16xlarge` – The node type with a configuration of 432 GiB memory and 64 vCPUs.
   + `kx.s.32xlarge` – The node type with a configuration of 864 GiB memory and 128 vCPUs.

------

## Auto-scaling


**Note**  
This section is only available when you add an HDB cluster type as a dedicated cluster.

Specify details to scale the cluster in or out based on service utilization. For more information, see [Auto scaling](kdb-cluster-types.md#kdb-cluster-hdb-autoscaling).

1. Enter a minimum node count. Valid numbers: 1–5.

1. Enter a maximum node count. Valid numbers: 1–5.

1. Choose the metrics to auto scale your cluster. Currently, FinSpace only supports CPU utilization.

1. Enter the cooldown time before initiating another scaling event.
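If you create the cluster through the CLI instead, the same settings can be supplied with the `--auto-scaling-configuration` parameter of `create-kx-cluster`. The following is a sketch; the metric target and cooldown values shown are illustrative.

```
aws finspace create-kx-cluster \
    ...
    --auto-scaling-configuration autoScalingMetric=CPU_UTILIZATION_PERCENTAGE,minNodeCount=1,maxNodeCount=5,metricTarget=60,scaleInCooldownSeconds=300,scaleOutCooldownSeconds=300 \
    ...
```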

## Tags


1. (Optional) Add a new tag to assign to your kdb cluster. For more information, see [AWS tags](https://docs.aws.amazon.com/finspace/latest/userguide/create-an-amazon-finspace-environment.html#aws-tags). 
**Note**  
You can only add up to 50 tags to your cluster.

1. Choose **Next** to go to the next step of the wizard.

# Step 2: Add code


You can load q code onto your kdb cluster and run it while the cluster is running. Additionally, you can configure your cluster to automatically run a particular q command script on cluster startup. By default, q writes files uncompressed. You can pass command line arguments to set the compression defaults (`.z.zd`) when you create a cluster from the console or through the CLI, and you can update them later.

**Note**  
This step is required for the **Gateway** and **Tickerplant** cluster type.

 On the **Add code** page, add the following details of your custom code that you want to use when analyzing the data in the cluster. 

1. (Optional) Specify the **S3 URI** and the **Object version**. You can choose the *.zip* file that contains code that should be available on the cluster.

1. (Optional) For **Initialization script**, enter the relative path that contains a q program script that will run at the launch of a cluster. If you choose to load the database by using the initialization script, it will autoload on startup. If you add a changeset that has a missing sym file, the cluster creation fails. 
**Note**  
This step is optional. If you choose to enter the initialization script, you must also provide the S3 URI.

1. (Optional) Enter key-value pairs as command-line arguments to configure the behavior of clusters. You can use the command-line arguments to set [zip defaults](https://code.kx.com/q/ref/dotz/#zzd-zip-defaults) for your clusters. For this, pass the following key-value pair:
   + **Key**: `AWS_ZIP_DEFAULT` 
   + **Value**: `17,2,6`

     The value consists of three comma-separated numbers that represent the logical block size, algorithm, and compression level, respectively. For more information, see [compression parameters](https://code.kx.com/q/kb/file-compression/#compression-parameters). You can also add the key-value pair when you [update code configuration on a cluster](update-cluster-code.md).
**Note**  
You can only add up to 50 key-value pairs.

   To set the compression default by using the AWS CLI, use the following command:

   ```
   aws finspace create-kx-cluster \
       ...
       --command-line-arguments '[{"key": "AWS_ZIP_DEFAULT", "value":"17,2,6"}]' \
       ...
   ```

1. Choose **Next**.

**Note**  
To stop cluster creation from an initialization script in case of failure, use the `.aws.stop_current_kx_cluster_creation` function in the script.

# Step 3: Configure VPC settings


You connect to your cluster using q IPC through an AWS PrivateLink VPC endpoint. The endpoint resides in a subnet that you specify in the AWS account where you created your Managed kdb environment. Each cluster that you create has its own AWS PrivateLink endpoint, with an elastic network interface that resides in the subnet you specify. You can specify a security group to be applied to the VPC endpoint.

Connect a cluster to a VPC in your account. On the **Configure VPC settings** page, do the following: 

1. Choose the VPC that you want to access.

1. Choose the VPC subnets that the cluster will use to set up your VPC configuration.

1. Choose the security group.

1. Choose **Next**.

# Step 4: Configure data and storage


Choose data and storage configurations that will be used for the cluster. 

The parameters on this page are displayed according to the cluster type that you selected in *Step 1: Add cluster details*.

**Note**  
If you choose to add both the **Read data configuration** and **Savedown storage configuration**, the database name must be the same for both the configurations.

## For HDB cluster


**Note**  
When you create a cluster with a database that has a changeset, it will autoload the database when you launch a cluster.

If you choose **Cluster type** as *HDB*, you can specify the database and cache configurations as follows:

------
#### [ Scaling group cluster ]

1. Choose the name of the database.

1. Choose a dataview for the database you selected.
**Note**  
If a dataview is not available in the list, either choose **Create dataview** to create a new one for the database you selected or try changing the Availability Zone.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. Choose the name of the database. This database must have a changeset added to it.

1. Choose the changeset that you want to use. By default, this field displays the most recent changeset.

1. Choose whether you want to cache your data from your database to this cluster. If you choose to enable caching, provide the following information: 

   1. Choose the cache type, which is a type of read-only storage for storing a subset of your database content for faster read performance. You can choose from one of the following options:
      + **CACHE_1000** – Provides a throughput of 1000 MB/s per unit storage (TiB).
      + **CACHE_250** – Provides a throughput of 250 MB/s per unit storage (TiB).
      + **CACHE_12** – Provides a throughput of 12 MB/s per unit storage (TiB).

   1. Choose the size of the cache. For cache types **CACHE_1000** and **CACHE_250**, you can select a cache size of 1200 GB or in increments of 2400 GB. For cache type **CACHE_12**, you can select the cache size in increments of 6000 GB.

1. Choose **Next**. The **Review and create** page opens.

------
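Through the CLI, the equivalent dedicated HDB settings are passed to `create-kx-cluster` as JSON. The following is a sketch; the database name, changeset ID, paths, and sizes are placeholders, and the cache type values correspond to the cache types described above.

```
aws finspace create-kx-cluster \
    ...
    --databases '[{"databaseName": "mydb", "changesetId": "<changeset-id>", "cacheConfigurations": [{"cacheType": "CACHE_1000", "dbPaths": ["/"]}]}]' \
    --cache-storage-configurations '[{"type": "CACHE_1000", "size": 1200}]' \
    ...
```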

## For RDB cluster


If you choose **Cluster type** as *RDB*, you can specify the savedown storage configurations for your cluster as follows:

------
#### [ Scaling group cluster ]

1. **Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   Choose the name of the storage volume for your savedown files that you created in advance. If a volume name is not available, choose **Create volume** to create it.

1. **(Optional) Tickerplant log configuration**

   Choose the **Volume name** of the volume from which to read the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. **Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   1. Choose the writeable storage space type for temporarily storing your savedown data. Currently, only the **SDS01** storage type is available. This type represents 3000 IOPS and the Amazon EBS volume type `io2`. 

   1. Enter the size of the savedown storage that will be available to the cluster in GiB.

1. **Tickerplant log configuration**

   Choose one or more volume names from which to read the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------
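For a dedicated RDB cluster created through the CLI, the savedown storage is specified with the `--savedown-storage-configuration` parameter of `create-kx-cluster`. The following is a sketch; the size (in GiB) is illustrative.

```
aws finspace create-kx-cluster \
    ...
    --cluster-type RDB \
    --savedown-storage-configuration type=SDS01,size=100 \
    ...
```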

## For Gateway cluster


If you choose **Cluster type** as **Gateway**, you do not need to attach databases, cache configurations, or local storage in this step.

## For General purpose cluster


If you choose **Cluster type** as *General purpose*, you can specify the database and cache configurations and savedown storage configurations as follows:

------
#### [ Scaling group cluster ]

1. **(Optional) Read data configuration**

   1. Choose the name of the database.

   1. Choose a dataview for the database you selected.
**Note**  
If a dataview is not available in the list, either choose **Create dataview** to create a new one for the database you selected or try changing the Availability Zone.

1. **(Optional) Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   Choose the name of the storage volume for your savedown files that you created in advance. If a volume name is not available, choose **Create volume** to create it.

1. **(Optional) Tickerplant log configuration**

   Choose the **Volume name** of the volume from which to read the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. **(Optional) Read data configuration**

   1. Choose the name of the database. This database must have a changeset added to it.

   1. Choose the changeset that you want to use. By default, this field displays the most recent changeset.

   1. Choose whether you want to cache your data from your database to this cluster. If you choose to enable caching, provide the following information: 

      1. Specify paths within the database directory where you want to cache data.

      1. Choose the cache type, which is a type of read-only storage for storing a subset of your database content for faster read performance. You can choose from one of the following options:
         + **CACHE_1000** – Provides a throughput of 1000 MB/s per unit storage (TiB).
         + **CACHE_250** – Provides a throughput of 250 MB/s per unit storage (TiB).
         + **CACHE_12** – Provides a throughput of 12 MB/s per unit storage (TiB).

      1. Choose the size of the cache. For cache types **CACHE_1000** and **CACHE_250**, you can select a cache size of 1200 GB or in increments of 2400 GB. For cache type **CACHE_12**, you can select the cache size in increments of 6000 GB.

1. **(Optional) Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   1. Choose the writeable storage space type for temporarily storing your savedown data. Currently, only the **SDS01** storage type is available. This type represents 3000 IOPS and the Amazon EBS volume type `io2`. 

   1. Enter the size of the savedown storage that will be available to the cluster in GiB.

1. **Tickerplant log configuration**

   Choose one or more volume names from which to read the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------

## For Tickerplant cluster


For both scaling group clusters and dedicated clusters, you can choose a volume where you want to store the tickerplant data.

1. **Tickerplant log configuration**

   Choose a **Volume name** to store the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

# Step 5: Review and create


1. On the **Review and create** page, review the details that you provided. You can modify details for any step when you choose **Edit** on this page.

1. Choose **Create cluster**. The cluster details page opens where you can view the status of cluster creation.

# Viewing kdb cluster detail

**To view and get details of a kdb cluster**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. In the left pane, under **Managed kdb Insights**, choose **Kdb environments**.

1. From the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Clusters** tab. The table under this tab displays a list of clusters.

1. Choose a cluster name to view its details. The cluster details page opens, where you can view the cluster details and the following tabs.
   + **Configuration** tab – Displays the cluster configuration details, such as the node details, code, Availability Zones, and savedown database configuration.
   + **Monitoring** tab – Displays the dashboard of cluster metrics. 
   + **Nodes** tab – Displays a list of nodes in this cluster along with their status. Nodes that are active have a **Running** status, and nodes that are being prepared or are stuck due to a lack of resources have a **Provisioning** status. From here, you can also delete a node. To do so, select a node and choose **Delete**.
   + **Logs** section – Displays the activity logs for your clusters. 
   + **Tags** tab – Displays a list of key-value pairs associated with the cluster. If you did not provide tags during cluster creation, choose **Manage tags** to add new tags.
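You can also retrieve these details programmatically. The following is a sketch using the `ListKxClusters` and `GetKxCluster` APIs through the AWS CLI; the environment ID and cluster name are placeholders.

```
aws finspace list-kx-clusters --environment-id <your-environment-id>

aws finspace get-kx-cluster \
    --environment-id <your-environment-id> \
    --cluster-name <your-cluster-name>
```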

# Updating code configurations on a running cluster

Amazon FinSpace allows you to update code configurations on a running cluster. You can either use the console or the [UpdateKxClusterCodeConfiguration](https://docs.aws.amazon.com/finspace/latest/management-api/API_UpdateKxClusterCodeConfiguration) API to update the code. Both the console and the API let you choose how to update the code on a cluster by using different deployment modes. Based on the option you choose, you can reduce the time it takes to update the code on a cluster. You can also add or delete default compression parameters for your files by using command-line arguments.

**Note**  
The configuration that you update will override any existing configurations on the cluster. 

**To update code configurations on a cluster by using the console**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the list of environments, choose a kdb environment.

1. On the environment details page, choose the **Clusters** tab.

1. From the list of clusters, choose the one where you want to update the code. The cluster details page opens.

1. On the cluster details page, choose the **Details** tab.

1. Under **Code** section, choose **Edit**.
**Note**  
This button is only available for an **Active** environment and when the cluster is in a **Running** state.

1. On the **Edit code configuration** page, choose how you want to update a cluster by choosing a deployment mode. The following options are available.
   + **Rolling** – (Default) Loads the code by stopping the existing q process and starting a new q process with the updated configuration.
   + **Quick** – Loads the code by stopping all the running nodes immediately. 

1. Specify the **S3 URI** and the **Object version**. This allows you to choose the *.zip* file containing code that should be available on the cluster.

1. For **Initialization script**, enter the relative path that contains a q program script that will run at the launch of a cluster.

1. (Optional) Add or update the key-value pairs as command line arguments to configure the behavior of clusters. 

   You can use the command-line arguments to set [zip defaults](https://code.kx.com/q/ref/dotz/#zzd-zip-defaults) for your cluster. The cluster has to be restarted for the changes to take effect. For this, pass the following key-value pair:
   + **Key**: `AWS_ZIP_DEFAULT` 
   + **Value**: `17,2,6`

     The value consists of three comma-separated numbers that represent the logical block size, algorithm, and compression level, respectively. For more information, see [compression parameters](https://code.kx.com/q/kb/file-compression/#compression-parameters). 

     To update the compression default using AWS CLI, use the following command:

     ```
     aws finspace update-kx-cluster-code-configuration \
         ...
         --command-line-arguments '[{"key": "AWS_ZIP_DEFAULT", "value":"17,3,0"}]' \
         --deployment-configuration deploymentStrategy=ROLLING|FORCE \
         ...
     ```

1. Choose **Save changes**. The cluster details page opens and the updated code configuration is displayed once the cluster updates successfully.

# Updating a kdb cluster database

You can update the databases mounted on a kdb cluster by using the console. This feature is only available for HDB cluster types. With this feature, you can update the data in a cluster by selecting a changeset. You can also update the cache by providing database paths. You can't change a database name or add a new database if you created a cluster without one. 

You can also choose how you want to update the databases on the cluster by selecting a deployment mode. 

**To update a kdb cluster database**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the list of environments, choose a kdb environment.

1. On the environment details page, choose the **Clusters** tab.

1. From the list of clusters, choose the one where you want to update the database. The cluster details page opens.

1. On the cluster details page, choose the **Details** tab.

1. Under **Data management and storage** section, choose **Edit**.
**Note**  
This button is not available for *RDB* and *Gateway* type clusters.

1. On the edit page, modify the changeset that you want to cache as needed.

1. Choose a deployment mode from one of the following options.
   + **Rolling** – (Default) To update the database, this option stops the existing q process and starts a new q process with the updated database configuration. The initialization script re-runs when the new q process starts. 
   + **No restart** – This option updates the database but doesn't stop the existing q process. **No restart** is often quicker than the other deployment modes because it reduces the turnaround time to update the changeset configuration for a kdb database on your cluster. This option doesn't re-run the initialization script.
**Note**  
After the update completes, you must re-load the updated database. However, if you use a historical database (HDB) cluster with a single database in a rolling deployment, FinSpace autoloads the database after an update.

1. Choose **Save changes**. The cluster details page opens and the updated information is displayed once the cluster updates successfully.
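You can also perform this update programmatically with the `UpdateKxClusterDatabases` API. The following is a sketch using the AWS CLI; the IDs and names are placeholders, and the `NO_RESTART` strategy corresponds to the **No restart** deployment mode described above.

```
aws finspace update-kx-cluster-databases \
    --environment-id <your-environment-id> \
    --cluster-name <your-cluster-name> \
    --databases '[{"databaseName": "mydb", "changesetId": "<changeset-id>"}]' \
    --deployment-configuration deploymentStrategy=NO_RESTART
```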

# Deleting a kdb cluster

**Note**  
This action is irreversible. Deleting a kdb cluster will delete all of its data from the local storage.

**To delete a kdb cluster**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Clusters** tab.

1. From the list of clusters, choose the one that you want to delete. The cluster details page opens.

1. On the cluster details page, choose **Delete**.

1. On the confirmation dialog box, enter *confirm*.

1. Choose **Delete**.
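You can also delete a cluster with the `DeleteKxCluster` API. The following is a sketch using the AWS CLI; the environment ID and cluster name are placeholders.

```
aws finspace delete-kx-cluster \
    --environment-id <your-environment-id> \
    --cluster-name <your-cluster-name>
```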

## Deleting a cluster on scaling groups


