

After careful consideration, we decided to end support for Amazon FinSpace, effective October 7, 2026. Amazon FinSpace will no longer accept new customers beginning October 7, 2025. As an existing customer with an Amazon FinSpace environment created before October 7, 2025, you can continue to use the service as normal. After October 7, 2026, you will no longer be able to use Amazon FinSpace. For more information, see [Amazon FinSpace end of support](https://docs.aws.amazon.com/finspace/latest/userguide/amazon-finspace-end-of-support.html). 

# Managed kdb Insights clusters

A FinSpace Managed kdb Insights cluster is a set of compute resources that run kdb processes in a FinSpace Managed kdb environment. By using FinSpace Managed kdb clusters, you can easily set up your own private managed data processing and analytics hub for capital markets data. This provides access to real-time and historical data along with high-performance analytics.

**Topics**
+ [Running clusters on scaling groups vs. as a dedicated cluster](kdb-clusters-running-clusters-comparison.md)
+ [Cluster types](kdb-cluster-types.md)
+ [Managing kdb clusters](managing-kdb-clusters.md)
+ [Using Managed kdb Insights clusters](using-kdb-clusters.md)

# Running clusters on scaling groups vs. as a dedicated cluster

The original Amazon FinSpace Managed kdb cluster launch configuration is now referred to as a dedicated cluster. In a dedicated cluster, each node or kdb process in the cluster runs on its own dedicated compute host. 

![\[A diagram that shows dedicated cluster.\]](http://docs.aws.amazon.com/finspace/latest/userguide/images/11-managed-kx/finspace-cluster-coparison-image1.png)


This configuration provides strong workload isolation between clusters, and between nodes in a single cluster, at the expense of requiring a fixed amount of compute per node. In contrast, with a cluster on a scaling group, multiple workloads (clusters) run on a shared set of compute, allowing you to divide a fixed amount of compute among them.

![\[A diagram that shows shared compute.\]](http://docs.aws.amazon.com/finspace/latest/userguide/images/11-managed-kx/finspace-cluster-coparison-image2.png)


**Considerations**
+ Currently, a kdb scaling group is limited to only one host residing in one Availability Zone.
+ [HDB clusters](kdb-cluster-types.md#kdb-clusters-hdb) running on kdb scaling groups must use dataviews instead of a cluster-specific [disk cache](kdb-cluster-types.md#kdb-cluster-cache-config) to store database data for high-performance read access.
+ RDB and General Purpose clusters running on scaling groups must use a [kdb volume](finspace-managed-kdb-volumes.md) for their savedown storage configuration. 
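As an illustration of the considerations above, the following is a minimal sketch of the request parameters for creating an RDB cluster on a scaling group, in the shape of the FinSpace management API's `CreateKxCluster` operation. All names and sizes here are hypothetical placeholders, and the exact field set is an assumption based on the API shape, not a definitive template.

```python
# Illustrative CreateKxCluster parameters for an RDB running on a kdb scaling
# group. All names and sizes are hypothetical placeholders.
create_rdb_on_scaling_group = {
    "environmentId": "my-kdb-env-id",  # placeholder environment ID
    "clusterName": "example-rdb",      # placeholder cluster name
    "clusterType": "RDB",
    "releaseLabel": "1.0",
    "scalingGroupConfiguration": {
        "scalingGroupName": "example-scaling-group",  # placeholder
        "memoryReservation": 6,                       # GiB reserved per node
        "nodeCount": 1,
    },
    # Clusters on scaling groups use a shared kdb volume for savedown storage
    # instead of per-node disk.
    "savedownStorageConfiguration": {"volumeName": "example-savedown-volume"},
}

# The request could then be sent with boto3, for example:
# boto3.client("finspace").create_kx_cluster(**create_rdb_on_scaling_group)
```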

# Cluster types


Amazon FinSpace supports a variety of kdb clusters that you can use for different use cases, such as implementing a standard kdb tick architecture.

## General purpose


You can use a *general purpose* cluster if your kdb application doesn't require any specific features that are available on more specialized clusters, such as the multi-node, Multi-AZ read-only queries of an HDB cluster or multi-node, Multi-AZ gateways.

With a general purpose cluster, you can mount a kdb Insights database for read-only access, as well as storage ([savedown storage](#kdb-cluster-savedown-storage)) for writing. This ability to read a database and write contents from a single cluster makes general purpose clusters suitable for various maintenance tasks. For example, you can use a general purpose cluster for tasks that require the ability to read and write data, and for creating derived datasets from an HDB cluster, in support of use cases such as one-time analysis by quantitative analysts (quants).

**Features of a general purpose cluster**

The following are the features of a general purpose cluster.
+ The node count for this cluster type is fixed at 1.
+ It only supports Single-AZ mode.
+ It can mount a kdb Insights database for read-only access to data.
+ You can configure savedown storage at the time of creating the cluster. You can use this space for writing savedown files before loading them into a FinSpace database, or as a writeable space for other temporary files. For dedicated clusters, the savedown storage becomes unavailable when the cluster node is deleted. 
+ For clusters running on a scaling group, the savedown storage location uses a shared volume. This volume persists after you delete the cluster and can be used by other clusters. You can remove the data on the volume before deleting the cluster; otherwise, it remains available for use by other clusters.
+ It can update databases and cache with the [UpdateKxClusterDatabases](https://docs.aws.amazon.com/finspace/latest/management-api/API_UpdateKxClusterDatabases.html) operation.
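As a sketch of the last capability, the following shows how an `UpdateKxClusterDatabases` request might be shaped for a general purpose cluster. The environment, cluster, database, and changeset identifiers are hypothetical placeholders.

```python
# Illustrative UpdateKxClusterDatabases parameters; all identifiers are
# hypothetical placeholders.
update_databases_request = {
    "environmentId": "my-kdb-env-id",     # placeholder
    "clusterName": "example-gp-cluster",  # placeholder
    "databases": [
        {
            "databaseName": "example-db",        # placeholder
            "changesetId": "example-changeset",  # placeholder; pins a changeset
        }
    ],
}

# boto3.client("finspace").update_kx_cluster_databases(**update_databases_request)
```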

## Tickerplant


A tickerplant (TP) acts as a message bus that subscribes to data, or gets data pushed to it by feed handlers, and then publishes it to one or more consumers, typically a realtime database (RDB). It persists a copy of each message it receives to a durable log, called the TP log, so that downstream subscribers can request a replay of messages if needed. The following diagram shows how you can configure a TP cluster to save logs to a volume in Managed kdb Insights, from where an RDB type cluster can replay them.

![\[A diagram that shows how ticker plant works.\]](http://docs.aws.amazon.com/finspace/latest/userguide/images/11-managed-kx/finspace-tp-diagram.png)


**Features of a tickerplant cluster**

The following are the features of a tickerplant type cluster: 
+ It supports only a single node, that is, one kdb process.
+ It shares storage with RDB clusters.
+ It does not support the Multi-AZ mode. If you need Multi-AZ redundancy, run two TP type clusters in parallel.

## Gateway


In the vast majority of kdb+ systems, data is stored across several processes, which results in the need to access data across these processes. You do this by using a gateway that acts as a single interface point, separating the end user from the configuration of the underlying databases or services. With a gateway, you don't need to know where data is stored, and you don't need to make multiple requests to retrieve it.

To support running your custom gateway logic, Managed kdb Insights provides a *gateway* cluster type. You can deploy your own routing logic using the initialization scripts and custom code. You can configure gateways to a multi-node, Multi-AZ deployment for resiliency.

**Features of a gateway cluster**

The following are the features of a gateway type cluster: 
+ It supports running gateways with your custom code hosted inside a Managed kdb environment.
+ It supports hosting code with custom allocation logic for distributing load across different kdb clusters or nodes. 
+ It integrates with the discovery service to understand available clusters, monitor their health status, and provide an endpoint for the cluster.
+ It provides a network path from your custom code running on the gateway to the cluster supporting IPC connections.

## Real-time database (RDB)


You can use a *real-time database* cluster to capture all the data from another kdb process, such as a tickerplant, and store it in memory for query or real-time processing. Because the data volume can eventually exceed the amount of available memory, kdb customers typically move the data from the RDB to a historical database (HDB) using a process called *savedown*. This process typically occurs at the end of a business day. 

You can create, list, and delete RDB clusters with single or multiple nodes through both console and FinSpace API operations.
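For example, listing only the RDB clusters in an environment might look like the following sketch, which uses the `ListKxClusters` operation's cluster-type filter. The environment ID is a hypothetical placeholder.

```python
# Illustrative ListKxClusters parameters, filtered to RDB clusters. The
# environment ID is a hypothetical placeholder.
list_rdb_request = {
    "environmentId": "my-kdb-env-id",  # placeholder
    "clusterType": "RDB",              # return only realtime database clusters
}

# response = boto3.client("finspace").list_kx_clusters(**list_rdb_request)
# for summary in response["kxClusterSummaries"]:
#     print(summary["clusterName"], summary["status"])
```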

### Savedown storage


RDB clusters require local space for temporary storage of data during the savedown process. This temporary storage holds data for the period between when a cluster has flushed it from memory and when it is successfully loaded into a kdb Insights database. To support this, RDB clusters have a writeable disk that is used as storage space for savedown data. You can use the data saved down to the FinSpace database from an RDB by creating an HDB cluster that points to the database. 

**Considerations**

The following are some considerations related to savedown storage:
+ You can configure savedown storage at the time of creating the cluster. You can use this space to write savedown files before loading them into a FinSpace database, or as a writeable space for other temporary files.
+ For dedicated clusters, the savedown storage becomes unavailable when you delete a cluster node. 
+ For clusters running on a scaling group, the savedown storage location uses a shared volume. This volume persists after you delete the cluster and can be used by other clusters. You can remove the data on the volume before deleting the cluster; otherwise, it remains available for use by other clusters.
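The two savedown options can be summarized side by side. The following is an illustrative sketch of `savedownStorageConfiguration` values for each running mode; the size and volume name are hypothetical examples, not recommendations.

```python
# Illustrative savedownStorageConfiguration values for the two running modes.
# The size and volume name are hypothetical examples.
dedicated_savedown = {
    "type": "SDS01",  # currently the only storage type for dedicated clusters
    "size": 400,      # writeable space in GiB; fixed at cluster creation
}
scaling_group_savedown = {
    # A shared kdb volume that persists after the cluster is deleted.
    "volumeName": "example-savedown-volume",  # placeholder
}
```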

## Historical database (HDB)


A historical database holds data from a day before the current day. Each day, new records are added to the HDB at the end of day. To access data in Managed kdb databases from an HDB cluster, you must attach the databases you want to access as an option when launching the cluster. You can do this at the time of creating a cluster through the console, or by using the [create cluster API operation](https://docs.aws.amazon.com/finspace/latest/management-api/API_CreateKxCluster.html) in the *Amazon FinSpace Management API Reference*. The HDB cluster can access this data in a read-only mode.

### Cache configuration


When you attach a database to a cluster for access, by default, read operations are performed directly against the object store where the database data resides. Alternatively, you can define a file cache into which you can load data for faster performance. You do this by specifying a cache configuration when you associate the database with the cluster. You can specify a certain amount of cache, and then separately specify the contents of the database that you want to cache. 

FinSpace supports the following cache types: 
+ **CACHE_1000** – This type allows a throughput of 1000 MB/s per unit storage (TiB).
+ **CACHE_250** – This type allows a throughput of 250 MB/s per unit storage (TiB).
+ **CACHE_12** – This type allows a throughput of 12 MB/s per unit storage (TiB).

**Considerations**

The following are some considerations related to storage and billing:
+ Caching is only available on dedicated clusters. For clusters running on a scaling group, use [dataviews](finspace-managed-kdb-dataviews.md).
+ You can only configure the initial cache size at the time of cluster creation. To run a cluster with a different cache size, you need to terminate the cluster and launch a new one with the desired cache size.
+ Billing for cache storage starts when storage is available for use by the cluster and stops when the cluster is terminated.
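Putting the two halves of the cache setup together, the following sketch shows a cluster-level cache allocation alongside a per-database selection of paths to cache. The database name and path are hypothetical placeholders, and the cache type names are written in their API form (`CACHE_1000`).

```python
# Illustrative cache settings for a dedicated HDB cluster: cluster-level cache
# capacity, plus the database paths to load into it. Names and paths are
# hypothetical placeholders.
cache_storage_configurations = [
    {"type": "CACHE_1000", "size": 1200},  # 1200 GB of CACHE_1000 storage
]

database_configurations = [
    {
        "databaseName": "example-db",  # placeholder
        "cacheConfigurations": [
            # Cache only the listed paths within the database.
            {"cacheType": "CACHE_1000", "dbPaths": ["/2024.01.02/"]},  # placeholder path
        ],
    }
]
```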

### Auto scaling


With the HDB auto scaling feature, you can remove nodes to save costs when usage is low, and add nodes to improve availability and performance when usage is high. For auto scaling HDB clusters, you specify the CPU utilization target for your scaling policy. You can set up auto scaling for an HDB cluster at the time of [cluster creation](create-kdb-clusters.md) in two ways: through the console, or with the [CreateKxCluster](https://docs.aws.amazon.com/finspace/latest/management-api/API_CreateKxCluster.html) API operation, where you provide the minimum and maximum node count, the metric policy, and a target utilization percentage. As a result, FinSpace scales the cluster in or out based on service utilization, determined by the CPU consumed by the kdb+ nodes. 

**Note**  
Auto scaling is only available for dedicated clusters and is not supported for clusters running on scaling groups.
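An auto scaling policy for a dedicated HDB cluster could be sketched as follows, in the shape of the API's `autoScalingConfiguration`. The target and cooldown values here are example choices, not recommendations.

```python
# Illustrative autoScalingConfiguration for a dedicated HDB cluster. The
# target and cooldown values are example choices, not recommendations.
auto_scaling_configuration = {
    "minNodeCount": 1,
    "maxNodeCount": 5,
    "autoScalingMetric": "CPU_UTILIZATION_PERCENTAGE",  # currently the only supported metric
    "metricTarget": 60.0,              # scale to keep CPU utilization near 60%
    "scaleInCooldownSeconds": 300.0,   # wait 5 minutes between scale-in events
    "scaleOutCooldownSeconds": 60.0,   # wait 1 minute between scale-out events
}
```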

## Summary of capabilities by cluster type



| Capability | General purpose | Gateway | RDB | TP | HDB | 
| --- | --- | --- | --- | --- | --- | 
| Attaches a Managed kdb Insights database for read-only access | Yes | No | No | No | Yes | 
|  Attaches writable local (savedown) storage to a node  | Yes | No | Yes | No | No | 
| Number of nodes supported | Single | Multi | Multi | Single | Multi | 
| Supports AZ configurations (for dedicated clusters) | Single | Single or Multi | Single or Multi | Single | Single or Multi | 

# Managing kdb clusters


The following sections provide a detailed overview of the operations that you can perform by using Managed kdb clusters.

**Topics**
+ [Activating your Managed kdb Insights license](kdb-licensing.md)
+ [Managed kdb Insights cluster software bundles](kdb-software-bundles.md)
+ [Maintaining a Managed kdb Insights cluster](maintaining-kdb-clusters.md)
+ [Creating a Managed kdb Insights cluster](create-kdb-clusters.md)
+ [Viewing kdb cluster detail](view-kdb-clusters.md)
+ [Updating code configurations on a running cluster](update-cluster-code.md)
+ [Updating a kdb cluster database](update-kdb-clusters-databases.md)
+ [Deleting a kdb cluster](delete-kdb-clusters.md)

# Activating your Managed kdb Insights license

To run Managed kdb Insights clusters, you must first have an existing kdb Insights license from KX. That kdb Insights license needs to be activated for your Managed kdb Insights environment(s). You're responsible for working directly with KX (KX Systems, Inc., a subsidiary of FD Technologies plc) to obtain this.

To activate an existing kdb Insights license for your Managed kdb Insights environment(s), do the following: 
+ Contact your KX account manager or KX sales representative and provide them with the AWS account number for all the accounts where you want to use your Managed kdb Insights environment(s).
+ Once arranged with KX, the kdb Insights license will be automatically applied to your Managed kdb Insights environment(s).
**Note**  
If you do not have an existing kdb Insights license, you can request a 30-day trial license from KX [here](https://kx.com/amazon-finspace-with-managed-kdb-insights/). KX will then activate a 30-day trial license for you.
+ You will receive an activation email, and your 30-day trial license will be automatically applied to your Managed kdb Insights environment.

**Note**  
If KX has already enabled a kdb license for use with Managed kdb Insights in your AWS account and the license has not expired, you can start using clusters in your environment as soon as it is created. You do not need to request a new license.

# Managed kdb Insights cluster software bundles

When you launch a cluster, you can choose the software versions that will run on your cluster. This allows you to test and use application versions that fit your compatibility requirements. 

You can specify the release version using the `Release Label`. Release labels are in the form `x.x`.

The following table lists the software versions that each release label includes. Currently it only includes the kdb Insights core.


| Managed kdb Insights release label | Kdb Insights core version | 
| --- | --- | 
| 1.0 | 4.0.3 | 

# Maintaining a Managed kdb Insights cluster


Maintaining a kdb cluster involves updates to the cluster's underlying operating system or to the container hosting the Managed kdb Insights software. FinSpace manages and applies all such updates.

Some maintenance may require FinSpace to take your Managed kdb cluster offline for a short time. This includes installing or upgrading required operating system or database patches. This maintenance is automatically scheduled for patches that are related to security and instance reliability.

The maintenance window determines when pending operations start, but it doesn't limit the total execution time of these operations. Maintenance operations that don't finish before the maintenance window ends can continue beyond the specified end time.

## Managed kdb Insights maintenance window


Every Managed kdb environment has a weekly maintenance window during which system changes are applied. You can control when modifications and software patches occur during a maintenance window. If a maintenance event is scheduled for a given week, it is initiated during the maintenance window. 


| AWS Region name | Time block | 
| --- | --- | 
| Canada (Central) | 15:00–16:30 UTC | 
| US West (N. California) | 18:00–19:30 UTC | 
| US West (Oregon) | 18:00–19:30 UTC | 
| US East (N. Virginia) | 15:00–16:30 UTC | 
| US East (Ohio) | 15:00–16:30 UTC | 
| Europe (Ireland) | 10:00–11:30 UTC | 
|  Europe (London)  | 09:00–10:30 UTC | 
|  Europe (Frankfurt)  | 08:00–09:30 UTC | 
|  Asia Pacific (Singapore)  | 02:00–03:30 UTC | 
|  Asia Pacific (Sydney)  | 23:00–00:30 UTC | 
|  Asia Pacific (Tokyo)  |  01:00–02:30 UTC  | 

# Creating a Managed kdb Insights cluster

You can either use the console or the [CreateKxCluster](https://docs.aws.amazon.com/finspace/latest/management-api/API_CreateKxCluster) API to create a cluster. When you create a cluster from the console, you choose one of the following cluster types available in FinSpace – [General purpose](kdb-cluster-types.md#kdb-clusters-gp), [Tickerplant](kdb-cluster-types.md#kdb-clusters-tp), [HDB](kdb-cluster-types.md#kdb-clusters-hdb), [RDB](kdb-cluster-types.md#kdb-clusters-rdb), and [Gateway](kdb-cluster-types.md#kdb-clusters-gw). The create cluster workflow includes a step-wise wizard, where you will add various details based on the cluster type you choose. The fields on each page can differ based on various selections throughout the cluster creation process. 

## Prerequisites


Before you proceed, complete the following prerequisites: 
+ If you want to run clusters on a scaling group, [create a scaling group](create-scaling-groups.md).
+ If you want to run a TP, GP, or RDB cluster on a scaling group, [create a volume](create-volumes.md).
+ If you want to run an HDB type cluster on a scaling group, [create a dataview](managing-kdb-dataviews.md#create-kdb-dataview).

**Topics**
+ [Prerequisites](#create-cluster-prereq)
+ [Opening the cluster wizard](create-cluster-tab.md)
+ [Step 1: Add cluster details](create-cluster-step1.md)
+ [Step 2: Add code](create-cluster-step2.md)
+ [Step 3: Configure VPC settings](create-cluster-step3.md)
+ [Step 4: Configure data and storage](create-cluster-step4.md)
+ [Step 5: Review and create](create-cluster-step5.md)

# Opening the cluster wizard


**To open the create cluster wizard**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. In the left pane, under **Managed kdb Insights**, choose **Kdb environments**.

1. In the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Clusters** tab.

1. Choose **Create cluster**. A step-wise wizard to create a cluster opens.

# Step 1: Add cluster details


Specify details for each of the following sections on the **Add cluster details** page.

## Cluster details


1. Choose the type of cluster that you want to add from the following options.
   + **(HDB) Historical Database**
   + **(RDB) Realtime Database**
   + **Gateway**
   + **General purpose**
   + **Tickerplant** 

   For more information about cluster types, see [Managed kdb Insights clusters](finspace-managed-kdb-clusters.md).
**Note**  
Currently, from the console you can directly create only dedicated HDB clusters and clusters of any type running on a scaling group. To create other types of clusters, you need to first create a [Support case](https://aws.amazon.com/contact-us/), and then proceed with the steps in this tutorial.
The parameters that are displayed on the **Step 4: Configure data and storage** page will change based on the cluster type and running mode that you select in this step.

1. Add a unique name and a brief description for your cluster.

1. For `Release label`, choose the package version to run in the cluster.

1. (Optional) Choose the IAM role that defines a set of permissions associated with this cluster. This is an execution role that will be associated with the cluster. You can use this role to control access to other clusters in your Managed kdb environment.

## Cluster running mode


1. Choose if you want to add this cluster as a dedicated cluster or as a part of scaling groups. 
   + **Run on kdb scaling group** – Allows you to share a single set of compute with multiple clusters.
   + **Run as a dedicated cluster** – Allows you to run each process on its own compute host.

1. If you choose **Run as a dedicated cluster**, you also need to provide the Availability Zones where you want to create a cluster.

   1. Choose **AZ mode** to specify the number of Availability Zones where you want to create a cluster. You can choose from one of the following options:
      + **Single** – Allows you to create a cluster in one Availability Zone that you select. If you choose this option, you must specify exactly one Availability Zone and one subnet in the next step. The subnet must reside in the selected Availability Zone, which must be one of the three AZs that your kdb environment uses.
      + **Multiple** – Allows you to create a cluster with nodes automatically allocated across all the Availability Zones that are used by your Managed kdb environment. This option provides resiliency for node or cache failures in a Single-AZ. If you choose this option, you must specify three subnets, one in each of the three AZs that your kdb environment uses.
**Note**  
For the **General purpose** and **Tickerplant** type cluster, you can only choose Single-AZ.

   1. Choose the Availability Zone IDs that include the subnets you want to add.

## Scaling group details


**Note**  
This section is only available when you choose to add cluster as a part of scaling groups.

Choose the name of the scaling group where you want to create this cluster. The drop-down list shows the metadata for each scaling group along with their names to help you decide which one to pick. If a scaling group is not available, choose **Create kdb scaling group** to add a new one. For more information, see [Creating a Managed kdb scaling group](create-scaling-groups.md).

## Node details


In this section, you can choose the capacity configuration for your clusters. The fields in this section vary for dedicated and scaling group clusters.

------
#### [ Scaling group cluster ]

You can specify the memory and CPU that each scaling group cluster reserves from the shared compute of its scaling group by providing the following information. 

1. Under **Node details**, for **Node count**, enter the number of instances in a cluster. 
**Note**  
For a **General purpose** and **Tickerplant** type cluster, the node count is fixed at 1.

1. Enter the memory reservation and limits per node. Specifying the memory limit is optional. The memory limit should be equal to or greater than the memory reservation.

1. (Optional) Enter the number of vCPUs that you want to reserve for each node of this scaling group cluster.

------
#### [ Dedicated cluster ]

For a dedicated cluster you can provide an initial node count and choose the capacity configuration from a pre-defined list of node types. For example, the node type `kx.s.large` allows you to use two vCPUs and 12 GiB of memory for your instance.

1. Under **Node details**, for **Initial node count**, enter the number of instances in a cluster. 
**Note**  
For a **General purpose** and **Tickerplant** type cluster, the node count is fixed at 1.

1. For **Node type**, choose the memory and storage capabilities for your cluster instance. You can choose from one of the following options:
   + `kx.s.large` – The node type with a configuration of 12 GiB memory and 2 vCPUs.
   + `kx.s.xlarge` – The node type with a configuration of 27 GiB memory and 4 vCPUs.
   + `kx.s.2xlarge` – The node type with a configuration of 54 GiB memory and 8 vCPUs.
   + `kx.s.4xlarge` – The node type with a configuration of 108 GiB memory and 16 vCPUs.
   + `kx.s.8xlarge` – The node type with a configuration of 216 GiB memory and 32 vCPUs.
   + `kx.s.16xlarge` – The node type with a configuration of 432 GiB memory and 64 vCPUs.
   + `kx.s.32xlarge` – The node type with a configuration of 864 GiB memory and 128 vCPUs.
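The node types above can be restated as a small lookup table, which also makes it easy to pick the smallest type that fits a workload. The helper function is purely illustrative and not part of any FinSpace API.

```python
# Dedicated node types restated as (vCPUs, memory in GiB), from smallest to
# largest. The helper picks the smallest type meeting both requirements.
NODE_TYPES = {
    "kx.s.large":    (2,   12),
    "kx.s.xlarge":   (4,   27),
    "kx.s.2xlarge":  (8,   54),
    "kx.s.4xlarge":  (16, 108),
    "kx.s.8xlarge":  (32, 216),
    "kx.s.16xlarge": (64, 432),
    "kx.s.32xlarge": (128, 864),
}

def smallest_node_type(vcpus_needed, mem_gib_needed):
    """Return the smallest listed node type that meets both requirements."""
    for name, (vcpus, mem) in NODE_TYPES.items():
        if vcpus >= vcpus_needed and mem >= mem_gib_needed:
            return name
    raise ValueError("no node type is large enough")
```

For example, a workload needing 4 vCPUs and 20 GiB of memory fits on `kx.s.xlarge`.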

------

## Auto-scaling


**Note**  
This section is only available when you add an HDB cluster type as a dedicated cluster.

Specify details to scale the cluster in or out based on service utilization. For more information, see [Auto scaling](kdb-cluster-types.md#kdb-cluster-hdb-autoscaling).

1. Enter a minimum node count. Valid numbers: 1–5.

1. Enter a maximum node count. Valid numbers: 1–5.

1. Choose the metrics to auto scale your cluster. Currently, FinSpace only supports CPU utilization.

1. Enter the cooldown time before initiating another scaling event.

## Tags


1. (Optional) Add a new tag to assign to your kdb cluster. For more information, see [AWS tags](https://docs.aws.amazon.com/finspace/latest/userguide/create-an-amazon-finspace-environment.html#aws-tags). 
**Note**  
You can only add up to 50 tags to your cluster.

1. Choose **Next** for next step of the wizard.

# Step 2: Add code


You can load q code onto your kdb cluster so that you can run it when the cluster is running. Additionally, you can configure your cluster to automatically run a particular q command script on cluster startup. By default, q writes files uncompressed. You can pass command-line arguments that set the compression defaults (`.z.zd`) at the time of creating a cluster from the console or through the CLI, and you can update them later.

**Note**  
This step is required for the **Gateway** and **Tickerplant** cluster type.

 On the **Add code** page, add the following details of your custom code that you want to use when analyzing the data in the cluster. 

1. (Optional) Specify the **S3 URI** and the **Object version**. You can choose the *.zip* file that contains code that should be available on the cluster.

1. (Optional) For **Initialization script**, enter the relative path that contains a q program script that will run at the launch of a cluster. If you choose to load the database by using the initialization script, it will autoload on startup. If you add a changeset that has a missing sym file, the cluster creation fails. 
**Note**  
This step is optional. If you choose to enter the initialization script, you must also provide the S3 URI.

1. (Optional) Enter key-value pairs as command-line arguments to configure the behavior of clusters. You can use the command-line arguments to set [zip defaults](https://code.kx.com/q/ref/dotz/#zzd-zip-defaults) for your clusters. For this, pass the following key-value pair:
   + **Key**: `AWS_ZIP_DEFAULT` 
   + **Value**: `17,2,6`

     The value consists of three comma-separated numbers that represent the logical block size, algorithm, and compression level, respectively. For more information, see [compression parameters](https://code.kx.com/q/kb/file-compression/#compression-parameters). You can also add the key-value pair when you [update code configuration on a cluster](update-cluster-code.md).
**Note**  
You can only add up to 50 key-value pairs.

   To set compression default using AWS CLI, use the following command:

   ```
   aws finspace create-kx-cluster \
       ...
       --command-line-arguments '[{"key": "AWS_ZIP_DEFAULT", "value":"17,2,6"}]' \
       ...
   ```

1. Choose **Next**.

**Note**  
In case of failure, to stop cluster creation from an initialization script, use the `.aws.stop_current_kx_cluster_creation` function in the script.
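The `AWS_ZIP_DEFAULT` value `17,2,6` shown above maps onto kdb+'s compression parameters. As an illustration, a small decoder can make the three fields explicit; the field names here are descriptive only and not part of any API.

```python
def decode_zip_default(value):
    """Split an AWS_ZIP_DEFAULT string, such as "17,2,6", into named fields."""
    block_exp, algorithm, level = (int(x) for x in value.split(","))
    return {
        "logical_block_size_bytes": 2 ** block_exp,  # 17 -> 2**17 = 131072 bytes
        "algorithm": algorithm,                      # 2 selects gzip in kdb+
        "compression_level": level,                  # gzip level, 0-9
    }
```

For example, `decode_zip_default("17,2,6")` reports a 131072-byte logical block, the gzip algorithm, and compression level 6.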

# Step 3: Configure VPC settings


You connect to your cluster using q IPC through an AWS PrivateLink VPC endpoint. The endpoint resides in a subnet that you specify in the AWS account where you created your Managed kdb environment. Each cluster that you create has its own AWS PrivateLink endpoint, with an elastic network interface that resides in the subnet you specify. You can specify a security group to be applied to the VPC endpoint.

Connect a cluster to a VPC in your account. On the **Configure VPC settings** page, do the following: 

1. Choose the VPC that you want to access.

1. Choose the VPC subnets that the cluster will use to set up your VPC configuration.

1. Choose the security group.

1. Choose **Next**.
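The steps above correspond to the `vpcConfiguration` portion of a `CreateKxCluster` request, sketched below with hypothetical IDs; the field set is an assumption based on the API shape.

```python
# Illustrative vpcConfiguration for a cluster; all IDs are hypothetical
# placeholders.
vpc_configuration = {
    "vpcId": "vpc-0example",              # the VPC you want to access
    "subnetIds": ["subnet-0example"],     # one subnet for a Single-AZ cluster
    "securityGroupIds": ["sg-0example"],  # applied to the VPC endpoint
    "ipAddressType": "IP_V4",
}
```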

# Step 4: Configure data and storage


Choose data and storage configurations that will be used for the cluster. 

The parameters on this page are displayed according to the cluster type that you selected in *Step 1: Add cluster details*.

**Note**  
If you choose to add both the **Read data configuration** and **Savedown storage configuration**, the database name must be the same for both the configurations.

## For HDB cluster


**Note**  
When you create a cluster with a database that has a changeset, it will autoload the database when you launch a cluster.

If you choose **Cluster type** as *HDB*, you can specify the database and cache configurations as follows:

------
#### [ Scaling group cluster ]

1. Choose the name of the database.

1. Choose a dataview for the database you selected.
**Note**  
If a dataview is not available in the list, either choose **Create dataview** to create a new one for the database you selected, or try changing the Availability Zone.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. Choose the name of the database. This database must have a changeset added to it.

1. Choose the changeset that you want to use. By default, this field displays the most recent changeset.

1. Choose whether you want to cache your data from your database to this cluster. If you choose to enable caching, provide the following information: 

   1. Choose the cache type, which is a type of read-only storage for storing a subset of your database content for faster read performance. You can choose from one of the following options:
      + **CACHE_1000** – Provides a throughput of 1000 MB/s per unit storage (TiB).
      + **CACHE_250** – Provides a throughput of 250 MB/s per unit storage (TiB).
      + **CACHE_12** – Provides a throughput of 12 MB/s per unit storage (TiB).

   1. Choose the size of the cache. For cache types **CACHE_1000** and **CACHE_250**, you can select a cache size of 1200 GB or in increments of 2400 GB. For cache type **CACHE_12**, you can select the cache size in increments of 6000 GB.

1. Choose **Next**. The **Review and create** page opens.

------
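The cache-size increments described above can be restated as a small validity check. This helper simply re-encodes the rules from the text, with the cache type names in their API form (`CACHE_1000`, `CACHE_250`, `CACHE_12`), and is illustrative only.

```python
def valid_cache_size(cache_type, size_gb):
    """Check a cache size against the increments described in the text."""
    if cache_type in ("CACHE_1000", "CACHE_250"):
        # 1200 GB, or any positive multiple of 2400 GB
        return size_gb == 1200 or (size_gb > 0 and size_gb % 2400 == 0)
    if cache_type == "CACHE_12":
        # positive multiples of 6000 GB
        return size_gb > 0 and size_gb % 6000 == 0
    raise ValueError(f"unknown cache type: {cache_type}")
```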

## For RDB cluster


If you choose **Cluster type** as *RDB*, you can specify the savedown storage configurations for your cluster as follows:

------
#### [ Scaling group cluster ]

1. **Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   Choose the name of the storage volume for your savedown files that you created in advance. If a volume name is not available, choose **Create volume** to create it.

1. **(Optional) Tickerplant log configuration**

   Choose the **Volume name** of the volume that contains the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. **Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   1. Choose the writable storage space type for temporarily storing your savedown data. Currently, only the **SDS01** storage type is available. This type provides 3000 IOPS and uses the Amazon EBS volume type `io2`. 

   1. Enter the size, in GiB, of the savedown storage that will be available to the cluster.

1. **Tickerplant log configuration**

   Choose one or more volume names to read the tickerplant logs from.

1. Choose **Next**. The **Review and create** page opens.

------

## For Gateway cluster


If you choose **Cluster type** as **Gateway**, you do not need to attach databases, cache configurations, or local storage in this step.

## For General purpose cluster


If you choose **Cluster type** as **General purpose**, you can specify the database, cache, and savedown storage configurations as follows:

------
#### [ Scaling group cluster ]

1. **(Optional) Read data configuration**

   1. Choose the name of the database.

   1. Choose a dataview for the database you selected.
**Note**  
If a dataview is not available in the list, either choose **Create dataview** to create a new one for the database you selected or try changing the availability zone.

1. **(Optional) Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   Choose the name of the storage volume for your savedown files that you created in advance. If a volume name is not available, choose **Create volume** to create it.

1. **(Optional) Tickerplant log configuration**

   Choose a **Volume name** to read the tickerplant logs from.

1. Choose **Next**. The **Review and create** page opens.

------
#### [ Dedicated clusters ]

1. **(Optional) Read data configuration**

   1. Choose the name of the database. This database must have a changeset added to it.

   1. Choose the changeset that you want to use. By default, this field displays the most recent changeset.

   1. Choose whether you want to cache your data from your database to this cluster. If you choose to enable caching, provide the following information: 

      1. Specify paths within the database directory where you want to cache data.

      1. Choose the cache type, which is a type of read-only storage for storing a subset of your database content for faster read performance. You can choose from one of the following options:
         + **CACHE_1000** – Provides a throughput of 1000 MB/s per TiB of storage.
         + **CACHE_250** – Provides a throughput of 250 MB/s per TiB of storage.
         + **CACHE_12** – Provides a throughput of 12 MB/s per TiB of storage.

      1. Choose the size of the cache. For cache types **CACHE_1000** and **CACHE_250**, you can select a cache size of 1200 GB or increments of 2400 GB. For cache type **CACHE_12**, you can select the cache size in increments of 6000 GB.

1. **(Optional) Savedown database configuration**

   Choose the name of the database where you want to save your data.

1. **(Optional) Savedown storage configuration**

   1. Choose the writable storage space type for temporarily storing your savedown data. Currently, only the **SDS01** storage type is available. This type provides 3000 IOPS and uses the Amazon EBS volume type `io2`. 

   1. Enter the size, in GiB, of the savedown storage that will be available to the cluster.

1. **Tickerplant log configuration**

   Choose one or more volume names to read the tickerplant logs from.

1. Choose **Next**. The **Review and create** page opens.

------

## For Tickerplant cluster


For both scaling group clusters and dedicated clusters, you can choose a volume where you want to store the tickerplant data.

1. **Tickerplant log configuration**

   Choose a **Volume name** to store the tickerplant logs.

1. Choose **Next**. The **Review and create** page opens.

# Step 5: Review and create


1. On the **Review and create** page, review the details that you provided. You can modify details for any step when you choose **Edit** on this page.

1. Choose **Create cluster**. The cluster details page opens where you can view the status of cluster creation.

# Viewing kdb cluster details
Viewing a cluster

**To view and get details of a kdb cluster**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. In the left pane, under **Managed kdb Insights**, choose **Kdb environments**.

1. From the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Clusters** tab. The table under this tab displays a list of clusters.

1. Choose a cluster name to view its details. The cluster details page opens where you can view the cluster details and the following tabs.
   + **Configuration** tab – Displays the cluster configuration details, such as node details, code, Availability Zones, and the savedown database configuration.
   + **Monitoring** tab – Displays a dashboard of cluster metrics. 
   + **Nodes** tab – Displays a list of nodes in this cluster along with their status. All active nodes have a **Running** status; nodes that are being prepared, or that are stuck because of a lack of resources, have a **Provisioning** status. From here you can also delete a node: select the node and choose **Delete**.
   + **Logs** tab – Displays the activity logs for your clusters. 
   + **Tags** tab – Displays a list of key-value pairs associated with the cluster. If you did not provide tags during cluster creation, choose **Manage tags** to add new tags.

# Updating code configurations on a running cluster
Updating code configurations

Amazon FinSpace allows you to update code configurations on a running cluster. You can use either the console or the [UpdateKxClusterCodeConfiguration](https://docs.aws.amazon.com/finspace/latest/management-api/API_UpdateKxClusterCodeConfiguration) API to update the code. Both the console and the API let you choose how to update the code on a cluster by selecting a deployment mode. Depending on the option that you choose, you can reduce the time it takes to update the code on a cluster. You can also add or delete default compression parameters for your files by using command-line arguments.

**Note**  
The configuration that you update will override any existing configurations on the cluster. 

**To update code configurations on a cluster by using the console**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the list of environments, choose a kdb environment.

1. On the environment details page, choose the **Clusters** tab.

1. From the list of clusters, choose the one where you want to update the code. The cluster details page opens.

1. On the cluster details page, choose the **Details** tab.

1. Under **Code** section, choose **Edit**.
**Note**  
This button is only available for an **Active** environment and when the cluster is in a **Running** state.

1. On the **Edit code configuration** page, choose how you want to update a cluster by choosing a deployment mode. The following options are available.
   + **Rolling** – (Default) Loads the code by stopping the existing q process and starting a new q process with the updated configuration.
   + **Quick** – Loads the code by stopping all of the running nodes immediately. 

1. Specify the **S3 URI** and the **Object version**. This allows you to choose the *.zip* file containing code that should be available on the cluster.

1. For **Initialization script**, enter the relative path to a q script that runs when the cluster launches.

1. (Optional) Add or update the key-value pairs as command line arguments to configure the behavior of clusters. 

   You can use the command-line arguments to set [zip defaults](https://code.kx.com/q/ref/dotz/#zzd-zip-defaults) for your cluster. The cluster has to be restarted for the changes to take effect. To do this, pass the following key-value pair:
   + **Key**: `AWS_ZIP_DEFAULT` 
   + **Value**: `17,2,6`

     The value consists of three comma-separated numbers that represent the logical block size, algorithm, and compression level, respectively. For more information, see [compression parameters](https://code.kx.com/q/kb/file-compression/#compression-parameters). 

     To update the compression default using AWS CLI, use the following command:

     ```
     aws finspace update-kx-cluster-code-configuration \
         ...
         --command-line-arguments '[{"key": "AWS_ZIP_DEFAULT", "value":"17,3,0"}]' \
         --deployment-configuration deploymentStrategy=ROLLING|FORCE
         ...
     ```

1. Choose **Save changes**. The cluster details page opens and the updated code configuration is displayed once the cluster updates successfully.
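The `AWS_ZIP_DEFAULT` value is just a comma-separated triple, so it is easy to build and sanity-check before you pass it as a command-line argument. The following Python sketch is illustrative only (it is not a FinSpace API); the field meanings mirror the description above, and the final helper produces the same JSON shape used by `--command-line-arguments`.

```python
# Illustrative helper for composing the AWS_ZIP_DEFAULT command-line argument.
# The value is "logicalBlockSize,algorithm,level" as one string, per the
# documentation above.

from typing import NamedTuple

class ZipDefault(NamedTuple):
    logical_block_size: int  # kdb+ logical block size exponent
    algorithm: int           # compression algorithm id (for example, 2 = gzip)
    level: int               # compression level for the chosen algorithm

def parse_zip_default(value: str) -> ZipDefault:
    """Split and type-check a "17,2,6"-style value."""
    parts = value.split(",")
    if len(parts) != 3:
        raise ValueError("expected three comma-separated numbers")
    return ZipDefault(*(int(p) for p in parts))

def format_zip_default(z: ZipDefault) -> str:
    return f"{z.logical_block_size},{z.algorithm},{z.level}"

def as_command_line_argument(z: ZipDefault) -> dict:
    # Same shape as the --command-line-arguments JSON shown above.
    return {"key": "AWS_ZIP_DEFAULT", "value": format_zip_default(z)}
```

For the meaning and valid ranges of each number, defer to the kdb+ compression parameters documentation linked above.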

# Updating a kdb cluster database
Updating a cluster database

You can update the databases mounted on a kdb cluster by using the console. This feature is only available for HDB cluster types. With this feature, you can update the data in a cluster by selecting a changeset. You can also update the cache by providing database paths. You can't change a database name or add a new database if you created a cluster without one. 

You can also choose how you want to update the databases on the cluster by selecting a deployment mode. 

**To update a kdb cluster database**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the list of environments, choose a kdb environment.

1. On the environment details page, choose the **Clusters** tab.

1. From the list of clusters, choose the one where you want to update the database. The cluster details page opens.

1. On the cluster details page, choose the **Details** tab.

1. Under **Data management and storage** section, choose **Edit**.
**Note**  
This button is not available for *RDB* and *Gateway* type clusters.

1. On the edit page, modify the changeset that you want to cache as needed.

1. Choose a deployment mode from one of the following options.
   + **Rolling** – (Default) To update the database, this option stops the existing q process and starts a new q process with the updated database configuration. The initialization script re-runs when the new q process starts. 
   + **No restart** – This option updates the database but doesn't stop the existing q process. **No restart** is often quicker than the other deployment modes because it reduces the turnaround time to update the changeset configuration for a kdb database on your cluster. This option doesn't re-run the initialization script.
**Note**  
After the update completes, you must re-load the updated database. However, if you use a historical database (HDB) cluster with a single database in a rolling deployment, FinSpace autoloads the database after an update.

1. Choose **Save changes**. The cluster details page opens and the updated information is displayed once the cluster updates successfully.

# Deleting a kdb cluster
Deleting a cluster

**Note**  
This action is irreversible. Deleting a kdb cluster will delete all of its data from the local storage.

**To delete a kdb cluster**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Clusters** tab.

1. From the list of clusters, choose the one that you want to delete. The cluster details page opens.

1. On the cluster details page, choose **Delete**.

1. On the confirmation dialog box, enter *confirm*.

1. Choose **Delete**.

## Deleting a cluster on scaling groups




# Using Managed kdb Insights clusters
Using Managed kdb clusters

After you successfully create clusters in your kdb environment, you can use the clusters to do the following: 
+ **Monitor cluster metrics** – You can view the available cluster metrics in CloudWatch for your clusters. Using the **Monitoring** tab you can adjust the date and time range and refresh frequency as you need. You can also add the graphs to CloudWatch dashboards from this tab. For more information, see the [Monitoring Managed kdb cluster metrics](kdb-cluster-logging-monitoring.md#kdb-cluster-monitoring-metrics) section.
+ **View logs** – You can view kdb application logs from Managed kdb Insights clusters using the **Logs** tab. You can view data directly in CloudWatch using CloudWatch reporting or CloudWatch Logs Insights. For more information, see the [Logging](kdb-cluster-logging-monitoring.md#kdb-cluster-logging) section. 
+ **Connect to clusters** – FinSpace provides you the ability to discover clusters in your dedicated account and connect to them. You can do this by using discovery API operations and q API operations. For more information on how to connect to a cluster, see the [Connecting to a cluster endpoint or node in a cluster](interacting-with-kdb-clusters.md#connect-kdb-clusters) section. 
+ **Load code onto a cluster** – You can run your own kdb code on the cluster and perform analytics or query data in a database. For this, FinSpace provides a set of q API operations that you can use to perform the required functions. For more information, see the [Running code on a Managed kdb Insights cluster](interacting-with-kdb-loading-code.md) section. 

# Managing kdb users


The following sections provide a detailed overview of the operations that you can perform by using Managed kdb Insights users. A kdb user is required in order to establish a connection to a Managed kdb cluster. For more information, see [Interacting with a kdb cluster](interacting-with-kdb-clusters.md).

## Creating a kdb user


**To create a user**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Users** tab.

1. Choose **Add user**.

1. On the **Add user** page, enter a unique name for the user.

1. Choose an IAM role available in your account to associate it with the user. This role will be used later when you connect to a cluster.
**Note**  
The IAM role that you choose must have permissions to connect to clusters.

1. (Optional) Add a new tag to assign it to your kdb user. For more information, see [AWS tags](https://docs.aws.amazon.com/finspace/latest/userguide/create-an-amazon-finspace-environment.html#aws-tags). 
**Note**  
You can only add up to 50 tags to your user.

1. Choose **Add user**. The environment details page opens and the table under **Users** lists the newly added user.

## Updating a kdb user


**Note**  
You can only modify the IAM role associated with a user.

**To update a kdb user**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Users** tab.

1. From the list of users, choose the one that you want to update.

1. Choose **Edit**.

1. Choose a new IAM role to associate with this user.

1. Choose **Update user**.

## Deleting a kdb user


**Note**  
This action is irreversible.

**To delete a kdb user**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Users** tab.

1. From the list of users, choose the one that you want to delete.

1. Choose **Delete**.

1. On the confirmation dialog box, enter *confirm*.

1. Choose **Delete**.

# Interacting with a kdb cluster


To run commands on a Managed kdb Insights cluster, you must establish a q connection to a cluster endpoint or individual node in a cluster. If you don’t care which node in the cluster your connection is established with, use the cluster endpoint. The endpoint is an IP address (elastic network interface) that resides in your account. This provides a simple way to connect for a single-node cluster and for other scenarios. 

Alternatively, from client code residing on a cluster node running with Managed kdb, you can make a direct connection to an individual node. This gives you full control over which node in a cluster to use, which might be useful if you have custom allocation logic in your client code. You can use the Managed kdb list clusters and list nodes functionality to see what cluster and node resources are available in your environment. Then, you can use the cluster connection functionality to obtain a connection string that you can use to establish a q IPC connection to a cluster or node. 

As a part of cluster discovery, FinSpace provides you the following capabilities:
+ Listing all clusters running in your Managed kdb environment.
+ Listing all nodes running in a kdb cluster. For more information, see [Listing clusters and cluster nodes](#finding-kdb-clusters).
+ Connecting to an individual node in an existing cluster. For more information, see [Connecting to a cluster endpoint or node in a cluster](#connect-kdb-clusters).

## Listing clusters and cluster nodes


There are three ways to view a list of clusters and nodes running in a cluster:
+ **FinSpace API operations** – You can call the `ListKxClusterNodes` API operation to get a list of nodes in a cluster. For more information, see [ListKxClusterNodes](https://docs.aws.amazon.com/finspace/latest/management-api/API_ListKxClusterNodes.html) in the *Management API Reference Guide*.
+ **q API operations** – You can use the `.aws.list_kx_cluster_nodes()` and `.aws.list_kx_clusters()` API operations to get a list of nodes or clusters. For more information, see [Discovery APIs](interacting-with-kdb-q-apis.md#q-apis-discovery).
+ **Console** – You can view a list of nodes from the cluster details page in the FinSpace console.

**To view a list of clusters by using the console**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the kdb environments table, choose the name of the environment.

1. On the environment details page, choose the **Clusters** tab.

1. Choose a cluster name to view its details. On the cluster details page, you can see details about a cluster.

**To view a list of nodes available in a cluster by using the console**

1. Sign in to the AWS Management Console and open the Amazon FinSpace console at [https://console.aws.amazon.com/finspace](https://console.aws.amazon.com/finspace/landing).

1. Choose **Kdb environments**.

1. From the list of environments, choose a kdb environment.

1. On the environment details page, choose the **Clusters** tab.

1. From the list of clusters, choose the one where you want to view nodes.

1. On the cluster details page, choose the **Nodes** tab. All the nodes running in the cluster are displayed along with information about the node ID, the Availability Zone ID where the node is running, and the time when the node was started. You can use the `nodeId` to call the `GetKxConnectionString` API operation, which returns a signed connection string.
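The same discovery flow can be driven programmatically. The following boto3-shaped sketch is a hedged illustration: the environment ID, user ARN, and cluster name are placeholders, and the live calls are commented out so the example runs without AWS access.

```python
# Sketch: list a cluster's nodes, then request a signed connection string.
# Identifiers are placeholders; the boto3 calls are commented out below.

def connection_string_params(environment_id: str, user_arn: str,
                             cluster_name: str) -> dict:
    """Keyword arguments for finspace.get_kx_connection_string."""
    return {
        "environmentId": environment_id,
        "userArn": user_arn,
        "clusterName": cluster_name,
    }

params = connection_string_params(
    "sdb3moagybykax4oexvsq4",
    "arn:aws:finspace:us-east-1:111122223333:kxEnvironment/"
    "sdb3moagybykax4oexvsq4/kxUser/alice",
    "testhdb-cluster",
)

# import boto3
# finspace = boto3.client("finspace")
# nodes = finspace.list_kx_cluster_nodes(
#     environmentId=params["environmentId"],
#     clusterName=params["clusterName"])
# conn = finspace.get_kx_connection_string(**params)
```

The returned connection string has the same shape as the signed-string example shown later in this section.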

## Connecting to a cluster endpoint or node in a cluster


Amazon FinSpace uses a model based on AWS Identity and Access Management (IAM) that allows users to control access to clusters and their associated kdb databases by using IAM roles and policies. 

Administrators create users in the FinSpace kdb environment by using the existing `CreateKxUser` API operation, and associate these users with an IAM principal. Only users that will be connecting to a kdb cluster need to be registered as FinSpace users. 

Next, using their IAM credentials, connecting users will request a SigV4 signed authentication token to connect to the cluster. Additionally, each cluster can be associated with an IAM execution role in the customer account when a cluster is created. This role will be used when a cluster connects to other clusters, or makes requests to other AWS resources in the customer’s account.

**To connect to a cluster endpoint or cluster node**

1. **Create IAM role for a new user.**

   1. Sign in to the AWS Management Console, and open the IAM console.

   1. Create an IAM role.

   1. Assign the following policy to the IAM role that you created.

      In the following example, replace each **user input placeholder** with your own values.

------
#### [ JSON ]

****  

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "finspace:ConnectKxCluster",
                  "Resource": "arn:aws:finspace:us-east-1:111122223333:kxEnvironment/sdb3moagybykax4oexvsq4/kxCluster/testhdb-cluster"
              },
              {
                  "Effect": "Allow",
                  "Action": "finspace:GetKxConnectionString",
                  "Resource": "arn:aws:finspace:us-east-1:111122223333:kxEnvironment/sdb3moagybykax4oexvsq4/kxCluster/testhdb-cluster"
              }
          ]
      }
      ```

------

   1. Attach the following trust policy to the IAM role. This policy allows FinSpace, as well as the account itself, to assume the role.

------
#### [ JSON ]

****  

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "finspace.amazonaws.com",
                      "AWS": "arn:aws:iam::111122223333:root"
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }
      ```

------

1. **Create a kdb user with the environment id, username, and the IAM role that you created in the previous step.**

   ```
   aws finspace create-kx-user
       --environment-id "sdb3moagybykax4oexvsq4"
       --user-name alice
       --iam-role arn:aws:iam::111122223333:role/user-alice
       [--tags {tags}]
   ```

1. **Federate the user that you created into its user role.**

   1. To get a kdb connection string for a user, you must first federate into the role associated with the user. How you assume this role depends on what federation tool you use. If you use AWS Security Token Service, you can run the following command with the credentials of the customer account. 

      ```
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
      $(aws sts assume-role \
      --role-arn arn:aws:iam::111122223333:role/user-alice \
      --role-session-name "alice-connect-to-testhdb" \
      --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
      --output text))
      ```

   1. Verify that the role has been assumed.

      ```
      aws sts get-caller-identity | cat
      ```

1. **Get connection string for the user.**

   Get signed connection strings for connecting to kdb clusters or nodes. These connection strings are valid only for 60 minutes. To connect to a cluster endpoint, use `get-kx-connection-string` to obtain a connection string. 

   ```
   aws finspace get-kx-connection-string
       --environment-id "sdb3moax4oexvsq4"
       --user-arn arn:aws:finspace:us-east-1:111122223333:kxEnvironment/sdb3moax4oexvsq4/kxUser/alice
       --cluster-name "testhdb-cluster"
       --region us-east-1
   ```

   The following is an example of the signed connection string that you get.

   `:tcps://vpce-06259327736e61c9d-uczv1va3.vpce-svc-0938de45abc1ce4d8.us-east-1.vpce.amazonaws.com:443:testuser:Host=vpce-06259327736e61c9d-uczv1va3.vpce-svc-0938de45abc1ce4d8.us-east-1.vpce.amazonaws.com&Port=5000&User=testuser&Action=finspace%3AConnectKxCluster&X-Amz-Security-Token=IQoJb3JpZ2luX2Vj&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230524T150227Z&X-Amz-SignedHeaders=host&X-Amz-Expires=900&X-Amz-Credential=ASIAR2V4%2Fus-east-1%2Ffinspace-apricot%2Faws4_request&X-Amz-Signature=28854cc2f97f8f77009928fcdf15480dd10b43c61dda22b0af5f0985d38e7114`

1. Connect to a cluster using the signed connection string.

   ```
   hopen :tcps://vpce-06259327736e61c9d-uczv1va3.vpce-svc-0938de45abc1ce4d8.us-east-1.vpce.amazonaws.com:443:testuser:Host=vpce-06259327736e61c9d-uczv1va3.vpce-svc-0938de45abc1ce4d8.us-east-1.vpce.amazonaws.com&Port=5000&User=testuser&Action=finspace%3AConnectKxCluster&X-Amz-Security-Token=IQoJb3JpZ2luX2Vj&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20230524T150227Z&X-Amz-SignedHeaders=host&X-Amz-Expires=900&X-Amz-Credential=ASIAR2V4%2Fus-east-1%2Ffinspace-apricot%2Faws4_request&X-Amz-Signature=28854cc2f97f8f77009928fcdf15480dd10b43c61dda22b0af5f0985d38e7114
   ```

**Note**  
The connection handles to the cluster VPC endpoint have an idle timeout period of 350 seconds. If you don't send commands or data before the idle timeout period elapses, the connection closes and you will need to reopen it.  
To keep the connection open, use a timer that periodically sends a ping message to the cluster through an active handle. For this, you can run the following code.  

```
/ con is an open handle to the cluster
.aws.start_keepalive:{system"t 10000";.z.ts:{con"-1 \"Ping\"";}}
.aws.stop_keepalive:{system"t 0";.z.ts:{}}
```
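The signed connection string packs the host, port, user, and SigV4 query parameters into a single q IPC token of the form `:tcps://host:port:user:password`, where the "password" segment is the signed query string. The following Python parser is an illustrative sketch, not a FinSpace API, and the hostname in the example is a placeholder rather than a real endpoint.

```python
# Illustrative parser for the signed q IPC connection string shown above.
# Format: ":tcps://host:port:user:password", where the password segment is a
# SigV4-signed query string. This is a sketch, not a FinSpace API.

from urllib.parse import parse_qs

def parse_kx_connection_string(conn: str) -> dict:
    body = conn.lstrip(":")
    if not body.startswith("tcps://"):
        raise ValueError("expected a tcps:// connection string")
    # host contains no colons, so three splits isolate the query segment
    host, port, user, password = body[len("tcps://"):].split(":", 3)
    query = {k: v[0] for k, v in parse_qs(password).items()}
    return {"host": host, "port": int(port), "user": user, "query": query}

# Placeholder endpoint, shaped like the example above
example = (
    ":tcps://vpce-example.us-east-1.vpce.amazonaws.com:443:testuser:"
    "Host=vpce-example.us-east-1.vpce.amazonaws.com&Port=5000&User=testuser"
    "&Action=finspace%3AConnectKxCluster&X-Amz-Expires=900"
)
parsed = parse_kx_connection_string(example)
```

Unpacking the string this way can help when debugging expired tokens, since fields such as `X-Amz-Expires` and `X-Amz-Date` are visible in the query segment.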

# Running code on a Managed kdb Insights cluster


Q is the programming system for working with kdb+. It corresponds to SQL for traditional databases, but unlike SQL, q is a powerful interpreted programming language in its own right. Q expressions can be entered and run in the q console, or loaded from a q script, which is a text file with the extension `.q`. For more information, see the [q documentation](https://code.kx.com/q/learn/startingkdb/language/). You can run your own kdb code on a Managed kdb cluster to perform analytics or query data in a database. 

The following sections describe how you can use q in FinSpace.

** **Topics** **
+ [

# .z namespace override
](interacting-with-kdb-z-namespace.md)
+ [

# Supported system commands
](interacting-with-kdb-system-commands.md)
+ [

# FinSpace q API reference
](interacting-with-kdb-q-apis.md)

# .z namespace override


KX uses the `.z` namespace that contains environment variables and functions, and hooks for callbacks. A FinSpace Managed kdb Insights cluster doesn't support direct assignment for `.z` namespace callbacks because of security concerns. For example, the system denies access to the following direct `.z.ts` assignment.

```
q)con".z.ts:{[x]}" / con is the hopen filehandle
'access
[0]  con".z.ts:{[x]}"
```

Because some of the assignments for `.z` namespace callbacks are critical for business logic, FinSpace provides a reserved namespace `.awscust.z` for you to override functions within the `.z` namespace. 

By overriding the functions within the new `.awscust.z` namespace, you can achieve the same effect as if you were directly overriding allowlisted `.z` functions. 

For example, if you need to override the `.z.ts` function, you can set a value for `.awscust.z.ts`. The FinSpace Managed kdb cluster invokes the `.awscust.z.ts` function whenever you invoke the `.z.ts` function, which provides a safety wrapper.

The following list shows the allowlisted callbacks for the `.awscust.z` namespace.

```
.awscust.z.ts 
.awscust.z.pg
.awscust.z.ps
.awscust.z.po
.awscust.z.pc
.awscust.z.ws
.awscust.z.wo
.awscust.z.wc
.awscust.z.pd
.awscust.z.ph
.awscust.z.pi
.awscust.z.exit
```

If you override `.z` callbacks that aren't on the preceding list, the override has no effect on the `.z` namespace callbacks.

# Supported system commands


System commands control the q environment. The following table lists the system commands that FinSpace supports.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/finspace/latest/userguide/interacting-with-kdb-system-commands.html)

## Helper environment variables


You can quickly access user directories through the following environment variables that return a string of the folder path. 


| Helper environment variable  | Use for  | Directory | 
| --- | --- | --- | 
| .aws.akcp | Primary user code path. | /opt/kx/app/code | 
| .aws.akcsp |  Secondary user code path that's available only for **General purpose** clusters.  | /opt/kx/app/code_scratch | 
| .aws.akscp |  Primarily used for handling savedown functionality with an RDB cluster.  | /opt/kx/app/scratch | 

## Loading databases relative to code directory


We have added a symlink to the code directory to allow loading of a database relative to the code path. For example, if the database is named *kxDatabase* and the current working directory is `/opt/kx/app/code`, then the database can be loaded as `\l /kxDatabase`.

# FinSpace q API reference


FinSpace provides a set of q APIs that you can use to interact with resources in your Managed kdb environment. These APIs reside in the `.aws` q namespace. 

## Ingestion APIs


**Function**: `.aws.create_changeset[db_name;change_requests]`

Creates a new changeset in the specified database.

------
#### [ Parameters ]

`db_name`  
**Description** – The name of the FinSpace kdb database where you can create Managed kdb Insights changesets. This must be the same database that you used when you created the RDB cluster.  
**Type** – String  
**Required** – Yes

`change_requests`  
**Description** – A q table representing the list of change requests for the Managed kdb Insights changesets. The table has three columns:   
+ `input_path` – The input path of the local file system directory or file to ingest as a Managed kdb changeset.
+ `database_path` – The target database destination path of the Managed kdb changeset. This column maps to the `databasePath` field of the [CreateKxChangeset](https://docs.aws.amazon.com/finspace/latest/management-api/API_CreateKxChangeset.html) API.
+ `change_type` – The type of the Managed kdb changeset. It can be either `PUT` or `DELETE`. This column maps to the `changeType` field of the [CreateKxChangeset](https://docs.aws.amazon.com/finspace/latest/management-api/API_CreateKxChangeset.html) API.
**Type** – Q table  
**Required** – Yes

------
#### [ Result ]

Returns the `changeset_id` of the created Managed kdb changeset, along with its current status.

------

**Function**: `.aws.get_changeset[db_name;changeset_id]`

Retrieves information about a specific changeset.

------
#### [ Parameters ]

`db_name`  
**Description** – The name of the FinSpace kdb database where you can create Managed kdb changesets. This must be the same database that you used when you created the RDB cluster.  
**Type** – String  
**Required** – Yes

`changeset_id`  
**Description** – The identifier of the Managed kdb changeset.  
**Type** – string  
**Required** – Yes

------
#### [ Result ]

Returns the `changeset_id` and the status of the Managed kdb changeset.

------

**Function**: `.aws.get_latest_sym_file[db_name;destination_path]`

Retrieves the latest sym file from the specified database.

------
#### [ Parameters ]

`db_name`  
**Description** – The name of the FinSpace kdb database where you can create Managed kdb changesets. This must be the same database that you used when you created the RDB cluster.  
**Type** – String  
**Required** – Yes

`destination_path`  
**Description** – The directory in the local filesystem scratch location where you want to download the sym file.  
**Type** – String  
**Required** – Yes
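
------
#### [ Example ]

The following code is an example request to download the latest sym file. The database name and scratch directory are placeholder values.

```
.aws.get_latest_sym_file["welcomedb"; "/opt/kx/app/scratch/"]
```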

------
#### [ Result ]

Returns the destination path where the sym file was copied to.

------

**Function**: `.aws.s3.get_object[source_s3_path;destination_disk_path]`

Copies an Amazon S3 object from an S3 bucket in your account to a local disk location in a kdb cluster.

------
#### [ Permissions ]

For this function, the `executionRole` of the cluster must have the `s3:GetObject` permission to access the object and `kms:Decrypt` permission for the key that you use to encrypt the S3 bucket.

------
#### [ Parameters ]

`source_s3_path`  
**Description** – The source path in the customer account from where you want to copy an S3 object. This can be an S3 object ARN or an S3 URI path.  
**Type** – String  
**Required** – Yes

`destination_disk_path`  
**Description** – The local disk location to copy the S3 object to.   
**Type** – String  
**Required** – Yes

------
#### [ Example ]

The following code is an example request to copy an S3 object to a local disk.

```
 q) .aws.s3.get_object["s3://customer-bucket/reference_data.csv"; "/opt/kx/app/shared/VolumeName/common/"]
```

------
#### [ Result ]

Returns a table of the S3 object path and the local disk destination path.

Sample response

```
s3ObjectPath                                containerFileDestinationPath
--------------------------------------------------------------------------
s3://data-bucket/data.csv                   "/opt/kx/app/shared/test/common/data.csv"
```

------

**Function**: `.aws.copy_database_files[database_name;destination_path;db_path;changeset_id]`

Retrieves one or more files from a specific version of the database. The `changeset_id` specifies the version of the database from which you want to retrieve the files.

------
#### [ Parameters ]

`database_name`  
**Description** – The name of the FinSpace kdb database where you can create Managed kdb changesets. This must be the same database that you used when you created the RDB cluster.  
**Type** – String  
**Required** – Yes

`destination_path`  
**Description** – The directory in the local filesystem scratch location where you want to download one or more files.  
**Type** – String  
**Required** – Yes

`db_path`  
**Description** – The path within the database directory of the file that you want to retrieve. This can be a single file or a path ending with the wildcard character `*` to retrieve multiple files. The following are a few example values for `db_path`.  
+ `sym` retrieves the file named **sym** located in the root directory of the database.
+ `sym*` retrieves all files starting with **sym** for a database. For example, **sym1** and **sym2**.
+ `2022.01.02/*` retrieves all files within the directory **2022.01.02**. For example, **2022.01.02/col1**, **2022.01.02/col2**, etc. Alternatively, you can use `2022.01.02/` to achieve the same result.
+ `2022.05.*` retrieves all files from May 2022 within a date-partitioned database. For example, all files from **2022.05.01**, **2022.05.02**, etc.
**Type** – String  
**Required** – Yes

`changeset_id`  
**Description** – The identifier of the Managed kdb changeset. You can specify an empty string `""` to use the latest changeset.  
**Type** – String  
**Required** – Yes

------
#### [ Result ]

Returns the destination path where the files were copied to, along with the `database_name` and `changeset_id` used.

Sample response for retrieving a file

```
.aws.copy_database_files["DATABASE_NAME"; "DESTINATION_PATH"; "DB_FILE_PATH"; ""]
database_name| "DATABASE_NAME"
changeset_id | "CHANGESET_ID"
result_paths | ,"DESTINATION_PATH/DB_FILE_PATH"
```

Sample response for retrieving multiple files

```
.aws.copy_database_files["DATABASE_NAME"; "DESTINATION_PATH"; "PARTITION_NAME/*"; ""]
database_name| "DATABASE_NAME"
changeset_id | "CHANGESET_ID"
result_paths | ("DESTINATION_PATH/PARTITION_NAME/file1"; "DESTINATION_PATH/PARTITION_NAME/file2"...)
```

------

**Function**: `.aws.get_kx_dataview[db_name;dataview_name]`

Retrieves information about a specific dataview. This operation is especially helpful after you update a dataview with `update_kx_dataview`, because it retrieves the latest status and reflects the updated `changeset_id`, `segment_configurations`, and `active_versions`. 

------
#### [ Permissions ]

For this function, the `executionRole` must have the `finspace:GetKxDataview` permission.

------
#### [ Parameters ]

`db_name`  
**Description** – The name of the FinSpace kdb database where the specified dataview exists. This must be the same database that you used when you created the cluster.  
**Type** – String  
**Required** – Yes

`dataview_name`  
**Description** – The name of the Managed kdb dataview you want to retrieve.  
**Type** – String  
**Required** – Yes
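
------
#### [ Example ]

The following code is an example request to retrieve a dataview. The database and dataview names are placeholder values.

```
.aws.get_kx_dataview["example-db"; "example-dataview-name"]
```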

------
#### [ Result ]

Returns the details of the specified dataview, including its status, changeset ID, and segment configurations.

```
dataview_name          | "example-dataview-name" 
database_name          | "example-db" 
status                 | "ACTIVE" 
changeset_id           | "example-changeset-id" 
segment_configurations | +`db_paths`volume_name!(,,"/*";,"example-volume") 
availability_zone_id   | "use1-az2" 
az_mode                | "SINGLE" 
auto_update            | 0b 
read_write             | 0b
active_versions        | +`changeset_id`segment_configurations`attached_clusters`created_timestamp`version_id!(("example-changeset-id";"prior-changeset-id");(+`db_paths`volume_name!(,,"/*";,"example-volume");+`db_paths`volume_name!(,,"/*";,"example-volume"));(();,"example-cluster");1.717532e+09 1.716324e+09;("kMfybotBQNQl5LBLhDnAEA";"XMfOcGisErAFO9i1XRTdYQ"))
create_timestamp       | 1.716324e+09 
last_modified_timestamp| 1.717779e+09
```

------

**Function**: `.aws.update_kx_dataview[db_name;dataview_name;changeset_id;segment_configurations]`

Updates the changeset and/or segment configurations of the specified dataview. Each update creates a new version of the dataview, with its own changeset details and cache configurations. If a dataview was created with auto-update set to false, you must run this operation after new changesets are ingested to update the dataview to the latest changeset. You can also use this operation to update the segment configurations, which define which database paths are placed on each selected volume.

------
#### [ Permissions ]

For this function, the `executionRole` must have the `finspace:UpdateKxDataview` permission.

------
#### [ Parameters ]

`db_name`  
**Description** – The name of the Managed kdb database where the specified dataview exists. This must be the same database that you used when you created a cluster.  
**Type** – String  
**Required** – Yes

`dataview_name`  
**Description** – The name of the Managed kdb dataview you want to update.  
**Type** – String  
**Required** – Yes

`changeset_id`  
**Description** – The identifier of the Managed kdb changeset that the dataview should use.  
**Type** – String  
**Required** – Yes

`segment_configurations`  
**Description** – The output of the `.aws.sgmtcfgs` function.  
**Required** – Yes

------
#### [ Example ]

The following code is an example request to update the dataview.

```
.aws.update_kx_dataview["example-db"; "example-dataview-name"; "example-changeset-id"; .aws.sgmtcfgs[.aws.sgmtcfg[("/*");"example-volume"]]]
```

------
#### [ Result ]

This function does not return any value.

------

**Function**: `.aws.sgmtcfgs[segment_configurations]`

This is a helper function to construct arguments for `.aws.update_kx_dataview`, defining the list of segment configurations for the dataview. 

------
#### [ Parameters ]

`segment_configurations`  
**Description** – Either a single output of `.aws.sgmtcfg` or a list of `.aws.sgmtcfg` outputs.  
**Required** – Yes

------
#### [ Example ]

The following example shows how this function can take a single segment configuration.

```
.aws.sgmtcfgs[.aws.sgmtcfg[("/*");"example-volume"]]
```

Alternatively, you can use this function with multiple segment configurations, as follows.

```
.aws.sgmtcfgs[(.aws.sgmtcfg[("/2020.02.01/*");"example-volume-1"];.aws.sgmtcfg[("/2020.02.02/*");"example-volume-2"])]
```

------
#### [ Result ]

The output of this function is used as input for `.aws.update_kx_dataview`.

------

**Function**: `.aws.sgmtcfg[db_paths;volume_name]`

This is a helper function to construct arguments for `.aws.sgmtcfgs`, defining a single segment configuration: the database paths of the data that you want to place on a selected volume. Each segment must have a unique database path for each volume. 

------
#### [ Parameters ]

`db_paths`  
**Description** – The database path of the data you want to place on each selected volume for the segment. Each segment must have a unique database path for each volume.  
**Type** – Array of strings  
**Required** – Yes

`volume_name`  
**Description** – The name of the Managed kdb volume where you would like to add data.  
**Type** – String  
**Required** – Yes

------
#### [ Example ]

The following example shows how this function can take a single db path.

```
.aws.sgmtcfg[("/*");"example-volume"]
```

Alternatively, you can use this function with multiple db paths, as follows.

```
.aws.sgmtcfg[("/2020.01.06/*";"/2020.01.02/*");"example-volume"]
```

------
#### [ Result ]

The output of this function is used as input for `.aws.sgmtcfgs`.

------

## Discovery APIs


**Function**: `.aws.list_kx_clusters()`

Returns a table of clusters that are in a non-deleted state.

------
#### [ Parameters ]

 N/A
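
------
#### [ Example ]

The following code is an example request to list the clusters in your environment.

```
.aws.list_kx_clusters[]
```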

------
#### [ Result ]

Returns a table of Managed kdb clusters that are in a non-deleted state. This table consists of the following fields – `cluster_name`, `status`, `cluster_type`, and `description`.

------

**Function**: `.aws.list_kx_cluster_nodes[cluster_name]`

Retrieves a list of nodes within a cluster.

------
#### [ Parameters ]

`cluster_name`  
**Description** – The name of the Managed kdb cluster that you specified when creating a kdb cluster. You can also get this by using the `.aws.list_kx_clusters()` function.  
**Type** – String  
**Required** – Yes
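
------
#### [ Example ]

The following code is an example request to list the nodes of a cluster. The cluster name is a placeholder value.

```
.aws.list_kx_cluster_nodes["example-cluster-name"]
```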

------
#### [ Result ]

Returns a table of nodes in the Managed kdb cluster that consists of `node_id`, `az_id`, and `launch_time`.

------

## Authorization APIs


**Note**  
You must create clusters with the IAM `executionRole` field to use the q auth APIs. Clusters will assume this role when calling the auth APIs, so the role should have `GetKxConnectionString` and `ConnectKxCluster` permissions.

**Function**: `.aws.get_kx_node_connection_string[cluster_name;node_id]`

Retrieves the connection string for a given kdb cluster node.

------
#### [ Parameters ]

`cluster_name`  
**Description** – The name of the destination Managed kdb cluster for the connection string.  
**Type** – String  
**Pattern** – `^[a-zA-Z0-9][a-zA-Z0-9-_]*[a-zA-Z0-9]$`  
**Length** – 3-63  
**Required** – Yes

`node_id`  
**Description** – The node identifier of the target cluster.  
**Type** – String  
**Length** – 1-40  
**Required** – Yes
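
------
#### [ Example ]

The following code is an example request to retrieve a connection string for a single node. The cluster name and node identifier are placeholder values; you can retrieve node identifiers with `.aws.list_kx_cluster_nodes`.

```
.aws.get_kx_node_connection_string["example-cluster-name"; "example-node-id"]
```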

------
#### [ Result ]

Returns the connection string.

------

**Function**: `.aws.get_kx_connection_string[cluster_name]`

Retrieves the connection string for a given kdb cluster.

------
#### [ Parameters ]

`cluster_name`  
**Description** – The name of the destination cluster for the connection string.  
**Type** – String  
**Pattern** – `^[a-zA-Z0-9][a-zA-Z0-9-_]*[a-zA-Z0-9]$`  
**Length** – 3-63  
**Required** – Yes
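
------
#### [ Example ]

The following code is an example request to retrieve a connection string for a cluster. The cluster name is a placeholder value.

```
.aws.get_kx_connection_string["example-cluster-name"]
```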

------
#### [ Result ]

Returns the connection string.

------

## Cluster management APIs


**Note**  
You must create clusters with the IAM `executionRole` field to use the cluster management APIs.

**Function**: `.aws.stop_current_kx_cluster_creation[message]`

Stops the current cluster creation and puts the cluster in the `CREATE_FAILED` state. You can only call this function from an initialization script. 

------
#### [ Parameters ]

`message`  
**Description** – A message to display in the `statusReason` field of the cluster after the cluster reaches the `CREATE_FAILED` state.  
**Type** – String  
**Pattern** – `^[a-zA-Z0-9\_\-\.\s]*$`  
**Length** – 0-50  
**Required** – Yes

------
#### [ Example ]

The following code is an example request to stop creation of the current cluster.

```
.aws.stop_current_kx_cluster_creation[""]
```

------
#### [ Result ]

This function does not return any value.

------

**Function**: `.aws.delete_kx_cluster[clusterName]`

Deletes the specified cluster. If `clusterName` is an empty string, this function deletes the current cluster. 

------
#### [ Permissions ]

For this function, the `executionRole` must have the following permissions to delete the cluster:
+ `ec2:DescribeTags`
+ `ec2:DeleteVpcEndpoints`
+ `finspace:DeleteKxCluster`

------
#### [ Parameters ]

`clusterName`  
**Description** – The name of the cluster that you want to delete.  
**Type** – String  
**Pattern** – `^[a-zA-Z0-9-_]*$`  
**Length** – 1-63  
**Required** – Yes

------
#### [ Example ]

The following example deletes the *samplecst* cluster.

```
.aws.delete_kx_cluster["samplecst"]
```

The following example deletes the current cluster.

```
.aws.delete_kx_cluster[""]
```

------
#### [ Result ]

This function does not return any value.

------

**Function**: `.aws.get_kx_cluster[clusterName]`

Retrieves information about the specified cluster.

------
#### [ Permissions ]

For this function, the `executionRole` must have the `finspace:GetKxCluster` permission.

------
#### [ Parameters ]

`clusterName`  
**Description** – The name of the target cluster.  
**Type** – String  
**Pattern** – `^[a-zA-Z0-9-_]*$`  
**Length** – 1-63  
**Required** – Yes
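
------
#### [ Example ]

The following code is an example request to retrieve cluster details. The cluster name is a placeholder value.

```
.aws.get_kx_cluster["example-cluster-name"]
```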

------
#### [ Result ]

```
status               | "RUNNING"
clusterName          | "example-cluster-name"
clusterType          | "HDB"
capacityConfiguration| `nodeType`nodeCount!("kx.s.xlarge";1f)
releaseLabel         | "1.0"
vpcConfiguration     | `vpcId`securityGroupIds`subnetIds`ipAddressType!("vpcId";,"securityGroupId";,"subnetId";"IP_V4")
executionRole        | "arn:aws:iam::111111111111:role/exampleRole"
lastModifiedTimestamp| 1.695064e+09
azMode               | "SINGLE"
availabilityZoneId   | "use1-az1"
createdTimestamp     | 1.695063e+09
```

------

**Function**: `.aws.update_kx_cluster_databases[clusterName;databases;properties]`

Updates the database of the specified kdb cluster.

------
#### [ Permissions ]
+ For this function, the `executionRole` must have the `finspace:UpdateKxClusterDatabases` permission.
+ You must have the `finspace:GetKxCluster` permission for the `clusterName`.

------
#### [ Parameters ]

`clusterName`  
**Description** – The name of the target cluster.  
**Type** – String  
**Pattern** – `^[a-zA-Z0-9][a-zA-Z0-9-_]*[a-zA-Z0-9]$`  
**Length** – 3-63  
**Required** – Yes

`databases`  
**Description** – The output of the `.aws.sdbs` function.  
**Required** – Yes

`properties`  
**Description** – The output of the `.aws.sdep` function.  
**Required** – Yes

------
#### [ Example ]

The following code is an example request to update the cluster database.

```
.aws.update_kx_cluster_databases["HDB_TAQ_2021H1";
    .aws.sdbs[
        .aws.db["TAQ_2021H1";"osSoXB58eSXuDXLZFTCHyg";
            .aws.cache["CACHE_1000";"/"];
            ""
            ]
        ];
    .aws.sdep["ROLLING"]]
```

------
#### [ Result ]

This function does not return any value.

------

**Function**: `.aws.sdbs[databases]`

This is a helper function to construct arguments for `.aws.update_kx_cluster_databases`.

------
#### [ Parameters ]

databases  
**Description** – Either a single output of `.aws.db` or a list of `.aws.db` outputs.   
**Required** – Yes

------
#### [ Example ]

Here is an example of how you can use this function.

```
.aws.sdbs[
    .aws.db["TAQ_2021H1";"osSoXB58eSXuDXLZFTCHyg";
        .aws.cache["CACHE_1000";"/"];
         ""
        ]
    ];
```

------
#### [ Result ]

The output of this function is used as input for the `.aws.update_kx_cluster_databases` function.

------

**Function**: `.aws.db[databaseName;changesetId;caches;dataviewName]`

This is a helper function to construct arguments for `.aws.sdbs`.

------
#### [ Parameters ]

databaseName  
**Description** – The name of the target database.  
**Type** – String  
**Pattern** – `^[a-zA-Z0-9][a-zA-Z0-9-_]*[a-zA-Z0-9]$`  
**Length** – 3-63  
**Required** – Yes

changesetId  
**Description** – A unique identifier of the changeset. If you pass an empty string `""` for this parameter, the latest changeset of the database is used.  
**Type** – String  
**Length** – 1-26  
**Required** – No

caches  
**Description** – Either a single output of `.aws.cache` or a list of `.aws.cache` outputs. If there is no cache associated with the cluster, this list must be empty.  
**Required** – No

dataviewName  
**Description** – The name of the dataview.  
**Type** – String  
**Pattern** – `^[a-zA-Z0-9][a-zA-Z0-9-_]*[a-zA-Z0-9]$`  
**Length** – 3-63  
**Required** – No

------
#### [ Example ]

You can use this function to specify the changeset that you want to update, as follows:

```
.aws.db["example-db";"example-changeset-id"; .aws.cache["CACHE_1000";"/"];""]
```

Alternatively, if the cluster is attached to a dataview, you can use this function to update the cluster to the latest version of the dataview with the specified `dataviewName`, as follows:

```
.aws.db["example-db";""; .aws.cache["CACHE_1000";"/"];"example-dataview-name"]
```

------
#### [ Result ]

The output of this function is used as input for the `.aws.sdbs` function.

------

**Function**: `.aws.cache[cacheType;dbPaths]`

This is a helper function to construct arguments for `.aws.db`.

------
#### [ Parameters ]

cacheType  
**Description** – The type of disk cache. This parameter is used to map the database path to cache storage.  
**Type** – String  
**Length** – 8-10  
**Required** – Yes

dbPaths  
**Description** – The portions of database that will be loaded into the cache for access.   
**Type** – Array of strings  
**Pattern** – `^\/([^\/]+\/){0,2}[^\/]*$`  
**Length** – 1-1025  
**Required** – Yes

------
#### [ Example ]

The following two examples show how you can send different requests by using this function.

```
.aws.cache["CACHE_1000";"/"]
```

```
.aws.cache["CACHE_1000";("path1";"path2")]
```

------
#### [ Result ]

The output of this function is used as input for the `.aws.db` function.

------

**Function**: `.aws.sdep[deploymentStrategy]`

This is a helper function to construct arguments for `.aws.update_kx_cluster_databases`.

------
#### [ Parameters ]

deploymentStrategy  
**Description** – The type of deployment that you want on a cluster. The following types are available.   
+ `ROLLING` – This option loads the updated database by stopping the existing q process and starting a new q process with the updated configuration.
+ `NO_RESTART` – This option loads the updated database on the running q process without stopping it. This option is quicker because it reduces the turnaround time to update a kdb database changeset configuration on a cluster.
**Type** – String  
**Required** – Yes
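
------
#### [ Example ]

The following code is an example request that specifies a rolling deployment, matching the usage shown in the `.aws.update_kx_cluster_databases` example.

```
.aws.sdep["ROLLING"]
```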

------
#### [ Result ]

The output of this function is used as input for the `.aws.update_kx_cluster_databases` function.

------

## Pub/Sub APIs


**Function**: `.aws.sub[table;sym_list]`

Initializes the publish and subscribe function.

------
#### [ Parameters ]

`table`  
**Description** – The symbol of the table that you want to subscribe to. The symbol `` ` `` subscribes to all tables.  
**Type** – Symbol  
**Required** – Yes

`sym_list`  
**Description** – The list of symbols used to filter published records. Pass `` ` `` to apply no filter.  
**Type** – Symbol list  
**Required** – Yes

------
#### [ Result ]

Returns table schema or table schemas list.

**Example 1: Subscribe to the `` `tab `` table, filtering on `` `AAPL`MSFT ``.**

```
target_instance_connection_handle ".aws.sub[`tab;`AAPL`MSFT]"
```

**Result**

```
`tab
+`sym`sales`prices!(`g#`symbol$();`long$();`long$())
```

**Example 2: Subscribe to all tables with no filtering.**

```
target_instance_connection_handle ".aws.sub[`;`]"
```

**Result**

```
`tab  +`sym`sales`prices!(`g#`symbol$();`long$();`long$())
`tab1 +`sym`sales`prices!(`g#`symbol$();`long$();`long$())
`tab2 +`sym`sales`prices!(`g#`symbol$();`long$();`long$())
`tab3 +`sym`sales`prices!(`g#`symbol$();`long$();`long$())
```

------

**Function**: `.aws.pub[table;table_records]`

Publishes table records to table subscribers by calling `upd[table;table_records]` within the subscriber connection handle.

------
#### [ Parameters ]

`table`  
**Description** – The symbol of the table whose records you want to publish to subscribers.  
**Type** – Symbol  
**Required** – Yes

`table_records`  
**Description** – The table records that you want to publish.  
**Type** – Table  
**Required** – Yes

------
#### [ Example ]

This example publishes the `` `tab `` table and its values to the subscribers.

```
.aws.pub[`tab;value `tab]
```

------
#### [ Result ]

This function does not return any value.

------

## Database maintenance APIs


**Function**: `.aws.commit_kx_database[database_name]`

Commits the database changes after performing database maintenance.

------
#### [ Parameters ]

`database_name`  
**Description** – The name of the database where you performed database maintenance operations and whose changes you want to commit.  
**Type** – String  
**Required** – Yes

------
#### [ Example ]

```
.aws.commit_kx_database["welcomedb"]
```

------
#### [ Result ]

Returns the `id` and `status` of the changeset that the API creates.

------