/AWS1/CL_DMGKAFKASETTINGS¶
Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data.
CONSTRUCTOR¶
IMPORTING¶
Optional arguments:¶
iv_broker TYPE /AWS1/DMGSTRING¶
A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form `broker-hostname-or-ip:port`. For example, `"ec2-12-345-678-901.compute-1.amazonaws.com:2345"`. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.
iv_topic TYPE /AWS1/DMGSTRING¶
The topic to which you migrate the data. If you don't specify a topic, DMS specifies `"kafka-default-topic"` as the migration topic.
iv_messageformat TYPE /AWS1/DMGMESSAGEFORMATVALUE¶
The output format for the records created on the endpoint. The message format is `JSON` (default) or `JSON_UNFORMATTED` (a single line with no tab).
iv_includetransactiondetails TYPE /AWS1/DMGBOOLEANOPTIONAL¶
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for `transaction_id`, `previous_transaction_id`, and `transaction_record_id` (the record offset within a transaction). The default is `false`.
iv_includepartitionvalue TYPE /AWS1/DMGBOOLEANOPTIONAL¶
Shows the partition value within the Kafka message output unless the partition type is `schema-table-type`. The default is `false`.
iv_partitioninclschematable TYPE /AWS1/DMGBOOLEANOPTIONAL¶
Prefixes schema and table names to partition values, when the partition type is `primary-key-type`. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is `false`.
iv_includetablealterops TYPE /AWS1/DMGBOOLEANOPTIONAL¶
Includes any data definition language (DDL) operations that change the table in the control data, such as `rename-table`, `drop-table`, `add-column`, `drop-column`, and `rename-column`. The default is `false`.
iv_includecontroldetails TYPE /AWS1/DMGBOOLEANOPTIONAL¶
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is `false`.
iv_messagemaxbytes TYPE /AWS1/DMGINTEGEROPTIONAL¶
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
iv_includenullandempty TYPE /AWS1/DMGBOOLEANOPTIONAL¶
Include NULL and empty columns for records migrated to the endpoint. The default is `false`.
iv_securityprotocol TYPE /AWS1/DMGKAFKASECURITYPROTOCOL¶
Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include `ssl-encryption`, `ssl-authentication`, and `sasl-ssl`. `sasl-ssl` requires `SaslUsername` and `SaslPassword`.
iv_sslclientcertificatearn TYPE /AWS1/DMGSTRING¶
The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
iv_sslclientkeyarn TYPE /AWS1/DMGSTRING¶
The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
iv_sslclientkeypassword TYPE /AWS1/DMGSECRETSTRING¶
The password for the client private key used to securely connect to a Kafka target endpoint.
iv_sslcacertificatearn TYPE /AWS1/DMGSTRING¶
The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
iv_saslusername TYPE /AWS1/DMGSTRING¶
The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
iv_saslpassword TYPE /AWS1/DMGSECRETSTRING¶
The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
iv_nohexprefix TYPE /AWS1/DMGBOOLEANOPTIONAL¶
Set this optional parameter to `true` to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the `NoHexPrefix` endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
iv_saslmechanism TYPE /AWS1/DMGKAFKASASLMECHANISM¶
For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. DMS versions 3.5.0 and later also support the `PLAIN` mechanism. To use the `PLAIN` mechanism, set this parameter to `PLAIN`.
iv_sslendptidentificationalg TYPE /AWS1/DMGKAFKASSLENDPTIDENTI00¶
Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
iv_uselargeintegervalue TYPE /AWS1/DMGBOOLEANOPTIONAL¶
Specifies whether to use large integer values with Kafka.
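As a minimal construction sketch (not a definitive example from this SDK's guides), the settings object can be created with any combination of the optional arguments above. The broker address and topic name below are placeholder values.

```abap
" Hypothetical values for illustration only; use your own broker list and topic.
DATA(lo_kafka_settings) = NEW /aws1/cl_dmgkafkasettings(
  iv_broker = 'ec2-12-345-678-901.compute-1.amazonaws.com:2345' " placeholder broker host:port
  iv_topic  = 'dms-sample-topic'                                " placeholder topic name
).
```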
Queryable Attributes¶
Broker¶
A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form `broker-hostname-or-ip:port`. For example, `"ec2-12-345-678-901.compute-1.amazonaws.com:2345"`. For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_BROKER() | Getter for BROKER, with configurable default |
| ASK_BROKER() | Getter for BROKER w/ exceptions if field has no value |
| HAS_BROKER() | Determine if BROKER has a value |
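Continuing the construction sketch above, the following illustrates the GET_/ASK_/HAS_ pattern that every attribute of this class exposes. The exception handling is only a sketch: the concrete exception class raised by ASK_BROKER( ) is not shown in this reference, so a generic catch is used here.

```abap
" Check for a value before reading it with the defaulting getter.
IF lo_kafka_settings->has_broker( ) = abap_true.
  DATA(lv_broker) = lo_kafka_settings->get_broker( ).
ENDIF.

" ASK_BROKER( ) raises an exception when the field has no value.
TRY.
    lv_broker = lo_kafka_settings->ask_broker( ).
  CATCH cx_root INTO DATA(lx_no_value). " concrete exception class assumed generic here
    " Handle the missing-value case.
ENDTRY.
```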
Topic¶
The topic to which you migrate the data. If you don't specify a topic, DMS specifies `"kafka-default-topic"` as the migration topic.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_TOPIC() | Getter for TOPIC, with configurable default |
| ASK_TOPIC() | Getter for TOPIC w/ exceptions if field has no value |
| HAS_TOPIC() | Determine if TOPIC has a value |
MessageFormat¶
The output format for the records created on the endpoint. The message format is `JSON` (default) or `JSON_UNFORMATTED` (a single line with no tab).
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_MESSAGEFORMAT() | Getter for MESSAGEFORMAT, with configurable default |
| ASK_MESSAGEFORMAT() | Getter for MESSAGEFORMAT w/ exceptions if field has no value |
| HAS_MESSAGEFORMAT() | Determine if MESSAGEFORMAT has a value |
IncludeTransactionDetails¶
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for `transaction_id`, `previous_transaction_id`, and `transaction_record_id` (the record offset within a transaction). The default is `false`.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_INCLUDETRANSACTIONDETS() | Getter for INCLUDETRANSACTIONDETAILS, with configurable default |
| ASK_INCLUDETRANSACTIONDETS() | Getter for INCLUDETRANSACTIONDETAILS w/ exceptions if field has no value |
| HAS_INCLUDETRANSACTIONDETS() | Determine if INCLUDETRANSACTIONDETAILS has a value |
IncludePartitionValue¶
Shows the partition value within the Kafka message output unless the partition type is `schema-table-type`. The default is `false`.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_INCLUDEPARTITIONVALUE() | Getter for INCLUDEPARTITIONVALUE, with configurable default |
| ASK_INCLUDEPARTITIONVALUE() | Getter for INCLUDEPARTITIONVALUE w/ exceptions if field has no value |
| HAS_INCLUDEPARTITIONVALUE() | Determine if INCLUDEPARTITIONVALUE has a value |
PartitionIncludeSchemaTable¶
Prefixes schema and table names to partition values, when the partition type is `primary-key-type`. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is `false`.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_PARTITIONINCLSCHEMATABLE() | Getter for PARTITIONINCLUDESCHEMATABLE, with configurable default |
| ASK_PARTITIONINCLSCHEMATABLE() | Getter for PARTITIONINCLUDESCHEMATABLE w/ exceptions if field has no value |
| HAS_PARTITIONINCLSCHEMATABLE() | Determine if PARTITIONINCLUDESCHEMATABLE has a value |
IncludeTableAlterOperations¶
Includes any data definition language (DDL) operations that change the table in the control data, such as `rename-table`, `drop-table`, `add-column`, `drop-column`, and `rename-column`. The default is `false`.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_INCLUDETABLEALTEROPS() | Getter for INCLUDETABLEALTEROPERATIONS, with configurable default |
| ASK_INCLUDETABLEALTEROPS() | Getter for INCLUDETABLEALTEROPERATIONS w/ exceptions if field has no value |
| HAS_INCLUDETABLEALTEROPS() | Determine if INCLUDETABLEALTEROPERATIONS has a value |
IncludeControlDetails¶
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is `false`.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_INCLUDECONTROLDETAILS() | Getter for INCLUDECONTROLDETAILS, with configurable default |
| ASK_INCLUDECONTROLDETAILS() | Getter for INCLUDECONTROLDETAILS w/ exceptions if field has no value |
| HAS_INCLUDECONTROLDETAILS() | Determine if INCLUDECONTROLDETAILS has a value |
MessageMaxBytes¶
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_MESSAGEMAXBYTES() | Getter for MESSAGEMAXBYTES, with configurable default |
| ASK_MESSAGEMAXBYTES() | Getter for MESSAGEMAXBYTES w/ exceptions if field has no value |
| HAS_MESSAGEMAXBYTES() | Determine if MESSAGEMAXBYTES has a value |
IncludeNullAndEmpty¶
Include NULL and empty columns for records migrated to the endpoint. The default is `false`.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_INCLUDENULLANDEMPTY() | Getter for INCLUDENULLANDEMPTY, with configurable default |
| ASK_INCLUDENULLANDEMPTY() | Getter for INCLUDENULLANDEMPTY w/ exceptions if field has no value |
| HAS_INCLUDENULLANDEMPTY() | Determine if INCLUDENULLANDEMPTY has a value |
SecurityProtocol¶
Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include `ssl-encryption`, `ssl-authentication`, and `sasl-ssl`. `sasl-ssl` requires `SaslUsername` and `SaslPassword`.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SECURITYPROTOCOL() | Getter for SECURITYPROTOCOL, with configurable default |
| ASK_SECURITYPROTOCOL() | Getter for SECURITYPROTOCOL w/ exceptions if field has no value |
| HAS_SECURITYPROTOCOL() | Determine if SECURITYPROTOCOL has a value |
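For instance, a settings object for a SASL-SSL connection could be built as sketched below. The broker address and credentials are placeholders; in practice, credentials should come from a secure store rather than being hard-coded.

```abap
" sasl-ssl requires SaslUsername and SaslPassword (placeholder values shown).
DATA(lo_sasl_settings) = NEW /aws1/cl_dmgkafkasettings(
  iv_broker           = 'ec2-12-345-678-901.compute-1.amazonaws.com:2345'
  iv_securityprotocol = 'sasl-ssl'
  iv_saslusername     = 'example-user'
  iv_saslpassword     = 'example-password'
).
```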
SslClientCertificateArn¶
The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SSLCLIENTCERTIFICATEARN() | Getter for SSLCLIENTCERTIFICATEARN, with configurable default |
| ASK_SSLCLIENTCERTIFICATEARN() | Getter for SSLCLIENTCERTIFICATEARN w/ exceptions if field has no value |
| HAS_SSLCLIENTCERTIFICATEARN() | Determine if SSLCLIENTCERTIFICATEARN has a value |
SslClientKeyArn¶
The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SSLCLIENTKEYARN() | Getter for SSLCLIENTKEYARN, with configurable default |
| ASK_SSLCLIENTKEYARN() | Getter for SSLCLIENTKEYARN w/ exceptions if field has no value |
| HAS_SSLCLIENTKEYARN() | Determine if SSLCLIENTKEYARN has a value |
SslClientKeyPassword¶
The password for the client private key used to securely connect to a Kafka target endpoint.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SSLCLIENTKEYPASSWORD() | Getter for SSLCLIENTKEYPASSWORD, with configurable default |
| ASK_SSLCLIENTKEYPASSWORD() | Getter for SSLCLIENTKEYPASSWORD w/ exceptions if field has no value |
| HAS_SSLCLIENTKEYPASSWORD() | Determine if SSLCLIENTKEYPASSWORD has a value |
SslCaCertificateArn¶
The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SSLCACERTIFICATEARN() | Getter for SSLCACERTIFICATEARN, with configurable default |
| ASK_SSLCACERTIFICATEARN() | Getter for SSLCACERTIFICATEARN w/ exceptions if field has no value |
| HAS_SSLCACERTIFICATEARN() | Determine if SSLCACERTIFICATEARN has a value |
SaslUsername¶
The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SASLUSERNAME() | Getter for SASLUSERNAME, with configurable default |
| ASK_SASLUSERNAME() | Getter for SASLUSERNAME w/ exceptions if field has no value |
| HAS_SASLUSERNAME() | Determine if SASLUSERNAME has a value |
SaslPassword¶
The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SASLPASSWORD() | Getter for SASLPASSWORD, with configurable default |
| ASK_SASLPASSWORD() | Getter for SASLPASSWORD w/ exceptions if field has no value |
| HAS_SASLPASSWORD() | Determine if SASLPASSWORD has a value |
NoHexPrefix¶
Set this optional parameter to `true` to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the `NoHexPrefix` endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_NOHEXPREFIX() | Getter for NOHEXPREFIX, with configurable default |
| ASK_NOHEXPREFIX() | Getter for NOHEXPREFIX w/ exceptions if field has no value |
| HAS_NOHEXPREFIX() | Determine if NOHEXPREFIX has a value |
SaslMechanism¶
For SASL/SSL authentication, DMS supports the `SCRAM-SHA-512` mechanism by default. DMS versions 3.5.0 and later also support the `PLAIN` mechanism. To use the `PLAIN` mechanism, set this parameter to `PLAIN`.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SASLMECHANISM() | Getter for SASLMECHANISM, with configurable default |
| ASK_SASLMECHANISM() | Getter for SASLMECHANISM w/ exceptions if field has no value |
| HAS_SASLMECHANISM() | Determine if SASLMECHANISM has a value |
SslEndpointIdentificationAlgorithm¶
Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_SSLENDPTIDENTIFICATION00() | Getter for SSLENDPOINTIDENTIFICATIONALG, with configurable default |
| ASK_SSLENDPTIDENTIFICATION00() | Getter for SSLENDPOINTIDENTIFICATIONALG w/ exceptions if field has no value |
| HAS_SSLENDPTIDENTIFICATION00() | Determine if SSLENDPOINTIDENTIFICATIONALG has a value |
UseLargeIntegerValue¶
Specifies whether to use large integer values with Kafka.
Accessible with the following methods¶
| Method | Description |
|---|---|
| GET_USELARGEINTEGERVALUE() | Getter for USELARGEINTEGERVALUE, with configurable default |
| ASK_USELARGEINTEGERVALUE() | Getter for USELARGEINTEGERVALUE w/ exceptions if field has no value |
| HAS_USELARGEINTEGERVALUE() | Determine if USELARGEINTEGERVALUE has a value |