Interface CfnLocationHDFSProps
- All Superinterfaces:
  software.amazon.jsii.JsiiSerializable
- All Known Implementing Classes:
  CfnLocationHDFSProps.Jsii$Proxy

Properties for defining a CfnLocationHDFS.
Example:

// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import java.util.List;
import software.amazon.awscdk.CfnTag;
import software.amazon.awscdk.services.datasync.*;

CfnLocationHDFSProps cfnLocationHDFSProps = CfnLocationHDFSProps.builder()
        .agentArns(List.of("agentArns"))
        .authenticationType("authenticationType")
        .nameNodes(List.of(CfnLocationHDFS.NameNodeProperty.builder()
                .hostname("hostname")
                .port(123)
                .build()))
        // the properties below are optional
        .blockSize(123)
        .kerberosKeytab("kerberosKeytab")
        .kerberosKrb5Conf("kerberosKrb5Conf")
        .kerberosPrincipal("kerberosPrincipal")
        .kmsKeyProviderUri("kmsKeyProviderUri")
        .qopConfiguration(CfnLocationHDFS.QopConfigurationProperty.builder()
                .dataTransferProtection("dataTransferProtection")
                .rpcProtection("rpcProtection")
                .build())
        .replicationFactor(123)
        .simpleUser("simpleUser")
        .subdirectory("subdirectory")
        .tags(List.of(CfnTag.builder()
                .key("key")
                .value("value")
                .build()))
        .build();
Nested Class Summary
- static final class CfnLocationHDFSProps.Builder: A builder for CfnLocationHDFSProps
- static final class CfnLocationHDFSProps.Jsii$Proxy: An implementation for CfnLocationHDFSProps
Method Summary
- static CfnLocationHDFSProps.Builder builder()
- List<String> getAgentArns(): The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
- String getAuthenticationType(): AWS::DataSync::LocationHDFS.AuthenticationType.
- default Number getBlockSize(): The size of data blocks to write into the HDFS cluster.
- default String getKerberosKeytab(): The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys.
- default String getKerberosKrb5Conf(): The krb5.conf file that contains the Kerberos configuration information.
- default String getKerberosPrincipal(): The Kerberos principal with access to the files and folders on the HDFS cluster.
- default String getKmsKeyProviderUri(): The URI of the HDFS cluster's Key Management Server (KMS).
- Object getNameNodes(): The NameNode that manages the HDFS namespace.
- default Object getQopConfiguration(): The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster.
- default Number getReplicationFactor(): The number of DataNodes to replicate the data to when writing to the HDFS cluster.
- default String getSimpleUser(): The user name used to identify the client on the host operating system.
- default String getSubdirectory(): A subdirectory in the HDFS cluster.
- default List<CfnTag> getTags(): The key-value pair that represents the tag that you want to add to the location.

Methods inherited from interface software.amazon.jsii.JsiiSerializable:
$jsii$toJson
Method Details
getAgentArns
The Amazon Resource Names (ARNs) of the agents that are used to connect to the HDFS cluster.
getAuthenticationType
AWS::DataSync::LocationHDFS.AuthenticationType.
getNameNodes
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode.
getBlockSize
The size of data blocks to write into the HDFS cluster. The block size must be a multiple of 512 bytes. The default block size is 128 mebibytes (MiB).
getKerberosKeytab
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. Provide the base64-encoded file text. If KERBEROS is specified for AuthType, this value is required.
getKerberosKrb5Conf
The krb5.conf file that contains the Kerberos configuration information. You can load the krb5.conf by providing a string of the file's contents or an Amazon S3 presigned URL of the file. If KERBEROS is specified for AuthType, this value is required.
getKerberosPrincipal
The Kerberos principal with access to the files and folders on the HDFS cluster. If KERBEROS is specified for AuthenticationType, this parameter is required.
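As a sketch of how the Kerberos-related properties fit together (all values below are hypothetical placeholders, not output from a real cluster), the principal, keytab, and krb5.conf are typically supplied together whenever the authentication type is KERBEROS:

```java
import java.util.List;
import software.amazon.awscdk.services.datasync.*;

// Placeholder values: a real keytab must be the base64-encoded file text,
// and the krb5.conf can be the file's contents or an S3 presigned URL.
CfnLocationHDFSProps kerberosProps = CfnLocationHDFSProps.builder()
        .agentArns(List.of("arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"))
        .authenticationType("KERBEROS")
        .nameNodes(List.of(CfnLocationHDFS.NameNodeProperty.builder()
                .hostname("namenode.example.com")
                .port(8020)
                .build()))
        // All three are required when AuthenticationType is KERBEROS.
        .kerberosPrincipal("user@EXAMPLE.COM")
        .kerberosKeytab("base64EncodedKeytabText")
        .kerberosKrb5Conf("base64EncodedKrb5ConfText")
        .build();
```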
getKmsKeyProviderUri
The URI of the HDFS cluster's Key Management Server (KMS).
getQopConfiguration
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If QopConfiguration isn't specified, RpcProtection and DataTransferProtection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value.
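For illustration (the protection levels chosen here are assumptions, not values read from a live cluster), both QOP settings can be set explicitly so the configuration does not rely on the defaulting behavior described above:

```java
import software.amazon.awscdk.services.datasync.*;

// Both settings given explicitly; per the note above, setting only one
// of them would make the other assume the same value.
CfnLocationHDFS.QopConfigurationProperty qop =
        CfnLocationHDFS.QopConfigurationProperty.builder()
                .rpcProtection("PRIVACY")
                .dataTransferProtection("PRIVACY")
                .build();
```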
getReplicationFactor
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
getSimpleUser
The user name used to identify the client on the host operating system. If SIMPLE is specified for AuthenticationType, this parameter is required.
getSubdirectory
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it will default to /.
getTags
The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
builder
- Returns:
  a CfnLocationHDFSProps.Builder of CfnLocationHDFSProps