

# AWS Transfer Family SFTP connectors

An AWS Transfer Family SFTP connector establishes a connection with a remote SFTP server to transfer files between Amazon S3 storage and that server using the SFTP protocol. You can send files from Amazon S3 to an external, partner-owned SFTP server; retrieve files from a partner's SFTP server to Amazon S3; or list, delete, rename, or move files on the remote server. SFTP connectors support two egress types: service managed (using AWS managed infrastructure) and VPC (routing through your VPC using Amazon VPC Lattice). Using SFTP connectors, you can build automated, event-driven file transfer workflows in AWS.

The following video provides a brief introduction to Transfer Family SFTP connectors.

[![AWS Videos](https://img.youtube.com/vi/Gm-FMGrVpAg/0.jpg)](https://www.youtube.com/watch?v=Gm-FMGrVpAg)


**Topics**
+ [Creating SFTP connectors](configure-sftp-connector.md)
+ [VPC connectivity for SFTP connectors](sftp-connectors-vpc-overview.md)
+ [Using SFTP connectors](transfer-sftp-connectors.md)
+ [Monitoring SFTP connectors](track-connector-progress.md)
+ [Managing SFTP connectors](manage-sftp-connectors.md)
+ [Scaling and quotas for SFTP connectors](scale-and-limits-sftp-connector.md)
+ [Reference architectures using SFTP connectors](reference-architectures.md)

# Creating SFTP connectors


This topic describes how to create SFTP connectors. Each connector connects to one remote SFTP server. You perform the following high-level tasks to configure an SFTP connector.

**Note**  
For VPC-based connectors that route traffic through your Virtual Private Cloud, see [Create an SFTP connector with VPC-based egress](create-vpc-sftp-connector-procedure.md).

1. Store the authentication credentials for the connector in AWS Secrets Manager.

1. Create the connector by specifying the secret ARN, the remote server's URL or Resource Configuration ARN, the security policy that contains the algorithms the connector supports, and other configuration settings.

1. After you create the connector, you can test it to ensure that it can establish connections with the remote SFTP server.

## Choosing SFTP connector egress type


When you create an SFTP connector, you choose between two egress types: **Service managed** and **VPC Lattice**.
+ **Service managed** (default): The connector uses NAT gateways and IP addresses owned by AWS Transfer Family to route connections over the public internet. The service provides three static IP addresses for your connectors; these addresses must be allowlisted on the remote servers to establish connections.
+ **VPC Lattice**: The connector routes traffic through your VPC environment using Amazon VPC Lattice. Use VPC connectivity for SFTP connectors in these scenarios:
  + **Private SFTP servers**: Connect to SFTP servers that are only accessible from your VPC
  + **On-premises connectivity**: Connect to on-premises SFTP servers through AWS Direct Connect or AWS Site-to-Site VPN connections
  + **Custom IP addresses**: Present your own NAT gateways and Elastic IP addresses to the remote server
  + **Centralized security controls**: Route file transfers through your organization's central ingress/egress controls

The following matrix helps you choose the right connector type for your use case.


**SFTP connector egress type matrix**  

| Capability | Egress type = Service managed | Egress type = VPC Lattice | 
| --- | --- | --- | 
| Connectivity to publicly hosted (internet-accessible) SFTP servers | Supported | Supported¹ | 
| Connectivity to privately hosted (on-premises) SFTP servers | Not supported | Supported² | 
| Connectivity to privately hosted (in-VPC) SFTP servers | Not supported | Supported | 
| Static IP addresses presented to remote SFTP server | Supported via service-supplied static IP addresses | Supported via customer-owned static IP addresses | 
| Bandwidth available | 50 Mbps per account | Higher bandwidth, as available from customer-owned Resource Gateway and NAT gateway | 
| Traffic routing to internet over customer-owned NAT gateways and network firewalls | Not supported. NAT gateways are owned and managed by the Transfer Family service. | Supported | 

¹ *With egress type VPC Lattice, connectivity to publicly hosted servers is supported using the egress infrastructure (NAT gateways) set up in your egress VPCs.*

² *With egress type VPC Lattice, connectivity to privately hosted servers is supported over existing network paths in your VPC, such as AWS Direct Connect or VPN.*

## Choosing IP addressing mode


When you create an SFTP connector with service-managed egress, you can choose between two IP addressing modes:
+ **IPv4 only** (default): The connector uses IPv4 addresses exclusively to connect to the remote SFTP server. This is the default mode when creating connectors through the console, AWS CLI, or API.
+ **Dual-stack**: The connector supports both IPv6 and IPv4 addresses. In dual-stack mode, the connector prefers IPv6 when DNS resolution returns IPv6 results, and uses IPv4 when only IPv4 DNS results are returned.

**Note**  
IP addressing mode applies only to connectors with service-managed egress type. Connectors that use VPC Lattice egress do not support this setting.
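The dual-stack preference described above can be sketched as follows. This is an illustrative model only, not the service's actual implementation, and the function name is hypothetical:

```python
# Illustrative sketch (not the service's implementation): address selection
# behavior for each IP addressing mode. "addresses" stands in for the
# records that a DNS lookup of the remote server returned.
def select_address(addresses, mode="dual-stack"):
    """addresses: list of (record_type, ip) tuples, e.g. ("AAAA", "2001:db8::1")."""
    ipv4 = [ip for rtype, ip in addresses if rtype == "A"]
    ipv6 = [ip for rtype, ip in addresses if rtype == "AAAA"]
    if mode == "ipv4-only":
        # IPv4 only: any AAAA results are ignored
        return ipv4[0] if ipv4 else None
    # Dual-stack: prefer IPv6 when DNS returns it, fall back to IPv4
    if ipv6:
        return ipv6[0]
    return ipv4[0] if ipv4 else None
```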

**Topics**
+ [Choosing SFTP connector egress type](#choosing-egress-type)
+ [Choosing IP addressing mode](#choosing-ip-address-type)
+ [Store authentication credentials for SFTP connectors in Secrets Manager](sftp-connector-secret-procedure.md)
+ [Create an SFTP connector with service-managed egress](create-sftp-connector-procedure.md)
+ [Create an SFTP connector with VPC-based egress](create-vpc-sftp-connector-procedure.md)
+ [Test an SFTP connector](test-sftp-connector.md)

# Store authentication credentials for SFTP connectors in Secrets Manager

You can use Secrets Manager to store user credentials for your SFTP connectors. When you create your secret, you must provide a username, and you must also provide a password, a private key, or both. For details, see [Quotas for SFTP connectors](scale-and-limits-sftp-connector.md#limits-sftp-connector).

**Note**  
When you store secrets in Secrets Manager, your AWS account incurs charges. For information about pricing, see [AWS Secrets Manager Pricing](https://aws.amazon.com/secrets-manager/pricing).

**To store user credentials in Secrets Manager for an SFTP connector**

1. Sign in to the AWS Management Console and open the AWS Secrets Manager console at [https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/).

1. In the left navigation pane, choose **Secrets**. 

1. On the **Secrets** page, choose **Store a new secret**.

1. On the **Choose secret type** page, for **Secret type**, choose **Other type of secret**.

1. Provide the key/value information for your secret: you need to provide the username, and either a private key or a password.

   1. In the **Key/value pairs** section, choose the **Key/value** tab.
      + **Key** – Enter **Username**.
      + **Value** – Enter the name of the user that is authorized to connect to the partner's server.

   1. If you want to provide a key pair, choose **Add row**, and in the **Key/value pairs** section, choose the **Key/value** tab.
      + **Key** – Enter **PrivateKey**.
      + **Value** – Paste in your private key.

      **Tip**: The private key data that you enter must correspond to the public key that is stored for this user on the remote SFTP server.
**Note**  
You can't use a passphrase-protected private key for authentication with an AWS Transfer Family SFTP connector.

      For details on how to generate a public/private key pair, see [Creating SSH keys on macOS, Linux, or Unix](macOS-linux-unix-ssh.md).

   1. If you want to provide a password, choose **Add row**, and in the **Key/value pairs** section, choose the **Key/value** tab.
      + **Key** – Enter **Password**.
      + **Value** – Enter the password for the user.

1. Choose **Next**.

1. On the **Configure secret** page, enter a name and description for your secret. We recommend that you use a prefix of **aws/transfer/** for the name. For example, you could name your secret **aws/transfer/connector-1**.

1. Choose **Next**, and then accept the defaults on the **Configure rotation** page. Then choose **Next**.

1. On the **Review** page, choose **Store** to create and store the secret.
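The console steps above produce a JSON secret whose keys are `Username`, `Password`, and `PrivateKey`. If you prefer to script the secret creation, a minimal sketch using boto3 might look like the following. The helper function name is ours, and the `create_secret` call (shown commented out) requires AWS credentials:

```python
import json

# Hedged sketch: build the key/value payload for a connector secret.
# The Username/Password/PrivateKey key names follow the console steps above.
def build_connector_secret(username, password=None, private_key=None):
    if password is None and private_key is None:
        raise ValueError("Provide a password, a private key, or both")
    secret = {"Username": username}
    if password is not None:
        secret["Password"] = password
    if private_key is not None:
        secret["PrivateKey"] = private_key
    return json.dumps(secret)

# Example (requires AWS credentials; not run here):
# import boto3
# boto3.client("secretsmanager").create_secret(
#     Name="aws/transfer/connector-1",
#     SecretString=build_connector_secret("partner-user", password="example"),
# )
```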

# Create an SFTP connector with service-managed egress


This procedure explains how to create an SFTP connector with service-managed egress by using the AWS Transfer Family console or the AWS CLI.

------
#### [ Console ]<a name="create-sftp-connector"></a>

**To create an SFTP connector**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **SFTP Connectors**, then choose **Create SFTP connector**.

1. In the **Connector configuration** section, for **Egress type**, choose **Service managed**. This option uses AWS Transfer Family managed egress infrastructure. The Transfer Family service provides and manages static IP addresses for each SFTP connector.

1. In the **Connector configuration** section, provide the following information:  
![\[The Transfer Family SFTP connector console, showing the Connector configuration settings.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-connector-example-config.png)
   + For the **URL**, enter the URL for a remote SFTP server. This URL must be formatted as `sftp://partner-SFTP-server-url`, for example `sftp://AnyCompany.com`.
**Note**  
Optionally, you can provide a port number in your URL. The format is `sftp://partner-SFTP-server-url:port-number`. The default port number (when no port is specified) is port 22.
   + For the **Access role**, choose the Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role to use.
     + **Make sure that this role provides read and write access** to the parent directory of the file location that's used in the `StartFileTransfer` request.
     + **Make sure that this role provides permission** for `secretsmanager:GetSecretValue` to access the secret.
**Note**  
In the policy, you must specify the ARN for the secret. The ARN contains the secret name with six random alphanumeric characters appended. An ARN for a secret has the following format.  

       ```
       arn:aws:secretsmanager:region:account-id:secret:aws/transfer/SecretName-6RandomCharacters
       ```
     + **Make sure this role contains a trust relationship** that allows the connector to access your resources when servicing your users' transfer requests. For details on establishing a trust relationship, see [To establish a trust relationship](requirements-roles.md#establish-trust-transfer).  
****  

     ```
     {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
         {
             "Sid": "AllowListingOfUserFolder",
             "Action": [
                 "s3:ListBucket",
                 "s3:GetBucketLocation"
             ],
             "Effect": "Allow",
             "Resource": [
                 "arn:aws:s3:::amzn-s3-demo-bucket"
             ]
         },
         {
             "Sid": "HomeDirObjectAccess",
             "Effect": "Allow",
             "Action": [
                 "s3:PutObject",
                 "s3:GetObject",
                 "s3:DeleteObject",
                 "s3:DeleteObjectVersion",
                 "s3:GetObjectVersion",
                 "s3:GetObjectACL",
                 "s3:PutObjectACL"
             ],
             "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
         },
         {
             "Sid": "GetConnectorSecretValue",
             "Effect": "Allow",
             "Action": [
                 "secretsmanager:GetSecretValue"
             ],
             "Resource": "arn:aws:secretsmanager:us-west-2:111122223333:secret:aws/transfer/SecretName-6RandomCharacters"
         }
       ]
     }
     ```
**Note**  
For the access role, the example grants access to a single secret. However, you can use a wildcard character, which can save work if you want to reuse the same IAM role for multiple users and secrets. For example, the following resource statement grants permissions for all secrets that have names beginning with `aws/transfer`.  

     ```
     "Resource": "arn:aws:secretsmanager:region:account-id:secret:aws/transfer/*"
     ```
You can also store secrets containing your SFTP credentials in another AWS account. For details on enabling cross-account secret access, see [Permissions to AWS Secrets Manager secrets for users in a different account](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples_cross.html).

1. Complete the connector configuration:
   + (Optional) For the **Logging role**, choose the IAM role for the connector to use to push events to your CloudWatch logs. The following example policy lists the necessary permissions to log events for SFTP connectors.  
****  

     ```
     {
         "Version":"2012-10-17",		 	 	 
         "Statement": [
             {
                 "Sid": "VisualEditor0",
                 "Effect": "Allow",
                 "Action": [
                     "logs:CreateLogStream",
                     "logs:DescribeLogStreams",
                     "logs:CreateLogGroup",
                     "logs:PutLogEvents"
                 ],
                 "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*"
             }
         ]
     }
     ```

1. In the **SFTP Configuration** section, provide the following information:  
![\[The Transfer Family SFTP connector console, showing the SFTP configuration settings.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/create-connector-example-sftp-config.png)
   + For **Connector credentials**, from the dropdown list, choose the name of a secret in AWS Secrets Manager that contains the SFTP user's private key or password. You must create a secret and store it in a specific manner. For details, see [Store authentication credentials for SFTP connectors in Secrets Manager](sftp-connector-secret-procedure.md).
   + (Optional) You can create your connector while leaving the `TrustedHostKeys` parameter empty. However, your connector can't transfer files with the remote server until you provide this parameter in the connector's configuration. You can enter the trusted host key(s) when you create your connector, or update your connector later by using the host key information returned by the `TestConnection` console action or API command. That is, for the **Trusted host keys** text box, you can do either of the following:
     + **Provide the trusted host key(s) when you create your connector.** Paste in the public portion of the host key that is used to identify the external server. To add more than one key, choose **Add trusted host key** for each additional key. You can run the `ssh-keyscan` command against the SFTP server to retrieve the necessary key. For details about the format and type of trusted host keys that Transfer Family supports, see [SftpConnectorConfig](https://docs.aws.amazon.com/transfer/latest/APIReference/API_SftpConnectorConfig.html).
     + **Leave the trusted host keys text box empty when you create your connector, and update your connector later.** If you don't have the host key information when you create your connector, you can leave this parameter empty for now and proceed with creating your connector. After the connector is created, use the new connector's ID to run the `TestConnection` command, either in the AWS CLI or from the connector's detail page. If successful, `TestConnection` returns the necessary host key information. You can then edit your connector using the console (or by running the `UpdateConnector` AWS CLI command) and add the host key information that `TestConnection` returned.
**Important**  
If you retrieve the remote server's host key by running `TestConnection`, make sure that you perform out-of-band validation on the key that is returned.  
You must accept the new key as trusted, or verify the presented fingerprint with a previously known fingerprint that you have received from the owner of the remote SFTP server you are connecting to.
   + (Optional) For **Maximum concurrent connections**, from the dropdown list, choose the number of concurrent connections that your connector creates to the remote server. The default selection on the console is **5**.

     This setting specifies the number of active connections that your connector can establish with the remote server at the same time. Creating concurrent connections can enhance connector performance by enabling parallel operations.

1. In the **Cryptographic algorithm options** section, choose a security policy from the **Security policy** dropdown list. The security policy enables you to select the cryptographic algorithms that your connector supports. For details on the available security policies and algorithms, see [Security policies for AWS Transfer Family SFTP connectors](security-policies-connectors.md).

1. (Optional) In the **Tags** section, for **Key** and **Value**, enter one or more tags as key-value pairs.

1. After you have confirmed all of your settings, choose **Create SFTP connector**. If the connector is created successfully, a screen appears with a list of the assigned static IP addresses and a **Test connection** button. Use the button to test the configuration for your new connector.  
![\[The connector creation screen that appears when an SFTP connector has been successfully created. It contains a button for testing the connection and a list of the service-managed static IP addresses of this connector.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/connector-success-ip.png)

The **Connectors** page appears, with the ID of your new SFTP connector added to the list. To view the details for your connectors, see [View SFTP connector details](manage-sftp-connectors.md#sftp-connectors-view-info).
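To perform the out-of-band validation that the **Important** note above calls for, you can compare SSH key fingerprints. The sketch below computes the standard SHA-256 fingerprint (the same format that `ssh-keygen -lf` prints) from a trusted host key entry; the helper function name is ours:

```python
import base64
import hashlib

def host_key_fingerprint(host_key_entry):
    """Compute the OpenSSH-style SHA-256 fingerprint of a host key entry
    such as "sftp.example.com ssh-rsa AAAA..." or "ssh-rsa AAAA..."."""
    key_b64 = host_key_entry.strip().split()[-1]  # the base64 blob is the last field
    digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
    # OpenSSH prints fingerprints as unpadded base64 with a "SHA256:" prefix
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```

Compare the result against the fingerprint that the remote server's owner gives you, and add the key to **Trusted host keys** only if the two match.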

------
#### [ CLI ]

You use the [CreateConnector](https://docs.aws.amazon.com/transfer/latest/APIReference/API_CreateConnector.html) command to create a connector. To use this command to create an SFTP connector, you must provide the following information.
+ The URL for a remote SFTP server. This URL must be formatted as `sftp://partner-SFTP-server-url`, for example `sftp://AnyCompany.com`.
+ The access role. Choose the Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role to use.
  + **Make sure that this role provides read and write access** to the parent directory of the file location that's used in the `StartFileTransfer` request.
  + **Make sure that this role provides permission** for `secretsmanager:GetSecretValue` to access the secret.
**Note**  
In the policy, you must specify the ARN for the secret. The ARN contains the secret name with six random alphanumeric characters appended. An ARN for a secret has the following format.  

    ```
    arn:aws:secretsmanager:region:account-id:secret:aws/transfer/SecretName-6RandomCharacters
    ```
  + **Make sure this role contains a trust relationship** that allows the connector to access your resources when servicing your users' transfer requests. For details on establishing a trust relationship, see [To establish a trust relationship](requirements-roles.md#establish-trust-transfer).  
****  

  ```
  {
    "Version":"2012-10-17",		 	 	 
    "Statement": [
      {
          "Sid": "AllowListingOfUserFolder",
          "Action": [
              "s3:ListBucket",
              "s3:GetBucketLocation"
          ],
          "Effect": "Allow",
          "Resource": [
              "arn:aws:s3:::amzn-s3-demo-bucket"
          ]
      },
      {
          "Sid": "HomeDirObjectAccess",
          "Effect": "Allow",
          "Action": [
              "s3:PutObject",
              "s3:GetObject",
              "s3:DeleteObject",
              "s3:DeleteObjectVersion",
              "s3:GetObjectVersion",
              "s3:GetObjectACL",
              "s3:PutObjectACL"
          ],
          "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
      },
      {
          "Sid": "GetConnectorSecretValue",
          "Effect": "Allow",
          "Action": [
              "secretsmanager:GetSecretValue"
          ],
          "Resource": "arn:aws:secretsmanager:us-west-2:111122223333:secret:aws/transfer/SecretName-6RandomCharacters"
      }
    ]
  }
  ```
**Note**  
For the access role, the example grants access to a single secret. However, you can use a wildcard character, which can save work if you want to reuse the same IAM role for multiple users and secrets. For example, the following resource statement grants permissions for all secrets that have names beginning with `aws/transfer`.  

  ```
  "Resource": "arn:aws:secretsmanager:region:account-id:secret:aws/transfer/*"
  ```
You can also store secrets containing your SFTP credentials in another AWS account. For details on enabling cross-account secret access, see [Permissions to AWS Secrets Manager secrets for users in a different account](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples_cross.html).
+ (Optional) Choose the IAM role for the connector to use to push events to your CloudWatch logs. The following example policy lists the necessary permissions to log events for SFTP connectors.  
****  

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Sid": "VisualEditor0",
              "Effect": "Allow",
              "Action": [
                  "logs:CreateLogStream",
                  "logs:DescribeLogStreams",
                  "logs:CreateLogGroup",
                  "logs:PutLogEvents"
              ],
              "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*"
          }
      ]
  }
  ```
+ Provide the following SFTP configuration information.
  + The ARN of a secret in AWS Secrets Manager that contains the SFTP user's private key or password.
  + The public portion of the host key that is used to identify the external server. You can provide multiple trusted host keys if you like.

  The easiest way to provide the SFTP information is to save it to a file. For example, copy the following example text to a file named `testSFTPConfig.json`.

  ```
  {
     "UserSecretId": "arn:aws:secretsmanager:us-east-2:123456789012:secret:aws/transfer/example-username-key",
     "TrustedHostKeys": [
        "sftp.example.com ssh-rsa AAAAbbbb...EEEE="
     ]
  }
  ```
+ Specify a security policy for your connector by entering the security policy name.

**Note**  
The `UserSecretId` can be either the entire ARN or the name of the secret (*example-username-key* in the previous listing).

Then run the following command to create the connector:

```
aws transfer create-connector --url "sftp://partner-SFTP-server-url" \
--access-role your-IAM-role-for-bucket-access \
--logging-role arn:aws:iam::your-account-id:role/service-role/AWSTransferLoggingAccess \
--sftp-config file:///path/to/testSFTPConfig.json \
--security-policy-name security-policy-name \
--maximum-concurrent-connections integer-from-1-to-5
```
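Because a malformed `sftp-config` file causes `create-connector` to fail, you may want to sanity-check it before running the command. The following is a minimal sketch; the helper is our own, not part of the AWS CLI:

```python
import json

def validate_sftp_config(text):
    """Basic structural checks on the sftp-config JSON before passing it
    to `aws transfer create-connector`."""
    cfg = json.loads(text)  # raises ValueError if the file is not valid JSON
    if "UserSecretId" not in cfg:
        raise ValueError("UserSecretId is required")
    for entry in cfg.get("TrustedHostKeys", []):
        # Each entry needs at least a key type and a base64 blob
        if len(entry.split()) < 2:
            raise ValueError("Malformed trusted host key entry: " + entry)
    return cfg
```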

When you describe a connector that uses the VPC egress type, the response includes the following additional fields:

```
{
   "Connector": { 
      "AccessRole": "arn:aws:iam::123456789012:role/connector-role",
      "Arn": "arn:aws:transfer:us-east-1:123456789012:connector/c-1234567890abcdef0",
      "ConnectorId": "c-1234567890abcdef0",
      "Status": "ACTIVE",
      "EgressConfig": {
        "VpcLattice": {
          "ResourceConfigurationArn": "arn:aws:vpc-lattice:us-east-1:123456789012:resourceconfiguration/rcfg-12345678",
          "PortNumber": 22
        }
      },
      "EgressType": "VPC",
      "ServiceManagedEgressIpAddresses": null,
      "SftpConfig": { 
         "TrustedHostKeys": [ "ssh-rsa AAAAB3NzaC..." ],
         "UserSecretId": "aws/transfer/connector-secret"
      },
      "Url": "sftp://my.sftp.server.com:22"
   }
}
```

Note that `ServiceManagedEgressIpAddresses` is null for VPC egress type connectors since traffic routes through your VPC instead of AWS managed infrastructure.
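A script can act on this distinction by inspecting the `DescribeConnector` response shape shown above. A hedged sketch (the helper name is ours):

```python
def egress_summary(response):
    """Summarize how a connector egresses traffic, given a parsed
    DescribeConnector response like the listing above."""
    c = response["Connector"]
    if c.get("EgressType") == "VPC":
        arn = c["EgressConfig"]["VpcLattice"]["ResourceConfigurationArn"]
        return "VPC Lattice egress via " + arn
    # Service-managed connectors report the static IPs to allowlist
    ips = c.get("ServiceManagedEgressIpAddresses") or []
    return "Service-managed egress via " + ", ".join(ips)
```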

------

# Create an SFTP connector with VPC-based egress


This topic provides step-by-step instructions for creating SFTP connectors with VPC connectivity. VPC Lattice-enabled connectors use Amazon VPC Lattice to route traffic through your Virtual Private Cloud, enabling secure connections to private endpoints or the use of your own NAT gateways for internet access.

**When to use VPC connectivity**

Use VPC connectivity for SFTP connectors in these scenarios:
+ **Private SFTP servers**: Connect to SFTP servers that are only accessible from your VPC.
+ **On-premises connectivity**: Connect to on-premises SFTP servers through AWS Direct Connect or AWS Site-to-Site VPN connections.
+ **Custom IP addresses**: Use your own NAT gateways and Elastic IP addresses, including BYOIP scenarios.
+ **Centralized security controls**: Route file transfers through your organization's central ingress/egress controls.

![\[Architecture diagram showing VPC-based egress for SFTP connectors, illustrating how Cross-VPC Resource Access enables secure connections through your Virtual Private Cloud.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/vpc-egress-diagram.png)


## Prerequisites for VPC Lattice-enabled SFTP connectors


Before creating a VPC Lattice-enabled SFTP connector, you must complete the following prerequisites:

**How VPC-based connectivity works**

VPC Lattice enables you to securely share VPC resources with other AWS services. AWS Transfer Family uses a service network to simplify the resource sharing process. The key components are:
+ **Resource Gateway**: Serves as the point of access into your VPC. You create this in your VPC with a minimum of two Availability Zones.
+ **Resource Configuration**: Contains the private IP address or public DNS name of the SFTP server you want to connect to.

When you create a VPC Lattice-enabled connector, AWS Transfer Family uses Forward Access Sessions (FAS) to temporarily obtain your credentials and associate your Resource Configuration with the Transfer Family service network.

**Required setup steps**

1. **VPC infrastructure**: Ensure you have a properly configured VPC with the necessary subnets, route tables, and security groups for your SFTP server connectivity requirements.

1. **Resource Gateway**: Create a Resource Gateway in your VPC using the VPC Lattice `create-resource-gateway` command. The Resource Gateway must be associated with subnets in at least two Availability Zones. For more information, see [Resource gateways](https://docs.aws.amazon.com/vpc-lattice/latest/ug/resource-gateway.html) in the *Amazon VPC Lattice User Guide*.

1. **Resource Configuration**: Create a Resource Configuration that represents the target SFTP server using the VPC Lattice `create-resource-configuration` command. You can specify either:
   + A private IP address for private endpoints
   + A public DNS name for public endpoints (IP addresses are not supported for public endpoints)

1. **Authentication credentials**: Store the SFTP user credentials in AWS Secrets Manager as described in [Store authentication credentials for SFTP connectors in Secrets Manager](sftp-connector-secret-procedure.md).

**Important**  
The Resource Gateway and Resource Configuration must be created in the same AWS account. When creating a Resource Configuration, you must first have a Resource Gateway in place.

For more information on VPC resource configurations, see [Resource configurations](https://docs.aws.amazon.com/vpc-lattice/latest/ug/resource-configuration.html) in the *Amazon VPC Lattice User Guide*.

**Note**  
VPC connectivity for SFTP connectors is available in AWS Regions where Amazon VPC Lattice resources are available. For more information, see [VPC Lattice FAQs](https://aws.amazon.com/vpc/lattice/faqs/). Availability Zone support varies by Region, and Resource Gateways require a minimum of two Availability Zones.

## Create a VPC Lattice-enabled SFTP connector


After completing the prerequisites, you can create an SFTP connector with VPC connectivity using the AWS CLI, AWS Management Console, or AWS SDKs.

------
#### [ Console ]<a name="create-vpc-sftp-connector"></a>

**To create a VPC Lattice-enabled SFTP connector**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **SFTP Connectors**, then choose **Create SFTP connector**.

1. In the **Connector configuration** section, for **Egress type**, choose **VPC Lattice**.

   This option routes traffic through your VPC using Amazon VPC Lattice for cross-VPC resource access. You can use this option to connect to privately hosted server endpoints, route traffic through your VPC's security controls, or use your own NAT gateways and Elastic IP addresses. The address of the remote SFTP server is represented as a Resource Configuration in your VPC. For more information about Resource Configurations, see [Resource configurations for VPC resources](https://docs.aws.amazon.com/vpc-lattice/latest/ug/resource-configuration.html) in the *Amazon VPC Lattice User Guide*.

1. Complete the connector configuration:
   + For the **Access role**, choose the Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role to use.
     + **Make sure that this role provides read and write access** to the parent directory of the file location that's used in the `StartFileTransfer` request.
     + **Make sure that this role provides permission** for `secretsmanager:GetSecretValue` to access the secret.
**Note**  
In the policy, you must specify the ARN for the secret. The ARN contains the secret name with six random alphanumeric characters appended. An ARN for a secret has the following format.  

       ```
       arn:aws:secretsmanager:region:account-id:secret:aws/transfer/SecretName-6RandomCharacters
       ```
     + **Make sure this role contains a trust relationship** that allows the connector to access your resources when servicing your users' transfer requests. For details on establishing a trust relationship, see [To establish a trust relationship](requirements-roles.md#establish-trust-transfer).  
****  

     ```
     {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
         {
             "Sid": "AllowListingOfUserFolder",
             "Action": [
                 "s3:ListBucket",
                 "s3:GetBucketLocation"
             ],
             "Effect": "Allow",
             "Resource": [
                 "arn:aws:s3:::amzn-s3-demo-bucket"
             ]
         },
         {
             "Sid": "HomeDirObjectAccess",
             "Effect": "Allow",
             "Action": [
                 "s3:PutObject",
                 "s3:GetObject",
                 "s3:DeleteObject",
                 "s3:DeleteObjectVersion",
                 "s3:GetObjectVersion",
                 "s3:GetObjectACL",
                 "s3:PutObjectACL"
             ],
             "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
         },
         {
             "Sid": "GetConnectorSecretValue",
             "Effect": "Allow",
             "Action": [
                 "secretsmanager:GetSecretValue"
             ],
             "Resource": "arn:aws:secretsmanager:us-west-2:111122223333:secret:aws/transfer/SecretName-6RandomCharacters"
         }
       ]
     }
     ```
**Note**  
For the access role, the example grants access to a single secret. However, you can use a wildcard character, which can save work if you want to reuse the same IAM role for multiple users and secrets. For example, the following resource statement grants permissions for all secrets that have names beginning with `aws/transfer`.  

     ```
     "Resource": "arn:aws:secretsmanager:region:account-id:secret:aws/transfer/*"
     ```
You can also store secrets containing your SFTP credentials in another AWS account. For details on enabling cross-account secret access, see [Permissions to AWS Secrets Manager secrets for users in a different account](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples_cross.html).
   + For **Resource Configuration ARN**, enter the ARN of the VPC Lattice Resource Configuration that points to your SFTP server:

     ```
     arn:aws:vpc-lattice:region:account-id:resourceconfiguration/rcfg-12345678
     ```
   + (Optional) For the **Logging role**, choose the IAM role for the connector to use to push events to your CloudWatch logs.  

     ```
     {
          "Version": "2012-10-17",
         "Statement": [
             {
                 "Sid": "VisualEditor0",
                 "Effect": "Allow",
                 "Action": [
                     "logs:CreateLogStream",
                     "logs:DescribeLogStreams",
                     "logs:CreateLogGroup",
                     "logs:PutLogEvents"
                 ],
                 "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*"
             }
         ]
     }
     ```

1. In the **SFTP Configuration** section, provide the following information:
   + For **Connector credentials**, choose the name of a secret in AWS Secrets Manager that contains the SFTP user's private key or password.
   + For **Trusted host keys**, paste in the public portion of the host key that is used to identify the external server, or leave empty to configure later using the `TestConnection` command.

     Because this host key is for a VPC Lattice-enabled connector, remove the host name from the key.
   + (Optional) For **Maximum concurrent connections**, choose the number of concurrent connections that your connector creates to the remote server (default is 5).

1. In the **Cryptographic algorithm options** section, choose a **Security policy** from the dropdown list.

1. (Optional) In the **Tags** section, add tags as key-value pairs.

1. Choose **Create SFTP connector** to create the VPC Lattice-enabled SFTP connector.

The connector will be created with a status of `PENDING` while the resource association is being provisioned, which typically takes several minutes. Once the status changes to `ACTIVE`, the connector is ready for use.
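The wait for provisioning can be sketched as a small polling helper. This is illustrative only: `get_status` is an assumed callable that you would implement yourself, for example by calling `describe-connector` through the AWS CLI or an SDK and extracting the connector status.

```python
import time

def wait_for_connector_active(get_status, timeout_seconds=900, poll_interval=15):
    """Poll until the connector reports ACTIVE, ERRORED, or the timeout expires.

    get_status is a caller-supplied function returning one of the connector
    states described above: "PENDING", "ACTIVE", or "ERRORED".
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status()
        if status == "ACTIVE":
            return status
        if status == "ERRORED":
            raise RuntimeError("connector provisioning failed; check the error details")
        # Still PENDING: provisioning typically takes several minutes.
        time.sleep(poll_interval)
    raise TimeoutError("connector did not reach ACTIVE before the timeout")
```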

------
#### [ CLI ]

Use the following command to create a VPC Lattice-enabled SFTP connector:

```
aws transfer create-connector \
    --access-role arn:aws:iam::123456789012:role/TransferConnectorRole \
    --sftp-config UserSecretId=my-secret-id,TrustedHostKeys="ssh-rsa AAAAB3NzaC..." \
    --egress-config VpcLattice={ResourceConfigurationArn=arn:aws:vpc-lattice:us-east-1:123456789012:resourceconfiguration/rcfg-1234567890abcdef0} \
    --security-policy-name TransferSecurityPolicy-2024-01
```

The key parameter for VPC connectivity is `--egress-config`, which specifies the Resource Configuration ARN that defines your SFTP server target.

------

## Monitoring VPC connector status


VPC Lattice-enabled connectors have an asynchronous setup process. After creation, monitor the connector status:
+ **PENDING**: The connector is being provisioned. Service network provisioning is in progress, which typically takes several minutes.
+ **ACTIVE**: The connector is ready for use and can transfer files.
+ **ERRORED**: The connector failed to provision. Check the error details for troubleshooting information.

Check the connector status using the `describe-connector` command:

```
aws transfer describe-connector --connector-id c-1234567890abcdef0
```

During the PENDING state, the `test-connection` API will return "Connector not available" until provisioning is complete.

## Limitations and considerations

+ **Public endpoints**: When connecting to public endpoints through VPC, you must provide a DNS name in the Resource Configuration. Public IP addresses are not supported.
+ **Regional availability**: VPC connectivity is available in select AWS Regions. Cross-region resource sharing is not supported.
+ **Availability Zone requirements**: Resource Gateways must be associated with subnets in at least two Availability Zones. Not all Availability Zones support VPC Lattice in every region.
+ **Connection limits**: Maximum of 350 connections per resource with a 350-second idle timeout for TCP connections.

## Cost considerations


There are no additional charges from AWS Transfer Family beyond regular service charges. However, you may incur additional charges from Amazon VPC Lattice associated with sharing your Amazon Virtual Private Cloud resources, and NAT gateway charges if you use your own NAT gateways for egress to the internet.

For complete AWS Transfer Family pricing information, see the [AWS Transfer Family pricing page](https://aws.amazon.com/aws-transfer-family/pricing/).

## VPC connectivity examples for SFTP connectors
VPC connectivity examples

This section provides examples of creating SFTP connectors with VPC connectivity for various scenarios. Before using these examples, ensure you have completed the VPC infrastructure setup as described in the VPC connectivity documentation.

### Example: Private endpoint connection


This example shows how to create an SFTP connector that connects to a private SFTP server accessible only from your VPC.

**Prerequisites**

1. Create a Resource Gateway in your VPC:

   ```
   aws vpc-lattice create-resource-gateway \
       --name my-private-server-gateway \
       --vpc-identifier vpc-1234567890abcdef0 \
       --subnet-ids subnet-1234567890abcdef0 subnet-0987654321fedcba0
   ```

1. Create a Resource Configuration for your private SFTP server:

   ```
   aws vpc-lattice create-resource-configuration \
       --name my-private-server-config \
       --resource-gateway-identifier rgw-1234567890abcdef0 \
       --resource-configuration-definition ipResource={ipAddress="10.0.1.100"} \
       --port-ranges 22
   ```

**Create the VPC Lattice-enabled connector**

1. Create the SFTP connector with VPC connectivity:

   ```
   aws transfer create-connector \
       --access-role arn:aws:iam::123456789012:role/TransferConnectorRole \
       --sftp-config UserSecretId=my-private-server-credentials,TrustedHostKeys="ssh-rsa AAAAB3NzaC..." \
       --egress-config 'VpcLattice={ResourceConfigurationArn=arn:aws:vpc-lattice:us-east-1:123456789012:resourceconfiguration/rcfg-1234567890abcdef0,PortNumber=22}'
   ```

1. Monitor the connector status until it becomes `ACTIVE`:

   ```
   aws transfer describe-connector --connector-id c-1234567890abcdef0
   ```

The remote SFTP server will see connections coming from the Resource Gateway's IP address within your VPC CIDR range.

### Example: Public endpoint via VPC


This example shows how to route connections to a public SFTP server through your VPC to leverage centralized security controls and use your own NAT Gateway IP addresses.

**Prerequisites**

1. Create a Resource Gateway in your VPC (same as private endpoint example).

1. Create a Resource Configuration for the public SFTP server using its DNS name:

   ```
   aws vpc-lattice create-resource-configuration \
       --name my-public-server-config \
       --resource-gateway-identifier rgw-1234567890abcdef0 \
       --resource-configuration-definition dnsResource={domainName="sftp.example.com"} \
       --port-ranges 22
   ```
**Note**  
For public endpoints, you must use a DNS name, not an IP address.

**Create the connector**
+ Create the SFTP connector:

  ```
  aws transfer create-connector \
      --access-role arn:aws:iam::123456789012:role/TransferConnectorRole \
      --sftp-config UserSecretId=my-public-server-credentials,TrustedHostKeys="ssh-rsa AAAAB3NzaC..." \
      --egress-config 'VpcLattice={ResourceConfigurationArn=arn:aws:vpc-lattice:us-east-1:123456789012:resourceconfiguration/rcfg-0987654321fedcba0,PortNumber=22}'
  ```

Traffic will flow from the connector to your Resource Gateway, then through your NAT Gateway to reach the public SFTP server. The remote server will see your NAT Gateway's Elastic IP address as the source.

### Example: Cross-account private endpoint


This example shows how to connect to a private SFTP server in a different AWS account by using resource sharing.

**Note**  
If you already have cross-VPC resource sharing enabled through other mechanisms, such as AWS Transit Gateway, you don't need to configure the resource sharing described here. The existing routing mechanisms, such as Transit Gateway route tables, are automatically used by SFTP connectors. You only need to create a Resource Configuration in the same account where you're creating the SFTP connector.

**Account A (Resource Provider) - Share the Resource Configuration**

1. Create Resource Gateway and Resource Configuration in Account A (same as previous examples).

1. Share the Resource Configuration with Account B using AWS Resource Access Manager:

   ```
   aws ram create-resource-share \
       --name cross-account-sftp-share \
       --resource-arns arn:aws:vpc-lattice:us-east-1:111111111111:resourceconfiguration/rcfg-1234567890abcdef0 \
       --principals 222222222222
   ```

**Account B (Resource Consumer) - Accept and Use the Share**

1. Accept the resource share invitation:

   ```
   aws ram accept-resource-share-invitation \
       --resource-share-invitation-arn arn:aws:ram:us-east-1:111111111111:resource-share-invitation/invitation-id
   ```

1. Create the SFTP connector in Account B:

   ```
   aws transfer create-connector \
       --access-role arn:aws:iam::222222222222:role/TransferConnectorRole \
       --sftp-config UserSecretId=cross-account-server-credentials,TrustedHostKeys="ssh-rsa AAAAB3NzaC..." \
       --egress-config 'VpcLattice={ResourceConfigurationArn=arn:aws:vpc-lattice:us-east-1:111111111111:resourceconfiguration/rcfg-1234567890abcdef0,PortNumber=22}'
   ```

The connector in Account B can now access the private SFTP server in Account A through the shared Resource Configuration.

### Common troubleshooting scenarios


Here are solutions for common issues when creating VPC Lattice-enabled connectors:
+ **Connector stuck in PENDING status**: Check that your Resource Gateway is ACTIVE and has subnets in supported Availability Zones. If the connector is still stuck with a status of PENDING, call `UpdateConnector` using the same configuration parameters that you used initially. This triggers a new status event that might resolve the problem.
+ **Connection timeouts**: Verify security group rules allow traffic on port 22 and that your VPC routing is correct.
+ **DNS resolution issues**: For public endpoints, ensure your VPC has internet connectivity through a NAT Gateway or Internet Gateway.
+ **Cross-account access denied**: Verify that the resource share has been accepted and that the Resource Configuration ARN is correct. The permission policy that the origin account attaches to the resource configuration when creating the resource share must include these permissions: `vpc-lattice:AssociateViaAWSService`, `vpc-lattice:AssociateViaAWSService-EventsAndStates`, `vpc-lattice:CreateServiceNetworkResourceAssociation`, `vpc-lattice:GetResourceConfiguration`.
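As a quick sanity check for the cross-account case, you can inspect the permission policy attached to the shared resource configuration for the four required actions. The helper below is a hypothetical sketch that operates on a parsed IAM-style policy document, not a Transfer Family or VPC Lattice API:

```python
REQUIRED_LATTICE_ACTIONS = {
    "vpc-lattice:AssociateViaAWSService",
    "vpc-lattice:AssociateViaAWSService-EventsAndStates",
    "vpc-lattice:CreateServiceNetworkResourceAssociation",
    "vpc-lattice:GetResourceConfiguration",
}

def missing_lattice_actions(policy_document):
    """Return the required VPC Lattice actions absent from a permission policy.

    policy_document is a parsed IAM-style policy (dict); only Allow statements
    with a plain Action string or list are considered.
    """
    granted = set()
    for statement in policy_document.get("Statement", []):
        if statement.get("Effect") != "Allow":
            continue
        actions = statement.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        granted.update(actions)
    return REQUIRED_LATTICE_ACTIONS - granted
```

An empty result means all four actions are granted; otherwise the returned set names the missing permissions.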

# Test an SFTP connector


After you create an SFTP connector, we recommend that you test it before you attempt to transfer any files using your new connector.

**To test an SFTP connector**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **SFTP Connectors**, and select a connector.

1. From the **Actions** menu, choose **Test connection**.  
![\[The Transfer Family console, showing an SFTP connector selected, and the Test connection action highlighted.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/connector-test-choose.png)

The system returns a message, indicating whether the test passes or fails. If the test fails, the system provides an error message based on the reason the test failed.

![\[The SFTP connector test connection panel, showing a successful test.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/connector-test-success.png)


![\[The SFTP connector test connection panel, showing a failed test: the error message indicates that the access role for the connector is incorrect.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/connector-test-fail-role.png)


**Note**  
To use the API to test your connector, see the [https://docs.aws.amazon.com/transfer/latest/APIReference/API_TestConnection](https://docs.aws.amazon.com/transfer/latest/APIReference/API_TestConnection) API documentation.

# VPC connectivity for SFTP connectors
VPC connectivity

AWS Transfer Family SFTP connectors support connectivity to remote SFTP servers through your VPC environments using Amazon VPC Lattice. This enables you to connect with privately hosted SFTP servers or route internet traffic through your VPC's security controls, and use your own NAT gateways and Elastic IP addresses.

**Egress types**

SFTP connectors can use one of two egress types:
+ **Service Managed** (default): The connector uses NAT gateways and IP addresses owned by AWS Transfer Family to route connections over the public internet.
+ **VPC Lattice**: The connector routes traffic through your VPC environment using cross-VPC resource access.

**When to use VPC connectivity**

Use VPC connectivity for SFTP connectors in these scenarios:
+ **Private SFTP servers**: Connect to SFTP servers that are only accessible from your VPC.
+ **On-premises connectivity**: Connect to on-premises SFTP servers through AWS Direct Connect or AWS Site-to-Site VPN connections.
+ **Custom IP addresses**: Use your own NAT gateways and Elastic IP addresses, including BYOIP scenarios.
+ **Centralized security controls**: Route file transfers through your organization's central ingress/egress controls.

**Requirements**

Before creating a VPC Lattice-enabled SFTP connector, you need:
+ VPC and related infrastructure (subnets, route tables, security groups)
+ Resource Gateway in your VPC (minimum two Availability Zones)
+ Resource Configuration specifying the target SFTP server

For detailed setup instructions, see [Create a VPC Lattice-enabled SFTP connector](create-vpc-sftp-connector-procedure.md#create-vpc-connector-procedure). For examples, see [VPC connectivity examples for SFTP connectors](create-vpc-sftp-connector-procedure.md#sftp-connectors-vpc-examples).

# Using SFTP connectors
Using SFTP connectors

This topic describes how to perform the supported file operations using your SFTP connector. You can also find example commands to perform these operations by selecting your connector's details on the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

After you have created an SFTP connector, you can use it to perform the following file operations on the remote SFTP server that it's associated with.
+ Send files from Amazon S3 to the remote SFTP server.
+ Retrieve files from the remote SFTP server to Amazon S3.
+ List files and sub-folders from a directory on the remote SFTP server.
+ Delete, rename or move files and directories on the remote SFTP server.

For details on creating connectors, see [Creating SFTP connectors](configure-sftp-connector.md).

**Topics**
+ [

# Transfer files
](transfer-files-and-track.md)
+ [

# List contents of a remote directory
](sftp-connector-list-dir.md)
+ [

# Move, rename, or delete files or directories on the remote server
](move-delete-remote-files.md)

# Transfer files


**Topics**
+ [

## Send and retrieve files by using an SFTP connector
](#send-retrieve-connector-details)

## Send and retrieve files by using an SFTP connector


To send and retrieve files by using an SFTP connector, you use the [https://docs.aws.amazon.com/transfer/latest/APIReference/API_StartFileTransfer.html](https://docs.aws.amazon.com/transfer/latest/APIReference/API_StartFileTransfer.html) API operation and specify the following parameters, depending on whether you're *sending files* (outbound transfers) or *receiving files* (inbound transfers). Note that each `StartFileTransfer` request can contain up to 10 distinct paths.

**Note**  
 By default, SFTP connectors process one file at a time, transferring files sequentially. You can accelerate transfer performance by having your connectors create concurrent sessions with remote servers that support concurrent sessions from the same user, processing up to 5 files in parallel.  
 To enable concurrent connections for any connector, edit the **Maximum concurrent connections** setting when creating or updating a connector. For details, see [Create an SFTP connector with service-managed egress](create-sftp-connector-procedure.md).
+ **Outbound transfers** 
  + `send-file-paths` contains from one to ten source file paths, for files to transfer to the partner's SFTP server.
  + `remote-directory-path` is the remote path where files are sent on the partner's SFTP server.
+ **Inbound transfers** 
  + `retrieve-file-paths` contains from one to ten remote file paths, for files to transfer from the partner's SFTP server to Amazon S3.
  + `local-directory-path` is the Amazon S3 location (bucket and optional prefix) where your files are stored.

To send files, you specify the `send-file-paths` and `remote-directory-path` parameters. You can specify up to 10 files for the `send-file-paths` parameter. The following example command sends the files named `/amzn-s3-demo-source-bucket/file1.txt` and `/amzn-s3-demo-source-bucket/file2.txt`, located in Amazon S3 storage, to the `/tmp` directory on your partner's SFTP server. To use this example command, replace the `amzn-s3-demo-source-bucket` with your own bucket.

```
aws transfer start-file-transfer --send-file-paths /amzn-s3-demo-source-bucket/file1.txt /amzn-s3-demo-source-bucket/file2.txt \
    --remote-directory-path /tmp --connector-id c-1111AAAA2222BBBB3 --region us-east-2
```

To retrieve files, you specify the `retrieve-file-paths` and `local-directory-path` parameters. The following example retrieves the files `/my/remote/file1.txt` and `/my/remote/file2.txt` from the partner's SFTP server, and places them in the Amazon S3 location `/amzn-s3-demo-bucket/prefix`. To use this example command, replace the `user input placeholders` with your own information.

```
aws transfer start-file-transfer --retrieve-file-paths /my/remote/file1.txt  /my/remote/file2.txt \
   --local-directory-path /amzn-s3-demo-bucket/prefix --connector-id c-2222BBBB3333CCCC4 --region us-east-2
```

The previous examples specify absolute paths on the SFTP server. You can also use relative paths: that is, paths that are relative to the SFTP user's home directory. For example, if the SFTP user is `marymajor` and their home directory on the SFTP server is `/users/marymajor/`, the following command sends `/amzn-s3-demo-source-bucket/file1.txt` to `/users/marymajor/test-connectors/file1.txt`.

```
aws transfer start-file-transfer --send-file-paths /amzn-s3-demo-source-bucket/file1.txt \
   --remote-directory-path test-connectors --connector-id c-2222BBBB3333CCCC4 --region us-east-2
```
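The path resolution rule used in these examples can be sketched as follows; `resolve_remote_path` is a hypothetical helper for illustration, not a Transfer Family API:

```python
import posixpath

def resolve_remote_path(home_directory, remote_directory_path):
    """Resolve a remote directory path: absolute paths are used as-is,
    while relative paths are joined to the SFTP user's home directory."""
    if posixpath.isabs(remote_directory_path):
        return remote_directory_path
    return posixpath.join(home_directory, remote_directory_path)
```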

# List contents of a remote directory
List contents of remote directories

Before you retrieve files from a remote SFTP server, you can retrieve the contents of a directory on the remote SFTP server. To do this, you use the [https://docs.aws.amazon.com/transfer/latest/APIReference/API_StartDirectoryListing.html](https://docs.aws.amazon.com/transfer/latest/APIReference/API_StartDirectoryListing.html) API operation.

The following example lists the contents of the `/home` directory on the remote SFTP server that is specified in the connector's configuration. The results are placed into the Amazon S3 location `/amzn-s3-demo-bucket/example/connector-files`, in a file named `c-AAAA1111BBBB2222C-6666abcd-11aa-22bb-cc33-0000aaaa3333.json`.

```
aws transfer start-directory-listing \
   --connector-id c-AAAA1111BBBB2222C \
   --output-directory-path /amzn-s3-demo-bucket/example/connector-files \
   --remote-directory-path /home
```

This AWS CLI command returns a listing ID and the name of the file that contains the results.

```
{
    "ListingId": "6666abcd-11aa-22bb-cc33-0000aaaa3333",
    "OutputFileName": "c-AAAA1111BBBB2222C-6666abcd-11aa-22bb-cc33-0000aaaa3333.json"
}
```

**Note**  
The naming convention for the output file is `connector-ID-listing-ID.json`.
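This convention can be expressed as a small helper that reconstructs the output file name from the two identifiers in the response (an illustrative sketch, not part of any SDK):

```python
def listing_output_file_name(connector_id, listing_id):
    """Build the directory-listing output file name: connector-ID-listing-ID.json."""
    return f"{connector_id}-{listing_id}.json"
```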

The JSON file contains the following information:
+ `filePath`: the complete path of a remote file, relative to the directory of the listing request for your SFTP connector on the remote server.
+ `modifiedTimestamp`: the last time the file was modified, in ISO 8601 Coordinated Universal Time (UTC) format. This field is optional. If the remote file attributes don't contain a timestamp, it is omitted from the file listing.
+ `size`: the size of the file, in bytes. This field is optional. If the remote file attributes don't contain a file size, it is omitted from the file listing.
+ `path`: the complete path of a remote directory, relative to the directory of the listing request for your SFTP connector on the remote server.
+ `truncated`: a flag indicating whether the list output contains all of the items contained in the remote directory or not. If your `truncated` output value is true, you can increase the value provided in the optional `max-items` input attribute to be able to list more items (up to the maximum allowed list size of 10,000 items).

The following is an example of the contents of the output file (`c-AAAA1111BBBB2222C-6666abcd-11aa-22bb-cc33-0000aaaa3333.json`), where the remote directory contains two files and two sub-directories (paths).

```
{
    "files": [
        {
            "filePath": "/home/what.txt",
            "modifiedTimestamp": "2024-01-30T20:34:54Z",
            "size" : 2323
        },
        {
            "filePath": "/home/how.pgp",
            "modifiedTimestamp": "2024-01-30T20:34:54Z",
            "size" : 4691
        }
    ],
    "paths": [
        {
            "path": "/home/magic"
        },
        {
            "path": "/home/aws"
        }
    ],
    "truncated": "false"
}
```
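After downloading the output file from Amazon S3, you can work with it as ordinary JSON. The following is a minimal Python sketch that summarizes a listing like the one above; the function name is illustrative:

```python
import json

def summarize_listing(listing_text):
    """Extract file paths, sub-directory paths, and the truncation flag
    from a directory-listing output file."""
    listing = json.loads(listing_text)
    files = [entry["filePath"] for entry in listing.get("files", [])]
    paths = [entry["path"] for entry in listing.get("paths", [])]
    # The flag is serialized as a string in the output file.
    truncated = str(listing.get("truncated", "false")).lower() == "true"
    return files, paths, truncated
```

If `truncated` comes back true, rerun the listing with a larger `max-items` value, as described above.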

# Move, rename, or delete files or directories on the remote server
Move and delete files on the remote server

**Topics**
+ [

## Move or rename files or directories on the remote SFTP server
](#move-remote-file)
+ [

## Delete files or directories on the remote SFTP server
](#delete-remote-file)

## Move or rename files or directories on the remote SFTP server
Move or rename files on remote server

You can use an SFTP connector to move or rename files and directories on a remote SFTP server. Note that the remote server needs to support these operations for successful processing using connectors.

Some common use cases are as follows.
+ A remote server generates or receives a new file every hour, with the same filename but a different timestamp. To keep the main folder up to date (so that it contains only the latest file), you can use a connector to move older files to an archived folder.
+ You use a connector to list all of the files in a remote directory, then transfer all of the files to your local storage. You can then use a connector to move the files to an archived folder on the remote server.

You must use a `StartRemoteMove` call for each file or directory you want to process, as the command takes a single source and destination file or directory as arguments. However, you can accelerate performance by having your connectors create concurrent sessions with remote servers that support concurrent sessions from the same user, and move/rename up to 5 files in parallel.

The following example moves a file on the remote SFTP server from `/source/folder/sourceFile` to `/destination/targetFile`, and returns a unique identifier for the operation.

```
aws transfer start-remote-move --connector-id c-AAAA1111BBBB2222C \
   --source-path /source/folder/sourceFile --target-path /destination/targetFile
```
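Because each call moves a single file or directory, a batch of moves can be fanned out from the client side. The sketch below assumes a `start_remote_move` callable that you would implement by wrapping the CLI command or SDK call shown above; the 5-worker default mirrors the per-connector parallelism described earlier:

```python
from concurrent.futures import ThreadPoolExecutor

def move_files(start_remote_move, moves, max_workers=5):
    """Issue one move call per (source_path, target_path) pair, running up to
    max_workers calls in parallel, and return the results in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(start_remote_move, src, dst) for src, dst in moves]
        return [future.result() for future in futures]
```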

**Note**  
For move and rename operations, Transfer Family uses the standard SFTP `SSH_FXP_RENAME` command.

## Delete files or directories on the remote SFTP server
Delete files on remote server

You can use an SFTP connector to delete files or directories on a remote SFTP server. Note that the remote server needs to support these operations for successful processing using connectors.

**Note**  
Delete operations for remote directories are only supported for empty directories.

Some common use cases are as follows.
+ You use a connector to retrieve a file from a remote SFTP server, store it in your Amazon S3 bucket, then encrypt it. Finally, you can use a connector to delete the unencrypted file on the remote server.
+ You use a connector to list all of the files in a remote directory, then transfer all of the files to your local storage. You can then use a connector to delete all of the files that you transferred. You could also delete the remote directory if you prefer.

You must use a `StartRemoteDelete` call for each file or directory you want to delete, as the command takes a single file or directory as an argument. However, you can accelerate performance by having your connectors create concurrent sessions with remote servers that support concurrent sessions from the same user, and delete up to 5 files/directories in parallel.

The following example deletes a file on the remote SFTP server in the path `/delete/folder/deleteFile`, and returns a unique identifier for the operation.

```
aws transfer start-remote-delete --connector-id c-AAAA1111BBBB2222C \
   --delete-path /delete/folder/deleteFile
```

**Note**  
For the delete operation, Transfer Family uses the standard `SSH_FXP_REMOVE` command to delete a file, and `SSH_FXP_RMDIR` to delete a directory.

# Monitoring SFTP connectors


You can monitor the status of your connector operations in any of the following ways. Choose the approach that meets your needs.

## Use the connector API to query the status of file transfer requests


To track the progress of a file transfer operation, you use the [https://docs.aws.amazon.com//transfer/latest/APIReference/API_ListFileTransferResults.html](https://docs.aws.amazon.com//transfer/latest/APIReference/API_ListFileTransferResults.html) API operation, which returns real-time updates and detailed information on the status of each individual file being transferred in a specific file transfer operation. You specify the file transfer by providing its connector ID and its transfer ID. The following example returns a list of files for connector ID `c-11112222333344444` and transfer ID `a1b2c3d4-5678-90ab-cdef-EXAMPLE11111`.

```
aws transfer list-file-transfer-results --connector-id c-11112222333344444 --transfer-id a1b2c3d4-5678-90ab-cdef-EXAMPLE11111
```

**Note**  
File transfer results are available for up to 7 days after the file transfer operation completes.

You can also view logs and events for your file transfer requests that use SFTP connectors. Amazon EventBridge events for Transfer Family are described in [SFTP connector events](events-detail-reference.md#event-detail-sftp-connector-events). For how to view Transfer Family CloudWatch log entries, see [Viewing Transfer Family log streams](view-log-entries.md).

## View SFTP connector events in Amazon EventBridge


For each operation performed by SFTP connectors, Transfer Family automatically generates and sends events to the default event bus in your Amazon EventBridge account. The events contain detailed metadata about the operation, including the operation status. You can subscribe to these events in EventBridge, apply filters on specific event criteria such as operation status, and automatically trigger downstream actions based on the status. For details on the events generated by SFTP connector operations, see [SFTP connector events](events-detail-reference.md#event-detail-sftp-connector-events).

## View SFTP connector logs in Amazon CloudWatch


All SFTP connector operations generate detailed logs in CloudWatch. For example log entries generated by SFTP connectors, see [Example log entries for SFTP connectors](cw-example-logs.md#example-sftp-connector-logs).

## Monitoring VPC egress type connectors


VPC egress type connectors provide additional monitoring capabilities and considerations beyond standard service managed connectors:

### Connector status monitoring


VPC Lattice-enabled connectors include additional information to help you monitor the provisioning and operational state:
+ **EgressType field**: Shows `VPC` for VPC Lattice-enabled connectors
+ **EgressConfig field**: Contains the Resource Configuration ARN and port information

Monitor connector status using the `describe-connector` API:

```
aws transfer describe-connector --connector-id c-1234567890abcdef0
```

### VPC Lattice cost monitoring


VPC egress type connectors incur additional VPC Lattice charges that you should monitor:
+ **Resource provider charges**: You are billed $0.006/GB for data processing as the resource provider (billed directly by VPC Lattice)
+ **Resource consumer charges**: AWS Transfer Family absorbs the $0.01/GB resource consumer costs (first 1 PB)
+ **NAT Gateway charges**: For public endpoints accessed via VPC, additional NAT Gateway and data transfer charges may apply
+ **Transfer Family charges**: Standard $0.40/GB data processing fees still apply

Monitor VPC Lattice usage and costs through the AWS Billing and Cost Management console, filtering by the VPC Lattice service.
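As a rough back-of-the-envelope model, the per-GB charges can be combined into a single estimate. The rates below are illustrative assumptions only and should be checked against the current AWS pricing pages before relying on them:

```python
def estimated_transfer_cost(gb_transferred,
                            transfer_family_rate=0.40,
                            lattice_provider_rate=0.006):
    """Estimate the combined per-GB charges for a VPC egress type connector.

    Default rates are assumptions for illustration only; resource consumer
    costs are absorbed by AWS Transfer Family, so they are not included.
    """
    return round(gb_transferred * (transfer_family_rate + lattice_provider_rate), 2)
```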

### Network monitoring for VPC connectors


Monitor network activity and performance for VPC egress type connectors:
+ **VPC Flow Logs**: Enable VPC Flow Logs to monitor network traffic patterns between Resource Gateways and SFTP servers
+ **VPC Lattice access logs**: VPC Lattice provides access logs showing source/destination IP addresses, connection timing, and data transfer volumes
+ **Security group monitoring**: Monitor security group rules and traffic patterns to ensure proper network access controls
+ **DNS resolution monitoring**: Monitor DNS resolution times and failures for service network endpoints

Example VPC Lattice access log entry:

```
{
  "eventTimestamp": "2025-01-16T20:59:08.531Z",
  "serviceNetworkArn": "arn:aws:vpc-lattice:us-east-1:123456789012:servicenetwork/sn-1234567890abcdef0",
  "sourceVpcArn": "arn:aws:ec2:us-east-1:123456789012:vpc/vpc-12345678",
  "resourceConfigurationArn": "arn:aws:vpc-lattice:us-east-1:123456789012:resourceconfiguration/rcfg-12345678",
  "protocol": "tcp",
  "sourceIpPort": "10.0.1.100:33760",
  "destinationIpPort": "10.0.2.200:22",
  "gatewayIpPort": "10.0.1.150:1769",
  "resourceIpPort": "10.0.2.200:22"
}
```
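For automated monitoring, entries like the one above can be parsed programmatically. The sketch below pulls out the fields most useful for troubleshooting; the field names are taken directly from the sample entry, and the abbreviated sample string here is illustrative.

```python
import json

# Extract troubleshooting fields from a VPC Lattice access log entry
# shaped like the example above.
def summarize_lattice_entry(raw: str) -> dict:
    entry = json.loads(raw)
    dst_ip, dst_port = entry["destinationIpPort"].rsplit(":", 1)
    return {
        "timestamp": entry["eventTimestamp"],
        "source": entry["sourceIpPort"].rsplit(":", 1)[0],
        "destination": dst_ip,
        "destination_port": int(dst_port),  # expect 22 for SFTP traffic
        "protocol": entry["protocol"],
    }

sample = (
    '{"eventTimestamp": "2025-01-16T20:59:08.531Z", "protocol": "tcp", '
    '"sourceIpPort": "10.0.1.100:33760", "destinationIpPort": "10.0.2.200:22"}'
)
print(summarize_lattice_entry(sample))
```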

### Troubleshooting through monitoring


Use monitoring data to troubleshoot common VPC connector issues:
+ **PENDING status**: Monitor DNS resolution progress and wait for ACTIVE status before attempting transfers
+ **Connection timeouts**: Check VPC Flow Logs and security group rules for blocked traffic on port 22
+ **Transfer failures**: Review CloudWatch logs for detailed error messages and VPC Lattice access logs for network-level issues
+ **Performance issues**: Monitor VPC Lattice access logs for connection timing and throughput metrics
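The wait-for-`ACTIVE` step in the first bullet can be scripted. The sketch below assumes a callable shaped like boto3's `describe_connector` response (a `Connector` object carrying the `Status` field described in this section); the describe function is injected so the loop can be exercised without AWS credentials, and the polling interval is an arbitrary choice.

```python
import time

def wait_for_active(describe_fn, connector_id, interval=15, max_attempts=40):
    """Poll a connector until it is ACTIVE; raise on ERRORED or timeout.

    describe_fn is any callable shaped like
    boto3.client("transfer").describe_connector, injected here so the
    loop can run without AWS credentials.
    """
    for _ in range(max_attempts):
        status = describe_fn(ConnectorId=connector_id)["Connector"]["Status"]
        if status == "ACTIVE":
            return status
        if status == "ERRORED":
            raise RuntimeError(f"connector {connector_id} is ERRORED")
        time.sleep(interval)
    raise TimeoutError(f"connector {connector_id} never became ACTIVE")
```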

# Managing SFTP connectors

This topic describes how to view and update SFTP connectors.

**Note**  
Each connector is automatically assigned static IP addresses that remain unchanged over the lifetime of the connector. This allows you to connect with remote SFTP servers that only accept inbound connections from known IP addresses. These static IP addresses are shared by all connectors using the same protocol (SFTP or AS2) in your AWS account.  
For VPC Lattice-enabled connectors, the remote SFTP server sees IP addresses from your VPC CIDR range instead of AWS Transfer Family service-managed IP addresses.

## Update SFTP connectors


To change the existing parameter values for your connectors, run the `update-connector` command. The following command updates the secret for the connector `connector-id` in the Region `region-id` to `secret-ARN`. To use this example command, replace the `user input placeholders` with your own information.

```
aws transfer update-connector --sftp-config '{"UserSecretId":"secret-ARN"}' \
   --connector-id connector-id --region region-id
```

### Updating VPC connectivity settings


You can update VPC connectivity settings for existing connectors, including switching between service-managed and VPC egress types or changing the Resource Configuration ARN.

To switch a connector from service-managed to VPC egress:

```
aws transfer update-connector \
   --connector-id connector-id \
   --egress-type VPC \
   --egress-config ResourceConfigurationArn=resource-configuration-arn
```

To update the Resource Configuration ARN for a VPC Lattice-enabled connector:

```
aws transfer update-connector \
   --connector-id connector-id \
   --egress-config ResourceConfigurationArn=new-resource-configuration-arn
```

**Note**  
When updating VPC connectivity settings, the connector status will change to `PENDING` during the reconfiguration process. Monitor the connector status using the `describe-connector` command.

## View SFTP connector details


You can find a list of details and properties for an SFTP connector in the AWS Transfer Family console.

**To view connector details**

1. Open the AWS Transfer Family console at [https://console.aws.amazon.com/transfer/](https://console.aws.amazon.com/transfer/).

1. In the left navigation pane, choose **Connectors**.

1. Choose the identifier in the **Connector ID** column to see the details page for the selected connector.

You can change the properties for the SFTP connector by choosing **Edit** on the connector details page.

### Monitoring VPC connector status


VPC Lattice-enabled connectors include additional status information to help you monitor the provisioning process:
+ **Status**: Shows `PENDING`, `ACTIVE`, or `ERRORED`
+ **EgressType**: Shows `VPC` or `SERVICE_MANAGED`
+ **EgressConfig**: Contains the Resource Configuration ARN for VPC connectors
+ **Error**: Provides detailed error information if the connector is in `ERRORED` state

For VPC connectors, the `ServiceManagedEgressIpAddresses` field will be null since traffic uses your VPC IP addresses instead.

**Note**  
You can get much of this information, albeit in a different format, by running the following AWS Command Line Interface (AWS CLI) command. To use this example command, replace the `user input placeholders` with your own information.   

```
aws transfer describe-connector --connector-id your-connector-id
```
For more information, see [https://docs.aws.amazon.com/transfer/latest/APIReference/API_DescribeConnector.html](https://docs.aws.amazon.com/transfer/latest/APIReference/API_DescribeConnector.html) in the API reference.

# Scaling and quotas for SFTP connectors

**Topics**
+ [

## Quotas for SFTP connectors
](#limits-sftp-connector)
+ [

## Scaling your SFTP connectors
](#scaling-sftp-connector)

## Quotas for SFTP connectors


The following quotas are in place for SFTP connectors.

**Note**  
More service quotas for SFTP connectors are listed in [AWS Transfer Family endpoints and quotas](https://docs.aws.amazon.com//general/latest/gr/transfer-service.html) in the *Amazon Web Services General Reference*.


**SFTP connector quotas**  

| Name | Default | Adjustable | 
| --- | --- | --- | 
| Maximum test connection transactions per second (TPS) | 1 request per second, per account | No | 
| Maximum queue size for pending file transfers | 1000 | No | 
| Maximum file size | 150 gibibytes (GiB) | No | 
| Maximum transfer time per file | 12 hours | No | 
| Maximum request wait time per file | 12 hours | No | 
| Maximum bandwidth for connectors per account (both SFTP and AS2 connectors contribute to this value) | 50 MBps | No | 
| Maximum number of items for directory listing operations | 10,000 | No | 

**Note**  
 By default, SFTP connectors process one file at a time, transferring files sequentially. To accelerate transfer performance, you can have your connectors open concurrent sessions with remote servers that allow multiple sessions from the same user, processing up to 5 files in parallel.  
 To enable concurrent connections for any connector, edit the **Maximum concurrent connections** setting when creating or updating a connector. For details, see [Create an SFTP connector with service-managed egress](create-sftp-connector-procedure.md). 

For storing the credentials for SFTP connectors, there are quotas associated with each Secrets Manager secret. If you use the same secret to store multiple types of keys for multiple purposes, you might encounter these quotas.
+ Total length for a single secret: 12,000 characters
+ Maximum length of the **Password** string: 1024 characters
+ Maximum length of the **PrivateKey** string: 8192 characters
+ Maximum length of the **Username** string: 100 characters
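A quick pre-flight check against these quotas can catch oversized credentials before you create the secret. The sketch below encodes the limits listed above; the `Username`/`Password`/`PrivateKey` key names follow this section, and the helper itself is illustrative.

```python
import json

# Validate connector credentials against the Secrets Manager quotas
# listed above before creating the secret.
FIELD_LIMITS = {"Username": 100, "Password": 1024, "PrivateKey": 8192}
MAX_SECRET_LENGTH = 12_000

def validate_connector_secret(fields: dict) -> list:
    """Return a list of quota violations; an empty list means the secret fits."""
    problems = [
        f"{key} exceeds {limit} characters"
        for key, limit in FIELD_LIMITS.items()
        if len(fields.get(key, "")) > limit
    ]
    if len(json.dumps(fields)) > MAX_SECRET_LENGTH:
        problems.append(f"total secret length exceeds {MAX_SECRET_LENGTH} characters")
    return problems
```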

## Scaling your SFTP connectors


This section describes considerations for scaling your AWS Transfer Family SFTP connector workloads. Take the following three quotas into account when you scale your workloads with SFTP connectors.
+ **The maximum queue size.** This refers to the maximum number of pending operations in a connector’s queue that have been requested. A pending operation refers to any previously submitted transfer request that has not yet completed, either successfully or unsuccessfully.

  The maximum queue depth for pending requests is currently set at 1,000 per connector (as defined in [AWS Transfer Family service quotas](https://docs.aws.amazon.com//general/latest/gr/transfer-service.html)). Your workloads may exceed this service limit when you request thousands of transfer operations over a short duration, and you will receive a `ThrottlingException` with the message `Exceeded maximum pending requests`. If your workloads are subject to this quota, contact the Transfer Family service team via AWS Support or your account team to discuss your scalability requirements.

  You can also take either or both of the following actions.
  + Distribute your file volumes across multiple connectors.
  +  Have your connectors create concurrent sessions with the remote server to process multiple requests from the queue in parallel.
+ **The number of concurrent sessions.** By default, an SFTP connector transfers one file at a time, transferring files sequentially from its queue.

  You have an option to accelerate transfer performance by having your connectors transfer multiple files in parallel. Connectors can open concurrent sessions with remote servers that allow multiple sessions from the same user, and process up to 5 files in parallel. Choose a value up to 5 for the **Maximum concurrent connections** setting when you create or update the connector. For details, see [Create an SFTP connector with service-managed egress](create-sftp-connector-procedure.md).
+ **The rate of `StartFileTransfer` requests.** You can request up to 100 file paths per second for transfer with each SFTP connector. The requested file paths are added to your connectors’ queue for processing. You can use the `StartFileTransfer` command recursively to request up to 100 file paths per second per connector, irrespective of the number of files provided in an individual `StartFileTransfer` command.
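The rate quota in the last bullet can be respected programmatically. The sketch below spreads a large list of outbound file paths across `StartFileTransfer` calls (the boto3 method `start_file_transfer` with `ConnectorId` and `SendFilePaths`); the batch size of 10 paths per call is an assumption to verify against the current API reference, and the client is injected so the logic can be tested without AWS credentials.

```python
import time

def queue_transfers(client, connector_id, paths, batch_size=10, paths_per_second=100):
    """Submit file paths in batches; return the number of paths requested.

    Sleeps between batches to stay under the per-connector rate of
    requested file paths described above.
    """
    sent = 0
    for i in range(0, len(paths), batch_size):
        batch = paths[i : i + batch_size]
        client.start_file_transfer(ConnectorId=connector_id, SendFilePaths=batch)
        sent += len(batch)
        if sent % paths_per_second == 0:
            time.sleep(1)  # throttle to the per-connector request rate
    return sent
```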

# Reference architectures using SFTP connectors

This section lists the reference materials that are available for configuring automated file transfer workflows using SFTP connectors. You can design your own event-driven architectures by using the SFTP connector events in Amazon EventBridge, to orchestrate between your file transfer action and pre- and post-processing actions in AWS.

## Blog posts


The following blog post provides a reference architecture to build an MFT workflow using SFTP connectors, including PGP encryption of files before they are sent to a remote SFTP server: [Architecting secure and compliant managed file transfers with AWS Transfer Family SFTP connectors and PGP encryption](https://aws.amazon.com/blogs//storage/architecting-secure-and-compliant-managed-file-transfers-with-aws-transfer-family-sftp-connectors-and-pgp-encryption/).

## Workshops

+ The following workshop provides hands-on labs for configuring SFTP connectors and using your connectors to send or retrieve files from remote SFTP servers: [Transfer Family - SFTP workshop](https://catalog.workshops.aws/transfer-family-sftp/en-US).
+ The following workshop provides hands-on labs to build fully automated and event-driven workflows involving file transfer to or from external SFTP servers to Amazon S3, and common pre- and post-processing of those files: [Event-driven MFT workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/e55c90e0-bbb0-47e1-be83-6bafa3a59a8a/en-US).

  This video provides a walkthrough of this workshop.  
[![AWS Videos](http://img.youtube.com/vi/https://www.youtube.com/embed/oojopisG4lA/0.jpg)](http://www.youtube.com/watch?v=https://www.youtube.com/embed/oojopisG4lA)

## Solutions


AWS Transfer Family provides the following solutions:
+ The [File transfer synchronization solution](https://github.com/aws-samples/file-transfer-sync-solution) provides a reference architecture to automate the process of syncing remote SFTP directories—including entire folder structures—with your local Amazon S3 buckets using an SFTP connector. It orchestrates the process of listing remote directories, detecting changes, and transferring new or modified files.
+ [Serverlessland - Selective file transfer between remote SFTP server & S3; using AWS Transfer Family](https://serverlessland.com/patterns/awstransfer-s3-sam?ref=search) provides a sample pattern for listing files stored on remote SFTP locations, and transferring selective files to Amazon S3.

## VPC reference architectures


The following reference architectures show common patterns for deploying VPC Lattice-enabled SFTP connectors. These examples help you understand where VPC Lattice resources need to be created in your overall AWS architecture.

### Single account with shared egress infrastructure


In this architecture, the egress infrastructure (NAT Gateway, VPN tunnel, or Direct Connect) is configured in a VPC within the same account as your SFTP connectors. All connectors can share the same Resource Gateway and NAT Gateway.

![\[Architecture diagram showing VPC Lattice-enabled SFTP connectors in a single account with shared egress infrastructure including NAT Gateway, Resource Gateway, and VPC Lattice components.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/vpc-customer-architecture-1.png)


This pattern is ideal when:
+ All SFTP connectors are managed within a single AWS account
+ Egress infrastructure is set up in a VPC within the same account as your SFTP connectors

### Cross-account with centralized egress infrastructure


In this architecture, the egress infrastructure (NAT Gateway, VPN tunnel, Direct Connect, or B2B Firewalls) is configured in a central Egress account managed by the networking team. SFTP connectors are created in the MFT Application account managed by the MFT admin team. Cross-account networking is established using Transit Gateway to honor existing networking rules.

![\[Architecture diagram showing VPC Lattice-enabled SFTP connectors in a cross-account setup with centralized egress infrastructure managed by a separate networking team account.\]](http://docs.aws.amazon.com/transfer/latest/userguide/images/vpc-customer-architecture-2.png)


This pattern is ideal when:
+ Network infrastructure is managed by a separate team in a dedicated account
+ You have existing routes (such as AWS Transit Gateway) between the account where SFTP connectors are created and the account where egress infrastructure is set up. SFTP connectors can use your existing routes connecting the two accounts.
+ Centralized security controls and B2B firewalls are required
+ You need to maintain separation of duties between networking and application teams