Troubleshooting
Known issue resolution provides instructions to mitigate known errors. If these instructions don’t address your issue, see the Contact AWS Support section for instructions on opening an AWS Support case for this solution.
Known issue resolution
Failed to upload data to the S3 bucket
Issue: Unable to upload new data
Reason: For security purposes, data upload permissions to the bucket are restricted to users with the data-admin role. Standard admin users do not have upload privileges.
Resolution:
- Go to the IAM console and find the role that ends with data-admin
- Switch to the data-admin role in that account
- Add the required data to the transformed S3 bucket
- Switch back to the main role
- Run the crawler to index the new data
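The same steps can be sketched with the AWS CLI. The role ARN, bucket, and crawler names below are hypothetical placeholders; substitute the ones generated for your deployment.

```shell
ROLE_ARN="arn:aws:iam::111122223333:role/sample-org-data-admin"
BUCKET="sample-org-transformed"
CRAWLER="sample-org-transformed-crawler"

if command -v aws >/dev/null 2>&1; then
  # Switch to the data-admin role. Note: assume-role prints temporary
  # credentials that must be exported (AWS_ACCESS_KEY_ID, etc.) before
  # the next commands run; the console "Switch Role" flow does this for you.
  aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name data-upload
  # Add the required data to the transformed bucket.
  aws s3 cp ./new-data.parquet "s3://$BUCKET/new-data/"
  # Run the crawler to index the new data.
  aws glue start-crawler --name "$CRAWLER" || true
else
  echo "AWS CLI not installed"
fi
```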
Data Science Configuration Deployment Failure
Issue: The deployment fails while deploying the basic_datascience configuration
Reason: Setting up a data science environment in SageMaker Studio requires a user profile with a unique name. This profile grants the user permission to access and launch SageMaker Studio resources.
Resolution:
- Modify the user profile name in datascience-team.yaml
- Change <my-own-data-science-profile-name> to a custom name that you can identify

```yaml
userProfiles:
  # The key/name of the user profile should be specified as follows:
  # If the Domain is in SSO auth mode, this should map to an SSO User ID.
  # If in IAM mode, this should map to the Session Name portion of the aws:userid variable.
  "<my-own-data-science-profile-name>":
    # Required if the domain is in IAM AuthMode. This is the role
    # from which the user will launch the user profile in Studio.
    # The role's id will be combined with the userid
    # to grant the user access to launch the user profile.
    userRole:
      id: generated-role-id:data-user
```
Lake Formation Data Lake Deployment Issues
Issue: The following error message appears:

```
Reading config from /Users/xxx/Documents/MDAA/config/lakeformation_datalake/datascience/datascience-team.yaml. Error: ENOENT: no such file or directory
```
Reason: Lake Formation expects a datascience-team.yaml file to create data science related configurations
Resolution:
- Create a folder named datascience inside the lakeformation_datalake folder
- Create a file named datascience-team.yaml inside that folder
- Add the sample configuration values below:
```yaml
# List of roles which will be provided admin access to the team resources
dataAdminRoles:
  - id: generated-role-id:data-admin
# List of roles which will be provided usage access to the team resources
# Can be either directly referenced Role Arns, Role Arns via SSM Params,
# or generated roles created using the MDAA roles module.
teamUserRoles:
  - id: generated-role-id:data-user
# The role which will be used to execute Team SageMaker resources
# (Studio Domain Apps, SageMaker Jobs/Pipelines, etc)
teamExecutionRole:
  id: generated-role-id:team-execution
# The team Studio Domain config
studioDomainConfig:
  authMode: IAM
  vpcId: "{{context:vpc_id}}"
  subnetIds:
    - "{{context:subnet_id}}"
  notebookSharingPrefix: sagemaker/notebooks/
# List of Studio user profiles which will be created.
userProfiles:
  # The key/name of the user profile should be specified as follows:
  # If the Domain is in SSO auth mode, this should map to an SSO User ID.
  # If in IAM mode, this should map to the Session Name portion of the aws:userid variable.
  "<my-own-data-science-profile-name>":
    # Required if the domain is in IAM AuthMode. This is the role
    # from which the user will launch the user profile in Studio.
    # The role's id will be combined with the userid
    # to grant the user access to launch the user profile.
    userRole:
      id: generated-role-id:data-user
```
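The folder and file creation steps can be sketched as shell commands, run from the MDAA config directory (the relative path matches the one in the error message):

```shell
# Create the expected folder and configuration file for the data science team.
mkdir -p lakeformation_datalake/datascience
touch lakeformation_datalake/datascience/datascience-team.yaml
# Paste the sample configuration above into the new file, then redeploy.
ls lakeformation_datalake/datascience
# → datascience-team.yaml
```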
Failed to resolve context: vpc_id
Issue: The following error message appears:

```
Error: Failed to resolve context: vpc_id
    at MdaaConfigRefValueTransformer.parseContext (/Users/xxx/Documents/MDAA/packages/utilities/mdaa-config/lib/config.ts:193:19)
    at /Users/xxxx/Documents/MDAA/packages/utilities/mdaa-config/lib/config.ts:165:38
```
Reason: The vpc_id and subnet_id context values are needed to create a secure data environment within the VPC
Resolution:
- Go to the mdaa.yaml file
- Check whether vpc_id is configured in the file
- The values below should go after organization in the config file
- Run the deployment again after the values are changed
```yaml
# TODO: Set an appropriate, unique organization name
# Failure to do so may result in global naming conflicts.
organization: trial-datalake-lk
context:
  vpc_id: vpc-00000090000
  subnet_id: subnet-09090909090
```
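If you are unsure which IDs to use, one way to list candidate VPC and subnet IDs is with the AWS CLI. This is a lookup sketch, not part of the deployment, and it assumes valid AWS credentials for the target account:

```shell
# List VPC and subnet IDs in the current account/region.
if command -v aws >/dev/null 2>&1; then
  aws ec2 describe-vpcs --query "Vpcs[].VpcId" --output text || true
  aws ec2 describe-subnets --query "Subnets[].SubnetId" --output text || true
else
  echo "AWS CLI not installed"
fi
```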
Generative AI Accelerator Deployment Failure
Issue: The Generative AI Accelerator fails to deploy
Error Message:

```
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[100%] fail: docker build --tag cdkasset-7a1e3989751f91a191cd33edf97f22ef63c06ad34f01895a7af11a3e32e3a97a . exited with error code 1
```
Resolution:
- Ensure Docker is installed and running
- Verify that the Docker daemon is active before deployment
- Check Docker configuration settings
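A quick pre-flight check, sketched for a POSIX shell (`docker info` exits non-zero whenever the daemon is unreachable):

```shell
# Verify the Docker daemon is reachable before running the deployment.
if docker info >/dev/null 2>&1; then
  echo "Docker daemon is running"
else
  echo "Docker daemon is not reachable: start Docker Desktop (macOS/Windows) or run 'sudo systemctl start docker' (Linux)"
fi
```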
Cross-Account Lake Formation Region Issues
Issue: Lake Formation cross-account access fails when regions differ between accounts
Reason: Lake Formation resource links and cross-account permissions require region alignment for proper functionality
Resolution:
- Ensure Lake Formation settings are configured in the same region across accounts
- Verify resource links are created in the correct region
- Check that IAM roles have permissions for the target region
- Update Lake Formation resource shares to include the correct region
- Redeploy affected modules after region alignment
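One way to spot a region mismatch is to compare the region each CLI profile resolves to. The profile names `producer` and `consumer` below are hypothetical placeholders for your two accounts:

```shell
# Print the configured region for each account's CLI profile.
if command -v aws >/dev/null 2>&1; then
  aws configure get region --profile producer || echo "profile 'producer' has no region configured"
  aws configure get region --profile consumer || echo "profile 'consumer' has no region configured"
else
  echo "AWS CLI not installed"
fi
```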
CLI and Configuration Errors
Issue: MDAA fails during synth or deploy with the following error:

```
DuplicateAccountLevelModulesException {
  duplicates: [ [ 'default/default', 'qs-account' ] ],
  message: 'Found account-level modules that will be deployed more than once'
}
```
Reason: Certain MDAA modules (such as qs-account, data-catalog) are designated as "account-level modules" - they should only be deployed once per AWS account/region combination. This error occurs when the same account-level module is configured in multiple environments that target the same AWS account.
Resolution:
- Review your mdaa.yaml to identify which environments share the same AWS account
- Ensure account-level modules only appear once per account/region:
  - Define the module in only ONE environment per account, OR
  - Use different AWS accounts for different domains/environments
- If you need the same functionality in multiple environments on the same account, the module only needs to be deployed once; other environments can reference the shared resources
Important Note:
The -e (environment) and -d (domain) CLI flags do NOT bypass this validation. MDAA validates the entire configuration file for consistency before any synthesis or deployment begins, regardless of which subset you intend to deploy. This is by design to prevent configuration conflicts.
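As a sketch of the first option, a hedged mdaa.yaml layout in which the account-level module appears only once for the shared account. The domain, environment, account, and module path names below are hypothetical, and the exact schema may differ between MDAA versions:

```yaml
organization: sample-org
domains:
  shared:
    environments:
      dev:
        account: "111122223333"
        modules:
          # Account-level module: defined only once for account 111122223333.
          qs-account:
            module_path: "@aws-mdaa/quicksight-account"
  analytics:
    environments:
      dev:
        account: "111122223333"
        # No second qs-account entry here; this environment reuses the
        # QuickSight account resources deployed by the shared domain.
        modules: {}
```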
VPC Endpoints Deployment with Bedrock Knowledge Base
Issue: VPC Endpoints fail to deploy when Bedrock Knowledge Base uses OpenSearch Serverless on different VPCs
Reason: VPC endpoint configuration conflicts when Knowledge Base and OpenSearch Serverless are deployed in separate VPCs
Resolution:
- Ensure Bedrock Knowledge Base and OpenSearch Serverless are in the same VPC
- If separate VPCs are required, configure VPC peering:
  - Create a VPC peering connection between the VPCs
  - Update route tables to allow traffic between the VPCs
  - Update security groups to allow the necessary traffic
- Verify VPC endpoint service names are correct for your region
- Check that subnet configurations allow VPC endpoint creation
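The peering sub-steps can be sketched with the AWS CLI. All IDs below are hypothetical placeholders; the route table, peering connection, and security group IDs are left as `...` because they only exist after the previous step runs:

```shell
REQUESTER_VPC="vpc-0aaa0aaa0aaa0aaa0"
ACCEPTER_VPC="vpc-0bbb0bbb0bbb0bbb0"

if command -v aws >/dev/null 2>&1; then
  # Create the peering connection between the two VPCs.
  aws ec2 create-vpc-peering-connection \
    --vpc-id "$REQUESTER_VPC" --peer-vpc-id "$ACCEPTER_VPC" || true
  # After accepting the connection from the accepter side:
  # - add a route in each VPC's route tables toward the peer CIDR:
  #     aws ec2 create-route --route-table-id rtb-... \
  #       --destination-cidr-block 10.1.0.0/16 \
  #       --vpc-peering-connection-id pcx-...
  # - open the security groups for the required traffic:
  #     aws ec2 authorize-security-group-ingress --group-id sg-... \
  #       --protocol tcp --port 443 --cidr 10.1.0.0/16
else
  echo "AWS CLI not installed"
fi
```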
Alternative Approach:
- Deploy the Knowledge Base and OpenSearch Serverless in the same VPC
- Use private subnets for both services
- Configure security groups to allow communication between the services
Additional Notes:
- Ensure you’re using the latest version of the solution; known issues may already be resolved in newer releases