Using EMR Serverless with AWS Lake Formation for fine-grained access control
Overview
With Amazon EMR releases 7.2.0 and higher, you can use AWS Lake Formation to apply fine-grained access controls on Data Catalog tables that are backed by Amazon S3. This capability lets you configure table-, row-, column-, and cell-level access controls for read queries within your Amazon EMR Serverless Spark jobs. To configure fine-grained access control for Apache Spark batch jobs and interactive sessions, use EMR Studio. See the following sections to learn more about Lake Formation and how to use it with EMR Serverless.
Using Amazon EMR Serverless with AWS Lake Formation incurs additional charges. For more information, refer to Amazon EMR pricing.
How EMR Serverless works with AWS Lake Formation
Using EMR Serverless with Lake Formation adds a permissions-enforcement layer to each Spark job, so that Lake Formation access controls are applied when EMR Serverless runs the job.
EMR Serverless uses Spark resource profiles
When you use pre-initialized capacity with Lake Formation, we recommend a minimum of two Spark drivers. Each Lake Formation-enabled job uses two Spark drivers: one for the user profile and one for the system profile. For the best performance, provision double the number of drivers for Lake Formation-enabled jobs compared to jobs that don't use Lake Formation.
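As an illustration, a pre-initialized capacity configuration with two drivers might look like the following. The worker counts and sizes shown are placeholders for this sketch, not recommendations:

```shell
aws emr-serverless create-application \
  --release-label emr-7.2.0 \
  --type "SPARK" \
  --initial-capacity '{
    "DRIVER": {
      "workerCount": 2,
      "workerConfiguration": {"cpu": "4vCPU", "memory": "16GB"}
    },
    "EXECUTOR": {
      "workerCount": 10,
      "workerConfiguration": {"cpu": "4vCPU", "memory": "16GB"}
    }
  }'
```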
When you run Spark jobs on EMR Serverless, also consider the impact of dynamic allocation on resource management and cluster performance. The spark.dynamicAllocation.maxExecutors configuration, which sets the maximum number of executors per resource profile, applies to both user and system executors. If you configure that number to be equal to the maximum allowed number of executors, your job run might get stuck because one type of executor can use all available resources, which prevents the other profile's executors from launching.
To help you avoid running out of resources, EMR Serverless sets the default maximum number of executors per resource profile to 90% of the spark.dynamicAllocation.maxExecutors value. You can override this default by setting spark.dynamicAllocation.maxExecutorsRatio to a value between 0 and 1. Additionally, configure the following properties to optimize resource allocation and overall performance:
- spark.dynamicAllocation.cachedExecutorIdleTimeout
- spark.dynamicAllocation.shuffleTracking.timeout
- spark.cleaner.periodicGC.interval
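For example, the properties above can be passed as spark-submit parameters when you start a job run. The timeout and ratio values shown here are illustrative placeholders, not tuned recommendations:

```shell
--conf spark.dynamicAllocation.maxExecutorsRatio=0.5 \
--conf spark.dynamicAllocation.cachedExecutorIdleTimeout=60s \
--conf spark.dynamicAllocation.shuffleTracking.timeout=120s \
--conf spark.cleaner.periodicGC.interval=10min
```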
The following is a high-level overview of how EMR Serverless gets access to data protected by Lake Formation security policies.
1. A user submits a Spark job to an AWS Lake Formation-enabled EMR Serverless application.
2. EMR Serverless sends the job to a user driver and runs the job in the user profile. The user driver runs a lean version of Spark that has no ability to launch tasks, request executors, or access Amazon S3 or the AWS Glue Data Catalog. It builds a job plan.
3. EMR Serverless sets up a second driver called the system driver and runs it in the system profile with a privileged identity. EMR Serverless sets up an encrypted TLS channel between the two drivers for communication. The user driver uses the channel to send the job plan to the system driver. The system driver does not run user-submitted code. It runs full Spark and communicates with Amazon S3 and the Data Catalog for data access. It requests executors and compiles the job plan into a sequence of execution stages.
4. EMR Serverless then runs the stages on executors associated with the user driver or the system driver. User code in any stage runs exclusively on user profile executors.
5. Stages that read data from Data Catalog tables protected by AWS Lake Formation, or that apply security filters, are delegated to system executors.
Enabling Lake Formation in Amazon EMR
To enable Lake Formation, set spark.emr-serverless.lakeformation.enabled
to true under spark-defaults classification for the
runtime-configuration parameter when creating an EMR Serverless application.
aws emr-serverless create-application \
  --release-label emr-7.12.0 \
  --runtime-configuration '{
    "classification": "spark-defaults",
    "properties": {
      "spark.emr-serverless.lakeformation.enabled": "true"
    }
  }' \
  --type "SPARK"
You can also enable Lake Formation when you create a new application in EMR Studio. Choose Use Lake Formation for fine-grained access control, available under Additional configurations.
Inter-worker encryption is enabled by default when you use Lake Formation with EMR Serverless, so you do not need to explicitly enable inter-worker encryption again.
Enabling Lake Formation for Spark jobs
To enable Lake Formation for individual Spark jobs, set spark.emr-serverless.lakeformation.enabled to true in the spark-submit parameters when you start the job run.
--conf spark.emr-serverless.lakeformation.enabled=true
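A complete job submission with that parameter might look like the following sketch. The application ID, execution role ARN, and script path are placeholders:

```shell
aws emr-serverless start-job-run \
  --application-id <application-id> \
  --execution-role-arn <job-runtime-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://amzn-s3-demo-bucket/scripts/query.py",
      "sparkSubmitParameters": "--conf spark.emr-serverless.lakeformation.enabled=true"
    }
  }'
```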
Job runtime role IAM permissions
Lake Formation permissions control access to AWS Glue Data Catalog resources, Amazon S3 locations, and the
underlying data at those locations. IAM permissions control access to the Lake Formation and
AWS Glue APIs and resources. Although you might have the Lake Formation permission to access a table
in the Data Catalog (SELECT), your operation fails if you don’t have the IAM permission on
the glue:Get* API operation.
The following is an example policy that provides IAM permissions to access a script in Amazon S3, upload logs to Amazon S3, call AWS Glue APIs, and access Lake Formation.
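A minimal sketch of such a policy follows. The bucket name and path prefixes are placeholders; scope the resources down further for production use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ScriptAndLogAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/scripts/*",
        "arn:aws:s3:::amzn-s3-demo-bucket/logs/*"
      ]
    },
    {
      "Sid": "GlueCatalogAccess",
      "Effect": "Allow",
      "Action": ["glue:Get*"],
      "Resource": ["*"]
    },
    {
      "Sid": "LakeFormationDataAccess",
      "Effect": "Allow",
      "Action": ["lakeformation:GetDataAccess"],
      "Resource": ["*"]
    }
  ]
}
```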
Setting up Lake Formation permissions for job runtime role
First, register the location of your Hive table with Lake Formation. Then create permissions for your job runtime role on your desired table. For more details about Lake Formation, refer to What is AWS Lake Formation? in the AWS Lake Formation Developer Guide.
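The two setup steps above can be sketched with the AWS CLI as follows. The bucket path, account ID, role name, database, and table names are placeholders:

```shell
# Register the S3 location of the table with Lake Formation
aws lakeformation register-resource \
  --resource-arn arn:aws:s3:::amzn-s3-demo-bucket/warehouse/ \
  --use-service-linked-role

# Grant the job runtime role SELECT on the table
aws lakeformation grant-permissions \
  --principal DataLakePrincipalIdentifier=arn:aws:iam::111122223333:role/job-runtime-role \
  --permissions SELECT \
  --resource '{"Table": {"DatabaseName": "default", "Name": "my_table"}}'
```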
After you set up the Lake Formation permissions, submit Spark jobs on Amazon EMR Serverless. For more information about Spark jobs, refer to Spark examples.
Submitting a job run
After you finish setting up the Lake Formation grants, you can submit Spark jobs on EMR Serverless. The section that follows shows examples of how to configure and submit job run properties.
Permission requirements
Tables not registered in AWS Lake Formation
For tables not registered with AWS Lake Formation, the job runtime role accesses both the AWS Glue Data Catalog and the underlying table data in Amazon S3. This requires the job runtime role to have appropriate IAM permissions for both AWS Glue and Amazon S3 operations.
Tables registered in AWS Lake Formation
For tables registered with AWS Lake Formation, the job runtime role accesses the AWS Glue Data Catalog metadata, while temporary credentials vended by Lake Formation access the underlying table data in Amazon S3. The Lake Formation permissions required to execute an operation depend on the AWS Glue Data Catalog and Amazon S3 API calls that the Spark job initiates and can be summarized as follows:
- DESCRIBE permission allows the runtime role to read table or database metadata in the Data Catalog
- ALTER permission allows the runtime role to modify table or database metadata in the Data Catalog
- DROP permission allows the runtime role to delete table or database metadata from the Data Catalog
- SELECT permission allows the runtime role to read table data from Amazon S3
- INSERT permission allows the runtime role to write table data to Amazon S3
- DELETE permission allows the runtime role to delete table data from Amazon S3
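As an example of a fine-grained SELECT grant, you can restrict the runtime role to specific columns by granting on a TableWithColumns resource. The account ID, role, database, table, and column names are placeholders:

```shell
aws lakeformation grant-permissions \
  --principal DataLakePrincipalIdentifier=arn:aws:iam::111122223333:role/job-runtime-role \
  --permissions SELECT \
  --resource '{"TableWithColumns": {"DatabaseName": "default", "Name": "my_table", "ColumnNames": ["id", "name"]}}'
```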
Note
Lake Formation evaluates permissions lazily when a Spark job calls AWS Glue to retrieve table metadata and Amazon S3 to retrieve table data. Jobs that use a runtime role with insufficient permissions will not fail until Spark makes an AWS Glue or Amazon S3 call that requires the missing permission.
Note
In the following supported table matrix:
- Operations marked as Supported exclusively use Lake Formation credentials to access table data for tables registered with Lake Formation. If Lake Formation permissions are insufficient, the operation will not fall back to runtime role credentials. For tables not registered with Lake Formation, the job runtime role credentials access the table data.
- Operations marked as Supported with IAM permissions on Amazon S3 location do not use Lake Formation credentials to access underlying table data in Amazon S3. To run these operations, the job runtime role must have the necessary Amazon S3 IAM permissions to access the table data, regardless of whether the table is registered with Lake Formation.