

# Indicators for functional testing


Assess specific functionalities of systems to ensure they operate correctly and meet predefined requirements.

**Topics**
+ [[QA.FT.1] Ensure individual component functionality with unit tests](qa.ft.1-ensure-individual-component-functionality-with-unit-tests.md)
+ [[QA.FT.2] Validate system interactions and data flows with integration tests](qa.ft.2-validate-system-interactions-and-data-flows-with-integration-tests.md)
+ [[QA.FT.3] Confirm end-user experience and functional correctness with acceptance tests](qa.ft.3-confirm-end-user-experience-and-functional-correctness-with-acceptance-tests.md)
+ [[QA.FT.4] Balance developer feedback and test coverage using advanced test selection](qa.ft.4-balance-developer-feedback-and-test-coverage-using-advanced-test-selection.md)

# [QA.FT.1] Ensure individual component functionality with unit tests


 **Category:** FOUNDATIONAL 

 Unit tests evaluate the functionality of individual parts of an application, called *units*. The goal of unit tests is to provide fast, thorough feedback while reducing the risk of introducing flaws when making changes. This feedback is accomplished by writing test cases that cover a sufficient amount of the code. These test cases run the code using predefined inputs and set expectations for a specific output. 

 Unit tests should be isolated to a single class, function, or method within the code. Fakes or mocks are used in place of external or infrastructure components to help ensure that the scope is isolated. These tests should be fast, repeatable, and provide assertions that lead to a pass or fail outcome. Teams should be able to run unit tests locally as well as through continuous integration pipelines. 
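The pattern described above can be sketched with Python's standard `unittest` framework. The `OrderCalculator` class and its pricing service are hypothetical names invented for this illustration; a mock stands in for the external dependency so the test stays isolated, fast, and repeatable.

```python
import unittest
from unittest.mock import Mock

# Hypothetical unit under test: computes an order total using a
# pricing service that would normally call an external API.
class OrderCalculator:
    def __init__(self, pricing_service):
        self.pricing_service = pricing_service

    def total(self, items):
        return sum(self.pricing_service.price_of(item) for item in items)

class OrderCalculatorTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # A mock replaces the external pricing service, keeping the
        # test's scope isolated to the OrderCalculator unit.
        pricing = Mock()
        pricing.price_of.side_effect = lambda item: {"apple": 2, "pear": 3}[item]

        calculator = OrderCalculator(pricing)

        # Predefined input, expected output: a clear pass/fail assertion.
        self.assertEqual(calculator.total(["apple", "pear"]), 5)
```

A test like this runs in milliseconds with `python -m unittest`, so it works equally well locally and in a continuous integration pipeline.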

 Ideally, teams adopt [Test-Driven Development (TDD)](https://www.agilealliance.org/glossary/tdd/) practices and write tests before the software is developed. This approach can lead to faster feedback, more effective tests, and fewer defects introduced when writing code. 

**Related information:**
+  [AWS Well-Architected Reliability Pillar: REL12-BP03 Test functional requirements](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_testing_resiliency_test_functional.html) 
+  [Building hexagonal architectures on AWS - Write and run tests from the beginning](https://docs.aws.amazon.com/prescriptive-guidance/latest/hexagonal-architectures/best-practices.html) 
+  [AWS Deployment Pipeline Reference Architecture](https://aws-samples.github.io/aws-deployment-pipeline-reference-architecture/application-pipeline/index.html) 
+  [Testing software and systems at Amazon: Unit tests](https://youtu.be/o1sc3cK9bMU?t=930) 
+  [Adopt a test-driven development approach using AWS CDK](https://docs.aws.amazon.com/prescriptive-guidance/latest/best-practices-cdk-typescript-iac/development-best-practices.html) 
+  [Getting started with testing serverless applications](https://aws.amazon.com/blogs/compute/getting-started-with-testing-serverless-applications/) 
+  [TestDouble](https://martinfowler.com/bliki/TestDouble.html) 

# [QA.FT.2] Validate system interactions and data flows with integration tests


 **Category:** FOUNDATIONAL 

 Integration tests evaluate the interactions between multiple components that make up the system, including infrastructure and external systems. The goal of integration testing is to help ensure that these interactions and data flows work together, ensuring that recent changes have not disrupted any interfaces or introduced undesired behaviors. 

 Integration tests often run much slower than unit tests because they interact with real systems, such as databases, message queues, and external APIs. Strive to make integration tests as efficient as possible by optimizing setup and teardown using automation and infrastructure as code (IaC). Optimize test execution by running tests in parallel where possible. This allows for quicker feedback loops and makes it possible to run integration tests through continuous integration pipelines. 

 While integration tests should involve real components, they should still be isolated from production or shared environments where possible. This helps ensure that tests do not inadvertently affect real data or services. Consider using dedicated emulation, containers, or cloud-based test environments to make tests more efficient, consistent, and safe. 
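As a minimal sketch of this idea, the test below exercises a hypothetical `TaskRepository` against a real relational database engine (an in-memory SQLite instance) rather than a mock, while staying fully isolated from shared environments. The class and schema are invented for illustration; in practice the same structure applies to containers or dedicated cloud test environments.

```python
import sqlite3
import unittest

# Hypothetical repository under test: persists tasks in a real
# relational database instead of a mocked one.
class TaskRepository:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, title TEXT)"
        )

    def add(self, title):
        cur = self.conn.execute("INSERT INTO tasks (title) VALUES (?)", (title,))
        return cur.lastrowid

    def find(self, task_id):
        row = self.conn.execute(
            "SELECT title FROM tasks WHERE id = ?", (task_id,)
        ).fetchone()
        return row[0] if row else None

class TaskRepositoryIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Automated setup: each test gets a fresh, isolated database,
        # never a production or shared environment.
        self.conn = sqlite3.connect(":memory:")
        self.repo = TaskRepository(self.conn)

    def tearDown(self):
        # Automated teardown keeps runs repeatable.
        self.conn.close()

    def test_round_trip_through_real_database(self):
        task_id = self.repo.add("write docs")
        self.assertEqual(self.repo.find(task_id), "write docs")
```

Because setup and teardown are automated, the same test can run in parallel workers and in a continuous integration pipeline without cross-test interference.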

 Just as with unit tests, adopting [Test-Driven Development (TDD)](https://www.agilealliance.org/glossary/tdd/) by writing tests before the software is developed helps to highlight potential integration pain points early, and verifies that the interfaces between components are correctly implemented from the start. 

**Related information:**
+  [AWS Well-Architected Reliability Pillar: REL12-BP03 Test functional requirements](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_testing_resiliency_test_functional.html) 
+  [AWS Deployment Pipeline Reference Architecture](https://aws-samples.github.io/aws-deployment-pipeline-reference-architecture/application-pipeline/index.html) 
+  [Getting started with testing serverless applications](https://aws.amazon.com/blogs/compute/getting-started-with-testing-serverless-applications/) 
+  [Amazon's approach to high-availability deployment: Integration testing](https://youtu.be/bCgD2bX1LI4?t=1480) 
+  [Building hexagonal architectures on AWS - Write and run tests from the beginning](https://docs.aws.amazon.com/prescriptive-guidance/latest/hexagonal-architectures/best-practices.html) 

# [QA.FT.3] Confirm end-user experience and functional correctness with acceptance tests


 **Category:** FOUNDATIONAL 

 Acceptance tests evaluate the observable functional behavior of the system from the perspective of the end user in a production-like environment. These tests encompass functional correctness of user interfaces, general application behavior, and ensuring that user interface elements lead to expected user experiences. 

 By considering all facets of user interactions and expectations, acceptance testing provides a comprehensive evaluation of an application's readiness for production deployment. There are various forms of functional acceptance tests which should be used throughout the development lifecycle: 
+  **End-To-End (E2E) Testing:** Acceptance tests performed by the development team through delivery pipelines to validate integrated components and user flows. Begin by identifying the most impactful user flows and create test cases for them. Ideally, teams practice [Behavior-Driven Development (BDD)](https://www.agilealliance.org/glossary/bdd/) to define how the system will be tested before code is written. Next, adopt a suitable automated testing framework, such as [AWS Device Farm](https://aws.amazon.com/device-farm/) or [Selenium](https://www.selenium.dev/documentation/). Using the continuous delivery pipeline, trigger the testing tool to run scripted test cases against the system while it is running in the test environment. 
+  **User Acceptance Testing (UAT):** Acceptance tests performed by external end-users of the system to validate that the system aligns with business needs and requirements. The users measure the application against defined acceptance criteria by interacting with the system and providing feedback based on whether the system behaves as expected. The development team engages, instructs, and supports these users as they test the system. Log the results of the test by gathering feedback from the users, using the acceptance criteria as a guide. Feedback should highlight areas where the system met or exceeded expectations as well as areas where the system did not meet expectations. 
+  **Synthetic Testing:** Continuously run simulations of user behavior in a live testing environment to proactively spot issues. Define the metrics you want to test, such as response times or error rates. Choose a preferred tool that integrates well with your desired programming tools and frameworks. Write automated test scripts which simulate user interactions against the user interface and APIs of the system. These scripts should be regularly run by the synthetic testing tool in the testing environment. Synthetic tests can also be used to perform continuous application performance monitoring in production environments for observability purposes. 
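The synthetic-testing step above can be sketched as a small check that records the metrics the team defined, here latency and error outcome. The `checkout_endpoint` function is a hypothetical stand-in for a deployed API; a real canary (for example, one scheduled by Amazon CloudWatch Synthetics) would issue network calls against the live test environment instead.

```python
import time

# Hypothetical endpoint to probe; a real synthetic test would call
# the deployed user interface or API over the network.
def checkout_endpoint(cart):
    if not cart:
        raise ValueError("empty cart")
    return {"status": "ok", "items": len(cart)}

def run_synthetic_check(endpoint, payload, max_latency_s=0.5):
    """Simulate one user interaction and record the defined metrics:
    response time and error outcome."""
    start = time.monotonic()
    try:
        endpoint(payload)
        error = False
    except Exception:
        error = True
    latency = time.monotonic() - start
    return {
        "latency_s": latency,
        "error": error,
        "healthy": not error and latency <= max_latency_s,
    }
```

A scheduler runs checks like this continuously against the testing environment, and the same scripts can double as production canaries for observability.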

**Related information:**
+  [AWS Well-Architected Performance Pillar: PERF01-BP06 Benchmark existing workloads](https://docs.aws.amazon.com/wellarchitected/latest/performance-efficiency-pillar/perf_performing_architecture_benchmark.html) 
+  [AWS Well-Architected Reliability Pillar: REL12-BP03 Test functional requirements](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_testing_resiliency_test_functional.html) 
+  [Behavior Driven Development (BDD)](https://www.agilealliance.org/glossary/bdd/) 
+  [Amazon CloudWatch Synthetics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html) 
+  [AWS Deployment Pipeline Reference Architecture](https://aws-samples.github.io/aws-deployment-pipeline-reference-architecture/application-pipeline/index.html) 
+  [Getting started with testing serverless applications](https://aws.amazon.com/blogs/compute/getting-started-with-testing-serverless-applications/) 

# [QA.FT.4] Balance developer feedback and test coverage using advanced test selection

 **Category:** OPTIONAL 

 In traditional software models, regression testing was a distinct form of functional testing, designed to ensure that new code integrations did not disrupt existing system functionalities. In a DevOps model, the perspective changes: regression testing is no longer a separate, human-driven testing activity. Instead, every change triggers automated pipelines that conduct a new cycle of tests, making each pipeline execution effectively a *regression test*. As systems become more complex over time, so do their test suites. Running all tests every time a change is made can become time-consuming and inefficient as test suites grow, slowing down the development feedback loop. 

 Before choosing to implement advanced test selection methods using machine learning (ML), you should first optimize test execution through parallelization, removing stale or ineffective tests, improving the infrastructure the tests run on, and reordering tests to optimize for faster feedback. If these methods do not produce sufficient outcomes, there are algorithmic and ML methods that provide advanced test selection capabilities. 

 Test Impact Analysis (TIA) offers a structured approach to advanced test selection. By examining the differences in the codebase, TIA determines the tests that are most likely to be affected by the recent changes. This results in running only a relevant subset of the entire test suite, ensuring efficiency without the need for machine learning models. 
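A minimal sketch of the TIA idea: given a map from tests to the source modules they exercise (in practice derived from coverage data or static import analysis), select only the tests whose modules overlap the changed files. All file names here are hypothetical.

```python
# Hypothetical dependency map from tests to the source modules
# they cover; real TIA tooling derives this from coverage data.
TEST_DEPENDENCIES = {
    "test_orders.py": {"orders.py", "pricing.py"},
    "test_billing.py": {"billing.py", "pricing.py"},
    "test_search.py": {"search.py"},
}

def select_impacted_tests(changed_files, dependencies=TEST_DEPENDENCIES):
    """Return only the tests whose covered modules overlap the change."""
    changed = set(changed_files)
    return sorted(
        test for test, sources in dependencies.items()
        if sources & changed
    )
```

For example, a change to `pricing.py` selects the orders and billing tests but skips the search tests entirely, with no ML model involved.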

 Predictive test selection is an evolving approach to test selection which uses ML models trained on historical code changes and test outcomes to determine how likely a test is to reveal errors based on the change. This produces a subset of tests, tailored to the specific change, that are most likely to detect regressions. Predictive test selection strikes a balance between providing faster feedback to developers and thorough test coverage. 
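To make the idea concrete, the sketch below ranks tests by their historical failure rate when given files changed. A production system would train an ML model on this history; a simple frequency score stands in for one here, and all file names and rates are invented for illustration.

```python
# Hypothetical history: observed failure rate of each test when a
# given source file changed. A real system would feed richer
# features like this into a trained ML model.
FAILURE_HISTORY = {
    ("test_orders.py", "pricing.py"): 0.42,
    ("test_orders.py", "orders.py"): 0.90,
    ("test_billing.py", "pricing.py"): 0.65,
    ("test_search.py", "pricing.py"): 0.01,
}

def rank_tests(changed_files, history=FAILURE_HISTORY, threshold=0.1):
    """Score each test by its historical likelihood of catching a
    regression for these files, then keep those above a threshold."""
    scores = {}
    for (test, source), rate in history.items():
        if source in changed_files:
            scores[test] = max(scores.get(test, 0.0), rate)
    return [
        test for test, score in sorted(scores.items(), key=lambda kv: -kv[1])
        if score >= threshold
    ]
```

The threshold is the coverage trade-off: lowering it runs more tests for higher confidence, raising it gives faster feedback at greater risk of missing a regression.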

 Using ML for this purpose introduces a level of uncertainty into the quality assurance process. If you do choose to implement predictive test selection, we recommend putting additional controls in place, including: 
+  Add manual approval stages that require developers to assess and accept the level of tests that will be run before they run. These manual approvals allow the team to decide if the test coverage trade-off makes sense and to accept the risk for the given change. 
+  Provide eventual consistency of test results by running the full set of tests asynchronously outside of the development workflow. If there are tests that fail at this stage, provide feedback to the development team so that they can triage the issues and decide if they need to roll back the change. 
+  Avoid using predictive test selection to exclude security-related tests, and do not rely on this approach for sensitive systems that are critical to your business. 

**Related information:**
+  [Predictive Test Selection](https://research.facebook.com/publications/predictive-test-selection/) 
+  [Machine Learning - Amazon Web Services](https://aws.amazon.com/sagemaker/) 
+  [The Rise of Test Impact Analysis](https://martinfowler.com/articles/rise-test-impact-analysis.html) 