Continuous Improvement for Existing Solutions (25%)
Question 4: Correct
A media company has a suite of internet-facing web applications hosted in the US West (N. California) Region in AWS. The architecture is composed of several On-Demand Amazon EC2 instances behind an Application Load Balancer, which is configured to use public SSL/TLS certificates. The Application Load Balancer also accepts incoming HTTPS traffic through the fully qualified domain names (FQDNs) of the applications and performs SSL termination. A Solutions Architect has been instructed to upgrade the corporate web applications to a multi-region architecture that uses various AWS Regions such as ap-southeast-2, ca-central-1, eu-west-3, and so forth.
Which of the following approaches should the Architect implement to ensure that all HTTPS services will continue to work without interruption?
Use AWS KMS in the US West (N. California) Region to request SSL/TLS certificates for each FQDN, which will be used in all Regions. Associate the new certificates with the new Application Load Balancer in each new AWS Region that the Architect will add.
In each new AWS Region, request SSL/TLS certificates using AWS KMS for each FQDN. Associate the new certificates with the corresponding Application Load Balancer in the same AWS Region.
In each new AWS Region, request SSL/TLS certificates using AWS Certificate Manager for each FQDN. Associate the new certificates with the corresponding Application Load Balancer in the same AWS Region.
(Correct)
Use the AWS Certificate Manager service in the US West (N. California) Region to request SSL/TLS certificates for each FQDN, which will be used in all Regions. Associate the new certificates with the new Application Load Balancer in each new AWS Region that the Architect will add.
Explanation
AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.
With AWS Certificate Manager, you can quickly request a certificate, deploy it on ACM-integrated AWS resources, such as Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway, and let AWS Certificate Manager handle certificate renewals. It also enables you to create private certificates for your internal resources and manage the certificate lifecycle centrally. Public and private certificates provisioned through AWS Certificate Manager for use with ACM-integrated services are free. You pay only for the AWS resources you create to run your application. With AWS Certificate Manager Private Certificate Authority, you pay monthly for the operation of the private CA and for the private certificates you issue.
You can use the same SSL certificate from ACM in more than one AWS Region but it depends on whether you’re using Elastic Load Balancing or Amazon CloudFront. To use a certificate with Elastic Load Balancing for the same site (the same fully qualified domain name, or FQDN, or set of FQDNs) in a different Region, you must request a new certificate for each Region in which you plan to use it. To use an ACM certificate with Amazon CloudFront, you must request the certificate in the US East (N. Virginia) region. ACM certificates in this region that are associated with a CloudFront distribution are distributed to all the geographic locations configured for that distribution.
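As an illustration of this per-Region requirement, the commands below sketch how a Regional certificate could be requested and attached to that Region's new load balancer with the AWS CLI; the domain name, Region, and all values in angle brackets are hypothetical placeholders.
# Request a certificate for the FQDN in the target Region (repeat in every Region that gets an ALB):
aws acm request-certificate \
    --domain-name www.example.com \
    --validation-method DNS \
    --region eu-west-3
# After the certificate is validated, reference it when creating that Region's HTTPS listener:
aws elbv2 create-listener \
    --region eu-west-3 \
    --load-balancer-arn <alb-arn> \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=<regional-acm-certificate-arn> \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>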
Hence, the correct answer is the option that says: In each new AWS Region, request SSL/TLS certificates using AWS Certificate Manager for each FQDN. Associate the new certificates with the corresponding Application Load Balancer in the same AWS Region.
The option that says: In each new AWS Region, request SSL/TLS certificates using AWS KMS for each FQDN. Associate the new certificates with the corresponding Application Load Balancer in the same AWS Region is incorrect because AWS KMS is not the right service for generating SSL/TLS certificates. You have to use ACM instead.
The option that says: Use AWS KMS in the US West (N. California) Region to request SSL/TLS certificates for each FQDN, which will be used in all Regions. Associate the new certificates with the new Application Load Balancer in each new AWS Region that the Architect will add is incorrect. You have to use the AWS Certificate Manager (ACM) service to generate the certificates, not AWS KMS, as the latter is primarily used for creating and managing encryption keys. Moreover, you have to associate certificates that were generated in the same AWS Region where the load balancer is launched.
The option that says: Use the AWS Certificate Manager service in the US West (N. California) Region to request SSL/TLS certificates for each FQDN, which will be used in all Regions. Associate the new certificates with the new Application Load Balancer in each new AWS Region that the Architect will add is incorrect. You can only use the same SSL certificate from ACM in more than one AWS Region if you are attaching it to your CloudFront distribution, not to your Application Load Balancer. To use a certificate with Elastic Load Balancing for the same site (the same fully qualified domain name, or FQDN, or set of FQDNs) in a different Region, you must request a new certificate for each Region in which you plan to use it.
References:
Check out these AWS Certificate Manager and Elastic Load Balancer Cheat Sheets:
Question 9: Correct
A company wants to launch its online shopping website to give customers an easy way to purchase the products they need. The proposed setup is to host the application on an AWS Fargate cluster, utilize a Load Balancer to distribute traffic between the Fargate tasks, and use Amazon CloudFront for caching and content delivery. The company wants to ensure that the website complies with industry best practices and should be able to protect customers from common “man-in-the-middle” attacks for e-commerce websites such as DNS spoofing, HTTPS spoofing, or SSL hijacking.
Which of the following configurations will provide the MOST secure access to the website?
Use Route 53 for domain registration. Use a third-party DNS service that supports DNSSEC for DNS requests that use the customer-managed keys. Use AWS Certificate Manager (ACM) to generate a valid 2048-bit TLS/SSL certificate for the domain name and configure the Application Load Balancer HTTPS listener to use this TLS/SSL certificate. Use Server Name Identification and HTTP to HTTPS redirection on CloudFront.
Register the domain name on Route 53 and enable DNSSEC validation for all public hosted zones to ensure that all DNS requests have not been tampered with during transit. Use AWS Certificate Manager (ACM) to generate a valid TLS/SSL certificate for the domain name. Configure the Application Load Balancer with an HTTPS listener to use the ACM TLS/SSL certificate. Use Server Name Identification and HTTP to HTTPS redirection on CloudFront.
(Correct)
Register the domain name on Route 53. Since Route 53 only supports DNSSEC for registration, host the company DNS root servers on Amazon EC2 instances running the BIND service. Enable DNSSEC for DNS requests to ensure the replies have not been tampered with. Generate a valid certificate for the website domain name on AWS ACM and configure the Application Load Balancer's HTTPS listener to use this TLS/SSL certificate. Use Server Name Identification and HTTP to HTTPS redirection on CloudFront.
Register the domain name on Route 53. Use a third-party DNS provider that supports the import of the customer-managed keys for DNSSEC. Import a 2048-bit TLS/SSL certificate from a third-party certificate service to AWS Certificate Manager (ACM). Configure the Application Load Balancer with an HTTPS listener to use the imported TLS/SSL certificate. Use Server Name Identification and HTTP to HTTPS redirection on CloudFront.
Explanation
Amazon now allows you to enable Domain Name System Security Extensions (DNSSEC) signing for all existing and new public hosted zones, and enable DNSSEC validation for Amazon Route 53 Resolver. Amazon Route 53 DNSSEC provides data origin authentication and data integrity verification for DNS and can help customers meet compliance mandates, such as FedRAMP.
When you enable DNSSEC signing on a hosted zone, Route 53 cryptographically signs each record in that hosted zone. Route 53 manages the zone-signing key, and you can manage the key-signing key in AWS Key Management Service (AWS KMS). Amazon’s domain name registrar, Route 53 Domains, already supports DNSSEC, and customers can now register domains and host their DNS on Route 53 with DNSSEC signing enabled. When you enable DNSSEC validation on the Route 53 Resolver in your VPC, it ensures that DNS responses have not been tampered with in transit. This can prevent DNS Spoofing.
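As a rough sketch of how this is turned on (the hosted zone ID, KMS key ARN, and VPC ID are placeholders), DNSSEC signing and Resolver validation can be enabled with the AWS CLI:
# Create a key-signing key backed by an asymmetric KMS key (the KMS key must live in us-east-1):
aws route53 create-key-signing-key \
    --hosted-zone-id <hosted-zone-id> \
    --key-management-service-arn <kms-key-arn> \
    --name example_ksk \
    --status ACTIVE \
    --caller-reference ksk-2024-06-01
# Turn on DNSSEC signing for the hosted zone:
aws route53 enable-hosted-zone-dnssec --hosted-zone-id <hosted-zone-id>
# Enable DNSSEC validation on the Route 53 Resolver for a VPC:
aws route53resolver update-resolver-dnssec-config --resource-id <vpc-id> --validation ENABLE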
AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates. Using a valid SSL Certificate for your application load balancer ensures that all requests are encrypted on transit as well as protection against SSL hijacking.
CloudFront supports Server Name Indication (SNI) for custom SSL certificates, along with the ability to take incoming HTTP requests and redirect them to secure HTTPS requests to ensure that clients are always directed to the secure version of your website.
Therefore, the correct answer is: Register the domain name on Route 53 and enable DNSSEC validation for all public hosted zones to ensure that all DNS requests have not been tampered with during transit. Use AWS Certificate Manager (ACM) to generate a valid TLS/SSL certificate for the domain name. Configure the Application Load Balancer with an HTTPS listener to use the ACM TLS/SSL certificate. Use Server Name Identification and HTTP to HTTPS redirection on CloudFront.
The option that says: Register the domain name on Route 53. Use a third-party DNS provider that supports the import of the customer-managed keys for DNSSEC. Import a 2048-bit TLS/SSL certificate from a third-party certificate service to AWS Certificate Manager (ACM). Configure the Application Load Balancer with an HTTPS listener to use the imported TLS/SSL certificate. Use Server Name Identification and HTTP to HTTPS redirection on CloudFront is incorrect. Although this is possible, you don’t have to rely on a third-party DNS provider as Route 53 supports DNSSEC signing. Also, ACM can issue a 2048-bit TLS/SSL certificate for free, so you don't have to buy certificates from other providers.
The option that says: Use Route 53 for domain registration. Use a third-party DNS service that supports DNSSEC for DNS requests that use the customer-managed keys. Use AWS Certificate Manager (ACM) to generate a valid 2048-bit TLS/SSL certificate for the domain name and configure the Application Load Balancer HTTPS listener to use this TLS/SSL certificate. Use Server Name Identification and HTTP to HTTPS redirection on CloudFront is incorrect. This is also possible, but you don't have to rely on a third-party DNS provider as Amazon Route 53 already supports DNSSEC signing.
The option that says: Register the domain name on Route 53. Since Route 53 only supports DNSSEC for registration, host the company DNS root servers on Amazon EC2 instances running the BIND service. Enable DNSSEC for DNS requests to ensure the replies have not been tampered with. Generate a valid certificate for the website domain name on AWS ACM and configure the Application Load Balancer's HTTPS listener to use this TLS/SSL certificate. Use Server Name Identification and HTTP to HTTPS redirection on CloudFront is incorrect as this solution is no longer recommended. This setup was previously used as a workaround before DNSSEC signing was natively supported in Amazon Route 53.
References:
Check out these AWS Cheat Sheets:
Question 11: Correct
A company processes several petabytes of images submitted by users of its photo hosting site every month. The images are processed in its on-premises data center by a High-Performance Computing (HPC) cluster with a capacity of 5,000 cores and 10 petabytes of data. Processing a month’s worth of images with thousands of jobs running in parallel takes about a week, and the processed images are stored on a network file server, which also backs up the data to a disaster recovery site.
The current data center is nearing its capacity, so the users are forced to spread the jobs over the course of the month. This is not ideal for the jobs' requirements, so the Solutions Architect was tasked with designing a scalable solution that can exceed the current capacity with the least amount of management overhead while maintaining the current level of durability.
Which of the following solutions will meet the company's requirements while being cost-effective?
Package the executable file for the job in a Docker image stored on Amazon Elastic Container Registry (Amazon ECR). Run the Docker images on Amazon Elastic Kubernetes Service (Amazon EKS). Auto Scaling can be handled automatically by EKS. Store the raw data temporarily on Amazon EBS SC1 volumes and then send the images to an Amazon S3 bucket after processing.
Create an Amazon SQS queue and submit the list of jobs to be processed. Create an Auto Scaling Group of Amazon EC2 Spot Instances that will process the jobs from the SQS queue. Share the raw data across all the instances using Amazon EFS. Store the processed images in an Amazon S3 bucket for long term storage.
Using a combination of On-demand and Reserved Instances as Task Nodes, create an EMR cluster that will use Spark to pull the raw data from an Amazon S3 bucket. List the jobs that need to be processed by the EMR cluster on a DynamoDB table. Store the processed images on a separate Amazon S3 bucket.
Utilize AWS Batch with Managed Compute Environments to create a fleet using Spot Instances. Store the raw data on an Amazon S3 bucket. Create jobs on AWS Batch Job Queues that will pull objects from the Amazon S3 bucket and temporarily store them on the EC2 EBS volumes for processing. Send the processed images back to another Amazon S3 bucket.
(Correct)
Explanation
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems.
There is no additional charge for AWS Batch. You only pay for the AWS resources (e.g., EC2 instances or Fargate jobs) you create to store and run your batch jobs. The AWS Batch use cases page describes a scenario similar to this one, wherein digital media and entertainment companies require highly scalable batch computing resources to enable accelerated and automated processing of data as well as the compilation and processing of files, graphics, and visual effects for high-resolution video content. AWS Batch can be used to accelerate content creation, dynamically scale media packaging, and automate asynchronous media supply chain workflows.
In AWS Batch, job queues are mapped to one or more compute environments. Compute environments contain the Amazon ECS container instances that are used to run containerized batch jobs. A specific compute environment can also be mapped to one or many job queues. Within a job queue, the associated compute environments each have an order that's used by the scheduler to determine where jobs that are ready to be run should run.
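A minimal AWS CLI sketch of that wiring is shown below; the environment and queue names, subnet, security group, instance role, and the image-processor job definition are all hypothetical placeholders, and the processing container is assumed to already exist.
# Managed compute environment that scales a Spot fleet between 0 and 5,000 vCPUs:
aws batch create-compute-environment \
    --compute-environment-name img-spot-ce \
    --type MANAGED \
    --compute-resources '{"type":"SPOT","allocationStrategy":"SPOT_CAPACITY_OPTIMIZED","minvCpus":0,"maxvCpus":5000,"instanceTypes":["optimal"],"subnets":["<subnet-id>"],"securityGroupIds":["<sg-id>"],"instanceRole":"<ecs-instance-profile>"}'
# Job queue mapped to the compute environment, then one job submitted per batch of images:
aws batch create-job-queue \
    --job-queue-name img-queue \
    --priority 1 \
    --compute-environment-order order=1,computeEnvironment=img-spot-ce
aws batch submit-job \
    --job-name monthly-image-batch \
    --job-queue img-queue \
    --job-definition image-processor:1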
Therefore, the correct answer is: Utilize AWS Batch with Managed Compute Environments to create a fleet using Spot Instances. Store the raw data on an Amazon S3 bucket. Create jobs on AWS Batch Job Queues that will pull objects from the Amazon S3 bucket and temporarily store them on the EC2 EBS volumes for processing. Send the processed images back to another Amazon S3 bucket.
The option that says: Package the executable file for the job in a Docker image stored on Amazon Elastic Container Registry (Amazon ECR). Run the Docker images on Amazon Elastic Kubernetes Service (Amazon EKS). Auto Scaling can be handled automatically by EKS. Store the raw data temporarily on Amazon EBS SC1 volumes and then send the images to an Amazon S3 bucket after processing is incorrect. Although this is possible, converting the application to a container and deploying it to an EKS cluster will entail a lot of changes for the application. Additionally, since you can’t quickly increase/decrease SC1 EBS volumes, creating a large volume to handle petabytes of data is not cost-effective.
The option that says: Using a combination of On-demand and Reserved Instances as Task Nodes, create an EMR cluster that will use Apache Spark to pull the raw data from an Amazon S3 bucket. List the jobs that need to be processed by the EMR cluster on a DynamoDB table. Store the processed images on a separate Amazon S3 bucket is incorrect as managing the EMR cluster and Apache Spark adds significant management overhead for this solution. There is also an additional cost for the EC2 instances that are constantly running even if there are only a few jobs that need to be run.
The option that says: Create an Amazon SQS queue and submit the list of jobs to be processed. Create an Auto Scaling Group of Amazon EC2 Spot Instances that will process the jobs from the SQS queue. Share the raw data across all the instances using Amazon EFS. Store the processed images in an Amazon S3 bucket for long term storage is incorrect as Amazon EFS is more expensive than storing the raw data on S3 buckets. This is also not efficient as listing the jobs on SQS Queue can cause some to be processed twice, depending on the state of your Spot instances.
References:
Check out this AWS Batch Cheat Sheet:
Question 18: Incorrect
A stock brokerage firm hosts its legacy application on Amazon EC2 in a private subnet of its Amazon VPC. The application is accessed by employees from their corporate laptops through a proprietary desktop program. The company network is connected to AWS through an AWS Direct Connect (DX) connection to provide fast and reliable access to the private EC2 instances inside the VPC. To comply with the strict security requirements of financial institutions, the firm is required to encrypt the network traffic that flows from the employees' laptops to the resources inside the VPC.
Which of the following solutions will comply with this requirement while maintaining the consistent network performance of Direct Connect?
Using the current Direct Connect connection, create a new private virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN.
(Incorrect)
Using the current Direct Connect connection, create a new public virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN.
(Correct)
Using the current Direct Connect connection, create a new public virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the Internet. Configure the employees’ laptops to connect to this VPN.
Using the current Direct Connect connection, create a new private virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the Internet. Configure the employees’ laptops to connect to this VPN.
Explanation
To connect to services such as EC2 using just Direct Connect you need to create a private virtual interface. However, if you want to encrypt the traffic flowing through Direct Connect, you will need to use the public virtual interface of DX to create a VPN connection that will allow access to AWS services such as S3, EC2, and other services.
To connect to AWS resources that are reachable by a public IP address (such as an Amazon Simple Storage Service bucket) or AWS public endpoints, use a public virtual interface. With a public virtual interface, you can:
- Connect to all AWS public IP addresses globally.
- Create public virtual interfaces in any DX location to receive Amazon’s global IP routes.
- Access publicly routable Amazon services in any AWS Region (except for the AWS China Region).
To connect to your resources hosted in an Amazon Virtual Private Cloud (Amazon VPC) using their private IP addresses, use a private virtual interface. With a private virtual interface, you can:
- Connect VPC resources (such as Amazon Elastic Compute Cloud (Amazon EC2) instances or load balancers) on your private IP address or endpoint.
- Connect a private virtual interface to a DX gateway. Then, associate the DX gateway with one or more virtual private gateways in any AWS Region (except the AWS China Region).
- Connect to multiple VPCs in any AWS Region (except the AWS China Region), because a virtual private gateway is associated with a single VPC.
If you want to establish a virtual private network (VPN) connection from your company network to an Amazon Virtual Private Cloud (Amazon VPC) over an AWS Direct Connect (DX) connection, you must use a public virtual interface for your DX connection.
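The fragment below sketches this with the AWS CLI; the connection ID, VLAN, ASN, addresses, prefixes, and gateway IDs are placeholders, and a customer gateway resource for the on-premises router is assumed to already exist.
# Public virtual interface on the existing DX connection, advertising the firm's public prefixes:
aws directconnect create-public-virtual-interface \
    --connection-id dxcon-xxxxxxxx \
    --new-public-virtual-interface '{"virtualInterfaceName":"corp-public-vif","vlan":101,"asn":65000,"amazonAddress":"203.0.113.1/30","customerAddress":"203.0.113.2/30","routeFilterPrefixes":[{"cidr":"198.51.100.0/28"}]}'
# Site-to-site VPN to the VPC's virtual private gateway; dynamic (BGP) routing is the default:
aws ec2 create-vpn-connection \
    --type ipsec.1 \
    --customer-gateway-id cgw-xxxxxxxx \
    --vpn-gateway-id vgw-xxxxxxxx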
Therefore, the correct answer is: Using the current Direct Connect connection, create a new public virtual interface, and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN.
The option that says: Using the current Direct Connect connection, create a new private virtual interface, and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the employees’ laptops to connect to this VPN is incorrect because you must use a public virtual interface for your AWS Direct Connect (DX) connection and not a private one. You won't be able to establish an encrypted VPN along with your DX connection if you create a private virtual interface.
The following options are incorrect because you need to establish the VPN connection through the DX connection, and not over the Internet.
- Using the current Direct Connect connection, create a new public virtual interface, and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the internet. Configure the employees’ laptops to connect to this VPN.
- Using the current Direct Connect connection, create a new private virtual interface, and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the internet. Configure the employees’ laptops to connect to this VPN.
References:
Check out this AWS Direct Connect Cheat Sheet:
Question 21: Incorrect
A company wants to implement a multi-account strategy that will be distributed across its several research facilities. There will be approximately 50 teams in total that will need their own AWS accounts. A solution is needed to simplify the DNS management as there is only one team that manages all the domains and subdomains for the whole organization. This means that the solution should allow private DNS to be shared among virtual private clouds (VPCs) in different AWS accounts.
Which of the following solutions has the LEAST complex DNS architecture and allows all VPCs to resolve the needed domain names?
On AWS Resource Access Manager (RAM), set up a shared services VPC on your central account. Set up VPC peering from this VPC to each VPC on the other accounts. On Amazon Route 53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this zone. Programmatically associate the VPCs from other accounts with this hosted zone.
(Correct)
On AWS Resource Access Manager (RAM), set up a shared services VPC on your central account. Create a peering from this VPC to each VPC on the other accounts. On Amazon Route 53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this hosted zone. On each of the other AWS Accounts, create a Route 53 private hosted zone and configure the Name Server entry to use the DNS of the central account.
Set up a VPC peering connection among the VPCs of each account. Ensure that each VPC has the attributes enableDnsHostnames and enableDnsSupport set to “TRUE”. On Amazon Route 53, create a private hosted zone associated with the central account’s VPC. Manage all domains and subdomains on this hosted zone. On each of the other AWS Accounts, create a Route 53 private hosted zone and configure the Name Server entry to use the DNS of the central account.
Set up Direct Connect connections among the VPCs of each account using private virtual interfaces. Ensure that each VPC has the attributes enableDnsHostnames and enableDnsSupport set to “FALSE”. On Amazon Route 53, create a private hosted zone associated with the central account’s VPC. Manage all domains and subdomains on this hosted zone. Programmatically associate the VPCs from other accounts with this hosted zone.
(Incorrect)
Explanation
When you create a VPC using Amazon VPC, Route 53 Resolver automatically answers DNS queries for local VPC domain names for EC2 instances (ec2-192-0-2-44.compute-1.amazonaws.com) and records in private hosted zones (acme.example.com). For all other domain names, Resolver performs recursive lookups against public name servers.
You also can integrate DNS resolution between Resolver and DNS resolvers on your network by configuring forwarding rules. Your network can include any network that is reachable from your VPC, such as the following:
- The VPC itself
- Another peered VPC
- An on-premises network that is connected to AWS with AWS Direct Connect, a VPN, or a network address translation (NAT) gateway
VPC sharing allows customers to share subnets with other AWS accounts within the same AWS Organization. This is a very powerful concept that allows for a number of benefits:
- Separation of duties: centrally controlled VPC structure, routing, IP address allocation.
- Application owners continue to own resources, accounts, and security groups.
- VPC sharing participants can reference security group IDs of each other.
- Efficiencies: higher density in subnets, efficient use of VPNs and AWS Direct Connect.
- Hard limits can be avoided, for example, 50 VIFs per AWS Direct Connect connection through simplified network architecture.
- Costs can be optimized through reuse of NAT gateways, VPC interface endpoints, and intra-Availability Zone traffic.
Essentially, we can decouple accounts and networks. In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner. You can simplify network topologies by interconnecting shared Amazon VPCs using connectivity features, such as AWS PrivateLink, AWS Transit Gateway, and Amazon VPC peering.
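The “programmatic association” called out in the correct option boils down to two AWS CLI calls, sketched here with placeholder IDs:
# In the central (zone-owner) account — authorize the participant account's VPC:
aws route53 create-vpc-association-authorization \
    --hosted-zone-id <private-hosted-zone-id> \
    --vpc VPCRegion=us-east-1,VPCId=<participant-vpc-id>
# In the participant account — complete the association:
aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id <private-hosted-zone-id> \
    --vpc VPCRegion=us-east-1,VPCId=<participant-vpc-id>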
Therefore, the correct answer is: On AWS Resource Access Manager (RAM), set up a shared services VPC on your central account. Set up VPC peering from this VPC to each VPC on the other accounts. On Amazon Route 53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this zone. Programmatically associate the VPCs from other accounts with this hosted zone.
The option that says: Set up Direct Connect connections among the VPCs of each account using private virtual interfaces. Ensure that each VPC has the attributes enableDnsHostnames and enableDnsSupport set to “FALSE”. On Amazon Route 53, create a private hosted zone associated with the central account’s VPC. Manage all domains and subdomains on this hosted zone. Programmatically associate the VPCs from other accounts with this hosted zone is incorrect. AWS Direct Connect is not a suitable service for connecting the various VPCs to one another. In addition, the attributes enableDnsHostnames and enableDnsSupport are set to “TRUE” by default and are needed for VPC resources to query Route 53 zone entries.
The option that says: Set up a VPC peering connection among the VPC of each account. Ensure that each VPC has the attributes enableDnsHostnames and enableDnsSupport set to “TRUE”. On Amazon Route 53, create a private hosted zone associated with the central account’s VPC. Manage all domains and subdomains on this hosted zone. On each of the other AWS Accounts, create a Route 53 private hosted zone and configure the Name Server entry to use the DNS of the central account is incorrect. You won't be able to resolve the private hosted zone entries even if you configure your Route 53 zone NS entry to use the central account's DNS servers.
The option that says: On AWS Resource Access Manager (RAM), set up a shared services VPC on your central account. Create a peering from this VPC to each VPC on the other accounts. On Amazon Route 53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this hosted zone. On each of the other AWS Accounts, create a Route 53 private hosted zone and configure the Name Server entry to use the DNS of the central account is incorrect. Although creating the shared services VPC is a good solution, configuring Route 53 Name Server (NS) records to point to the shared services VPC’s Route 53 is not enough. You need to associate the VPCs from other accounts to the hosted zone on the central account.
References:
Check out these Amazon VPC and Route 53 Cheat Sheets:
Question 29: Incorrect
A retail company hosts its web application on an Auto Scaling group of Amazon EC2 instances deployed across multiple Availability Zones. The Auto Scaling group is configured to maintain a minimum EC2 cluster size and automatically replace unhealthy instances. The EC2 instances are behind an Application Load Balancer so that the load can be spread evenly across all instances. The application target group health check is configured with a fixed HTTP page that queries a dummy item on the database. The web application connects to a Multi-AZ Amazon RDS MySQL instance. A recent outage caused a major loss to the company's revenue. Upon investigation, it was found that the web server metrics are within the normal range but the database CPU usage is very high, causing the EC2 health checks to time out. Because the instances kept failing their health checks, the Auto Scaling group continuously replaced them, causing the downtime.
Which of the following options should the Solution Architect implement to prevent this from happening again and allow the application to handle more traffic in the future? (Select TWO.)
Change the target group health check to use a TCP check on the EC2 instances instead of a page that queries the database. Create an Amazon Route 53 health check for the database dummy item web page to ensure that the application works as expected. Set up an Amazon CloudWatch alarm to send a notification to Admins when the health check fails.
Create an Amazon CloudWatch alarm to monitor the Amazon RDS MySQL instance if it has a high-load or in impaired status. Set the alarm action to recover the RDS instance. This will automatically reboot the database to reset the queries.
(Incorrect)
Change the target group health check to a simple HTML page instead of a page that queries the database. Create an Amazon Route 53 health check for the database dummy item web page to ensure that the application works as expected. Set up an Amazon CloudWatch alarm to send a notification to Admins when the health check fails.
(Correct)
Reduce the load on the database tier by creating multiple read replicas for the Amazon RDS MySQL Multi-AZ cluster. Configure the web application to use the single reader endpoint of RDS for all read operations.
Reduce the load on the database tier by creating an Amazon ElastiCache cluster to cache frequently requested database queries. Configure the application to use this cache when querying the RDS MySQL instance.
(Correct)
Explanation
Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. Each health check that you create can monitor one of the following:
The health of a specified resource, such as a web server - You can configure a health check that monitors an endpoint that you specify either by IP address or by the domain name. At regular intervals that you specify, Route 53 submits automated requests over the Internet to your application. You can configure the health check to make requests similar to those that your users make, such as requesting a web page from a specific URL.
The status of other health checks - You can create a health check that monitors whether Route 53 considers other health checks healthy or unhealthy. One situation where this might be useful is when you have multiple resources that perform the same function, such as multiple web servers, and your chief concern is whether some minimum number of your resources are healthy.
The status of an Amazon CloudWatch alarm - You can create CloudWatch alarms that monitor the status of CloudWatch metrics, such as the number of throttled read events for an Amazon DynamoDB database or the number of Elastic Load Balancing hosts that are considered healthy.
After you create a health check, you can get the status of the health check, get notifications when the status changes, and configure DNS failover. To improve resiliency and availability, Route 53 doesn't wait for the CloudWatch alarm to go into the ALARM state. The status of a health check changes from healthy to unhealthy based on the data stream and on the criteria in the CloudWatch alarm.
Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks. Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check. If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.
Each health check is executed at the configured interval against all the EC2 instances, so if the health check page involves a database query, there will be several simultaneous queries to the database. This can increase the load on your database tier if there are many EC2 instances and the health check interval is very short.
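For illustration (the ARNs, domain name, paths, and SNS topic are placeholders), moving the expensive check off the target group and onto Route 53 could look like this with the AWS CLI:
# Point the ALB target group health check at a simple static page:
aws elbv2 modify-target-group \
    --target-group-arn <target-group-arn> \
    --health-check-path /health.html \
    --matcher HttpCode=200
# External Route 53 health check against the page that queries the database:
aws route53 create-health-check \
    --caller-reference app-db-check-001 \
    --health-check-config Type=HTTPS,FullyQualifiedDomainName=www.example.com,ResourcePath=/db-check,Port=443,RequestInterval=30,FailureThreshold=3
# Alarm that notifies the Admins when the health check fails (Route 53 publishes these metrics in us-east-1):
aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name app-db-page-check-failed \
    --namespace AWS/Route53 --metric-name HealthCheckStatus \
    --dimensions Name=HealthCheckId,Value=<health-check-id> \
    --statistic Minimum --period 60 --evaluation-periods 1 \
    --threshold 1 --comparison-operator LessThanThreshold \
    --alarm-actions <sns-topic-arn>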
Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution. At the same time, it helps remove the complexity associated with deploying and managing a distributed cache environment.
ElastiCache for Memcached has multiple features to enhance reliability for critical production deployments:
- Automatic detection and recovery from cache node failures.
- Automatic discovery of nodes within a cluster enabled for automatic discovery so that no changes need to be made to your application when you add or remove nodes.
- Flexible Availability Zone placement of nodes and clusters.
- Integration with other AWS services such as Amazon EC2, Amazon CloudWatch, AWS CloudTrail, and Amazon SNS to provide a secure, high-performance, managed in-memory caching solution.
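A minimal sketch of provisioning such a cache layer with the AWS CLI (the cluster ID, node type, subnet group, and security group are placeholders; the application's cache-aside logic is not shown):
aws elasticache create-cache-cluster \
    --cache-cluster-id product-cache \
    --engine memcached \
    --cache-node-type cache.r6g.large \
    --num-cache-nodes 2 \
    --az-mode cross-az \
    --cache-subnet-group-name <cache-subnet-group> \
    --security-group-ids <sg-id>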
The option that says: Change the target group health check to a simple HTML page instead of a page that queries the database. Create an Amazon Route 53 health check for the database dummy item web page to ensure that the application works as expected. Set up an Amazon CloudWatch alarm to send a notification to Admins when the health check fails is correct. Changing the target group health check to a simple HTML page will reduce the queries to the database tier. The Route 53 health check can act as the “external” check on a specific page that queries the database to ensure that the application is working as expected. The Route 53 health check has an overall lower request count compared to using the target group health check.
The option that says: Reduce the load on the database tier by creating an Amazon ElastiCache cluster to cache frequently requested database queries. Configure the application to use this cache when querying the RDS MySQL instance is correct. Since this is a retail web application, most of the queries will be read-intensive as customers are searching for products. ElastiCache is effective at caching frequent requests, which overall improves the application response time and reduces database queries.
The option that says: Reduce the load on the database tier by creating multiple read replicas for the Amazon RDS MySQL Multi-AZ cluster. Configure the web application to use the single reader endpoint of RDS for all read operations is incorrect. This may be possible because creating read replicas is recommended to increase the read performance of an RDS cluster. However, this option does not address the original problem of reducing the number of repetitive health-check queries hitting the database.
The option that says: Change the target group health check to use a TCP check on the EC2 instances instead of a page that queries the database. Create an Amazon Route 53 health check for the database dummy item web page to ensure that the application works as expected. Set up an Amazon CloudWatch alarm to send a notification to Admins when the health check fails is incorrect. An Application Load Balancer does not support a TCP health check. ALB only supports HTTP and HTTPS target health checks.
The option that says: Create an Amazon CloudWatch alarm to monitor the Amazon RDS MySQL instance if it has a high-load or in impaired status. Set the alarm action to recover the RDS instance. This will automatically reboot the database to reset the queries is incorrect. Recovering the database instance results in downtime. If you have Multi-AZ enabled, the standby database will shoulder all the load, causing it to crash too. It is better to scale the database by creating read replicas or adding an ElastiCache cluster in front of it.
References:
Check out these Amazon ElastiCache and Amazon RDS Cheat Sheets:
Question 31: Incorrect
A graphics design startup is using multiple Amazon S3 buckets to store high-resolution media files for their various digital artworks. After securing a partnership deal with a leading media company, the two parties shall be sharing digital resources with one another as part of the contract. The media company frequently performs multiple object retrievals from the S3 buckets every day, which increased the startup's data transfer costs.
As the Solutions Architect, what should you do to help the startup lower their operational costs?
Advise the media company to create their own S3 bucket. Then run the aws s3 sync s3://sourcebucket s3://destinationbucket command to copy the objects from their S3 bucket to the other party's S3 bucket. In this way, future retrievals can be made on the media company's S3 bucket instead.
(Incorrect)
Enable the Requester Pays feature in all of the startup's S3 buckets to make the media company pay the cost of the data transfer from the buckets.
(Correct)
Provide cross-account access for the media company, which has permissions to access contents in the S3 bucket. Cross-account retrieval of S3 objects is charged to the account that made the request.
Create a new billing account for the social media company by using AWS Organizations. Apply SCPs on the organization to ensure that each account has access only to its own resources and each other's S3 buckets.
Explanation
In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket. A bucket owner, however, can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.
You must authenticate all requests involving Requester Pays buckets. The request authentication enables Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket. After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-payer in their requests either in the header, for POST, GET and HEAD requests, or as a parameter in a REST request to show that they understand that they will be charged for the request and the data download.
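For example, the two sides of the arrangement map to the following AWS CLI calls (the bucket and object names are placeholders):
# Startup (bucket owner) — enable Requester Pays on the bucket:
aws s3api put-bucket-request-payment \
    --bucket startup-media-assets \
    --request-payment-configuration Payer=Requester
# Media company (requester) — acknowledge the charge on every request:
aws s3api get-object \
    --bucket startup-media-assets \
    --key artwork/hero-image.tif \
    --request-payer requester \
    hero-image.tif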
Hence, the correct answer is to enable the Requester Pays feature in all of the startup's S3 buckets to make the media company pay the cost of the data transfer from the buckets.
The option that says: Advise the media company to create their own S3 bucket. Then run the aws s3 sync s3://sourcebucket s3://destinationbucket command to copy the objects from their S3 bucket to the other party's S3 bucket. In this way, future retrievals can be made on the media company's S3 bucket instead is incorrect because sharing all of the startup's assets with the media company entails a lot of cost, considering that the startup will be charged for the data transfer made during the sync process.
Creating a new billing account for the social media company by using AWS Organizations, then applying SCPs on the organization to ensure that each account has access only to its own resources and each other's S3 buckets is incorrect because AWS Organizations does not create a separate billing account for every account under it. Instead, what AWS Organizations has is consolidated billing. You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts. Every organization in AWS Organizations has a master account that pays the charges of all the member accounts.
The option that says: Provide cross-account access for the media company, which has permissions to access contents in the S3 bucket. Cross-account retrieval of S3 objects is charged to the account that made the request is incorrect because cross-account access does not shoulder the charges that are made during S3 object requests. Unless Requester Pays is enabled on the bucket, the bucket owner is still the one that is charged.
Reference:
Check out this Amazon S3 Cheat Sheet:
Question 34: Correct
A company wants to host its internal web application in AWS. The front-end uses Docker containers and it connects to a MySQL instance as the backend database. The company plans to use AWS-managed container services to reduce the overhead in managing the servers. The application should allow employees to access company documents, which are accessed frequently for the first 3 months and then rarely after that. As part of the company policy, these documents must be retained for at least five years. Because this is an internal web application, the company wants to have the lowest possible cost.
Which of the following implementations is the most cost-effective solution?
Deploy the Docker containers using Amazon Elastic Kubernetes Service (EKS) with auto-scaling enabled. Use Amazon EC2 Spot instances for the EKS cluster to further reduce costs. Use On-Demand instances for the Amazon RDS database and its read replicas. Create an encrypted Amazon S3 bucket to store the company documents. Create a bucket lifecycle policy that will move the documents to Amazon S3 Glacier after three months and will delete objects older than five years.
Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 Spot Instances. Ensure that Spot Instance draining is enabled on the ECS agent config. Use Reserved Instances for the Amazon RDS database and its read replicas. Create an encrypted Amazon S3 bucket to store the company documents. Create a bucket lifecycle policy that will move the documents to Amazon S3 Glacier after three months and will delete objects older than five years.
(Correct)
Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 On-Demand instances. Use On-Demand instances as well for the Amazon RDS database and its read replicas. Create an Amazon EFS volume that is mounted on the EC2 instances to store the company documents. Create a cron job that will copy the documents to Amazon S3 Glacier after three months and then create a bucket lifecycle policy that will delete objects older than five years.
Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 Spot Instances. Use Spot instances for the Amazon RDS database and its read replicas. Create an encrypted ECS volume on the EC2 hosts that is shared with the containers to store the company documents. Set up a cron job that will delete the files after five years.
Explanation
A Spot Instance is an unused Amazon EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. The hourly price for a Spot Instance is called a Spot price. The Spot price of each instance type in each Availability Zone is set by Amazon EC2, and adjusted gradually based on the long-term supply of and demand for Spot Instances. You can register Spot Instances to your Amazon ECS clusters. Amazon EC2 terminates, stops, or hibernates your Spot Instance when the Spot price exceeds the maximum price for your request or capacity is no longer available. Amazon EC2 provides a Spot Instance interruption notice, which gives the instance a two-minute warning before it is interrupted. If Amazon ECS Spot Instance draining is enabled on the instance, ECS receives the Spot Instance interruption notice and places the instance in DRAINING status.
When a container instance is set to DRAINING, Amazon ECS prevents new tasks from being scheduled for placement on the container instance. Service tasks on the draining container instance that are in the PENDING state are stopped immediately. If there are container instances in the cluster that are available, replacement service tasks are started on them. Spot Instance draining is disabled by default and must be manually enabled by adding the line ECS_ENABLE_SPOT_INSTANCE_DRAINING=true to your /etc/ecs/ecs.config file.
Within the Spot provisioning model, you can provide an allocation strategy of either “Diversified” or “Lowest Price” which will define how the EC2 Spot Instances are provisioned. The recommended best practice is to select the “Diversified” strategy, to maximize provisioning choices, while reducing the costs. When this is combined with Spot Instance draining, you can allow your Spot instances to drain connections gracefully while having enough time for the cluster to spawn other Spot instance types to handle the load. When configured correctly, you can significantly reduce downtime of Spot instances or eliminate downtime entirely.
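A short sketch of the two configuration pieces referenced above (the bucket name is a placeholder; the ecs.config line is the one quoted in the explanation):
# User data on the ECS container instances (Spot) — enable Spot Instance draining:
echo "ECS_ENABLE_SPOT_INSTANCE_DRAINING=true" >> /etc/ecs/ecs.config
# Lifecycle rule: move documents to S3 Glacier after 90 days and delete them after five years (1,825 days):
aws s3api put-bucket-lifecycle-configuration \
    --bucket company-documents-bucket \
    --lifecycle-configuration '{"Rules":[{"ID":"archive-then-expire","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":90,"StorageClass":"GLACIER"}],"Expiration":{"Days":1825}}]}'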
Therefore, the correct answer is: Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 Spot Instances. Ensure that Spot Instance draining is enabled on the ECS agent config. Use Reserved Instances for the Amazon RDS database and its read replicas. Create an encrypted Amazon S3 bucket to store the company documents. Create a bucket lifecycle policy that will move the documents to Amazon S3 Glacier after three months and will delete objects older than five years. With a diversified Spot Instance strategy and Spot Instance draining, you can allow your ECS cluster to spawn other EC2 instance types automatically to handle the load at a very low cost. Reserved Instances are a recommended cost-saving measure for RDS instances that will be running continuously for years.
The option that says: Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 Spot Instances. Use Spot instances for the Amazon RDS database and its read replicas. Create an encrypted ECS volume on the EC2 hosts that is shared with the containers to store the company documents. Set up a cron job that will delete the files after five years is incorrect. Storing company documents on the EC2 instances will require more disk space on instances, which is unnecessary and expensive. Using Spot instances for RDS instances is not recommended as this will cause major downtime or data loss in case AWS terminates your spot instance.
The option that says: Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 On-Demand instances. Use On-Demand instances as well for the Amazon RDS database and its read replicas. Create an Amazon EFS volume that is mounted on the EC2 instances to store the company documents. Create a cron job that will copy the documents to Amazon S3 Glacier after three months and then create a bucket lifecycle policy that will delete objects older than five years is incorrect. This is possible; however, using EFS volumes is more expensive than simply storing the files on Amazon S3 in the first place.
The option that says: Deploy the Docker containers using Amazon Elastic Kubernetes Service (EKS) with auto-scaling enabled. Use Amazon EC2 Spot instances for the EKS cluster to further reduce costs. Use On-Demand instances for the Amazon RDS database and its read replicas. Create an encrypted Amazon S3 bucket to store the company documents. Create a bucket lifecycle policy that will move the documents to Amazon S3 Glacier after three months and will delete objects older than five years is incorrect. This option is also possible; however, using On-Demand instances for continuously running RDS instances is expensive. You can save costs by using Reserved Instances for Amazon RDS.
References:
Check out this Amazon ECS Cheat Sheet:
Question 36: Correct
A top university has launched its serverless online portal using Lambda and API Gateway in AWS that enables its students to enroll, manage their class schedules, and see their grades online. After a few weeks, the portal abruptly stopped working and lost all of its data. The university hired an external cybersecurity consultant and based on the investigation, the outage was due to an SQL injection vulnerability on the portal's login page in which the attacker simply injected the malicious SQL code. You also need to track historical changes to the rules and metrics associated with your firewall.
Which of the following is the most suitable and cost-effective solution to avoid another SQL Injection attack against their infrastructure in AWS?
Block the IP address of the attacker in the Network Access Control List of your VPC and then set up a CloudFront distribution. Set up AWS WAF to add a web access control list (web ACL) in front of the CloudFront distribution to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
Use AWS WAF to add a web access control list (web ACL) in front of the Lambda functions to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
Use AWS WAF to add a web access control list (web ACL) in front of the API Gateway to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
(Correct)
Create a new Application Load Balancer (ALB) and set up AWS WAF in the load balancer. Place the API Gateway behind the ALB and configure a web access control list (web ACL) in front of the ALB to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
Explanation
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. With AWS Config, you can track changes to WAF web access control lists (web ACLs). For example, you can record the creation and deletion of rules and rule actions, as well as updates to WAF rule configurations.
AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.
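As a sketch (the web ACL name, rules file, API ID, and stage are placeholders), a REGIONAL web ACL built around the AWSManagedRulesSQLiRuleSet managed rule group can be attached to the API Gateway stage, after which AWS Config records any changes to it:
# Create the Regional web ACL; the rules file is assumed to reference AWSManagedRulesSQLiRuleSet:
aws wafv2 create-web-acl \
    --name portal-web-acl \
    --scope REGIONAL \
    --default-action Allow={} \
    --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=portal-web-acl \
    --rules file://sqli-managed-rule.json
# Associate the web ACL with the API Gateway stage that fronts the Lambda functions:
aws wafv2 associate-web-acl \
    --web-acl-arn <web-acl-arn> \
    --resource-arn arn:aws:apigateway:us-east-1::/restapis/<api-id>/stages/prod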
In this scenario, the best option is to deploy WAF in front of the API Gateway. Hence the correct answer is the option that says: Use AWS WAF to add a web access control list (web ACL) in front of the API Gateway to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
The option that says: Use AWS WAF to add a web access control list (web ACL) in front of the Lambda functions to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect because you have to use AWS WAF in front of the API Gateway and not directly on the Lambda functions. AWS Firewall Manager is primarily used to manage your firewall rules across multiple AWS accounts under AWS Organizations; hence, it is not suitable for tracking changes to WAF web access control lists. You should use AWS Config instead.
The option that says: Block the IP address of the attacker in the Network Access Control List of your VPC and then set up a CloudFront distribution. Set up AWS WAF to add a web access control list (web ACL) in front of the CloudFront distribution to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect. Even though it is valid to use AWS WAF with CloudFront, it entails an additional and unnecessary cost to launch a CloudFront distribution for this scenario. There is no requirement that the serverless online portal should be scalable and be accessible around the globe hence, a CloudFront distribution is not necessary.
The option that says: Create a new Application Load Balancer (ALB) and set up AWS WAF in the load balancer. Place the API Gateway behind the ALB and configure a web access control list (web ACL) in front of the ALB to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect. Launching a new Application Load Balancer entails additional cost and is not cost-effective. In addition, AWS Firewall Manager is primarily used to manage your firewall rules across multiple AWS accounts under AWS Organizations. Using AWS Config is much more suitable for tracking changes to WAF web access control lists.
References:
Check out this AWS WAF Cheat Sheet:
AWS Security Services Overview - WAF, Shield, CloudHSM, KMS:
Question 37: Correct
A fintech startup has developed a cloud-based payment processing system that accepts credit card payments as well as cryptocurrencies such as Bitcoin, Ripple, and the like. The system is deployed in AWS which uses EC2, DynamoDB, S3, and CloudFront to process the payments. Since they are accepting credit card information from the users, they are required to be compliant with the Payment Card Industry Data Security Standard (PCI DSS). On a recent 3rd-party audit, it was found that the credit card numbers are not properly encrypted and hence, their system failed the PCI DSS compliance test. You were hired by the fintech startup to solve this issue so they can release the product in the market as soon as possible. In addition, you also have to improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content.
In this scenario, what is the best option to protect and encrypt the sensitive credit card information of the users and to improve the cache hit ratio of your CloudFront distribution?
Create an origin access control (OAC) and add it to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.
Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
(Correct)
Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
Add a custom SSL certificate to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.
Explanation
Field-level encryption adds an additional layer of security, along with HTTPS, that lets you protect specific data throughout system processing so that only certain applications can see it. Field-level encryption allows you to securely upload user-submitted sensitive information to your web servers. The sensitive information provided by your clients is encrypted at the edge closer to the user and remains encrypted throughout your entire application stack, ensuring that only applications that need the data—and have the credentials to decrypt it—are able to do so.
To use field-level encryption, you configure your CloudFront distribution to specify the set of fields in POST requests that you want to be encrypted, and the public key to use to encrypt them. You can encrypt up to 10 data fields in a request. Hence, the correct answer for this scenario is the option that says: Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
You can improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content; that is, by improving the cache hit ratio for your distribution. To increase your cache hit ratio, you can configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age. The shorter the cache duration, the more frequently CloudFront forwards another request to your origin to determine whether the object has changed and, if so, to get the latest version.
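For illustration only, here is a minimal sketch of setting a long Cache-Control max-age on origin content, assuming for the sake of the example that static assets are served from an S3 origin (the bucket, key, and max-age value are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Upload an asset with a long max-age so CloudFront can keep serving it
# from edge caches instead of returning to the origin.
with open("checkout.js", "rb") as body:
    s3.put_object(
        Bucket="payments-static-assets",       # hypothetical origin bucket
        Key="js/checkout.js",
        Body=body,
        ContentType="application/javascript",
        CacheControl="max-age=86400",          # longest practical value for this asset
    )
```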
The option that says: Add a custom SSL certificate to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio is incorrect. Although it provides secure end-to-end connections to origin servers, it is better to add field-level encryption to protect the credit card information.
The option that says: Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio is incorrect because a Signed URL provides a way to distribute private content but it doesn't encrypt the sensitive credit card information.
The option that says: Create an origin access control (OAC) and add it to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio is incorrect because an OAC is mainly used to restrict access to objects in an S3 bucket; it does not encrypt specific fields of a request.
References:
Check out this Amazon CloudFront Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 38: Correct
A company is hosting its production environment in AWS Fargate. To save costs, the Chief Information Officer (CIO) wants to deploy its new development environment workloads on its on-premises servers as this leverages existing capital investments. As the Solutions Architect, you have been tasked by the CIO to provide a solution that will:
have both on-premises and Fargate managed in the same cluster
easily migrate development environment workloads running on-premises to production environment running in AWS Fargate
ensure consistent tooling and API experience across container-based workloads
Which of the following is the MOST operationally efficient solution that meets these requirements?
Use Amazon EKS Anywhere to simplify on-premises Kubernetes management with default component configurations and automated cluster management tools. This makes it easy to migrate the development workloads running on-premises to EKS in an AWS region on Fargate.
Utilize Amazon ECS Anywhere to streamline software management on-premises and on AWS with a standardized container orchestrator. This makes it easy to migrate the development workloads running on-premises to ECS in an AWS region on Fargate.
(Correct)
Install and configure AWS Outposts in your on-premises data center. Run Amazon EKS Anywhere on AWS Outposts to launch container-based workloads. Migrate development workloads to production that is running on AWS Fargate.
Install and configure AWS Outposts in your on-premises data center. Run Amazon ECS on AWS Outposts to launch the development environment workloads. Migrate development workloads to production that is running on AWS Fargate.
Explanation
Amazon Elastic Container Service (ECS) Anywhere is a feature of Amazon ECS that lets you run and manage container workloads on your infrastructure. This feature helps you meet compliance requirements and scale your business without sacrificing your on-premises investments.
It also ensures consistency: you use the same Amazon ECS tools on-premises and after you migrate to AWS.
ECS Anywhere extends the reach of Amazon ECS to provide you with a single management interface for all of your container-based applications, irrespective of the environment they’re running in. As a result, you have a simple, consistent experience when it comes to cluster management, workload scheduling, and monitoring for both the cloud and on-premises. With ECS Anywhere, you do not need to install and maintain any container orchestration software, thus removing the need for your team to learn specialized knowledge domains and skillsets for disparate tooling.
ECS Anywhere makes it easy for you to run your applications in on-premises environments as long as desired and then migrate to the cloud with a single click at any time.
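For illustration only, here is a minimal boto3 sketch of the first step of ECS Anywhere onboarding: creating the SSM hybrid activation whose ID and code the install script then uses on each on-premises server. The IAM role name is a placeholder and is assumed to already exist with the required managed policies:

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # placeholder region

# Create a hybrid activation for the on-premises development servers.
activation = ssm.create_activation(
    Description="ECS Anywhere dev servers",   # hypothetical description
    IamRole="ecsAnywhereRole",                # assumed pre-created role
    RegistrationLimit=50,
)
print(activation["ActivationId"], activation["ActivationCode"])

# On each on-premises host, the ECS Anywhere install script uses these values
# to register the machine as an SSM managed instance and as an ECS container
# instance in the existing cluster (run outside of this sketch).
```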
Therefore, the correct answer is: Utilize Amazon ECS Anywhere to streamline software management on-premises and on AWS with a standardized container orchestrator. This makes it easy to migrate the development workloads running on-premises to ECS in an AWS region on Fargate as it can have on-premises compute, EC2 instances, and Fargate in the same ECS cluster, making it easy to migrate ECS workloads running on-premises to ECS in an AWS region on Fargate or EC2 in the future if necessary.
The option that says: Use Amazon EKS Anywhere to simplify on-premises Kubernetes management with default component configurations and automated cluster management tools. This makes it easy to migrate the development workloads running on-premises to EKS in an AWS region on Fargate is incorrect because Amazon EKS Anywhere is not designed to run in the AWS cloud. It does not integrate with the Kubernetes Cluster API Provider for AWS.
The following sets of options are incorrect because AWS Fargate is not available on AWS Outposts, and Amazon EKS Anywhere isn't designed to run on AWS Outposts:
- Install and configure AWS Outposts in your on-premises data center. Run Amazon ECS on AWS Outposts to launch the development environment workloads. Migrate development workloads to production that is running on AWS Fargate.
- Install and configure AWS Outposts in your on-premises data center. Run Amazon EKS Anywhere on AWS Outposts to launch container-based workloads. Migrate development workloads to production that is running on AWS Fargate.
References:
Question 40: Incorrect
A leading financial company is planning to launch its MERN (MongoDB, Express, React, Node.js) application with an Amazon RDS MariaDB database to serve its clients worldwide. The application will run on both on-premises servers as well as Reserved EC2 instances. To comply with the company's strict security policy, the database credentials must be encrypted both at rest and in transit. These credentials will be used by the application servers to connect to the database. The Solutions Architect is tasked to manage all of the aspects of the application architecture and production deployment.
How should the Architect automate the deployment process of the application in the MOST secure manner?
Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to all on-premises servers and EC2 instances. Use Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using Elastic Beanstalk.
Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Attach this IAM policy to the instance profile for CodeDeploy-managed EC2 instances. Associate the same policy as well to the on-premises instances. Using AWS CodeDeploy, launch the application packages to the Amazon EC2 instances and on-premises servers.
Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to the EC2 instances. Create an IAM Service Role that will be associated with the on-premises servers. Deploy the application packages to the EC2 instances and on-premises servers using AWS CodeDeploy.
(Correct)
Upload the database credentials with key rotation in AWS Secrets Manager. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to all on-premises servers and EC2 instances. Use Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using Elastic Beanstalk.
(Incorrect)
Explanation
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud.
Servers and virtual machines (VMs) in a hybrid environment require an IAM role to communicate with the Systems Manager service. The role grants AssumeRole trust to the Systems Manager service. You only need to create the service role for a hybrid environment once for each AWS account.
Users in your company or organization who will use Systems Manager on your hybrid machines must be granted permission in IAM to call the SSM API.
Service role: A service role is an AWS Identity and Access Management (IAM) role that grants permissions to an AWS service so that the service can access AWS resources. Only a few Systems Manager scenarios require a service role. When you create a service role for Systems Manager, you choose the permissions to grant in order for it to access or interact with other AWS resources.
Service-linked role: A service-linked role is predefined by Systems Manager and includes all the permissions that the service requires to call other AWS services on your behalf.
If you plan to use Systems Manager to manage on-premises servers and virtual machines (VMs) in what is called a hybrid environment, you must create an IAM role for those resources to communicate with the Systems Manager service.
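For illustration only, a minimal boto3 sketch of storing and reading the credentials as a SecureString parameter (the parameter name and value are hypothetical):

```python
import boto3

ssm = boto3.client("ssm")

# Store the credentials encrypted at rest (uses the default aws/ssm KMS key).
ssm.put_parameter(
    Name="/prod/db/credentials",
    Value='{"username":"app","password":"REPLACE_ME"}',
    Type="SecureString",
    Overwrite=True,
)

# At deploy time, CodeDeploy lifecycle hooks (or the application itself)
# read and decrypt the value through the instance's IAM role or the
# hybrid-environment service role.
result = ssm.get_parameter(Name="/prod/db/credentials", WithDecryption=True)
db_credentials = result["Parameter"]["Value"]
```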
Hence, the correct answer is: Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to the EC2 instances. Create an IAM Service Role that will be associated with the on-premises servers. Deploy the application packages to the EC2 instances and on-premises servers using AWS CodeDeploy.
The option that says: Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to all on-premises servers and EC2 instances. Use Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using Elastic Beanstalk is incorrect. You can't deploy an application to your on-premises servers using Elastic Beanstalk. This is only applicable to your Amazon EC2 instances.
The option that says: Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Attach this IAM policy to the instance profile for CodeDeploy-managed EC2 instances. Associate the same policy as well to the on-premises instances. Using AWS CodeDeploy, launch the application packages to the Amazon EC2 instances and on-premises servers is incorrect. You have to use an IAM Role and not an IAM Policy to grant access to AWS Systems Manager Parameter Store.
The option that says: Upload the database credentials with key rotation in AWS Secrets Manager. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to all on-premises servers and EC2 instances. Use Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using Elastic Beanstalk is incorrect. Although you can store the database credentials to AWS Secrets Manager, you still can't deploy an application to your on-premises servers using Elastic Beanstalk.
References:
Check out this AWS Systems Manager Cheat Sheet:
Question 41: Correct
A data analytics startup has been chosen to develop a data analytics system that will track all statistics in the Fédération Internationale de Football Association (FIFA) World Cup, which will also be used by other 3rd-party analytics sites. The system will record, store and provide statistical data reports about the top scorers, goal scores for each team, average goals, average passes, average yellow/red cards per match, and many other details. FIFA fans all over the world will frequently access the statistics reports every day and thus, it should be durably stored, highly available, and highly scalable. In addition, the data analytics system will allow the users to vote for the best male and female FIFA player as well as the best male and female coach. Due to the popularity of the FIFA World Cup event, it is projected that there will be over 10 million queries on game day and could spike to 30 million queries over the course of time.
Which of the following is the most cost-effective solution that will meet these requirements?
1. Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Generate the FIFA reports by querying the Read Replica.
3. Configure a daily job that performs a daily table cleanup.
1. Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Set up a batch job that puts reports in an S3 bucket.
3. Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.
(Correct)
1. Launch a Multi-AZ MySQL RDS instance.
2. Query the RDS instance and store the results in a DynamoDB table.
3. Generate reports from DynamoDB table.
4. Delete the old DynamoDB tables every day.
1. Launch a MySQL database in Multi-AZ RDS deployments configuration.
2. Configure the application to generate reports from ElastiCache to improve the read performance of the system.
3. Utilize the default expire parameter for items in ElastiCache.
Explanation
In this scenario, you are required to have the following:
A durable storage for the generated reports.
A database that is highly available and can scale to handle millions of queries.
A Content Delivery Network that can distribute the report files to users all over the world.
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. It’s a simple storage service that offers industry leading durability, availability, performance, security, and virtually unlimited scalability at very low costs.
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups.
Amazon RDS uses the MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL DB engines' built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. The source DB instance becomes the primary DB instance. Updates made to the primary DB instance are asynchronously copied to the read replica.
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
Hence, the following option is the best solution that satisfies all of these requirements:
1. Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Set up a batch job that puts reports in an S3 bucket.
3. Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.
In the above, S3 provides the durable storage, Multi-AZ RDS with Read Replicas provides a scalable and highly available database, and CloudFront provides the CDN.
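For illustration only, a minimal boto3 sketch of adding a read replica to the Multi-AZ primary so that report generation does not compete with transactional traffic (the DB identifiers and instance class are hypothetical):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # placeholder region

# Create a read replica from the existing Multi-AZ MySQL primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="fifa-stats-replica-1",      # new replica
    SourceDBInstanceIdentifier="fifa-stats-primary",  # existing primary instance
    DBInstanceClass="db.r5.large",
)
```

The batch job would then query the replica, write the generated reports to the S3 bucket, and let the CloudFront distribution cache them with a daily TTL.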
The following option is incorrect:
1. Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Generate the FIFA reports by querying the Read Replica.
3. Configure a daily job that performs a daily table cleanup.
Although the database is scalable and highly available, it neither has any durable data storage nor a CDN.
The following option is incorrect:
1. Launch a MySQL database in Multi-AZ RDS deployments configuration.
2. Configure the application to generate reports from ElastiCache to improve the read performance of the system.
3. Utilize the default expire parameter for items in ElastiCache.
Although this option handles and provides a better read capability for the system, it is still lacking a durable storage and a CDN.
The following option is incorrect:
1. Launch a Multi-AZ MySQL RDS instance.
2. Query the RDS instance and store the results in a DynamoDB table.
3. Generate reports from DynamoDB table.
4. Delete the old DynamoDB tables every day.
The above is not a cost-effective solution because it maintains both an RDS database and a DynamoDB table.
References:
Check out this Amazon RDS Cheat Sheet:
Question 47: Correct
A company has production, development, and test environments in its software development department, and each environment contains tens to hundreds of EC2 instances, along with other AWS services. Recently, Ubuntu released a series of security patches for a critical flaw that was detected in their OS. Although this is an urgent matter, there is no guarantee yet that these patches will be bug-free and production-ready hence, the company must immediately patch all of its affected Amazon EC2 instances in all the environments, except for the production environment. The EC2 instances in the production environment will only be patched after it has been verified that the patches work effectively. Each environment also has different baseline patch requirements that needed to be satisfied.
Using the AWS Systems Manager service, how should you perform this task with the least amount of effort?
Schedule a maintenance period in AWS Systems Manager Maintenance Windows for each environment, where the period is after business hours so as not to affect daily operations. During the maintenance period, Systems Manager will execute a cron job that will install the required patches for each EC2 instance in each environment. After that, verify in Systems Manager Managed Instances that your environments are fully patched and compliant.
Tag each instance based on its environment and OS. Create various shell scripts for each environment that specifies which patch will serve as its baseline. Using AWS Systems Manager Run Command, place the EC2 instances into Target Groups and execute the script corresponding to each Target Group.
Tag each instance based on its OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and then apply the patches specified in the corresponding patch baseline to each Patch Group. Afterward, verify that the patches have been installed correctly using Patch Compliance. Record the changes to patch and association compliance statuses using AWS Config.
Tag each instance based on its environment and OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and apply the patches specified in the corresponding patch baseline to each Patch Group.
(Correct)
Explanation
AWS Systems Manager Patch Manager automates the process of patching managed instances with security-related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type.
Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling patching to run as a Systems Manager Maintenance Window task. You can also install patches individually or to large groups of instances by using Amazon EC2 tags. For each auto-approval rule that you create, you can specify an auto-approval delay. This delay is the number of days of wait after the patch was released, before the patch is automatically approved for patching.
A patch group is an optional means of organizing instances for patching. For example, you can create patch groups for different operating systems (Linux or Windows), different environments (Development, Test, and Production), or different server functions (web servers, file servers, databases). Patch groups can help you avoid deploying patches to the wrong set of instances. They can also help you avoid deploying patches before they have been adequately tested. You create a patch group by using Amazon EC2 tags. Unlike other tagging scenarios across Systems Manager, a patch group must be defined with the tag key: Patch Group. After you create a patch group and tag instances, you can register the patch group with a patch baseline. By registering the patch group with a patch baseline, you ensure that the correct patches are installed during the patching execution.
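For illustration only, a minimal boto3 sketch of this flow (the baseline name, instance ID, and patch group name are placeholders, and the baseline's approval rules are left to your own environment requirements):

```python
import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

# Tag an instance into the Development patch group.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],                 # placeholder instance ID
    Tags=[{"Key": "Patch Group", "Value": "Development"}],
)

# Create an environment-specific baseline for Ubuntu and register it
# with the Development patch group.
baseline = ssm.create_patch_baseline(
    Name="ubuntu-dev-baseline",                        # hypothetical name
    OperatingSystem="UBUNTU",
    Description="Baseline for the development environment",
)
ssm.register_patch_baseline_for_patch_group(
    BaselineId=baseline["BaselineId"],
    PatchGroup="Development",
)
```

A separate baseline and patch group would be created for each of the other environments, leaving Production unpatched until the patches are verified.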
Hence, the correct answer is: Tag each instance based on its environment and OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and apply the patches specified in the corresponding patch baseline to each Patch Group.
The option that says: Tag each instance based on its environment and OS. Create various shell scripts for each environment that specifies which patch will serve as its baseline. Using AWS Systems Manager Run Command, place the EC2 instances into Target Groups and execute the script corresponding to each Target Group is incorrect as this option takes more effort to perform because you are using Systems Manager Run Command instead of Patch Manager. The Run Command service enables you to automate common administrative tasks and perform ad hoc configuration changes at scale, however, it takes a lot of effort to implement this solution. You can use Patch Manager instead to perform the task required by the scenario since you need to perform this task with the least amount of effort.
The option that says: Tag each instance based on its OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and then apply the patches specified in the corresponding patch baseline to each Patch Group. Afterward, verify that the patches have been installed correctly using Patch Compliance. Record the changes to patch and association compliance statuses using AWS Config is incorrect. You should be tagging instances based on the environment and its OS type in which they belong and not just its OS type. This is because the type of patches that will be applied varies between the different environments. With this option, the Ubuntu EC2 instances in all of your environments, including in production, will automatically be patched.
The option that says: Schedule a maintenance period in AWS Systems Manager Maintenance Windows for each environment, where the period is after business hours so as not to affect daily operations. During the maintenance period, Systems Manager will execute a cron job that will install the required patches for each EC2 instance in each environment. After that, verify in Systems Manager Managed Instances that your environments are fully patched and compliant is incorrect because this is not the simplest way to address the issue using AWS Systems Manager. The AWS Systems Manager Maintenance Windows feature lets you define a schedule for when to perform potentially disruptive actions on your instances such as patching an operating system, updating drivers, or installing software or patches. Each Maintenance Window has a schedule, a maximum duration, a set of registered targets (the instances that are acted upon), and a set of registered tasks. Although this solution may work, it entails a lot of configuration and effort to implement.
References:
Check out this AWS Systems Manager Cheat Sheet:
Question 48: Incorrect
A company has several AWS accounts that are managed using AWS Organizations. The company created only one organizational unit (OU) so all child accounts are members of the Production OU. The Solutions Architects control access to certain AWS services using SCPs that define the restricted services. The SCPs are attached at the root of the organization so that they will be applied to all AWS accounts under the organization. The company recently acquired a small business firm and its existing AWS account was invited to join the organization. Upon onboarding, the administrators of the small business firm cannot apply the required AWS Config rules to meet the parent company’s security policies.
Which of the following options will allow the administrators to update the AWS Config rules on their AWS account without introducing long-term management overhead?
Remove the SCPs on the organization’s root and apply them to the Production OU instead. Create a temporary Onboarding OU that has an attached SCP allowing changes to AWS Config. Add the new account to this temporary OU and make the required changes before moving it to Production OU.
(Correct)
Instead of using a “deny list” to AWS services on the organization’s root SCPs, use an “allow list” to allow only the required AWS services. Temporarily add the AWS Config service on the “allow list” for the principals of the new account and make the required changes.
Update the SCPs applied in the root of the AWS organization and remove the rule that restricts changes to the AWS Config service. Deploy a new AWS Service Catalog to the whole organization containing the company’s AWS Config policies.
Add the new account to a temporary Onboarding organization unit (OU) that has an attached SCP allowing changes to AWS Config. Perform the needed changes while on this temporary OU before moving the new account to Production OU.
(Incorrect)
Explanation
AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new AWS accounts and allocate resources, group accounts to organize your workflows, apply policies to accounts or groups for governance, and simplify billing by using a single payment method for all of your accounts.
With AWS Organizations, you can consolidate multiple AWS accounts into an organization that you create and centrally manage. You can create member accounts and invite existing accounts to join your organization. You can organize those accounts into groups and attach policy-based controls.
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines. SCPs are available only in an organization that has all features enabled.
An SCP restricts permissions for IAM users and roles in member accounts, including the member account's root user. Any account has only those permissions allowed by every parent above it. If a permission is blocked at any level above the account, either implicitly (by not being included in an Allow policy statement) or explicitly (by being included in a Deny policy statement), a user or role in the affected account can't use that permission, even if the account administrator attaches the AdministratorAccess IAM policy with */* permissions to the user.
AWS strongly recommends that you don't attach SCPs to the root of your organization without thoroughly testing the impact that the policy has on accounts. Instead, create an OU that you can move your accounts into one at a time, or at least in small numbers, to ensure that you don't inadvertently lock users out of key services.
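For illustration only, a minimal boto3 sketch of this restructuring (the root ID, OU IDs, account ID, and SCP content are placeholders):

```python
import json
import boto3

org = boto3.client("organizations")

# Create the temporary Onboarding OU under the organization root.
onboarding_ou = org.create_organizational_unit(
    ParentId="r-examplerootid",               # placeholder root ID
    Name="Onboarding",
)

# Attach the restrictive SCP to the Production OU instead of the root.
scp = org.create_policy(
    Name="deny-config-changes",
    Description="Restrict AWS Config changes in production accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Deny", "Action": "config:Put*", "Resource": "*"}],
    }),
)
org.attach_policy(
    PolicyId=scp["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-prod-example",               # placeholder Production OU ID
)

# After the new account finishes its AWS Config changes, move it to Production.
org.move_account(
    AccountId="111122223333",                 # placeholder account ID
    SourceParentId=onboarding_ou["OrganizationalUnit"]["Id"],
    DestinationParentId="ou-prod-example",
)
```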
Therefore, the correct answer is: Remove the SCPs on the organization’s root and apply them to the Production OU instead. Create a temporary Onboarding OU that has an attached SCP allowing changes to AWS Config. Add the new account to this temporary OU and make the required changes before moving it to Production OU. It is not recommended to attach the SCPs to the root of the organization, so it is better to move all the SCPs to the Production OU. This way, the temporary Onboarding OU can have an independent SCP to allow the required changes on AWS Config. Then, you can move the new AWS account to the Production OU.
The option that says: Update the SCPs applied in the root of the AWS organization and remove the rule that restricts changes to the AWS Config service. Deploy a new AWS Service Catalog to the whole organization containing the company’s AWS Config policies is incorrect. Although AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS, this will cause possible problems in the future for the administrators. Removing the AWS Config restriction on the root of the AWS organization's SCP will allow all Admins on all AWS accounts to manage/change/update their own AWS Config rules.
The option that says: Add the new account to a temporary Onboarding organization unit (OU) that has an attached SCP allowing changes to AWS Config. Perform the needed changes while on this temporary OU before moving the new account to Production OU is incorrect. If the SCP applied on the organization's root has a "deny" permission, all OUs under the organization will inherit that rule. You cannot override an explicit "deny" permission with an explicit "allow" applied to the temporary Onboarding OU.
The option that says: Instead of using a “deny list” to AWS services on the organization’s root SCPs, use an “allow list” to allow only the required AWS services. Temporarily add the AWS Config service on the “allow list” for the principals of the new account and make the required changes is incorrect. This is possible, however, it will cause more management problems in the future as you will have to update the "allow list" for any service that users may require in the future.
References:
Check out these AWS Organizations and SCP Comparison Cheat Sheets:
Question 50: Correct
A tech company plans to host a website using an Amazon S3 bucket. The solutions architect created a new S3 bucket called "www.tutorialsdojo.com" in the us-west-2 AWS region, enabled static website hosting, and uploaded the static web content files including the index.html file. The custom domain www.tutorialsdojo.com has been registered using Amazon Route 53 to be associated with the S3 bucket. The next day, a new Route 53 Alias record set was created which points to the S3 website endpoint: http://www.tutorialsdojo.com.s3-website-us-west-2.amazonaws.com. Upon testing, users cannot see any content on the bucket. Both the domains tutorialsdojo.com and www.tutorialsdojo.com do not work properly.
Which of the following is the MOST likely cause of this issue that the Architect should fix?
Route 53 is still propagating the domain name changes. Wait for another 12 hours and then try again.
The site does not work because you have not set a value for the error.html file, which is a required step.
The S3 bucket does not have public read access which blocks the website visitors from seeing the content.
(Correct)
The site will not work because the URL does not include a file name at the end. This means that you need to use this URL instead: www.tutorialsdojo.com/index.html
Explanation
You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts. To host a static website, you configure an Amazon S3 bucket for website hosting, and then upload your website content to the bucket. This bucket must have public read access. It is intentional that everyone in the world will have read access to this bucket.
When you configure an Amazon S3 bucket for website hosting, you must give the bucket the same name as the record that you want to use to route traffic to the bucket. For example, if you want to route traffic for example.com to an S3 bucket that is configured for website hosting, the name of the bucket must be example.com.
If you want to route traffic to an S3 bucket that is configured for website hosting but the name of the bucket doesn't appear in the Alias Target list in the Amazon Route 53 console, check the following:
- The name of the bucket exactly matches the name of the record, such as tutorialsdojo.com or www.tutorialsdojo.com.
- The S3 bucket is correctly configured for website hosting.
In this scenario, the static S3 website does not work because the bucket does not have public read access.
Therefore, the correct answer is: The S3 bucket does not have public read access which blocks the website visitors from seeing the content. This is the root cause of the issue that makes the static S3 website inaccessible.
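For illustration only, a minimal boto3 sketch of granting the public read access (only do this for buckets that are intentionally public; the bucket name matches the scenario):

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "www.tutorialsdojo.com"

# Turn off the account-default Block Public Access settings for this bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# Attach a bucket policy that allows anyone to read the website objects.
s3.put_bucket_policy(
    Bucket=bucket,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }),
)
```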
The option that says: The site does not work because you have not set a value for the error.html file, which is a required step is incorrect as the error.html is not required and won't affect the availability of the static S3 website.
The option that says: The site will not work because the URL does not include a file name at the end. This means that you need to use this URL instead: www.tutorialsdojo.com/index.html is incorrect because you are not required to manually append the file name to the URL; the configured index document (index.html) is returned automatically for the root URL.
The option that says: Route 53 is still propagating the domain name changes. Wait for another 12 hours and then try again is incorrect as the Route 53 domain name propagation does not take that long. Remember that Amazon Route 53 is designed to propagate updates you make to your DNS records to its worldwide network of authoritative DNS servers within 60 seconds under normal conditions.
References:
Check out this Amazon S3 Cheat Sheet:
Question 52: Correct
A company currently hosts its online immigration system on one large Amazon EC2 instance with attached EBS volumes to store all of the applicants' data. The registration system accepts the information from the user including documents and photos and then performs automated verification and processing to check if the applicant is eligible for immigration. The immigration system becomes unavailable at times when there is a surge of applicants using the system. The existing architecture needs improvement as it takes a long time for the system to complete the processing and the attached EBS volumes are not enough to store the ever-growing data being uploaded by the users.
Which of the following options is the recommended option to achieve high availability and more scalable data storage?
Use SNS to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue.
Upgrade your architecture to use an S3 bucket with cross-region replication (CRR) enabled, as the storage service. Set up an SQS queue to distribute the tasks to a group of EC2 instances with Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue. Use CloudFormation to replicate your architecture to another region.
(Correct)
Upgrade to EBS with Provisioned IOPS as your main storage service and change your architecture to use an SQS queue to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue.
Use EBS with Provisioned IOPS to store files, SNS to distribute tasks to a group of EC2 instances working in parallel, and Auto Scaling to dynamically size the group of EC2 instances depending on the number of SNS notifications. Use CloudFormation to replicate your architecture to another region.
Explanation
In this scenario, you need to overhaul the existing immigration service to upgrade its storage and computing capacity. Since EBS Volumes can only provide limited storage capacity and are not scalable, you should use S3 instead. The system goes down at times when there is a surge of requests which indicates that the existing large EC2 instance could not handle the requests any longer. In this case, you should implement a highly-available architecture and a queueing system with SQS and Auto Scaling.
The option that says: Upgrade your architecture to use an S3 bucket with cross-region replication (CRR) enabled, as the storage service. Set up an SQS queue to distribute the tasks to a group of EC2 instances with Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue. Use CloudFormation to replicate your architecture to another region is correct. This option provides high availability and scalable data storage with S3. Auto-scaling of EC2 instances reduces the overall processing time and SQS helps in distributing the tasks to a group of EC2 instances.
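For illustration only, a minimal boto3 sketch of scaling the worker fleet on queue depth (the Auto Scaling group and queue names are hypothetical; AWS also documents a more precise "backlog per instance" custom-metric variation of this pattern):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: keep the number of visible messages in the queue
# around the target value, scaling out during surges and back in afterwards.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="immigration-workers",       # hypothetical ASG
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "applicant-processing"}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,   # tune to the per-instance processing throughput
    },
)
```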
The option that says: Use EBS with Provisioned IOPS to store files, SNS to distribute tasks to a group of EC2 instances working in parallel, and Auto Scaling to dynamically size the group of EC2 instances depending on the number of SNS notifications. Use CloudFormation to replicate your architecture to another region is incorrect because EBS is not an easily scalable and durable storage solution compared to Amazon S3. Using SQS is more suitable in distributing the tasks to an Auto Scaling group of EC2 instances and not SNS.
The option that says: Use SNS to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue is incorrect because SNS is not a valid choice in this scenario. Using SQS is more suitable in distributing the tasks to an Auto Scaling group of EC2 instances and not SNS.
The option that says: Upgrade to EBS with Provisioned IOPS as your main storage service and change your architecture to use an SQS queue to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue is incorrect. Having a large EBS volume attached to each of the EC2 instances in the Auto Scaling group is not economical, and it would be hard to keep the growing data in sync across these EBS volumes. You should use S3 instead.
References:
Check out this Amazon S3 Cheat Sheet:
Question 56: Correct
A company has launched a company-wide bug bounty program to find and patch security vulnerabilities in its web applications as well as the underlying cloud resources. As the solutions architect, you are focused on checking system vulnerabilities on AWS resources for DDoS attacks. Due to budget constraints, the company cannot afford to enable AWS Shield Advanced to prevent higher-level attacks.
Which of the following are the best techniques to help mitigate Distributed Denial of Service (DDoS) attacks for cloud infrastructure hosted in AWS? (Select TWO.)
Use Reserved EC2 instances to ensure that each instance has the maximum performance possible. Use AWS WAF to protect your web applications from common web exploits that could affect application availability.
Use an Application Load Balancer (ALB) to reduce the risk of overloading your application by distributing traffic across many backend instances. Integrate AWS WAF and the ALB to protect your web applications from common web exploits that could affect application availability.
(Correct)
Use S3 as a POSIX-compliant storage instead of EBS Volumes for storing data. Install the SSM agent to all of your instances and use AWS Systems Manager Patch Manager to automatically patch your instances.
Use an Amazon CloudFront distribution for both static and dynamic content of your web applications. Add CloudWatch alerts to automatically monitor and notify the Operations team for high CPUUtilization and NetworkIn metrics, as well as to trigger Auto Scaling of your EC2 instances.
(Correct)
Add multiple Elastic Network Interfaces to each EC2 instance and use Enhanced Networking to increase the network bandwidth.
Explanation
The following options are the correct answers in this scenario as they can help mitigate the effects of DDoS attacks:
- Use an Amazon CloudFront distribution for both static and dynamic content of your web applications. Add CloudWatch alerts to automatically monitor and notify the Operations team for high CPUUtilization and NetworkIn metrics, as well as to trigger Auto Scaling of your EC2 instances.
- Use an Application Load Balancer (ALB) to reduce the risk of overloading your application by distributing traffic across many backend instances. Integrate AWS WAF and the ALB to protect your web applications from common web exploits that could affect application availability.
Amazon CloudFront is a content delivery network (CDN) service that can be used to deliver your entire website, including static, dynamic, streaming, and interactive content. Persistent TCP connections and variable time-to-live (TTL) can be used to accelerate delivery of content, even if it cannot be cached at an edge location. This allows you to use Amazon CloudFront to protect your web application, even if you are not serving static content. Amazon CloudFront only accepts well-formed connections to prevent many common DDoS attacks like SYN floods and UDP reflection attacks from reaching your origin.
Larger DDoS attacks can exceed the size of a single Amazon EC2 instance. To mitigate these attacks, you will want to consider options for load balancing excess traffic. With Elastic Load Balancing (ELB), you can reduce the risk of overloading your application by distributing traffic across many backend instances. ELB can scale automatically, allowing you to manage larger volumes of unanticipated traffic, like flash crowds or DDoS attacks.
Another way to deal with application layer attacks is to operate at scale. In the case of web applications, you can use ELB to distribute traffic to many Amazon EC2 instances that are overprovisioned or configured to auto scale for the purpose of serving surges of traffic, whether it is the result of a flash crowd or an application layer DDoS attack. Amazon CloudWatch alarms are used to initiate Auto Scaling, which automatically scales the size of your Amazon EC2 fleet in response to events that you define. This protects application availability even when dealing with an unexpected volume of requests.
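For illustration only, a minimal boto3 sketch of such a CloudWatch alarm (the Auto Scaling group name and SNS topic ARN are placeholders; the same alarm can also be wired to a scale-out policy):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on sustained high CPU across the Auto Scaling group; notify Operations.
cloudwatch.put_metric_alarm(
    AlarmName="web-tier-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-tier-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[
        "arn:aws:sns:us-east-1:111122223333:ops-alerts",   # placeholder SNS topic
        # add the scale-out policy ARN here to trigger Auto Scaling as well
    ],
)
```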
The option that says: Use S3 as a POSIX-compliant storage instead of EBS Volumes for storing data. Install the SSM agent to all of your instances and use AWS Systems Manager Patch Manager to automatically patch your instances is incorrect because using S3 instead of EBS Volumes is mainly for addressing scalability to your storage requirements and not for avoiding DDoS attacks. In addition, Amazon S3 is not a POSIX-compliant storage.
The option that says: Add multiple Elastic Network Interfaces to each EC2 instance and use Enhanced Networking to increase the network bandwidth is incorrect. Even if you add multiple ENIs and are using Enhanced Networking to increase the network throughput of the instances, the CPU of the instance will be saturated with the DDoS requests which will cause the application to be unresponsive.
The option that says: Use Reserved EC2 instances to ensure that each instance has the maximum performance possible. Use AWS WAF to protect your web applications from common web exploits that could affect application availability is incorrect because Reserved EC2 Instances are a purchasing option that lowers cost; they do not provide any additional computing performance compared to equivalent On-Demand instances.
References:
Check out this Amazon CloudFront Cheat Sheet:
Best practices on DDoS Attack Mitigation:
Question 57: Correct
A multinational financial firm plans to do a multi-regional deployment of its cryptocurrency trading application that’s being heavily used in the US and in Europe. The containerized application uses Kubernetes and has Amazon DynamoDB Global Tables as a centralized database to store and sync the data from two regions.
The architecture has distributed computing resources with several public-facing Application Load Balancers (ALBs). The Network team of the firm manages the public DNS internally and wishes to make the application available through an apex domain for easier access. S3 Multi-Region Access Points are also used for object storage workloads and hosting static assets.
Which is the MOST operationally efficient solution that the Solutions Architect should implement to meet the above requirements?
Set up an AWS Transit Gateway with a multicast domain that targets specific ALBs on the required AWS Regions. Create a public record in Amazon Route 53 using the static IP address of the AWS Transit Gateway.
Set up an AWS Global Accelerator, which has several endpoint groups that target specific endpoints and ALBs on the required AWS Regions. Create a public alias record in Amazon Route 53 that points your custom domain name to the DNS name assigned to your accelerator.
(Correct)
Launch an AWS Global Accelerator with several endpoint groups that target the ALBs in all the relevant AWS Regions. Create an Amazon Route 53 Resolver Inbound Endpoint that points your custom domain name to the CNAME assigned to your accelerator.
Launch an AWS Transit Gateway that targets specific ALBs on the required AWS Regions. Create a CNAME record in Amazon Route 53 that directly points your custom domain name to the DNS name assigned to the AWS Transit Gateway.
Explanation
AWS Global Accelerator is a service in which you create accelerators to improve the performance of your applications for local and global users. Depending on the type of accelerator you choose, you can gain additional benefits:
- With a standard accelerator, you can improve the availability of your internet applications that are used by a global audience. With a standard accelerator, Global Accelerator directs traffic over the AWS global network to endpoints in the nearest Region to the client.
- With a custom routing accelerator, you can map one or more users to a specific destination among many destinations.
For standard accelerators, Global Accelerator uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and policies that you configure, which increases the availability of your applications. Endpoints for standard accelerators can be Network Load Balancers, Application Load Balancers, Amazon EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions. The service reacts instantly to changes in health or configuration to ensure that internet traffic from clients is always directed to healthy endpoints.
Custom routing accelerators only support virtual private cloud (VPC) subnet endpoint types and route traffic to private IP addresses in that subnet.
In most scenarios, you can configure DNS to use your custom domain name (such as www.tutorialsdojo.com) with your accelerator instead of using the assigned static IP addresses or the default DNS name. First, using Amazon Route 53 or another DNS provider, create a domain name, and then add or update DNS records with your Global Accelerator IP addresses. Or you can associate your custom domain name with the DNS name for your accelerator. Complete the DNS configuration and wait for the changes to propagate over the internet. Now when a client makes a request using your custom domain name, the DNS server resolves it to the IP addresses in random order or to the DNS name for your accelerator.
To use your custom domain name with Global Accelerator when you use Route 53 as your DNS service, you create an alias record that points your custom domain name to the DNS name assigned to your accelerator. An alias record is a Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root domain, such as example.com, and for subdomains, such as www.tutorialsdojo.com.
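For illustration only, a minimal boto3 sketch of the apex alias record, assuming the accelerator already exists (the hosted zone ID, domain, accelerator DNS name, and the Global Accelerator alias hosted zone ID below are all placeholders to replace with your own values):

```python
import boto3

route53 = boto3.client("route53")

# Alias A record at the zone apex pointing to the accelerator's DNS name.
route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",                 # your public hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",               # apex domain
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z0000000000000", # Global Accelerator alias hosted zone ID (placeholder)
                "DNSName": "a1234567890abcdef.awsglobalaccelerator.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
```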
Hence, the correct answer is: Set up an AWS Global Accelerator, which has several endpoint groups that target specific endpoints and ALBs on the required AWS Regions. Create a public alias record in Amazon Route 53 that points your custom domain name to the DNS name assigned to your accelerator.
The option that says: Set up an AWS Transit Gateway with a multicast domain that targets specific ALBs on the required AWS Regions. Create a public record in Amazon Route 53 using the static IP address of the AWS Transit Gateway is incorrect because an AWS Transit Gateway is not meant to be used to distribute traffic to multiple ALBs. In addition, a multicast domain is primarily used in delivering a single stream of data to multiple receiving computers simultaneously. Transit Gateway supports routing multicast traffic between subnets of attached VPCs and not ALBs. You have to use an AWS Global Accelerator instead since you can configure an accelerator to have several endpoint groups that point to multiple ALBs.
The option that says: Launch an AWS Transit Gateway that targets specific ALBs on the required AWS Regions. Create a CNAME record in Amazon Route 53 that directly points your custom domain name to the DNS name assigned to the AWS Transit Gateway is incorrect. As mentioned in the above rationale, an AWS Transit Gateway is not suitable to be used to integrate all the ALBs. Creating a CNAME record is also not right since the scenario explicitly mentioned that you have to use the apex domain. Remember that a CNAME record cannot be used for apex domain configuration in Amazon Route 53. You have to create a public alias record instead.
The option that says: Launch an AWS Global Accelerator with several endpoint groups that target the ALBs in all the relevant AWS Regions. Create an Amazon Route 53 Resolver Inbound Endpoint that points your custom domain name to the CNAME assigned to your accelerator is incorrect. Although the use of AWS Global Accelerator is a valid solution, creating a Route53 Resolver Inbound Endpoint is irrelevant. An Inbound Resolver endpoint simply allows DNS queries to your Amazon VPCs from your on-premises network or another VPC.
References:
Check out this AWS Global Accelerator Cheat Sheet:
Question 58: Incorrect
A media company processes and converts its video collection using the AWS Cloud. The videos are processed by an Auto Scaling group of Amazon EC2 instances which scales based on the number of videos on the Amazon Simple Queue Service (SQS) queue. Each video takes about 20-40 minutes to be processed.
To ensure videos are processed, the management has set a redrive policy on the SQS queue to be used as a dead-letter queue. The visibility timeout has been set to 1 hour and the maxReceiveCount has been set to 1. When there are messages on the dead-letter queue, an Amazon CloudWatch alarm has been set up to notify the development team.
Within a few days of operation, the dead-letter queue received several videos that failed to process. The development team received notifications of messages on the dead-letter queue, but they did not find any operational errors in the application logs.
Which of the following options should the solutions architect implement to help solve the above problem?
Configure a higher delivery delay setting on the Amazon SQS queue. This will give the consumers more time to pick up the messages on the SQS queue.
Reconfigure the SQS redrive policy and set maxReceiveCount to 10. This will allow the consumers to retry the messages before sending them to the dead-letter queue.
(Correct)
The videos were not processed because the Amazon EC2 scale-up process takes too long. Set a minimum number of EC2 instances on the Auto Scaling group to solve this.
Some of the videos took longer than 1 hour to process. Update the visibility timeout for the Amazon SQS queue to 2 hours to solve this problem.
(Incorrect)
Explanation
Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate unconsumed messages to determine why their processing doesn't succeed.
Occasionally, producers and consumers might fail to interpret aspects of the protocol that they use to communicate, causing message corruption or loss. Also, the consumer's hardware errors might corrupt message payload. If a message can't be consumed successfully, you can send it to a dead-letter queue (DLQ). Dead-letter queues let you isolate problematic messages to determine why they are failing.
The Maximum receives value determines when a message will be sent to the DLQ. If the ReceiveCount for a message exceeds the maximum receive count for the queue, Amazon SQS moves the message to the associated DLQ (with its original message ID).
As you redrive your messages, the redrive status will show you the most recent message redrive status for your dead-letter queue.
The redrive policy specifies the source queue, the dead-letter queue, and the conditions under which Amazon SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times. The maxReceiveCount is the number of times a consumer can receive a message from a queue without deleting it before the message is moved to the dead-letter queue. Setting the maxReceiveCount to a low value, such as 1, causes any failure to receive a message to move the message to the dead-letter queue. Such failures include network errors and client dependency errors.
The redrive allow policy specifies which source queues can access the dead-letter queue. This policy applies to a potential dead-letter queue. You can choose whether to allow all source queues, allow specific source queues, or deny all source queues. The default is to allow all source queues to use the dead-letter queue.
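For illustration, a minimal boto3 sketch of applying such a redrive policy to the source queue is shown below; the queue URL and dead-letter queue ARN are hypothetical placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL and DLQ ARN: substitute your own.
source_queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs"
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:video-jobs-dlq"

# Raise maxReceiveCount so transient receive failures are retried
# before a message lands in the dead-letter queue.
sqs.set_queue_attributes(
    QueueUrl=source_queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "10"}
        ),
        # Visibility timeout (seconds) should stay longer than the worst-case
        # processing time; 1 hour covers the 20-40 minute jobs in the scenario.
        "VisibilityTimeout": "3600",
    },
)
```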
Therefore, the correct answer is: Reconfigure the SQS redrive policy and set maxReceiveCount to 10. This will allow the consumers to retry the messages before sending them to the dead-letter queue. This setting ensures that any message that fails to be processed is returned to the queue to be picked up by other consumers and re-processed.
The option that says: The videos were not processed because the Amazon EC2 scale-up process takes too long. Set a minimum number of EC2 instances on the Auto Scaling group to solve this is incorrect. The Auto Scaling group already responds to the number of messages on the queue; setting a fixed minimum number of instances is not cost-effective when there are no messages on the SQS queue.
The option that says: Some of the videos took longer than 1 hour to process. Update the visibility timeout for the Amazon SQS queue to 2 hours to solve this problem is incorrect. Even if the visibility timeout is set longer, videos that fail to process should still be retried by other consumers, which is why a higher maxReceiveCount setting is a much better option.
The option that says: Configure a higher delivery delay setting on the Amazon SQS queue. This will give the consumers more time to pick up the messages on the SQS queue is incorrect. This setting does not affect the videos that were already on the queue and had been picked up for processing but failed to process completely.
References:
Check out these Amazon SQS and AWS Auto Scaling Cheat Sheets:
Question 59: Correct
A company runs several clusters of Amazon EC2 instances in AWS. An unusual API activity and port scanning in the VPC have been identified by the security team. They noticed that there are multiple port scans being triggered to the EC2 instances from a specific IP address. To fix the issue immediately, the solutions architect has decided to simply block the offending IP address. The solutions architect is also instructed to fortify their existing cloud infrastructure security from the most frequently occurring network and transport layer DDoS attacks.
Which of the following is the most suitable method to satisfy the above requirement in AWS?
Deny access from the IP Address block by adding a specific rule to all of the Security Groups. Use a combination of AWS WAF and AWS Config to protect your cloud resources against common web attacks.
Block the offending IP address using Route 53. Use Amazon Macie to automatically discover, classify, and protect sensitive data in AWS, including DDoS attacks.
Change the Windows Firewall settings to deny access from the IP address block. Use Amazon GuardDuty to detect potentially compromised instances or reconnaissance by attackers, and AWS Systems Manager Patch Manager to properly apply the latest security patches to all of your instances.
Deny access from the IP Address block in the Network ACL. Use AWS Shield Advanced to protect your cloud resources.
(Correct)
Explanation
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced.
All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic. Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
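To illustrate the first half of the answer, here is a small boto3 sketch that adds an inbound DENY entry to a network ACL; the ACL ID and offending CIDR are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical values: use your own network ACL ID and the offending IP address.
NACL_ID = "acl-0123456789abcdef0"
OFFENDING_CIDR = "203.0.113.55/32"

# Low rule numbers are evaluated first, so this DENY takes effect
# before any broader ALLOW rules in the network ACL.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=50,
    Protocol="-1",          # all protocols
    RuleAction="deny",
    Egress=False,           # inbound rule
    CidrBlock=OFFENDING_CIDR,
)
```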
Therefore the correct answer is: Deny access from the IP Address block in the Network ACL. Use AWS Shield Advanced to protect your cloud resources.
The option that says: Block the offending IP address using Route 53. Use Amazon Macie to automatically discover, classify, and protect sensitive data in AWS, including DDoS attacks is incorrect because Amazon Macie is just a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. It does not provide security against DDoS attacks. In addition, you cannot block the offending IP address using Route 53. You should use Network ACL for this scenario.
The option that says: Change the Windows Firewall settings to deny access from the IP address block. Use Amazon GuardDuty to detect potentially compromised instances or reconnaissance by attackers, and AWS Systems Manager Patch Manager to properly apply the latest security patches to all of your instances is incorrect. You have to use Network ACL to block the specific IP address to your network and not just change the firewall of your Windows server. Amazon GuardDuty and AWS Systems Manager Patch Manager are not suitable to fortify your AWS Cloud against DDoS attacks.
The option that says: Deny access from the IP Address block by adding a specific rule to all of the Security Groups. Use a combination of AWS WAF and AWS Config to protect your cloud resources against common web attacks is incorrect because it is still better to block the offending IP address on the Network ACL level as you cannot directly deny an IP address in your Security Group. AWS WAF and AWS Config are helpful to improve the security of your cloud infrastructure in AWS but these services are not enough to protect your infrastructure against DDoS attacks. You have to use AWS Shield Advanced in this scenario.
References:
Check out these AWS WAF and AWS Shield Cheat Sheets:
Question 64: Correct
A company is hosting a multi-tier web application in AWS. It is composed of an Application Load Balancer and EC2 instances across three Availability Zones. During peak load, its stateless web servers operate at 95% utilization. The system is set up to use Reserved Instances to handle the steady-state load and On-Demand Instances to handle the peak load. Your manager instructed you to review the current architecture and make the necessary changes to improve the system.
Which of the following provides the most cost-effective architecture to allow the application to recover quickly in the event that an Availability Zone is unavailable during peak load?
Launch a Spot Fleet using a diversified allocation strategy, with Auto Scaling enabled on each AZ to handle the peak load instead of On-Demand instances. Retain the current setup for handling the steady state load.
(Correct)
Launch an Auto Scaling group of Reserved instances on each AZ to handle the peak load. Retain the current setup for handling the steady state load.
Use a combination of Reserved and On-Demand instances on each AZ to handle both the steady state and peak load.
Use a combination of Spot and On-Demand instances on each AZ to handle both the steady state and peak load.
Explanation
The scenario requires a cost-effective architecture to allow the application to recover quickly, hence, using an Auto Scaling group is a must to handle the peak load and improve both the availability and scalability of the application.
Setting up a diversified allocation strategy for your Spot Fleet is a best practice to increase the chances that a spot request can be fulfilled by EC2 capacity in the event of an outage in one of the Availability Zones. Include each AZ available to you in the launch specification, and instead of using the same subnet each time, use three unique subnets (each mapping to a different AZ).
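As a rough illustration of a diversified Spot Fleet request spread across three subnets, the sketch below can be used; the AMI, IAM fleet role, and subnet IDs are hypothetical placeholders and the capacity value is arbitrary.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical AMI, role ARN, and subnet IDs (one per Availability Zone).
AMI_ID = "ami-0123456789abcdef0"
FLEET_ROLE_ARN = "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role"
SUBNETS = ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]

ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": FLEET_ROLE_ARN,
        "AllocationStrategy": "diversified",  # spread capacity across the pools below
        "TargetCapacity": 6,
        "LaunchSpecifications": [
            {"ImageId": AMI_ID, "InstanceType": "m5.large", "SubnetId": subnet}
            for subnet in SUBNETS
        ],
    }
)
```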
Therefore the correct answer is: Launch a Spot Fleet using a diversified allocation strategy, with Auto Scaling enabled on each AZ to handle the peak load instead of On-Demand instances. Retain the current setup for handling the steady state load. The Spot instances are the most cost-effective for handling the temporary peak loads of the application.
The option that says: Launch an Auto Scaling group of Reserved instances on each AZ to handle the peak load. Retain the current setup for handling the steady state load is incorrect. Even though it uses Auto Scaling, Reserved Instances cost more than Spot instances so it is more suitable to use the latter to handle the peak load.
The following options are incorrect because they did not mention the use of Auto Scaling Groups, which is a requirement for this architecture:
- Use a combination of Spot and On-Demand instances on each AZ to handle both the steady state and peak load.
- Use a combination of Reserved and On-Demand instances on each AZ to handle both the steady state and peak load.
References:
Check out this AWS Billing and Cost Management Cheat Sheet:
Question 65: Correct
A company has several development teams using AWS CodeCommit to store their source code. With the number of code updates every day, the management is having difficulty tracking if the developers are adhering to company security policies. On a recent audit, the security team found several IAM access keys and secret keys in the CodeCommit repository. This is a big security risk so the company wants to have an automated solution that will scan the CodeCommit repositories for committed IAM credentials and delete/disable the IAM keys for those users.
Which of the following options will meet the company requirements?
Using a development instance, use the AWS Systems Manager Run Command to scan the AWS CodeCommit repository for IAM credentials on a daily basis. If credentials are found, rotate them using AWS Secrets Manager. Notify the user of the violation.
Scan the CodeCommit repositories for IAM credentials using Amazon Macie. Using machine learning, Amazon Macie can scan your repository for security violations. If violations are found, invoke an AWS Lambda function to notify the user and delete the IAM keys.
Download and scan the source code from AWS CodeCommit using a custom AWS Lambda function. Schedule this Lambda function to run daily. If credentials are found, notify the user of the violation, generate new IAM credentials and store them in AWS KMS for encryption.
Write a custom AWS Lambda function to search for credentials on new code submissions. Set the function trigger as AWS CodeCommit push events. If credentials are found, notify the user of the violation, and disable the IAM keys.
(Correct)
Explanation
AWS CodeCommit is a version control service hosted by Amazon Web Services that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud.
You can configure a CodeCommit repository so that code pushes or other events trigger actions, such as sending a notification from Amazon Simple Notification Service (Amazon SNS) or invoking a function in AWS Lambda. You can create up to 10 triggers for each CodeCommit repository.
Triggers are commonly configured to:
- Send emails to subscribed users every time someone pushes to the repository.
- Notify an external build system to start a build after someone pushes to the main branch of the repository.
Scenarios like notifying an external build system require writing a Lambda function to interact with other applications. The email scenario simply requires creating an Amazon SNS topic. You can create a trigger for a CodeCommit repository so that events in that repository trigger notifications from an Amazon Simple Notification Service (Amazon SNS) topic.
You can also create an AWS Lambda trigger for a CodeCommit repository so that events in the repository invoke a Lambda function. For example, you can create a Lambda function that will scan the CodeCommit code submissions for IAM credentials, and then send out notifications or perform corrective actions.
When you use the Lambda console to create the function, you can create a CodeCommit trigger for the Lambda function, for example one that fires on all push events to the repository.
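The following is only a minimal sketch of what such a scanning function could look like, assuming a hypothetical SNS topic for notifications and a simple regular expression for access key IDs; a production implementation would need broader secret patterns and error handling.

```python
import re
import boto3

codecommit = boto3.client("codecommit")
iam = boto3.client("iam")
sns = boto3.client("sns")

# Hypothetical SNS topic for notifying the offending user/team.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:credential-violations"
ACCESS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")


def lambda_handler(event, context):
    """Triggered by CodeCommit push events; scans the pushed commits for access keys."""
    for record in event["Records"]:
        repo = record["eventSourceARN"].split(":")[5]
        for ref in record["codecommit"]["references"]:
            commit_id = ref["commit"]
            diffs = codecommit.get_differences(
                repositoryName=repo, afterCommitSpecifier=commit_id
            )["differences"]
            for diff in diffs:
                blob_id = diff.get("afterBlob", {}).get("blobId")
                if not blob_id:
                    continue  # file was deleted in this commit
                content = codecommit.get_blob(
                    repositoryName=repo, blobId=blob_id
                )["content"].decode("utf-8", errors="ignore")
                for access_key_id in ACCESS_KEY_PATTERN.findall(content):
                    disable_key_and_notify(access_key_id, repo, commit_id)


def disable_key_and_notify(access_key_id, repo, commit_id):
    # Assumes the key belongs to a user in the same account; the call below
    # returns the owning user name for a known access key ID.
    user = iam.get_access_key_last_used(AccessKeyId=access_key_id)["UserName"]
    iam.update_access_key(UserName=user, AccessKeyId=access_key_id, Status="Inactive")
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="IAM access key committed to CodeCommit",
        Message=f"Key {access_key_id} (user {user}) found in {repo} commit {commit_id} and disabled.",
    )
```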
Therefore, the correct answer is: Write a custom AWS Lambda function to search for credentials on new code submissions. Set the function trigger as AWS CodeCommit push events. If credentials are found, notify the user of the violation, and disable the IAM keys.
The option that says: Using a development instance, use the AWS Systems Manager Run Command to scan the AWS CodeCommit repository for IAM credentials on a daily basis. If credentials are found, rotate them using AWS Secrets Manager. Notify the user of the violation is incorrect. You cannot rotate IAM keys on AWS Secrets Manager. Using the Run Command on a development instance just for scanning the repository is costly. It is cheaper to just write your own Lambda function to do the scanning.
The option that says: Download and scan the source code from AWS CodeCommit using a custom AWS Lambda function. Schedule this Lambda function to run daily. If credentials are found, notify the user of the violation, generate new IAM credentials and store them in AWS KMS for encryption is incorrect. You store encryption keys on AWS KMS, not IAM keys.
The option that says: Scan the CodeCommit repositories for IAM credentials using Amazon Macie. Using machine learning, Amazon Macie can scan your repository for security violations. If violations are found, invoke an AWS Lambda function to notify the user and delete the IAM keys is incorrect. Amazon Macie is designed to use machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie primarily scans Amazon S3 buckets for data security and data privacy.
References:
Check out the AWS CodeCommit Cheat Sheet:
Question 66: Incorrect
A company hosts its multi-tiered web application on a fleet of Auto Scaling EC2 instances spread across two Availability Zones. The Application Load Balancer is in the public subnets and the Amazon EC2 instances are in the private subnets. After a few weeks of operations, the users are reporting that the web application is not working properly. Upon testing, the Solutions Architect found that the website is accessible and the login is successful. However, when the “find a nearby store” function is clicked on the website, the map loads only about 50% of the time when the page is refreshed. This function involves a third-party RESTful API call to a maps provider. Amazon EC2 NAT instances are used for these outbound API calls.
Which of the following options are the MOST likely reason for this failure and the recommended solution?
One of the subnets in the VPC has a misconfigured Network ACL that blocks outbound traffic to the third-party provider. Update the network ACL to allow this connection and configure IAM permissions to restrict these changes in the future.
The error is caused by a failure in one of their availability zones in the VPC of the third-party provider. Contact the third-party provider support hotline and request for them to fix it.
This error is caused by a failed NAT instance in one of the public subnets. Use NAT Gateways instead of EC2 NAT instances to ensure availability and scalability.
(Correct)
This error is caused by an overloaded NAT instance in one of the subnets. Scale the EC2 NAT instances to larger-sized instances to ensure that they can handle the growing traffic.
(Incorrect)
Explanation
You can use a NAT device to enable instances in a private subnet to connect to the Internet (for example, for software updates) or other AWS services, but prevent the Internet from initiating connections with the instances. A NAT device forwards traffic from the instances in the private subnet to the Internet or other AWS services, and then sends the response back to the instances. When traffic goes to the Internet, the source IPv4 address is replaced with the NAT device’s address, and similarly, when the response traffic goes to those instances, the NAT device translates the address back to those instances’ private IPv4 addresses.
You can either use a managed NAT device offered by AWS, called a NAT gateway, or you can create your own NAT device on an EC2 instance, referred to here as a NAT instance. The bandwidth of a NAT instance depends on its instance size: larger instance sizes have higher bandwidth capacity. NAT instances are managed by the customer, so if an instance goes down, there could be an impact on the availability of your application, and you have to manually check and fix the NAT instances.
AWS recommends NAT gateways because they provide better availability and bandwidth than NAT instances. The NAT gateway service is a managed service that does not require your administration effort. NAT gateways are highly available: in each Availability Zone, they are implemented with redundancy, and since they are managed by AWS, you do not have to perform maintenance or monitor whether they are up. NAT gateways also scale their bandwidth automatically, so you don't have to choose instance types.
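As a rough sketch of the recommended change, the boto3 snippet below provisions one NAT gateway per Availability Zone and repoints each private route table at it; all subnet and route table IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: one public subnet and one private route table per AZ.
AZ_CONFIG = {
    "us-west-1a": {"public_subnet": "subnet-aaa111", "private_rtb": "rtb-aaa111"},
    "us-west-1c": {"public_subnet": "subnet-bbb222", "private_rtb": "rtb-bbb222"},
}

for az, cfg in AZ_CONFIG.items():
    eip = ec2.allocate_address(Domain="vpc")
    natgw_id = ec2.create_nat_gateway(
        SubnetId=cfg["public_subnet"], AllocationId=eip["AllocationId"]
    )["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])
    # Point the private subnet's default route at the NAT gateway in the same AZ,
    # replacing the route that previously targeted the NAT instance.
    ec2.replace_route(
        RouteTableId=cfg["private_rtb"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=natgw_id,
    )
```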
The correct answer is: This error is caused by a failed NAT instance in one of the public subnets. Use NAT Gateways instead of EC2 NAT instances to ensure availability and scalability. This is very likely because there are two subnets in the scenario and each NAT instance resides in only one AZ. With a failure rate of 50%, one of the NAT instances must have been down, and AWS does not automatically recover failed NAT instances. AWS recommends using NAT gateways because they provide better availability and bandwidth. Even if a NAT gateway is deployed in a single AZ, AWS implements redundancy to ensure that it is always available in that AZ.
The option that says: One of the subnets in the VPC has a misconfigured Network ACL that blocks outbound traffic to the third-party provider. Update the network ACL to allow this connection and configure IAM permissions to restrict these changes in the future is incorrect. Network ACLs affect all the subnets associated with it. If there is a misconfigured rule, the other subnets will be affected too, which could result in a 100% failure of requests to the third-party provider.
The option that says: The error is caused by a failure in one of their availability zones in the VPC of the third-party provider. Contact the third-party provider support hotline and request for them to fix it is incorrect. If there were a failure in one Availability Zone of the third-party provider, traffic would have stopped being sent to that AZ, so this failure is most likely caused by something local to your VPC.
The option that says: This error is caused by an overloaded NAT instance in one of the subnets. Scale the EC2 NAT instances to larger-sized instances to ensure that they can handle the growing traffic is incorrect. If the NAT instances were overloaded, you would notice inconsistent performance or slowdowns for the third-party requests, and the failures should have disappeared during off-peak hours. If the failure rate is 50% of the requests, it is more likely that one of the NAT instances is down.
References:
Check out this Amazon VPC Cheat Sheet:
Question 69: Incorrect
An international foreign exchange company has a serverless forex trading application that was built using AWS SAM and is hosted on AWS Serverless Application Repository. They have millions of users worldwide who use their online portal 24/7 to trade currencies. However, they are receiving a lot of complaints that it takes a few minutes for their users to log in to their portal lately, including occasional HTTP 504 errors. As the Solutions Architect, you are tasked to optimize the system and to significantly reduce the time to log in to improve the customers' satisfaction.
Which of the following should you implement in order to improve the performance of the application with minimal cost? (Select TWO.)
Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.
(Correct)
Increase the cache hit ratio of your CloudFront distribution by configuring your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age.
(Incorrect)
Use Lambda@Edge to allow your Lambda functions to customize content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users.
(Correct)
Set up multiple and geographically dispersed VPCs in various AWS Regions, then create a transit VPC to connect all of your resources. Deploy the Lambda function in each region using AWS SAM in order to handle the requests faster.
Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.
Explanation
Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers. You can use Lambda functions to change CloudFront requests and responses at the following points:
- After CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards the request to the origin (origin request)
- After CloudFront receives the response from the origin (origin response)
- Before CloudFront forwards the response to the viewer (viewer response)
In the given scenario, you can use Lambda@Edge to allow your Lambda functions to customize the content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users. In addition, you can set up an origin failover by creating an origin group with two origins: one as the primary origin and the other as the secondary origin, which CloudFront automatically switches to when the primary origin fails. This will alleviate the occasional HTTP 504 errors that users are experiencing.
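For illustration, the fragment below shows roughly how the origin group portion of a CloudFront distribution configuration could look when passed to update_distribution via boto3; the origin IDs are hypothetical and must match origins already defined in the distribution.

```python
# Fragment of a CloudFront DistributionConfig showing an origin group.
# The origin IDs "primary-alb" and "secondary-alb" are hypothetical placeholders.
origin_group_config = {
    "OriginGroups": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "alb-failover-group",
                "FailoverCriteria": {
                    # CloudFront fails over to the secondary origin when the
                    # primary returns any of these status codes (e.g. 504).
                    "StatusCodes": {"Quantity": 4, "Items": [500, 502, 503, 504]},
                },
                "Members": {
                    "Quantity": 2,
                    "Items": [
                        {"OriginId": "primary-alb"},
                        {"OriginId": "secondary-alb"},
                    ],
                },
            }
        ],
    }
}
# The default cache behavior's TargetOriginId would then reference "alb-failover-group".
```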
The option that says: Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user is incorrect. Although this may resolve the performance issue, this solution entails a significant implementation cost since you have to deploy your application to multiple AWS regions. Remember that the scenario asks for a solution that will improve the performance of the application with minimal cost.
The option that says: Increase the cache hit ratio of your CloudFront distribution by configuring your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age is incorrect because improving the cache hit ratio for the CloudFront distribution is irrelevant in this scenario. You can improve your cache performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content. However, take note that the problem in the scenario is the sluggish authentication process of your global users and not just the caching of the static objects.
References:
Check out these Amazon CloudFront and AWS Lambda Cheat Sheets:
Question 72: Correct
An accounting firm hosts a mix of Windows and Linux Amazon EC2 instances in its AWS account. The solutions architect has been tasked to conduct a monthly performance check on all production instances. There are more than 200 On-Demand EC2 instances running in their production environment and it is required to ensure that each instance has a logging feature that collects various system details such as memory usage, disk space, and other metrics. The system logs will be analyzed using AWS Analytics tools and the results will be stored in an S3 bucket.
Which of the following is the most efficient way to collect and analyze logs from the instances with minimal effort?
Enable the Traffic Mirroring feature and install AWS CDK on each On-Demand EC2 instance. Create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Set up CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.
Set up and install AWS Inspector Agent on each On-Demand EC2 instance which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.
Set up and configure a unified CloudWatch Logs agent in each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
(Correct)
Set up and install the AWS Systems Manager Agent (SSM Agent) on each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
Explanation
To collect logs from your Amazon EC2 instances and on-premises servers into CloudWatch Logs, AWS offers both a new unified CloudWatch agent, and an older CloudWatch Logs agent. It is recommended to use the unified CloudWatch agent which has the following advantages:
- You can collect both logs and advanced metrics with the installation and configuration of just one agent.
- The unified agent enables the collection of logs from servers running Windows Server.
- If you are using the agent to collect CloudWatch metrics, the unified agent also enables the collection of additional system metrics, for in-guest visibility.
- The unified agent provides better performance.
CloudWatch Logs Insights enables you to interactively search and analyze your log data in Amazon CloudWatch Logs. You can perform queries to help you quickly and effectively respond to operational issues. If an issue occurs, you can use CloudWatch Logs Insights to identify potential causes and validate deployed fixes.
CloudWatch Logs Insights includes a purpose-built query language with a few simple but powerful commands. CloudWatch Logs Insights provides sample queries, command descriptions, query autocompletion, and log field discovery to help you get started quickly. Sample queries are included for several types of AWS service logs.
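As a small illustration, the boto3 sketch below runs a CloudWatch Logs Insights query against a hypothetical log group populated by the unified agent and polls for the results.

```python
import time
import boto3

logs = boto3.client("logs")

# Hypothetical log group written to by the unified CloudWatch agent.
LOG_GROUP = "/ec2/production/system-metrics"

query_id = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 30 * 24 * 3600,  # last 30 days
    endTime=int(time.time()),
    queryString="""
        fields @timestamp, @message
        | filter @message like /disk/
        | sort @timestamp desc
        | limit 50
    """,
)["queryId"]

# Poll until the query finishes, then read the results.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
print(result["results"])
```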
Therefore, the correct answer is: Set up and configure a unified CloudWatch Logs agent in each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs, then analyze the log data with CloudWatch Logs Insights.
The option that says: Enable the Traffic Mirroring feature and install AWS CDK on each On-Demand EC2 instance. Create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Set up CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances is incorrect. Although this is a valid solution, this entails a lot of effort to implement as you have to allocate time to install the AWS CDK to each instance and develop a custom monitoring solution. Traffic Mirroring is simply an Amazon VPC feature that you can use to copy network traffic from an elastic network interface of Amazon EC2 instances. As per the scenario, you are specifically looking for a solution that can be implemented with minimal effort. In addition, it is unnecessary and not cost-efficient to enable detailed monitoring in CloudWatch in order to meet the requirements since this can be done using CloudWatch Logs.
The option that says: Setting up and installing the AWS Systems Manager Agent (SSM Agent) on each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs, then analyzing the log data with CloudWatch Logs Insights is incorrect. Although this is also a valid solution, it is more efficient to use a CloudWatch agent than an SSM agent. Manually connecting to an instance to view log files and troubleshoot an issue with SSM Agent is time-consuming hence, for more efficient instance monitoring, you can use the CloudWatch Agent instead to send the log data to Amazon CloudWatch Logs.
The option that says: Setting up and installing AWS Inspector Agent on each On-Demand EC2 instance which will collect and push data to CloudWatch Logs periodically, then setting up a CloudWatch dashboard to properly analyze the log data of all instances is incorrect. AWS Inspector is simply a security assessment service that only helps you check for unintended network accessibility of your EC2 instances and for vulnerabilities on those EC2 instances. Furthermore, setting up an Amazon CloudWatch dashboard is not suitable since it's primarily used for scenarios where you have to monitor your resources in a single view, even those resources that are spread across different AWS Regions. It is better to use CloudWatch Logs Insights instead since it enables you to interactively search and analyze your log data.
References:
Check out this Amazon CloudWatch Cheat Sheet:
CloudWatch Agent vs SSM Agent vs Custom Daemon Scripts
Question 2: Skipped
A company has recently released a new mobile game. With the boost in marketing, the mobile game suddenly became viral. The registration webpage is bombarded with user registrations from around the world. The registration website is hosted on a fleet of Amazon EC2 instances created as an Auto Scaling group. This cluster is behind an Application Load Balancer to balance the user traffic. The website contains static content that is loaded differently depending on the user’s device type. With the sudden increase in user traffic, the fleet of Amazon EC2 instances experienced high CPU usage and users are reporting sluggishness on the website.
Which of the following options should the Solutions Architect implement to improve the website response time?
Create a dedicated Auto Scaling group for the different device types and create separate Application Load Balancers (ALB) for each group. Create an Amazon Route 53 entry to route the users to the appropriate ALB depending on their User-Agent HTTP header.
Create an Amazon S3 bucket to host the static contents. Set this bucket as the origin for an Amazon CloudFront distribution. Write a Lambda@Edge function to parse the User-Agent HTTP header and serve the appropriate contents based on the user’s device type.
(Correct)
Create an Amazon S3 bucket to host the static contents. Set this bucket as the origin for an Amazon CloudFront distribution. Configure CloudFront to deliver different contents depending on the user’s User-Agent HTTP header.
Use a Network Load Balancer (NLB) instead of an ALB to distribute the user traffic. Create a dedicated Auto Scaling group for the different device types. Configure the NLB to parse the User-Agent HTTP header to route the users to the appropriate EC2 Auto Scaling groups.
Explanation
Lambda@Edge is an extension of AWS Lambda, a compute service that lets you execute functions that customize the content that CloudFront delivers. You can author Node.js or Python functions in one Region, US-East-1 (N. Virginia), and then execute them in AWS locations globally that are closer to the viewer, without provisioning or managing servers. Lambda@Edge scales automatically, from a few requests per day to thousands per second. Processing requests at AWS locations closer to the viewer instead of on origin servers significantly reduces latency and improves the user experience.
When you associate a CloudFront distribution with a Lambda@Edge function, CloudFront intercepts requests and responses at CloudFront edge locations. You can execute Lambda functions when the following CloudFront events occur:
- When CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards a request to the origin (origin request)
- When CloudFront receives a response from the origin (origin response)
- Before CloudFront returns the response to the viewer (viewer response)
There are many uses for Lambda@Edge processing. For example:
- A Lambda function can inspect cookies and rewrite URLs so that users see different versions of a site for A/B testing.
- CloudFront can return different objects to viewers based on the device they're using by checking the User-Agent header, which includes information about the device. For example, CloudFront can return different images based on the screen size of the device. Similarly, the function could consider the value of the Referer header and cause CloudFront to return the lowest-resolution images to bots.
- Or you could check cookies for other criteria. For example, on a retail website that sells clothing, if you use cookies to indicate which color a user chose for a jacket, a Lambda function can change the request so that CloudFront returns the image of a jacket in the selected color.
- A Lambda function can generate HTTP responses when CloudFront viewer request or origin request events occur.
- A function can inspect headers or authorization tokens, and insert a header to control access to your content before CloudFront forwards the request to your origin.
- A Lambda function can also make network calls to external resources to confirm user credentials, or fetch additional content to customize a response.
You can configure CloudFront to cache objects based on values in the User-Agent header, but AWS doesn't recommend it. The User-Agent header has many possible values, and caching based on those values would cause CloudFront to forward significantly more requests to your origin. If you do not configure CloudFront to cache objects based on values in the User-Agent header, CloudFront adds a User-Agent header with the following value before it forwards a request to your origin: User-Agent = Amazon CloudFront.
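For illustration only, here is a minimal sketch of a Lambda@Edge viewer-request handler written in Python that inspects the User-Agent header and rewrites the URI to device-specific paths; the /mobile and /desktop prefixes are hypothetical.

```python
# Minimal Lambda@Edge viewer-request sketch: rewrite the request URI based on the
# User-Agent so CloudFront serves device-specific objects from the S3 origin
# without involving the EC2 fleet. The path prefixes are hypothetical.
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    user_agent = ""
    if "user-agent" in headers:
        user_agent = headers["user-agent"][0]["value"].lower()

    # Route mobile viewers to a separate set of static assets.
    if "iphone" in user_agent or "android" in user_agent:
        request["uri"] = "/mobile" + request["uri"]
    else:
        request["uri"] = "/desktop" + request["uri"]

    return request
```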
Therefore, the correct answer is: Create an Amazon S3 bucket to host the static contents. Set this bucket as the origin for an Amazon CloudFront distribution. Write a Lambda@Edge function to parse the User-Agent HTTP header and serve the appropriate contents based on the user’s device type. CloudFront will run the Lambda@Edge function for every request to serve the appropriate content to the user. This will lessen the load on the EC2 instances of the Auto Scaling group.
The option that says: Use a Network Load Balancer (NLB) instead of an ALB to distribute the user traffic. Create a dedicated Auto Scaling group for the different device types. Configure the NLB to parse the User-Agent HTTP header to route the users to the appropriate EC2 Auto Scaling groups is incorrect. An NLB operates at Layer 4 of the OSI network model so it can't read the User-Agent HTTP header which is present at Layer 7.
The option that says: Create a dedicated Auto Scaling group for the different device types and create separate Application Load Balancers (ALB) for each group. Create an Amazon Route 53 entry to route the users to the appropriate ALB depending on their User-Agent HTTP header is incorrect. This is not possible because Amazon Route 53 does not have a way to direct users based on HTTP headers.
The option that says: Create an Amazon S3 bucket to host the static contents. Set this bucket as the origin for an Amazon CloudFront distribution. Configure CloudFront to deliver different contents depending on the user’s User-Agent HTTP header is incorrect. This may be possible as you can configure CloudFront to cache objects based on values in the Date and User-Agent headers, but AWS doesn't recommend it. These headers have many possible values, and caching based on their values would cause CloudFront to forward significantly more requests to your origin.
References:
Check out these AWS Lambda and Amazon CloudFront Cheat Sheets:
Question 3: Skipped
A technology company runs an industrial chain orchestration software on the AWS cloud. It consists of a web application tier that is currently deployed on a fixed fleet of Amazon EC2 instances. The database tier is deployed on Amazon RDS. The web and database tiers are deployed in the public and private subnet of the VPC respectively. The company wants to improve the service to make it more cost-effective, scalable, highly available and should require minimal human intervention.
Which of the following actions should the solutions architect implement to improve the availability and load balancing of this cloud architecture? (Select TWO.)
Launch a load balancer in front of all the web servers then create a Non-Alias Record in Route 53 which maps to the DNS name of the load balancer.
Create a Non-Alias Record in Route 53 with a Multivalue Answer Routing configuration and add all the IP addresses for your web servers.
(Correct)
Place an Application Load Balancer in front of all the web servers. Create a new Alias Record in Route 53 which maps to the DNS name of the load balancer.
(Correct)
Create a CloudFront distribution whose origin points to the private IP addresses of your web servers. Also set up a CNAME record in Route 53 mapped to your CloudFront distribution.
Set up a NAT instance in your VPC. Update your route table by creating a default route via the NAT instance with all subnets associated with it. Configure a DNS A Record in Route 53 pointing to the NAT instance's public IP address.
Explanation
Amazon Route 53 alias records provide a Route 53–specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources, such as CloudFront distributions and Amazon S3 buckets. They also let you route traffic from one record in a hosted zone to another record.
Unlike a CNAME record, you can create an alias record at the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name tutorialsdojo.com, the zone apex is tutorialsdojo.com. You can't create a CNAME record for tutorialsdojo.com, but you can create an alias record for tutorialsdojo.com that routes traffic to www.tutorialsdojo.com (take note of the www subdomain).
You can also type the domain name for the resource. For example:
- CloudFront distribution domain name: dtut0rial5d0j0.cloudfront.net
- Elastic Beanstalk environment CNAME: tutorialsdojo.elasticbeanstalk.com
- ELB load balancer DNS name: tutorialsdojo-1.us-east-2.elb.amazonaws.com
- S3 website endpoint: s3-website.us-east-2.amazonaws.com
- Resource record set in this hosted zone: www.tutorialsdojo.com
- VPC endpoint: tutorialsdojo.us-east-2.vpce.amazonaws.com
- API Gateway custom regional API: d-tut5d0j0c0m.execute-api.us-west-2.amazonaws.com
Multivalue answer routing lets you configure Amazon Route 53 to return multiple values, such as IP addresses for your web servers, in response to DNS queries. You can specify multiple values for almost any record, but multivalue answer routing also lets you check the health of each resource, so Route 53 returns only values for healthy resources. It's not a substitute for a load balancer, but the ability to return multiple health-checkable IP addresses is a way to use DNS to improve availability and load balancing.
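As an illustration of multivalue answer routing, the boto3 sketch below upserts one health-checked A record per web server; the hosted zone ID, IP addresses, and health check IDs are hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone, web server IPs, and health check IDs.
HOSTED_ZONE_ID = "Z111111QQQQQQQ"
SERVERS = [
    {"id": "web-1", "ip": "198.51.100.10", "health_check": "11111111-aaaa-bbbb-cccc-000000000001"},
    {"id": "web-2", "ip": "198.51.100.11", "health_check": "11111111-aaaa-bbbb-cccc-000000000002"},
]

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.tutorialsdojo.com",
            "Type": "A",
            "SetIdentifier": server["id"],
            "MultiValueAnswer": True,      # Route 53 returns only healthy values
            "TTL": 60,
            "ResourceRecords": [{"Value": server["ip"]}],
            "HealthCheckId": server["health_check"],
        },
    }
    for server in SERVERS
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID, ChangeBatch={"Changes": changes}
)
```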
The option that says: Place an Application Load Balancer in front of all the web servers. Create a new Alias Record in Route 53 which maps to the DNS name of the load balancer is correct because if the web servers are behind an ELB, the load on the web servers will be uniformly distributed which means that if any of the web servers go offline, the web traffic would be routed to other web servers. In this way, there would be no unnecessary downtime. You can also use Route 53 to set the ALIAS record that points to the ELB endpoint.
The option that says: Create a Non-Alias Record in Route 53 with a Multivalue Answer Routing configuration and add all the IP addresses for your web servers is correct. Although a Multivalue answer routing is not a substitute for a load balancer, its ability to return multiple health-checkable IP addresses can still improve the availability and load balancing of your system.
The option that says: Create a CloudFront distribution whose origin points to the private IP addresses of your web servers. Also set up a CNAME record in Route 53 mapped to your CloudFront distribution is incorrect as it is using Amazon CloudFront, which is directly pointing to the web server as its origin. You should use a Public IP address, not a Private IP address when using an EC2 origin. In addition, if the EC2 instances go down, the entire website would also become unavailable in this scenario.
The option that says: Set up a NAT instance in your VPC. Update your route table by creating a default route via the NAT instance with all subnets associated with it. Configure a DNS A Record in Route 53 pointing to the NAT instance's public IP address is incorrect as a NAT instance is mainly used to allow an EC2 instance launched on a private subnet to access the Internet via a public subnet. In addition, the issue is mainly on the web servers which are hosted on the public subnet and not on the private subnet.
The option that says: Launch a load balancer in front of all the web servers then create a Non-Alias Record in Route 53 which maps to the DNS name of the load balancer is incorrect. Although it is recommended to use a load balancer in front of your EC2 instances, you need to use an Alias Record in Route 53 and not a Non-Alias Record.
References:
Check out this Amazon Route 53 Cheat Sheet:
Question 4: Skipped
A financial startup offers flexible short-term loans of up to $5,000 to its users. Their online portal is hosted in AWS which uses S3 for scalable storage, DynamoDB as a NoSQL database, and a fleet of EC2 instances to host their web servers. To meet the financial regulation, the company is required to undergo a compliance audit.
In this scenario, how will you provide the auditor access to the logs of your AWS resources?
1. Contact AWS and inform them of the upcoming audit activities. 2. AWS will grant required access to the third-party auditor to see the logs.
1. Enable CloudTrail logging to required AWS resources. 2. Create an IAM user with read-only permissions to the required AWS resources. 3. Provide the access credential to the auditor.
(Correct)
1. Create an SNS Topic. 2. Configure the SNS to send out an email with the attached CloudTrail log files to the auditor's email every time the CloudTrail delivers the logs to S3.
1. Create an IAM role that has the required permissions for the auditor. 2. Attach the roles to the EC2, S3, and DynamoDB.
Explanation
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.
Therefore the correct answer is:
1. Enable CloudTrail logging to required AWS resources.
2. Create an IAM user with read-only permissions to the required AWS resources.
3. Provide the access credential to the auditor.
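A minimal boto3 sketch of the IAM portion follows, assuming a hypothetical user name and using the AWS managed ReadOnlyAccess policy as a stand-in for a more narrowly scoped customer-managed policy.

```python
import boto3

iam = boto3.client("iam")

AUDITOR = "external-auditor"  # hypothetical user name

iam.create_user(UserName=AUDITOR)

# The AWS managed ReadOnlyAccess policy grants read-only access across services;
# replace it with a narrower customer-managed policy if the auditor only needs
# the CloudTrail log bucket and trail configuration.
iam.attach_user_policy(
    UserName=AUDITOR,
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Console credentials for the auditor (shared out of band, rotated after the audit).
iam.create_login_profile(
    UserName=AUDITOR,
    Password="TemporaryP@ssw0rd!",
    PasswordResetRequired=True,
)
```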
The following option is incorrect because you do not need to contact AWS for any audit activities since you can just use CloudTrail:
1. Contact AWS and inform them of the upcoming audit activities.
2. AWS will grant required access to the third-party auditor to see the logs.
You can contact AWS if you will perform penetration testing targeting or originating from any of your AWS resources as part of the shared responsibility model, but for audit activities like this, authorization is not required.
The following option is incorrect because it is a security risk to send the CloudTrail logs via email:
1. Create an SNS Topic.
2. Configure the SNS to send out an email with the attached CloudTrail log files to the auditor's email every time the CloudTrail delivers the logs to S3.
It is best to keep them stored inside an S3 bucket and just provide read access to the bucket to the auditor.
The following option is incorrect because an IAM role should be attached to the IAM user of the auditor but the most preferred way to do this is still to use CloudTrail:
1. Create an IAM role that has the required permissions for the auditor.
2. Attach the roles to the EC2, S3, and DynamoDB.
References:
Check out this AWS CloudTrail Cheat Sheet:
Question 7: Skipped
An enterprise plans to create a new cloud deployment that will be used by several project teams. The network must be designed so that it allows autonomy for the administrators of the individual AWS accounts to modify their route tables freely. However, the company wants to monitor outbound traffic so it is required to have a centralized and controlled egress Internet connection for all accounts. As more teams are expected to join this deployment, the organization is expected to grow into thousands of AWS accounts.
Which of the following options should the Solutions Architect implement to meet the company requirements?
Create a shared services VPC. On this VPC, host the central assets which include a fleet of firewalls that have a route to the public Internet. Have each spoke VPC connect to the central VPC using VPC peering.
Create a centralized shared VPC. On this VPC, create a subnet that will be associated with each AWS account. Use a fleet of proxy servers to control the outbound Internet traffic.
Create a centralized transit VPC. Have the VPCs on each AWS account connect to the transit VPC using a VPN connection. Control the outbound Internet traffic using firewall appliances.
Create a shared transit gateway. Have each spoke VPC connect to the transit gateway. Use a fleet of firewalls, each with a VPN attachment to the transit gateway, to route the outbound Internet traffic.
(Correct)
Explanation
AWS Transit Gateway is a highly available and scalable service used to consolidate the AWS VPC routing configuration for a region with a hub-and-spoke architecture. Each spoke VPC only needs to connect to the Transit Gateway to gain access to other connected VPCs. Transit Gateway across different regions can peer with each other to enable VPC communications across regions. With a large number of VPCs, Transit Gateway provides simpler VPC-to-VPC communication management over VPC Peering.
Transit Gateway enables customers to connect thousands of VPCs. You can attach all your hybrid connectivity (VPN and Direct Connect connections) to a single Transit Gateway— consolidating and controlling your organization's entire AWS routing configuration in one place. Transit Gateway controls how traffic is routed among all the connected spoke networks using route tables. This hub and spoke model simplifies management and reduces operational costs because VPCs only connect to the Transit Gateway to gain access to the connected networks.
Transit Gateway is a Regional resource and can connect thousands of VPCs within the same AWS Region. You can create multiple Transit Gateways per Region, but Transit Gateways within an AWS Region cannot be peered, and you can connect to a maximum of three Transit Gateways over a single Direct Connect Connection for hybrid connectivity.
If the vendor you choose for egress traffic inspection doesn’t support automation for failure detection, or if you need horizontal scaling, you can use an alternative design. In this design, we don’t create a VPC attachment on the transit gateway for the egress VPC; instead, we create an IPsec VPN attachment from the Transit Gateway to the EC2 firewall instances, leveraging BGP to exchange routes.
Deploying a NAT Gateway in every spoke VPC can become expensive because you pay an hourly charge for every NAT Gateway you deploy, so centralizing it could be a viable option. To centralize, you create an egress VPC in the network services account and route all egress traffic from the spoke VPCs via a NAT Gateway sitting in this VPC leveraging Transit Gateway, as shown below.
When you centralize NAT Gateway using Transit Gateway, you pay an extra Transit Gateway data processing charge — compared to the decentralized approach of running a NAT Gateway in every VPC. A transit gateway allows you to route all egress traffic from the spoke VPCs to a central NAT Gateway. It can handle up to thousands of attached spoke VPCs.
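To illustrate the hub-and-spoke setup, the boto3 sketch below creates a transit gateway and attaches two hypothetical spoke VPCs; in a multi-account deployment, the transit gateway would first be shared to the other accounts through AWS Resource Access Manager so each account can attach its own VPC.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the shared transit gateway in the network services account.
tgw_id = ec2.create_transit_gateway(
    Description="Hub for centralized egress",
    Options={
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)["TransitGateway"]["TransitGatewayId"]

# In practice, wait for the transit gateway to reach the 'available' state
# (e.g. by polling describe_transit_gateways) before creating attachments.

# Hypothetical spoke VPC and subnet IDs.
SPOKES = [
    {"vpc": "vpc-aaa111", "subnets": ["subnet-aaa111"]},
    {"vpc": "vpc-bbb222", "subnets": ["subnet-bbb222"]},
]

for spoke in SPOKES:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=spoke["vpc"],
        SubnetIds=spoke["subnets"],
    )
```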
Therefore, the correct answer is: Create a shared transit gateway. Have each spoke VPC connect to the transit gateway. Use a fleet of firewalls, each with a VPN attachment to the transit gateway, to route the outbound Internet traffic.
The option that says: Create a centralized transit VPC. Have the VPCs on each AWS account connect to the transit VPC using a VPN connection. Control the outbound Internet traffic using firewall appliances is incorrect. Using a VPN connection for connecting within Amazon VPCs or with the transit is not needed. You can use the AWS network backbone which is much faster than setting up a VPN connection.
The option that says: Create a centralized shared VPC. On this VPC, create a subnet that will be associated with each AWS account. Use a fleet of proxy servers to control the outbound Internet traffic is incorrect. The default limit for shared VPC subnets is 100. Additionally, in this setup, the participants in the shared subnets will not be able to modify their own route tables.
The option that says: Create a shared services VPC. On this VPC, host the central assets which include a fleet of firewalls that have a route to the public Internet. Have each spoke VPC connect to the central VPC using VPC peering is incorrect. There is a default limit of 50 VPC peering connections for each VPC. This is not enough to handle peering for thousands of AWS accounts.
References:
Check out these AWS Transit Gateway and Amazon VPC Cheat Sheets:
Question 8: Skipped
A company has a hybrid cloud architecture where their on-premises data center and VPC are connected via multiple AWS Direct Connect ports in a single Link Aggregation Group (LAG). They have an on-premises patch management system that automatically applies the patches to the operating systems of their servers and file systems. You were given a task to synchronize the patch baselines being used on-premises to all of the EC2 instances in your VPC, as well as to automate the patching schedule.
Which of the following methods should you implement to meet the above requirement with the LEAST amount of effort?
Use AWS Systems Manager Patch Manager to manage and deploy the security patches of your EC2 instances based on the patch baselines from your on-premises data center. Install the SSM Agent to all of your instances and automate the patching schedule by using AWS Systems Manager Maintenance Windows.
(Correct)
Use the AWS Systems Manager State Manager to automate the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define, which includes the OS patches that should be applied in each EC2 instance. Automate the patching schedule by using AWS Systems Manager Distributor, to package and distribute the required patches to your instances.
Use AWS Systems Manager Session Manager to manage and deploy the security patches of your EC2 instances based on the patch baselines from your on-premises data center. Automate the patching schedule by using the AWS Systems Manager Maintenance Windows.
Use AWS Systems Manager Patch Manager to manage and deploy the security patches of your EC2 instances based on the patch baselines from your on-premises data center. Automate the patching schedule by setting up scheduled jobs using AWS Lambda and AWS Systems Manager Run Command.
Explanation
AWS Systems Manager Patch Manager automates the process of patching managed instances with security-related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), Amazon Linux, and Amazon Linux 2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.
AWS Systems Manager Maintenance Windows let you define a schedule for when to perform potentially disruptive actions on your instances such as patching an operating system, updating drivers, or installing software or patches. Each Maintenance Window has a schedule, a maximum duration, a set of registered targets (the instances that are acted upon), and a set of registered tasks. You can also specify dates that a Maintenance Window should not run before or after, and you can specify the international time zone on which to base the Maintenance Window schedule.
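As an illustration, the boto3 sketch below creates a weekly maintenance window and registers an AWS-RunPatchBaseline task against instances carrying a hypothetical Patch Group tag.

```python
import boto3

ssm = boto3.client("ssm")

# Weekly maintenance window: Sundays at 02:00 UTC, 4 hours long,
# with no new tasks scheduled during the final hour.
window_id = ssm.create_maintenance_window(
    Name="weekly-os-patching",
    Schedule="cron(0 2 ? * SUN *)",
    Duration=4,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)["WindowId"]

# Target every managed instance carrying a hypothetical "Patch Group" tag value.
target_id = ssm.register_target_with_maintenance_window(
    WindowId=window_id,
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["production"]}],
)["WindowTargetId"]

# Run the AWS-RunPatchBaseline document against those targets to install
# whatever the registered patch baseline approves.
ssm.register_task_with_maintenance_window(
    WindowId=window_id,
    Targets=[{"Key": "WindowTargetIds", "Values": [target_id]}],
    TaskArn="AWS-RunPatchBaseline",
    TaskType="RUN_COMMAND",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
    MaxConcurrency="10%",
    MaxErrors="5%",
    Priority=1,
)
```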
Therefore, the correct answer is: Use AWS Systems Manager Patch Manager to manage and deploy the security patches of your EC2 instances based on the patch baselines from your on-premises data center. Install the SSM Agent to all of your instances and automate the patching schedule by using AWS Systems Manager Maintenance Windows.
The option that says: Use AWS Systems Manager Session Manager to manage and deploy the security patches of your EC2 instances based on the patch baselines from your on-premises data center. Automate the patching schedule by using the AWS Systems Manager Maintenance Windows is incorrect because the Session Manager is primarily used to comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, but not for applying OS patches. Using the AWS Systems Manager Patch Manager is a more appropriate solution to implement.
The option that says: Use AWS Systems Manager Patch Manager to manage and deploy the security patches of your EC2 instances based on the patch baselines from your on-premises data center. Automate the patching schedule by setting up scheduled jobs using AWS Lambda and AWS Systems Manager Run Command is incorrect. Although it properly uses AWS Systems Manager Patch Manager, it is still better to use AWS Systems Manager Maintenance Windows instead of manually creating scheduled jobs using AWS Lambda and AWS Systems Manager Run Command. Take note that the scenario specifies that you have to meet the requirement with the LEAST amount of effort, which can be met by using the AWS Systems Manager Maintenance Windows feature. In addition, installing the SSM Agent to all of your instances is also required when using the AWS Systems Manager Patch Manager, which is not mentioned in this option.
The option that says: Use the AWS Systems Manager State Manager to automate the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define, which includes the OS patches that should be applied in each EC2 instance. Automate the patching schedule by using AWS Systems Manager Distributor, to package and distribute the required patches to your instances is incorrect because the AWS Systems Manager State Manager is primarily used as a secure and scalable configuration management service that automates the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define. This does not handle patch management, unlike AWS Systems Manager Patch Manager. With the State Manager, you can configure your instances to boot with a specific software at start-up; download and update agents on a defined schedule; configure network settings and many others, but not the patching of your EC2 instances.
References:
Check out this AWS Systems Manager Cheat Sheet:
Question 10: Skipped
A leading commercial bank has a hybrid cloud architecture and is using a Volume Gateway under the AWS Storage Gateway service to store their data via the Internet Small Computer Systems Interface (iSCSI). The security team has detected a series of replay attacks on your network, which is basically a form of network attack in which a valid data transmission is maliciously or fraudulently repeated or delayed. After their investigation, they detected that the originator of the attack is trying to intercept the data with an intention to re-transmit it, which is possibly part of a masquerade attack by IP packet substitution.
As a Solutions Architect of the bank, how can you secure your AWS Storage Gateway from these types of attacks?
Replace the current iSCSI Block Interface with an iSCSI Virtual Tape Library Interface.
Replace iSCSI with more secure protocols like Common Internet File System (CIFS) Protocol or Server Message Block (SMB).
Configure a Challenge-Handshake Authentication Protocol (CHAP) to authenticate NFS connections and safeguard your network from replay attacks.
Configure a Challenge-Handshake Authentication Protocol (CHAP) to authenticate iSCSI and initiator connections.(Correct)
Explanation
In AWS Storage Gateway, your iSCSI initiators connect to your volumes as iSCSI targets. Storage Gateway uses Challenge-Handshake Authentication Protocol (CHAP) to authenticate iSCSI and initiator connections. CHAP provides protection against playback attacks by requiring authentication to access storage volume targets. For each volume target, you can define one or more CHAP credentials. You can view and edit these credentials for the different initiators in the Configure CHAP credentials dialog box.
Therefore, the correct answer is: Configure a Challenge-Handshake Authentication Protocol (CHAP) to authenticate iSCSI and initiator connections.
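As a rough illustration, CHAP credentials for a volume's iSCSI target can also be set programmatically. The target ARN, initiator IQN, and secrets below are placeholders only; Storage Gateway enforces its own length requirements on the secrets.

```python
import boto3

sgw = boto3.client("storagegateway")

# Placeholder values -- replace with your volume target ARN and initiator IQN
target_arn = (
    "arn:aws:storagegateway:us-east-1:111122223333:"
    "gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:myvolume"
)
initiator_name = "iqn.1994-05.com.redhat:client-initiator"

sgw.update_chap_credentials(
    TargetARN=target_arn,
    InitiatorName=initiator_name,
    SecretToAuthenticateInitiator="SecretInit123456",  # placeholder secret for one-way CHAP
    SecretToAuthenticateTarget="SecretTarg123456",     # placeholder secret; enables mutual CHAP
)
```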
The option that says: Replace iSCSI with more secure protocols like Common Internet File System (CIFS) Protocol or Server Message Block (SMB) is incorrect. Replacing iSCSI with CIFS or SMB would be irrelevant since these two protocols do not provide the required security mechanism in the scenario. It is best to use the Challenge-Handshake Authentication Protocol (CHAP) instead.
The option that says: Replace the current iSCSI Block Interface with an iSCSI Virtual Tape Library Interface is incorrect. The iSCSI Virtual Tape Library Interface is primarily used for Tape Gateways and not for Volume Gateways. It is better to use the Challenge-Handshake Authentication Protocol (CHAP) instead.
The option that says: Configure a Challenge-Handshake Authentication Protocol (CHAP) to authenticate NFS connections and safeguard your network from replay attacks is incorrect. CHAP is primarily used to authenticate iSCSI and not NFS.
References:
Check out this AWS Storage Gateway Cheat Sheet:
Question 11: Skipped
A hospital chain in London uses an online central hub for its doctors and nurses. The application handles millions of requests per day to fetch various medical data of their patients. The system is composed of a web tier, an application tier, and a database tier that receives large and unpredictable traffic demands. The Solutions Architect must ensure that this infrastructure is highly available and scalable enough to handle web traffic fluctuations automatically.
Which of the following options should the solutions architect implement to meet the above requirements?
Run the web and application tiers in stateful instances in an autoscaling group, using CloudWatch for monitoring. Run the database tier using RDS with Multi-AZ enabled.
Run the web and application tiers in stateless instances in an autoscaling group, using Amazon ElastiCache Serverless for tier synchronization and CloudWatch for monitoring. Run the database tier using RDS with read replicas, and Multi-AZ enabled.
(Correct)
Run the web and application tiers in stateful instances in an autoscaling group, using CloudWatch for monitoring. Run the database tier using RDS with read replicas.
Run the web and application tiers in stateless instances in an autoscaling group, using Amazon ElastiCache Serverless for tier synchronization and CloudWatch for monitoring. Run the database tier using RDS with Multi-AZ enabled.
Explanation
When users or services interact with an application, they will often perform a series of interactions that form a session. A session is unique data for users that persists between requests while they use the application. A stateless application is an application that does not need knowledge of previous interactions and does not store session information.
For example, an application that, given the same input, provides the same response to any end user, is a stateless application. Stateless applications can scale horizontally because any of the available compute resources (such as EC2 instances and AWS Lambda functions) can service any request. Without stored session data, you can simply add more compute resources as needed. When that capacity is no longer required, you can safely terminate those individual resources, after running tasks have been drained. Those resources do not need to be aware of the presence of their peers—all that is required is a way to distribute the workload to them.
In this scenario, the best option is to use a combination of Amazon ElastiCache Serverless, CloudWatch, and RDS Read Replicas.
Therefore, the correct answer is: Run the web and application tiers in stateless instances in an autoscaling group, using Amazon ElastiCache Serverless for tier synchronization and CloudWatch for monitoring. Run the database tier using RDS with read replicas, and Multi-AZ enabled. It uses stateless instances. The web server uses ElastiCache Serverless for read operations and relies on CloudWatch for monitoring traffic fluctuations. When variations in traffic occur, CloudWatch notifies the autoscaling group to perform scale-in or scale-out actions accordingly. ElastiCache Serverless is a robust cache solution that ensures high availability by automatically replicating data across multiple Availability Zones. Furthermore, it leverages read replicas for RDS to efficiently handle read-heavy workloads and utilizes Multi-AZ configurations to ensure high availability.
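To illustrate the scaling portion, a target tracking policy on the Auto Scaling group relies on CloudWatch metrics under the hood to scale the stateless web/application tier in and out. This is a minimal sketch; the group name and target value are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU of the stateless web tier around 60%. CloudWatch alarms are
# created and managed automatically for a target tracking policy.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",          # hypothetical Auto Scaling group name
    PolicyName="keep-cpu-at-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```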
The option that says: Run the web and application tiers in stateful instances in an autoscaling group, using CloudWatch for monitoring. Run the database tier using RDS with Multi-AZ enabled is incorrect because it uses stateful instances. It also does not use any caching mechanism for web and application tiers, and multi-AZ RDS does not improve read performance.
The option that says: Run the web and application tiers in stateful instances in an autoscaling group, using CloudWatch for monitoring. Run the database tier using RDS with read replicas is incorrect because it uses stateful instances and it does not use any caching mechanism for web and application tiers.
The option that says: Run the web and application tiers in stateless instances in an autoscaling group, using Amazon ElastiCache Serverless for tier synchronization and CloudWatch for monitoring. Run the database tier using RDS with Multi-AZ enabled is incorrect because Multi-AZ RDS only improves availability, not read performance.
References:
Check out these AWS Cheat Sheets:
Question 13: Skipped
A company runs a popular photo-sharing site hosted on the AWS cloud. There are user complaints about the frequent downtime of the site considering the hefty price for using their service. The company is using a MySQL RDS instance to record user details and other data analytics. A standard S3 storage class bucket is used to store the photos and user metadata, which are frequently accessed only in the first month. The website is also capable of immediately retrieving the images no matter how long they were stored. The RDS instance is always affected and sometimes goes down when there is a problem in the Availability Zone. The solutions architect was tasked to analyze the current architecture and to solve the user complaints about the website. In addition, the solutions architect should also implement a system that automatically discovers, classifies, and protects personally identifiable information (PII) data in the Amazon S3 bucket.
Which of the following options offers the BEST solution for this scenario?
Use Amazon Inspector to automatically discover, classify, and protect personally identifiable information (PII) data in the Amazon S3 bucket. Use a lifecycle policy in S3 to move the old photos to Amazon S3 Glacier Deep Archive after a month. Re-configure the existing database to use RDS Multi-AZ Deployments.
Use Amazon Macie to automatically discover, classify, and protect personally identifiable information (PII) data in the Amazon S3 bucket. Use a lifecycle policy in S3 to move the old photos to Infrequent Access storage class after a month. Re-configure the existing database to use RDS Multi-AZ Deployments.
(Correct)
Use Amazon Inspector to automatically discover, classify, and protect personally identifiable information (PII) data in the Amazon S3 bucket. Replace the S3 Standard bucket with an Infrequent Access storage class. Re-configure the existing database to use RDS Read Replicas.
Use Amazon Macie to automatically discover, classify, and protect personally identifiable information (PII) data in the Amazon S3 bucket. Replace the S3 bucket with EBS Volumes and use Redshift instead of RDS.
Explanation
Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. The fully managed service continuously monitors data access activity for anomalies and generates detailed alerts when it detects the risk of unauthorized access or inadvertent data leaks. Today, Amazon Macie is available to protect data stored in Amazon S3, with support for additional AWS data stores soon.
In this scenario, the best way to solve the issue is to: Use Amazon Macie to automatically discover, classify, and protect personally identifiable information (PII) data in the Amazon S3 bucket. Use a lifecycle policy in S3 to move the old photos to Infrequent Access storage class after a month. Re-configure the existing database to use RDS Multi-AZ Deployments. Keep in mind that the website should be able to immediately retrieve the images no matter how long they were stored. This means that you should not archive the images.
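For the lifecycle piece, the following is a minimal boto3 sketch that transitions photos to the Standard-IA storage class 30 days after creation; the bucket name and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Move photos to Standard-IA 30 days after creation (bucket and prefix are placeholders).
s3.put_bucket_lifecycle_configuration(
    Bucket="photo-sharing-site-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "move-old-photos-to-ia",
            "Filter": {"Prefix": "photos/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```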
The option that says: Use Amazon Macie to automatically discover, classify, and protect personally identifiable information (PII) data in the Amazon S3 bucket. Replace the S3 bucket with EBS Volumes and use Redshift instead of RDS is incorrect because EBS Volumes are not as scalable as S3 and Redshift is not a suitable storage option as it is mainly used as a petabyte-scale data warehouse service.
The option that says: Use Amazon Inspector to automatically discover, classify, and protect personally identifiable information (PII) data in the Amazon S3 bucket. Use a lifecycle policy in S3 to move the old photos to Amazon S3 Glacier Deep Archive after a month. Re-configure the existing database to use RDS Multi-AZ Deployments is incorrect because although it is right to use Amazon RDS Multi-AZ deployments configuration, the use of Glacier Deep Archive is not preferred for this scenario. Take note that Glacier is primarily used for archiving and the retrieval times are much slower compared to S3. Implementing this solution means that the users will have to wait a long time to retrieve their photos. In addition, Amazon Inspector is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS, especially in Amazon EC2 instances. You have to use Amazon Macie instead.
The option that says: Use Amazon Inspector to automatically discover, classify, and protect personally identifiable information (PII) data in the Amazon S3 bucket. Replace the S3 Standard bucket with an Infrequent Access storage class. Re-configure the existing database to use RDS Read Replicas is incorrect because although it is right to use the Infrequent Access storage class, the use of Read Replicas is not suitable for this scenario. Although it may improve availability, Read Replicas only serve read operations. This means that when the primary database is down, the Read Replica would not be able to register new users since only read operations are allowed. Moreover, you have to use Amazon Macie and not Amazon Inspector, since the latter is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
References:
Check out this Amazon S3 Cheat Sheet:
Check out this Amazon Macie Cheat Sheet:
Question 14: Skipped
A company that manages hundreds of AWS client accounts has created a central logging service running on an Auto Scaling group of Amazon EC2 instances. The logging service receives logs from the client AWS accounts through the connectivity provided by AWS PrivateLink. The interface endpoint for this is available on each of the client AWS accounts. The EC2 instances hosting the logging service are spread on multiple subnets with a Network Load Balancer in front to spread the incoming load. Upon testing, the clients are unable to submit logs through the VPC endpoint.
Which of the following solutions will most likely resolve the issue? (Select TWO.)
Ensure that the NACL associated with the logging service subnet allows communication to and from the NLB subnets. Ensure that the NACL associated with the NLB subnets allows communication to and from the EC2 instances subnets running the logging service.
(Correct)
Ensure that the security group attached to the EC2 instances hosting the logging service allows inbound traffic from the NLB’s security group. Also, ensure that the security group attached to the NLB allows inbound traffic from the interface endpoint subnet.
(Correct)
Ensure that the security group attached to the EC2 instances hosting the logging service allows inbound traffic from the IP address block of the clients.
Ensure that the Auto Scaling group is associated with a launch template that includes the latest Amazon Machine Image (AMI) and that the EC2 instances are using instance types that are optimized for log processing.
Ensure that the NACL associated with the logging service subnets allows communication to and from the interface endpoint. Ensure that the NACL associated with the interface endpoint subnet allows communication to and from the EC2 instances running the logging service.
Explanation
When you create an Amazon VPC endpoint interface with AWS PrivateLink, an Elastic Network Interface is created inside of the subnet that you specify. This interface VPC endpoint (interface endpoint) inherits the network ACL of the associated subnet. You must associate a security group with the interface endpoint to protect incoming and outgoing requests.
When you associate a Network Load Balancer with an endpoint service, the Network Load Balancer forwards requests to the registered target as if the target was registered by IP address. In this case, the source IP addresses are the private IP addresses of the load balancer nodes. If you have access to the Amazon VPC endpoint service, you must verify that the security group rules and the network ACL rules associated with the Network Load Balancer's targets:
- Allow communication from the private IP address of the Network Load Balancer.
- Don't allow communication from the IP address of the client or the interface endpoint.
To allow communication between clients and the Amazon VPC endpoint, you must create rules within the network ACL associated with the client’s subnet and the subnet associated with the interface endpoint. Be aware of this limit:
- You cannot use the security groups for clients as a source in the security groups for the targets. Instead, use the client CIDR blocks as sources in the target security groups.
If you register targets by IP address and do not want to grant access to the entire VPC CIDR, you can grant access to the private IP addresses used by the load balancer nodes. There is one IP address per load balancer subnet.
Therefore, the correct answers are:
- Ensure that the NACL associated with the logging service subnet allows communication to and from the NLB subnets. Ensure that the NACL associated with the NLB subnets allows communication to and from the EC2 instances subnets running the logging service.
- Ensure that the security group attached to the EC2 instances hosting the logging service allows inbound traffic from the NLB's security group. Also, ensure that the security group attached to the NLB allows inbound traffic from the interface endpoint subnet.
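The security group side of this fix might look like the sketch below. The security group IDs, the interface endpoint subnet CIDR, and the listener port (443) are all assumptions made for illustration.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders for the security groups and subnet involved
logging_service_sg = "sg-0aaa1111bbbb22223"   # attached to the EC2 logging instances
nlb_sg = "sg-0ccc3333dddd44445"               # attached to the Network Load Balancer
endpoint_subnet_cidr = "10.20.1.0/24"         # subnet hosting the interface endpoint

# Allow the logging instances to accept traffic forwarded by the NLB
ec2.authorize_security_group_ingress(
    GroupId=logging_service_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": nlb_sg, "Description": "From NLB"}],
    }],
)

# Allow the NLB to accept traffic arriving through the interface endpoint subnet
ec2.authorize_security_group_ingress(
    GroupId=nlb_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": endpoint_subnet_cidr,
                      "Description": "From interface endpoint subnet"}],
    }],
)
```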
The option that says: Ensure that the NACL associated with the logging service subnets allows communication to and from the interface endpoint. Ensure that the NACL associated with the interface endpoint subnet allows communication to and from the EC2 instances running the logging service is incorrect because the rules within the network ACL associated with the Network Load Balancer’s targets should not allow direct communication from the IP address of the client or the interface endpoint. A better approach is to ensure that the NACL associated with the NLB subnets allows communication to and from the EC2 instances subnets running the logging service.
The option that says: Ensure that the security group attached to the EC2 instances hosting the logging service allows inbound traffic from the IP address block of the clients is incorrect because the security group attached to the EC2 instances must permit the inbound traffic from the NLB subnet IPs and not the IP address block of the clients. The security group rules associated with the Network Load Balancer’s targets should not allow direct access from the IP address of the client or the interface endpoint.
The option that says: Ensure that the Auto Scaling group is associated with a launch template that includes the latest Amazon Machine Image (AMI) and that the EC2 instances are using instance types that are optimized for log processing is incorrect because the issue described in the scenario is more likely related to connectivity, not the configuration of the EC2 instances. Therefore, while it’s generally good practice to use the latest AMI and appropriate instance types, this may not resolve the specific issue in the scenario. The most likely solutions involve checking the security groups of the interface endpoints and ensuring the Network Load Balancer is correctly configured.
References:
Application Load Balancer vs Network Load Balancer vs Classic Load Balancer:
Question 15: Skipped
An electric utility company deploys smart meters for its customers to easily track their electricity usage. Each smart meter sends data every five minutes to an Amazon API Gateway which is then processed by several AWS Lambda functions before being stored in an Amazon DynamoDB table. The Lambda functions take about 5 to 10 seconds to process the data based on the initial deployment testing. As the company’s customer base grew, the solutions architect noticed that the Lambda functions are now taking 60 to 90 seconds to complete the processing. New metrics are also collected from the smart meters which further increased the processing time. Errors such as TooManyRequestsException and ProvisionedThroughputExceededException began showing when the Lambda functions perform PUT operations on the DynamoDB table.
Which combination of the following actions will resolve these issues? (Select TWO.)
Since the Lambda functions are being overwhelmed with too many requests, increase the payload size from the meters but send the data less frequently to avoid reaching the concurrency limit.
As more customers are sending data, adjust the Write Capacity Unit (WCU) of the DynamoDB table to be able to accommodate all the write requests being processed by the Lambda functions.
(Correct)
The new metrics being collected requires more processing power from the Lambda functions. Adjust the memory allocation for the Lambda function to accommodate the surge.
Set up an Amazon SQS FIFO queue to handle the burst of the data stream from the smart meters. Trigger the Lambda function to run whenever a message is received on the queue.
Process the data in batches to avoid reaching the write limits to the DynamoDB table. Group the requests from API Gateway by streaming the data into an Amazon Kinesis data stream.
(Correct)
Explanation
In Amazon DynamoDB, the ProvisionedThroughputExceededException error means that you exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. This means that your request rate is too high. The AWS SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful unless your retry queue is too large to finish.
To solve this, you can increase the write capacity unit (WCU) of your DynamoDB table. Every PutItem request consumes a write capacity unit. A write capacity unit represents one write per second, for an item up to 1 KB in size. For example, suppose that you create a table with 10 write capacity units. This allows you to perform 10 writes per second, for items up to 1 KB in size.
In AWS Lambda, the first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler method to process the event. When the function returns a response, it stays active and waits to process additional events. If you invoke the function again while the first event is being processed, Lambda initializes another instance, and the function processes the two events concurrently. Your functions' concurrency is the number of instances that serve requests at a given time. For an initial burst of traffic, your functions' cumulative concurrency in a Region can reach an initial level of between 500 and 3000, which varies per Region.
When seeing the TooManyRequestsException in AWS Lambda, it is possible that the throttles that you're seeing aren't on your Lambda function. Throttles can also occur on API calls during your function's invocation or on concurrency limits.
With API Gateway, you can send the incoming data to an Amazon Kinesis data stream, where requests can be grouped into batches, reducing the number of requests that reach Lambda.
Therefore, the correct answers are:
- As more customers are sending data, adjust the Write Capacity Unit (WCU) of the DynamoDB table to be able to accommodate all the write requests being processed by the Lambda functions.
- Process the data in batches to avoid reaching the write limits to the DynamoDB table. Group the requests from API Gateway by streaming the data into an Amazon Kinesis data stream.
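The two actions are sketched below as independent snippets shown together for brevity: an operations-side call to raise the table's provisioned write capacity, and a Lambda handler that consumes batches of Kinesis records and writes them to DynamoDB. The table name, capacity values, and record format are assumptions.

```python
import base64
import json
import boto3

# Action 1 (run once from an operations script, not inside the Lambda function):
# raise the table's provisioned write capacity. Values are illustrative only.
boto3.client("dynamodb").update_table(
    TableName="SmartMeterReadings",   # hypothetical table name
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 500},
)

# Action 2: a Lambda handler consuming batches of records from the Kinesis data
# stream; batch_writer groups the individual PutItem requests automatically.
table = boto3.resource("dynamodb").Table("SmartMeterReadings")

def handler(event, context):
    with table.batch_writer() as batch:
        for record in event["Records"]:
            item = json.loads(base64.b64decode(record["kinesis"]["data"]))
            batch.put_item(Item=item)
```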
The option that says: The new metrics being collected requires more processing power from the Lambda functions. Adjust the memory allocation for the Lambda function to accommodate the surge is incorrect. Although this can improve the processing power of the Lambda functions, this will not solve the TooManyRequestsException error which is due to reaching the AWS Lambda concurrency execution limits.
The option that says: Since the Lambda functions are being overwhelmed with too many requests, increase the payload size from the meters but send the data less frequently to avoid reaching the concurrency limit is incorrect. Although this will reduce the TooManyRequestsException errors for the Lambda function, you may reach the 10 MB payload limit of API Gateway if you aggregate too much data before sending it.
The option that says: Set up an Amazon SQS FIFO queue to handle the burst of the data stream from the smart meters. Trigger the Lambda function to run whenever a message is received on the queue is incorrect. This action is not recommended because an SQS FIFO queue can only handle 3,000 messages per second. The customer base is constantly growing, so it is recommended to use Amazon Kinesis to scale beyond this limit.
References:
Check out these AWS Lambda and Amazon DynamoDB Cheat Sheet:
Question 16: Skipped
A company runs a popular blogging platform that is hosted on AWS. Bloggers from all around the world upload millions of entries per month, and the average blog entry size is 300 KB. The access rate to blog entries drops to a negligible level six months after publishing and after a year, bloggers rarely access a blog. The blog entries have a high update rate during the first 3 months after the blogger has published it and this drops to no updates after 6 months. The company wants to use CloudFront to improve the load times of the blogging platform.
Which of the following is an ideal cloud implementation for this scenario?
Create two different CloudFront distributions: one with US-Europe price class for your US/Europe users and another one with all edge locations included for your remaining users.
Create a CloudFront distribution and set the Restrict Viewer Access Forward Query string to true with a minimum TTL of 0.
You can use one S3 source bucket that is partitioned according to the month a blog entry was submitted, and store the entry in that partition. Create a CloudFront distribution with access permissions to S3 and is restricted only to it.(Correct)
Store two copies of each entry in two different S3 buckets, and let each bucket have its own CloudFront distribution where S3 access is permitted to that CloudFront identity only.
Explanation
You can control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content. Increasing the duration means your users get better performance because your objects are more likely to be served directly from the edge cache. A longer duration also reduces the load on your origin.
Typically, CloudFront serves an object from an edge location until the cache duration that you specified passes—that is, until the object expires. After it expires, the next time the edge location gets a user request for the object, CloudFront forwards the request to the origin server to verify that the cache contains the latest version of the object.
Therefore, the correct answer is: You can use one S3 source bucket that is partitioned according to the month a blog entry was submitted, and store the entry in that partition. Create a CloudFront distribution with access permissions to S3 and is restricted only to it. The content is only accessed by CloudFront, and if the content is partitioned at the origin based on the month it was uploaded, you can control the cache behavior accordingly and keep only the latest updated content in the CloudFront cache so that it can be accessed with fast load-time, hence, improving the performance.
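To illustrate the cache-control side of this approach, an entry can be uploaded under a month-based prefix with a long Cache-Control header so CloudFront keeps stable, older content in its edge caches. The bucket name, key, and max-age value below are assumptions; recently published, frequently updated entries could use a shorter max-age instead.

```python
import boto3

s3 = boto3.client("s3")

# Store the entry under a month-based partition and let CloudFront cache it for 24 hours.
s3.put_object(
    Bucket="blog-entries-bucket",               # placeholder bucket name
    Key="2016/03/my-first-blog-entry.html",     # partitioned by the month it was submitted
    Body=b"<html>...</html>",
    ContentType="text/html",
    CacheControl="public, max-age=86400",
)
```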
The option that says: Store two copies of each entry in two different S3 buckets, and letting each bucket have its own CloudFront distribution where S3 access is permitted to that CloudFront identity only is incorrect. Maintaining two separate buckets is not going to improve the load time for the users.
The option that says: Create a CloudFront distribution and setting the Restrict Viewer Access Forward Query string to true with a minimum TTL of 0 is incorrect. Setting minimum TTL of 0 will enforce loading of the content from origin every time, even if it has not been updated over 6 months.
The option that says: Create two different CloudFront distributions: one with US-Europe price class for your US/Europe users and another one with all edge locations included for your remaining users is incorrect. The location-wise distribution is not going to improve the load time for the users.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 17: Skipped
A health insurance company has recently adopted a hybrid cloud architecture which connects their on-premises network and their cloud infrastructure in AWS. They have an ELB which has a set of EC2 instances behind it. As the cloud engineer of the company, your manager instructed you to ensure that the SSL key used to encrypt data is kept secure at all times. In addition, the application logs should only be decrypted by a handful of key users. In this scenario, which of the following meets all of the requirements?
1. Use the ELB to distribute traffic to a set of EC2 instances.
2. Use TCP load balancing on the ELB and configure your EC2 instances to retrieve the private key from a non-public S3 bucket on boot.
3. Persist your application server logs to a private S3 bucket using SSE.
1. Use the ELB to distribute traffic to a set of EC2 instances.
2. Upload the private key to the EC2 instances and configure it to offload the SSL traffic.
3. Persist your application server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
1. Use the ELB to distribute traffic to a set of EC2 instances.
2. Configure the ELB to perform TCP load balancing.
3. Use an AWS CloudHSM instance to perform the SSL transactions.
4. Persist your application server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
1. Use the ELB to distribute traffic to a set of EC2 instances.
2. Configure the ELB to perform TCP load balancing.
3. Use an AWS CloudHSM instance to perform the SSL transactions.
4. Persist your application server logs to a private S3 bucket using SSE.
(Correct)
Explanation
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries.
You can use AWS CloudHSM to offload SSL/TLS processing for your web servers. Using CloudHSM for this processing reduces the burden on your web server and provides extra security by storing your web server's private key in CloudHSM. Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are used to confirm the identity of web servers and establish secure HTTPS connections over the Internet.
AWS CloudHSM automates time-consuming HSM administrative tasks for you, such as hardware provisioning, software patching, high availability, and backups. You can scale your HSM capacity quickly by adding and removing HSMs from your cluster on-demand. AWS CloudHSM automatically load balances requests and securely duplicates keys stored in any HSM to all of the other HSMs in the cluster.
CloudHSM provides a better and more secure way of offloading the SSL processing for the web servers and ensures the application logs are durably and securely stored.
In this scenario, the following option is the best choice because it uses CloudHSM and the application server logs are persisted in an S3 bucket with a Server Side Encryption (SSE):
1. Use the ELB to distribute traffic to a set of EC2 instances. 2. Configure the ELB to perform TCP load balancing. 3. Use an AWS CloudHSM instance to perform the SSL transactions. 4. Persist your application server logs to a private S3 bucket using SSE.
The following sets of options are incorrect because the ephemeral volume is just temporary storage and hence, not a suitable option for durable storage:
1. Use the ELB to distribute traffic to a set of EC2 instances. 2. Upload the private key to the EC2 instances and configure it to offload the SSL traffic. 3. Persist your application server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
as well as this option:
1. Use the ELB to distribute traffic to a set of EC2 instances. 2. Configure the ELB to perform TCP load balancing. 3. Use an AWS CloudHSM instance to perform the SSL transactions. 4. Persist your application server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
The following option is incorrect because you should never store sensitive private keys in S3:
1. Use the ELB to distribute traffic to a set of EC2 instances. 2. Use TCP load balancing on the ELB and configure your EC2 instances to retrieve the private key from a non-public S3 bucket on boot. 3. Persist your application server logs to a private S3 bucket using SSE.
References:
Question 21: Skipped
A company has an on-premises identity provider (IdP) used for authenticating employees. The Solutions Architect has created a SAML 2.0 based federated identity solution that integrates with the company IdP. This solution is used to authenticate users’ access to the AWS environment. Upon initial testing, the Solutions Architect has been successfully granted access to the AWS environment through the federated identity web portal. However, other test users who tried to authenticate through the federated identity web portal are not given access to the AWS environment.
Which of the following options must be checked to ensure the proper configuration of identity federation? (Select THREE.)
Ensure that the IAM policy for that user has “Allow” permissions to use SAML federation.
Ensure that the resources on the AWS environment VPC can reach the on-premises IdP using its DNS hostname.
Ensure that the appropriate IAM roles are mapped to company users and groups in the IdP’s SAML assertions.
(Correct)
Check the company’s IdP to ensure that the users are all part of the default AWSFederatedUser
IAM group which is readily available in AWS.
Ensure that the ARN of the SAML provider, the ARN of the created IAM role, and SAML assertion from the IdP are all included when the federated identity web portal calls the AWS STS AssumeRoleWithSAML
API.
(Correct)
Ensure that the trust policy of the IAM roles created for the federated users or groups has set the SAML provider as principal.
(Correct)
Explanation
AWS supports identity federation with SAML 2.0 (Security Assertion Markup Language 2.0), an open standard that many identity providers (IdPs) use. This feature enables federated single sign-on (SSO), so users can log into the AWS Management Console or call the AWS API operations without having to create an IAM user for everyone in your organization. By using SAML, you can simplify the process of configuring federation with AWS, because you can use the IdP's service instead of writing custom identity proxy code.
IAM federation supports these use cases:
- Federated access to allow a user or application in your organization to call AWS API operations. You use a SAML assertion (as part of the authentication response) that is generated in your organization to get temporary security credentials.
- Web-based single sign-on (SSO) to the AWS Management Console from your organization. Users can sign in to a portal in your organization hosted by a SAML 2.0–compatible IdP, select an option to go to AWS, and be redirected to the console without having to provide additional sign-in information. You can use a third-party SAML IdP to establish SSO access to the console or you can create a custom IdP to enable console access for your external users.
The SAML-based federation flow involves the following steps:
The user browses to your organization's portal and selects the option to go to the AWS Management Console. In your organization, the portal is typically a function of your IdP that handles the exchange of trust between your organization and AWS.
The portal verifies the user's identity in your organization.
The portal generates a SAML authentication response that includes assertions that identify the user and include attributes about the user. The portal sends this response to the client browser.
The client browser is redirected to the AWS single sign-on endpoint and posts the SAML assertion.
The endpoint requests temporary security credentials on behalf of the user and creates a console sign-in URL that uses those credentials.
AWS sends the sign-in URL back to the client as a redirect.
The client browser is redirected to the AWS Management Console. If the SAML authentication response includes attributes that map to multiple IAM roles, the user is first prompted to select the role for accessing the console.
Before you can use SAML 2.0-based federation, you must configure your organization's IdP and your AWS account to trust each other. Inside your organization, you must have an IdP that supports SAML 2.0, like Microsoft Active Directory Federation Service (AD FS, part of Windows Server), Shibboleth, or another compatible SAML 2.0 provider. In your organization's IdP, you define assertions that map users or groups in your organization to the IAM roles. Note that different users and groups in your organization might map to different IAM roles. The exact steps for performing the mapping depend on what IdP you're using.
The role or roles that you create in IAM define what federated users from your organization are allowed to do in AWS. When you create the trust policy for the role, you specify the SAML provider that you created earlier as the Principal. You can additionally scope the trust policy with a Condition element to allow only users that match certain SAML attributes to access the role.
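As a hedged illustration of both pieces, the trust policy below sets the SAML provider as the principal, and the federated portal then calls AssumeRoleWithSAML with the role ARN, provider ARN, and the base64-encoded SAML assertion. All ARNs, names, and the assertion are placeholders.

```python
import json
import boto3

saml_provider_arn = "arn:aws:iam::111122223333:saml-provider/CorporateIdP"   # placeholder
role_arn = "arn:aws:iam::111122223333:role/FederatedDevelopers"              # placeholder

# Trust policy for the federated role: the SAML provider is the principal.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": saml_provider_arn},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
    }],
}
boto3.client("iam").create_role(
    RoleName="FederatedDevelopers",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# The federated web portal exchanges the IdP's SAML assertion for temporary credentials.
saml_assertion_b64 = "<base64-encoded SAML assertion from the IdP>"           # placeholder
creds = boto3.client("sts").assume_role_with_saml(
    RoleArn=role_arn,
    PrincipalArn=saml_provider_arn,
    SAMLAssertion=saml_assertion_b64,
    DurationSeconds=3600,
)["Credentials"]
```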
The option that says: Ensure that the trust policy of the IAM roles created for the federated users or groups has set the SAML provider as principal is correct. In IAM, you create one or more IAM roles for the federated users. In the role's trust policy, you need to set the SAML provider as the principal, which establishes a trust relationship between your organization and AWS.
The option that says: Ensure that the ARN of the SAML provider, the ARN of the created IAM role, and SAML assertion from the IdP are all included when the federated identity web portal calls the AWS STS AssumeRoleWithSAML API is correct. These items should all be passed by the client calling the AWS STS AssumeRoleWithSAML API.
The option that says: Ensure that the appropriate IAM roles are mapped to company users and groups in the IdP’s SAML assertions is correct. In your organization's IdP, you should define assertions that map users or groups in your organization to the IAM roles.
The option that says: Ensure that the IAM policy for that user has “Allow” permissions to use SAML federation is incorrect because the test users’ permissions are mapped to IAM roles, so you need to review the IAM role policies and not the individual users’ IAM permission policies.
The option that says: Check the company’s IdP to ensure that the users are all part of the default AWSFederatedUser
IAM group which is readily available in AWS is incorrect because there is no such thing as a default AWSFederatedUser
IAM group. You only need to define assertions in your organization's IdP that map users or groups in your organization to the IAM roles.
The option that says: Ensure that the resources on the AWS environment VPC can reach the on-premises IdP using its DNS hostname is incorrect as this is not a requirement for the identity federation to work.
References:
Check out these AWS Security and Identity Services Cheat Sheet:
Question 22: Skipped
There was a major incident that occurred in your company wherein the web application that you are supporting unexpectedly went down in the production environment. Upon investigation, it was found that a junior DevOps engineer terminated the EC2 instance in production which caused the disruption of service. Only the Solutions Architects should be allowed to stop or terminate instances in the production environment. You also found out that there are a lot of developers who have full access to your production AWS account.
Which of the following options will fix this security vulnerability in your cloud architecture and prevent this kind of failure from happening again? (Select TWO.)
Add tags to the EC2 instances in the production environment and add resource-level permissions to the developers with an explicit deny on terminating the instance which contains the tag.(Correct)
Replace the Security Group of all of the EC2 instances in Production to prevent developers from accessing it.
Modify the associated IAM Role assigned to the developers by removing the policy that allows them to terminate EC2 instances in production.
(Correct)
Attach a PowerUserAccess AWS managed policy to the developers.
Modify the IAM policy of the developers to require MFA before deleting EC2 instances.
Explanation
To help you manage your instances, images, and other Amazon EC2 resources, you can optionally assign your own metadata to each resource in the form of tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type—you can quickly identify a specific resource based on the tags you've assigned to it. For example, you could define a set of tags for your account's Amazon EC2 instances that help you track each instance's owner and stack level.
Take note that MFA is just an additional layer of security but it won't totally prevent the users from terminating the EC2 instances.
Adding tags to the EC2 instances in the production environment and adding resource-level permissions to the developers with an explicit deny on terminating the instance which contains the tag is correct because it identifies the instances by environment using a tag and creates a resource-level permission with an explicit deny, which prevents anyone from terminating the tagged instances hosted in production.
Modifying the associated IAM Role assigned to the developers by removing the policy that allows them to terminate EC2 instances in production is correct because changing the IAM Role assigned to the developers to revoke their privilege of terminating EC2 instances will certainly prevent the issue from happening again.
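A minimal sketch of the tag-based explicit deny described above follows, assuming the production instances carry a hypothetical Environment=production tag; the policy name and tag key/value are illustrative.

```python
import json
import boto3

# Explicitly deny stop/terminate on any instance tagged Environment=production.
deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:TerminateInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringEquals": {"ec2:ResourceTag/Environment": "production"}},
    }],
}

boto3.client("iam").create_policy(
    PolicyName="DenyTerminateProductionInstances",
    PolicyDocument=json.dumps(deny_policy),
)
# The resulting policy can then be attached to the developers' IAM role or group.
```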
Modifying the IAM policy of the developers to require MFA before deleting EC2 instances is incorrect because MFA is just an additional layer of security given to the users when logging into AWS and accessing the resources. However, an MFA alone cannot prevent the users from performing resource level actions, such as terminating the instance.
Replacing the Security Group of all of the EC2 instances in Production to prevent developers from accessing it is incorrect because a security group is mainly used to secure and control the traffic coming in and out of the EC2 instances. In this scenario, it is best to modify the IAM policy of all of the developers and add tags to the instances in the production environment.
Attaching a PowerUserAccess AWS managed policy to the developers is incorrect because it would only grant broader, more permissive access, including the ability to terminate EC2 instances.
References:
Check out these Amazon EC2 and AWS IAM Cheat Sheets:
Question 23: Skipped
A company stores several terabytes of data on an Amazon S3 bucket. The data will be made available to respective partner companies, however, the management doesn’t want the partner companies to access the files directly from Amazon S3 URLs. The solutions architect has been asked to ensure that all confidential files shared via Amazon S3 should only be accessible through CloudFront.
Which of the following options could satisfy this requirement?
Create an Origin Access Control (OAC) and associate it with your CloudFront distribution. Change the permissions on your Amazon S3 bucket so that only the origin access control has read permission.
(Correct)
Write individual policies for each S3 bucket containing the confidential documents that would grant CloudFront access.
Assign an IAM user that is granted access to objects in the S3 bucket to CloudFront.
Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).
Explanation
To restrict access to content that you serve from Amazon S3 buckets, you create CloudFront signed URLs or signed cookies to limit access to files in your Amazon S3 bucket, and then you create a special CloudFront user called an origin access control (OAC) and associate it with your distribution. Then you configure permissions so that CloudFront can use the OAC to access and serve files to your users, but users can't use a direct URL to the S3 bucket to access a file there. Taking these steps helps you maintain secure access to the files that you serve through CloudFront.
In general, if you're using an Amazon S3 bucket as the origin for a CloudFront distribution, you can either allow everyone to have access to the files there, or you can restrict access. If you limit access by using, for example, CloudFront signed URLs or signed cookies, you also won't want people to be able to view files by simply using the direct URL for the file. Instead, you want them to only access the files by using the CloudFront URL, so your protections work.
Typically, if you're using an Amazon S3 bucket as the origin for a CloudFront distribution, you grant everyone permission to read the objects in your bucket. This allows anyone to access your objects either through CloudFront or using the Amazon S3 URL. CloudFront doesn't expose Amazon S3 URLs, but your users might have those URLs if your application serves any objects directly from Amazon S3 or if anyone gives out direct links to specific objects in Amazon S3.
Therefore, the correct answer is: Create an Origin Access Control (OAC) and associate it with your CloudFront distribution. Change the permissions on your Amazon S3 bucket so that only the origin access control has read permission. It gives CloudFront exclusive access to the S3 bucket and prevents other users from accessing the public content of S3 directly via S3 URL.
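For illustration, after the OAC is attached to the distribution's S3 origin, the bucket policy grants read access only to the CloudFront service principal scoped to that specific distribution. The bucket name, account ID, and distribution ID below are placeholders.

```python
import json
import boto3

bucket = "confidential-partner-files"                                          # placeholder
distribution_arn = "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        # Restrict access to requests coming from this specific distribution
        "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```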
The option that says: Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN) is incorrect. Creating a bucket policy is unnecessary and it does not prevent other users from accessing the public content of S3 directly via S3 URL.
The option that says: Assign an IAM user that is granted access to objects in the S3 bucket to CloudFront is incorrect. This does not give CloudFront exclusive access to the S3 bucket.
The option that says: Write individual policies for each S3 bucket containing the confidential documents that would grant CloudFront access is incorrect. You do not need to create any individual policies for each bucket.
References:
Check out this Amazon CloudFront Cheat Sheet:
S3 Pre-signed URLs vs CloudFront Signed URLs vs Origin Access Control (OAC)
Comparison of AWS Services Cheat Sheets:
Question 27: Skipped
A clothing company is using a proprietary e-commerce platform as their online shopping website. The e-commerce platform is hosted on a fleet of on-demand EC2 instances that are launched in a public subnet. Aside from acting as web servers, these EC2 instances also fetch updates and critical security patches from the Internet. The Solutions Architect was tasked to ensure that the instances can only initiate outbound requests to specific URLs provided by the proprietary e-commerce platform while accepting all inbound requests from the online shoppers.
Which of the following is the BEST solution that the Architect should implement in this scenario?
Create a new NAT Instance in your VPC. Place the EC2 instances to the private subnet and connect it to a NAT Instance which will handle the outbound URL restriction.
Implement a Network ACL to all specific URLs by the e-commerce platform with an implicit deny rule.
Create a new NAT Gateway in your VPC. Place the EC2 instances to the private subnet and connect it to a NAT Gateway which will handle the outbound URL restriction.
In your VPC, launch a new web proxy server that only allows outbound access to the URLs provided by the proprietary e-commerce platform.
(Correct)
Explanation
Proxy servers usually act as a relay between internal resources (servers, workstations, etc.) and the Internet, and to filter, accelerate and log network activities leaving the private network. One must not confuse proxy servers (also called forwarding proxy servers) with reverse proxy servers, which are used to control and sometimes load-balance network activities entering the private network.
Therefore, the correct answer is: In your VPC, launch a new web proxy server that only allows outbound access to the URLs provided by the proprietary e-commerce platform. The proxy server filters the outbound requests from the instances and only allows connections to the specific URLs provided by the proprietary e-commerce platform.
The option that says: Create a new NAT Instance in your VPC. Place the EC2 instances to the private subnet and connect it to a NAT Instance which will handle the outbound URL restriction is incorrect because the EC2 instances are used as public-facing web servers and thus must be deployed in the public subnet. An instance in a private subnet connected to a NAT Instance will not be able to accept inbound connections to the online shopping website.
The option that says: Create a new NAT Gateway in your VPC. Place the EC2 instances to the private subnet and connect it to a NAT Gateway which will handle the outbound URL restriction is incorrect for the same reason as the option above. An instance in a private subnet connected to a NAT Gateway will not be able to accept inbound connections to the online shopping site. In addition, a NAT Gateway cannot filter outbound traffic based on URLs.
The option that says: Implementing a Network ACL to all specific URLs by the e-commerce platform with an implicit deny rule is incorrect because a network access control list (Network ACL) has limited functionality and cannot filter requests based on URLs.
References:
Check out this Amazon VPC Cheat Sheet:
Question 30: Skipped
A company runs a cryptocurrency analytics website and uses a CloudFront distribution with a custom domain name (tutorialsdojo.com) to speed up the loading time of the site. Since the data being distributed are quite confidential, the management instructed the solutions architect to require HTTPS communication between the viewers (web visitors) and the CloudFront distribution. Additionally, it is required to improve the performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers.
Which of the following are the recommended actions to accomplish the above requirement? (Select TWO.)
Integrate your CloudFront web distribution with Amazon OpenSearch to cache recent requests and improve the performance of your origin servers. Use Kibana to visualize the cache hit-ratio graph in real time.
Use an SSL/TLS certificate provided by AWS Certificate Manager (ACM).(Correct)
Import an SSL/TLS certificate from a third-party certificate authority into a private S3 bucket with versioning and MFA enabled.
Associate your CloudFront web distribution with Lambda@Edge which provides automatic scalability from a few requests per day to thousands of requests per second.
Configure the CloudFront origin to add a Cache-Control max-age directive to your objects and specify the longest practical value for max-age.
(Correct)
Explanation
You can configure one or more cache behaviors in your CloudFront distribution to require HTTPS for communication between viewers and CloudFront. You also can configure one or more cache behaviors to allow both HTTP and HTTPS so that CloudFront requires HTTPS for some objects but not for others. The configuration steps depend on which domain name you're using in object URLs:
- If you're using the domain name that CloudFront assigned to your distribution, such as d111111abcdef8.cloudfront.net, you change the Viewer Protocol Policy setting for one or more cache behaviors to require HTTPS communication. In that configuration, CloudFront provides the SSL/TLS certificate.
- If you're using your own domain name, such as tutorialsdojo.com, you need to change several CloudFront settings. You also need to use an SSL/TLS certificate provided by AWS Certificate Manager (ACM), or import a certificate from a third-party certificate authority into ACM or the IAM certificate store.
You can improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content; that is, by improving the cache hit ratio for your distribution. To increase your cache hit ratio, you can configure your origin to add a Cache-Control max-age directive to your objects and specify the longest practical value for max-age. The shorter the cache duration, the more frequently CloudFront forwards another request to your origin to determine whether the object has changed and, if so, to get the latest version.
Therefore, the correct answers are:
- Use an SSL/TLS certificate provided by AWS Certificate Manager (ACM).
- Configure the CloudFront origin to add a Cache-Control max-age directive to your objects and specify the longest practical value for max-age.
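A brief sketch of both actions is shown below. The domain comes from the scenario, while the bucket, key, content type, and max-age value are assumptions; note that a certificate used with CloudFront must be requested in the us-east-1 Region.

```python
import boto3

# 1) Request a public certificate for the custom domain. For use with CloudFront,
#    the ACM certificate must be requested in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")
acm.request_certificate(
    DomainName="tutorialsdojo.com",
    SubjectAlternativeNames=["*.tutorialsdojo.com"],
    ValidationMethod="DNS",
)

# 2) Add a long Cache-Control max-age to an existing origin object by copying it
#    over itself with replaced metadata (bucket and key are placeholders).
s3 = boto3.client("s3")
s3.copy_object(
    Bucket="crypto-analytics-origin",
    Key="reports/latest.json",
    CopySource={"Bucket": "crypto-analytics-origin", "Key": "reports/latest.json"},
    CacheControl="public, max-age=86400",
    ContentType="application/json",
    MetadataDirective="REPLACE",
)
```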
The option that says: Associate your CloudFront web distribution with Lambda@Edge which provides automatic scalability from a few requests per day to thousands of requests per second is incorrect because Lambda@Edge is an extension of AWS Lambda which lets you execute functions that customize the content that CloudFront delivers. This feature does not improve the cache hit ratio of your CloudFront distribution.
The option that says: Integrate your CloudFront web distribution with Amazon OpenSearch to cache recent requests and improve the performance of your origin servers. Use Kibana to visualize the cache hit-ratio graph in real time is incorrect. Amazon OpenSearch service is used to perform interactive log analytics, real-time application monitoring, and more. This service will not improve the cache hit ratio of your CloudFront distribution. The built-in Kibana support only provides a way to get faster and better insights into your data and thus, not relevant to this scenario.
The option that says: Import an SSL/TLS certificate from a third-party certificate authority into a private S3 bucket with versioning and MFA enabled is incorrect. Whether an SSL/TLS certificate is public or private, it is not recommended to store it in an Amazon S3 bucket; a third-party certificate should be imported into ACM or the IAM certificate store instead.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 31: Skipped
A company hosts its main web application on the AWS cloud which is composed of web servers and database servers. To ensure high availability, the web servers are deployed on an Auto Scaling group of Amazon EC2 instances across multiple Availability Zones with an Application Load Balancer in front. For the database, it is deployed on a Multi-Availability Zone configuration in Amazon RDS. During the RDS maintenance window, the operating system of the primary DB instance undergoes software patching that triggers the failover process.
What would happen to the database during failover?
The RDS DB instance will automatically reboot.
The canonical name record (CNAME) is changed from the primary database to the standby database.(Correct)
A new DB instance will be created and immediately replace the primary database.
The IP address of the primary DB instance is switched to the standby DB instance.
Explanation
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete.
When automatic failover occurs, your application can remain unaware of what's happening behind the scenes. The CNAME record for your DB instance will be altered to point to the newly promoted standby.
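A quick way to observe this behavior is to resolve the DB endpoint's DNS name before and after a failover. The sketch below (the DB instance identifier is hypothetical) shows why applications should always connect by the endpoint name rather than a cached IP address:

```python
import socket

import boto3

rds = boto3.client("rds")

# The endpoint address is a DNS name (a CNAME); applications should connect
# by this name, never by a cached IP address.
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="example-db"                # hypothetical identifier
)["DBInstances"][0]["Endpoint"]["Address"]

# Resolving the endpoint before and after a failover returns different IPs,
# because the CNAME now points at the promoted standby.
print(endpoint, "->", socket.gethostbyname(endpoint))
```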
Therefore, the correct answer is: The canonical name record (CNAME) is changed from the primary database to the standby database.
The option that says: The IP address of the primary DB instance is switched to the standby DB instance is incorrect because the Canonical Name Record (CNAME) will be changed and not the IP address.
The option that says: The RDS DB instance will automatically reboot is incorrect because the RDS instance will not automatically reboot.
The option that says: A new DB instance will be created and immediately replace the primary database is incorrect because there is no new DB instance that will be created.
References:
Check out this Amazon RDS Cheat Sheet:
Question 35: Skipped
A company has built an application that allows painters to upload photos of their creations. The app allows users from North America and European regions to browse the galleries and order their chosen artworks. The application is hosted on a fixed set of Amazon EC2 instances in the us-east-1 region. Using mobile phones, the artists can scan and upload large, high-resolution images of their artworks which are stored in a centralized Amazon S3 bucket also in the same region. After the initial week of operation, the European artists are reporting slow performance on their image uploads.
Which of the following is the best solution to improve the image upload process?
Enable Amazon S3 Transfer Acceleration on the central S3 bucket. Use the s3-accelerate endpoint to upload the images.
(Correct)
Set the centralized Amazon S3 bucket as the custom origin on an Amazon CloudFront distribution. This will use CloudFront’s global edge network to improve the upload speed.
Enable multipart upload on Amazon S3 and redeploy the application to support it. This allows the transmitting of separate parts of the image in parallel.
Increase the upload capacity by creating an AWS Global Accelerator endpoint in front of the Amazon EC2 instances. Create an Auto Scaling Group that can scale automatically based on the users' traffic.
Explanation
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
- You have customers that upload to a centralized bucket from all over the world.
- You transfer gigabytes to terabytes of data on a regular basis across continents.
- You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon S3.
You can enable Transfer Acceleration on a bucket in any of the following ways:
- Use the Amazon S3 console.
- Use the REST API PUT Bucket accelerate operation.
- Use the AWS CLI and AWS SDKs.
You can transfer data to and from the acceleration-enabled bucket by using one of the following s3-accelerate endpoint domain names:
- s3-accelerate.amazonaws.com – to access an acceleration-enabled bucket.
- s3-accelerate.dualstack.amazonaws.com – to access an acceleration-enabled bucket over IPv6. Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4.
You can point your Amazon S3 PUT object and GET object requests to the s3-accelerate endpoint domain name after you enable Transfer Acceleration. After Transfer Acceleration is enabled, it can take up to 20 minutes for you to realize the performance benefit. However, the accelerate endpoint will be available as soon as you enable Transfer Acceleration.
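As a minimal sketch (bucket and file names are hypothetical), Transfer Acceleration can be enabled and the accelerate endpoint used through the AWS SDK, for example with boto3:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time setup: turn on Transfer Acceleration for the central bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="example-central-bucket",                 # hypothetical bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Create a client that sends requests to the s3-accelerate endpoint.
s3_accel = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True}),
)

# Uploads from distant regions are now routed through the nearest edge location.
s3_accel.upload_file("artwork.tiff", "example-central-bucket", "uploads/artwork.tiff")
```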
Therefore, the correct answer is: Enable Amazon S3 Transfer Acceleration on the central S3 bucket. Use the s3-accelerate endpoint to upload the images.
The option that says: Enable multipart upload on Amazon S3 and redeploy the application to support it. This allows the transmitting of separate parts of the image in parallel is incorrect. Although multipart upload can improve the upload throughput to an S3 bucket, the European users are still limited by their Internet connection to the S3 bucket in the US region. S3 Transfer Acceleration uses the AWS backbone network, which can optimize the transfer from far away regions.
The option that says: Set the centralized Amazon S3 bucket as the custom origin on an Amazon CloudFront distribution. This will use CloudFront’s global edge network to improve the upload speed is incorrect. CloudFront distribution is designed for optimizing content delivery and content caching. Although CloudFront supports content uploads via POST, PUT, and other HTTP Methods, there is a limited connection timeout to the origin (60 seconds). If uploads take several minutes, the connection might get terminated. If you want to optimize performance when uploading large files to Amazon S3, it is recommended to use Amazon S3 Transfer Acceleration which can provide fast and secure transfers over long distances. Transfer Acceleration uses Amazon CloudFront's globally distributed edge locations.
The option that says: Increase the upload capacity by creating an AWS Global Accelerator endpoint in front of the Amazon EC2 instances. Create an Auto Scaling Group that can scale automatically based on the users' traffic is incorrect. Although AWS Global Accelerator can help increase network availability and performance, it won't have an effect on this scenario because the images are directly uploaded to the S3 bucket. Increasing the number of EC2 instances does not necessarily improve the S3 upload speeds. Even if the application is configured to use the EC2 instance as a temporary storage for the images, the upload experience of the users will not improve because they are uploading from a different continent.
References:
Check out this AWS Transfer Acceleration Comparison Cheat Sheet:
Question 38: Skipped
An online gambling site is hosted in two Elastic Compute Cloud (EC2) instances inside a Virtual Private Cloud (VPC) in the same Availability Zone (AZ) but in different subnets. The first EC2 instance is running a database and the other EC2 instance is a web application that fetches data from the database. You are required to ensure that the two EC2 instances can connect with each other in order for your application to work properly. You also need to track historical changes to the security configurations associated to your instances.
Which of the following options below can meet this requirement? (Select TWO.)
Use Route 53 to ensure that there is proper routing between the two subnets.
Use AWS Systems Manager to track historical changes to the security configurations associated to your instances.
Use AWS Config to track historical changes to the security configurations associated to your instances.
(Correct)
Check and configure the network ACL to allow communication between the two subnets. Ensure that the security groups allow the application host to talk to the database on the right port and protocol.
(Correct)
Ensure that the default route is set to a NAT instance or Internet Gateway (IGW).
Explanation
AWS provides two features that you can use to increase security in your VPC: security groups and network ACLs. Security groups control inbound and outbound traffic for your instances, and network ACLs control inbound and outbound traffic for your subnets. In most cases, security groups can meet your needs; however, you can also use network ACLs if you want an additional layer of security for your VPC.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.
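For example, the recorded configuration history of a security group can be pulled from AWS Config with a call similar to the boto3 sketch below (the security group ID is hypothetical):

```python
import boto3

config = boto3.client("config")

# Retrieve the configuration history that AWS Config recorded for a security
# group, which shows every tracked change to its rules over time.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",               # hypothetical security group ID
)

for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```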
The option that says: Check and configure the network ACL to allow communication between the two subnets. Ensure that the security groups allow the application host to talk to the database on the right port and protocol is correct. NACLs and security groups act like firewalls for communication within your instances and subnets.
The option that says: Use AWS Config to track historical changes to the security configurations associated to your instances is correct. AWS Config can help you maintain compliance with your security setting by monitoring and detecting violations on your security groups depending on the rules you have specified.
The option that says: Use Route 53 to ensure that there is proper routing between the two subnets is incorrect. Route 53 can't be used to connect two subnets. You should use Network ACLs and Security Groups instead.
The option that says: Ensure that the default route is set to NAT instance or Internet Gateway (IGW) is incorrect. Neither a NAT instance nor an Internet gateway is needed for the two EC2 instances to communicate.
The option that says: Use AWS Systems Manager to track historical changes to the security configurations associated to your instances is incorrect. Using AWS Systems Manager is not suitable to track historical changes to the security configurations associated to your instances. You have to use AWS Config instead.
References:
Check out these Amazon VPC and AWS Config Cheat Sheets:
Learn more about AWS Config in this 19-minute video:
Question 45: Skipped
A company is running thousands of virtualized Linux and Microsoft Windows servers on its on-premises data center. The virtual servers host a range of Java and PHP applications that are using MySQL and Oracle databases. There are also several department services hosted on an external data center. The company uses SAN storage to provide iSCSI disks to its physical servers. The company wants to migrate its data center into the AWS Cloud but the technical documentation of the systems is incomplete and outdated. The Solutions Architect was tasked to analyze the current environment and estimate the cost of migrating the resources to the cloud.
Which of the following should the Solutions Architect do to effectively plan the cloud migration? (Select THREE.)
Use AWS X-Ray to analyze the applications running in the servers and identify possible errors that may be encountered during the migration.
Use the AWS Cloud Adoption Readiness Tool (CART) to generate a migration assessment report to identify gaps in organizational skills and processes.
(Correct)
Use AWS Migration Hub to discover and track the status of the application migration across AWS and partner solutions.
(Correct)
Use AWS Application Migration Service (MGN) to automate the migration of the on-premises virtual machines to the AWS Cloud.
Use Amazon Inspector to scan and assess the applications deployed on the on-premises virtual machines and save the generated report to an Amazon S3 bucket.
Use AWS Application Discovery Service to gather information about the running virtual machines and running applications inside the servers.
(Correct)
Explanation
The scenario requires tools or services that will help to effectively plan the cloud migration, so the answers should be focused on planning.
AWS Application Discovery Service helps you plan your migration to the AWS cloud by collecting usage and configuration data about your on-premises servers. Application Discovery Service is integrated with AWS Migration Hub, which simplifies your migration tracking as it aggregates your migration status information into a single console. You can view the discovered servers, group them into applications, and then track the migration status of each application from the Migration Hub console in your home region.
Application Discovery Service offers two ways of performing discovery and collecting data about your on-premises servers:
- Agentless discovery can be performed by deploying the AWS Agentless Discovery Connector (OVA file) through your VMware Center.
- Agent-based discovery can be performed by deploying the AWS Application Discovery Agent on each of your VMs and physical servers.
The AWS Cloud Adoption Readiness Tool (CART) helps organizations of all sizes develop efficient and effective plans for cloud adoption and enterprise cloud migrations. This 16-question online survey and assessment report detail your cloud migration readiness across six perspectives, including business, people, process, platform, operations, and security. Once you complete a CART survey, you can provide your contact details to download a customized cloud migration assessment that charts your readiness and what you can do to improve it. This tool is designed to help organizations assess their progress with cloud adoption and identify gaps in organizational skills and processes.
AWS Migration Hub (Migration Hub) provides a single place to discover your existing servers, plan migrations, and track the status of each application migration. The Migration Hub provides visibility into your application portfolio and streamlines planning and tracking. You can visualize the connections and the status of the servers and databases that make up each of the applications you are migrating, regardless of which migration tool you are using. Migration Hub gives you the choice to start migrating right away and group servers while migration is underway or to first discover servers and then group them into applications.
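As an illustrative sketch only, the servers discovered by Application Discovery Service can be listed through the SDK once the agents or the connector start reporting; the region and output attribute names below are assumptions:

```python
import boto3

# AWS Application Discovery Service (boto3 client name "discovery") is called
# from the Migration Hub home region; us-west-2 is assumed here.
discovery = boto3.client("discovery", region_name="us-west-2")

# List the on-premises servers that have been discovered so far.
servers = discovery.list_configurations(configurationType="SERVER")

for server in servers["configurations"]:
    print(server.get("server.hostName"), server.get("server.osName"))
```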
Therefore, the correct answers are:
- Use AWS Application Discovery Service to gather information about the running virtual machines and running applications inside the servers.
- Use the AWS Cloud Adoption Readiness Tool (CART) to generate a migration assessment report to identify gaps in organizational skills and processes.
- Use AWS Migration Hub to discover and track the status of the application migration across AWS and partner solutions.
The option that says: Use AWS Application Migration Service (MGN) to automate the migration of the on-premises virtual machines to the AWS Cloud is incorrect because MGN is primarily used for the actual migration of your on-premises virtual machines to the AWS cloud and not for planning. Take note that in the scenario, the Solutions Architect was tasked to analyze the existing on-premises architecture first before doing the actual migration to AWS.
The option that says: Use AWS X-Ray to analyze the applications running in the servers and identify possible errors that may be encountered during the migration is incorrect because AWS X-Ray is used to debug production and distributed applications such as those built using a microservices architecture. This is not helpful for planning the migration.
The option that says: Use Amazon Inspector to scan and assess the applications deployed on the on-premises virtual machines and save the generated report to an Amazon S3 bucket is incorrect because Amazon Inspector is simply an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. This is not helpful for assessing the applications on the on-premises data center.
References:
Check out the AWS Migration Services Cheat Sheet:
Question 50: Skipped
A company has launched a web service in the cloud that analyzes tweets filtered by keywords. This service is hosted on a fleet of On-Demand EC2 instances running in multiple Availability Zones with Auto Scaling and is load-balanced by an Application Load Balancer. After checking the load balancer logs, the solutions architect noticed that the On-Demand EC2 instances in one of the AZs are not receiving requests.
Which of the following options is the most likely cause of this issue?
Amazon EC2 Auto scaling does not span multiple availability zones.
Multi-AZ autoscaling only works in the North Virginia region.
The availability zone that is not receiving traffic was not associated with the application load balancer.
(Correct)
You have to manually add instances in each AZ for them to receive traffic.
Explanation
You can set up your load balancer to distribute incoming requests across EC2 instances in a single Availability Zone or in multiple Availability Zones. First, launch EC2 instances in all the Availability Zones that you plan to use. Next, register these instances with your load balancer. Finally, add the Availability Zones to your load balancer. After you add an Availability Zone, the load balancer starts routing requests to the registered instances in that Availability Zone. Note that you can modify the Availability Zones for your load balancer at any time. By default, the load balancer routes requests evenly across its Availability Zones. To route requests evenly across the registered instances in the Availability Zones, enable cross-zone load balancing.
If cross-zone load balancing is disabled:
- Each of the two targets in Availability Zone A receives 25% of the traffic.
- Each of the eight targets in Availability Zone B receives 6.25% of the traffic.
This is because each load balancer node can route 50% of its client traffic only to targets in its Availability Zone.
If cross-zone load balancing is enabled, each of the 10 targets receives 10% of the traffic. This is because each load balancer node can route 50% of its client traffic to all 10 targets.
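For an Application Load Balancer, an Availability Zone receives traffic only if one of its subnets is associated with the load balancer. A minimal boto3 sketch (the ARN and subnet IDs are hypothetical) that adds the missing AZ looks like this:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Associate the load balancer with one subnet per Availability Zone that
# should receive traffic; an AZ that is not listed here gets no requests.
elbv2.set_subnets(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/example-alb/1234567890abcdef"      # hypothetical ARN
    ),
    Subnets=[
        "subnet-aaaa1111",   # us-east-1a
        "subnet-bbbb2222",   # us-east-1b
        "subnet-cccc3333",   # us-east-1c (the previously missing AZ)
    ],
)
```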
Therefore, the correct answer is: The availability zone that is not receiving traffic was not associated with the application load balancer. Most likely, the reason is that the specific AZ is not added to the ELB.
The option that says: Amazon EC2 Auto Scaling does not span multiple availability zones is incorrect because autoscaling can work with multiple AZs.
The option that says: Multi-AZ autoscaling only works in the North Virginia region is incorrect because autoscaling can be enabled for multi AZ in any single region, not just N. Virginia.
The option that says: You have to manually add instances in each AZ for them to receive traffic is incorrect because instances do not need to be added manually to each AZ; the load balancer routes traffic to the registered instances in its enabled Availability Zones.
References:
Check out this AWS Elastic Load Balancing Cheat Sheet:
Question 56: Skipped
A finance company plans to launch a new website to allow users to view tutorials that promote the proper usage of their mobile app. The website contains static media files that are stored on a private Amazon S3 bucket while the dynamic contents are hosted on an AWS Fargate cluster. The AWS Fargate tasks are accepting traffic behind an Application Load Balancer (ALB). To improve user experience, the static and dynamic content are placed behind a CloudFront distribution. An Amazon Route 53 Alias record has already been created to point the website URL to the CloudFront distribution. The company wants to ensure that access to both static and dynamic content is done through CloudFront only.
Which of the following options should the Solutions Architect implement to meet this requirement? (Select TWO.)
Use CloudFront to add a custom header to all origin requests. Using AWS WAF, create a web rule that denies all requests without this custom header. Associate the web ACL to the Application Load Balancer.
(Correct)
Create a network ACL that will allow connections from Amazon CloudFront only. Associate the NACL to the Application Load Balancer subnets.
Configure the Amazon S3 bucket ACL to block all access except requests coming from the Amazon CloudFront distribution.
Use CloudFront to add a custom header to all origin requests. Using AWS WAF, create a web rule that denies all requests without this custom header. Associate the web ACL to the CloudFront distribution.
Create a special CloudFront user called an origin access control (OAC) and associate it with your distribution. Configure the S3 bucket policy to allow access only from the OAC.
(Correct)
Explanation
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to CloudFront, and lets you control access to your content. Based on conditions that you specify, such as the values of query strings or the IP addresses that requests originate from, CloudFront responds to requests either with the requested content or with an HTTP status code 403 (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked.
After you create an AWS WAF web access control list (web ACL), create or update a web distribution to associate the distribution with the web ACL. You can associate as many CloudFront distributions as you want with the same web ACL or with different web ACLs.
On Amazon CloudFront, you can control user access to your private content in two ways:
Restrict access to files in CloudFront caches.
Restrict access to files in your origin by doing one of the following:
- Set up an origin access control (OAC) for your Amazon S3 bucket.
- Configure custom headers for a private HTTP server (a custom origin).
You can secure the content in your Amazon S3 bucket so that users can access it through CloudFront but cannot access it directly by using Amazon S3 URLs. This prevents someone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. To require that users access your content through CloudFront URLs, you do the following tasks:
Create a special CloudFront user called an origin access control and associate it with your CloudFront distribution.
Give the origin access control permission to read the files in your bucket.
Remove permission for anyone else to use Amazon S3 URLs to read the files.
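A rough sketch of the resulting S3 bucket policy follows; the bucket name, account ID, and distribution ID are hypothetical. It grants read access only to the CloudFront service principal for a specific distribution:

```python
import json

import boto3

s3 = boto3.client("s3")

# Bucket policy that lets only the CloudFront distribution (via its OAC)
# read objects, so direct Amazon S3 URLs no longer work for viewers.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-static-bucket/*",          # hypothetical bucket
        "Condition": {"StringEquals": {
            "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EXAMPLEID"
        }},
    }],
}

s3.put_bucket_policy(Bucket="example-static-bucket", Policy=json.dumps(policy))
```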
If you use a custom origin, you can optionally set up custom headers to restrict access. For CloudFront to get your files from a custom origin, the files must be accessible by CloudFront using a standard HTTP (or HTTPS) request. But by using custom headers, you can further restrict access to your content so that users can access it only through CloudFront, not directly. This step isn't required to use signed URLs, but it is recommended. To require that users access content through CloudFront, change the following settings in your CloudFront distributions:
Origin Custom Headers - Configure CloudFront to forward custom headers to your origin.
Viewer Protocol Policy - Configure your distribution to require viewers to use HTTPS to access CloudFront.
Origin Protocol Policy - Configure your distribution to require CloudFront to use the same protocol as viewers to forward requests to the origin.
The option that says: Use CloudFront to add a custom header to all origin requests. Using AWS WAF, create a web rule that denies all requests without this custom header. Associate the web ACL to the Application Load Balancer is correct. After you create the AWS WAF web access control list (web ACL), associate it with the Application Load Balancer. This ensures that only requests from CloudFront will reach the ALB; any request going to the ALB without the custom header will be denied by AWS WAF.
The option that says: Create a special CloudFront user called an origin access control (OAC) and associate it with your distribution. Configure the S3 bucket policy to allow access only from the OAC is correct. After you take these steps, users can only access your files through CloudFront, not directly from the S3 bucket.
The option that says: Use CloudFront to add a custom header to all origin requests. Using AWS WAF, create a web rule that denies all requests without this custom header. Associate the web ACL to the CloudFront distribution is incorrect. Viewer requests arriving at CloudFront do not carry the custom header, because CloudFront only adds it to the requests it forwards to the origin. A web ACL attached to the distribution would therefore block legitimate viewer traffic. You need to associate the web ACL with the ALB instead, which receives requests only after CloudFront has added the custom header.
The option that says: Create a network ACL that will allow connections from Amazon CloudFront only. Associate the NACL to the Application Load Balancer subnets is incorrect. This will limit all resources inside the ALB subnets to accept only traffic from the CloudFront distribution. However, there are no fixed IP addresses for Amazon CloudFront and if you manually add AWS IP addresses, you will have to update the NACL as AWS updates its IP pool.
The option that says: Configure the Amazon S3 bucket ACL to block all access except requests coming from the Amazon CloudFront distribution is incorrect. You can't directly configure a bucket ACL to allow access from Amazon CloudFront only. You will need an origin access control (OAC) for this setup.
References:
Check out these Amazon CloudFront and AWS WAF Cheat Sheets:
Question 67: Skipped
An online stock trading application is deployed to multiple Availability Zones in the us-east-1 region (N. Virginia) and uses RDS to host the database. Considering the massive financial transactions that the trading application handles, the company has hired you to be a consultant to make sure that the system is scalable, highly-available, and disaster resilient. In the event of failure, the Recovery Time Objective (RTO) must be less than 2 hours and the Recovery Point Objective (RPO) must be 10 minutes to meet the compliance requirements set by the regulators.
In this scenario, which Disaster Recovery strategy can be used to achieve the RTO and RPO requirements in the event of system failure? (Select TWO.)
Store hourly database backups to an EC2 instance store volume with transaction logs stored in an S3 bucket every 5 minutes.
Configure your database to use synchronous “source-replica” replication between multiple Availability Zones.
Take 15-minute database backups stored in Glacier with transaction logs stored in S3 every 5 minutes.
Set up an AWS Backup plan for the Amazon RDS database with the continuous backups for point-in-time recovery (PITR) option enabled
(Correct)
Take hourly database backups and export to an S3 bucket with transaction logs stored in S3 every 5 minutes. Set up a Cross-Region Replication (CRR) to another AWS Region.
(Correct)
Explanation
Point-in-time recovery (PITR) is the process of restoring a database to the state it was in at a specified date and time.
When automated backups are turned on for your DB instance, Amazon RDS automatically performs a full daily snapshot of your data. The snapshot occurs during your preferred backup window. It also captures transaction logs to Amazon S3 every 5 minutes (as updates to your DB instance are made). Archiving the transaction logs is an important part of your DR process and PITR. When you initiate a point-in-time recovery, transactional logs are applied to the most appropriate daily backup in order to restore your DB instance to the specific requested time.
With S3 Cross-Region Replication (CRR), you can replicate objects (and their respective metadata and object tags) into other AWS Regions for reduced latency, compliance, security, disaster recovery, and other use cases. S3 CRR is configured to a source S3 bucket and replicates objects into a destination bucket in another AWS Region.
Amazon S3 CRR automatically replicates data between buckets across different AWS Regions. With CRR, you can set up replication at a bucket level, a shared prefix level, or an object level using S3 object tags. You can use CRR to provide lower-latency data access in different geographic regions. CRR can also help if you have a compliance requirement to store copies of data hundreds of miles apart. You can use CRR to change account ownership for the replicated objects to protect data from accidental deletion.
AWS Backup is a fully-managed service that makes it easy to centralize and automate data protection across AWS services, in the cloud and on-premises. Using this service, you can configure backup policies and monitor activity for your AWS resources in one place. It allows you to automate and consolidate backup tasks that were previously performed service-by-service and remove the need to create custom scripts and manual processes. With a few clicks in the AWS Backup console, you can automate your data protection policies and schedules.
AWS Backup supports continuous backups and point-in-time recovery (PITR) in addition to snapshot backups.
With continuous backups, you can restore your AWS Backup-supported resource by rewinding it back to a specific time that you choose within 1 second of precision (going back a maximum of 35 days). Continuous backup works by first creating a full backup of your resource and then constantly backing up your resource’s transaction logs. PITR restore works by accessing your full backup and replaying the transaction log to the time that you tell AWS Backup to recover.
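For reference, a point-in-time restore of the RDS instance can be requested through the SDK; the sketch below (instance identifiers are hypothetical) rewinds the database by 10 minutes, in line with the RPO in this scenario:

```python
import datetime

import boto3

rds = boto3.client("rds")

# Restore to a point no more than 10 minutes in the past, which is what the
# 10-minute RPO requires. The restore creates a new target DB instance.
restore_time = datetime.datetime.utcnow() - datetime.timedelta(minutes=10)

rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="trading-db",           # hypothetical identifier
    TargetDBInstanceIdentifier="trading-db-restored",
    RestoreTime=restore_time,
)
```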
In this scenario, you have to use durable storage, database backups, and continuous backups for point-in-time recovery (PITR) to satisfy the RTO and RPO requirements.
The option that says: Take hourly database backups and export to an S3 bucket with transaction logs stored in S3 every 5 minutes. Set up a Cross-Region Replication (CRR) to another AWS Region is correct as this solution meets the 2-hour RTO as well as the 10-minute RPO requirement.
The option that says: Set up an AWS Backup plan for the Amazon RDS database with the continuous backups for point-in-time recovery (PITR) option enabled is correct as AWS Backup has a native feature that you can enable to satisfy the point-in-time recovery (PITR) requirement.
The option that says: Store hourly database backups to an EC2 instance store volume with transaction logs stored in an S3 bucket every 5 minutes is incorrect because an instance store volume is ephemeral and it is not suitable to store the database backups.
The option that says: Take 15-minute database backups stored in Glacier with transaction logs stored in S3 every 5 minutes is incorrect because the standard retrieval time for Glacier is 3 to 5 hours, so restoring a backup would already exceed the 2-hour RTO.
The option that says: Configure your database to use synchronous "source-replica" replication between multiple Availability Zones is incorrect because it provides a highly available architecture, but it doesn't provide durable backup storage or DB snapshots.
References:
Check out this AWS Well-Architected Framework Cheat Sheet:
RPO and RTO Explained:
Question 70: Skipped
A car parts manufacturing company installed IP cameras along its assembly line. These cameras are part of the quality inspection as they take photos of each car part and find defects by comparing them with a baseline image. To improve the accuracy of detection, the company used Amazon SageMaker to train a machine learning (ML) model that contains baseline images and common defects.
Upon detection, the workers should receive feedback from the Linux server in the on-premises data center that hosts an API for the IP cameras. The company wants to make this solution available even when the factory’s internet connectivity is down.
Which of the following options is the recommended solution for deploying the ML model that meets the company's requirements?
Request for an AWS Snowball Edge Compute Optimized device. This can provide the computing power for the ML training model. Migrate the Linux server to an Amazon EC2 host on the Snowball device. Configure the IP cameras to send the pictures to the Snowball local storage to be processed by the EC2 server.
Deploy the AWS IoT Greengrass client software to another local server. Run ML inference on the Greengrass server from the ML model trained from Amazon SageMaker. Use Greengrass components to interact with the Linux server API whenever a defect is detected.
(Correct)
Leverage AWS IoT Analytics to collect data from the captured images. Run machine learning analysis for each captured image and generate a report that can be sent to the Linux server API for action.
Deploy an AWS Outposts server on the local data center to create an AWS private cloud. Deploy Amazon SageMaker on this server for ML training and leverage Amazon Rekognition with its computer vision capabilities to detect defects among manufactured car parts.
Explanation
AWS IoT Greengrass is an open-source Internet of Things (IoT) edge runtime and cloud service that helps you build, deploy and manage IoT applications on your devices. You can use AWS IoT Greengrass to build software that enables your devices to act locally on the data that they generate, run predictions based on machine learning models, and filter and aggregate device data. AWS IoT Greengrass enables your devices to collect and analyze data closer to where that data is generated, react autonomously to local events, and communicate securely with other devices on the local network.
AWS IoT Greengrass components are software modules that you deploy to Greengrass core devices. Components can represent applications, runtime installers, libraries, or any code that you would run on a device. You can define components that depend on other components. For example, you might define a component that installs Python and then define that component as a dependency of your components that run Python applications.
With AWS IoT Greengrass, you can perform machine learning (ML) inference on your edge devices on locally generated data using cloud-trained models. You benefit from the low latency and cost savings of running local inference yet still take advantage of cloud computing power for training models and complex processing.
AWS IoT Greengrass makes the steps required to perform inference more efficient. You can train your inference models anywhere and deploy them locally as machine learning components. For example, you can build and train deep-learning models in Amazon SageMaker or computer vision models in Amazon Lookout for Vision.
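As a hedged illustration only, a Greengrass component could push an inference result to the on-premises Linux API over the local network; the endpoint, URL path, and payload fields below are entirely hypothetical:

```python
import json
import urllib.request

# Hypothetical local API endpoint exposed by the on-premises Linux server.
FEEDBACK_URL = "http://192.168.1.10:8080/defects"


def report_defect(part_id: str, defect_type: str, confidence: float) -> None:
    """Send an inference result from the Greengrass device to the local API."""
    payload = json.dumps({
        "partId": part_id,
        "defect": defect_type,
        "confidence": confidence,
    }).encode("utf-8")

    request = urllib.request.Request(
        FEEDBACK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The call stays on the factory LAN, so it keeps working even when the
    # site's internet connection is down.
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()
```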
Therefore, the correct answer is: Deploy the AWS IoT Greengrass client software to another local server. Run ML inference on the Greengrass server from the ML model trained from Amazon SageMaker. Use Greengrass components to interact with the Linux server API whenever a defect is detected. AWS IoT Greengrass makes it easy to perform machine learning inference locally on devices, using models that are created, trained, and optimized in the cloud. AWS IoT Greengrass provides prebuilt components for common use cases which includes interacting with external APIs.
The option that says: Deploy an AWS Outposts server on the local data center to create an AWS private cloud. Deploy Amazon SageMaker on this server for ML training and leverage Amazon Rekognition with its computer vision capabilities to detect defects among manufactured car parts is incorrect. Although AWS Outposts support running some AWS services such as ECS, IoT Greengrass, and SageMaker Edge, it does not support running Amazon Rekognition locally.
The option that says: Request for an AWS Snowball Edge Compute Optimized device. This can provide the computing power for the ML training model. Migrate the Linux server to an Amazon EC2 host on the Snowball device. Configure the IP cameras to send the pictures to the Snowball local storage to be processed by the EC2 server is incorrect. Even though the Snowball Edge device has enough computing power, this is not an ideal solution because you will need to return the Snowball device to AWS. It is not intended to stay long-term with the user.
The option that says: Leverage AWS IoT Analytics to collect data from the captured images. Run machine learning analysis for each captured image and generate a report that can be sent to the Linux server API for action is incorrect. AWS IoT Analytics cannot be run locally without internet connectivity.
References:
Question 71: Skipped
A retail company is planning to deploy its business analytics application on AWS to gather insights on customer behaviors and preferences. With a large amount of data that needs to be processed, the application requires 10,000 hours of computing time every month. Since this is for analytics, the company is flexible when it comes to the availability of compute resources and wants the solution to be as cost-effective as possible. Supplementing the analytics application, a reporting service needs to run continuously to distribute analytical reports.
Which of the following is a cost-effective solution that will satisfy the company's requirements?
Deploy the analytics service on AWS App Runner to reduce operational overhead. Configure Auto Scaling to the service to meet the needed capacity. Deploy the reporting service on an Amazon EC2 On-Demand instance.
Deploy the analytics service on a fleet of Amazon EC2 Spot instances in an Auto Scaling group. Configure a custom metric to scale the Spot fleet to meet the needed capacity. Create a container for the reporting service and run it on Amazon ECS with AWS Fargate.
(Correct)
Deploy the analytics service on a combination of Amazon EC2 On-Demand instances and Reserved Instances with a 3-year term. Configure a custom metric in Auto Scaling to scale the fleet to meet the needed capacity. Deploy the reporting service container on AWS App Runner to reduce cost and operational overhead.
Create a container for the analytics application and run it on Amazon ECS with AWS Fargate. Configure a custom metric with Service Auto Scaling to scale the analytics application and meet the needed capacity. Deploy the reporting service on a fleet of Amazon EC2 Spot instances.
Explanation
Amazon EC2 Spot Instances lets you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and test & development workloads. Because Spot Instances are tightly integrated with AWS services such as Auto Scaling, EMR, ECS, CloudFormation, Data Pipeline, and AWS Batch, you can choose how to launch and maintain your applications running on Spot Instances.
A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
AWS Fargate is a serverless compute engine for containers that work with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications.
Amazon Elastic Container Service (Amazon ECS) is a shared state, optimistic concurrency system that provides flexible scheduling capabilities for your tasks and containers. Each task that uses the Fargate launch type has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
Amazon ECS provides a service scheduler (for long-running tasks and applications) and the ability to run tasks manually (for batch jobs or single-run tasks), with Amazon ECS placing tasks on your cluster for you. You can specify task placement strategies and constraints that allow you to run tasks in the configuration you choose, such as spread out across Availability Zones.
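A minimal sketch of running the reporting container as a long-running Fargate service is shown below; the cluster, task definition, and subnet IDs are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

# Run the reporting container continuously as a Fargate service; the ECS
# service scheduler replaces the task automatically if it becomes unhealthy.
ecs.create_service(
    cluster="analytics-cluster",                     # hypothetical cluster name
    serviceName="reporting-service",
    taskDefinition="reporting-task:1",               # hypothetical task definition
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```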
Therefore, the correct answer is: Deploy the analytics service on a fleet of Amazon EC2 Spot instances in an Auto Scaling group. Configure a custom metric to scale the Spot fleet to meet the needed capacity. Create a container for the reporting service and run it on Amazon ECS with AWS Fargate. With a flexible workload, you can save a lot of costs by using Spot Instances. AWS Fargate is suitable for running a container continuously. AWS Fargate will automatically replace the container if it becomes unhealthy.
The option that says: Create a container for the analytics application and run it on Amazon ECS with AWS Fargate. Configure a custom metric with Service Auto Scaling to scale the analytics application and meet the needed capacity. Deploy the reporting service on a fleet of Amazon EC2 Spot instances is incorrect. The reporting service needs to run continuously so it is not recommended to run it on EC2 Spot instances which can be reclaimed by AWS anytime. Running thousands of hours of workload on an AWS Fargate cluster is more expensive compared to using Spot instances.
The option that says: Deploy the analytics service on AWS App Runner to reduce operational overhead. Configure Auto Scaling to the service to meet the needed capacity. Deploy the reporting service on an Amazon EC2 On-Demand instance is incorrect. AWS App Runner can be used to quickly deploy containerized web applications and APIs, at scale and with no prior infrastructure experience required. However, the reporting service needs to run continuously so it is not cost-effective to run it on an On-Demand instance.
The option that says: Deploy the analytics service on a combination of Amazon EC2 On-Demand instances and Reserved Instances with a 3-year term. Configure a custom metric in Auto Scaling to scale the fleet to meet the needed capacity. Deploy the reporting service container on AWS App Runner to reduce cost and operational overhead is incorrect. Running the reporting service on AWS App Runner is a good choice. However, the On-Demand instances and Reserved instances are not cost-effective in running the analytics application. Since the workload can be flexible and only run once a month, Spot instances are recommended for this scenario.
References:
Check out these AWS Fargate and Amazon EC2 Cheat Sheets:
Question 1: Skipped
A financial services company uses hardware security modules (HSMs) to generate encryption master keys. Since the company application logs include personally identifiable information, encryption is required as part of regulatory compliance. The application logs are going to be stored on a central Amazon S3 bucket and should be encrypted at rest. The security team wants to use the company HSMs to generate the CMK material for encryption on the S3 bucket.
Which of the following options should the solutions architect implement to meet the company requirements?
Using AWS CLI, create a new CMK with AWS-provided key material and use AWS_KMS as the origin of the key. Overwrite this CMK with a generated key from the on-premises HSMs by using the public key and import token provided by AWS. Set a 1-year duration for the CMK automatic key rotation. Apply an Amazon S3 bucket policy on the central logging bucket to require AWS KMS as the encryption source and deny unencrypted object uploads.
Using AWS CLI, create a new CMK with no key material and use EXTERNAL as the origin of the key. Generate a key from the on-premises HSMs and import it as CMK using the public key and import token from AWS. Apply an Amazon S3 bucket policy on the central logging bucket to require AWS KMS as the encryption source and deny unencrypted object uploads.
(Correct)
Request to provision an AWS Direct Connect connection from the on-premises data center to AWS VPC. Ensure that the network addresses do not overlap. Apply an Amazon S3 bucket policy on the central logging bucket to allow only encrypted object uploads. Configure the application to generate a unique CMK for each logging event by querying the on-premises HSMs through the Direct Connect connection.
Create a new AWS CloudHSM cluster and set it as the key material source in AWS Key Management Service (KMS) when you generate a new CMK. Set a 1-year duration for the CMK automatic key rotation. Apply an Amazon S3 bucket policy on the central logging bucket to require AWS KMS as the encryption source and deny unencrypted object uploads.
Explanation
You can protect data at rest in Amazon S3 by using three different modes of server-side encryption: SSE-S3, SSE-C, or SSE-KMS.
- SSE-S3 requires that Amazon S3 manage the data and the encryption keys.
- SSE-C requires that you manage the encryption key.
- SSE-KMS requires that AWS manage the data key but you manage the customer master key (CMK) in AWS KMS.
Server-side encryption is the encryption of data at its destination by the application or service that receives it. AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. Amazon S3 uses AWS KMS customer master keys (CMKs) to encrypt your Amazon S3 objects. AWS KMS encrypts only the object data. Any object metadata is not encrypted.
If you use CMKs, you use AWS KMS via the AWS Management Console or AWS KMS APIs to centrally create CMKs, define the policies that control how CMKs can be used, and audit their usage to prove that they are being used correctly. You can use these CMKs to protect your data in Amazon S3 buckets. When you use SSE-KMS encryption with an S3 bucket, the AWS KMS CMK must be in the same Region as the bucket. If you want to use a customer managed CMK for SSE-KMS, create the CMK before you configure SSE-KMS. Then, when you configure SSE-KMS for your bucket, specify the existing customer managed CMK.
Creating your own customer managed CMK gives you more flexibility and control. For example, you can create, rotate, and disable customer managed CMKs. You can also define access controls and audit the customer managed CMKs that you use to protect your data.
Below is the process of creating/importing a CMK in AWS KMS.
Create a customer master key (CMK) in AWS KMS that has no key material associated.
Download the import wrapping key and import token from KMS.
Import the wrapping key provided by KMS into the HSM.
Create a 256 bit symmetric key on AWS CloudHSM.
Use the imported wrapping key to wrap the symmetric key.
Import the symmetric key into AWS KMS using the import token from step 2.
Terminate your HSM, which triggers a backup. Delete or leave your cluster, depending on your needs.
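The steps above map roughly to the following boto3 calls; the wrapped key material is shown as a placeholder because the wrapping itself happens on the on-premises HSM:

```python
import boto3

kms = boto3.client("kms")

# Step 1: create a CMK with no key material; EXTERNAL marks it for import.
key_id = kms.create_key(
    Origin="EXTERNAL",
    Description="CMK imported from on-premises HSM",
)["KeyMetadata"]["KeyId"]

# Step 2: download the public wrapping key and the import token from KMS.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# Steps 3-5 happen on the HSM: wrap the 256-bit symmetric key with
# params["PublicKey"], producing the wrapped key material (not shown here).
wrapped_key_material = b"..."   # placeholder for the HSM output

# Step 6: import the wrapped key material into the CMK.
kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=wrapped_key_material,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)
```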
Therefore, the correct answer is: Using AWS CLI, create a new CMK with no key material and use EXTERNAL as the origin of the key. Generate a key from the on-premises HSMs and import it as CMK using the public key and import token from AWS. Apply an Amazon S3 bucket policy on the central logging bucket to require AWS KMS as the encryption source and deny unencrypted object uploads. You can use the company HSMs as an independent CMK source and import them to AWS KMS by creating a CMK with no material and using EXTERNAL as the origin. You can then apply an S3 bucket policy to further restrict unencrypted object uploads.
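A sketch of such a bucket policy (the bucket name is hypothetical) that denies any PutObject request not using SSE-KMS:

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny any PutObject request that does not specify SSE-KMS, so only
# KMS-encrypted objects land in the central logging bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-central-logs/*",           # hypothetical bucket
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
        },
    }],
}

s3.put_bucket_policy(Bucket="example-central-logs", Policy=json.dumps(policy))
```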
The option that says: Create a new AWS CloudHSM cluster and set it as the key material source in AWS Key Management Service (KMS) when you generate a new CMK. Set a 1-year duration for the CMK automatic key rotation. Apply an Amazon S3 bucket policy on the central logging bucket to require AWS KMS as the encryption source and deny unencrypted object uploads is incorrect. You must use the on-premises HSMs as the source for the CMKs so you should not create your own AWS CloudHSM cluster and generate a CMK.
The option that says: Request to provision an AWS Direct Connect connection from the on-premises data center to AWS VPC. Ensure that the network addresses do not overlap. Apply an Amazon S3 bucket policy on the central logging bucket to allow only encrypted object uploads. Configure the application to generate a unique CMK for each logging event by querying the on-premises HSMs through the Direct Connect connection is incorrect. This is not a practical solution, as using a Direct Connect connection for minimal traffic is not recommended. Additionally, this requires you to reconfigure the logging application.
The option that says: Using AWS CLI, create a new CMK with AWS-provided key material and use AWS_KMS as the origin of the key. Overwrite this CMK with a generated key from the on-premises HSMs by using the public key and import token provided by AWS. Set a 1-year duration for the CMK automatic key rotation. Apply an Amazon S3 bucket policy on the central logging bucket to require AWS KMS as the encryption source and deny unencrypted object uploads is incorrect. You should create a CMK with no key materials with EXTERNAL origin to be able to import your keys from the on-premises HSMs. If you use AWS-provided key material and AWS_KMS as origin, you won't be able to import and overwrite the CMK.
References:
Check out these AWS KMS and Amazon S3 Cheat Sheets:
Question 3: Skipped
A cryptocurrency startup owns multiple AWS accounts which are all linked under AWS Organizations. Due to the financial nature of the business, the DevOps lead has been instructed by the CTO to prepare for IT auditing activities to meet industry compliance requirements.
Which of the following provides the most durable and secure logging solution that can be used to track changes made to all of the company’s AWS resources globally?
1. Launch three new CloudTrail trails using three new S3 buckets to store the logs for the AWS Management console, for AWS SDKs, and for the AWS CLI.
2. Enable MFA Delete and Log Encryption on the S3 bucket.
1. Launch a new CloudTrail trail using the AWS console with one new S3 bucket to store the logs and with the "Enable for all accounts in my organization" checkbox enabled.
2. Enable MFA Delete and Log Encryption on the S3 bucket.
(Correct)
1. Launch a new CloudTrail with one new S3 bucket to store the logs.
2. Configure SNS to send log file delivery notifications to your management system.
3. Enable MFA Delete and Log Encryption on the S3 bucket.
1. Launch a new CloudTrail trail using the AWS console with an existing S3 bucket to store the logs and with the "Apply trail to all regions" checkbox enabled.
2. Enable MFA Delete on the S3 bucket.
Explanation
AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.
CloudTrail is enabled on your AWS account when you create it. When activity occurs in your AWS account, that activity is recorded in a CloudTrail event. You can easily view recent events in the CloudTrail console by going to Event history. You can also enable the tracking of multi-region and global events. By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMS–managed keys (SSE-KMS) for your CloudTrail log files.
If you have created an organization in AWS Organizations, you can create a trail that will log all events for all AWS accounts in that organization. This is sometimes referred to as an organization trail. You can also choose to edit an existing trail in the management account and apply it to an organization, making it an organization trail. Organization trails log events for the management account and all member accounts in the organization.
Organization trails are similar to regular trails in many ways. You can create multiple trails for your organization, and choose whether to create an organization trail in all regions or a single region, and what kinds of events you want logged in your organization trail, just as in any other trail.
To create an organization trail, ensure that the "Enable for all accounts in my organization" option is checked when you create a new CloudTrail trail.
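As a rough sketch, the same organization trail can also be created through the SDK; the trail name, bucket, and KMS key below are hypothetical:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One multi-region organization trail that logs every member account into a
# single S3 bucket, with SSE-KMS encryption and log file validation enabled.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-org-trail-logs",           # hypothetical bucket name
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
    EnableLogFileValidation=True,
)
```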
To protect your logs, you can encrypt the S3 bucket and add MFA Delete to protect your trail logs from accidental deletions. In this scenario, the following option is the best answer as it provides all of the things mentioned above:
1. Launch a new CloudTrail trail using the AWS console with one new S3 bucket to store the logs and with the "Enable for all accounts in my organization" checkbox enabled.
2. Enable MFA Delete and Log Encryption on the S3 bucket.
The following option is incorrect because although CloudTrail encrypts the data by default using SSE-S3, it is still more secure if you enable log encryption with SSE-KMS. Take note that the scenario asked for the most durable and secure logging solution:
1. Launch a new CloudTrail trail using the AWS console with an existing S3 bucket to store the logs and with the "Apply trail to all regions" checkbox enabled.
2. Enable MFA Delete on the S3 bucket.
The following option is incorrect because the multi-region option is not enabled, which is needed to capture CloudTrail events from all AWS Regions:
1. Launch a new CloudTrail with one new S3 bucket to store the logs.
2. Configure SNS to send log file delivery notifications to your management system.
3. Enable MFA Delete and Log Encryption on the S3 bucket.
The following option is incorrect because it creates unnecessary S3 buckets, whereas all of the events can easily be logged in a single S3 bucket:
1. Launch three new CloudTrail trails using three new S3 buckets to store the logs for the AWS Management console, for AWS SDKs, and for the AWS CLI.
2. Enable MFA Delete and Log Encryption on the S3 bucket.
References:
Check out this AWS CloudTrail Cheat Sheet:
Question 5: Skipped
A media company runs its new content management system (CMS) on a Windows-based Amazon EC2 instance. This is a test setup with a single instance. After a few weeks of testing, the application will be deployed on a production environment. For high availability, the application will be hosted on at least three Amazon EC2 instances across multiple Availability Zones. The current test EC2 instance has a 1 TB Amazon Elastic Block Store (EBS) volume as its root device. This is where all the static content is stored.
The solutions architect must ensure that all instances will have the same data at all times, for the application to work properly. The filesystem must also support Windows ACLs to control access to file contents. Additionally, all instances must be joined to the company’s Active Directory domain. The solution should have the least amount of management overhead.
Which of the following options should the Solutions Architect implement to meet the company's requirements?
Deploy a new Windows AMI for an Auto Scaling group with a minimum size of three instances and spans across three Availability Zones (AZs). Create an Amazon FSx for Windows File Server file system that will be used for shared storage. Write a user data script to install the CMS application, mount the FSx for Windows File Server file system and join the instances to the AD domain.
(Correct)
Create an Amazon Machine Image (AMI) of the test Amazon EC2 instance. Use the AMI for an Auto Scaling group with a minimum size of three instances that spans three Availability Zones (AZs). Create an Amazon Elastic Filesystem (Amazon EFS) volume. Write a user data script to join the instances on the AD domain and mount the EFS share upon boot-up.
Create an Amazon Machine Image (AMI) of the test Amazon EC2 instance. Use the AMI for an Auto Scaling group with a minimum size of three instances that spans three Availability Zones (AZs). Create an Amazon FSx for Lustre filesystem that will be used for shared storage. Write a user data script to join the instances on the AD domain and mount the EFS share upon boot-up.
Deploy a new Windows AMI for an Auto Scaling group with a minimum size of three instances and spans across three Availability Zones (AZs). Use an Amazon EBS volume with Multi-Attach enabled to allow multiple Amazon EC2 instances to share the volume. Write a user data script to install the CMS application and join the instances to the AD domain.
Explanation
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system. FSx for Windows File Server has the features, performance, and compatibility to easily lift and shift enterprise applications to the AWS Cloud.
Amazon FSx supports a broad set of enterprise Windows workloads with fully managed file storage built on Microsoft Windows Server. Amazon FSx has native support for Windows file system features and for the industry-standard Server Message Block (SMB) protocol to access file storage over a network.
As a fully managed service, FSx for Windows File Server eliminates the administrative overhead of setting up and provisioning file servers and storage volumes. Additionally, Amazon FSx keeps Windows software up to date, detects and addresses hardware failures, and performs backups.
AWS recommends using a staging environment with the same configuration as your production environment. For example, use the same Active Directory (AD) and networking configurations, file system size and configuration, and Windows features, such as data deduplication and shadow copies. Running test workloads in a staging environment that simulates your desired production traffic helps the process run smoothly.
Therefore, the correct answer is: Deploy a new Windows AMI for an Auto Scaling group with a minimum size of three instances and spans across three Availability Zones (AZs). Create an Amazon FSx for Windows File Server file system that will be used for shared storage. Write a user data script to install the CMS application, mount the FSx for Windows File Server file system and join the instances to the AD domain. Amazon FSx for Windows File Server is scalable file storage that is accessible over the SMB protocol. Since it is built on Windows Server, it natively supports administrative features such as user quotas, end-user file restore, and Microsoft Active Directory integration.
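For reference, a minimal boto3 sketch of provisioning the shared file system is shown below. The directory ID, subnets, and security group are placeholders and are assumed to exist already (for example, an AWS Managed Microsoft AD directory).

```python
import boto3

fsx = boto3.client("fsx")

# Hypothetical IDs -- the subnets, security group, and directory are assumed to exist.
file_system = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                      # GiB, roughly matching the 1 TB EBS volume
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",   # AWS Managed Microsoft AD
        "DeploymentType": "MULTI_AZ_1",        # standby file server in a second AZ
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,              # MB/s
    },
)
print(file_system["FileSystem"]["DNSName"])    # the DNS name the user data script would mount
```

The returned DNS name is what the user data script would map to a drive letter before joining each instance to the domain.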
The option that says: Create an Amazon Machine Image (AMI) of the test Amazon EC2 instance. Use the AMI for an Auto Scaling group with a minimum size of three instances that spans three Availability Zones (AZs). Create an Amazon Elastic Filesystem (Amazon EFS) volume. Write a user data script to join the instances on the AD domain and mount the EFS share upon boot-up is incorrect. Amazon EFS uses the NFS protocol which is primarily used by Linux AMIs. This filesystem does not support Windows ACLs.
The option that says: Create an Amazon Machine Image (AMI) of the test Amazon EC2 instance. Use the AMI for an Auto Scaling group with a minimum size of three instances that spans three Availability Zones (AZs). Create an Amazon FSx for Lustre filesystem that will be used for shared storage. Write a user data script to join the instances on the AD domain and mount the EFS share upon boot-up is incorrect. Amazon FSx for Lustre is a POSIX-compliant file system built on Lustre. It can only be used by Linux-based instances.
The option that says: Deploy a new Windows AMI for an Auto Scaling group with a minimum size of three instances and spans across three Availability Zones (AZs). Use an Amazon EBS volume with Multi-Attach enabled to allow multiple Amazon EC2 instances to share the volume. Write a user data script to install the CMS application and join the instances to the AD domain is incorrect. This is not an ideal solution because Multi-Attach EBS volumes can only be attached on instances within the same Availability Zone.
References:
Check out this Amazon FSx Cheat Sheet:
Check out this Amazon EFS, Amazon FSx for Windows and Amazon FSx for Lustre Comparison:
Question 6: Skipped
A tech company in the USA has sold millions of sensors that collect temperature information from different locations in a household. These sensors send data to the IoT application developed by the company which is hosted on the AWS cloud using the domain iot.tutorialsdojotest.com. The domain is registered using Amazon Route 53. The sensors use the MQTT protocol to connect to a custom MQTT broker which is hosted on a large Amazon EC2 instance. After processing the received IoT data, the application then sends the data to an Amazon DynamoDB table for storage.
In the past month, the MQTT broker crashed a few times because it was overloaded by the large amount of data being received. This outage caused sensor data to be lost. The management wants to improve the reliability of the IoT workflow to prevent this from happening again.
Which of the following options is the recommended solution to meet the company's requirements while being cost-effective?
Use AWS IoT Core with MQTT to create a new Data-ATS endpoint. Update the Route 53 DNS zone record to point to the new endpoint and allow the IoT devices to send data using the MQTT protocol. Create an AWS IoT rule to directly insert the data into the Amazon DynamoDB table.
(Correct)
Create a new Data-ATS endpoint in AWS IoT Greengrass. Point the Route 53 DNS zone record to this new endpoint to allow IoT devices on the edge to send data using the MQTT protocol. Create an AWS IoT rule that will invoke a Lambda function to store the received data in the Amazon DynamoDB Table.
Create an Auto Scaling group of Amazon EC2 instances for the message broker. Put the Auto Scaling group behind a Network Load Balancer (NLB) since it supports TCP connection for the MQTT protocol. Update the Route 53 DNS zone record to point to the NLB with an Alias type record. Use Amazon EFS as shared storage for the Auto Scaling group to improve the durability of received data.
Leverage AWS IoT Device Management to update the firmware of the IoT devices. Push a firmware update with a random timer on each device to prevent them from sending data all at once. Set up another Amazon EC2 instance for the MQTT broker in another US region. Create a grouping of devices from IoT Device Management to send the other groups' traffic to this new instance.
Explanation
AWS IoT Core is a managed cloud service that enables connected devices to securely interact with cloud applications and other devices. AWS IoT Core can support many devices and messages and process and route those messages to AWS IoT endpoints and other devices. With AWS IoT Core, your applications can interact with all your devices even when disconnected. AWS IoT Core services connect IoT devices to AWS IoT services and other AWS services. AWS IoT Core includes the device gateway and the message broker, which connect and process messages between your IoT devices and the cloud. AWS IoT lets you select the most appropriate and up-to-date technologies for your solution. To help you manage and support your IoT devices in the field, AWS IoT Core supports these protocols:
- MQTT (Message Queuing and Telemetry Transport)
- MQTT over WSS (WebSockets Secure)
- HTTPS (Hypertext Transfer Protocol - Secure)
- LoRaWAN (Long Range Wide Area Network)
The AWS IoT Core message broker supports devices and clients that use MQTT and MQTT over WSS protocols to publish and subscribe to messages. It also supports devices and clients that use the HTTPS protocol to publish messages.
The AWS IoT Core - data plane endpoints are specific to each AWS account and AWS Region. The Data-ATS endpoint sends and receives data to and from the message broker, Device Shadow, and Rules Engine components of AWS IoT.
You configure AWS IoT rules to route data from your connected things. A rule includes one or more actions that AWS IoT performs when enacting the rule. For example, you can insert data into a DynamoDB table, write data to an Amazon S3 bucket, publish to an Amazon SNS topic, or invoke a Lambda function.
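To make this concrete, the sketch below shows one way such a rule could be defined with boto3; the rule name, topic filter, table, and IAM role are assumptions for illustration.

```python
import boto3

iot = boto3.client("iot")

# Hypothetical names -- the DynamoDB table and the IAM role that lets AWS IoT
# write to it are assumed to exist already.
iot.create_topic_rule(
    ruleName="StoreSensorReadings",
    topicRulePayload={
        "sql": "SELECT temperature, location, timestamp() AS ts FROM 'sensors/+/temperature'",
        "awsIotSqlVersion": "2016-03-23",
        "actions": [
            {
                "dynamoDBv2": {
                    "roleArn": "arn:aws:iam::123456789012:role/iot-dynamodb-write",
                    "putItem": {"tableName": "SensorReadings"},
                }
            }
        ],
    },
)
```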
Therefore, the correct answer is: Use AWS IoT Core with MQTT to create a new Data-ATS endpoint. Update the Route 53 DNS zone record to point to the new endpoint and allow the IoT devices to send data using the MQTT protocol. Create an AWS IoT rule to directly insert the data into the Amazon DynamoDB table. AWS IoT Core supports several protocols, including MQTT without the need to provision or manage servers. It can receive the data from IoT devices, and with an IoT rule, you can create an action to insert data into an Amazon DynamoDB table.
The option that says: Create a new Data-ATS endpoint in AWS IoT Greengrass. Point the Route 53 DNS zone record to this new endpoint to allow IoT devices on the edge to send data using the MQTT protocol. Create an AWS IoT rule that will invoke a Lambda function to store the received data in the Amazon DynamoDB Table is incorrect. AWS IoT Greengrass requires client software that brings intelligence to edge devices. This setup will not improve the performance of the message broker on AWS.
The option that says: Create an Auto Scaling group of Amazon EC2 instances for the message broker. Put the Auto Scaling group behind a Network Load Balancer (NLB) since it supports TCP connection for the MQTT protocol. Update the Route 53 DNS zone record to point to the NLB with an Alias type record. Use Amazon EFS as shared storage for the Auto Scaling group to improve the durability of received data is incorrect. This option may be possible; however, it may cost more for using Amazon EFS, the NLB, and multiple EC2 instances. It also increases the operational overhead for managing the instances.
The option that says: Leverage AWS IoT Device Management to update the firmware of the IoT devices. Push a firmware update with a random timer on each device to prevent them from sending data all at once. Set up another Amazon EC2 instance for the MQTT broker in another US region. Create a grouping of devices from IoT Device Management to send the other groups' traffic to this new instance is incorrect. This option may be possible; however, creating another instance in another US region will cost more. It does not mention how to handle the Route 53 record. Additionally, this solution is not easily scalable if more devices are added in the future.
References:
Question 7: Skipped
A company runs its critical application in an Auto Scaling group of Amazon EC2 instances that uses ElastiCache with Append Only Files (AOF) enabled in multiple AWS regions. Recently, one of the regions experienced a power outage due to a storm which has affected the business revenue.
Assuming that only a short recovery downtime period is allowed, how should the solutions architect maintain site availability in case an event like this occurs again in the future?
Enable Domain Name System Security Extensions (DNSSEC) in your domain. Configure Route 53 to automatically failover the traffic to a secondary group of healthy resources on standby. Configure the 'Evaluate Target Health' attribute to No.
Create a dedicated Transit VPC to directly route multi-VPC traffic over a VPN connection across multiple regions.
Set up a DNS active-active failover using latency based routing policy that resolves to an ELB. Configure the 'Evaluate Target Health' attribute to Yes.
(Correct)
Consolidate all of your VPCs across multiple regions into a single Private Hosted Zone using Route 53.
Explanation
DNS active-active failover redirects traffic intended for unhealthy instances to healthy, active ones. Combined with latency-based routing, customers accessing your web servers are balanced across the available healthy instances based on latency.
Hence, the correct answer is: Set up a DNS active-active failover using latency based routing policy that resolves to an ELB. Configure the 'Evaluate Target Health' attribute to Yes.
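A minimal boto3 sketch of such latency-based alias records is shown below; the hosted zone ID, domain name, load balancer DNS names, and the load balancers' canonical hosted zone IDs are placeholders for illustration.

```python
import boto3

route53 = boto3.client("route53")

def upsert_latency_record(region, lb_dns_name, lb_hosted_zone_id):
    """Create or update a latency-based alias record that resolves to the Region's ELB."""
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",                       # placeholder hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": f"app-{region}",
                    "Region": region,                   # latency-based routing
                    "AliasTarget": {
                        "HostedZoneId": lb_hosted_zone_id,  # the ELB's canonical hosted zone ID
                        "DNSName": lb_dns_name,
                        "EvaluateTargetHealth": True,       # 'Evaluate Target Health' = Yes
                    },
                },
            }]
        },
    )

upsert_latency_record("us-east-1", "app-prod-1.us-east-1.elb.amazonaws.com", "Z0ELB11111EXAMPLE")
upsert_latency_record("eu-west-1", "app-prod-2.eu-west-1.elb.amazonaws.com", "Z0ELB22222EXAMPLE")
```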
The option that says: Consolidate all of your VPCs across multiple regions into a single Private Hosted Zone using Route 53 is incorrect because private hosted zones are primarily used to specify how you want to route traffic in a single VPC or a group of VPCs, instead of going through the public Internet. This does not provide an active-active failover.
The option that says: Enable Domain Name System Security Extensions (DNSSEC) in your domain. Configure Route 53 to automatically failover the traffic to a secondary group of healthy resources on standby. Configure the 'Evaluate Target Health' attribute to No is incorrect because DNSSEC is primarily used to protect your domain from DNS spoofing or man-in-the-middle attacks. If you set Evaluate Target Health to No, Route 53 continues to route traffic to the records that an alias record refers to even if health checks for those records are failing. Thus, this configuration will lead to unavailability issues on the application.
The option that says: Create a dedicated Transit VPC to directly route multi-VPC traffic over a VPN connection across multiple regions is incorrect because a Transit VPC is not suitable for providing failover routing for your resources. This setup is more suitable for scenarios where you are designing Multiple-VPC VPN connection sharing.
References:
Check out this Amazon Route 53 Cheat Sheet:
Question 10: Skipped
A company runs a live flight tracking service hosted on the AWS cloud. The application gets updated every 10 minutes with the latest flight information from every airline. The tracking website has a global audience and uses an Auto Scaling group behind an Elastic Load Balancer and an Amazon RDS database. A simple web interface is hosted as static content on an Amazon S3 bucket. The Auto Scaling group is set to trigger a scale-up event at 90% CPU utilization. The average load time of the web page is around 7 seconds but the management wants to bring it down to less than 3 seconds.
Which combination of options will make the page load time faster in the MOST cost-effective way? (Select TWO.)
Add a caching layer using Amazon ElastiCache Service to be used for storing sessions and frequent DB queries.
(Correct)
Create a second installation in another region, and utilize Amazon Route 53's latency-based routing feature to direct requests to the appropriate region.
Scale more frequently by setting the scale up trigger of the Auto Scaling group to 30%.
Replace your existing Auto Scaling group with the AWS Systems Manager State Manager which provides a more effective way to manage and scale your EC2 instances.
Have CloudFront enable caching of re-usable content from your website.
(Correct)
Explanation
You can use a content delivery network (CDN) like Amazon CloudFront to improve the performance of your website by securely delivering data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. To improve performance, you can simply configure your website’s traffic to be delivered over CloudFront’s globally distributed edge network by setting up a CloudFront distribution. In addition, CloudFront offers a variety of optimization options.
Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Both Redis and Memcached are in-memory, open-source data stores. Memcached, a high-performance distributed memory cache service, is designed for simplicity while Redis offers a rich set of features that make it effective for a wide range of use cases.
In this scenario, you can improve the page load times of your application by using a combination of Amazon ElastiCache, CloudFront, or alternatively, an upgraded RDS instance to increase the read capacity.
The option that says: Add a caching layer using Amazon ElastiCache Service to be used for storing sessions and frequent DB queries is correct. This uses ElastiCache for storing sessions as well as frequent DB queries hence, reducing the load on the database. This should help increase the read performance.
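As a rough sketch of that caching layer, the snippet below applies the cache-aside pattern with the redis-py client; the cluster endpoint, key format, TTL, and the db_query helper are all hypothetical.

```python
import json
import redis  # redis-py client; assumes an ElastiCache for Redis endpoint is reachable from the web tier

cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

def get_flight_board(route_id, db_query):
    """Cache-aside lookup: serve from Redis when possible, fall back to the database."""
    key = f"flight-board:{route_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit -- no database round trip
    result = db_query(route_id)            # hypothetical, expensive RDS query
    cache.setex(key, 60, json.dumps(result))  # keep the result for 60 seconds
    return result
```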
The option that says: Have CloudFront enable caching of re-usable content from your website is correct. This uses CloudFront, which is a network of globally distributed edge locations that caches the content and improves the user experience.
The option that says: Scale more frequently by setting the scale up trigger of the Auto Scaling group to 30% is incorrect. This will increase the number of web server instances but will not reduce the load on the database and hence, will not improve the read performance.
The option that says: Create a second installation in another region, and utilize Amazon Route 53's latency-based routing feature to direct requests to the appropriate region is incorrect. This will not improve read performance. In fact, this setup would add to the cost.
The option that says: Replace your existing Auto Scaling group with the AWS Systems Manager State Manager which provides a more effective way to manage and scale your EC2 instances is incorrect. The AWS Systems Manager State Manager is a secure and scalable service that automates the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define. It is not a suitable solution to improve the load times of your web application. This is primarily used to control the configuration detail of your instances in your VPC as well as your servers located in your on-premises data centers, such as server configurations, anti-virus definitions, firewall settings, and many others.
References:
Check out these Amazon CloudFront, Amazon ElastiCache, and Amazon RDS Cheat Sheets:
Question 12: Skipped
A company is developing an application that will allow biologists from around the world to submit plant genomic information and share it with other biologists. The application will expect several submissions every minute and will push about 8KB of genomic data every second to the data platform. This data needs to be processed and analyzed to provide meaningful information back to the biologists. The following are the requirements for the data platform:
- The inbound genomic data must be processed near-real-time and provide analytics.
- The received data must be stored in a flexible, parallel, and durable manner.
- After processing the data, the resulting output must be delivered to a data warehouse.
Which of the following options should the Solutions Architect implement to meet the company's requirements?
Create a delivery stream on Amazon Kinesis Data Firehose to deliver the inbound data to an Amazon S3 bucket. Use a Kinesis client to analyze the stored data. For data warehousing, register the S3 bucket on AWS Lake Formation as a data lake. Use Amazon Quicksight to query the data lake.
Create a stream in Amazon Kinesis Data Streams to collect the inbound data. Use a Kinesis client to analyze the genomic data. After processing, use Amazon EMR to save the results to an Amazon Redshift cluster.
(Correct)
Leverage Amazon API Gateway to accept the inbound data and send it to an Amazon SQS queue. Write an AWS Lambda function that will process the messages on the SQS queue. After processing, use Amazon EMR to save the results to an Amazon Redshift cluster.
Store all inbound data files directly to an Amazon S3 bucket. Use Amazon Kinesis with Amazon SQS to analyze the data stored in the S3 bucket. After processing, send the results to an Amazon Redshift cluster.
Explanation
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.
You can make your streaming data available to multiple real-time analytics applications, to Amazon S3, or to AWS Lambda within 70 milliseconds of the data being collected. KDS is highly durable as it performs synchronous replication of your streaming data across three Availability Zones in an AWS Region and stores that data for up to 365 days to provide multiple layers of protection from data loss.
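As an illustration of the ingestion side, a producer could push each roughly 8 KB submission into the stream as shown below; the stream name and payload shape are assumptions.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def submit_genomic_record(stream_name, submission_id, payload):
    """Push one genomic submission (~8 KB) into the data stream."""
    kinesis.put_record(
        StreamName=stream_name,          # hypothetical stream, e.g. "genomic-submissions"
        Data=json.dumps(payload).encode("utf-8"),
        PartitionKey=submission_id,      # spreads submissions across shards
    )
```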
You can have your Kinesis Applications run real-time analytics on high-frequency event data such as sensor data collected by Kinesis Data Streams, which enables you to gain insights from your data at a frequency of minutes instead of hours or days.
One of the methods of developing custom consumer applications that can process data from KDS data streams is to use the Kinesis Client Library (KCL). KCL helps you consume and process data from a Kinesis data stream by taking care of many of the complex tasks associated with distributed computing. These include load balancing across multiple consumer application instances, responding to consumer application instance failures, checkpointing processed records, and reacting to resharding.
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. An Amazon Redshift data warehouse is a collection of computing resources called nodes, which are organized into a group called a cluster. Each cluster runs an Amazon Redshift engine and contains one or more databases. With Redshift, you can query and combine exabytes of structured and semi-structured data across your data warehouse, operational database, and data lake using standard SQL.
Therefore, the correct answer is: Create a stream in Amazon Kinesis Data Streams to collect the inbound data. Use a Kinesis client to analyze the genomic data. After processing, use Amazon EMR to save the results to an Amazon Redshift cluster. Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. Your data can be made available to real-time analytics applications and then saved to Amazon Redshift for data warehousing.
The option that says: Create a delivery stream on Amazon Kinesis Data Firehose to deliver the inbound data to an Amazon S3 bucket. Use a Kinesis client to analyze the stored data. For data warehousing, register the S3 bucket on AWS Lake Formation as a data lake. Use Amazon Quicksight to query the data lake is incorrect. You can't use Amazon Quicksight to query data lakes from AWS Lake Formation. AWS recommends Amazon Redshift as a data warehouse solution.
The option that says: Store all inbound data files directly to an Amazon S3 bucket. Use Amazon Kinesis with Amazon SQS to analyze the data stored in the S3 bucket. After processing, send the results to an Amazon Redshift cluster is incorrect. Storing files to S3 first and then sending a message to an SQS queue takes some time. This solution may not meet the near-real-time requirement. You will also have to write your own Lambda function which increases operational overhead.
The option that says: Leverage Amazon API Gateway to accept the inbound data and send it to an Amazon SQS queue. Write an AWS Lambda function that will process the messages on the SQS queue. After processing, use Amazon EMR to save the results to an Amazon Redshift cluster is incorrect. It is possible to integrate API Gateway with Amazon SQS. However, the messages will not be processed in an orderly manner required for near-real-time analysis.
References:
Check out these Amazon Kinesis and Amazon Redshift Cheat Sheets:
Question 13: Skipped
A company has a web service portal on which users can perform read and write operations to its semi-structured data. The company wants to refactor the current application and leverage AWS-managed services to have more scalability and higher availability. To ensure optimal user experience, the service is expected to respond to short but significant system load spikes. The service must be highly available and fault-tolerant in the event of a regional AWS failure.
Which of the following options is the suitable solution to meet the company's requirements?
Use DocumentDB to store the semi-structured data. Create an edge-optimized Amazon API Gateway and AWS Lambda-based web service, and set it as the origin of a global Amazon CloudFront distribution. Add the company’s domain as an alternate name on the CloudFront distribution. Create an Amazon Route 53 Alias record pointed to the CloudFront distribution.
Create a primary Amazon S3 bucket to store the semi-structured data. Enable S3 Cross-Region Replication to sync objects to the backup region. Create an Amazon API Gateway and AWS Lambda-based web service on each region, and set them as the origin for the two Amazon CloudFront distributions. Add the company’s domain as an alternate name on the CloudFront distributions. Create Amazon Route 53 Alias records pointed to each distribution using a failover routing policy.
Create an Amazon DynamoDB global table to store the semi-structured data on two Regions. Use on-demand capacity mode to allow DynamoDB scaling. Run the web service on an Auto Scaling Amazon ECS Fargate cluster on each region. Place each Fargate cluster behind their own Application Load Balancer (ALB). Create Amazon Route 53 Alias records pointed to each ALB using a latency routing policy with health checks enabled.
(Correct)
Create an Amazon Aurora global database to store the semi-structured data in two Regions. Configure Auto Scaling replicas on both regions. Run the web service on an Auto Scaling group of Amazon EC2 instances on both regions using the user data script to download the application code. Place each Auto Scaling group behind their own Application Load Balancer (ALB). Create a single Amazon Route 53 Alias record pointed to each ALB using a multi-value answer routing policy with health checks enabled.
Explanation
Amazon DynamoDB is a fast, fully-managed NoSQL database service that makes it simple and cost effective to store and retrieve any amount of data, and serve any level of request traffic. DynamoDB helps offload the administrative burden of operating and scaling a highly-available distributed database cluster. This storage alternative meets the latency and throughput requirements of highly demanding applications by providing single-digit millisecond latency and predictable performance with seamless throughput and storage scalability.
DynamoDB stores structured data in tables, indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. DynamoDB supports three data types (number, string, and binary), in both scalar and multi-valued sets. It supports document stores such as JSON, XML, or HTML in these data types. Tables do not have a fixed schema, so each data item can have a different number of attributes. The primary key can either be a single-attribute hash key or a composite hash-range key.
Global tables build on the global Amazon DynamoDB footprint to provide you with a fully managed, multi-region, and multi-active database that delivers fast, local, read and write performance for massively scaled, global applications. Global tables replicate your DynamoDB tables automatically across your choice of AWS Regions.
DynamoDB global tables are ideal for massively scaled applications with globally dispersed users. In such an environment, users expect very fast application performance. Global tables provide automatic multi-active replication to AWS Regions worldwide. They enable you to deliver low-latency data access to your users no matter where they are located.
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. Fargate runs each task or pod in its own kernel providing the tasks and pods their own isolated compute environment.
Therefore, the correct answer is: Create an Amazon DynamoDB global table to store the semi-structured data on two Regions. Use on-demand capacity mode to allow DynamoDB scaling. Run the web service on an Auto Scaling Amazon ECS Fargate cluster on each region. Place each Fargate cluster behind their own Application Load Balancer (ALB). Create Amazon Route 53 Alias records pointed to each ALB using a latency routing policy with health checks enabled. A DynamoDB global table offers a multi-active database across multiple Regions. AWS Fargate clusters scale up or down significantly faster than EC2 instances because they are only containers and can be provisioned quickly. Latency routing policy with health checks enabled ensures that users are sent to the fastest healthy region based on their location.
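A minimal sketch of setting up such a table with boto3 follows; the table name, key schema, and Regions are placeholders. Global tables use DynamoDB Streams for replication, so the stream is enabled at creation.

```python
import boto3

# Run against the Region that should hold the first replica.
ddb = boto3.client("dynamodb", region_name="us-east-1")

ddb.create_table(
    TableName="PortalDocuments",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",        # on-demand capacity mode
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
ddb.get_waiter("table_exists").wait(TableName="PortalDocuments")

# Add a replica in a second Region, turning the table into a global table (version 2019.11.21).
ddb.update_table(
    TableName="PortalDocuments",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```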
The option that says: Use DocumentDB to store the semi-structured data. Create an edge-optimized Amazon API Gateway and AWS Lambda-based web service, and set it as the origin of a global Amazon CloudFront distribution. Add the company’s domain as an alternate name on the CloudFront distribution. Create an Amazon Route 53 Alias record pointed to the CloudFront distribution is incorrect. Amazon DocumentDB does not offer native cross-Region replication or multi-Region operation. You can only automate snapshot copies to another Region, which can take some time to restore in the event of a regional failure.
The option that says: Create a primary Amazon S3 bucket to store the semi-structured data. Enable S3 Cross-Region Replication to sync objects to the backup region. Create an Amazon API Gateway and AWS Lambda-based web service on each region, and set them as the origin for the two Amazon CloudFront distributions. Add the company’s domain as an alternate name on the CloudFront distributions. Create Amazon Route 53 Alias records pointed to each distribution using a failover routing policy is incorrect. Although you can store structured and unstructured data, there is a delay of up to 15 minutes in Cross-Region Replication. In the event of a failover, some data might not have been replicated yet on the secondary region.
The option that says: Create an Amazon Aurora global database to store the semi-structured data in two Regions. Configure Auto Scaling replicas on both regions. Run the web service on an Auto Scaling group of Amazon EC2 instances on both regions using the user data script to download the application code. Place each Auto Scaling group behind their own Application Load Balancer (ALB). Create a single Amazon Route 53 Alias record pointed to each ALB using a multi-value answer routing policy with health checks enabled is incorrect. Scaling EC2 instances can take a few minutes because the EBS volumes need to be provisioned and the OS needs to load along with the user data script. This scaling time might not be quick enough for the expected short but significant spikes in system load.
References:
Check out these Amazon DynamoDB and AWS Fargate Cheat Sheets:
Question 14: Skipped
A business news portal is visited by thousands of readers each day to check on the latest hot topics in the world of business and technology. The news portal runs on a fleet of Spot EC2 instances behind an Application Load Balancer (ALB). Readers can also submit their comments in every article. Currently, the system's database is running on an on-premises data center, and the CTO is concerned that the content delivery time is not meeting company objectives. The portal's page load time is of utmost importance for the company to maintain its daily visitors.
Which of the following options would allow the solutions architect to quickly and cost-effectively modify the current infrastructure to reduce latency for their customers?
Create a CloudFront web distribution to speed up the delivery of data to their readers around the globe. Migrate the entire portal to an S3 bucket and then enable static web hosting. Set the S3 bucket as the origin of your CloudFront distribution.
Add an in-memory datastore using Amazon ElastiCache for Redis to reduce the burden on the database. Enable Redis replication to scale database reads and to have highly available clusters.
(Correct)
Migrate your database on-premises to Amazon Aurora using the AWS Database Migration Service (DMS) and AWS Schema Conversion Tool (SCT). Create Aurora Replicas across Availability Zones and reconfigure the web servers to query from the Aurora database instead.
Replace the on-premises database of the news portal with a fast, scalable full-text search engine using Amazon ES by setting up an ELK stack (Elasticsearch, Logstash, and Kibana). Use a CloudFront web distribution to speed up the delivery of data to your users across the globe.
Explanation
The success of the website and business is significantly affected by the speed at which you deliver content. Even the most optimized database query or remote API call is going to be noticeably slower than retrieving a flat key from an in-memory cache.
The primary purpose of an in-memory key-value store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Most data stores have areas of data that are frequently accessed but seldom updated. Additionally, querying a database is always slower and more expensive than locating a key in a key-value pair cache. By caching such query results, you pay the price of the query once and then are able to quickly retrieve the data multiple times without having to re-execute the query.
With the cache inside the VPC along with the web servers and application servers, the application doesn’t have to constantly go from AWS to the local data center. This change makes the physical distance between servers irrelevant. The new architecture should greatly improve the customer experience.
Therefore, the correct answer is: Add an in-memory datastore using Amazon ElastiCache for Redis to reduce the burden on the database. Enable Redis replication to scale database reads and to have highly available clusters. The main issue in this scenario is latency to the backend database. This solution dramatically reduces data retrieval latency. It also scales request volume considerably, because Amazon ElastiCache can deliver extremely high request rates, measured at over 20 million per second.
The option that says: Migrate your database on-premises to Amazon Aurora using the AWS Database Migration Service (DMS) and AWS Schema Conversion Tool (SCT). Create Aurora Replicas across Availability Zones and reconfigure the web servers to query from the Aurora database instead is incorrect. There is no guarantee that the database engine currently being used by the news portal is supported by Amazon Aurora. It is also important to take note that the question asks for a quick and cost-effective solution. Migrating the database may take a long amount of time, and the number of requests being sent each day can rack up costs.
The option that says: Replace the on-premises database of the news portal with a fast, scalable full-text search engine using Amazon ES by setting up an ELK stack (Elasticsearch, Logstash, and Kibana). Use a CloudFront web distribution to speed up the delivery of data to your users across the globe is incorrect. Elasticsearch is commonly used for log analytics, full-text search, security intelligence, business analytics, and operational intelligence use cases but not as a full-fledged database. A search engine could possibly be beneficial to the news portal as an additional layer but not as its sole data source. This solution might provide some advantages when they are both integrated together.
The option that says: Create a CloudFront web distribution to speed up the delivery of data to their readers around the globe. Migrate the entire portal to an S3 bucket and then enable static web hosting. Set the S3 bucket as the origin of your CloudFront distribution is incorrect. It is stated in the scenario that the readers can also submit their comments in every article, which means that the news portal is a dynamic website and not static. Hence, using S3 is not suitable for this scenario which is why this option is incorrect, even though it mentions the use of CloudFront web distribution.
References:
Check out this Amazon ElastiCache Cheat Sheet:
Question 16: Skipped
A travel booking company runs its main web application on the AWS cloud. Its trip planner website provides timetables, travel alerts, and other public transportation information for trains, buses, ferries, and trams. The front-end tier is composed of an ALB in front of an Auto Scaling group of Amazon EC2 instances deployed across 3 Availability Zones and a Multi-AZ RDS for its database tier. When there are sporting events and popular concerts to be held in a city, the usage of the trip planner application spikes which causes the application servers to reach utilization of over 90%. The solutions architect must ensure that the website can quickly recover in the event that one of its Availability Zones failed during its peak usage.
Which of the following is the most cost-effective architectural design that should be implemented for this website to maintain high availability?
To have the most cost-effective architecture, replace all of the Reserved and On-Demand EC2 instances with Spot instances across all Availability Zones. Configure an Auto Scaling group in one of the AZs for scalability.
Increase the capacity and scaling thresholds of the Auto Scaling group to allow the application servers to scale up across all Availability Zones, which will lower the aggregate utilization of the EC2 instances. Use Reserved Instances to handle the steady-state load and a combination of On-Demand and Spot Instances to process the peak load. When the peak usage is over, scale down the number of the On-Demand and Spot instances.
(Correct)
Deploy one On-Demand EC2 instance and two Spot EC2 Instances in each of the 3 Availability Zones. In case that one Availability Zone fails, the remaining two Availability Zones can handle the peak load.
Deploy six Reserved and Spot EC2 Instances in each of the 3 Availability Zones. In this way, the remaining two Availability Zones can handle the load left behind by the Availability Zone that went down.
Explanation
Remember that Spot Instances are the most cost-effective type of instances that you can choose. However, this is not suitable for applications with steady state usage.
Amazon EC2 Spot instances allow you to request spare Amazon EC2 computing capacity for up to 90% off the On-Demand price. Spot instances are recommended for:
- Applications that have flexible start and end times
- Applications that are only feasible at very low compute prices
- Users with urgent computing needs for large amounts of additional capacity
On-Demand instances are recommended for:
- Users that prefer the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment
- Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
- Applications being developed or tested on Amazon EC2 for the first time
Reserved Instances are recommended for:
- Applications with steady state usage
- Applications that may require reserved capacity
- Customers that can commit to using EC2 over a 1 or 3 year term to reduce their total computing costs
The option that says: Increase the capacity and scaling thresholds of the Auto Scaling group to allow the application servers to scale up across all Availability Zones, which will lower the aggregate utilization of the EC2 instances. Use Reserved Instances to handle the steady-state load and a combination of On-Demand and Spot Instances to process the peak load. When the peak usage is over, scale down the number of the On-Demand and Spot instances is correct because by using Auto Scaling, you allow the application servers to scale up across all Availability Zones and handle the additional load. The combination of On-Demand and Spot Instances to process the peak load is a cost-effective solution because when the peak usage is over, you can scale down the number of your instances to save costs.
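One way to express that mix is an Auto Scaling group with a mixed instances policy, sketched below with boto3; the launch template, subnets, and instance types are hypothetical, and Reserved Instance discounts would simply apply to the On-Demand base capacity.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="trip-planner-web",
    MinSize=3,
    MaxSize=12,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333",  # 3 AZs
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "trip-planner-web",
                "Version": "$Latest",
            },
            "Overrides": [{"InstanceType": "m5.large"}, {"InstanceType": "m5a.large"}],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 3,                  # steady-state load (covered by RIs)
            "OnDemandPercentageAboveBaseCapacity": 30,  # peak load: 30% On-Demand, 70% Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```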
The option that says: Deploy six Reserved and Spot EC2 Instances in each of the 3 Availability Zones. In this way, the remaining two Availability Zones can handle the load left behind by the Availability Zone that went down is incorrect because having 6 Reserved Instances is still costly even if you add a Spot instance in each Availability Zone. You should also use an Auto Scaling group in order to properly scale your compute resources in accordance with your incoming traffic.
The option that says: Deploy one On-Demand EC2 instance and two Spot EC2 Instances in each of the 3 Availability Zones. In case that one Availability Zone fails, the remaining two Availability Zones can handle the peak load is incorrect because although this is a cost-effective solution, it doesn't use Auto Scaling and hence, the availability of the website is at risk. If the On-Demand instance fails then the Availability Zone is only left with 2 Spot instances, which would not be able to handle the steady state usage.
The option that says: To have the most cost-effective architecture, replace all of the Reserved and On-Demand EC2 instances with Spot instances across all Availability Zones. Configure an Auto Scaling group in one of the AZs for scalability is incorrect because although this is the most cost-effective solution, it is also the most unstable one considering that a Spot instance can be interrupted and hence, the availability of the website is compromised.
References:
Check out this Amazon EC2 Cheat Sheet:
Question 17: Skipped
A cryptocurrency trading platform uses a Lambda function which has recently been integrated with DynamoDB Streams as its event source. Whenever there is a new deployment, the incoming traffic to the function must be shifted in two increments using CodeDeploy. Ten percent of the incoming traffic should be shifted to the new version and then the remaining 90 percent should be deployed five minutes later. It is also required to trace the event source that invoked the Lambda function including the downstream calls that the function made.
Which of the following options should the solutions architect implement to satisfy this requirement?
Configure a Canary deployment configuration for your Lambda function. Enable active tracing to integrate AWS X-Ray to your AWS Lambda function.
(Correct)
Configure a Rolling with additional batch deployment configuration for your Lambda function and use X-Ray to trace the event source and downstream calls.
Configure a Linear deployment configuration for your Lambda function and use AWS Config to trace the event source and downstream calls.
Configure an All-at-once deployment configuration for your Lambda function and use AWS Config to trace the event source and downstream calls.
Explanation
CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy.
When you deploy to an AWS Lambda compute platform, the deployment configuration specifies the way traffic is shifted to the new Lambda function versions in your application.
In a Canary deployment configuration, the traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.
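For illustration, the predefined configuration CodeDeployDefault.LambdaCanary10Percent5Minutes matches the scenario (10 percent first, the remaining 90 percent five minutes later). The boto3 sketch below assumes a CodeDeploy application already exists with the Lambda compute platform; the application, role, and function names are placeholders.

```python
import boto3

codedeploy = boto3.client("codedeploy")
awslambda = boto3.client("lambda")

# Deployment group that shifts 10% of traffic first, then the remaining 90% after 5 minutes.
codedeploy.create_deployment_group(
    applicationName="trading-platform",                 # assumed Lambda compute platform application
    deploymentGroupName="stream-processor-dg",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployForLambdaRole",
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
)

# Enable active tracing so X-Ray records the event source and downstream calls.
awslambda.update_function_configuration(
    FunctionName="stream-processor",
    TracingConfig={"Mode": "Active"},
)
```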
Therefore, the correct answer is: Configure a Canary deployment configuration for your Lambda function. Enable active tracing to integrate AWS X-Ray to your AWS Lambda function.
The option that says: Configure an All-at-once deployment configuration for your Lambda function and using AWS Config to trace the event source and downstream calls is incorrect. If you use this deployment configuration, the traffic is shifted from the original Lambda function to the updated Lambda function version all at once. In addition, you can't use AWS Config to trace the event source and downstream calls. You have to use X-Ray instead and this can be done by simply enabling active tracing.
The option that says: Configure a Linear deployment configuration for your Lambda function and using AWS Config to trace the event source and downstream calls is incorrect. A Linear deployment configuration shifts the traffic in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.
The option that says: Configure a Rolling with additional batch deployment configuration for your Lambda function and using X-Ray to trace the event source and downstream calls is incorrect. Although X-Ray can be used to trace the event source and downstream calls, Rolling with additional batch is only applicable to Elastic Beanstalk and not to Lambda.
References:
Check out this AWS CodeDeploy Cheat Sheet:
Question 18: Skipped
A company has a gaming store platform hosted in its on-premises data center for a whole variety of digital games. The application just experienced downtime last week due to a large burst in web traffic caused by a year-end sale on almost all of the games. Due to the success of the previous promotion, the CEO has planned to do the same in a few weeks, which will drive similar unpredictable bursts in web traffic. The solutions architects are looking to find ways to quickly improve the infrastructure's ability to handle unexpected increases in traffic. The web application is currently made up of a 2-tier web tier which consists of a load balancer and several web app servers, as well as a database tier that hosts an Oracle database.
Which of the following infrastructure changes should the team implement to avoid any further incidences of downtime considering that the new announcement will be done in a few weeks?
Create an AMI that can be used to launch new EC2 web servers. Then create an Auto Scaling group which will use the AMI to scale the web tier. Finally, place an Application Load Balancer to distribute traffic between your on-premises servers and servers running in AWS.
Set up a CloudFront distribution to cache objects from a custom origin to offload traffic from your on-premises environment. Customize your object cache behavior, and choose a time-to-live that will determine how long objects will reside in the cache.
(Correct)
Set up an Amazon S3 bucket for website hosting. Migrate your DNS to Route 53 using zone import, and use DNS failover to failover to the hosted website in S3.
Migrate your environment to AWS by using AWS VM Import to quickly convert your web server into an AMI. Then set up an Auto Scaling group that uses the imported AMI. Also, create an RDS read replica and migrate the Oracle database to an RDS instance through replication.
Explanation
Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds. CloudFront is integrated with AWS – including physical locations that are directly connected to the AWS global infrastructure, as well as software that works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code close to your viewers. In this scenario, the major points of consideration are: your application may get unpredictable bursts of traffic, you need to improve the current infrastructure in the shortest period possible, and your web servers that are on-premises.
CloudFront caches content at edge locations for a period of time that you specify. If a visitor requests content that has been cached for longer than the expiration date, CloudFront checks the origin server to see if a newer version of the content is available. If a newer version is available, CloudFront copies the new version to the edge location. Changes that you make to the original content are replicated to edge locations as visitors request the content.
Since the time period at hand is short, instead of migrating the app to AWS, you need to consider different ways where the performance would improve without doing much modification to the existing infrastructure.
Therefore, the correct answer is: Set up a CloudFront distribution to cache objects from a custom origin to offload traffic from your on-premises environment. Customize your object cache behavior, and choose a time-to-live that will determine how long objects will reside in the cache. CloudFront is a highly scalable, highly available content delivery service, which can perform excellently even in the case of a sudden, unpredictable burst of traffic. Plus, the only change you need to make is to set the on-premises load balancer as the custom origin of the CloudFront distribution.
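For illustration, a minimal boto3 sketch of such a distribution using the legacy forwarded-values cache settings is shown below; the origin domain name and TTL values are placeholders.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical origin -- the on-premises load balancer's public DNS name.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Game store front - offload traffic from on-premises",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "onprem-lb",
                "DomainName": "store-lb.example.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "onprem-lb",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
            "DefaultTTL": 300,   # cache objects for 5 minutes unless the origin says otherwise
            "MaxTTL": 3600,
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
        },
    }
)
```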
The option that says: Create an AMI that can be used to launch new EC2 web servers. Then create an Auto Scaling group which will use the AMI to scale the web tier. Finally, place an Application Load Balancer to distribute traffic between your on-premises servers and servers running in AWS is incorrect. An ELB cannot load balance to your on-premises instances unless they are connected to your VPC through either a Direct Connect connection or a VPN.
The option that says: Migrate your environment to AWS by using AWS VM Import to quickly convert your web server into an AMI. Then set up an Auto Scaling group that uses the imported AMI. Also, create an RDS read replica and migrate the Oracle database to an RDS instance through replication is incorrect. You are supposed to improve the current situation in the shortest time possible. Migrating to AWS would be more time-consuming than simply setting up the CloudFront distribution.
The option that says: Set up an Amazon S3 bucket for website hosting. Migrate your DNS to Route 53 using zone import, and use DNS failover to failover to the hosted website in S3 is incorrect. You cannot host dynamic websites on an S3 bucket. Also, this option provides insufficient infrastructure setup details.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 20: Skipped
A leading aerospace engineering company is experiencing high growth and demand on their highly available and fault-tolerant cloud services platform that is hosted in AWS. The technical lead of your team has asked you to virtually extend two existing on-premises data centers into AWS cloud to support an online flight-tracking service that is used by a lot of airline companies. The online service heavily depends on existing, on-premises resources located in multiple data centers and static content that is served from an S3 bucket. To meet the requirement, you launched a dual-tunnel VPN connection between your CGW and VGW.
In this scenario, which component of your cloud architecture represents a potential single point of failure, which you should consider changing to make the solution more highly available?
Create another Customer Gateway in a different data center and set up another dual-tunnel VPN connection.(Correct)
Create another Virtual Gateway in a different AZ and create another dual-tunnel VPN connection.
Create a second Virtual Gateway in a different AZ and a Customer Gateway in a different data center. Create another dual-tunnel connection to ensure high-availability and fault-tolerance.
Set up a NAT Gateway in a different data center and set up another dual-tunnel VPN connection.
Explanation
In this question, you will easily get confused if you do not know the basics of VPC and other AWS fundamentals. You can eliminate the obviously wrong answers and then just choose between the remaining options.
Remember that only one virtual private gateway (VGW) can be attached to a VPC at a time.
A Site-to-Site VPN connection offers two VPN tunnels between a virtual private gateway or a transit gateway on the AWS side, and a customer gateway (which represents a VPN device) on the remote (on-premises) side. A virtual private gateway is the VPN concentrator on the Amazon side of the Site-to-Site VPN connection. You create a virtual private gateway and attach it to the VPC from which you want to create the Site-to-Site VPN connection.
The correct answer is: Create another Customer Gateway in a different data center and set up another dual-tunnel VPN connection. This will ensure high availability for your online flight-tracking service.
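A minimal boto3 sketch of adding that second customer gateway and VPN connection is shown below; the public IP, BGP ASN, and virtual private gateway ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical values -- the public IP of the VPN device in the second data center
# and the existing virtual private gateway attached to the VPC.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.20",
    BgpAsn=65001,
)

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId="vgw-0123456789abcdef0",
    Options={"StaticRoutesOnly": False},   # use BGP so routes fail over automatically
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```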
The option that says: Create another Virtual Gateway in a different AZ and create another dual-tunnel VPN connection is incorrect. There can only be one VGW attached to a VPC at a given time.
The option that says: Create a second Virtual Gateway in a different AZ and a Customer Gateway in a different data center. Create another dual-tunnel connection to ensure high-availability and fault-tolerance is incorrect. There can only be one VGW attached to a VPC at a given time.
The option that says: Set up a NAT Gateway in a different data center and set up another dual-tunnel VPN connection is incorrect. You don't need to use a NAT gateway in this situation. A NAT gateway is basically used to enable EC2 instances launched in a private subnet to access the Internet while blocking incoming public requests to the VPC.
References:
Check out this Amazon VPC Cheat Sheet:
Question 21: Skipped
A company hosts an internal web portal on a fleet of Amazon EC2 instances that allows access to confidential files stored in an encrypted Amazon S3 bucket. Because the files contain sensitive information, the company does not want any files to traverse the public Internet. Bucket access should be restricted to only allow the web portal’s EC2 instances. To comply with the requirements, the Solutions Architect created an Amazon S3 VPC endpoint and associated it with the web portal’s VPC.
Which of the following actions should the Solutions Architect take to fully comply with the company requirements?
Create a VPC endpoint policy that restricts access to the specific Amazon S3 bucket on the current region. Apply an Amazon S3 bucket policy that only allows access from the VPC private subnets. Update the VPC’s Network Access Control List (NACL) to deny other EC2 instances from accessing the gateway prefix list.
Create a VPC endpoint policy that restricts access to the specific Amazon S3 bucket. Apply an Amazon S3 bucket policy that only allows access from the VPC endpoint. Update the VPC’s Network Access Control List (NACL) to deny other EC2 instances from accessing the gateway prefix list.
Create a VPC endpoint policy that restricts access to the specific Amazon S3 bucket. Create an IAM role that grants access to the S3 bucket and attach it to the application EC2 instances. Apply an Amazon S3 bucket policy that only allows access from the VPC endpoint and those using the IAM role.
(Correct)
Apply an Amazon S3 bucket policy that includes the aws:SourceIp condition to deny all access except those coming from the application EC2 instances IP addresses. Update the route table for the VPC to ensure that the VPC endpoint is associated only with the application instances subnets.
Explanation
When you create an interface or gateway endpoint, you can attach an endpoint policy to it that controls access to the service to which you are connecting. Endpoint policies must be written in JSON format. Not all services support endpoint policies.
A VPC endpoint policy is an IAM resource policy that you attach to an endpoint when you create or modify the endpoint. If you do not attach a policy when you create an endpoint, we attach a default policy for you that allows full access to the service. If a service does not support endpoint policies, the endpoint allows full access to the service. An endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies). It is a separate policy for controlling access from the endpoint to the specified service.
You cannot attach more than one policy to an endpoint. However, you can modify the policy at any time. If you do modify a policy, it can take a few minutes for the changes to take effect.
Your endpoint policy can be like any IAM policy; however, take note of the following:
- Your policy must contain a Principal element.
- The size of an endpoint policy cannot exceed 20,480 characters.
When you create or modify an endpoint, you specify the VPC route tables that are used to access the service via the endpoint. A route is automatically added to each of the route tables with a destination that specifies the AWS prefix list ID of the service (pl-xxxxxxxx) and a target with the endpoint ID (vpce-xxxxxxxx).
The following example bucket policy blocks traffic to the bucket unless the request is from specified VPC endpoints (aws:sourceVpce):
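A minimal sketch of applying such a policy with boto3, assuming a hypothetical bucket name and VPC endpoint ID:

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "confidential-portal-files"        # hypothetical bucket name
VPCE_ID = "vpce-0a1b2c3d4e5f67890"          # hypothetical S3 gateway endpoint ID

# Deny every S3 action on the bucket unless the request arrives
# through the specified VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessUnlessFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": VPCE_ID}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

The IAM role attached to the application instances would then carry the explicit Allow for the required S3 actions on the same bucket, as noted below.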
- To use this policy with the aws:sourceVpce condition, you must attach a VPC endpoint for Amazon S3. The VPC endpoint must be attached to the route table of the EC2 instance's subnet and must be in the same AWS Region as the bucket.
- To allow users to perform S3 actions on the bucket from the VPC endpoints or IP addresses, you must explicitly allow the user-level permissions. You can explicitly allow user-level permissions in either an AWS Identity and Access Management (IAM) policy or another statement in the bucket policy.
Therefore, the correct answer is: Create a VPC endpoint policy that restricts access to the specific Amazon S3 bucket. Create an IAM role that grants access to the S3 bucket and attach it to the application EC2 instances. Apply an Amazon S3 bucket policy that only allows access from the VPC endpoint and those using the IAM role. This ensures that all traffic to the S3 bucket comes through the VPC endpoint and that the application EC2 instances are the only ones allowed to access it.
The option that says: Create a VPC endpoint policy that restricts access to the specific Amazon S3 bucket. Apply an Amazon S3 bucket policy that only allows access from the VPC endpoint. Update the VPC’s Network Access Control List (NACL) to deny other EC2 instances from accessing the gateway prefix list is incorrect. The gateway prefix list ID should be added to the route table in the VPC to allow access for the specific subnet, and not on the NACL.
The option that says: Apply an Amazon S3 bucket policy that includes the aws:SourceIp condition to deny all access except those coming from the application EC2 instances' IP addresses. Update the route table for the VPC to ensure that the VPC endpoint is associated only with the application instances' subnets is incorrect. The aws:SourceIp condition is used for specifying public IP addresses, and you cannot use the aws:SourceIp condition in your bucket policies for Amazon S3 requests that come through a VPC endpoint. When you associate a VPC endpoint with your VPC, the route tables are automatically updated to include the AWS prefix list ID.
The option that says: Create a VPC endpoint policy that restricts access to the specific Amazon S3 bucket on the current region. Apply an Amazon S3 bucket policy that only allows access from the VPC private subnets. Update the VPC’s Network Access Control List (NACL) to deny other EC2 instances from accessing the gateway prefix list is incorrect. You cannot input subnet IDs as restrictions on the bucket policies. You should use VPC endpoint or source IPs instead.
References:
Check out these Amazon VPC and Amazon S3 Cheat Sheets:
Question 28: Skipped
A financial company is building a new online document portal that allows its employees and developers to upload yearly and bi-annual corporate earnings report files to a private S3 bucket in which other confidential corporate files will also be stored. You are working as a Solutions Architect and you were instructed to create the private S3 bucket as well as the IAM users for the application developers so they can start their work. You assigned the required IAM policies to the developers that allow them read and write access to the S3 bucket. After a few weeks, they completed the new online portal and hosted it on a fleet of Spot EC2 instances. One of the application developers created a pre-signed URL that points to the correct S3 bucket and, after a few tests, successfully uploaded files from his laptop using the generated URL. He then made the necessary code changes to the online portal to generate the pre-signed URL to upload the files to S3. However, after a few days, the development team complained that they can no longer upload files using the online portal. Which of the following options are valid reasons for this behavior? (Select TWO.)
The application developers do not have access to either read or upload objects to the S3 bucket.
The expiration date of the pre-signed URL is incorrectly set to expire too quickly and thus, may have already expired when they used it.
(Correct)
The ACL of the S3 bucket blocks the online portal and prevents the developers from uploading any files.
The required AWS credentials in the ~/.aws/credentials configuration file located on the EC2 instances of the online portal were misconfigured
(Correct)
There was a recent change in the S3 bucket that allows object versioning which invalidates all presigned URLs.
Explanation
In this scenario, the main issue is that the online portal cannot upload files to the S3 bucket even though the application developers can successfully upload files from their laptops. Take note that the online portal is deployed to a group of EC2 instances, and it was not mentioned that an IAM role was attached to these instances or that security credentials were added to the ~/.aws/credentials configuration file.
With all of this information in mind, we can deduce that the online portal is generating pre-signed URLs with an overly short expiration time, which causes the issue. In addition, there might be no security credentials configured on the EC2 instances that host the online portal, considering that this is not mentioned in the scenario. Remember that valid credentials are required to properly generate pre-signed URLs.
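A minimal boto3 sketch showing both factors, with a hypothetical bucket and key: the expiration is fixed at generation time via ExpiresIn, and the URL is signed with whatever credentials boto3 resolves on the EC2 instance, so missing or wrong credentials produce URLs that S3 rejects.

```python
import boto3

# boto3 resolves credentials from the instance profile, environment
# variables, or ~/.aws/credentials. If none are configured correctly on
# the portal's EC2 instances, the generated URL will not be accepted by S3.
s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "corporate-earnings-reports", "Key": "2024/h1-report.pdf"},
    ExpiresIn=900,   # URL is only valid for 15 minutes after generation
)

# The caller then uploads with a plain HTTP PUT before the URL expires, e.g.:
#   curl -X PUT --upload-file h1-report.pdf "<url>"
print(url)
```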
Therefore, the correct answers are:
- The expiration date of the pre-signed URL is incorrectly set to expire too quickly and thus, may have already expired when they used it.
- The required AWS credentials in the ~/.aws/credentials configuration file located on the EC2 instances of the online portal were misconfigured
The option that says: There was a recent change in the S3 bucket that allows object versioning which invalidates all presigned URLs is incorrect. Enabling object versioning in S3 will not hinder uploads that are done via a pre-signed URL.
The option that says: The application developers do not have access to either read or upload objects to the S3 bucket is incorrect. The developers were already granted read and write access to the bucket through the IAM policies you assigned.
The option that says: The ACL of the S3 bucket blocks the online portal and prevents the developers from uploading any files is incorrect because one of the developers already managed to successfully upload a file to the S3 bucket using a pre-signed URL. The S3 ACL of the bucket is not the issue here.
References:
Check out this Amazon S3 Cheat Sheet:
Question 29: Skipped
A company is running its main web service in a fleet of Amazon EC2 instances in the us-east-1 AWS Region. The EC2 instances are launched by an Auto Scaling group behind an Application Load Balancer (ALB). The EC2 instances are spread across multiple Availability Zones. The MySQL database is hosted on an Amazon EC2 instance in a private subnet. To improve the resiliency of the web service in case of a disaster, the Solutions Architect must design a data recovery strategy in another region using the available AWS services to lessen the operational overhead. The target RPO is less than a minute and the target RTO is less than 5 minutes. The Solutions Architect has started to provision the ALB and the Auto Scaling group on the us-west-2 region.
Which of the following steps should be implemented next to achieve the above requirements?
Migrate the database from the Amazon EC2 instance to an Amazon Aurora global database. Set the us-east-1 region as the primary database and the us-west-2 region as the secondary database. Configure Amazon Route 53 DNS entry with health checks and failover routing policy to the us-west-2 region.
(Correct)
Migrate the database from the Amazon EC2 instance to an Amazon RDS for MySQL instance. Enable Multi-AZ deployment for this database. Configure Amazon Route 53 DNS entry with failover routing policy to the us-west-2 region.
Migrate the database from the Amazon EC2 instance to an Amazon RDS for MySQL instance. Set the us-east-1 region database as the master and configure a cross-Region read replica to the us-west-2 region. Configure Amazon Route 53 DNS entry with health checks and failover routing policy to the us-west-2 region.
Create a snapshot of the current Amazon EC2 database instance and restore the snapshot to the us-west-2 region. Configure the new EC2 instance as MySQL standby database of the us-east-1 instance. Configure Amazon Route 53 DNS entry with failover routing policy to the us-west-2 region.
Explanation
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.
Critical workloads with a global footprint, such as financial, travel, or gaming applications, have strict availability requirements and may need to tolerate a region-wide outage. Traditionally this required difficult tradeoffs between performance, availability, cost, and data integrity. Global Database uses storage-based replication with typical latency of less than 1 second, using dedicated infrastructure that leaves your database fully available to serve application workloads. In the unlikely event of a regional degradation or outage, one of the secondary regions can be promoted to read and write capabilities in less than 1 minute.
Aurora Global Database lets you easily scale database reads across the world and place your applications close to your users. Your applications enjoy quick data access regardless of the number and location of secondary regions, with typical cross-region replication latencies below 1 second. If your primary region suffers a performance degradation or outage, you can promote one of the secondary regions to take read/write responsibilities. An Aurora cluster can recover in less than 1 minute even in the event of a complete regional outage. This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan.
Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. If you have multiple resources that perform the same function, you can configure DNS failover so that Route 53 will route your traffic from an unhealthy resource to a healthy resource. Each health check that you create can monitor one of the following:
- The health of a specified resource, such as a web server
- The status of other health checks
- The status of an Amazon CloudWatch alarm
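A minimal boto3 sketch of the Route 53 health check and failover alias records, assuming hypothetical hosted zone, domain, and load balancer values (the ALB canonical hosted zone IDs below are placeholders to replace with the real per-region values):

```python
import boto3

r53 = boto3.client("route53")

# Health check against the primary (us-east-1) endpoint.
hc = r53.create_health_check(
    CallerReference="primary-web-hc-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb.us-east-1.elb.amazonaws.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]

def failover_record(role, alb_dns, alb_zone_id, health_check_id=None):
    """Build an UPSERT change for a PRIMARY or SECONDARY failover alias record."""
    rrset = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": role.lower(),
        "Failover": role,                      # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,       # canonical hosted zone ID of the ALB
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        rrset["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": rrset}

r53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICALZONE",
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "primary-alb.us-east-1.elb.amazonaws.com",
                        "ZPLACEHOLDER1", hc["Id"]),
        failover_record("SECONDARY", "standby-alb.us-west-2.elb.amazonaws.com",
                        "ZPLACEHOLDER2"),
    ]},
)
```

With the Aurora global database handling data replication, this DNS failover layer is what shifts user traffic to the us-west-2 stack when the primary region becomes unhealthy.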
Therefore, the correct answer is: Migrate the database from the Amazon EC2 instance to an Amazon Aurora global database. Set the us-east-1 region as the primary database and the us-west-2 region as the secondary database. Configure Amazon Route 53 DNS entry with health checks and failover routing policy to the us-west-2 region.
The option that says: Migrate the database from the Amazon EC2 instance to an Amazon RDS for MySQL instance. Set the us-east-1 region database as the master and configure a cross-Region read replica to the us-west-2 region. Configure Amazon Route 53 DNS entry with health checks and failover routing policy to the us-west-2 region is incorrect. Although this is possible, there is no automatic way to promote the read replica on the backup region as the master database. You need to manually configure this, and when you do, the RDS instance will reboot. In this case, you might exceed the RPO of 1 minute and RTO of 5 minutes.
The option that says: Migrate the database from the Amazon EC2 instance to an Amazon RDS for MySQL instance. Enable Multi-AZ deployment for this database. Configure Amazon Route 53 DNS entry with failover routing policy to the us-west-2 region is incorrect. Multi-AZ deployment only protects you from outages in a single Availability Zone. It will not protect your database from regional outages.
The option that says: Create a snapshot of the current Amazon EC2 database instance and restore the snapshot to the us-west-2 region. Configure the new EC2 instance as MySQL standby database of the us-east-1 instance. Configure Amazon Route 53 DNS entry with failover routing policy to the us-west-2 region is incorrect. Although this is a possible solution, the requirement is to use the available AWS services for lower operational overhead. This requires extra management effort to set up, configure and manage the database on the EC2 instance, instead of using a managed Amazon RDS database. Moreover, it won't be able to satisfy the requirement of providing a 1-minute RPO and 5-minute RTO.
References:
Check out these Amazon Aurora Cheat Sheets:
Question 32: Skipped
A company stores confidential files on an Amazon S3 bucket. There was a recent production incident in the company in which the files that are stored in an S3 bucket were accidentally made public. This has caused data leakage that affected the company revenue. The management has instructed the solutions architect to come up with a solution to safeguard the S3 bucket. The solution should only allow private files to be uploaded to the S3 bucket and no file should have a public read or public write access.
Which of the following options should the solutions architect implement to meet the above requirements with MINIMAL effort?
Use the s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited managed rules in AWS Config to restrict all users from uploading publicly accessible and writable files to the S3 bucket.
Set up a policy that restricts all s3:PutObject actions of the user to have a private canned ACL only, which prohibits any public access to the uploaded objects.
Set up AWS Organizations and create a new Service Control Policy (SCP) that will deny public objects from being uploaded to the Amazon S3 bucket. Attach the SCP to the AWS account.
Enable Amazon S3 Block Public Access in the S3 bucket.
(Correct)
Explanation
Amazon S3 provides Block Public Access settings for buckets and accounts to help you manage public access to Amazon S3 resources. By default, new buckets and objects don't allow public access, but users can modify bucket policies or object permissions to allow public access. Amazon S3 Block Public Access provides settings that override these policies and permissions so that you can limit public access to these resources.
With Amazon S3 Block Public Access, account administrators and bucket owners can easily set up centralized controls to limit public access to their Amazon S3 resources that are enforced regardless of how the resources are created.
Therefore, the correct answer is: Enable Amazon S3 Block Public Access in the S3 bucket. It provides a way to meet the requirements with minimal effort.
When Amazon S3 receives a request to access a bucket or an object, it determines whether the bucket or the bucket owner's account has a Block Public Access setting. If there is an existing Block Public Access setting that prohibits the requested access, then Amazon S3 rejects the request. Amazon S3 Block Public Access provides four settings. These settings are independent and can be used in any combination, and each setting can be applied to a bucket or to an entire AWS account.
If a bucket has Block Public Access settings that are different from its owner's account, Amazon S3 applies the most restrictive combination of the bucket-level and account-level settings. Thus, when Amazon S3 evaluates whether an operation is prohibited by a Block Public Access setting, it rejects any request that would violate either a bucket-level or an account-level setting.
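A minimal boto3 sketch of enabling all four Block Public Access settings at the bucket level (the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for the bucket. These
# override any bucket policy or object ACL that would otherwise grant
# public read or write access.
s3.put_public_access_block(
    Bucket="confidential-company-files",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

The same four flags can also be applied account-wide through the S3 Control API if the company wants the protection to cover every bucket in the account.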
The option that says: Set up a policy that restricts all s3:PutObject actions of the user to have a private canned ACL only, which prohibits any public access to the uploaded objects is incorrect. Although this solution is possible, it entails a lot of effort to set up an IAM policy that restricts the user from uploading public objects. Using Amazon S3 Block Public Access is a more suitable solution for this scenario.
The option that says: Use the s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited managed rules in AWS Config to restrict all users from uploading publicly accessible and writable files to the S3 bucket is incorrect. This AWS Config solution will only notify you and your team about public objects in the S3 bucket. It cannot restrict any user from uploading public objects.
The option that says: Set up AWS Organizations and create a new Service Control Policy (SCP) that will deny public objects from being uploaded to the Amazon S3 bucket, then attaching the SCP to the AWS account is incorrect. Although you can satisfy the requirement using a service control policy (SCP), it still entails a lot of effort to implement. Remember that the scenario asks you to meet the requirements with minimal effort. Enabling the Amazon S3 Block Public Access in the S3 bucket is still the easiest one to implement. An SCP is primarily used to determine what services and actions can be delegated by administrators to the users and roles in the accounts that the SCP is applied to.
References:
Check out this Amazon S3 Cheat Sheet:
Question 38: Skipped
A company recently patched a vulnerability in its web application hosted on AWS. The solutions architect was tasked to improve the security of the company's AWS resources as well as secure the web applications from common web vulnerabilities and cyber attacks. One example is a Distributed Denial of Service (DDoS) attack, in which incoming traffic from many different locations simultaneously targets the company's web application and floods the network with bogus requests.
Which of the following options are recommended strategies for reducing DDoS attack surface and minimizing the blast radius in the cloud infrastructure? (Select TWO.)
Strictly implement Multi-Factor Authentication (MFA) in AWS. Use a combination of Amazon Fraud Detector, AWS Config, and Trusted Advisor to fortify your AWS resources.
Allow versioning in your S3 bucket. Ensure that the OS of all of your EC2 instances are properly patched using Systems Manager Patch Manager.
Configure the Network Access Control Lists (ACLs) to only allow the required ports to your network. Identify and block common DDoS request patterns to effectively mitigate a DDoS attack by using AWS WAF.
(Correct)
Add Elastic Load Balancing and Auto Scaling to your EC2 instances to improve availability and scalability. Use extra large EC2 instances to accommodate a surge of incoming traffic caused by a DDoS attack and utilize AWS Systems Manager Session Manager to filter all client-side web sessions to your instances.
Always add a security group that only allows certain ports and authorized servers and protects your origin servers by putting it behind a CloudFront distribution. Enable AWS Shield Advanced which provides enhanced DDoS attack detection and monitoring for application-layer traffic to your AWS resources.
(Correct)
Explanation
Another important consideration when architecting on AWS is to limit the opportunities that an attacker may have to target your application. For example, if you do not expect an end-user to interact directly with certain resources, you will want to make sure that those resources are not accessible from the Internet. Similarly, if you do not expect end-users or external applications to communicate with your application on certain ports or protocols, you will want to make sure that traffic is not accepted. This concept is known as attack surface reduction. Resources that are not exposed to the Internet are more difficult to attack, which limits the options an attacker might have to target the availability of your application.
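As a small illustration of attack surface reduction, here is a boto3 sketch that restricts a web tier's network ACL to HTTPS only; the NACL ID and rule numbers are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

NACL_ID = "acl-0123456789abcdef0"   # network ACL of the public web subnet

# Allow inbound HTTPS from anywhere; anything not explicitly allowed is
# dropped by the NACL's final deny (*) rule.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="6",                   # TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)

# Allow ephemeral ports outbound so responses to clients can leave the subnet.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```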
AWS Shield is a managed DDoS protection service that is available in two tiers: Standard and Advanced. AWS Shield Standard applies always-on detection and inline mitigation techniques, such as deterministic packet filtering and priority-based traffic shaping, to minimize application downtime and latency. AWS Shield Standard is included automatically and transparently to your Elastic Load Balancing load balancers, Amazon CloudFront distributions, and Amazon Route 53 resources at no additional cost. When you use these services that include AWS Shield Standard, you receive comprehensive availability protection against all known infrastructure layer attacks. Customers who have the technical expertise to manage their own monitoring and mitigation of application layer attacks can use AWS Shield together with AWS WAF rules to create a comprehensive DDoS attack mitigation strategy.
The following options are both correct as they are best practices for reducing the DDoS attack surface, which then limits the extent to which your application is exposed:
1. Always add a security group that only allows certain ports and authorized servers and protect your origin servers by putting it behind a CloudFront distribution. Enable AWS Shield Advanced which provides enhanced DDoS attack detection and monitoring for application-layer traffic to your AWS resources.
2. Configure the Network Access Control Lists (ACLs) to only allow the required ports to your network. Identify and block common DDoS request patterns to effectively mitigate a DDoS attack by using AWS WAF.
Using AWS Shield together with AWS WAF rules provide a comprehensive DDoS attack mitigation strategy.
The option that says: Add Elastic Load Balancing and Auto Scaling to your EC2 instances to improve availability and scalability. Use extra large EC2 instances to accommodate a surge of incoming traffic caused by a DDoS attack and utilize AWS Systems Manager Session Manager to filter all client-side web sessions to your instances is incorrect. Although it improves the scalability of your network in case of an ongoing DDoS attack, it simply absorbs the heavy application layer traffic and doesn't minimize the attack surface in your cloud architecture. In addition, AWS Systems Manager Session Manager is primarily used to provide secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys, but not to filter client-side web sessions.
The options that say: Allow versioning in your S3 bucket. Ensure that the OS of all of your EC2 instances are properly patched using Systems Manager Patch Manager and Strictly implement Multi-Factor Authentication (MFA) in AWS. Use a combination of Amazon Fraud Detector, AWS Config, and Trusted Advisor to fortify your AWS resources are incorrect because MFA, as well as the Versioning feature in S3, don't minimize the DDoS attack surface area. Although it is recommended that all of your instances are properly patched using the Systems Manager Patch Manager, it is still not enough to protect your cloud infrastructure against DDoS attacks. In addition, Amazon Fraud Detector is used in the detection of potentially fraudulent activities online. This service will not help you minimize the blast radius of a security attack.
References:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 39: Skipped
A leading commercial bank has a hybrid network architecture and is extensively using AWS for its day-to-day operations. The bank uses an Amazon S3 bucket to store sensitive bank records. It has versioning enabled and does not have any encryption. The new solutions architect for the company was asked to implement Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) for the Amazon S3 bucket to ensure data inside it is secured both at rest and in transit.
Which of the following options should the solutions architect implement to achieve the company requirements? (Select TWO.)
For presigned URLs, specify the algorithm using the x-amz-server-side-encryption-customer-key-MD5 request header
For presigned URLs, specify the algorithm using the x-amz-server-side-encryption-customer-algorithm request header
(Correct)
Use WSS (WebSocket Secure)
Only use the S3 console to upload and update objects with SSE-C encryption.
For Amazon S3 REST API calls, use the following HTTP request headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5
(Correct)
Explanation
Server-side encryption is about protecting data at rest. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys. With the encryption key you provide as part of your request, Amazon S3 manages both the encryption as it writes to disks, and decryption when you access your objects. Therefore, you don't need to maintain any code to perform data encryption and decryption. The only thing you do is manage the encryption keys you provide.
When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory. Amazon S3 will reject any requests made over HTTP when using SSE-C. For security reasons, it is recommended that you consider any key you send erroneously using HTTP to be compromised. You should discard the key and rotate as appropriate.
At the time of object creation (that is, when you are uploading a new object or making a copy of an existing object), you can specify if you want Amazon S3 to encrypt your data by adding the x-amz-server-side-encryption header to the request. Set the value of the header to the encryption algorithm AES256 that Amazon S3 supports. Amazon S3 confirms that your object is stored using server-side encryption by returning the response header x-amz-server-side-encryption.
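A minimal boto3 sketch of an SSE-C upload and download over HTTPS; boto3 sends the three x-amz-server-side-encryption-customer-* request headers for you. The bucket, key, and encryption key below are hypothetical.

```python
import os
import boto3

s3 = boto3.client("s3")  # boto3 uses HTTPS endpoints by default; SSE-C requires HTTPS

BUCKET = "bank-records-secure"
KEY = "records/2024/statement.pdf"
customer_key = os.urandom(32)   # 256-bit key that you manage yourself

# Upload with SSE-C. boto3 base64-encodes the key and adds the
# x-amz-server-side-encryption-customer-algorithm / -key / -key-MD5 headers.
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b"...file bytes...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# The same key must be supplied again to read the object back.
obj = s3.get_object(
    Bucket=BUCKET,
    Key=KEY,
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```

Presigned URLs for SSE-C objects can likewise be generated by passing SSECustomerAlgorithm in the Params of generate_presigned_url, with the client then supplying the matching customer-key headers on the request.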
Therefore, the following options are correct:
- For Amazon S3 REST API calls, you have to include the following HTTP request headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.
- For presigned URLs, you should specify the algorithm using the x-amz-server-side-encryption-customer-algorithm request header.
The option that says: Use WSS (WebSocket Secure) is incorrect because you have to use HTTPS, not WSS.
The option that says: For presigned URLs, specify the algorithm using the x-amz-server-side-encryption-customer-key-MD5 request header is incorrect. You should use the x-amz-server-side-encryption-customer-algorithm request header instead.
The option that says: Only use the S3 console to upload and update objects with SSE-C encryption is incorrect because the Amazon S3 console does not support SSE-C. You have to use the Amazon S3 REST API, the AWS CLI, or the AWS SDKs instead.
References:
Check out this Amazon S3 Cheat Sheet:
Question 42: Skipped
A company has a CRM application that uses a MySQL database hosted in Amazon RDS, and a central data warehouse that runs on Amazon Redshift. There is a batch analytics process that runs every day and reads data from RDS. During the execution of the batch analytics, the RDS utilization spikes up, which results in the CRM application becoming unresponsive. The top management dashboard must also be updated with new data right after the batch analytics processing completes. However, the dashboard is on another system running on-premises and cannot be modified directly. The only way to update the dashboard is to send an email with the new data to the dashboard system via SMTP, which will then be parsed and processed to update the dashboard with the latest data.
How would the solutions architect optimize this scenario to solve performance issues and automate the process as much as possible?
Consider using Amazon Redshift as the main OLTP transactional database instead of RDS for the batch analytics and use Redshift Spectrum to run SQL queries directly against Exabytes of structured or unstructured data in S3 without the need for unnecessary data movement. Utilize Amazon SNS to notify the on-premises system to update the dashboard.
Add read replicas for the RDS database to speed up batch analytics and use Amazon SNS to notify the on-premises system to update the dashboard.
(Correct)
Consider using Amazon Redshift instead of Amazon RDS as the database for the CRM application. Use Amazon SQS to notify the on-premises system to update the dashboard.
Add read replicas for the RDS database to speed up batch analytics and use Amazon SQS to notify the on-premises system to update the dashboard.
Explanation
In this scenario, the use of Amazon RDS Read Replicas is the best option. It provides enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle, and PostgreSQL as well as Amazon Aurora.
Therefore, the correct answer is: Adding read replicas for the RDS database to speed up batch analytics and using Amazon SNS to notify the on-premises system to update the dashboard. It uses Read Replicas which improves the read performance, and it uses SNS which automates the process of notifying the on-premises system to update the dashboard.
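A minimal boto3 sketch of the two pieces, with hypothetical identifiers: create a read replica for the analytics job to query, then publish to an SNS topic (whose email subscription points at the dashboard's SMTP inbox) once the batch run finishes.

```python
import boto3

rds = boto3.client("rds")
sns = boto3.client("sns")

# 1) Read replica that the nightly batch analytics will query instead of
#    the primary CRM database, so the spike no longer hits the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="crm-mysql-analytics-replica",
    SourceDBInstanceIdentifier="crm-mysql-primary",
    DBInstanceClass="db.r5.large",
)

# 2) After the batch job completes, notify the on-premises dashboard system.
#    The topic has an email subscription targeting the dashboard's SMTP address.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:dashboard-updates",
    Subject="Nightly analytics complete",
    Message="New KPI data available at s3://analytics-output/2024-06-01/",
)
```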
The option that says: Consider using Amazon Redshift as the main OLTP transactional database instead of RDS for the batch analytics and use Redshift Spectrum to run SQL queries directly against Exabytes of structured or unstructured data in S3 without the need for unnecessary data movement. Utilize Amazon SNS to notify the on-premises system to update the dashboard is incorrect because Redshift is primarily used for OLAP scenarios whereas RDS is used for OLTP scenarios. Hence, replacing RDS with Redshift is not a valid solution. Although using Redshift Spectrum to run SQL queries is valid, it is still incorrect to replace RDS with Redshift as your main OLTP database.
The option that says: Consider using Amazon Redshift instead of Amazon RDS as the database for the CRM application. Use Amazon SQS to notify the on-premises system to update the dashboard is incorrect because Redshift is used for OLAP scenarios whereas RDS is used for OLTP scenarios. Hence, replacing RDS with Redshift is not a solution.
The option that says: Adding read replicas for the RDS database to speed up batch analytics and using Amazon SQS to notify the on-premises system to update the dashboard is incorrect because Amazon SQS is a pull-based message queuing service; it cannot push a notification (such as an email over SMTP) to the on-premises dashboard system. Amazon SNS should be used for the notification instead.
References:
Check out this Amazon RDS Cheat Sheet:
Question 44: Skipped
A company has recently migrated its core application to the AWS Cloud. The application allows users to upload scanned forms through a web application hosted on a fleet of Amazon EC2 instances. The application connects to a backend database hosted on Amazon RDS for PostgreSQL. The user metadata are stored on the database while the scanned forms are stored on an Amazon S3 bucket.
For each uploaded form, the application sends a notification to an Amazon SNS topic to which the team members are subscribed. Then, one of the team members logs in, validates the forms, and manually extracts relevant data from the scanned forms. This information is then submitted to another system using an API. The management wants to improve this process through automation to reduce human effort, increase efficiency, and maintain high accuracy.
Which of the following options is the recommended solution to meet the company's requirements?
Add another tier to the application by using AWS Step Functions and AWS Lambda to facilitate the different stages of processing. Implement an artificial intelligence and machine learning (AI/ML) service using Amazon Rekognition and Amazon Transcribe to parse information from the scanned forms. Store the output in Amazon DocumentDB. Update the application to parse data from the DocumentDB table and send it to the other system via API call.
Add another tier to the application by deploying an artificial intelligence and machine learning (AI/ML) model trained using Amazon SageMaker service. Use this model to perform optical character recognition (OCR) on the uploaded forms. Store the output in another Amazon S3 bucket. Update the application to parse data from the Amazon S3 bucket and send it to the other system via API call.
Add another tier to the application by deploying an open-source optical character recognition (OCR) software on Amazon Elastic Kubernetes (Amazon EKS) to process the uploaded forms. Store the output in another Amazon S3 bucket. Parse the output and send it to an Amazon DynamoDB table. Update the application to get data from the DynamoDB table and send it to the other system via API call.
Add another tier to the application by using AWS Step Functions and AWS Lambda to facilitate the different stages of processing. Use a combination of Amazon Textract and Amazon Comprehend to perform optical character recognition (OCR) and parse data from the scanned forms. Store the output in another Amazon S3 bucket. Update the application to parse data from the Amazon S3 bucket and send it to the other system via API call.
(Correct)
Explanation
Amazon Textract is a fully managed machine learning service that automatically extracts printed text, handwriting, and other data from scanned documents that goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.
Amazon Textract helps you add document text detection and analysis to your applications. Using Amazon Textract, you can do the following:
-Detect typed and handwritten text in a variety of documents, including financial reports, medical records, and tax forms.
-Extract text, forms, and tables from documents with structured data using the Amazon Textract Document Analysis API.
-Specify and extract information from documents using the Queries feature within the Amazon Textract Analyze Document API.
-Process invoices and receipts with the AnalyzeExpense API.
-Process ID documents, such as driver's licenses and passports issued by the U.S. government, using the AnalyzeID API.
-Upload and process mortgage loan packages through automatic routing of the document pages to the appropriate Amazon Textract analysis operations using the Analyze Lending workflow.
Amazon Comprehend uses natural language processing (NLP) to extract insights about the content of documents. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Use Amazon Comprehend to create new products based on understanding the structure of documents.
Amazon Comprehend uses machine learning to help you uncover the insights and relationships in your unstructured data. The service identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic.
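A minimal boto3 sketch of the OCR-then-parse step that a Lambda function in the Step Functions workflow might run, assuming a hypothetical bucket and object key:

```python
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")

BUCKET = "uploaded-scanned-forms"
KEY = "forms/2024/06/form-123.png"

# 1) OCR: extract the raw lines of text from the scanned form stored in S3.
ocr = textract.detect_document_text(
    Document={"S3Object": {"Bucket": BUCKET, "Name": KEY}}
)
text = "\n".join(
    block["Text"] for block in ocr["Blocks"] if block["BlockType"] == "LINE"
)

# 2) NLP: pull out entities (names, dates, organizations, quantities)
#    that the application will forward to the downstream system's API.
entities = comprehend.detect_entities(Text=text, LanguageCode="en")["Entities"]

for entity in entities:
    print(entity["Type"], entity["Text"], round(entity["Score"], 2))
```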
Therefore, the correct answer is: Add another tier to the application by using AWS Step Functions and AWS Lambda to facilitate the different stages of processing. Use a combination of Amazon Textract and Amazon Comprehend to perform optical character recognition (OCR) and parse data from the scanned forms. Store the output in another Amazon S3 bucket. Update the application to parse data from the Amazon S3 bucket and send it to the other system via API call. Amazon Textract uses machine learning service that automatically extracts printed text, handwriting, and other data from scanned documents. Amazon Comprehend uses machine learning to find insights and relationships in text.
The option that says: Add another tier to the application by deploying an open-source optical character recognition (OCR) software on Amazon Elastic Kubernetes (Amazon EKS) to process the uploaded forms. Store the output in another Amazon S3 bucket. Parse the output and send it to an Amazon DynamoDB table. Update the application to get data from the DynamoDB table and send it to the other system via API call is incorrect. This may be possible but is not recommended as it will add operation overhead. The Amazon Textract and Comprehend services are better suited for this scenario.
The option that says: Add another tier to the application by using AWS Step Functions and AWS Lambda to facilitate the different stages of processing. Implement an artificial intelligence and machine learning (AI/ML) service using Amazon Rekognition and Amazon Transcribe to parse information from the scanned forms. Store the output in Amazon DocumentDB. Update the application to parse data from the DocumentDB table and send it to the other system via API call is incorrect. Amazon Rekognition is used for computer vision to extract information from images and videos, not from scanned text forms. Amazon Transcribe is used for automatic speech recognition or adding speech-to-text capabilities to applications.
The option that says: Add another tier to the application by deploying an artificial intelligence and machine learning (AI/ML) model trained using Amazon SageMaker service. Use this model to perform optical character recognition (OCR) on the uploaded forms. Store the output in another Amazon S3 bucket. Update the application to parse data from the Amazon S3 bucket and send it to the other system via API call is incorrect. This may be possible; however, it requires you to create and train your own model for performing OCR. AWS already offers the Textract and Comprehend services, which are better suited for this scenario.
References:
Check out these AWS Textract and Amazon Comprehend Cheat Sheets:
Question 45: Skipped
An analytics company provides big data services to various clients worldwide. For performance-testing activities, a Big Data Analytics application is using an Elastic MapReduce cluster which will only be run once. The cluster is designed to ingest 20 TB of data with a total of 30 EC2 instances and is expected to run for about 48 hours.
Which of the following options is the most cost-effective architecture to implement for this scenario without sacrificing data integrity?
Use On-Demand instances for the core nodes. Use Reserved EC2 instances for the master node and Spot EC2 instances for the task nodes.
For both the master and core nodes, use Reserved EC2 instances. For the task nodes, use Spot EC2 instances.
Use a combination of On-Demand instance and Spot Instance types for both the master and core nodes. Use On-Demand EC2 instances for the task nodes.
For both the master and core nodes, use On-Demand EC2 instances. For the task nodes, use Spot EC2 instances.
(Correct)
Explanation
When you set up a cluster, you choose a purchasing option for EC2 instances. You can choose On-Demand Instances, Spot Instances, or both. With On-Demand Instances, you pay for compute capacity by the hour. Optionally, you can have these On-Demand Instances use Reserved Instance or Dedicated Instance purchasing options.
With Reserved Instances, you make a one-time payment for an instance to reserve capacity. Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. After you purchase a Reserved Instance, if all of the following conditions are true, Amazon EMR uses the Reserved Instance when a cluster launches:
- An On-Demand Instance is specified in the cluster configuration that matches the Reserved Instance specification.
- The cluster is launched within the scope of the instance reservation (the Availability Zone or Region).
- The Reserved Instance capacity is still available.
Spot Instances in Amazon EMR provide an option for you to purchase Amazon EC2 instance capacity at a reduced cost as compared to On-Demand purchasing. The disadvantage of using Spot Instances is that instances may terminate unpredictably as prices fluctuate.
With the uniform instance group configuration, you can have up to a total of 48 task instance groups. The ability to add instance groups in this way allows you to mix EC2 instance types and pricing options, such as On-Demand Instances and Spot Instances. This gives you the flexibility to respond to workload requirements in a cost-effective way.
With the instance fleet configuration, the ability to mix instance types and purchasing options are built-in, so there is only one task instance fleet.
Because Spot Instances are often used to run task nodes, Amazon EMR has default functionality for scheduling YARN jobs so that running jobs do not fail when task nodes running on Spot Instances are terminated. Amazon EMR does this by allowing application master processes to run only on core nodes. The application master process controls running jobs and needs to stay alive for the life of the job.
On-Demand instances cost more than Spot instances, which is why it is better to use Spot Instances for the task nodes to save costs. However, using Spot Instances for the master and core nodes is not recommended since a Spot Instance can potentially become unavailable during the 48-hour process.
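A minimal boto3 sketch of this purchasing mix using uniform instance groups; the release label, instance types, and counts are illustrative, not taken from the scenario:

```python
import boto3

emr = boto3.client("emr", region_name="us-west-2")

response = emr.run_job_flow(
    Name="one-off-20tb-analysis",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": False,   # terminate after the ~48-hour run
        "InstanceGroups": [
            # Master and core on On-Demand: losing them would break the cluster
            # or lose HDFS data, so they must not be interrupted.
            {"Name": "master", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "r5.2xlarge", "InstanceCount": 9},
            # Task nodes on Spot: they only add compute, so interruptions are
            # tolerable and the Spot discount lowers the overall cost.
            {"Name": "task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "r5.2xlarge", "InstanceCount": 20},
        ],
    },
)
print(response["JobFlowId"])
```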
Hence, this option is the correct answer: For both the master and core nodes, use On-Demand EC2 instances. For the task nodes, use Spot EC2 instances.
Remember that in this scenario, the processing will only take 48 hours. That is why it is not suitable to use Reserved Instances since the minimum reservation for this instance purchasing option is 1 year.
Therefore, the following options are incorrect:
- Use On-Demand instances for the core nodes. Use Reserved EC2 instances for the master node and Spot EC2 instances for the task nodes.
- For both the master and core nodes, use Reserved EC2 instances. For the task nodes, use Spot EC2 instances.
The option that says: Use a combination of On-Demand instance and Spot Instance types for both the master and core nodes. Use On-Demand EC2 instances for the task nodes is incorrect because using On-Demand EC2 instances for the task nodes is not cost-efficient. A better setup is to use Spot Instances for the task nodes.
References:
Check out this Amazon EMR Cheat Sheet:
Question 46: Skipped
A company runs an application in a fleet of Amazon EC2 instances in the us-east-2 region. A database server is hosted in the on-premises data center and complies with the BASE (Basically Available, Soft state, Eventual consistency) model rather than the ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The on-premises network has a 10 Gbps AWS Direct Connect connection to the Amazon VPC in us-east-2. The application relies on this database for normal operations. Whenever there are lots of database write requests, the application's behavior becomes erratic.
Which of the following options should the solutions architect implement to improve the performance of the application in a cost-effective way?
Create an Amazon RDS multi-AZ instance that will synchronize with the on-premises database server using Amazon EventBridge. Redirect the write operations of the application to the Amazon RDS endpoint via the Amazon Elastic Transcoder service.
Create a Hadoop cluster using Amazon Elastic Map Reduce (EMR) and use the S3DistCp tool to synchronize data between the on-premises database and the Hadoop cluster.
Update the application to write to an Amazon DynamoDB table. Feed the table to an Amazon EMR cluster and create a map function that will update the on-premises database for every table update.
Create an Amazon SQS queue and develop a consumer process to flush the queue to the on-premises database server. Update the application to enable writing to the SQS queue.
(Correct)
Explanation
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Since the application relies on an eventual consistency model, there should be no problem on adding an SQS queue in front of the database.
Decoupling message queuing from the database improves database availability and enables greater message queue scalability. It also provides a more cost-effective use of the database, and mitigates backpressure created when database performance is constrained by message management.
You can use standard message queues in many scenarios, as long as your application can process messages that arrive more than once and out of order, for example:
- Decouple live user requests from intensive background work: Let users upload media while resizing or encoding it.
- Allocate tasks to multiple worker nodes: Process a high number of credit card validation requests.
- Batch messages for future processing: Schedule multiple entries to be added to a database.
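A minimal boto3 sketch of the decoupling, assuming a hypothetical queue URL and a write_to_onprem_db() callable that represents the flush to the on-premises database:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-2.amazonaws.com/111122223333/db-write-queue"

def enqueue_write(record: dict) -> None:
    """Called by the application instead of writing to the database directly."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))

def consume_and_flush(write_to_onprem_db) -> None:
    """Consumer process: drains the queue and applies writes at a pace the
    on-premises database can handle (eventual consistency is acceptable)."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            write_to_onprem_db(json.loads(msg["Body"]))
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
            )
```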
Therefore, the correct answer is: Create an Amazon SQS queue and develop a consumer process to flush the queue to the on-premises database server. Update the application to enable writing to the SQS queue. Since the application follows the eventual consistency model, an SQS queue can be used to temporarily hold the write requests while a worker process flushes the queue to the on-premises database.
The option that says: Create a Hadoop cluster using Amazon Elastic Map Reduce (EMR) and use the S3DistCp tool to synchronize data between the on-premises database and the Hadoop cluster is incorrect. S3DistCp tool is used to copy large amounts of data from Amazon S3 into HDFS. It is not suitable for synchronizing data to an on-premises database.
The option that says: Create an Amazon RDS multi-AZ instance that will synchronize with the on-premises database server using Amazon EventBridge. Redirect the write operations of the application to the Amazon RDS endpoint via the Amazon Elastic Transcoder service is incorrect. The application is using the BASE consistency model so an SQL-based database, such as an Amazon RDS, may not be compatible with the data to be written by the application. Moreover, you cannot directly synchronize your on-premises database server using Amazon EventBridge. Amazon Elastic Transcoder is simply a media transcoding service in AWS; thus, it can't be used to send write operations to your Amazon RDS endpoint.
The option that says: Update the application to write to an Amazon DynamoDB table. Feed the table to an Amazon EMR cluster and create a map function that will update the on-premises database for every table update is incorrect. This may be possible but creating a DynamoDB table with a high WCU and an EMR cluster significantly increases operational costs compared to using an SQS queue.
References:
Check out this Amazon SQS Cheat Sheet:
Question 49: Skipped
A world-renowned logistics company runs its global enterprise e-commerce platform on the AWS cloud. The company has built a multi-tier web application running in a VPC that uses an Elastic Load Balancer in front of both the web tier and the app tier, with static assets served directly from an Amazon S3 bucket. It uses a combination of Amazon RDS and DynamoDB for the dynamic data, which is archived nightly into an Amazon S3 bucket for further processing with Amazon Elastic MapReduce. After a routine audit, the company found questionable log entries and suspects that someone is attempting to gain unauthorized access to the system. The solutions architect has been tasked to improve the security of the architecture against DDoS, SQL injection, and HTTP flood attacks as well as against bad bots (content scrapers).
Which of the following approaches provides the MOST suitable and scalable solution to protect the infrastructure from these kinds of security attacks?
Create an identical application stack that acts as a standby environment in another AWS region by using an AWS CloudFormation template. Use AWS CloudFormation StackSets to deploy the new stack and configure the security groups as well as network ACLs of the EC2 instances. Use Amazon Macie to protect the data stored in the Amazon S3 bucket. Create a Route 53 failover routing policy and configure an active-passive failover.
Insert the identified suspect's source IP as an explicit inbound deny to the network ACL rules of the web tier's subnet. Set up AWS Config to periodically audit the network ACLs and ensure that the blacklisted IP addresses are always in place.
Set up AWS WAF and AWS Shield Advanced on all web endpoints. Launch AWS WAF rules against SQL injection and other common web exploits.
(Correct)
Establish an AWS Direct Connect (DX) connection to the VPC through a Direct Connect partner. Configure Internet connectivity to filter the traffic in hardware Web Application Firewall (WAF) and then reroute the traffic through the DX connection into the application. Use the company's wide area network (WAN) to send traffic over the DX connection.
Explanation
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.
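A minimal boto3 sketch of a regional web ACL for the ALB that attaches the AWS managed SQL injection rule set; the names and ARNs below are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ecommerce-web-acl",
    Scope="REGIONAL",                  # REGIONAL for ALB/API Gateway, CLOUDFRONT for CloudFront
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-sqli-ruleset",
        "Priority": 1,
        # AWS managed rule group that inspects requests for SQL injection patterns.
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS", "Name": "AWSManagedRulesSQLiRuleSet"}},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "sqli"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "ecommerce-web-acl"},
)["Summary"]

# Attach the web ACL to the existing Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "loadbalancer/app/web-tier/0123456789abcdef",
)
```

Rate-based rules for HTTP floods and bot control rule groups can be added to the same web ACL, while AWS Shield Advanced is enabled separately on the protected resources.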
Hence, the correct answer is: Set up AWS WAF and AWS Shield Advanced on all web endpoints. Launch AWS WAF rules against SQL injection and other common web exploits.
The option that says: Create an identical application stack that acts as a standby environment in another AWS region by using an AWS CloudFormation template. Use AWS CloudFormation StackSets to deploy the new stack and configure the security groups as well as network ACLs of the EC2 instances. Use Amazon Macie to protect the data stored in the Amazon S3 bucket. Create a Route 53 failover routing policy and configure an active-passive failover is incorrect because this solution doesn't provide the necessary security protection against DDoS and other web vulnerability attacks. It only provides a disaster recovery plan in the event that your primary environment goes down. You have to set up AWS WAF and AWS Shield Advanced instead.
The option that says: Insert the identified suspect's source IP as an explicit inbound deny to the network ACL rules of the web tier's subnet. Set up AWS Config to periodically audit the network ACLs and ensure that the blacklisted IP addresses are always in place is incorrect. Even though blocking certain IPs mitigates the risk, the attacker could simply change source IP addresses and circumvent the network ACL check, and this approach does not prevent attacks from new sources of threat.
The option that says: Establish an AWS Direct Connect (DX) connection to the VPC through a Direct Connect partner. Configure Internet connectivity to filter the traffic in hardware Web Application Firewall (WAF) and then reroute the traffic through the DX connection into the application. Use the company's wide area network (WAN) to send traffic over the DX connection is incorrect. Although this option could work, the setup is very complex and it is not a cost-effective solution. Using the AWS Shield Advanced and AWS WAF combination is still the better solution.
References:
Check out these AWS WAF and Shield Cheat Sheets:
Question 52: Skipped
A company runs a suite of web applications in AWS. The application is hosted in an Auto Scaling group of On-Demand Amazon EC2 instances behind an Application Load Balancer that handles traffic from multiple web domains. The solutions architect is responsible for securing the system by allowing multiple domains to serve SSL traffic without the need to re-authenticate and re-provision a new certificate whenever a new domain name is added. This change of architecture from HTTP to HTTPS will help improve the SEO and Google search ranking of the web application.
Which of the following options are valid solutions to meet the above requirements? (Select TWO.)
Upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI).
(Correct)
Add a Subject Alternative Name (SAN) for each additional domain to your certificate.
Use a wildcard certificate to handle multiple sub-domains and different domains.
Use a Gateway Load Balancer instead of an Application Load Balancer. Upload all SSL certificates of the domains and use Server Name Indication (SNI).
Create a new CloudFront web distribution and configure it to serve HTTPS requests using dedicated IP addresses in order to associate your alternate domain names with a dedicated IP address in each CloudFront edge location.
(Correct)
Explanation
SNI Custom SSL relies on the SNI extension of the Transport Layer Security protocol, which allows multiple domains to serve SSL traffic over the same IP address by including the hostname to which the viewers are trying to connect.
You can host multiple TLS-secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These features are provided at no additional charge.
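A minimal boto3 sketch of binding additional ACM certificates to an existing HTTPS listener so the ALB can select the right one per client via SNI; the ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = ("arn:aws:elasticloadbalancing:us-west-2:111122223333:"
                "listener/app/web-suite/0123456789abcdef/fedcba9876543210")

# The listener keeps its default certificate; these become additional
# certificates that the ALB chooses from during the TLS handshake (SNI).
elbv2.add_listener_certificates(
    ListenerArn=LISTENER_ARN,
    Certificates=[
        {"CertificateArn": "arn:aws:acm:us-west-2:111122223333:certificate/domain-a"},
        {"CertificateArn": "arn:aws:acm:us-west-2:111122223333:certificate/domain-b"},
    ],
)
```

Adding a new domain is then just requesting a new ACM certificate and binding it to the listener, with no re-provisioning of the existing certificates.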
You can use your own SSL certificates with Amazon CloudFront at no additional charge with Server Name Indication (SNI) Custom SSL. Most modern browsers support SNI and provide an efficient way to deliver content over HTTPS using your own domain and SSL certificate. Amazon CloudFront delivers your content from each edge location and offers the same security as the Dedicated IP Custom SSL feature.
The option that says: Upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI) is correct. You can upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI).
The option that says: Create a new CloudFront web distribution and configure it to serve HTTPS requests using dedicated IP addresses in order to associate your alternate domain names with a dedicated IP address in each CloudFront edge location is correct. You can configure Amazon CloudFront to require viewers to interact with your content over an HTTPS connection using the HTTP to HTTPS Redirect feature. If you configure CloudFront to serve HTTPS requests using dedicated IP addresses (Dedicated IP Custom SSL), CloudFront allocates dedicated IP addresses for your alternate domain names at each edge location, so viewers can connect over HTTPS even if their browsers do not support SNI.
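For comparison, the fragment below shows the ViewerCertificate portion of a CloudFront DistributionConfig expressed as a Python dict; the certificate ARN is a placeholder, and the ACM certificate must live in us-east-1. Setting SSLSupportMethod to "vip" requests Dedicated IP Custom SSL, while "sni-only" would use SNI instead.

```python
# Hypothetical ViewerCertificate fragment of a CloudFront DistributionConfig.
# It would be passed inside the DistributionConfig argument of
# cloudfront.create_distribution(...) or update_distribution(...).
viewer_certificate = {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id",
    "SSLSupportMethod": "vip",               # dedicated IP per edge location; use "sni-only" for SNI
    "MinimumProtocolVersion": "TLSv1.2_2021",
}
```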
The option that says: Use a wildcard certificate to handle multiple sub-domains and different domains is incorrect. A wildcard certificate can only handle multiple sub-domains but not different domain names.
The option that says: Add a Subject Alternative Name (SAN) for each additional domain to your certificate is incorrect. Although adding a Subject Alternative Name (SAN) for each domain would technically work, you would still have to re-authenticate and re-provision the certificate every time a new domain is added. The scenario explicitly requires avoiding re-authentication and re-provisioning, so this solution does not meet the requirements.
The option that says: Use a Gateway Load Balancer instead of an Application Load Balancer. Upload all SSL certificates of the domains and use Server Name Indication (SNI) is incorrect because a Gateway Load Balancer does not support SNI.
References:
Check out this Amazon CloudFront Cheat Sheet:
SNI Custom SSL vs Dedicated IP Custom SSL:
Comparison of AWS Services Cheat Sheets:
Question 53: Skipped
A visual effects studio has over 40-TB worth of video files stored in the company's on-premises tape library. The tape drives are managed by a Media Asset Management (MAM) solution. The video files contain a variety of footage which includes faces, objects, sceneries, cars, and many others. The company wants to automatically build a metadata library for the video files based on these objects. This will then be used as a catalog for the search feature of the MAM solution. The company already has a catalog of people’s photos and names that appeared on the video footage. The company wants to migrate all the video files of the MAM solution to AWS so a Direct Connect connection was provisioned from the on-premises data center to AWS to facilitate this.
Which of the following is the MOST suitable implementation that will meet the company's requirements?
Securely upload the files to an Amazon S3 bucket using AWS Transfer for SFTP. Create an Amazon EC2 instance that will run GluonCV libraries to generate metadata information from the video files in the S3 bucket. Store the catalog of people’s faces and names in the Amazon EBS volume to be used by GluonCV. After processing the videos, push the generated metadata to the MAM solution search catalog.
Create a stream in Amazon Kinesis Video Streams that will ingest the videos from the MAM system and store the videos to an Amazon S3 bucket. Configure the MAM solution to stream the videos into Kinesis Video Streams. Use Amazon Rekognition to build a collection based on the videos by using the catalog of people’s faces and names. Set up a stream consumer that will retrieve the generated metadata and then push it to the MAM solution search catalog.
Provision an AWS Storage Gateway – file gateway appliance on the on-premises data center. Configure the MAM solution to extract the video files from the current tape archives and move them to the file gateway share which is then synced to Amazon S3. Use Amazon Rekognition to build a collection based on the videos by using the catalog of people’s faces and names. Create an AWS Lambda function that will invoke Rekognition to pull the video files from the S3 bucket, retrieve the generated metadata and then push it to the MAM solution search catalog.
(Correct)
Configure the MAM solution to extract the video files from the current tape archives and move them to an Amazon S3 bucket using AWS DataSync. Use an Amazon SageMaker Jupyter notebook instance to build a collection based on the videos by using the catalog of people’s faces and names. Create an AWS Lambda function that will invoke Amazon SageMaker to pull the video files from the S3 bucket, retrieve the generated metadata, and then push it to the MAM solution search catalog.
Explanation
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the AWS Cloud for scalable and cost-effective storage that helps maintain data security.
AWS Storage Gateway offers file-based, volume-based, and tape-based storage solutions. A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). A file gateway simplifies file storage in Amazon S3, integrates with existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.
Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content.
Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.
Therefore, the correct answer is: Provision an AWS Storage Gateway – file gateway appliance on the on-premises data center. Configure the MAM solution to extract the video files from the current tape archives and move them to the file gateway share which is then synced to Amazon S3. Use Amazon Rekognition to build a collection based on the videos by using the catalog of people’s faces and names. Create an AWS Lambda function that will invoke Rekognition to pull the video files from the S3 bucket, retrieve the generated metadata and then push it to the MAM solution search catalog.
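To make the Rekognition steps concrete, here is a minimal sketch under the assumption that the videos and the photo catalog have already been synced to S3 through the file gateway share; the bucket, collection, and object names are invented for illustration.

```python
import boto3

rekognition = boto3.client("rekognition")

COLLECTION_ID = "mam-people-catalog"   # hypothetical collection name
BUCKET = "mam-video-archive"           # hypothetical S3 bucket synced by the file gateway

# 1. Build a face collection from the existing catalog of people's photos.
rekognition.create_collection(CollectionId=COLLECTION_ID)
rekognition.index_faces(
    CollectionId=COLLECTION_ID,
    Image={"S3Object": {"Bucket": BUCKET, "Name": "catalog/jane-doe.jpg"}},
    ExternalImageId="jane-doe",
)

# 2. Start an asynchronous face search against a video file stored in S3.
job = rekognition.start_face_search(
    Video={"S3Object": {"Bucket": BUCKET, "Name": "footage/episode-01.mp4"}},
    CollectionId=COLLECTION_ID,
)

# 3. Once the job completes (for example, signaled through its SNS completion
#    notification that triggers the Lambda function), retrieve the generated
#    metadata and push it to the MAM search catalog.
results = rekognition.get_face_search(JobId=job["JobId"])
for person in results.get("Persons", []):
    print(person["Timestamp"], person.get("FaceMatches", []))
```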
The option that says: Configure the MAM solution to extract the video files from the current tape archives and move them to an Amazon S3 bucket using AWS DataSync. Use an Amazon SageMaker Jupyter notebook instance to build a collection based on the videos by using the catalog of people’s faces and names. Create an AWS Lambda function that will invoke Amazon SageMaker to pull the video files from the S3 bucket, retrieve the generated metadata and then push it to the MAM solution search catalog is incorrect. An Amazon SageMaker notebook instance is simply a machine learning (ML) compute instance running the Jupyter Notebook App; it does not provide a ready-made image or video analysis capability. Jupyter notebooks are primarily used to prepare your data and write code to train models, which would require significantly more effort than using the pre-trained face detection and search features of Amazon Rekognition.
The option that says: Create a stream in Amazon Kinesis Video Streams that will ingest the videos from the MAM system and store the videos to an Amazon S3 bucket. Configure the MAM solution to stream the videos into Kinesis Video Streams. Use Amazon Rekognition to build a collection based on the videos by using the catalog of people’s faces and names. Set up a stream consumer that will retrieve the generated metadata and then push it to the MAM solution search catalog is incorrect. This is not cost-effective and will require more changes to the existing MAM solution. In addition, it is not stated in the scenario that the MAM solution supports video streaming directly to Kinesis Video Stream. Most of the time, you need to set up a custom Kinesis Video Streams Producer client in order to send data to a Kinesis Video Stream.
The option that says: Securely upload the files to an Amazon S3 bucket using AWS Transfer for SFTP. Create an Amazon EC2 instance that will run GluonCV libraries to generate metadata information from the video files in the S3 bucket. Store the catalog of people’s faces and names in the Amazon EBS volume to be used by GluonCV. After processing the videos, push the generated metadata to the MAM solution search catalog is incorrect. Uploads to Amazon S3 are already secure by default because transfers use HTTPS, so unless the SFTP protocol is specifically required, AWS Transfer for SFTP adds unnecessary cost for this scenario. Running GluonCV on an EC2 instance is also not cost-effective and requires more management overhead to maintain and configure the libraries.
References:
Check out these Amazon Rekognition and Storage Gateway Cheat Sheets:
Question 56: Skipped
The www.tutorialsdojonews.com website runs on the WordPress platform on a fleet of Amazon EC2 instances behind an application load balancer to deliver news around the globe. There are a lot of customers complaining about the slow loading time of the website. The solutions architect has created a CloudFront distribution and set the ALB as the origin to improve the read performance. After several days, the IT Security team reported that the setup is not secure and that end-to-end HTTPS connections from the user's browser to the origin via CloudFront should be enforced.
Which of the following options should the solutions architect implement to satisfy the above requirements?
Configure CloudFront to use its default certificate. Configure the CloudFront distribution to redirect HTTP to HTTPS protocol. For the origin, generate a new SSL certificate on AWS Certificate Manager.
Use a self-signed certificate in both the origin and CloudFront.
Use third-party CA certificate on both the origin and CloudFront.
Configure the CloudFront distribution to redirect HTTP to HTTPS protocol. Generate a new SSL certificate on AWS Certificate Manager and use it as the CloudFront distribution and origin certificate.
(Correct)
Explanation
The certificate issuer you must use depends on whether you want to require HTTPS between viewers and CloudFront or between CloudFront and your origin:
HTTPS between viewers and CloudFront
- You can use a certificate that was issued by a trusted certificate authority (CA) such as Comodo, DigiCert, Symantec or other third-party providers.
- You can use a certificate provided by AWS Certificate Manager (ACM)
HTTPS between CloudFront and a custom origin
- If the origin is not an ELB load balancer, such as Amazon EC2, the certificate must be issued by a trusted CA such as Comodo, DigiCert, Symantec or other third-party providers.
- If your origin is an ELB load balancer, you can also use a certificate provided by ACM.
If you're using your own domain name, such as tutorialsdojo.com, you need to change several CloudFront settings. You also need to use an SSL/TLS certificate provided by AWS Certificate Manager (ACM), or import a certificate from a third-party certificate authority into ACM or the IAM certificate store.
Therefore, the correct answer is: Configure the CloudFront distribution to redirect HTTP to HTTPS protocol. Generate a new SSL certificate on AWS Certificate Manager and use it as the CloudFront distribution and origin certificate. You can use ACM to generate a valid SSL certificate for your ALB and CloudFront distribution.
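A minimal provisioning sketch is shown below, assuming DNS validation and an ALB running in a placeholder Region (us-west-2 is used here purely for illustration). The certificate attached to the CloudFront distribution must be requested in us-east-1, while the ALB's certificate is requested in the ALB's own Region.

```python
import boto3

DOMAIN = "www.tutorialsdojonews.com"

# Certificate for the CloudFront distribution: ACM in us-east-1 only.
acm_cloudfront = boto3.client("acm", region_name="us-east-1")
cf_cert = acm_cloudfront.request_certificate(
    DomainName=DOMAIN,
    ValidationMethod="DNS",
)

# Certificate for the ALB origin: ACM in the Region where the ALB runs
# (us-west-2 is an assumed placeholder Region).
acm_origin = boto3.client("acm", region_name="us-west-2")
alb_cert = acm_origin.request_certificate(
    DomainName=DOMAIN,
    ValidationMethod="DNS",
)

print("CloudFront certificate:", cf_cert["CertificateArn"])
print("Origin (ALB) certificate:", alb_cert["CertificateArn"])
```

After DNS validation succeeds, the first certificate is attached to the distribution's viewer certificate settings and the second to the ALB's HTTPS listener.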
The option that says: Use a third-party CA certificate on both the origin and CloudFront is incorrect. While you can use a third-party CA certificate on both the custom origin and CloudFront, it is more cost-effective to simply use an ACM-issued certificate, which also supports automatic renewal.
The option that says: Use a self-signed certificate in both the origin and CloudFront is incorrect. The website is hosted on Amazon EC2 behind an ALB, and CloudFront requires the origin certificate to be issued by a trusted CA such as Comodo, DigiCert, Symantec, or another third-party provider (or by ACM for an ELB origin); self-signed certificates are not trusted.
The option that says: Configure CloudFront to use its default certificate. Configure the CloudFront distribution to redirect HTTP to HTTPS protocol. For the origin, generate a new SSL certificate on AWS Certificate Manager is incorrect. You cannot use the default certificate in CloudFront since the website is using a custom domain (www.tutorialsdojonews.com). You can generate a valid SSL certificate for your domain on AWS Certificate Manager.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 57: Skipped
A company is hosting its three-tier web application on the us-east-1 region of AWS. The web and application tiers are stateless and both are running on their own fleet of On-Demand Amazon EC2 instances, each with its respective Auto Scaling group. The database tier is running on an Amazon Aurora database with about 40 TB of data. As part of the business continuity strategy of the company, the Solutions Architect must design a disaster recovery plan in case the primary region fails. The application requires an RTO of 30 minutes and the data tier requires an RPO of 5 minutes.
Which of the following options should the Solution Architect implement to achieve the company requirements in a cost-effective manner? (Select TWO.)
Set up a cross-Region read replica of the Amazon Aurora database to the backup region. Promote this read replica as the master database in case of a disaster in the primary region.
(Correct)
For a quick recovery time, set up a hot-standby of web and application tier on the backup region. Redirect the traffic to the backup region in case of a disaster in the primary region.
Use AWS Backup to create a backup job that will copy the EC2 EBS volumes and RDS data to an Amazon S3 bucket in another region. Restore the backups in case of a disaster in the primary region.
Schedule a daily snapshot of the Amazon EC2 instances for the web and application tier. Copy the snapshot to the backup region. Restore the backups in case of a disaster in the primary region.
(Correct)
Configure an automated snapshot of the Amazon Aurora database every 5 minutes. Quickly restore the database on the backup region in case of a disaster in the primary region.
Explanation
Amazon EC2 EBS volumes are the primary persistent storage option for Amazon EC2. You can use this block storage for structured data, such as databases, or unstructured data, such as files in a file system on a volume. With Amazon EBS, you can create point-in-time snapshots of volumes, which we store for you in Amazon S3. After you create a snapshot and it has finished copying to Amazon S3, you can copy it from one AWS Region to another, or within the same Region.
Snapshots are useful if you want to back up your data and logs across different geographical locations at regular intervals. In case of disaster, you can restore your applications using point-in-time backups stored in the secondary Region. This minimizes data loss and recovery time.
You can create cross-region read replicas for Amazon Aurora. This allows you to serve read traffic from your users in different geographic regions and increases your application’s responsiveness. This feature also provides you with improved disaster recovery capabilities in case of regional disruptions. You can seamlessly migrate your database from one region to another by creating a cross-region read replica and promoting it to be the new primary database.
You can create an Amazon Aurora MySQL DB cluster as a read replica in a different AWS Region than the source DB cluster. You can promote an Aurora MySQL read replica to a standalone DB cluster. When you promote an Aurora MySQL read replica, its DB instances are rebooted before they become available. Typically, you promote an Aurora MySQL read replica to a standalone DB cluster as a data recovery scheme if the source DB cluster fails.
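A minimal sketch of this DR setup with boto3 is shown below, assuming an Aurora MySQL source cluster and invented identifiers, Regions, and instance class; it creates the cross-Region replica cluster and shows the promotion call that would be used during a failover.

```python
import boto3

# DR Region client; the source cluster ARN and all identifiers are placeholders.
rds_dr = boto3.client("rds", region_name="us-west-2")
SOURCE_CLUSTER_ARN = "arn:aws:rds:us-east-1:111122223333:cluster:prod-aurora"

# 1. Create a cross-Region Aurora read replica cluster plus one instance.
#    (For an encrypted source cluster, KmsKeyId and SourceRegion would also be needed.)
rds_dr.create_db_cluster(
    DBClusterIdentifier="prod-aurora-replica",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=SOURCE_CLUSTER_ARN,
)
rds_dr.create_db_instance(
    DBInstanceIdentifier="prod-aurora-replica-1",
    DBClusterIdentifier="prod-aurora-replica",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)

# 2. During a disaster in the primary Region, promote the replica cluster
#    so it becomes a standalone, writable cluster.
rds_dr.promote_read_replica_db_cluster(
    DBClusterIdentifier="prod-aurora-replica"
)
```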
When restoring an Amazon Aurora snapshot or performing a point-in-time restore, the restoration may take anywhere from several minutes to hours. Long restore times are usually caused by long-running transactions in the source database at the time the backup was taken.
The option that says: Schedule a daily snapshot of the Amazon EC2 instances for the web and application tier. Copy the snapshot to the backup region. Restore the backups in case of a disaster in the primary region is correct. The web and application tiers are stateless, meaning they don’t have any important data stored on them. Therefore, copying the daily snapshot of the EC2 instance to the backup region will suffice. The RTO of 30 minutes is ample time to spawn new EC2 instances on the backup region.
The option that says: Set up a cross-Region read replica of the Amazon Aurora database to the backup region. Promote this read-replica as the master database in case of a disaster in the primary region is correct. Given that the RPO for the data tier is 5 minutes, it is better to create a cross-Region read-replica on the backup region. The primary DB instance will asynchronously replicate the data to the Read Replica. So in the event that the primary DB failed, the Read Replica contains the updated data. You can also quickly promote this as the master DB instance in case of a disaster in the primary region. You don’t have to wait for a long database snapshot restore time too, which might exceed the 30-minute RTO requirement.
The option that says: For a quick recovery time, set up a hot-standby of web and application tier on the backup region. Redirect the traffic to the backup region in case of a disaster in the primary region is incorrect. Although this is possible, this is not the most cost-effective solution as it entails a significant number of resources that are continuously running. With an RTO of 30 minutes, you can quickly restore backups of the EC2 snapshots of the web and application tier instead of running a hot-standby environment. Take note that in Disaster Recovery, a "hot-standby" means that the application runs in the DR region. Because it's always running, you will incur a significant amount of cost.
The option that says: Configure an automated snapshot of the Amazon Aurora database every 5 minutes. Quickly restore the database on the backup region in case of a disaster in the primary region is incorrect. Restoring 40 TB of data may not be possible if you have an RTO requirement of 30 minutes. Depending on how busy the database was during the time the snapshot was taken, the restoration process may take longer than 30 minutes to complete. Moreover, automated backups only occur once a day during the defined backup window; you can't configure them to run every 5 minutes.
The option that says: Use AWS Backup to create a backup job that will copy the EC2 EBS volumes and RDS data to an Amazon S3 bucket in another region. Restore the backups in case of a disaster in the primary region is incorrect. This may be possible, but restoring from the RDS backup may take longer than the required 30-minute RTO. The highest scheduled backup frequency in AWS Backup is every 12 hours, not every 5 minutes, so it can only provide an RPO of up to 12 hours. A better solution is to use a read replica, whose replication latency of only a few minutes provides a much lower RPO.
References:
Check out these Amazon EBS and Amazon RDS Cheat Sheets:
Question 61: Skipped
A company plans to release a public beta of its new video game. The release package is approximately 5GB in size. Based on previous releases and community feedback, millions of users from around the world are expected to download the new game. Currently, the company has a Linux-based, FTP website that lists the files which are hosted on its on-premises data center. Public Internet users are able to download the game via the FTP website. However, the company wants a new solution that is cost-effective and will allow faster download performance for its users regardless of their location.
Which of the following options is the recommended solution to meet the company’s requirements?
Host the FTP service on an Auto Scaling group of Amazon EC2 instances. Save the game files on the mounted Amazon EFS volume on each instance. Place the Auto Scaling group behind a Network Load Balancer. Create an Amazon Route 53 entry pointing to the NLB. Publish the Route 53 entry as the FTP website URL to allow users to download the game package.
Create an Amazon S3 bucket with website hosting enabled and upload the game package on it. Create an Amazon CloudFront distribution with the S3 bucket as the origin. Create an Amazon Route 53 entry pointing to the CloudFront distribution. Publish the Route 53 entry as the FTP website URL to allow users to download the game package.
(Correct)
Create an Amazon S3 bucket with website hosting enabled and upload the game package on it. To improve cost-effectiveness, enable the “Requester Pays” option for the S3 bucket. Create an Amazon Route 53 entry pointing to the S3 bucket. Publish the Route 53 entry as the FTP website URL to allow users to download the game package.
Host the FTP service on an Auto Scaling group of Amazon EC2 instances. Save the game files on the mounted Amazon EBS volumes on each instance. Place the Auto Scaling group behind an Application Load Balancer. Create an Amazon Route 53 entry pointing to the ALB. Publish the Route 53 entry as the FTP website URL to allow users to download the game package.
Explanation
You can use Amazon S3 to host a static website. Hosting a static website on Amazon S3 delivers a highly performant and scalable website at a fraction of the cost of a traditional web server. To host a static website on Amazon S3, configure an Amazon S3 bucket for website hosting and upload your website content. Using the AWS Management Console, you can configure your Amazon S3 bucket as a static website without writing any code. Depending on your website requirements, you can also use some optional configurations, including redirects, web traffic logging, and custom error documents.
When you configure a bucket as a static website, you must enable static website hosting, configure an index document, and set permissions.
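As an illustrative sketch only (the bucket and object names are made up), enabling static website hosting and uploading the release package with boto3 could look like this:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "game-beta-downloads"  # hypothetical bucket name

# Enable static website hosting with index and error documents.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload the release package that users will download through CloudFront.
s3.upload_file("game-beta-v1.0.zip", BUCKET, "downloads/game-beta-v1.0.zip")
```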
You can use Amazon CloudFront to improve the performance of your Amazon S3 website. CloudFront makes your website files (such as HTML, images, and video) available from data centers around the world (known as edge locations). When a visitor requests a file from your website, CloudFront automatically redirects the request to a copy of the file at the nearest edge location. This results in faster download times than if the visitor had requested the content from a data center that is located farther away.
CloudFront caches content at edge locations for a period of time that you specify. If a visitor requests content that has been cached for longer than the expiration date, CloudFront checks the origin server to see if a newer version of the content is available. If a newer version is available, CloudFront copies the new version to the edge location. Changes that you make to the original content are replicated to edge locations as visitors request the content.
To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution using one of these configurations:
- Using a REST API endpoint as the origin, with access restricted by an origin access identity (OAI)
- Using a website endpoint as the origin, with anonymous (public) access allowed. Helpful when creating public FTP websites to allow public users to download files.
- Using a website endpoint as the origin, with access restricted by a Referer header
- Using AWS CloudFormation to deploy a REST API endpoint as the origin, with access restricted by an OAI and a custom domain pointing to CloudFront
Therefore, the correct answer is: Create an Amazon S3 bucket with website hosting enabled and upload the game package on it. Create an Amazon CloudFront distribution with the S3 bucket as the origin. Create an Amazon Route 53 entry pointing to the CloudFront distribution. Publish the Route 53 entry as the FTP website URL to allow users to download the game package. Storing the game package on an Amazon S3 bucket is very cost-effective. Using CloudFront will ensure that users will have a consistently high download performance regardless of their location.
The option that says: Host the FTP service on an Auto Scaling group of Amazon EC2 instances. Save the game files on the mounted Amazon EBS volumes on each instance. Place the Auto Scaling group behind an Application Load Balancer. Create an Amazon Route 53 entry pointing to the ALB. Publish the Route 53 entry as the FTP website URL to allow users to download the game package is incorrect. This is possible but very expensive as you have to attach EBS volumes to each instance. Additionally, this is limited to only one region. Other users from around the world may experience a slower download speed.
The option that says: Host the FTP service on an Auto Scaling group of Amazon EC2 instances. Save the game files on the mounted Amazon EFS volume on each instance. Place the Auto Scaling group behind a Network Load Balancer. Create an Amazon Route 53 entry pointing to the NLB. Publish the Route 53 entry as the FTP website URL to allow users to download the game package is incorrect. This is not recommended because storing the files on EFS is more expensive than using an S3 bucket, and running a fleet of EC2 instances behind a Network Load Balancer adds further cost without improving download performance for users in other regions.
The option that says: Create an Amazon S3 bucket with website hosting enabled and upload the game package on it. To improve cost-effectiveness, enable the “Requester Pays” option for the S3 bucket. Create an Amazon Route 53 entry pointing to the S3 bucket. Publish the Route 53 entry as the FTP website URL to allow users to download the game package is incorrect. This is not possible because the users would need their own AWS accounts and would have to send authenticated requests in order to download an object from a bucket that has "Requester Pays" enabled.
References:
Check out these Amazon S3 and Amazon CloudFront Cheat Sheets:
Question 62: Skipped
A company recently developed a web application that processes customer behavioral data and stores the results in a DynamoDB table. The application is expected to receive a high usage load. To ensure that data is not lost when DynamoDB write requests are throttled, the solutions architect must reduce the load taken by the table.
Which of the following is the MOST cost-effective strategy for reducing the load on the DynamoDB table?
Replicate the DynamoDB table to another AWS region using global tables.
Provision higher write-capacity units (WCUs) to your DynamoDB table.
Use an SQS queue to decouple messages from the application and the database.
(Correct)
Provision more DynamoDB tables to absorb the load.
Explanation
Queuing is a commonly used solution for separating computation components in a distributed processing system. It is a form of the asynchronous communication system used in serverless and microservices architectures. Messages wait in a queue for processing, and leave the queue when received by a single consumer.
Amazon Simple Queue Service (Amazon SQS) is a scalable message queuing system that stores messages as they travel between various components of your application architecture. Amazon SQS enables web service applications to quickly and reliably queue messages that are generated by one component and consumed by another component. A queue is a temporary repository for messages that are awaiting processing.
The goal of this pattern is to smooth out the traffic from the application so that the load process into DynamoDB is much more consistent. The key service used to achieve this is Amazon SQS, which holds all the items until a loader process stores the data in DynamoDB.
Therefore, the correct answer is: Use an SQS queue to decouple messages from the application and the database. Data can be lost if the application fails to store it in DynamoDB due to throttling. Amazon SQS can reduce the load by temporarily holding the data until the DynamoDB throttling subsides. It is scalable as well as cost-efficient.
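A minimal sketch of this buffering pattern is shown below; the queue URL, table name, and item attributes are hypothetical, and the consumer is assumed to be a Lambda function subscribed to the queue as an event source.

```python
import json
import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/behavior-events"  # placeholder
table = dynamodb.Table("CustomerBehavior")  # placeholder table name

def record_event(event_payload: dict) -> None:
    """Application tier: enqueue the write instead of hitting DynamoDB directly."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event_payload))

def lambda_handler(event, context):
    """Consumer: drain SQS messages into DynamoDB at a sustainable rate.

    If a write is throttled and raises an error, the unacknowledged message
    returns to the queue after the visibility timeout and is retried, so no
    data is lost.
    """
    for record in event["Records"]:
        item = json.loads(record["body"])
        table.put_item(Item=item)
```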
The option that says: Replicate the DynamoDB table to another AWS region using global tables is incorrect. The global table is a DynamoDB feature that is primarily used for applications requiring multi-region fault tolerance. It won't reduce the load on the existing DynamoDB table.
The option that says: Provision higher write-capacity units (WCUs) to your DynamoDB table is incorrect. Provisioning enough WCUs to absorb the peak load is expensive, and with bursty traffic you would be paying for capacity that sits idle most of the time; buffering the writes is the more cost-effective approach.
The option that says: Provision more DynamoDB tables to absorb the load is incorrect. While it is possible to create another table for the application to use, this is an anti-pattern and will require a lot of development overhead to keep the data in the tables in sync, not to mention the difficulty of querying across them. Remember that JOIN operations are not possible in DynamoDB, so you would need to implement them at the application level. Furthermore, this is not a cost-efficient solution.
References:
Check out this Amazon SQS Cheat Sheet:
Question 65: Skipped
A supermarket chain has a team that handles branded credit card transactions from major card schemes such as Mastercard, Visa, Discover, and AMEX. The company requested an external auditor to audit its AWS environment as part of the Payment Card Industry Data Security Standard (PCI DSS) security compliance. The auditor, operating from their own AWS account, has requested read-only access to the AWS resources across all the company's accounts in order to conduct the necessary checks.
Which of the following options is the recommended action to give the auditor the required access?
Create a new IAM User which has an access key ID and a secret access key for API calls that can be used by the auditor. Attach read-only permissions to the IAM user.
Give the auditor each of your AWS users' username and password in your VPC and let the auditor use those credentials to login to a specific account and conduct the audit.
Create an Active Directory account for the auditor and use identity federation for SSO to let the auditor log in to your AWS environment and conduct the audit.
Create an IAM role in each AWS account that requires auditing, with a trust policy that lists the auditor's ARN as a principal. Assign this role read-only permissions to access necessary resources.
(Correct)
Explanation
In this scenario, it is recommended that you create an IAM Role that contains the required permissions needed by the auditor. This specific role can be revoked from the user once the compliance activities end.
An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user.
Therefore, the correct answer is: Create an IAM role in each AWS account that requires auditing, with a trust policy that lists the auditor's ARN as a principal. Assign this role read-only permissions to access necessary resources.
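A minimal sketch of creating such a role in one of the audited accounts is shown below; the auditor's principal ARN is a placeholder, and the AWS managed ReadOnlyAccess policy is used for the read-only permissions. In practice you would typically also add an ExternalId condition to the trust policy for a third party.

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder ARN of the auditor's IAM principal in their own AWS account.
AUDITOR_PRINCIPAL_ARN = "arn:aws:iam::999988887777:user/external-auditor"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": AUDITOR_PRINCIPAL_ARN},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Create the role the auditor can assume, then grant read-only access.
iam.create_role(
    RoleName="ExternalAuditorRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="ExternalAuditorRole",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```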
The option that says: Give the auditor each of your AWS users' username and password in your VPC and let the auditor use those credentials to login to a specific account and conduct the audit is incorrect. Sharing usernames and passwords is in itself a major security violation that would result in compliance failure.
The option that says: Create an Active Directory account for the auditor and use identity federation for SSO to let the auditor log in to your AWS environment and conduct the audit is incorrect. While using Active Directory and SSO is a secure approach, this method is typically used for integrating an organization's internal user management system with AWS, which is unnecessary for an external auditor.
The option that says: Create a new IAM User which has an access key ID and a secret access key for API calls that can be used by the auditor. Attach read-only permissions to the IAM user is incorrect. Creating a new IAM user for the auditor with read-only permissions is a valid approach, but it is not as secure or manageable as using a role. IAM users with long-term credentials pose a security risk if those credentials are compromised.
References:
Check out this AWS IAM Cheat Sheet:
Question 66: Skipped
A company is running a serverless backend API service on AWS. It has several AWS Lambda functions written in Python and an Amazon API Gateway that is configured to invoke the functions. The company wants to secure the API endpoint by ensuring that only authorized IAM users or roles can access the Amazon API Gateway endpoint. The Solutions Architect was also tasked to provide the ability to inspect each request end-to-end to check the latency of the request and to generate service maps.
Which of the following implementation will fulfill the above company requirements?
Generate a new client certificate on Amazon API Gateway. Distribute this certificate to all AWS users or roles that require access to the API endpoint. Ensure that each user will pass the client certificate for every request made to the API endpoint. Trace and analyze each user request on API Gateway using Amazon CloudWatch Logs.
Write a separate AWS Lambda function that will act as a custom authorizer. For every call to the API gateway, require the client to pass the access key and secret key. Use the Lambda function to validate the key/secret pair against a valid IAM user. Trace and analyze each user request on API Gateway by using AWS X-Ray.
Configure authorization to use AWS_IAM for the API Gateway method. Create the IAM users or roles that have the execute-api:Invoke permission to the ARN of the API resource. Enable request signing with AWS Signature for every call to the API endpoint. Trace and analyze each user request on API Gateway by using AWS X-Ray.
(Correct)
Ensure that the API Gateway resource is secured by only returning the company’s domain in Access-Control-Allow-Origin headers and enabling Cross-origin resource sharing (CORS). Create the IAM users or roles that have the execute-api:Invoke permission to the ARN of the API resource. Trace and analyze each user request on API Gateway using Amazon CloudWatch Logs.
Explanation
API Gateway supports multiple mechanisms for controlling and managing access to your API.
You can use the following mechanisms for authentication and authorization:
Resource policies let you create resource-based policies to allow or deny access to your APIs and methods from specified source IP addresses or VPC endpoints.
Standard AWS IAM roles and policies offer flexible and robust access controls that can be applied to an entire API or individual methods. IAM roles and policies can be used for controlling who can create and manage your APIs, as well as who can invoke them.
IAM tags can be used together with IAM policies to control access.
Endpoint policies for interface VPC endpoints allow you to attach IAM resource policies to interface VPC endpoints to improve the security of your private APIs.
Lambda authorizers are Lambda functions that control access to REST API methods using bearer token authentication, as well as information taken from request parameters such as headers, paths, query strings, stage variables, or context variables. Lambda authorizers are used to control who can invoke REST API methods.
Amazon Cognito user pools let you create customizable authentication and authorization solutions for your REST APIs. Amazon Cognito user pools are used to control who can invoke REST API methods.
You can enable IAM authentication for an API method in the API Gateway console. Then, you can use IAM policies and resource policies to designate permissions for your API's users. Here are the general steps to implement this:
In the API Gateway console, enable IAM authentication for your API method by going to Settings > Authorization, and choosing AWS_IAM.
Deploy the API to ensure the changes are applied.
Grant API authorization to a group of IAM users by creating an IAM policy document with the required permissions.
For the IAM policy, ensure that it allows the execute-api:Invoke action and that the resource is set to the ARN of the API Gateway method (see the example policy and signed request after this list).
Attach this policy to the required IAM users or IAM roles.
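The sketch below shows what such a policy could look like expressed as a Python dict, followed by one way a caller might sign a request with AWS Signature Version 4 using botocore. The API ID, Region, account ID, stage, and resource path are all placeholders, and the third-party requests library is assumed to be installed.

```python
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Hypothetical IAM policy granting invoke access to a single API method.
invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/GET/orders",
        }
    ],
}

# Signing a call to the API endpoint with SigV4 using the caller's credentials.
url = "https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/prod/orders"
credentials = boto3.Session().get_credentials()

aws_request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(aws_request)

response = requests.get(url, headers=dict(aws_request.headers))
print(response.status_code)
```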
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. AWS X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
Therefore, the correct answer is: Configure authorization to use AWS_IAM for the API Gateway method. Create the IAM users or roles that have the execute-api:Invoke permission to the ARN of the API resource. Enable request signing with AWS Signature for every call to the API endpoint. Trace and analyze each user request on API Gateway by using AWS X-Ray.
The option that says: Ensure that the API Gateway resource is secured by only returning the company’s domain in Access-Control-Allow-Origin headers and enabling Cross-origin resource sharing (CORS). Create the IAM users or roles that have the execute-api:Invoke permission to the ARN of the API resource. Trace and analyze each user request on API Gateway using Amazon CloudWatch Logs is incorrect because CORS does not provide any IAM authentication capability. Although API Gateway can send execution logs to CloudWatch Logs, these logs are only useful for monitoring purposes and do not provide end-to-end tracing or inspection of each request.
The option that says: Write a separate AWS Lambda function that will act as a custom authorizer. For every call to the API gateway, require the client to pass the access key and secret key. Use the Lambda function to validate the key/secret pair against a valid IAM user. Trace and analyze each user request on API Gateway by using AWS X-Ray is incorrect. Creating a custom Lambda authorizer will require more work compared to just using the AWS_IAM authorizer. Sending AWS access and secret key as part of each HTTPS call is not recommended from a security standpoint since these are sensitive security credentials.
The option that says: Generate a new client certificate on Amazon API Gateway. Distribute this certificate to all AWS users or roles that require access to the API endpoint. Ensure that each user will pass the client certificate for every request made to the API endpoint. Trace and analyze each user request on API Gateway using Amazon CloudWatch Logs is incorrect. Using client certificates for authentication does not provide any IAM authentication capability, and although API Gateway can send execution logs to CloudWatch Logs, the logs do not provide end-to-end tracing or inspection of each request.
References:
Check out the AWS X-Ray and Amazon API Gateway Cheat Sheets:
Question 67: Skipped
A company has scheduled to launch a promotional sale on its e-commerce platform. The company’s web application is hosted on a fleet of Amazon EC2 instances in an Auto Scaling group. The database tier is hosted on an Amazon RDS for PostgreSQL DB instance. This is a large event so the management expects a sudden spike and unpredictable user traffic for the duration of the event. New users are also expected to register and participate in the event so there will be a lot of database writes during the event. The Solutions Architect has been tasked to create a solution that will ensure all submissions are committed to the database without changing the underlying data model.
Which of the following options is the recommended solution for this scenario?
To minimize any changes on the application or the current infrastructure, manually scale the current DB instance to a significantly larger instance size before the event. Choose a larger instance size depending on the anticipated user traffic, and scale down after the event is completed.
Decouple the application and database tier by creating an Amazon SQS queue between them. Create an AWS Lambda function that picks up the messages on the SQS queue and writes them into the database.
(Correct)
Instead of using Amazon RDS, migrate the database to an Amazon DynamoDB table instead. Utilize the built-in automatic scaling in DynamoDB to scale the database based on user traffic.
Create an Amazon ElastiCache for Memcached cluster between the application and database tier. The cache will temporarily store the user submissions until the database is able to commit those entries.
Explanation
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
Use Amazon SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be available. SQS lets you decouple application components so that they run and fail independently, increasing the overall fault tolerance of the system. Multiple copies of every message are stored redundantly across multiple availability zones so that they are available whenever needed.
Amazon SQS leverages the AWS cloud to dynamically scale based on demand. SQS scales elastically with your application so you don’t have to worry about capacity planning and pre-provisioning. There is no limit to the number of messages per queue, and standard queues provide nearly unlimited throughput.
You can use Amazon Simple Queue Service (SQS) to trigger AWS Lambda functions. Lambda functions can act as message consumers. Message consumers are processes that make the ReceiveMessage API call on SQS. Messages from queues can be processed either in batch or as a single message at a time. Each approach has its advantages and disadvantages.
- Batch processing: This is where each message can be processed independently, and an error on a single message is not required to disrupt the entire batch. Batch processing provides the most throughput for processing, and also provides the most optimizations for resources involved in reading messages.
- Single message processing: Single message processing is commonly used in scenarios where each message may trigger multiple processes within the consumer. In case of errors, the retry is confined to the single message.
Therefore, the correct answer is: Decouple the application and database tier by creating an Amazon SQS queue between them. Create an AWS Lambda function that picks up the messages on the SQS queue and writes them into the database. This is an excellent scenario for which Amazon SQS is designed for. The SQS queue can scale reliably to hold user submissions to ensure they will be written to the database. The SQS queue is highly durable which ensures that you will not lose any submissions.
The option that says: To minimize any changes on the application or the current infrastructure, manually scale the current DB instance to a significantly larger instance size before the event. Choose a larger instance size depending on the anticipated user traffic, and scale down after the event is completed is incorrect. The question does not mention whether the database uses a Multi-AZ configuration; if Multi-AZ is not enabled, scaling the instance up or down results in a brief downtime. Additionally, since the user traffic is unpredictable, there is no guarantee that the larger instance can handle the load.
The option that says: Instead of using Amazon RDS, migrate the database to an Amazon DynamoDB table instead. Utilize the built-in automatic scaling in DynamoDB to scale the database based on user traffic is incorrect. This is not recommended for the scenario as changing the database requires major changes in the application and database layers.
The option that says: Create an Amazon ElastiCache for Memcached cluster between the application and database tier. The cache will temporarily store the user submissions until the database is able to commit those entries is incorrect. Amazon ElastiCache is designed for caching frequent requests to the database and not for holding data that is waiting to be written to the database.
References:
Check out the Amazon SQS Cheat Sheet:
Question 68: Skipped
A company has multiple database servers hosted on extra-large Reserved Amazon EC2 instances which are all deployed to a private subnet. A single NAT instance is in place to allow the servers to fetch data from the Internet. The solutions architect noticed that whenever there is a new database patch update, the processing takes a lot of time which results in request time-outs. As a workaround, the developers just manually re-run the database patch update on the servers that failed to complete the process the first time.
What could be the possible root cause of the issue and what steps should the solutions architect implement to solve it?
The timeout behavior of a NAT instance is that, when there is a connection time out, it sends a FIN packet to resources behind the NAT instance to close the connection. It does not attempt to continue the connection which is why some database updates are failing. For better performance, use a NAT Gateway instead.
(Correct)
There is no Virtual Private Gateway attached to the VPC that links up to the Customer Gateway of the database provider. Simply add the missing gateway and the issue will be resolved
There is no Internet Gateway (IGW) attached to the VPC. Simply add an IGW and the issue will be resolved.
The database servers are not in a Placement Group, which means that the inter-instance communications are not optimal. This is causing the timeout issue. Place all the database servers on either a Spread or a Cluster type Placement group to fix the problem.
Explanation
A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so that instances in a private subnet can connect to services outside your VPC but external services cannot initiate a connection with those instances.
The NAT gateway replaces the source IPv4 address of the instances with the private IP address of the NAT gateway. When sending response traffic to the instances, the NAT device translates the addresses back to the original source IPv4 addresses.
When you create a NAT gateway, you specify one of the following connectivity types:
Public – (Default) Instances in private subnets can connect to the internet through a public NAT gateway, but cannot receive unsolicited inbound connections from the internet. You create a public NAT gateway in a public subnet and must associate an elastic IP address with the NAT gateway at creation. You route traffic from the NAT gateway to the internet gateway for the VPC. Alternatively, you can use a public NAT gateway to connect to other VPCs or your on-premises network. In this case, you route traffic from the NAT gateway through a transit gateway or a virtual private gateway.
Private – Instances in private subnets can connect to other VPCs or your on-premises network through a private NAT gateway. You can route traffic from the NAT gateway through a transit gateway or a virtual private gateway. You cannot associate an elastic IP address with a private NAT gateway. You can attach an internet gateway to a VPC with a private NAT gateway, but if you route traffic from the private NAT gateway to the internet gateway, the internet gateway drops the traffic.
Take note of the following difference between a NAT Instance and a NAT Gateway when handling a timeout:
NAT Instance - When there is a connection time out, a NAT instance sends a FIN packet to resources behind the NAT instance to close the connection.
NAT Gateway - When there is a connection time out, a NAT gateway returns an RST packet to any resources behind the NAT gateway that attempt to continue the connection (it does not send a FIN packet).
Therefore, the correct answer is: The timeout behavior of a NAT instance is that, when there is a connection time out, it sends a FIN packet to resources behind the NAT instance to close the connection. It does not attempt to continue the connection which is why some database updates are failing. For better performance, use a NAT Gateway instead.
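A minimal sketch of the replacement with boto3 is shown below; the subnet and route table IDs are placeholders. It allocates an Elastic IP, creates a public NAT gateway in a public subnet, and repoints the private subnet's default route at it.

```python
import boto3

ec2 = boto3.client("ec2")

PUBLIC_SUBNET_ID = "subnet-0public1234567890"     # placeholder
PRIVATE_ROUTE_TABLE_ID = "rtb-0private123456789"  # placeholder

# 1. Allocate an Elastic IP and create the NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId=PUBLIC_SUBNET_ID,
    AllocationId=eip["AllocationId"],
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# 2. Replace the private subnet's default route (previously the NAT instance)
#    with a route through the NAT gateway.
ec2.replace_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```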
The option that says: There is no Internet Gateway (IGW) attached to the VPC. Simply add an IGW and the issue will be resolved is incorrect. If there is no Internet Gateway attached to the VPC, any communication to the outside internet will fail, which is not the case here.
The option that says: There is no Virtual Private Gateway attached to the VPC that links up to the Customer Gateway of the database provider. Simply add the missing gateway and the issue will be resolved is incorrect. A Virtual Private Gateway is used for VPN connections. This will not resolve the issue for this scenario.
The option that says: The database servers are not in a Placement Group, which means that the inter-instance communications are not optimal. This is causing the timeout issue. Place all the database servers on either a Spread or a Cluster type Placement group to fix the problem is incorrect. Placement groups only influence how instances are placed relative to each other (packed close together or spread apart); they do not change the timeout behavior or improve the performance of the NAT instance.
References:
Check out this Amazon VPC Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 69: Skipped
The European Organization for Nuclear Research, also known as CERN, is a research organization that operates the largest particle accelerator in the world and generates terabytes of experimental data every day. A group of data scientists is planning to use an Elastic MapReduce cluster for their data analysis, which will only be run once. The cluster is designed to ingest 300 TB of data with a total of 200 EC2 instances and is expected to run for about 8 hours. The resulting data set must be stored temporarily until it is permanently stored in their AWS Redshift database.
Which of the following options is the best and most cost-effective solution to satisfy the above requirements?
Use Reserved EC2 instances for both the master and core nodes and use Spot EC2 instances for the task nodes.
Use a combination of On-Demand instance and Spot instance types for both the master and core nodes. Use Spot EC2 instances for the task nodes.
Use On-Demand EC2 instances for both the master and core nodes and use Spot EC2 instances for the task nodes.
(Correct)
Use Reserved EC2 instances for the master node; On-Demand instances for the core nodes; and use Spot EC2 instances for the task nodes.
Explanation
In this scenario, the scientists are doing a one-time processing of their data in 8 hours. Hence, a Reserved instance is not a suitable type to use as this project is not for the long term.
Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data using EC2 instances. When using Amazon EMR, you don’t need to worry about installing, upgrading, and maintaining Spark software (or any other tool from the Hadoop framework). You also don’t need to worry about installing and maintaining underlying hardware or operating systems. Instead, you can focus on your business applications and use Amazon EMR to remove the undifferentiated heavy lifting.
The central component of Amazon EMR is the cluster. A cluster is a collection of Amazon Elastic Compute Cloud (Amazon EC2) instances. Each instance in the cluster is called a node. Each node has a role within the cluster, referred to as the node type. Amazon EMR also installs different software components on each node type, giving each node a role in a distributed application like Apache Hadoop.
The node types in Amazon EMR are as follows:
Master node: A node that manages the cluster by running software components to coordinate the distribution of data and tasks among other nodes for processing. The master node tracks the status of tasks and monitors the health of the cluster. Every cluster has a master node, and it's possible to create a single-node cluster with only the master node.
Core node: A node with software components that run tasks and store data in the Hadoop Distributed File System (HDFS) on your cluster. Multi-node clusters have at least one core node.
Task node: A node with software components that only runs tasks and does not store data in HDFS. Task nodes are optional.
For optimizing costs and performance in choosing the instance types:
Master node: Unless your cluster is very short-lived and the runs are cost-driven, avoid running your Master node on a Spot Instance. We suggest this because a Spot interruption on the Master node terminates the entire cluster. As an alternative to On-Demand, you can set up the Master node on a Spot Block by setting the node's defined duration and failing over to On-Demand if the Spot Block capacity is unavailable.
Core nodes: Avoid using Spot Instances for Core nodes if the jobs on the cluster use HDFS. That prevents a situation where Spot interruptions cause data loss for data that was written to the HDFS volumes on the instances.
Task nodes: Use Spot Instances for your task nodes by selecting up to five instance types that match your hardware requirement. Amazon EMR fulfills the most suitable capacity by price and capacity availability.
Therefore, the correct answer is: Use On-Demand EC2 instances for both the master and core nodes and use Spot EC2 instances for the task nodes.
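To show how this maps to the EMR API, here is a rough boto3 sketch that keeps the master and core nodes On-Demand and runs the task nodes as Spot Instances; the release label, instance types, counts, and roles are placeholders chosen for illustration rather than values prescribed by the scenario.

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="one-time-analysis",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Master", "InstanceRole": "MASTER",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "Market": "ON_DEMAND", "InstanceType": "r5.2xlarge", "InstanceCount": 50},
            {"Name": "Task", "InstanceRole": "TASK",
             "Market": "SPOT", "InstanceType": "r5.2xlarge", "InstanceCount": 149},
        ],
        # Let the cluster terminate itself once the one-time job finishes.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```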
The option that says: Use a combination of On-Demand instance and Spot instance types for both the master and core nodes. Use Spot EC2 instances for the task nodes is incorrect. It is not recommended to use a Spot Instance for the master node. Also, when a large amount of data is shared on the cluster, you don't want to use Spot Instances for the core nodes because interruptions may cause data loss on HDFS or disrupt your processing.
The option that says: Use Reserved EC2 instances for both the master and core nodes and use Spot EC2 instances for the task nodes is incorrect. The scenario is not a long-term project so a Reserved instance is not a suitable type to use in this case.
The option that says: Use Reserved EC2 instances for the master node; On-Demand instances for the core nodes; and use Spot EC2 instances for the task nodes is incorrect. The scenario is not a long-term project so a Reserved instance is not a suitable type to use in this case.
References:
Check out this Amazon EMR Cheat Sheet:
Question 70: Skipped
A medical firm uses an image analysis application that extracts data from multiple images. The application analyzes batches of input images and, for each file, writes the result data to an output stream of files. The number of input files per day grows and peaks for a few hours each day. The application is hosted on an Amazon EC2 instance with a large EBS volume that holds the input data, but the results still take almost 20 hours per day to be processed.
Which of the following solutions can be implemented to reduce the processing time and improve the availability of the application?
Store I/O files in S3 instead and use SQS to facilitate a group of hosts working in parallel. Include the hosts in an auto scaling group that scales accordingly to the number of SNS notifications.
Store I/O files in S3 instead and use SQS to facilitate a group of hosts working in parallel. Include the hosts in an auto scaling group that scales accordingly to the length of your SQS queue.
(Correct)
Store I/O files in an EBS Provisioned IOPS volume, and use SNS to facilitate a group of hosts working in parallel. Include the hosts in an auto scaling group that scales accordingly to the number of SNS notifications.
Store I/O files in an EBS Provisioned IOPS volume, and use SNS to facilitate a group of hosts working in parallel. Include the hosts in an auto scaling group that scales accordingly to the length of your SQS queue.
Explanation
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS makes it simple and cost-effective to decouple and coordinate the components of a cloud application. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available.
Therefore, the correct answer is: Store I/O files in S3 instead and use SQS to facilitate a group of hosts working in parallel. Include the hosts in an auto scaling group that scales accordingly to the length of your SQS queue. Amazon S3 provides high availability and can store massive amounts of data. Auto Scaling of the EC2 instances reduces the overall processing time, and SQS helps distribute the tasks to the group of EC2 instances.
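As a hedged sketch of this scaling behavior, the worker Auto Scaling group can use a target tracking policy on the SQS queue depth. The group name, queue name, and target value below are hypothetical; in practice, AWS suggests tracking a computed backlog-per-instance metric rather than the raw queue length.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="image-workers-asg",          # hypothetical ASG name
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "image-input-queue"}],
            "Statistic": "Average",
        },
        # A target of ~10 visible messages is an assumption; tune it to how many
        # messages a single worker can comfortably process.
        "TargetValue": 10.0,
    },
)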
The option that says: Store I/O files in an EBS Provisioned IOPS volume, and use SNS to facilitate a group of hosts working in parallel. Include the hosts in an auto scaling group that scales accordingly to the number of SNS notifications is incorrect. An EBS volume cannot be shared as common storage across a group of hosts, and SNS is a notification service that does not queue tasks for the hosts to pull and scale on.
The option that says: Store I/O files in an EBS Provisioned IOPS volume, and use SNS to facilitate a group of hosts working in parallel. Include the hosts in an auto scaling group that scales accordingly to the length of your SQS queue is incorrect. An EBS volume is block storage attached to a single instance; it cannot serve as the shared input/output store for the parallel hosts.
The option that says: Store I/O files in S3 instead and use SQS to facilitate a group of hosts working in parallel. Include the hosts in an auto scaling group that scales accordingly to the number of SNS notifications is incorrect. SNS only delivers notifications; the number of notifications is not a reliable scaling metric, unlike the length of the SQS queue that actually holds the pending work.
References:
Check out these AWS Auto Scaling and Amazon SQS Cheat Sheets:
Question 71: Skipped
A company wants to migrate its on-premises application to the AWS cloud. Due to limited manpower, the company wants to utilize fully managed AWS services as much as possible. This way, there will be less maintenance work after the migration. The application processes large files containing sensitive information so the company has the following requirements:
- Data encryption at rest and in transit are both required on all files that will be processed by the application.
- The storage solution must be highly durable and available.
- The company must be able to use its own encryption key, which must be rotated periodically for improved security.
- Amazon Redshift Spectrum will be used to analyze the migrated data.
Which of the following should the Solutions Architect implement to achieve these requirements?
Create an Amazon S3 bucket to store all data. Enable server-side encryption with AWS KMS (SSE-KMS). Apply a bucket policy that enforces HTTPS only connections to the S3 bucket.
(Correct)
Provision an AWS Storage Gateway – File Gateway device in the on-premises data center. Enable encryption on the file share with AWS KMS. The data will be transferred securely to an Amazon S3 bucket with SSE-S3 encryption.
Store the data files on an Amazon DynamoDB table. Leverage on the default SSL connection settings of the DynamoDB table. Use AWS KMS to encrypt the table and enable automatic key rotation.
Store the data files in an Amazon EC2 instance with an encrypted EBS volume. Use AWS KMS to encrypt the EBS volume and enable automatic key rotation.
Explanation
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers scalability, data availability, security, and performance. Amazon S3 is designed for 99.999999999% (11 9's) of durability, and stores data for millions of applications for companies all around the world.
Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. You can set default encryption on a bucket so that all new objects are encrypted when they are stored in the bucket. The objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS).
SSE-S3 requires that Amazon S3 manage the data and the encryption keys.
SSE-C requires that you manage the encryption key.
SSE-KMS requires that AWS manage the data key but you manage the customer master key (CMK) in AWS KMS.
When you configure your bucket to use default encryption with SSE-KMS, you can also enable an S3 Bucket Key to decrease request traffic from Amazon S3 to AWS Key Management Service (AWS KMS) and reduce the cost of encryption. When you configure your bucket to use an S3 Bucket Key for SSE-KMS on new objects, AWS KMS generates a bucket-level key that is used to create a unique data key for objects in the bucket. This bucket key is used for a time-limited period within Amazon S3, reducing the need for Amazon S3 to make requests to AWS KMS to complete encryption operations.
By default, Amazon S3 allows both HTTP and HTTPS requests. To comply with the s3-bucket-ssl-requests-only rule (only accepting HTTPS connections), your bucket policy should explicitly deny access to HTTP requests. To determine HTTP or HTTPS requests in a bucket policy, use a condition that checks for the key "aws:SecureTransport". When this key is set to true, the request was sent through HTTPS. To comply with the s3-bucket-ssl-requests-only rule, create a bucket policy that explicitly denies access when the request meets the condition "aws:SecureTransport": "false". This policy explicitly denies access to HTTP requests.
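The two settings described above can be sketched with boto3 as follows; the bucket name and KMS key alias are hypothetical placeholders, and the policy statement implements the deny when aws:SecureTransport is false.

import json
import boto3

s3 = boto3.client("s3")
bucket = "example-sensitive-data-bucket"                 # hypothetical bucket name

# Default SSE-KMS encryption with an S3 Bucket Key to reduce KMS request costs
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-data-key",   # hypothetical key alias
            },
            "BucketKeyEnabled": True,
        }]
    },
)

# Bucket policy that denies any request not sent over HTTPS
deny_http_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(deny_http_policy))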
Therefore, the correct answer is: Create an Amazon S3 bucket to store all data. Enable server-side encryption with AWS KMS (SSE-KMS). Apply a bucket policy that enforces HTTPS only connections to the S3 bucket.
The option that says: Provision an AWS Storage Gateway – File Gateway device in the on-premises data center. Enable encryption on the file share with AWS KMS. The data will be transferred securely to an Amazon S3 bucket with SSE-S3 encryption is incorrect. Although this is possible, the encryption key on the S3 with SSE-S3 is managed by AWS and not the customer. This also entails more management overhead because you need to manage your file gateway. Less maintenance is needed if data is just sent directly to an encrypted S3 bucket.
The option that says: Store the data files on an Amazon DynamoDB table. Leverage on the default SSL connection settings of the DynamoDB table. Use AWS KMS to encrypt the table and enable automatic key rotation is incorrect. The application processes large files that may not fit in an Amazon DynamoDB table, where each item is limited to 400 KB.
The option that says: Store the data files in an Amazon EC2 instance with an encrypted EBS volume. Use AWS KMS to encrypt the EBS volume and enable automatic key rotation is incorrect because it entails more management overhead, since you need to provision and maintain your own EC2 instance. Amazon S3 is also more durable than a single Amazon EC2 instance with an EBS volume.
References:
Check out the Amazon S3 and AWS KMS Cheat Sheet:
Question 74: Skipped
A company is running a financial modeling application on the AWS cloud. The application tier runs on an Auto Scaling group of Amazon EC2 instances. A separate EC2 cluster with a fixed number of instances is hosting the 200 TB of financial data in a shared file system. The application reads and processes the data on the shared filesystem to generate an overall financial report, which takes about 72 hours to complete. This whole process only needs to run at the end of each month, but the storage tier instances are running continuously to retain all the data in the shared file system.
As the storage tier takes up a large percentage of operational costs, the management wants to reduce the cost of the storage tier while maintaining the high-performance access needed by the application during its 72-hour run.
Which of the following options should the solutions architect implement that will have the largest overall cost reduction?
For the data tier, create an Amazon S3 bucket and move the objects of the existing shared file system to it. Use S3 Intelligent-Tiering Storage class to save costs. Use lazy-loading on an Amazon FSx for Lustre filesystem to import the contents of the S3 bucket. Use this filesystem as shared storage for the application tier EC2 instances for the duration of the job and delete it once the job is completed.
(Correct)
For the data tier, create an Amazon S3 bucket and move the objects of the existing shared file system to it. Use S3 Glacier Instant Retrieval Storage class to save costs. Use lazy-loading on an Amazon FSx for Lustre filesystem to import the contents of the S3 bucket. Use this filesystem as shared storage for the application tier EC2 instances for the duration of the job and delete it once the job is completed.
For the data tier, create an Amazon EFS filesystem and move the objects of the existing shared file system to it. Since the data will only be used once a month, use EFS Standard–Infrequent Access (IA) class to save costs. Mount this filesystem as shared storage for the application tier EC2 instance for the duration of the job.
For the data tier, create a large EBS volume with Multi-Attach enabled. Move the objects of the existing shared file system to it. This will save cost since only 1 EBS volume needs to be mounted on all application tier EC2 instances for the duration of the job. The EBS volume will be retained once the job is completed.
Explanation
Amazon FSx for Lustre provides the Lustre file system, a large-scale, distributed parallel file system that powers the workloads of many of the world's largest supercomputers. It is popular among AWS customers for high-performance computing workloads, such as meteorology, life science, and engineering simulations. It is also used in media and entertainment, as well as the financial services industry. If your workloads require fast, POSIX-compliant file system access to your S3 buckets, you can use FSx for Lustre to link your S3 buckets to a file system and keep data synchronized between the file system and S3 in both directions.
Amazon FSx for Lustre offers a choice of solid-state drive (SSD) and hard disk drive (HDD) storage types that are optimized for different data processing requirements.
FSx for Lustre integrates with Amazon S3, making it easier for you to process cloud datasets using the Lustre high-performance file system. When linked to an Amazon S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files. Amazon FSx imports listings of all existing files in your S3 bucket at file system creation. Amazon FSx can also import listings of files added to the data repository after the file system is created. You can set the import preferences to match your workflow needs. The file system also makes it possible for you to write file system data back to S3. Data repository tasks simplify the transfer of data and metadata between your FSx for Lustre file system and its durable data repository on Amazon S3.
Therefore, the correct answer is: For the data tier, create an Amazon S3 bucket and move the objects of the existing shared file system to it. Use S3 Intelligent-Tiering Storage class to save costs. Use lazy-loading on an Amazon FSx for Lustre filesystem to import the contents of the S3 bucket. Use this filesystem as shared storage for the application tier EC2 instances for the duration of the job and delete it once the job is completed. The FSx for Lustre filesystem can temporarily load the data from S3 and share it among the application tier instances. This is cheaper as we can use S3 for long-term storage and FSx for Lustre for the temporary shared file system.
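A rough boto3 sketch of the temporary file system is shown below, under assumed values (subnet ID, S3 bucket, and storage capacity are placeholders). A scratch deployment type linked to the bucket lazy-loads objects as they are first read, and the file system can simply be deleted after the 72-hour run.

import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,                              # GiB; sized for the job, an assumption
    SubnetIds=["subnet-0123456789abcdef0"],            # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",                 # short-lived, deleted after the run
        "ImportPath": "s3://example-financial-data",   # hypothetical bucket to lazy-load from
        "ExportPath": "s3://example-financial-data/results",
    },
)
print(response["FileSystem"]["FileSystemId"])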
The option that says: For the data tier, create an Amazon EFS filesystem and move the objects of the existing shared file system to it. Since the data will only be used once a month, use EFS Standard–Infrequent Access (IA) class to save costs. Mount this filesystem as shared storage for the application tier EC2 instance for the duration of the job is incorrect. Amazon EFS storage costs significantly more than standard Amazon S3 storage, so it does not give the largest overall cost reduction.
The option that says: For the data tier, create an Amazon S3 bucket and move the objects of the existing shared file system to it. Use S3 Glacier Instant Retrieval Storage class to save costs. Use lazy-loading on an Amazon FSx for Lustre filesystem to import the contents of the S3 bucket. Use this filesystem as shared storage for the application tier EC2 instances for the duration of the job and delete it once the job is completed is incorrect. S3 Glacier is recommended for long-lived archive data accessed once a quarter. Also, S3 Glacier has additional charges for retrieving data from cold storage.
The option that says: For the data tier, create a large EBS volume with Multi-Attach enabled. Move the objects of the existing shared file system to it. This will save cost since only 1 EBS volume needs to be mounted on all application tier EC2 instances for the duration of the job. The EBS volume will be retained once the job is completed is incorrect. An EBS volume with Multi-Attach enabled can only be attached to up to 16 EC2 instances simultaneously. Also, Multi-Attach is only supported for Provisioned IOPS SSD volumes, which cost significantly more, and retaining the volume after the job keeps that expensive storage running all month.
References:
Check out these Amazon S3 and Amazon FSx for Lustre Cheat Sheets:
Question 75: Skipped
A data analytics company is running simulations on a high-performance computing (HPC) cluster in AWS. The compute nodes and storage are tightly coupled to achieve the best performance possible. The simulations running on the cluster produce thousands of large files stored on an Amazon EFS share that is shared across 200 Amazon EC2 instances. Several more simulation jobs need to be run on the cluster, so the number of nodes has been increased to 1,000 instances. However, the bigger cluster performed below the company's expectations. The Solutions Architect was tasked to implement a solution that will achieve maximum performance from the HPC cluster.
Which of the following options are the recommended actions to achieve this? (Select THREE.)
Improve the storage performance by using Amazon EBS Throughput Optimized volumes configured on RAID 0 mode. This allows for much higher IOPS compared to Amazon EFS.
Improve the performance by placing all the compute nodes as close to each other. Re-launch all the Amazon EC2 instances within a single Availability Zone in a cluster placement group.
(Correct)
Improve the network performance of each node by using Amazon EC2 instances with an Elastic Fabric Adapter (EFA) network interface.
(Correct)
Improve the storage performance by implementing Amazon FSx for Lustre instead of using Amazon EFS.
(Correct)
Improve the network performance of each node by attaching multiple network interfaces to the Amazon EC2 instances. This ensures that the network bandwidth is not a bottleneck.
Improve the performance and scalability of the HPC cluster by spreading the Amazon EC2 instances into multiple Availability Zones. Improve the storage performance by using Amazon EBS volumes configured on RAID 0 mode. This allows for much higher IOPS compared to Amazon EFS.
Explanation
In this scenario, the shared Amazon EFS storage and the spreading of the compute nodes on multiple AZs cause sluggishness on both the storage and network performance of the HPC cluster. For a tightly-coupled HPC cluster, you want the Amazon EC2 instances to be launched on a single Availability Zone.
When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. Depending on the type of workload, you can create a placement group using one of the following placement strategies:
Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
Partition – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the performance of inter-instance communications, which is critical to scaling these applications. EFA’s unique OS bypass networking mechanism provides a low-latency, low-jitter channel for inter-instance communications. This enables your tightly-coupled HPC or distributed machine learning applications to scale to thousands of cores, making your applications run faster.
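The placement group and EFA changes can be sketched with boto3 as follows; the AMI, subnet, and security group IDs are hypothetical, and the instance type only needs to be one that supports EFA.

import boto3

ec2 = boto3.client("ec2")

# Cluster placement group keeps the nodes close together in one Availability Zone
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # hypothetical HPC-ready AMI
    InstanceType="c5n.18xlarge",               # an EFA-capable instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster-pg"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",                # attach an Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)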
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the popular, high-performance Lustre file system. You use Lustre for workloads where speed matters, such as machine learning, high-performance computing (HPC), video processing, and financial modeling. Amazon FSx file systems provide up to multiple GB/s of throughput and hundreds of thousands of IOPS. The specific amount of throughput and IOPS that your workload can drive on your file system depends on the throughput capacity and storage capacity configuration of your file system, along with the nature of your workload, including the size of the active working set.
The open-source Lustre file system is designed for applications that require fast storage—where you want your storage to keep up with your computing capacity. Lustre was built to solve the problem of quickly and cheaply processing the world's ever-growing datasets. It's a widely used file system designed for the fastest computers in the world. It provides submillisecond latencies, up to hundreds of Gbps of throughput, and up to millions of IOPS.
Therefore, the correct answers are:
- Improve the performance by placing all the compute nodes as close to each other. Re-launch all the Amazon EC2 instances within a single Availability Zone in a cluster placement group.
- Improve the network performance of each node by using Amazon EC2 instances with an Elastic Fabric Adapter (EFA) network interface.
- Improve the storage performance by implementing Amazon FSx for Lustre instead of using Amazon EFS.
The option that says: Improve the network performance of each node by attaching multiple network interfaces to the Amazon EC2 instances. This ensures that the network bandwidth is not a bottleneck is incorrect. Although this may increase the network performance of the individual compute node, the Amazon EFS could still be a bottleneck because there are too many instances accessing the EFS shared storage simultaneously. A better solution is to use Amazon FSx for Lustre which supports multiple GB/s of throughput and hundreds of thousands of IOPS.
The option that says: Improve the performance and scalability of the HPC cluster by spreading the Amazon EC2 instances into multiple Availability Zones is incorrect. The HPC cluster relies on tight communication between the compute nodes and the storage solution. Spreading the EC2 instances into multiple Availability Zones will lower the performance of the cluster because the underlying physical servers will have to communicate with each other over long distances.
The option that says: Improve the storage performance by using Amazon EBS Throughput Optimized volumes configured on RAID 0 mode. This allows for much higher IOPS compared to Amazon EFS is incorrect. The cluster relies on shared storage across all the compute nodes. A Throughput Optimized EBS volume cannot be shared between EC2 instances. The Amazon EBS Multi-Attach feature is only applicable for Provisioned IOPS SSD volumes.
References:
Check out the Amazon FSx Cheat Sheet:
Question 3: Skipped
A big fast-food chain in Asia is planning to implement a location-based alert on their existing mobile app. If a user is in proximity to one of its restaurants, an alert will be shown on the user’s mobile phone. The notification needs to happen in less than a minute while the user is still in the vicinity. Currently, the mobile app has 10 million users in the Philippines, China, Korea, and other Asian countries.
Which of the following AWS architectures is the most suitable option for this scenario?
The mobile app will send device location to an SQS endpoint. Set up an API that utilizes an Application Load Balancer and an Auto Scaling group of EC2 instances, which will retrieve the relevant offers from DynamoDB. Use AWS Mobile Push to send offers to the mobile app.(Correct)
The mobile app will send the real-time location data using Amazon Kinesis. Set up an API which uses an Application Load Balancer and an Auto Scaling group of EC2 instances to retrieve the relevant offers from a DynamoDB table. Use Amazon Lambda and SES to push the notification to the mobile app.
Establish connectivity with mobile carriers using AWS Direct Connect. Set up an API on all EC2 instances to receive the location data from the mobile app via the carrier's GPS connection. Use RDS to store the data and fetch relevant offers from the restaurant. The EC2 instances will communicate with mobile carriers to send alerts to the mobile app.
Set up an API that uses an Application Load Balancer and an Auto Scaling group of EC2 instances. The mobile app will send the user's location data to the API web service. Use DynamoDB to store and retrieve relevant offers on the nearest restaurant. Configure the EC2 instances to push alerts to the mobile app.
Explanation
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.
With the Mobile Push feature of Amazon SNS, you have the ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts.
You send push notification messages to both mobile devices and desktops using one of the following supported push notification services:
- Amazon Device Messaging (ADM)
- Apple Push Notification Service (APNS) for both iOS and Mac OS X
- Baidu Cloud Push (Baidu)
- Google Cloud Messaging for Android (GCM)
- Microsoft Push Notification Service for Windows Phone (MPNS)
- Windows Push Notification Services (WNS)
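As an illustrative sketch (the platform application ARN and device token are hypothetical), registering a device as an SNS endpoint and pushing an offer to it could look like this:

import boto3

sns = boto3.client("sns")

# Register the device token reported by the mobile app against the platform application
endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn="arn:aws:sns:ap-southeast-1:111122223333:app/GCM/food-app",  # hypothetical
    Token="device-registration-token-from-the-app",                                     # hypothetical
)

# Push the offer directly to that device
sns.publish(
    TargetArn=endpoint["EndpointArn"],
    Message="A nearby branch has an offer for you!",
)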
Therefore, the correct answer is: The mobile app will send device location to an SQS endpoint. Set up an API that utilizes an Application Load Balancer and an Auto Scaling group of EC2 instances, which will retrieve the relevant offers from DynamoDB. Use AWS Mobile Push to send offers to the mobile app.
The option that says: Set up an API that uses an Application Load Balancer and an Auto Scaling group of EC2 instances. The mobile app will send the user's location data to the API web service. Use DynamoDB to store and retrieve relevant offers on the nearest restaurant. Configure the EC2 instances to push alerts to the mobile app is incorrect. Using EC2 instances to push alerts to the mobile app is not an appropriate solution. You have to use the AWS Mobile Push feature of SNS.
The option that says: Establish connectivity with mobile carriers using AWS Direct Connect. Set up an API on all EC2 instances to receive the location data from the mobile app via the carrier's GPS connection. Use RDS to store the data and fetch relevant offers from the restaurant. The EC2 instances will communicate with mobile carriers to send alerts to the mobile app is incorrect. AWS Direct Connect is primarily used to establish a dedicated network connection from your premises to AWS.
The option that says: The mobile app will send the real-time location data using Amazon Kinesis. Set up an API which uses an Application Load Balancer and an Auto Scaling group of EC2 instances to retrieve the relevant offers from a DynamoDB table. Use Amazon Lambda and SES to push the notification to the mobile app is incorrect. You can't use SES to send push notifications to mobile phones. You have to use SNS instead.
References:
Check out this Amazon SNS Cheat Sheet:
Question 4: Skipped
A stock trading company is running its application on the AWS cloud. The mission-critical database is hosted on an Amazon RDS for MySQL instance deployed in a Multi-AZ configuration. An AWS Backup rule is in place to take automated snapshots hourly. The operations team recently performed an RDS database failover test and found that it caused an outage of approximately 40 seconds.
The management has asked the solutions architect to implement a solution that will reduce the outage to less than 20 seconds. Most connections should stay alive during failovers except for the ones that are in the middle of a transaction or SQL statement. New database connections should still be accepted, and the incoming write requests should be queued until the failover completes.
Which of the following options should the solutions architect implement to meet the company's requirements? (Choose THREE.)
Ensure that Multi-AZ is enabled on Amazon RDS for MySQL and create one or more read replicas to ensure quick failover.
Enable the Amazon RDS Optimized Reads feature and launch an Amazon ElastiCache for Redis cluster in front of the database layer to temporarily serve database queries in case of RDS failure.
Enable the Amazon RDS Optimized Writes feature and create an Amazon ElastiCache for Memcached cluster in front of the database layer to cache any frequently requested queries.
Migrate the Amazon RDS for MySQL cluster to an Amazon Aurora for MySQL cluster.
(Correct)
Ensure that Multi-AZ is enabled on Amazon Aurora and create one or more Aurora Replicas.
(Correct)
Set up an Amazon RDS Proxy in front of the database layer to automatically route traffic to healthy RDS instances.
(Correct)
Explanation
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more. Amazon RDS Proxy sits between your application and your relational database to efficiently manage connections to the database and improve the scalability of the application. Amazon RDS Proxy can be enabled for most applications with no code changes.
Your Amazon RDS Proxy instance maintains a pool of established connections to your RDS database instances, reducing the stress on database compute and memory resources that typically occurs when new connections are established. RDS Proxy also shares infrequently used database connections so that fewer connections access the RDS database.
With RDS Proxy, you can build applications that can transparently tolerate database failures without needing to write complex failure-handling code. The proxy automatically routes traffic to a new database instance while preserving application connections. It also bypasses Domain Name System (DNS) caches to reduce failover times by up to 66% for Aurora Multi-AZ databases.
Connecting through a proxy makes your application more resilient to database failovers. When the original DB instance becomes unavailable, RDS Proxy connects to the standby database without dropping idle application connections. Doing so helps to speed up and simplify the failover process. The result is faster failover that's less disruptive to your application than a typical reboot or database problem.
Without RDS Proxy, a failover involves a brief outage. During the outage, you can't perform write operations on that database. Any existing database connections are disrupted, and your application must reopen them. The database becomes available for new connections and write operations when a read-only DB instance is promoted in place of one that's unavailable.
For applications that maintain their own connection pool, going through RDS Proxy means that most connections stay alive during failovers or other disruptions. Only connections that are in the middle of a transaction or SQL statement are canceled. RDS Proxy immediately accepts new connections. When the database writer is unavailable, RDS Proxy queues up incoming requests.
For applications that don't maintain their own connection pools, RDS Proxy offers faster connection rates and more open connections. It offloads the expensive overhead of frequent reconnects from the database. It does so by reusing database connections maintained in the RDS Proxy connection pool. This approach is particularly important for TLS connections, where setup costs are significant.
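For illustration, the proxy setup could be sketched with boto3 as shown below; the proxy name, secret ARN, IAM role, subnet IDs, and cluster identifier are all placeholders.

import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="trading-db-proxy",                       # hypothetical proxy name
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",  # hypothetical
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",   # hypothetical role
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Register the Aurora cluster as the proxy's target
rds.register_db_proxy_targets(
    DBProxyName="trading-db-proxy",
    DBClusterIdentifiers=["trading-aurora-cluster"],      # hypothetical cluster identifier
)

The application then connects to the proxy endpoint instead of the cluster endpoint, which is what allows most connections to survive a failover.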
Amazon Aurora is a modern relational database service that offers performance and high availability at scale and is compatible with the fully open-source editions of MySQL and PostgreSQL. Unlike other databases, after a database crash, Amazon Aurora does not need to replay the redo log from the last database checkpoint (typically five minutes) and confirm that all changes have been applied before making the database available for operations. Amazon Aurora moves the buffer cache out of the database process and makes it available immediately at restart time. This prevents you from having to throttle access until the cache is repopulated to avoid brownouts. This reduces database restart times to less than 60 seconds in most cases.
With properly configured Multi-AZ deployments and Aurora Replicas, Amazon Aurora can quickly recover from failures.
Thus, enabling RDS Proxy can further decrease the downtime of a failover to around 20 seconds (66% reduction from 60 seconds).
Therefore, the correct answers are:
- Set up an Amazon RDS Proxy in front of the database layer to automatically route traffic to healthy RDS instances.
- Migrate the Amazon RDS for MySQL cluster to an Amazon Aurora for MySQL cluster.
- Ensure that Multi-AZ is enabled on Amazon Aurora and create one or more Aurora Replicas.
The option that says: Enable the Amazon RDS Optimized Writes feature and create an Amazon ElastiCache for Memcached cluster in front of the database layer to cache any frequently requested queries is incorrect. Memcached is used for database caching only and not as a database proxy. Although it can respond to frequently queried data, it can't queue any write request to the database while the failover process is in place. The Amazon RDS Optimized Writes feature is not useful in this scenario as it only provides you with up to 2x improvement in write transaction throughput on RDS for MySQL by writing only once while protecting you from data loss.
The option that says: Enable the Amazon RDS Optimized Reads feature and launch an Amazon ElastiCache for Redis cluster in front of the database layer to temporarily serve database queries in case of RDS failure is incorrect. Although Redis can store data to some extent similar to a database, it is still no substitute for a real database storage system. Moreover, the Amazon RDS Optimized Reads feature is commonly used to achieve faster query processing by placing temporary tables generated by MySQL on NVMe-based SSD block storage that is physically connected to the host server. This feature cannot make most DB connections stay alive during failovers.
References:
Check out these Amazon Aurora and Amazon RDS Cheat Sheets:
Check out this Amazon Aurora vs Amazon RDS comparison:
Question 5: Skipped
A software development company based in New Jersey has tasked the solutions architect to design the network architecture for their new enterprise resource planning (ERP) system in AWS. The new system should allow access to business managers and analysts over the Internet, whether they are in their hotel rooms, cafes, or elsewhere. However, the ERP system should not be publicly accessible by anyone over the Internet but only by authorized personnel.
Which of the following network design meets the above requirements while minimizing deployment and operational costs?
Deploy the ERP system behind an Elastic Load Balancer with an SSL certificate to allow HTTPS connections.
Establish an IPsec VPN connection and provide the users with the configuration details. Create a public subnet in your VPC, and place your application servers in it.
Establish an SSL VPN solution in a public subnet of your VPC. Install and configure SSL VPN client software on all the workstations/laptops of the users who need access to the ERP system. Create a private subnet in your VPC and place your application servers behind it.(Correct)
Establish an AWS Direct Connect connection and create a private interface to your VPC. Create a public subnet and place your app servers in it.
Explanation
Secure Sockets Layer (SSL) VPN is an emerging technology that provides remote-access VPN capability, using the SSL function that is already built into a modern web browser. SSL VPN allows users from any Internet-enabled location to launch a web browser to establish remote-access VPN connections, thus promising productivity enhancements and improved availability, as well as further IT cost reduction for VPN client software and support.
Keep in mind that the SSL VPN software should be installed on EC2 instances deployed in the public subnet. Users connect to this public endpoint over the Internet in order to reach the business applications, which are deployed in the private subnet of your VPC.
Therefore, the correct answer is: Establish an SSL VPN solution in a public subnet of your VPC. Install and configure SSL VPN client software on all the workstations/laptops of the users who need access to the ERP system. Create a private subnet in your VPC and place your application servers behind it. Configuring the SSL VPN solution is cost-effective and allows access only for business travelers and remote employees. And since the application servers are in the private subnet, the application is not accessible via the Internet.
The option that says: Establish an AWS Direct Connect connection and create a private interface to your VPC. Create a public subnet and place your app servers in it is incorrect. AWS Direct Connect is not a cost-effective solution compared with a VPN solution.
The option that says: Deploy the ERP system behind an Elastic Load Balancer with an SSL certificate to allow HTTPS connections is incorrect. It does not mention how the application would be accessible only for business travelers and remote employees, and not to the public.
The option that says: Establish an IPsec VPN connection and provide the users with the configuration details. Create a public subnet in your VPC, and place your application servers in it is incorrect. If the application servers are put in the public subnet, they would be publicly accessible via the Internet.
References:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 6: Skipped
A privately funded aerospace and sub-orbital spaceflight services company hosts its rapidly evolving applications in AWS. For its deployment process, the company uses CloudFormation templates that are regularly updated to map the latest AMI IDs for its Amazon EC2 instance clusters. Executing this on a regular basis takes a lot of time, which is why the solutions architect has been instructed to automate this process.
Which of the following options is the most suitable solution that can satisfy the above requirements?
Configure your Systems Manager State Manager to store the latest AMI IDs and integrate them with your CloudFormation template. Call the update-stack API in CloudFormation whenever you decide to update the EC2 instances in your CloudFormation template.
Use a combination of AWS Service Catalog with AWS Config to automatically fetch the latest AMI and use it for succeeding deployments.
Use CloudFormation with AWS Service Catalog to fetch the latest AMI IDs and automatically use them for succeeding deployments.
Use CloudFormation with Systems Manager Parameter Store to retrieve the latest AMI IDs for your template. Whenever you decide to update the EC2 instances, call the update-stack API in CloudFormation in your CloudFormation template.
(Correct)
Explanation
You can use the existing Parameters section of your CloudFormation template to define Systems Manager parameters, along with other parameters. Systems Manager parameters are a unique type that is different from existing parameters because they refer to actual values in the Parameter Store. The value for this type of parameter would be the Systems Manager (SSM) parameter key instead of a string or other value. CloudFormation will fetch values stored against these keys in Systems Manager in your account and use them for the current stack operation.
If the parameter being referenced in the template does not exist in Systems Manager, a synchronous validation error is thrown. Also, if you have defined any parameter value validations (AllowedValues, AllowedPattern, etc.) for Systems Manager parameters, they will be performed against SSM keys which are given as input values for template parameters, not actual values stored in Systems Manager.
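A minimal sketch of this pattern is shown below. The stack name is hypothetical, and the template uses the public Amazon Linux 2 SSM parameter as an example; a build pipeline could instead publish its own parameter key that always points to the latest application AMI.

import boto3

# CloudFormation resolves the SSM parameter to an AMI ID on every stack operation
TEMPLATE = """
Parameters:
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref LatestAmiId
      InstanceType: t3.micro
"""

cfn = boto3.client("cloudformation")
cfn.update_stack(
    StackName="app-cluster-stack",          # hypothetical existing stack
    TemplateBody=TEMPLATE,
)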
Hence, the correct answer is the option that says: Use CloudFormation with Systems Manager Parameter Store to retrieve the latest AMI IDs for your template. Whenever you decide to update the EC2 instances, call the update-stack API in CloudFormation in your CloudFormation template.
The option that says: Configure your Systems Manager State Manager to store the latest AMI IDs and integrate them with your CloudFormation template. Call the update-stack API in CloudFormation whenever you decide to update the EC2 instances in your CloudFormation template is incorrect because the Systems Manager State Manager service simply automates the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define. This can't be used as a parameter store that refers to the latest AMI of your application.
The following options are incorrect because using AWS Service Catalog is not suitable in this scenario. This service just allows organizations to create and manage catalogs of IT services that are approved for use on AWS:
- Use a combination of AWS Service Catalog with AWS Config to automatically fetch the latest AMI and use it for succeeding deployments.
- Use CloudFormation with AWS Service Catalog to fetch the latest AMI IDs and automatically use them for succeeding deployments.
References:
Check out this AWS Systems Manager Cheat Sheet:
Question 8: Skipped
A leading electronics company is getting ready to do a major public announcement of its latest smartphone. Their official website uses an Application Load Balancer in front of an Auto Scaling group of On-Demand EC2 instances, which are deployed across multiple Availability Zones with a Multi-AZ RDS MySQL database. In preparation for their new product launch, the solutions architect checked the performance of the company website and found that the database takes a lot of time to retrieve the data when there are over 100,000 simultaneous requests on the server. The static content such as the images and videos are promptly loaded as expected, but not the customer information that is fetched from the database.
Which of the following options could be done to solve this issue in a cost-effective way? (Select TWO.)
Implement a caching system using ElastiCache in-memory cache on each Availability Zone.(Correct)
Launch a CloudFront web distribution to solve the latency issue.
Configure the database tier to use sharding, which will distribute the incoming load to multiple RDS MySQL instances.
Upgrade the RDS MySQL database instance size and increase the provisioned IOPS for faster processing.
Add Read Replicas in RDS for each Availability Zone.(Correct)
Migrate the database to use Amazon Keyspaces which natively supports sharding to distribute database queries to multiple nodes.
Explanation
In-memory data caching can be one of the most effective strategies to improve your overall application performance and to reduce your database costs. Caching can be applied to any type of database including relational databases such as Amazon RDS or NoSQL databases such as Amazon DynamoDB, MongoDB, and Apache Cassandra. The best part of caching is that it’s minimally invasive to implement and by doing so, your application performance regarding both scale and speed is dramatically improved.
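As a hedged illustration of the cache-aside pattern, assuming an ElastiCache for Redis endpoint, hypothetical connection details, and the redis and PyMySQL client libraries, a read path could first check the cache and fall back to the database on a miss:

import json
import redis
import pymysql

cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)
db = pymysql.connect(host="mydb.cluster-abc.us-east-1.rds.amazonaws.com",
                     user="app", password="secret", database="shop")

def get_customer(customer_id: int) -> dict:
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached:                                            # cache hit: skip the database
        return json.loads(cached)
    with db.cursor(pymysql.cursors.DictCursor) as cur:    # cache miss: read from RDS
        cur.execute("SELECT * FROM customers WHERE id = %s", (customer_id,))
        row = cur.fetchone() or {}
    cache.setex(key, 300, json.dumps(row, default=str))   # populate the cache with a TTL
    return row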
In this scenario, the issue lies in the database tier of the architecture.
The option that says: Implement a caching system using ElastiCache in-memory cache on each Availability Zone is correct. Adding a cache will significantly reduce the retrieval time for frequent database queries. This also reduces the load significantly on the database tier.
The option that says: Add Read Replicas in RDS for each Availability Zone is correct. This improves the database retrieval time as there will be more nodes that can serve the database request. You won't need to increase the master instance size to accommodate all the read loads.
The following options are possible solutions, but they are not as cost-effective as the options mentioned above:
- Configure the database tier to use sharding, which will distribute the incoming load to multiple RDS MySQL instances.
- Upgrade the RDS MySQL database instance size and increase the provisioned IOPS for faster processing.
It is quite expensive to implement load distribution on the RDS instances because you will need to use multiple sets of standby instances and read replicas. Since it will use more instances, this solution will cost more as opposed to using just Read Replicas.
The option that says: Migrate the database to use Amazon Keyspaces which natively supports sharding to distribute database queries to multiple nodes is incorrect. Amazon Keyspaces is designed to be compatible with Apache Cassandra databases. Since the question uses MySQL compatible database, using Amazon Keyspaces is not recommended here.
The option that says: Launch a CloudFront web distribution to solve the latency issue is incorrect. This option tries to improve the performance of the front-end tier. To solve the slow data retrieval times from RDS, it is best to implement caching and adding Read Replicas which are cheaper and simpler to do than upgrading the RDS database instance.
References:
Check out this Amazon RDS Cheat Sheet:
Question 9: Skipped
A company has data centers in Europe, Asia, and North America regions. Each data center has a 10Gbps Direct Connect connection to AWS, and the company uses a custom VPN to encrypt traffic between its data center network and AWS. In total, the data centers have about five hundred physical servers that host a mix of Windows and Linux-based applications and database services. The company plans to decommission these data centers and migrate its entire infrastructure to the AWS cloud instead. Separate accounts for staging and launching VMs must be implemented, as well as the ability to do AWS Region to Region VPC stack creation.
Which of the following options is the recommended solution for this migration?
Leverage the Application Discovery Service from AWS. Install the Application Discovery Service agents on each physical server and visualize the infrastructure on the AWS Migration Hub console. Trigger the replication to copy your servers to AWS. After the replication is completed, start the cutover to AWS.
Leverage AWS Application Migration Service (AWS MGN) for the migration. Install the AWS Replication agent on each physical machine to start the replication to the AWS Cloud. Once syncing is completed, launch test instances and initiate cutover to the AWS Cloud.
(Correct)
Leverage AWS Storage Gateway for the migration. Install the AWS Storage Gateway software appliance on their on-premises servers and use it to transfer their data to Amazon S3. Once the data is in Amazon S3, use Amazon EC2 to create virtual machines that replicate their on-premises infrastructure.
Leverage AWS Outposts service for the migration of physical servers to AWS. Install the AWS Outposts server agent on the data center to incrementally replicate the servers into Amazon Machine Images (AMIs). Deploy the AMIs into Amazon EC2 instances to initiate cutover.
Explanation
AWS Application Migration Service (MGN) is a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of migrating applications to AWS. It enables companies to lift and shift a large number of physical, virtual, or cloud servers without compatibility issues, performance disruption, or long cutover windows. MGN replicates source servers into your AWS account.
When you’re ready, it automatically converts and launches your servers on AWS so you can quickly benefit from the cost savings, productivity, resilience, and agility of the Cloud. Once your applications are running on AWS, you can leverage AWS services and capabilities to quickly and easily re-platform or refactor those applications – which makes lift-and-shift a fast route to modernization.
AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automatically converting your source servers from physical, virtual, or cloud infrastructure to run natively on AWS.
Therefore, the correct answer is: Leverage AWS Application Migration Service (AWS MGN) for the migration. Install the AWS Replication agent on each physical machine to start the replication to the AWS Cloud. Once syncing is completed, launch test instances and initiate cutover to the AWS Cloud. AWS Application Migration Service (AWS MGN) allows you to quickly lift-and-shift physical, virtual, or cloud servers to AWS. When you’re ready to migrate, it automatically converts and launches your servers on AWS.
The option that says: Leverage AWS Storage Gateway for the migration. Install the AWS Storage Gateway software appliance on their on-premises servers and use it to transfer their data to Amazon S3. Once the data is in Amazon S3, use Amazon EC2 to create virtual machines that replicate their on-premises infrastructure is incorrect because AWS Storage Gateway is primarily used for connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and AWS’s storage infrastructure. It is not designed for migrating live applications and databases.
The option that says: Leverage AWS Outposts service for the migration of physical servers to AWS. Install the AWS Outposts server agent on the data center to incrementally replicate the servers into Amazon Machine Images (AMIs). Deploy the AMIs into Amazon EC2 instances to initiate cutover is incorrect. AWS Outposts is a service that extends AWS infrastructure, services, APIs, and tools to customer premises. It allows customers to build and run applications on-premises using the same programming interfaces as in AWS Regions. It is not a migration service.
The option that says: Leverage the Application Discovery Service from AWS. Install the Application Discovery Service agents on each physical server and visualize the infrastructure on the AWS Migration Hub console. Trigger the replication to copy your servers to AWS. After the replication is completed, start the cutover to AWS is incorrect. AWS Application Discovery Service only gathers data for migration. It helps customers plan migration projects by gathering information about their on-premises data centers. It does not perform the actual migration of physical servers to AWS.
References:
Check out this AWS Server Migration Service Cheat Sheet:
Question 10: Skipped
A retail company runs its customer support call system in its in-house data center. The Solutions Architect was tasked with migrating the call system to AWS and leveraging managed services to reduce management overhead. The solution must handle the current tasks, such as receiving calls and creating contact flows, and must scale to handle more calls as the customer base grows. The company also wants to add deep learning capabilities to the call system to reduce the need to speak to an agent. It must be able to recognize the intent of the caller based on certain keywords, handle basic tasks, and provide information to the call center agents.
Which combination of actions should the Solutions Architect implement to meet the company's requirements? (Select TWO.)
Send incoming customer calls to an Amazon Kinesis stream and process their voice through Amazon Comprehend to determine the customer’s intent.
Build a conversational interface on Amazon Alexa for Business to have AI-based answers to customer queries, thereby reducing the need to speak to an agent.
Use an Amazon Lex bot to recognize callers' intent.
(Correct)
Use the Amazon Connect service to create an omnichannel cloud-based contact center for the agents.
(Correct)
Leverage Amazon Rekognition to identify the caller and process the voice through Amazon Polly to determine the intent based on the customer’s voice.
Explanation
Amazon Connect is an easy-to-use omnichannel cloud contact center that helps companies provide superior customer service across voice, chat, and tasks at a lower cost than traditional contact center systems. You can set up a contact center in a few steps, add agents who are located anywhere, and start engaging with your customers.
You can create personalized experiences for your customers using omnichannel communications. For example, you can dynamically offer chat and voice contact based on such factors as customer preference and estimated wait times. Agents, meanwhile, conveniently handle all customers from just one interface. For example, they can chat with customers and create or respond to tasks as they are routed to them.
To help provide a better contact center, you can use Amazon Connect to integrate with several AWS services to provide Machine Learning (ML) and Artificial Intelligence (AI) capabilities.
Amazon Connect uses the following services for ML/AI:
Amazon Lex—Lets you create a chatbot to use as an Interactive Voice Response (IVR).
Amazon Polly—Provides text-to-speech in all contact flows.
Amazon Transcribe—Grabs conversation recordings from Amazon S3 and transcribes them to text so you can review them.
Amazon Comprehend—Takes the transcription of recordings and applies speech analytics machine learning to the call to identify sentiment, keywords, adherence to company policies, and more.
Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text and natural language understanding (NLU) to recognize the intent of the text to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.
By using an Amazon Lex chatbot in your call center, callers can perform tasks such as changing a password, requesting a balance on an account, or scheduling an appointment without needing to speak to an agent. These chatbots use automatic speech recognition and natural language understanding to ascertain a caller’s intent, maintain context and fluidly manage the conversation. Amazon Lex uses AWS Lambda functions to query your business applications, provide information back to callers, and make updates as requested.
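For illustration, a caller utterance can be passed to an Amazon Lex V2 bot through the runtime API as sketched below; the bot ID, alias ID, and session ID are hypothetical placeholders.

import boto3

lex = boto3.client("lexv2-runtime")

response = lex.recognize_text(
    botId="ABCDEFGHIJ",              # hypothetical bot ID
    botAliasId="TSTALIASID",         # hypothetical alias ID
    localeId="en_US",
    sessionId="caller-12345",        # hypothetical session identifier
    text="I want to check the status of my order",
)
intent = response["sessionState"]["intent"]["name"]
print(f"Recognized intent: {intent}")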
The option that says: Use the Amazon Connect service to create an omnichannel cloud-based contact center for the agents is correct. Amazon Connect is the AWS service you will want to use to build a cloud-based call center system.
The option that says: Use an Amazon Lex bot to recognize callers' intent is correct. Amazon Lex provides the deep learning capabilities required to recognize the intent of a caller based on certain keywords and handle basic tasks autonomously. It integrates well with Amazon Connect, allowing the creation of conversational interfaces that can interact with users effectively.
The option that says: Leverage Amazon Rekognition to identify the caller and process the voice through Amazon Polly to determine the intent based on the customer’s voice is incorrect. Amazon Rekognition is used to identify persons on photos or videos, not on voice calls. Amazon Polly is used for text-to-speech services. You should use Amazon Lex for this scenario.
The option that says: Build a conversational interface on Amazon Alexa for Business to have AI-based answers to customer queries, thereby reducing the need to speak to an agent is incorrect. Alexa for Business gives you the tools you need to manage Alexa devices, enroll your users, and assign skills at scale for your organization. For conversational interfaces such as chatbots, you can use Amazon Lex.
The option that says: Send incoming customer calls to an Amazon Kinesis stream and process their voice through Amazon Comprehend to determine the customer’s intent is incorrect. Amazon Comprehend analyzes text, not audio; it cannot process speech directly. The scenario requires understanding the intent of the caller from speech, so Amazon Lex is better suited for this.
References:
Check out the Amazon Lex Cheat Sheet:
Question 11: Skipped
A legal consulting firm is running a WordPress website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL database instance. The website is designed to use an eventual consistency model and performs a high number of read and write operations. A growing number of people are reporting that the website is slow, and after checking, the root cause is slow read processing in the database tier. The current DB instances are already optimized for the firm's operational budget, with considerations for cost-effectiveness and resource utilization.
Which of the following options could solve this issue? (Select THREE.)
Upgrade the RDS MySQL instance to use provisioned IOPS.
Integrate Amazon CloudFront to the website to deliver the static media assets to the viewers faster. Consider using AWS Compute Optimizer to rightsize your fleet of Amazon EC2 instances.
Implement sharding to distribute the incoming load to multiple RDS MySQL instances.(Correct)
Deploy an Amazon ElastiCache Cluster with nodes running in each Availability Zone.
(Correct)
Add an RDS MySQL Read Replica in each Availability Zone.(Correct)
Upgrade the instance type of the RDS MySQL database instance to a larger type.
Explanation
To improve the read performance of the application, you can use RDS Read Replicas and ElastiCache.
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores instead of relying entirely on slower disk-based databases.
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
Database sharding is the process of storing a large database across multiple machines. A single machine, or database server, can store and process only a limited amount of data. Database sharding overcomes this limitation by splitting data into smaller chunks, called shards, and storing them across several database servers. All database servers usually have the same underlying technologies, and they work together to store and process large volumes of data.
The option that says: Add an RDS MySQL Read Replica in each Availability Zone. This improves the database retrieval time as there will be more nodes that can serve the database request is correct. You won't need to increase the master instance size to accommodate all the read loads.
The option that says: Deploy an Amazon ElastiCache Cluster with nodes running in each Availability Zone is correct. Adding a cache will significantly reduce the retrieval time for frequent database queries. This also reduces the load significantly on the database tier.
The option that says: Implement sharding to distribute the incoming load to multiple RDS MySQL instances is correct. You can use Amazon RDS as the building block of a sharded architecture. Sharding allows you to split your current instances into smaller ones that can operate more efficiently. It also provides horizontal scalability, which can be more cost-effective in the long term compared to simply upgrading to a larger DB instance.
The option that says: Upgrade the RDS MySQL instance to use provisioned IOPS is incorrect. This provides an increase in IOPS for the database tier; however, it is also very expensive to implement since Provisioned IOPS costs significantly more.
The option that says: Integrate Amazon CloudFront to the website to deliver the static media assets to the viewers faster. Consider using AWS Compute Optimizer to rightsize your fleet of Amazon EC2 instances is incorrect. Amazon CloudFront simply caches content at the edge. It can be a complementary solution for overall performance improvement but would not directly address the database layer where the bottleneck is occurring.
The option that says: Upgrade the instance type of the RDS MySQL database instance to a larger type is incorrect. While this may improve the overall performance of the database, it's costly and may exceed the operational budget. This approach is an example of vertical scaling, which has limits and may not be as sustainable as horizontal scaling solutions in the long run.
References:
Check out these Amazon ElastiCache and Amazon RDS Cheat Sheets:
Question 15: Skipped
A startup is developing a health-related mobile app for both iOS and Android devices. The co-founder developed a sleep tracking app that collects the user's biometric data and then stores it in an Amazon DynamoDB table, which is configured with provisioned throughput capacity. Every morning at nine, a scheduled task scans the DynamoDB table to extract and aggregate last night’s data for each user and stores the results in an Amazon S3 bucket. When the new data is available, the users are then notified via Amazon SNS mobile push notifications. Due to budget constraints, the management wants to optimize the current architecture of the backend system to lower costs and increase the overall revenue.
Which of the following options can the solutions architect implement to further lower the cost in AWS? (Select TWO.)
Launch a Redshift cluster to replace Amazon DynamoDB. Switch from a Standard S3 bucket to One Zone-Infrequent Access storage class.
Use a RDS instance configured with Multi-AZ deployments and Read Replicas as a replacement to your DynamoDB.
Set up a scheduled job to drop the DynamoDB table for the previous day that contains the biometric data after it is successfully stored in the S3 bucket. Create another DynamoDB table for the day and perform the deletion and creation process every day.(Correct)
Use ElastiCache to cache reads and writes from the DynamoDB table.
Avail a reserved capacity for provisioned throughput for DynamoDB.(Correct)
Explanation
You can purchase reserved capacity in advance to lower the costs of your DynamoDB workload. With reserved capacity, you pay a one-time upfront fee and commit to a minimum usage level over a period of time. By reserving your read and write capacity units ahead of time, you realize significant cost savings compared to standard provisioned throughput pricing. In addition, you can also drop the DynamoDB table which contains the biometric data right after it is successfully stored in S3.
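A minimal boto3 sketch of the table-rotation idea is shown below; the table name pattern, key schema, and capacity values are assumptions for illustration, and the reserved capacity purchase itself is made separately (for example, through the DynamoDB console):

import boto3
from datetime import date, timedelta

dynamodb = boto3.client("dynamodb")

today = date.today()
yesterday = today - timedelta(days=1)

# Create today's table (the naming scheme and key schema are assumptions).
dynamodb.create_table(
    TableName=f"biometrics-{today.isoformat()}",
    AttributeDefinitions=[{"AttributeName": "UserId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "UserId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)

# After the morning aggregation job confirms the data is safely in S3,
# drop yesterday's table so you stop paying for its storage and throughput.
dynamodb.delete_table(TableName=f"biometrics-{yesterday.isoformat()}")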
The option that says: Avail a reserved capacity for provisioned throughput for DynamoDB is correct. If you can predict your need for Amazon DynamoDB read-and-write throughput, reserved capacity offers significant savings over the normal price of DynamoDB provisioned throughput capacity.
The option that says: Set up a scheduled job to drop the DynamoDB table for the previous day that contains the biometric data after it is successfully stored in the S3 bucket. Create another DynamoDB table for the day and perform the deletion and creation process every day is correct. This saves costs by not storing too much data in DynamoDB tables.
The option that says: Use a RDS instance configured with Multi-AZ deployments and Read Replicas as a replacement to your DynamoDB is incorrect because using an RDS database with Multi-AZ deployments and Read Replicas will actually cost more than the current architecture. The scenario only asks to lower the cost, and there is no requirement to move away from DynamoDB to a relational database.
The option that says: Launch a Redshift cluster to replace Amazon DynamoDB. Switch from a Standard S3 bucket to One Zone-Infrequent Access storage class is incorrect because Redshift is primarily used for online analytical processing (OLAP) applications and not as a NoSQL database. This change also entails a significant amount of time and resources to execute.
The option that says: Use ElastiCache to cache reads and writes from the DynamoDB table is incorrect. Although an ElastiCache cluster can lower the CPU utilization of your EC2 instances and improve application performance, it doesn't provide a significant cost reduction compared to the reserved capacity and table-rotation options. Take note that this will also increase the cost since you have to pay for your ElastiCache cluster.
References:
Check out this Amazon DynamoDB Cheat Sheet:
Question 18: Skipped
An adventure company runs a PostgreSQL database that stores events from its monitoring application in its on-premises data center. The database is unable to scale enough to handle the frequent write events that need to be ingested. The management has tasked the solutions architect to create a hybrid solution that will utilize the existing company VPN connection to AWS. Additional requirements are as follows:
Leverage AWS-managed services to minimize operational overhead
Create a buffer that automatically scales to accommodate the events that need to be ingested
A visualization tool to observe near real-time events that supports creating dashboards.
Has support for dynamic schemas and semi-structured JSON data.
Which of the following options should the solutions architect implement to meet the company requirements? (Select TWO.)
Ingest the events using Kinesis Data Firehose. Write a Lambda function to process and transform the buffered events.
(Correct)
Leverage the Amazon Neptune auto-scaling feature to reliably ingest the events. Use Neptune DB as the DB source for Amazon QuickSight to create near-real-time dashboards and visualizations.
Use Amazon Kinesis Data Stream to reliably ingest the events. Write a Lambda function to process and transform the buffered events.
Create an Amazon OpenSearch Service domain to reliably ingest the events. Leverage the OpenSearch Dashboards tool to create near-real-time dashboards and visualizations.
(Correct)
Provision an Amazon Aurora PostgreSQL DB cluster to reliably ingest the events. Use this DB as the source for Amazon QuickSight to create near-real-time dashboards and visualizations.
Explanation
Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Amazon OpenSearch Serverless, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers. With Kinesis Data Firehose, you don't need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.
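To make that transformation step concrete, below is a minimal sketch of a Lambda function that follows the Kinesis Data Firehose record-transformation contract; the added field is an arbitrary example of a transformation, not something required by the scenario:

import base64
import json

def lambda_handler(event, context):
    """Firehose data-transformation Lambda: decode each buffered record,
    adjust the JSON payload, and return it re-encoded for delivery."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Example transformation (field name is an assumption): tag the event.
        payload["ingested_by"] = "firehose-transform"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}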
Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters in the AWS Cloud. Amazon OpenSearch Service supports OpenSearch and legacy Elasticsearch OSS (up to 7.10, the final open-source version of the software). When you create a cluster, you have the option of which search engine to use.
OpenSearch is a fully open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. Amazon OpenSearch Service provisions all the resources for your cluster and launches it. It also automatically detects and replaces failed OpenSearch Service nodes, reducing the overhead associated with self-managed infrastructures.
OpenSearch Dashboards is an open-source visualization tool designed to work with OpenSearch. Amazon OpenSearch Service provides an installation of OpenSearch Dashboards with every OpenSearch Service domain. You can find a link to Dashboards on your domain dashboard on the OpenSearch Service console.
The option that says: Ingest the events using Kinesis Data Firehose. Write a Lambda function to process and transform the buffered events is correct. Kinesis Data Firehose can automatically scale to buffer the streaming events, while a Lambda function processes and transforms the events to a proper format.
The option that says: Create an Amazon OpenSearch Service domain to reliably ingest the events. Leverage the OpenSearch Dashboards tool to create near-real-time dashboards and visualizations is correct. OpenSearch Service allows you to store and search the semi-structured JSON data. It also has tools to create dashboards and visualizations.
The option that says: Leverage the Amazon Neptune auto-scaling feature to reliably ingest the events. Use Neptune DB as the DB source for Amazon QuickSight to create near-real-time dashboards and visualizations is incorrect. Amazon Neptune is a graph database designed for graph applications, not for ingesting monitoring events, and Amazon QuickSight cannot directly use Neptune as a data source.
The option that says: Provision an Amazon Aurora PostgreSQL DB cluster to reliably ingest the events. Use this DB as the source for Amazon QuickSight to create near-real-time dashboards and visualizations is incorrect. Amazon Aurora PostgreSQL may not be well suited for ingesting semi-structured JSON data with dynamic schemas. You would need to transform the data before ingesting it into the PostgreSQL database.
The option that says: Use Amazon Kinesis Data Stream to reliably ingest the events. Write a Lambda function to process and transform the buffered events is incorrect. This may be possible, but since Kinesis is only being used as a buffer here, you don't need the longer-term, durable storage offered by Kinesis Data Streams. Amazon Kinesis Data Firehose can buffer data and supports data transformation in near-real-time.
References:
Check out these Amazon Kinesis and Amazon OpenSearch Cheat Sheets:
Question 20: Skipped
A computer hardware manufacturer has a supply chain application that is written in NodeJS. The application is deployed on an Amazon EC2 Reserved instance which has been provisioned with an IAM Role that provides access to data files stored in an S3 bucket.
In this architecture, which of the following IAM policies control access to the data files in S3? (Select TWO.)
An IAM permissions policy that allows the EC2 role to access S3 objects.
(Correct)
An IAM trust policy that allows the EC2 instance to assume an S3 role.
An IAM bucket policy that allows the EC2 role to access S3 objects.
An IAM trust policy that allows the NodeJS supply chain application running on the EC2 instance to access the data files stored in the S3 bucket.
An IAM trust policy that allows the EC2 instance to assume an EC2 instance role.
(Correct)
Explanation
An IAM role is an IAM identity that you can create in your account that has specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session.
To delegate permission to access a resource, you create an IAM role in the trusting account that has two policies attached (a minimal sketch of both follows this list):
The permissions policy grants the user of the role the needed permissions to carry out the intended tasks on the resource.
The trust policy specifies which trusted account members are allowed to assume the role.
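Here is that sketch using boto3; the role name, policy name, and bucket ARN are hypothetical placeholders rather than values from the scenario:

import json
import boto3

iam = boto3.client("iam")

# Trust policy: defines WHO can assume the role (here, the EC2 service).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="supply-chain-app-role",                   # assumed name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions policy: defines WHAT the role can do (read the data files).
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-data-bucket/*",  # assumed bucket
    }],
}

iam.put_role_policy(
    RoleName="supply-chain-app-role",
    PolicyName="s3-data-read",
    PolicyDocument=json.dumps(permissions_policy),
)

# In practice, the role is then attached to the EC2 instance through an instance profile.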
Hence, the correct answers are:
- An IAM trust policy that allows the EC2 instance to assume an EC2 instance role.
- An IAM permissions policy that allows the EC2 role to access S3 objects.
The option that says: An IAM trust policy that allows the NodeJS supply chain application running on the EC2 instance to access the data files stored in the S3 bucket is incorrect because IAM only provides and controls access to the AWS resources such as EC2 instances but not specifically to the underlying applications being hosted inside the instance.
The option that says: An IAM bucket policy that allows the EC2 role to access S3 objects is incorrect. There is no such thing as an IAM bucket policy; bucket policies are a feature of Amazon S3, not IAM.
The option that says: An IAM trust policy that allows the EC2 instance to assume an S3 role is incorrect as an S3 Role is inappropriate in this scenario. We are using an EC2 role in order to provide the EC2 instance access to the data files from the S3 bucket.
References:
Check out this AWS IAM Cheat Sheet:
Question 21: Skipped
A technology company is developing an educational mobile app for students, with an exam feature that also allows them to submit their answers. The developers used React Native so the app can be deployed on both iOS and Android devices. They used AWS Lambda and Amazon API Gateway for the backend services and a DynamoDB table as the database service. After a month, the released app has been downloaded over 3 million times. However, there are a lot of users who complain about the slow processing of the app especially when they are submitting their answers in the multiple-choice exams. The diagrams and images on the exam also take a lot of time to load, which is not a good user experience.
Which of the following options provides the most cost-effective and scalable architecture for the application?
Increase the write capacity in DynamoDB to 10,000 WCU. Use a web distribution in CloudFront with associated CloudFront Functions to host the diagrams, images and other static assets of the mobile app in real-time
Enable Auto Scaling in DynamoDB with a Target Utilization of 100% and a maximum provisioned capacity of 1000 units. Use an S3 bucket to host the diagrams, images, and other static assets of the mobile app.
Launch an SQS queue and develop a custom service which integrates with SQS to buffer the incoming requests. Use a web distribution in CloudFront and Amazon S3 to host the diagrams, images, and other static assets of the mobile app.
(Correct)
Instead of DynamoDB, use RDS Multi-AZ configuration with Read Replicas. Use a web distribution in CloudFront and Amazon S3 to host the diagrams, images, and other static assets of the mobile app.
Explanation
In this scenario, the mobile app is both write and read-intensive. You can use SQS to buffer and scale the backend service to accommodate the large incoming traffic. CloudFront is the best choice to distribute the static assets of the mobile app to load it faster.
SQS can help with the slowness experienced when submitting answers. You can modify the application to use SQS: when users submit their answers, instead of making them wait for all the answers to be processed, you save the submitted answers to the SQS queue and quickly return a page stating that the answers were submitted successfully. This improves the user experience because the user is not stuck on a loading screen until every answer is processed. Once the answers are buffered in the queue, the backend instances can process them asynchronously, and a separate page can show the results later. This is a simple, cost-effective design that improves user experience without increasing the capacity of your systems.
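A minimal boto3 sketch of this buffering pattern is shown below; the queue URL, payload fields, and function names are assumptions for illustration:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/exam-answers"  # assumed queue

# Producer side (backend API): enqueue the submission and return immediately.
def submit_answers(user_id, answers):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"user_id": user_id, "answers": answers}),
    )
    return {"status": "submitted"}  # fast response; grading happens asynchronously

# Consumer side (worker service): poll the queue and grade in the background.
def process_queue():
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        submission = json.loads(message["Body"])
        # ... grade the submission and persist the result ...
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])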
With CloudFront, the images on the S3 bucket will be cached on Edge locations close to users. This further improves the load times of the web page. Images are quickly accessed and loaded because they are physically closer (using CloudFront cache) to users around the world.
Therefore, the correct answer is: Launch an SQS queue and develop a custom service which integrates with SQS to buffer the incoming requests. Use a web distribution in CloudFront and Amazon S3 to host the diagrams, images, and other static assets of the mobile app.
The option that says: Enable Auto Scaling in DynamoDB with a Target Utilization of 100% and a maximum provisioned capacity of 1000 units. Use an S3 bucket to host the diagrams, images, and other static assets of the mobile app is incorrect because you can only set the auto scaling target utilization values between 20 and 90 percent for your read and write capacity, not 100%.
The option that says: Increase the write capacity in DynamoDB to 10,000 WCU. Use a web distribution in CloudFront with associated CloudFront Functions to host the diagrams, images and other static assets of the mobile app in real-time is incorrect. Although using a CloudFront Function can reduce processing latency, there are certain limitations on what it can do since it is just a JavaScript-based feature that is commonly used to manipulate the requests and responses that flow through CloudFront, perform basic authentication and authorization, generate HTTP responses at the edge, and more. Increasing the write capacity (WCU) for the DynamoDB table may work, but provisioning 10,000 WCUs entails a significantly higher cost.
The option that says: Instead of DynamoDB, use RDS Multi-AZ configuration with Read Replicas. Use a web distribution in CloudFront and Amazon S3 to host the diagrams, images, and other static assets of the mobile app is incorrect because this would require rewriting the whole application to use a SQL database, and moving from DynamoDB to RDS Multi-AZ is not cost-effective.
References:
Check out this Amazon SQS Cheat Sheet:
Question 24: Skipped
A credit company deployed its online loan application system in an Auto Scaling group across multiple Availability Zones in the ap-southeast-2 region. As part of the Disaster Recovery Plan of the company, the target RTO must be less than 2 hours and the target RPO must be 10 minutes. At 12:00 PM, there was a production incident in the main database, and the operations team found out that they cannot recover the transactions made from 10:30 AM onwards (1.5 hours before the incident).
How can the solutions architect change the current architecture to achieve the required RTO and RPO in case a similar system failure occurred in the future?
Improve data redundancy by implementing a synchronous database “source-replica” replication between two Availability Zones.
Create database backups every hour and store it to Glacier for archiving. Store the transaction logs in an S3 bucket every 5 minutes.
Perform database backups every hour and store the result to EBS volumes. Backup the transaction logs every 5 minutes to an S3 bucket with Cross-Region Replication enabled.
Create database backups every hour and store it in an S3 bucket with Cross-Region Replication enabled. Store the transaction logs in the same S3 bucket every 5 minutes.
(Correct)
Explanation
Businesses of all sizes are using AWS to enable faster disaster recovery of their critical IT systems without incurring the infrastructure expense of a second physical site. AWS supports many disaster recovery architectures, from those built for smaller workloads to enterprise solutions that enable rapid failover at scale. AWS provides a set of cloud-based disaster recovery services that enable fast recovery of your IT infrastructure and data.
Recovery time objective (RTO) - The time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA). For example, if a disaster occurs at 12:00 PM (noon) and the RTO is eight hours, the DR process should restore the business process to the acceptable service level by 8:00 PM.
Recovery point objective (RPO) - The acceptable amount of data loss measured in time. For example, if a disaster occurs at 12:00 PM (noon) and the RPO is one hour, the system should recover all data that was in the system before 11:00 AM. Data loss will span only one hour, between 11:00 AM and 12:00 PM (noon).
A company typically decides on an acceptable RTO and RPO based on the financial impact to the business when systems are unavailable. The company determines financial impact by considering many factors, such as the loss of business and damage to its reputation due to downtime and the lack of systems availability. IT organizations then plan solutions to provide cost-effective system recovery based on the RPO within the timeline and the service level established by the RTO.
Cross-region replication (CRR) enables automatic, asynchronous copying of objects across buckets in different AWS Regions. Buckets configured for cross-region replication can be owned by the same AWS account or by different accounts. This is also helpful to durably store your data and for disaster recovery in the event of a region-wide outage.
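For illustration, enabling Cross-Region Replication on the backup bucket might look like the following boto3 sketch; the bucket names and replication role ARN are assumptions, and the destination bucket must already exist in the other Region with versioning enabled:

import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on the source bucket before replication can be configured.
s3.put_bucket_versioning(
    Bucket="db-backups-primary",                       # assumed source bucket
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="db-backups-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",   # assumed replication role
        "Rules": [{
            "ID": "replicate-backups",
            "Status": "Enabled",
            "Prefix": "",                               # replicate every object in the bucket
            "Destination": {"Bucket": "arn:aws:s3:::db-backups-dr-region"},  # assumed DR bucket
        }],
    },
)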
Therefore, the correct answer is: Create database backups every hour and store it in an S3 bucket with Cross-Region Replication enabled. Store the transaction logs in the same S3 bucket every 5 minutes.
The option that says: Improve data redundancy by implementing a synchronous database "source-replica" replication between two Availability Zones is incorrect. Although this improves redundancy within the region, it will not protect the data in the event of a region-wide outage. If one Availability Zone goes down, another Availability Zone still contains the data; however, if the whole AWS Region is down, the data will be totally unreachable. A better solution is to use S3 and enable Cross-Region Replication (CRR).
The option that says: Perform database backups every hour and store the result to EBS volumes. Backup the transaction logs every 5 minutes to an S3 bucket with Cross-Region Replication enabled is incorrect. EBS Volumes are not durable enough to store the database backups every hour for this scenario. If there is an AZ or an AWS Region-wide outage, then the data in the EBS Volume could potentially be unavailable.
The option that says: Create database backups every hour and store it to Glacier for archiving. Store the transaction logs in an S3 bucket every 5 minutes is incorrect because Glacier is primarily used for long-term data archiving, and standard Glacier retrievals can take hours, which puts the 2-hour RTO at risk.
References:
Check out this Amazon S3 Cheat Sheet:
Question 26: Skipped
A pharmaceutical company has a hybrid cloud architecture in AWS. It has three different accounts for its environments: DEV, UAT, and PROD, which are all part of the consolidated billing account. The PROD account has purchased 10 Reserved EC2 Instances in the us-west-2a Availability Zone.
Currently, there are no running EC2 instances in the PROD account because the application is not live yet. However, in the DEV account, there are 5 EC2 instances running in the us-west-2a Availability Zone. In the UAT account, there are also 5 EC2 instances running in the us-west-1a Availability Zone. All the EC2 instances in the DEV and UAT accounts match the reserved instance type in the PROD account.
In this scenario, which account benefits the most from the Reserved Instance pricing?
None. Considering that the PROD account is the one that purchased the Reserved Instance and it does not have any running EC2 instance, there is currently no other member account that benefits from the Reserved Instance pricing.
Currently, only the DEV account benefits from the Reserved Pricing.(Correct)
Currently, only the UAT account benefits from the Reserved Pricing.
Since both DEV and UAT accounts are running an EC2 instance type that exactly matches the Reserved Instance type, then the Reserved instance pricing will be applied to all EC2 instances in those two member accounts.
Explanation
In the question, DEV and UAT are running the same instance type, but the main difference is that the former is using us-west-2a Availability Zone while the latter uses us-west-1a. Remember that the PROD account has bought the Reserved Instance in the us-west-2a Availability Zone, which means that only the DEV account exactly matches the criteria.
As an Amazon EC2 Reserved Instances example, suppose that Bob and Susan each have an account in an organization. Susan has five Reserved Instances of the same type, and Bob has none. During one particular hour, Susan uses three instances, and Bob uses six, for a total of nine instances on the organization's consolidated bill. AWS bills five instances as Reserved Instances and the remaining four instances as regular instances.
Bob receives the cost-benefit from Susan's Reserved Instances only if he launches his instances in the same Availability Zone where Susan purchased her Reserved Instances. For example, if Susan specifies us-west-2a when she purchases her Reserved Instances, Bob must specify us-west-2a when he launches his instances to get the cost-benefit on the organization's consolidated bill. However, the actual locations of Availability Zones are independent from one account to another. For example, the us-west-2a Availability Zone for Bob's account might be in a different location than the location for Susan's account.
Therefore, the correct answer is: Currently, only the DEV account benefits from the Reserved Pricing.
The option that says: Currently, only the UAT account benefits from the Reserved Pricing is incorrect. The UAT account can only benefit from the Reserved Instances of the PROD account if they were launched in the same Availability Zone where the PROD account purchased the Reserved Instances.
The option that says: Since both DEV and UAT accounts are running an EC2 instance type which exactly matches the Reserved Instance type, then the Reserved instance pricing will be applied to all EC2 instances in those two member accounts is incorrect. The UAT account can only benefit from the Reserved Instances of the PROD account if they were launched in the same Availability Zone where the PROD account purchased the Reserved Instances.
The option that says: None. Considering that the PROD account is the one that purchased the Reserved Instance and it does not have any running EC2 instance, there is currently no other member account that benefits from the Reserved Instance pricing is incorrect. The DEV account benefits from the Reserved Instances pricing because the instances are launched on the same AZ where the PROD account purchased the Reserved Instances.
References:
Check out this AWS Billing and Cost Management Cheat Sheet:
Question 27: Skipped
A digital advertising startup runs an ad-supported photo-sharing website that has users around the globe. The startup is using Amazon S3 to serve photos to website users. Several weeks later, the solutions architect found out that third-party sites have been linking directly to the photos in the company's S3 bucket, which is causing losses in ad revenue. Some users are also reporting that the photos are taking too much time to load.
Which of the following options is an effective method to mitigate this security flaw and to improve the performance of the photo-sharing website?
Use CloudFront to distribute static content and block the IP addresses of the other websites that are illegally linking the photos to their own websites.
Store photos on an EBS volume of the web server with data encryption enabled.
Block the IPs of the offending websites in Security Groups and use S3 Cross-Region Replication (CRR).
Remove public read access from the S3 bucket. Use CloudFront as the global content delivery network (CDN) service for the photos and use Signed URLs with expiry dates.
(Correct)
Explanation
Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds. CloudFront is integrated with AWS - including physical locations that are directly connected to the AWS global infrastructure, as well as software that works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code close to your viewers.
A signed URL includes additional information, such as an expiration date and time, that gives you more control over access to your content. This additional information appears in a policy statement, which is based on either a canned policy or a custom policy.
In this scenario, the main issue is that there are other websites that are illegally using the photos hosted from your S3 bucket. The secondary issue is the slow loading times of the photos. The use of CloudFront as a CDN with Signed URLs is the best solution for this situation.
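A minimal sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner is shown below; the key pair ID, private key path, and distribution URL are assumptions, and the cryptography package is used for the RSA signing step:

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "KXXXXXXXXXXXXX"  # assumed CloudFront public key ID

def rsa_signer(message):
    # Load the private key that pairs with the CloudFront public key (path is assumed).
    with open("private_key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# Canned policy: the URL simply stops working after the expiry date.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/photos/sunset.jpg",  # assumed distribution/object
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)

Because the S3 bucket itself no longer allows public reads, only URLs signed like this will return the photos, so hotlinking from third-party sites stops working once the links expire.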
Therefore, the correct answer is: Remove public read access from the S3 bucket. Use CloudFront as the global content delivery network (CDN) service for the photos and use Signed URLs with expiry dates.
The option that says: Use CloudFront to distribute static content and block the IP addresses of the other websites that are illegally linking the photos to their own websites is incorrect. Although you can block the offending IP addresses of the websites that illegally use your photos, other attackers can still use a different IP address and use your photos without your permission.
The option that says: Store photos on an EBS volume of the web server with data encryption enabled is incorrect. An EBS volume is not a scalable storage solution and is not suitable in hosting static assets for a public website.
The option that says: Block the IPs of the offending websites in Security Groups and use S3 Cross-Region Replication (CRR) is incorrect. A Security Group only supports allow rules, not deny rules. In addition, S3 Cross-Region Replication (CRR) will not provide the security needed in this case. Take note: although S3 Cross-Region Replication replicates the bucket to another region, it is still better to use CloudFront to serve users in all geographic regions.
References:
Check out this Amazon CloudFront Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 28: Skipped
A company runs its travel and tours website on AWS. The application only supports HTTP at the moment. To improve their SEO ranking and provide more security for their customers, they decided to enable SSL on their website. The company would also like to ensure the separation of roles between the Development team and the Security team in handling the sensitive SSL certificate. The Development team can log in to the EC2 instances, but they should not have access to the SSL certificate, which only the Security team has exclusive control of. Currently, they are using an Application Load Balancer which distributes incoming traffic to an Auto Scaling group of On-Demand EC2 instances.
Which of the following options should the solutions architect implement to satisfy the above requirements?
Retrieve a read-only copy of the SSL certificate upon the boot of the EC2 instance from a CloudHSM, which is exclusively managed by the Security team.
Store the SSL certificate in IAM and authorize access only to the Security team using an IAM policy. Configure the Application Load Balancer to use the SSL certificate instead of the EC2 instances.(Correct)
In the web server, set the file owner of the SSL certificate to the Security team and set the file permissions to 700, which will deny all access to the Development team.
Create a new private S3 bucket and then upload the SSL certificate owned by the Security team. Configure the EC2 instance to have exclusive access to the certificate and block any access from the Development team.
Explanation
In the ALB, you can create a listener that uses encrypted connections (also known as SSL offload). This feature enables traffic encryption between your load balancer and the clients that initiate SSL or TLS sessions. To use an HTTPS listener, you must deploy an SSL/TLS server certificate on your load balancer. The load balancer uses this certificate to terminate the connection and then decrypt requests from clients before sending them to the targets.
Although you can terminate the SSL in the EC2 instance, this setup is not recommended in this scenario. It is best to store the SSL certificate in IAM or in AWS Certificate Manager (ACM) where you can control which teams, either the Security or the Development team, can have access.
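For illustration, attaching the IAM-stored certificate to an HTTPS listener on the Application Load Balancer could look like this boto3 sketch; the certificate name and the load balancer and target group ARNs are placeholders:

import boto3

iam = boto3.client("iam")
elbv2 = boto3.client("elbv2")

# Look up the server certificate that the Security team uploaded to IAM (name is assumed).
cert_arn = iam.get_server_certificate(ServerCertificateName="travel-portal-cert")[
    "ServerCertificate"]["ServerCertificateMetadata"]["Arn"]

# Terminate TLS at the Application Load Balancer instead of on the EC2 instances.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-west-1:123456789012:loadbalancer/app/travel-portal/50dc6c495c0c9188",  # assumed
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/web/73e2d6bc24d8a067",  # assumed
    }],
)

Because the certificate lives in IAM and is referenced only by the load balancer, the Development team never needs to see it on the instances.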
Therefore the correct answer is: Store the SSL certificate in IAM and authorize access only to the Security team using an IAM policy. Configure the Application Load Balancer to use the SSL certificate instead of the EC2 instances.
The option that says: In the web server, set the file owner of the SSL certificate to the Security team and set the file permissions to 700, which will deny all access to the Development team is incorrect. You don't have to store the SSL Cert on the EC2 instances. The SSL certificate should be at the Load Balancer since the EC2 instances are behind it and the application only supports HTTP.
The option that says: Retrieve a read-only copy of the SSL certificate upon the boot of the EC2 instance from a CloudHSM, which is exclusively managed by the Security team is incorrect. The clients access the website through the Load Balancer and not the EC2 instances directly. Using CloudHSM to store your SSL certificate will just increase your costs without satisfying the requirement.
The option that says: Create a new private S3 bucket and then upload the SSL certificate owned by the Security team. Configure the EC2 instance to have exclusive access to the certificate and block any access from the Development team is incorrect. The Developers can log in to the EC2, therefore, they will still be able to access the SSL certificate in the EC2 instances after they were downloaded from Amazon S3.
References:
Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet:
Question 33: Skipped
A company manually runs its custom scripts when deploying a new version of its application that is hosted on a fleet of Amazon EC2 instances. This method is prone to human errors, such as accidentally running the wrong script or deploying the wrong artifact. The company wants to automate its deployment procedure.
If errors are encountered after the deployment, the company wants to be able to roll back to the older application version as fast as possible.
Which of the following options should the Solutions Architect implement to meet the requirements?
Create a new pipeline on AWS CodePipeline and add a stage that will deploy the application on the AWS EC2 instances. Choose a “rolling update with an additional batch” deployment strategy, to allow a quick rollback to the older version in case of errors.
Create an AWS Systems Manager automation runbook to manage the deployment process. Set up the runbook to first deploy the new application version to a staging environment. Include automated tests and, upon successful completion, use the runbook to deploy the application to the production environment.
Create two identical environments of the application on AWS Elastic Beanstalk. Use a blue/green deployment strategy by swapping the environment’s URL. Deploy the custom scripts using Elastic Beanstalk platform hooks.
(Correct)
Utilize AWS CodeBuild and add a job with Chef recipes for the new application version. Use a “canary” deployment strategy to the new version on a new instance. Delete the canary instance if errors are found on the new version.
Explanation
Blue/Green Deployment involves maintaining two separate, identical environments. The "Blue" environment is the current production version, while the "Green" environment is an exact replica of the Blue one that hosts the new version of your application. After deploying and thoroughly testing the new version in the Green environment, you simply switch the environment's URL to redirect traffic from Blue to Green. This switch makes the new version live for users. If a rollback is needed due to any issues, you switch the URL back to the original Blue environment, instantly reverting to the previous version of the application.
In Elastic Beanstalk, you can perform a blue/green deployment by swapping the CNAMEs of the two environments to redirect traffic to the new version instantly; a sketch of the swap call is shown after the hooks list below. If there are any custom scripts or executable files that you want to run automatically as part of your deployment process, you may use platform hooks.
To provide platform hooks that run during an application deployment, place the files under the .platform/hooks directory in your source bundle, in one of the following subdirectories:
prebuild – Files here run after the Elastic Beanstalk platform engine downloads and extracts the application source bundle, and before it sets up and configures the application and web server.
predeploy – Files here run after the Elastic Beanstalk platform engine sets up and configures the application and web server, and before it deploys them to their final runtime location.
postdeploy – Files here run after the Elastic Beanstalk platform engine deploys the application and proxy server.
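As a rough sketch of how quick the cutover and rollback can be with this approach, the CNAME swap is a single API call via boto3 (the environment names below are assumptions):

import boto3

eb = boto3.client("elasticbeanstalk")

# Blue/green cutover: swap the CNAMEs of the two Elastic Beanstalk environments.
# Running the same call with the arguments reversed rolls back just as quickly.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-blue",
    DestinationEnvironmentName="myapp-green",
)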
Therefore, the correct answer is: Create two identical environments of the application on AWS Elastic Beanstalk. Use a blue/green deployment strategy by swapping the environment’s URL. Deploy the custom scripts using Elastic Beanstalk platform hooks.
The option that says: Create an AWS Systems Manager automation runbook to manage the deployment process. Set up the runbook to first deploy the new application version to a staging environment. Include automated tests and, upon successful completion, use the runbook to deploy the application to the production environment is incorrect. While this is technically possible, it does not offer the fastest rollback mechanism in case of immediate issues post-deployment, as the rollback would involve a separate process. Moreover, unlike AWS Elastic Beanstalk, which has built-in features for version tracking, using AWS Systems Manager for deployment requires a more manual approach to version control. You would need to maintain a system for tracking different application versions, ensuring that you have the correct version deployed in the right environment (staging vs. production). This adds complexity to the deployment process.
The option that says: Create a new pipeline on AWS CodePipeline and add a stage that will deploy the application on the AWS EC2 instances. Choose a “rolling update with an additional batch” deployment strategy, to allow a quick rollback to the older version in case of errors is incorrect. Although the pipeline can deploy the new version on the EC2 instances, rollback for this strategy takes time. You will have to re-deploy the older version if you want to do a rollback.
The option that says: Utilize AWS CodeBuild and add a job with Chef recipes for the new application version. Use a “canary” deployment strategy to the new version on a new instance. Delete the canary instance if errors are found on the new version is incorrect. Although you can detect errors in a canary deployment, AWS CodeBuild cannot deploy the new application version on the EC2 instances. You have to use AWS CodeDeploy if you want to go this route. It's also easier to set up Chef deployments using AWS OpsWorks rather than AWS CodeBuild.
References:
Check out this AWS Elastic Beanstalk Cheat Sheet:
Question 36: Skipped
A company runs a mission-critical application on a fixed set of Amazon EC2 instances behind an Application Load Balancer. The application responds to user requests by querying a 120GB dataset. The application requires high throughput and low latency storage so the dataset is stored on Provisioned IOPS (PIOPS) Amazon EBS volumes with 3000 IOPS provisioned. The Amazon EC2 launch template has been configured to allocate and attach this 120GB size PIOPS EBS volume for the fleet of EC2 instances. After a few months of operation, the company noticed the high cost of EBS volumes in the billing section. The Solutions Architect has been tasked to design a solution that will reduce the costs without a negative impact on the application performance and data durability.
Which of the following solutions will meet the company requirements?
Remove the PIOPS EBS volume allocation on the EC2 launch template. Attach a 120GB instance store volume on the EC2 instance to ensure that the application will have enough IOPS for its operation.
Create an Amazon EFS volume and mount it across all the Amazon EC2 instances. Use Max I/O performance mode on the EFS volume to ensure the application can reach the required IOPS.
Use the cheaper General Purpose SSD (gp2) EBS volumes instead of PIOPS EBS volumes. Allocating 1TB EBS volumes (gp2) will have a throughput of 3000 IOPS. Update the EC2 launch template to allocate this type of volume.
Create an Amazon EFS volume and mount it across all the Amazon EC2 instances. Use the Provisioned Throughput mode on the EFS volume to ensure that the application can reach the required IOPS.
(Correct)
Explanation
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations.
Amazon EFS is designed to provide the throughput, IOPS, and low latency needed for a broad range of workloads. With Amazon EFS, you can choose from two performance modes and two throughput modes:
- The default General Purpose performance mode is ideal for latency-sensitive use cases, like web serving environments, content management systems, home directories, and general file serving. File systems in the Max I/O mode can scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for file metadata operations.
- Using the default Bursting Throughput mode, throughput scales as your file system grows. Using Provisioned Throughput mode, you can specify the throughput of your file system independent of the amount of data stored.
With Bursting Throughput mode, throughput on Amazon EFS scales as the size of your file system in the standard storage class grows. With Provisioned Throughput mode, you can instantly provision the throughput of your file system (in MiB/s) independent of the amount of data stored.
If your file system is in the Provisioned Throughput mode, you can increase the Provisioned Throughput of your file system as often as you want. You can decrease your file system throughput in Provisioned Throughput mode as long as it's been more than 24 hours since the last decrease.
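A minimal boto3 sketch of creating the shared file system in Provisioned Throughput mode, and later adjusting the throughput, is shown below; the throughput figures are assumptions and should be sized to the application's measured needs:

import boto3

efs = boto3.client("efs")

# Create a file system in Provisioned Throughput mode (values are assumptions).
fs = efs.create_file_system(
    CreationToken="dataset-fs",
    PerformanceMode="generalPurpose",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=128.0,
)

# Throughput can later be raised at any time (or lowered, at most once every 24 hours).
efs.update_file_system(
    FileSystemId=fs["FileSystemId"],
    ProvisionedThroughputInMibps=256.0,
)

Each EC2 instance in the fleet then mounts the same file system through mount targets in its Availability Zone, replacing the per-instance PIOPS EBS volumes.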
Therefore, the correct answer is: Create an Amazon EFS volume and mount it across all the Amazon EC2 instances. Use the Provisioned Throughput mode on the EFS volume to ensure that the application can reach the required IOPS.
The option that says: Create an Amazon EFS volume and mount it across all the Amazon EC2 instances. Use Max I/O performance mode on the EFS volume to ensure the application can reach the required IOPS is incorrect. File systems in the Max I/O mode can scale to higher levels of aggregate throughput and operations per second. However, this scaling is done with a tradeoff of slightly higher latencies for file metadata operations.
The option that says: Use the cheaper General Purpose SSD (gp2) EBS volumes instead of PIOPS EBS volumes. Allocating 1TB EBS volumes (gp2) will have a throughput of 3000 IOPS. Update the EC2 launch template to allocate this type of volume is incorrect. Although this may look cheaper at first, creating several 1TB volumes for each EC2 instance entails higher costs. The Amazon EFS volume solution will be cheaper for sharing storage across all EC2 instances. Although you can use EBS Multi-Attach to attach EBS volumes to multiple EC2 instances, this is limited only to Provisioned IOPS SSD (io1 or io2) volumes that are attached to Nitro-based EC2 instances in the same Availability Zone.
The option that says: Remove the PIOPS EBS volume allocation on the EC2 launch template. Attach a 120GB instance store volume on the EC2 instance to ensure that the application will have enough IOPS for its operation is incorrect. Instance store volumes are ephemeral which means that you will lose all data in the volume when you stop/start the instance. This is not recommended for this mission-critical application.
References:
Check out these Amazon EFS and Comparison Cheat Sheets:
Question 37: Skipped
A call center company has recently adopted a hybrid architecture in which they need predictable network performance and reduced bandwidth costs to connect their data center and the AWS Cloud. You have implemented two AWS Direct Connect connections between your data center and AWS to have stable and highly available network performance. After a recent IT financial audit, it was decided to review the current implementation and replace it with a more cost-effective option.
Which of the following connectivity setup would you recommend for this scenario?
A single AWS Direct Connect and an AWS managed VPN connection to connect your data center with Amazon VPC
(Correct)
Setup a Hardware VPN on your datacenter and set it to use the Direct Connect for its connection
Use AWS VPN CloudHub to connect your data center network to Amazon VPC
A single AWS Direct Connect connection and enable the built-in failover feature
Explanation
The scenario requires you to revise the current setup of using two Direct Connect connections being used by the company. Since Direct Connect does not provide any redundancy for its connection, it is recommended to set up at least two connections for high availability. However, this setup is expensive as you will be charged for the two Direct Connect connections. Usually, the second connection is used only when the main connection fails which rarely happens.
To maintain high availability, but reduce the costs, you can use a single AWS Direct Connect to create a dedicated private connection from your data center network to your Amazon VPC, and then combine this connection with an AWS managed VPN connection to create an IPsec-encrypted connection as a backup connection. If the Direct Connect connection fails, you still have a managed VPN to connect to your Amazon VPC, albeit with a slower connection. This will suffice until your Direct Connect connection is restored.
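For illustration, creating the AWS managed Site-to-Site VPN that backs up the single Direct Connect link could look like this boto3 sketch; the customer gateway IP, BGP ASN, and virtual private gateway ID are assumptions:

import boto3

ec2 = boto3.client("ec2")

# Register the on-premises router as a customer gateway (IP and ASN are assumptions).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",
    BgpAsn=65000,
)

# Create the Site-to-Site VPN connection against the existing virtual private gateway.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId="vgw-0123456789abcdef0",  # existing virtual private gateway (assumed)
)
print(vpn["VpnConnection"]["VpnConnectionId"])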
Therefore, the correct answer is: A single AWS Direct Connect and an AWS managed VPN connection to connect your data center with Amazon VPC. The AWS Direct Connect connection will provide the required high-bandwidth network, while the VPN serves as the slower backup link in case the main Direct Connect link fails.
The option that says: A single AWS Direct Connect connection and enable the built-in failover feature is incorrect. AWS Direct Connect does not have a built-in failover feature.
The option that says: Setup a Hardware VPN on your datacenter and set it to use the Direct Connect for its connection is incorrect. If you implement this, your VPN will also fail if Direct Connect fails.
The option that says: Use AWS VPN CloudHub to connect your data center network to Amazon VPC is incorrect. You still need a dedicated high bandwidth network provided by Direct Connect.
References:
Check out this Amazon VPC Cheat Sheet:
S3 Transfer Acceleration vs Direct Connect vs VPN vs Snowball vs Snowmobile:
Comparison of AWS Services Cheat Sheets:
Question 39: Skipped
A media company recently launched a web service that allows users to upload and share short videos. Currently, the web servers are hosted on an Auto Scaling group of Amazon EC2 instances in which the videos are processed and stored in the EBS volumes. Each uploaded video sends a message on the Amazon SQS queue, which is also processed by an Auto Scaling group of Amazon EC2 instances. The company relies on third-party software to analyze and categorize the videos. The website also contains static content that has variable user traffic. The company wants to re-architect the application to reduce costs, reduce dependency on third-party software, and reduce management overhead by leveraging AWS-managed services.
Which of the following solutions will meet the company's requirements?
Create an Amazon EFS volume to store the videos and static content. Mount the volume on all EC2 instances of the web application. Have AWS Lambda poll the Amazon SQS queue for messages and invoke a Lambda function that calls the Amazon Rekognition API to analyze and categorize the videos.
Reduce operational overhead by using AWS Elastic Beanstalk to provision the Auto Scaling group of EC2 instances for the web servers and the Amazon SQS queue consumers. Use Amazon Rekognition to analyze and categorize the videos instead of the third-party software. Store the videos and static contents on Amazon S3 buckets.
Create an Amazon ECS Fargate cluster and use containers to host the web application. Create an Auto Scaling group of Amazon EC2 Spot instances to process the SQS queue. Use Amazon Rekognition to analyze and categorize the videos instead of the third-party software. Store the videos and static contents on Amazon S3 buckets.
(Correct)
Create an Amazon S3 bucket with website hosting enabled to host the web application. Store the videos and static content on a separate S3 bucket. Configure S3 event notification to send messages to an Amazon SQS queue for each video upload event. Have AWS Lambda poll the Amazon SQS queue for messages and invoke a Lambda function that calls the Amazon Rekognition API to analyze and categorize the videos.
Explanation
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. Fargate makes it easy for you to focus on building your applications.
When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application.
Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Amazon Rekognition API, and the service can identify objects, people, text, scenes, and activities. It can detect any inappropriate content as well. Amazon Rekognition also provides highly accurate facial analysis, face comparison, and face search capabilities. You can detect, analyze, and compare faces for a wide variety of use cases, including user verification, cataloging, people counting, and public safety.
Amazon Rekognition is based on the same proven, highly scalable, deep learning technology developed by Amazon’s computer vision scientists to analyze billions of images and videos daily. It requires no machine learning expertise to use. Amazon Rekognition includes a simple, easy-to-use API that can quickly analyze any image or video file that’s stored in Amazon S3.
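As a sketch of how the video analysis could be invoked for files already stored in S3, the asynchronous Rekognition video APIs can be used as follows; the bucket, object key, SNS topic, and IAM role ARN are assumptions:

import boto3

rekognition = boto3.client("rekognition")

# Start asynchronous label detection on a video already uploaded to S3.
job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "uploaded-videos", "Name": "clips/sample-upload.mp4"}},
    NotificationChannel={
        "SNSTopicArn": "arn:aws:sns:us-east-1:123456789012:rekognition-jobs",
        "RoleArn": "arn:aws:iam::123456789012:role/rekognition-sns-role",
    },
)

# Later (for example, in a worker that consumes the SNS notification), fetch the labels
# and use them to categorize the video.
results = rekognition.get_label_detection(JobId=job["JobId"])
for label in results["Labels"]:
    print(label["Timestamp"], label["Label"]["Name"], label["Label"]["Confidence"])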
You can use Amazon S3 to host a static website. On a static website, individual webpages include static content. They might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting, but AWS has other resources for hosting dynamic websites. This makes Amazon S3 suitable for hosting static website contents.
A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. The hourly price for a Spot Instance is called a Spot price. The Spot price of each instance type in each Availability Zone is set by Amazon EC2, and is adjusted gradually based on the long-term supply of and demand for Spot Instances. Your Spot Instance runs whenever capacity is available and the maximum price per hour for your request exceeds the Spot price.
Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
Therefore, the correct answer is: Create an Amazon ECS Fargate cluster and use containers to host the web application. Create an Auto Scaling group of Amazon EC2 Spot instances to process the SQS queue. Use Amazon Rekognition to analyze and categorize the videos instead of the third-party software. Store the videos and static contents on Amazon S3 buckets. Using an ECS Fargate cluster reduces the overhead of managing EC2 instances and using Spot instances to process the SQS queue significantly reduces the operating costs. Amazon Rekognition is designed to analyze videos which you can use to categorize them. Amazon S3 bucket serves as durable and cost-effective storage for the uploaded videos.
The option that says: Create an Amazon EFS volume to store the videos and static content. Mount the volume on all EC2 instances of the web application. Have AWS Lambda poll the Amazon SQS queue for messages and invoke a Lambda function that calls the Amazon Rekognition API to analyze and categorize the videos is incorrect. Amazon S3 is more cost-effective than using EFS. This also increases management overhead as you need to configure the EFS mount point on all your EC2 instances.
The option that says: Create an Amazon S3 bucket with website hosting enabled to host the web application. Store the videos and static content on a separate S3 bucket. Configure S3 event notification to send messages to an Amazon SQS queue for each video upload event. Have AWS Lambda poll the Amazon SQS queue for messages and invoke a Lambda function that calls the Amazon Rekognition API to analyze and categorize the videos is incorrect. The scenario states that the videos are processed and then stored in EBS volumes, suggesting that the service is dynamic in nature. Amazon S3's static website hosting does not provide server-side processing, which is necessary for dynamic websites. Therefore, hosting a web application on an Amazon S3 bucket with website hosting enabled may not be a suitable option.
The option that says: Reduce operational overhead by using AWS Elastic Beanstalk to provision the Auto Scaling group of EC2 instances for the web servers and the Amazon SQS queue consumers. Use Amazon Rekognition to analyze and categorize the videos instead of the third-party software. Store the videos and static contents on Amazon S3 buckets is incorrect. Although Elastic Beanstalk can manage the EC2 instances for you, there is no mention of using Spot instances to process the SQS queue, which would further reduce the operating costs.
References:
Check out these AWS Fargate and Amazon Rekognition Cheat Sheets:
Question 41: Skipped
A leading insurance company in South East Asia recently deployed a new web portal that enables its users to log in and manage their accounts, view their insurance plans, and pay their monthly premiums. After a few weeks, the solutions architect noticed a significant amount of incoming traffic from a country in which the insurance company does not operate. Later on, it was determined that the same set of IP addresses from the unsupported country is sending massive amounts of requests to the portal, which has caused some performance issues on the application.
Which of the following options is the recommended solution to block the series of attacks coming from a set of determined IP ranges?
Create an inbound Network Access control list associated with explicit deny rules to block the attacking IP addresses.(Correct)
Create a custom route table associated with the web tier and block the attacking IP addresses from the Internet Gateway.
Launch the online portal on the private subnet.
Launch a Security Group with explicit deny rules to block the attacking IP addresses.
Explanation
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
The following are the basic things that you need to know about network ACLs:
- Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
- You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
- Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
- You can associate a network ACL with multiple subnets. However, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed.
- A network ACL contains a numbered list of rules. We evaluate the rules in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. We recommend that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.
- A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
- Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
The required task is to block all of the offending IP addresses from the unsupported country. In order to do this, you have to use Network Access Control List (NACL).
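A minimal sketch of adding such a deny rule with boto3 is shown below. The network ACL ID and the offending CIDR range are placeholder values.

```python
import boto3

ec2 = boto3.client("ec2")

# Deny all inbound traffic from the attacking range (placeholder values).
# Lower rule numbers are evaluated first, so this deny precedes broader allow rules.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # hypothetical NACL associated with the web subnets
    RuleNumber=90,
    Protocol="-1",               # all protocols
    RuleAction="deny",
    Egress=False,                # inbound rule
    CidrBlock="203.0.113.0/24",  # example offending IP range
)
```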
Therefore, the correct answer is: Create an inbound Network Access control list associated with explicit deny rules to block the attacking IP addresses.
The option that says: Launch the online portal on the private subnet is incorrect because if you launch the online portal on the private subnet, then it will not be accessible to the users over the public Internet anymore.
The option that says: Create a custom route table associated with the web tier and block the attacking IP addresses from the Internet Gateway is incorrect because you do not have to change the route table or make any changes to the Internet Gateway (IGW).
The option that says: Launch a Security Group with explicit deny rules to block the attacking IP addresses is incorrect because you cannot explicitly deny or block IP addresses using a security group.
References:
Check out this Amazon VPC Cheat Sheet:
Check out this security group and network access control list comparison:
Question 42: Skipped
A data analytics company has recently adopted a hybrid cloud infrastructure with AWS. They are in the business of collecting and processing vast amounts of data. Each data set generates up to several thousands of files which can range from 10 MB to 1 GB in size. The archived data is rarely restored and in case there is a request to retrieve it, the company has a maximum of 24 hours to send the files. The data sets can be searched using its file ID, set name, authors, tags, and other criteria.
Which of the following options provides the most cost-effective architecture to meet the above requirements?
1. Store the files of the completed data sets into a single S3 bucket.
2. Store the S3 object key for the compressed files along with other search metadata in a DynamoDB table.
3. For retrieving the data, query the DynamoDB table for files that match the search criteria and then restore the files from the S3 bucket.
1. For each completed data set, compress and concatenate all of the files into a single Glacier archive.
2. Store the associated archive ID for the compressed files along with other search metadata in a DynamoDB table.
3. For retrieving the data, query the DynamoDB table for files that match the search criteria and then restore the files from the retrieved archive ID.
(Correct)
1. Store individual compressed files to an S3 bucket. Also store the search metadata and the S3 object key of the files in a separate S3 bucket.
2. Create a lifecycle rule to move the data from an S3 Standard class to Glacier after one month.
3. For retrieving the data, query the S3 bucket for files matching the search criteria and then retrieve the file from the other S3 bucket.
1. Store individual files in Glacier using the filename as the archive name.
2. For retrieving the data, query the Glacier vault for files matching the search criteria.
Explanation
You can further lower the cost of storing data by compressing it to a zip or tar file. In addition, searching for archives in Glacier takes a long time, which is why it is advisable to store the search criteria and archive ID in a database for faster search. You can alternatively use Glacier Select to perform filtering operations using simple Structured Query Language (SQL).
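The sketch below outlines this pattern with boto3, assuming a hypothetical vault name, DynamoDB table, and attribute names.

```python
import boto3

glacier = boto3.client("glacier")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("DataSetCatalog")  # hypothetical table keyed on set_name


def archive_data_set(set_name, authors, tags, archive_path):
    # Upload the compressed, concatenated data set as a single Glacier archive
    with open(archive_path, "rb") as body:
        result = glacier.upload_archive(accountId="-", vaultName="data-set-archives", body=body)

    # Store the archive ID together with the searchable metadata in DynamoDB
    table.put_item(Item={
        "set_name": set_name,
        "archive_id": result["archiveId"],
        "authors": authors,
        "tags": tags,
    })


def request_restore(set_name):
    # Look up the archive ID via a fast DynamoDB query, then start a Glacier retrieval job
    item = table.get_item(Key={"set_name": set_name})["Item"]
    job = glacier.initiate_job(
        accountId="-",
        vaultName="data-set-archives",
        jobParameters={"Type": "archive-retrieval", "ArchiveId": item["archive_id"]},
    )
    return job["jobId"]
```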
In this scenario, this option provides a more cost-effective option for the given architecture:
1. For each completed data set, compress and concatenate all of the files into a single Glacier archive.
2. Store the associated archive ID for the compressed files along with other search metadata in a DynamoDB table.
3. For retrieving the data, query the DynamoDB table for files that match the search criteria and then restore the files from the retrieved archive ID.
The following option is incorrect because storing the data in an S3 Standard class is costly. It is more cost-effective to use Amazon Glacier instead.
1. Store the files of the completed data sets into a single S3 bucket. 2. Store the S3 object key for the compressed files along with other search metadata in a DynamoDB table. 3. For retrieving the data, query the DynamoDB table for files that match the search criteria and then restore the files from the S3 bucket.
The following option is incorrect because initially storing the archive to an S3 Standard class for a month still entails additional cost. Since the company allows a maximum of 24 hours to retrieve the files, you can directly store the archives in Glacier instead:
1. Store individual compressed files to an S3 bucket. Also store the search metadata and the S3 object key of the files in a separate S3 bucket. 2. Create a lifecycle rule to move the data from an S3 Standard class to Glacier after one month. 3. For retrieving the data, query the S3 bucket for files matching the search criteria and then retrieve the file from the other S3 bucket.
The following option is incorrect because Glacier doesn't have a built-in search function to help you retrieve the data. You have to store the archive ID in a database, such as DynamoDB, to help you effectively search the required data:
1. Store individual files in Glacier using the filename as the archive name. 2. For retrieving the data, query the Glacier vault for files matching the search criteria.
References:
Check out this Amazon S3 Glacier Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 43: Skipped
A company develops cloud-native applications and uses AWS CloudFormation templates for deploying applications in AWS. The application artifacts and templates are stored in an Amazon S3 bucket with versioning enabled. The developers use Amazon EC2 instances with integrated development environments (IDEs) installed to download, modify, and re-upload the artifacts to the S3 bucket. The unit testing is done locally on the EC2 instances. The company wants to improve the existing deployment process with a CI/CD pipeline to help the developers be more productive. The following requirements need to be satisfied:
- Utilize AWS CodeCommit as the code repository for application and CloudFormation templates.
- Have automated testing and security scanning for the generated artifacts.
- Receive a notification when unit testing fails.
- Ability to turn on/off application features and dynamically customize the deployment as part of CI/CD.
- The Lead Developer must approve changes before deploying applications to production.
Which of the following options should the solutions architect implement to meet the company's requirements?
Create an AWS CodeBuild job to run tests and security scans on the generated artifacts. Create an Amazon EventBridge rule that will send Amazon SNS alerts when unit testing fails. Create AWS Cloud Development Kit (AWS CDK) constructs with a manifest file to turn on/off features of the AWS CDK app. Add a manual approval stage on the pipeline for the Lead Developer’s approval prior to production deployment.
(Correct)
Write an AWS Lambda function to run unit tests and security scans on the generated artifacts. Add another Lambda trigger on the next pipeline stage to notify the developers if the unit testing fails. Create AWS Amplify plugins to allow turning on/off of application features. Add an AWS SES action on the pipeline to send an approval message to the Lead Developer prior to production deployment.
Use AWS CodeArtifact to store generated artifacts. AWS scans the artifacts for common vulnerabilities and allows custom actions to run the unit tests. Create an Amazon CloudWatch rule that will send Amazon SNS alerts when unit testing fails. Use different Docker images for choosing different application features. Add a manual approval stage on the pipeline for the Lead Developer’s approval prior to production deployment.
Create a Jenkins job to run tests and security scans on the generated artifacts. Create an Amazon EventBridge rule that will send Amazon SES alerts when unit testing fails. Use AWS CloudFormation with nested stacks to allow turning on/off of application features. Add an AWS Lambda function to the pipeline to allow approval from the Lead Developer prior to production deployment.
Explanation
AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests. CodeBuild provides these benefits:
Fully managed – CodeBuild eliminates the need to set up, patch, update, and manage your own build servers.
On-demand – CodeBuild scales on demand to meet your build needs. You pay only for the number of build minutes you consume.
Out of the box – CodeBuild provides preconfigured build environments for the most popular programming languages. All you need to do is point to your build script to start your first build.
The AWS Cloud Development Kit (AWS CDK) lets you easily define applications in the AWS Cloud using your programming language of choice. But creating an application is just the start of the journey. You also want to make changes to it and deploy them. You can do this through the Code suite of tools: AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. Together, they allow you to build what's called a deployment pipeline for your application.
In AWS CodePipeline, you can add an approval action to a stage in a pipeline at the point where you want the pipeline execution to stop so that someone with the required AWS Identity and Access Management permissions can approve or reject the action.
If the action is approved, the pipeline execution resumes. If the action is rejected—or if no one approves or rejects the action within seven days of the pipeline reaching the action and stopping—the result is the same as an action failing, and the pipeline execution does not continue.
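For reference, a pending approval action can also be approved or rejected programmatically; the hedged boto3 sketch below assumes hypothetical pipeline, stage, and action names, and a token retrieved from get_pipeline_state.

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Approve a pending manual-approval action (names and token are placeholders).
# The token comes from get_pipeline_state for the stage that is waiting on approval.
codepipeline.put_approval_result(
    pipelineName="app-release-pipeline",
    stageName="ApproveProduction",
    actionName="LeadDeveloperApproval",
    result={"summary": "Reviewed and approved by the Lead Developer", "status": "Approved"},
    token="example-approval-token",
)
```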
You might use manual approvals for these reasons:
- You want someone to perform a code review or change management review before a revision is allowed into the next stage of a pipeline.
- You want someone to perform manual quality assurance testing on the latest version of an application or to confirm the integrity of a build artifact before it is released.
- You want someone to review new or updated text that is published on a company website.
Therefore, the correct answer is: Create an AWS CodeBuild job to run tests and security scans on the generated artifacts. Create an Amazon EventBridge rule that will send Amazon SNS alerts when unit testing fails. Create AWS Cloud Development Kit (AWS CDK) constructs with a manifest file to turn on/off features of the AWS CDK app. Add a manual approval stage on the pipeline for the Lead Developer’s approval prior to production deployment. AWS CodeBuild allows automated build and testing of application artifacts. AWS CDK allows you to control application features using a manifest file and adding an approval on AWS CodePipeline will allow the Lead Developer to approve deployments.
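As a rough sketch of the notification piece of this answer, the boto3 snippet below creates an EventBridge rule that matches failed CodeBuild builds and targets an SNS topic. The rule name, project name, and topic ARN are assumptions.

```python
import json
import boto3

events = boto3.client("events")

# Match failed builds of the hypothetical unit-test CodeBuild project
events.put_rule(
    Name="unit-test-failures",
    EventPattern=json.dumps({
        "source": ["aws.codebuild"],
        "detail-type": ["CodeBuild Build State Change"],
        "detail": {
            "build-status": ["FAILED"],
            "project-name": ["app-unit-tests"],
        },
    }),
)

# Send the matched events to an SNS topic that the developers subscribe to
events.put_targets(
    Rule="unit-test-failures",
    Targets=[{"Id": "notify-devs", "Arn": "arn:aws:sns:us-east-1:123456789012:unit-test-alerts"}],
)
```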
The option that says: Write an AWS Lambda function to run unit tests and security scans on the generated artifacts. Add another Lambda trigger on the next pipeline stage to notify the developers if the unit testing fails. Create AWS Amplify plugins to allow turning on/off of application features. Add an AWS SES action on the pipeline to send an approval message to the Lead Developer prior to production deployment is incorrect. Unit testing and security scanning may take longer than 15 minutes, so AWS Lambda is not recommended here. It would be easier and much simpler to just add an approval stage on the pipeline instead of using AWS SES to send an approval message to the Lead Developer.
The option that says: Create a Jenkins job to run tests and security scans on the generated artifacts. Create an Amazon EventBridge rule that will send Amazon SES alerts when unit testing fails. Use AWS CloudFormation with nested stacks to allow turning on/off of application features. Add an AWS Lambda function to the pipeline to allow approval from the Lead Developer prior to production deployment is incorrect. Instead of creating a Jenkins job, you can use AWS CodeBuild to test your artifacts. CodeBuild is tightly integrated on AWS so it is very easy to add it to your deployment pipeline.
The option that says: Use AWS CodeArtifact to store generated artifacts. AWS scans the artifacts for common vulnerabilities and allows custom actions to run the unit tests. Create an Amazon CloudWatch rule that will send Amazon SNS alerts when unit testing fails. Use different Docker images for choosing different application features. Add a manual approval stage on the pipeline for the Lead Developer’s approval prior to production deployment is incorrect. AWS CodeArtifact is a fully managed artifact repository service that allows organizations to securely store, publish, and share software packages used in their software development process. It works with common package managers and build tools like Maven, Gradle, npm, yarn, twine, and pip. It does not allow the user to configure custom actions or scripts to perform unit tests on artifacts. It is recommended to use AWS CodeBuild for this scenario.
References:
Check out these AWS CodeBuild and AWS CodePipeline Cheat Sheets:
Question 44: Skipped
A digital media publishing company hired a solutions architect to manage its online portal, which is deployed on Amazon EC2 instances. The architecture uses a combination of Reserved EC2 Instances to handle the steady-state load and On-Demand EC2 Instances to handle the peak load. Currently, the web servers operate at 90% utilization during peak load.
Which of the following is the most cost-effective option to enable the online portal to quickly recover in the event of a zone outage?
Create an Auto Scaling group of Spot instances on multiple Availability Zones. Attach an Application Load Balancer to the group.
Create an Auto Scaling group of On-Demand instances on multiple Availability Zones. Attach an Application Load Balancer to the group.
Launch a Spot Fleet of On-demand and Spot instances across multiple Availability Zones. Attach an Application Load Balancer to the fleet.
(Correct)
Launch a Spot Fleet of Spot instances across multiple Availability Zones. Attach an Application Load Balancer to the fleet.
Explanation
A Spot Fleet is a set of Spot Instances and optionally On-Demand Instances that are launched based on criteria that you specify. The Spot Fleet selects the Spot capacity pools that meet your needs and launches Spot Instances to meet the target capacity for the fleet. By default, Spot Fleets are set to maintain target capacity by launching replacement instances after Spot Instances in the fleet are terminated. You can submit a Spot Fleet as a one-time request, which does not persist after the instances have been terminated. You can include On-Demand Instance requests in a Spot Fleet request.
To avoid interruption to your Spot instances, you can actually set up a diversified spot fleet allocation strategy in which you are using a range of different EC2 instance types such as c3.2xlarge, m3.xlarge, r3.xlarge et cetera instead of just one type. This will effectively increase the chances of providing a more stable compute capacity to your application. Therefore, in the event that there is a Spot interruption due to the high demand for a specific instance type, say c3.2xlarge, your application could still scale using another instance type, such as m3.xlarge or r3.xlarge.
In the scenario, using a mix of On-demand and Spot instances within a Spot Fleet across multiple Availability Zones strikes the right balance. The On-demand instances primarily handle the steady-state load, while additional Spot instances are launched to help with the increased load during peak times, keeping costs down. The fleet setup automatically adjusts to maintain service even if an Availability Zone goes down, ensuring that the system is both cost-effective and resilient.
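A hedged boto3 sketch of such a fleet request is shown below; the AMI ID, subnet IDs, fleet role ARN, and capacity numbers are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# A diversified fleet spread across instance types and Availability Zones.
# The AMI, subnets, and IAM fleet role ARN are placeholders.
ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
        "AllocationStrategy": "diversified",
        "TargetCapacity": 10,
        "OnDemandTargetCapacity": 4,   # steady-state load served by On-Demand capacity
        "Type": "maintain",            # replace interrupted Spot Instances automatically
        "LaunchSpecifications": [
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": "c5.large", "SubnetId": "subnet-aaa111"},
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": "m5.large", "SubnetId": "subnet-bbb222"},
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": "r5.large", "SubnetId": "subnet-ccc333"},
        ],
    }
)
```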
Hence, the correct answer is: Launch a Spot Fleet of On-demand and Spot instances across multiple Availability Zones. Attach an Application Load Balancer to the fleet.
The option that says: Create an Auto Scaling group of Spot instances on multiple Availability Zones. Attach an Application Load Balancer to the group is incorrect. While being highly cost-effective due to the use of Spot instances, it could lead to availability issues since Spot instances can be reclaimed by AWS when spot market prices exceed the bid price.
The option that says: Create an Auto Scaling group of On-Demand instances on multiple Availability Zones. Attach an Application Load Balancer to the group is incorrect. This approach ensures that the portal can handle peak loads without the risk of losing instances unexpectedly. However, the cost of On-Demand instances is significantly higher than Spot instances, making this option less cost-effective for managing fluctuating or unpredictable workloads.
The option that says: Launch a Spot Fleet of Spot instances across multiple Availability Zones. Attach an Application Load Balancer to the fleet is incorrect. Like the other incorrect option, this solution fully relies on Spot instances and, while being cheaper, is more susceptible to service disruption due to Spot market fluctuations.
References:
Check out this Amazon EC2 Cheat Sheet:
Question 48: Skipped
A company has a large collection of user-submitted stock photos. An AWS Lambda function processes and extracts metadata from these photos to make a searchable catalog. The metadata is extracted depending on several rules and the output is sent to an Amazon ElastiCache for Redis cluster. The metadata extraction is done in several batches and the whole process takes about 45 minutes to complete. Whenever there is a change in the metadata extraction rules, the update process is triggered manually before the extraction process starts. As the stock photo submissions are steadily growing, the company wants to reduce the metadata extraction time for its catalog.
Which of the following options should the Solutions Architect implement to reduce the time for the metadata extraction process?
Split the single Lambda function that processes the photos into several functions dedicated for each type of metadata. Associate the Lambda functions to an AWS Batch compute environment. Write another Lambda function that will retrieve the list of photos for processing and send each item to the job queue in the AWS Batch compute environment.
Split the single Lambda function that processes the photos into several functions dedicated for each type of metadata. Create a workflow on AWS Step Functions that will run multiple Lambda functions in parallel. Write another Lambda function that will retrieve the list of photos for processing and send each item to an Amazon SQS queue. Set this SQS queue as the input for the Step Functions workflow.
Split the single Lambda function that processes the photos into several functions dedicated for each type of metadata. Write another Lambda function that will retrieve the list of photos for processing and send each item to an Amazon SQS queue. Configure all the Lambda extraction functions to subscribe to this SQS queue with higher batch size.
Split the single Lambda function that processes the photos into several functions dedicated for each type of metadata. Create a workflow on AWS Step Functions that will run multiple Lambda functions in parallel. Create another workflow that will retrieve the list of photos for processing and execute the metadata extraction workflow for each photo.
(Correct)
Explanation
AWS Step Functions can be used to run a serverless workflow that coordinates multiple AWS Lambda functions. AWS Step Functions is a serverless function orchestrator that makes it easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications. Through its visual interface, you can create and run a series of checkpointed and event-driven workflows that maintain the application state. The output of one step acts as an input to the next. Each step in your application executes in order, as defined by your business logic.
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code as a ZIP file or container image, and Lambda automatically and precisely allocates compute execution power and runs your code based on the incoming request or event, for any scale of traffic.
In this scenario, you can quickly make changes to the whole process depending on the extraction rules if you have dedicated Lambda functions for each type of metadata. Technically, you can create one Lambda function to call the other Lambda metadata extraction functions. However, it is quite challenging to orchestrate the data flow of the Lambda functions as the number of functions grow. Plus, any change in the flow of the application will require changes in multiple places, and you could end up writing the same code over and over again.
To solve this challenge, you can use AWS Step Functions. This is a serverless orchestration service that lets you easily coordinate multiple Lambda functions into flexible workflows that are easy to debug and easy to change. Step Functions will keep your Lambda functions free of additional logic by triggering and tracking each step of your application for you.
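As a rough sketch of this approach, the snippet below (Python with boto3) defines a small state machine with a Parallel state that runs two hypothetical metadata-extraction Lambda functions and then starts an execution for a single photo. All ARNs, bucket names, and keys are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Run the per-type extraction functions in parallel for a single photo
# (Lambda ARNs and role ARN are placeholders).
definition = {
    "StartAt": "ExtractMetadata",
    "States": {
        "ExtractMetadata": {
            "Type": "Parallel",
            "End": True,
            "Branches": [
                {"StartAt": "ExifData", "States": {"ExifData": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract-exif",
                    "End": True}}},
                {"StartAt": "Labels", "States": {"Labels": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract-labels",
                    "End": True}}},
            ],
        }
    },
}

state_machine = sfn.create_state_machine(
    name="photo-metadata-extraction",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/states-execution-role",
)

# The outer workflow (or a Lambda function) then starts one execution per photo
sfn.start_execution(
    stateMachineArn=state_machine["stateMachineArn"],
    input=json.dumps({"bucket": "stock-photos", "key": "photos/12345.jpg"}),
)
```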
Therefore, the correct answer is: Split the single Lambda function that processes the photos into several functions dedicated for each type of metadata. Create a workflow on AWS Step Functions that will run multiple Lambda functions in parallel. Create another workflow that will retrieve the list of photos for processing and execute the metadata extraction workflow for each photo.
The option that says: Split the single Lambda function that processes the photos into several functions dedicated for each type of metadata. Associate the Lambda functions to an AWS Batch compute environment. Write another Lambda function that will retrieve the list of photos for processing and send each item to the job queue in the AWS Batch compute environment is incorrect. AWS Batch only adds complexity to the solution. AWS Batch is designed to easily and efficiently run hundreds of thousands of batch computing jobs on AWS but not with Lambda functions.
The option that says: Split the single Lambda function that processes the photos into several functions dedicated for each type of metadata. Create a workflow on AWS Step Functions that will run multiple Lambda functions in parallel. Write another Lambda function that will retrieve the list of photos for processing and send each item to an Amazon SQS queue. Set this SQS queue as the input for the Step Functions workflow is incorrect. This might be possible if you have a Lambda function that consumes the messages from the SQS queue and feeds them to the Step Functions workflow, which is not explicitly stated in this option. An SQS queue cannot be used as a direct input for an AWS Step Functions workflow.
The option that says: Split the single Lambda function that processes the photos into several functions dedicated for each type of metadata. Write another Lambda function that will retrieve the list of photos for processing and send each item to an Amazon SQS queue. Configure all the Lambda extraction functions to subscribe to this SQS queue with higher batch size is incorrect. This may be possible but it will not work for this scenario. When a Lambda function processes the photo for a specific type of metadata, it will become invisible on the queue, and other Lambda functions will not be able to process the same photo. This defeats the purpose of multiple Lambda functions that are supposed to process the photo in parallel.
References:
Check out the AWS Step Functions Cheat Sheet:
Question 51: Skipped
A global data analytics firm has various data centers from different countries all over the world. The staff regularly upload analytics, financial, and regulatory files from each of their respective data centers to a web portal deployed in AWS, which uses an S3 bucket named global-analytics-reports-bucket to durably store the data. The staff download various reports from a CloudFront distribution which uses the global-analytics-reports-bucket S3 bucket as the origin. The security team noticed that the staff are using both the CloudFront link and the direct Amazon S3 URLs to download the reports. The security team sees this as a security risk and recommended implementing a way to prevent anyone from bypassing CloudFront and using the direct Amazon S3 URLs.
Which of the following options should the solutions architect implement to meet the above requirement?
1. Configure the distribution to use Signed URLs. 2. Create a special CloudFront user called an origin access identity (OAI). 3. Give the origin access identity permission to read the objects in your bucket.
1. Set up a field-level encryption configuration in the CloudFront distribution. 2. Remove anyone else's permission to use Amazon S3 URLs to read the objects.
1. Create a special CloudFront user called an origin access identity (OAI) and associate it with your CloudFront distribution.
2. Give the origin access identity permission to read the objects in your bucket.
3. Remove anyone else's permission to use Amazon S3 URLs to read the objects.
(Correct)
1. In your CloudFront distribution, use a custom SSL instead of the default SSL. 2. Remove anyone else's permission to use Amazon S3 URLs to read the objects.
Explanation
You can optionally secure the content in your Amazon S3 bucket so that users can access it through CloudFront but cannot access it directly by using Amazon S3 URLs. This prevents someone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. This step isn't required to use signed URLs, but is recommended by AWS. Be aware that this option is only available if you have not set up your Amazon S3 bucket as a website endpoint.
To require that users access your content through CloudFront URLs, you do the following tasks:
- Create a special CloudFront user called an origin access identity (OAI) and associate it with your CloudFront distribution.
- Give the origin access identity permission to read the files in your bucket.
- Remove permission for anyone else to use Amazon S3 URLs to read the files.
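A hedged sketch of the resulting bucket policy, applied with boto3, is shown below. The OAI ID is a placeholder; the bucket name is taken from the scenario.

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow only the CloudFront origin access identity to read objects; with public
# access blocked and no other grants, direct S3 URLs stop working for the staff.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE12345"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::global-analytics-reports-bucket/*",
    }],
}

s3.put_bucket_policy(Bucket="global-analytics-reports-bucket", Policy=json.dumps(policy))
```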
In this scenario, the main objective is to prevent the staff from using the direct Amazon S3 URLs to download the reports. The best solution that you can choose here is to use an origin access identity (OAI) and remove anyone else's permission to use the S3 URLs to read the objects. Hence, the correct answer is the following option:
1. Create a special CloudFront user called an origin access identity (OAI) and associate it with your CloudFront distribution.
2. Give the origin access identity permission to read the objects in your bucket.
3. Remove anyone else's permission to use Amazon S3 URLs to read the objects.
The following option is incorrect because SSL is not needed in this particular scenario:
1. In your CloudFront distribution, use a custom SSL instead of the default SSL.
2. Remove anyone else's permission to use Amazon S3 URLs to read the objects.
What you need to implement is an OAI.
The following option is incorrect because the field-level encryption configuration is mainly used for safeguarding sensitive fields in your CloudFront, and not suitable for this scenario:
1. Set up a field-level encryption configuration in the CloudFront distribution.
2. Remove anyone else's permission to use Amazon S3 URLs to read the objects.
The following option is incorrect because although it is recommended to use Signed URLs and an OAI in CloudFront, this option is still missing the crucial step of removing anyone else's permission to use the S3 URLs to read the objects:
1. Configure the distribution to use Signed URLs.
2. Create a special CloudFront user called an origin access identity (OAI).
3. Give the origin access identity permission to read the objects in your bucket.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 52: Skipped
A company wants to release a weather forecasting app for mobile users. The application servers generate a weather forecast every 15 minutes, and each forecast update overwrites the older forecast data. Each weather forecast outputs approximately 1 billion unique data points, where each point is about 20 bytes in size. This results in about 20 GB of data for each forecast. Approximately 1,500 global users access the forecast data concurrently every second, and this traffic can spike up to 10 times more during weather events. The company wants users to have a good experience when using the weather forecast application, so it requires that each user query be processed in less than two seconds.
Which of the following solutions will meet the required application request rate and response time?
Create an Amazon S3 bucket to store the weather forecast data points as individual objects. Create a fleet of Auto Scaling Amazon EC2 instances behind an Elastic Load Balancer to query the objects on the S3 bucket. Create an Amazon CloudFront distribution and point the origin to the ELB. Configure a 15-minute cache-control timeout for the CloudFront distribution.
Create an Amazon OpenSearch cluster to store the weather forecast data points. Write AWS Lambda functions to query the OpenSearch cluster. Create an Amazon CloudFront distribution and point the origin to an Amazon API Gateway endpoint that invokes the Lambda functions. Write an Amazon Lambda@Edge function to cache the data points on edge locations for a 15-minute duration.
Use an Amazon EFS volume to store the weather forecast data points. Mount this EFS volume on a fleet of Auto Scaling Amazon EC2 instances behind an Elastic Load Balancer. Create an Amazon CloudFront distribution and point the origin to the ELB. Configure a 15-minute cache-control timeout for the CloudFront distribution.
(Correct)
Create an Amazon OpenSearch cluster to store the weather forecast data points. Write AWS Lambda functions to query the OpenSearch cluster. Create an Amazon CloudFront distribution and point the origin to an Amazon API Gateway endpoint that invokes the Lambda functions. Configure a cache-control timeout of 15 minutes in the API caching section of the API Gateway stage.
Explanation
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Amazon EFS can provide very low and consistent operational latency as well as a throughput scale of 10+GB per second.
Amazon EFS file systems are distributed across an unconstrained number of storage servers. This distributed data storage design enables file systems to grow elastically to petabyte scale and enables massively parallel access from Amazon EC2 instances to your data. The Amazon EFS-distributed design avoids the bottlenecks and constraints inherent to traditional file servers.
This distributed data storage design means that multithreaded applications and applications that concurrently access data from multiple Amazon EC2 instances can drive substantial levels of aggregate throughput and IOPS. Big data and analytics workloads, media processing workflows, content management, and web serving are examples of these applications. In addition, Amazon EFS data is distributed across multiple Availability Zones, providing a high level of durability and availability.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront can provide a reliable, low latency, and high throughput network connectivity for global users.
To deliver content to end-users with lower latency, Amazon CloudFront peers with thousands of Tier 1/2/3 telecom carriers globally, is well connected with all major access networks for optimal performance, and has hundreds of terabits of deployed capacity.
Therefore, the correct answer is: Use an Amazon EFS volume to store the weather forecast data points. Mount this EFS volume on a fleet of Auto Scaling Amazon EC2 instances behind an Elastic Load Balancer. Create an Amazon CloudFront distribution and point the origin to the ELB. Configure a 15-minute cache-control timeout for the CloudFront distribution.
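One hedged way to express the 15-minute caching window is a CloudFront cache policy with a 900-second TTL, sketched below with boto3. The policy name is a placeholder.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Cache forecast responses at the edge for 15 minutes (900 seconds).
cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "forecast-15-minute-cache",  # hypothetical policy name
        "Comment": "Cache weather forecast responses for one forecast cycle",
        "DefaultTTL": 900,
        "MaxTTL": 900,
        "MinTTL": 0,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "HeadersConfig": {"HeaderBehavior": "none"},
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "all"},
        },
    }
)
```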
The option that says: Create an Amazon OpenSearch cluster to store the weather forecast data points. Write AWS Lambda functions to query the OpenSearch cluster. Create an Amazon CloudFront distribution and point the origin to an Amazon API Gateway endpoint that invokes the Lambda functions. Configure a cache-control timeout of 15 minutes in the API caching section of the API Gateway stage is incorrect. This is a possible implementation, but the Lambda functions won’t be able to quickly scale and serve requests during peak traffic. By default, the burst concurrency for Lambda functions is between 500-3000 requests per second (depending on region).
The option that says: Create an Amazon OpenSearch cluster to store the weather forecast data points. Write AWS Lambda functions to query the OpenSearch cluster. Create an Amazon CloudFront distribution and point the origin to an Amazon API Gateway endpoint that invokes the Lambda functions. Write an Amazon Lambda@Edge function to cache the data points on edge locations for a 15-minute duration is incorrect. This is a good solution for caching; however, this solution will not be able to serve the expected peak traffic during weather events. Lambda@Edge can serve only up to 10,000 requests per second.
The option that says: Create an Amazon S3 bucket to store the weather forecast data points as individual objects. Create a fleet of Auto Scaling Amazon EC2 instances behind an Elastic Load Balancer to query the objects on the S3 bucket. Create an Amazon CloudFront distribution and point the origin to the ELB. Configure a 15-minute cache-control timeout for the CloudFront distribution is incorrect. This solution is not recommended. Access from the EC2 instances to the S3 bucket via HTTP calls will increase the total response time of the application. Access to EFS is faster compared to calling objects on S3 buckets. Although Amazon S3 supports strong read-after-write consistency, billions of objects will be overwritten on the S3 bucket every 15 minutes, which could take longer to write than using Amazon EFS.
References:
Check out the Amazon EFS and comparison Cheat Sheets:
Question 53: Skipped
A cryptocurrency exchange company has recently signed up for a 3rd party online auditing system, which is also using AWS, to perform regulatory compliance audits on their cloud systems. The online auditing system needs to access certain AWS resources in your network to perform the audit.
In this scenario, which of the following approaches is the most secure way of providing access to the 3rd party online auditing system?
Create a new IAM role for cross-account access which allows the online auditing system account to assume the role. Assign it a policy that allows only the actions required for the compliance audit.(Correct)
Create a new IAM role for cross-account access which allows the online auditing system account to assume the role. Assign a policy that allows full and unrestricted access to all AWS resources.
Create a new IAM user and assign a user policy to the IAM user that allows only the actions required by the online audit system. Create a new access and secret key for the IAM user and provide these credentials to the 3rd party auditing company.
Create a new IAM user and assign a user policy to the IAM user that allows full and unrestricted access to all AWS resources. Create a new access and secret key for the IAM user and provide these credentials to the 3rd party auditing company.
Explanation
An IAM role is an IAM identity that you can create in your account that has specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session.
You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don't usually have, or grant users in one AWS account access to resources in another account. Or you might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app (where they can be difficult to rotate and where users can potentially extract them). Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources.
To allow users from one AWS account to access resources in another AWS account, create a role that defines who can access it and what permissions it grants to users that switch to it. Use IAM roles to delegate access within or between AWS accounts. By setting up cross-account access, you don't need to create individual IAM users in each account in order to provide access to different AWS accounts.
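A minimal boto3 sketch of such a cross-account role is shown below. The auditor's account ID, the external ID, and the role name are placeholders, and the SecurityAudit managed policy is used only as an example of a narrowly scoped, read-only policy.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the auditing system's AWS account may assume this role
# (account ID and external ID are placeholders).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "audit-external-id"}},
    }],
}

iam.create_role(
    RoleName="ComplianceAuditRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach a read-only, audit-scoped policy instead of full access
iam.attach_role_policy(
    RoleName="ComplianceAuditRole",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)
```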
The option that says: Create a new IAM role for cross-account access which allows the online auditing system account to assume the role. Assign it a policy that allows only the actions required for the compliance audit is correct because it uses an IAM role and only provides the needed access, which adheres to the principle of least privilege.
The option that says: Create a new IAM role for cross-account access which allows the online auditing system account to assume the role. Assign a policy that allows full and unrestricted access to all AWS resources is incorrect. Although you are right to use an IAM role, you should provide only the access needed by the 3rd party audit system and not a full, unrestricted access to all AWS resources.
The option that says: Create a new IAM user and assign a user policy to the IAM user that allows only the actions required by the online audit system. Create a new access and secret key for the IAM user and provide these credentials to the 3rd party auditing company is incorrect as you need to use an IAM role instead of an IAM user.
The option that says: Create a new IAM user and assign a user policy to the IAM user that allows full and unrestricted access to all AWS resources. Create a new access and secret key for the IAM user and provide these credentials to the 3rd party auditing company is incorrect as you need to use an IAM role instead of an IAM user. In addition, you should provide only the access needed by the 3rd party audit system and not a full, unrestricted access to all AWS resources.
References:
Check out this AWS IAM Cheat Sheet:
Question 54: Skipped
A company hosts a serverless application on AWS using Amazon API Gateway and AWS Lambda with Amazon DynamoDB as the backend database. The application has a feature that allows users to create posts and reply to comments based on different topics. The API model currently uses the following methods:
- GET /posts/[postid] – used to get details about the post
- GET /users/[userid] – used to get details about a user
- GET /comments/[commentid] – used to get details of a comment
The application does not use API keys for request authorization. To increase user engagement on the web app, the company wants to reduce comment latency by making the comments appear in real-time.
Which of the following solutions should be implemented to meet the requirements and improve user experience?
Leverage AWS AppSync by building GraphQL APIs and using Websockets to deliver comments in real-time.
(Correct)
Create a distribution on Amazon CloudFront and use edge-optimized APIs. Cache API responses in CloudFront to improve comment latency.
Update the application code to call the GET /comments/[commentid] API every 3 seconds to show comments in real-time without sacrificing performance.
Lower the API response time of the Lambda functions by increasing the concurrency limit. This allows functions to run in parallel to deliver the comments in real-time.
Explanation
AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like Amazon DynamoDB, Lambda, and more. Adding caches to improve performance, subscriptions to support real-time updates, and client-side data stores that keep offline clients in sync are just as easy. Once deployed, AWS AppSync automatically scales your GraphQL API execution engine up and down to meet API request volumes.
With managed GraphQL subscriptions, AWS AppSync can push real-time data updates over Websockets to millions of clients. For mobile and web applications, AppSync also provides local data access when devices go offline, and data synchronization with customizable conflict resolution, when they are back online.
AppSync supports real-time chat applications. You can build conversational mobile or web applications that support multiple private chat rooms, offer access to conversation history, and queue outbound messages, even when a device is offline.
AppSync can also be used for real-time collaboration. You can broadcast data from the backend to all connected clients (one-to-many) or between clients (many-to-many), such as in a second screen scenario where you broadcast the same data to all clients, who can then reply.
Therefore, the correct answer is: Leverage AWS AppSync by building GraphQL APIs and using Websockets to deliver comments in real-time. AWS AppSync can push real-time data updates over Websockets. This can automatically scale to millions of client requests.
The option that says: Create a distribution on Amazon CloudFront and use edge-optimized APIs. Cache API responses in CloudFront to improve comment latency is incorrect. Caching API responses can improve the loading of comments, however, this may not show the comments in real-time if the new comments are not yet on the cache.
The option that says: Update the application code to call the GET /comments/[commentid] API every 3 seconds to show comments in real-time without sacrificing performance is incorrect. This is possible but will put a heavy burden on your Lambda function executions. Even if there are no new comments, the API will still be called. This will add unnecessary cost to function executions.
The option that says: Lower the API response time of the Lambda functions by increasing the concurrency limit. This allows functions to run in parallel to deliver the comments in real-time is incorrect. Increasing the concurrency limit will not help in showing the comments in real-time. If you don't reach the Lambda concurrency limit regularly, there is no apparent advantage for this option.
References:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 56: Skipped
A company plans to migrate its on-premises legacy application to AWS and develop a highly scalable application. Currently, all user requests are sent to the on-premises load balancer which forwards the requests to two Linux servers hosting the legacy application. The database is hosted on two servers in master-master configuration. Since this is an old application, the communication to the database servers is done through static IP addresses and not via DNS names. The license of the application is tied to the MAC address of the network adapter of the Linux server. If the application is to be installed on a new server, it will take about 15 hours for the software vendor to send the new license via email.
Which combination of actions must be done to meet the company requirements? (Select TWO.)
Provision a pool of Elastic Network Interfaces (ENIs). Request a license file for each ENI from the software vendor. Store the license files inside an Amazon EC2 instance and create a base AMI from this EC2 instance. Use bootstrap scripts to configure license keys and attach the corresponding ENI when provisioning EC2 instances.
Install the application on an EC2 instance and configure the needed license file. Update the local configuration with the database IP addresses. Use this instance as the base AMI for all instances in the Auto Scaling group.
Create an AWS Lambda function to update the database IP addresses on the Systems Manager Parameter Store. Create an Amazon EC2 bootstrap script that will retrieve the database IP address from SSM Parameter Store. Update the local configuration files with the parameters.
(Correct)
Provision a pool of Elastic Network Interfaces (ENIs). Request a license file for each ENI from the software vendor. Store the license files on an Amazon S3 bucket and use bootstrap scripts to retrieve an unused license file and attach corresponding ENI when provisioning EC2 instances.
(Correct)
Create an Amazon EC2 bootstrap script that will resolve the database DNS names into IP addresses. Update the local configuration files with the resolved values.
Explanation
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data. You can reference Systems Manager parameters in your scripts, commands, SSM documents, and configuration and automation workflows by using the unique name that you specified when you created the parameter.
Parameter Store offers these benefits:
- You can use a secure, scalable, hosted secrets management service with no servers to manage.
- Improve your security posture by separating your data from your code.
- Store configuration data and encrypted strings in hierarchies and track versions.
- Control and audit access at granular levels.
Parameter Store provides support for three types of parameters: String, StringList, and SecureString.
An elastic network interface (ENI) is a logical networking component in a VPC that represents a virtual network card. It can include the following attributes:
- A primary private IPv4 address from the IPv4 address range of your VPC
- One or more secondary private IPv4 addresses from the IPv4 address range of your VPC
- One Elastic IP address (IPv4) per private IPv4 address
- One public IPv4 address
- One or more IPv6 addresses
- One or more security groups
- A MAC address
- A source/destination check flag
- A description
You can create and configure network interfaces in your account and attach them to instances in your VPC. Your account might also have requester-managed network interfaces, which are created and managed by AWS services to enable you to use other resources and services.
You can create a network interface, attach it to an instance, detach it from an instance, and attach it to another instance. The attributes of a network interface follow it as it's attached or detached from an instance and reattached to another instance. When you move a network interface from one instance to another, network traffic is redirected to the new instance.
Each instance has a default network interface, called the primary network interface. You cannot detach a primary network interface from an instance. You can create and attach additional network interfaces.
The option that says: Provision a pool of Elastic Network Interfaces (ENIs). Request a license file for each ENI from the software vendor. Store the license files on an Amazon S3 bucket and use bootstrap scripts to retrieve an unused license file and attach corresponding ENI when provisioning EC2 instances is correct. Having the license files on an Amazon S3 bucket reduces the management overhead for the EC2 instances, as you can easily add/remove more license keys if needed.
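A rough bootstrap sketch of that flow in Python with boto3 is shown below; the ENI tag convention, bucket name, and key naming scheme are assumptions.

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")


def attach_licensed_eni(instance_id):
    # Find an unattached ENI from the pre-licensed pool (filtered by a hypothetical tag)
    enis = ec2.describe_network_interfaces(
        Filters=[
            {"Name": "tag:Purpose", "Values": ["legacy-app-license"]},
            {"Name": "status", "Values": ["available"]},
        ]
    )["NetworkInterfaces"]
    eni = enis[0]

    # Attach the ENI; its fixed MAC address keeps the vendor license valid
    ec2.attach_network_interface(
        NetworkInterfaceId=eni["NetworkInterfaceId"],
        InstanceId=instance_id,
        DeviceIndex=1,
    )

    # Download the license file that was issued for this ENI's MAC address
    s3.download_file(
        "legacy-app-licenses",                # hypothetical bucket
        f"licenses/{eni['MacAddress']}.lic",  # hypothetical key convention
        "/opt/legacy-app/license.lic",
    )
```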
The option that says: Create an AWS Lambda function to update the database IP addresses on the Systems Manager Parameter Store. Create an Amazon EC2 bootstrap script that will retrieve the database IP address from SSM Parameter Store. Update the local configuration files with the parameters is correct. Having the database IP addresses on Parameter Store ensures that all the EC2 instances will have a central location to retrieve the IP addresses. This also reduces the need to constantly update any script from inside the EC2 instance even if you add/remove more databases in the future.
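The Parameter Store lookup in the bootstrap script might look like the hedged sketch below; the parameter names and configuration file path are assumptions.

```python
import boto3

ssm = boto3.client("ssm")

# Retrieve the current master-master database IP addresses (hypothetical parameter names)
response = ssm.get_parameters(
    Names=["/legacy-app/db/primary-ip", "/legacy-app/db/secondary-ip"]
)
db_ips = {p["Name"]: p["Value"] for p in response["Parameters"]}

# Render the values into the application's local configuration file
with open("/opt/legacy-app/db.conf", "w") as conf:
    conf.write(f"primary={db_ips['/legacy-app/db/primary-ip']}\n")
    conf.write(f"secondary={db_ips['/legacy-app/db/secondary-ip']}\n")
```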
The option that says: Provision a pool of Elastic Network Interfaces (ENIs). Request a license file for each ENI from the software vendor. Store the license files inside an Amazon EC2 instance and create a base AMI from this EC2 instance. Use bootstrap scripts to configure license keys and attach the corresponding ENI when provisioning EC2 instances is incorrect. Although this is possible, this is not an ideal solution. If you need more EC2 instances and more ENIs, you will have to manually update your base AMI to include all the new license files. This can be a lot of work if you scale your cluster on a regular basis.
The option that says: Create an Amazon EC2 bootstrap script that will resolve the database DNS names into IP addresses. Update the local configuration files with the resolved values is incorrect. Although this is possible, this is not recommended. You will have to update the bootstrap script manually for the new DNS name every time you create a new database such as when you are scaling out your database instances.
The option that says: Install the application on an EC2 instance and configure the needed license file. Update the local configuration with the database IP addresses. Use this instance as the base AMI for all instances in the Auto Scaling group is incorrect. This will not work because the application license is tied to the MAC address on which it was installed. When you provision a new EC2 instance, it will have a new IP address and a newly assigned MAC address for its network adapter.
References:
Check out these AWS SSM Parameter Store and Amazon EC2 Cheat Sheets:
Question 60: Skipped
A company has performed a security audit on its existing application. It was determined that the application retrieves the Amazon RDS for MySQL credentials from an encrypted file in an Amazon S3 bucket. To improve the security of the application, the following should be implemented in the next application deployment:
The database credentials must be randomly generated and stored in a secure AWS managed service.
The credentials must be rotated every 90 days.
Infrastructure-as-code provisioning of application resources using AWS CloudFormation.
For the application deployment, the solutions architect will create a CloudFormation template.
Which of the following options should the solutions architect implement to meet the company’s requirement with the LEAST amount of operational overhead?
Using AWS Secrets Manager, create a secret resource and generate a secure database password. Write an AWS Lambda function to rotate the database password. On AWS CloudFormation, specify a resource for Secrets Manager RotationSchedule to rotate the password every 90 days.
(Correct)
On Systems Manager Parameter Store, create a SecureString parameter and generate a secure database password. On AWS CloudFormation, create an AWS KMS resource to rotate the database password every 90 days.
On Systems Manager Parameter Store, create a SecureString parameter and generate a secure database password. Write an AWS Lambda function to rotate the database password. On AWS CloudFormation, specify a resource for Parameter Store RotationSchedule to rotate the password every 90 days.
Using AWS Secrets Manager, create a secret resource and generate a secure database password. Write an AWS Lambda function to rotate the database password. Create a scheduled rule on Amazon EventBridge to trigger the Lambda function to rotate the database password every 90 days.
Explanation
AWS Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure the secret can't be compromised by someone examining your code because the secret no longer exists in the code. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a specified schedule. This enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise.
The following diagram illustrates the most basic scenario: you store credentials for a database in Secrets Manager and then use those credentials in an application to access the database.
The database administrator creates a set of credentials on the Personnel database for use by an application called MyCustomApp. The administrator also configures those credentials with the permissions required for the application to access the Personnel database.
The database administrator stores the credentials as a secret in Secrets Manager named MyCustomAppCreds. Then, Secrets Manager encrypts and stores the credentials within the secret as the protected secret text.
When MyCustomApp accesses the database, the application queries Secrets Manager for the secret named MyCustomAppCreds.
Secrets Manager retrieves the secret, decrypts the protected secret text, and returns the secret to the client app over a secured (HTTPS with TLS) channel.
The client application parses the credentials, connection string, and any other required information from the response and then uses the information to access the database server.
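A minimal sketch of that retrieval step using boto3, assuming the secret MyCustomAppCreds stores the credentials as a JSON document (the exact keys are an assumption):

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_name="MyCustomAppCreds"):
    """Retrieve and parse the database credentials stored in Secrets Manager."""
    response = secrets.get_secret_value(SecretId=secret_name)
    # RDS-style secrets are typically stored as a JSON document with keys
    # such as 'username', 'password', 'host', and 'port'.
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_db_credentials()
    # The application would now open a database connection using
    # creds["username"], creds["password"], creds["host"], and creds["port"].
    print(sorted(creds.keys()))
```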
AWS Secrets Manager supports many types of secrets, and it can natively rotate credentials for supported AWS databases without any additional programming. Rotating secrets for other databases or services, however, requires creating a custom Lambda function that defines how Secrets Manager interacts with the database or service.
In an AWS CloudFormation template, the AWS::SecretsManager::RotationSchedule resource can be used to set the rotation schedule and Lambda rotation function for a secret. You can create a new rotation function based on one of the Secrets Manager rotation function templates by using HostedRotationLambda, or you can choose an existing rotation function by using RotationLambdaARN.
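For illustration, the same outcome can be sketched at the API level with boto3. The secret name, rotation Lambda ARN, and excluded characters below are placeholders; in the actual solution these pieces would be declared as AWS::SecretsManager::Secret and AWS::SecretsManager::RotationSchedule resources in the CloudFormation template.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Generate a random password (CloudFormation equivalent: the GenerateSecretString
# property of an AWS::SecretsManager::Secret resource).
password = secrets.get_random_password(
    PasswordLength=32, ExcludeCharacters='"@/\\'
)["RandomPassword"]

# Store it as a secret (placeholder secret name).
secret = secrets.create_secret(
    Name="prod/app/mysql-credentials",
    SecretString=json.dumps({"username": "admin", "password": password}),
)

# Attach a 90-day rotation schedule (CloudFormation equivalent:
# AWS::SecretsManager::RotationSchedule). The rotation Lambda ARN is a placeholder
# for a function built from one of the Secrets Manager rotation templates.
secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:mysql-rotation",
    RotationRules={"AutomaticallyAfterDays": 90},
)
```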
Therefore, the correct answer is: Using AWS Secrets Manager, create a secret resource and generate a secure database password. Write an AWS Lambda function to rotate the database password. On AWS CloudFormation, specify a resource for Secrets Manager RotationSchedule to rotate the password every 90 days. Secrets Manager rotation uses an AWS Lambda function to update the secret and the database. You can then specify the RotationSchedule resource in the CloudFormation template and reference the Lambda function's ARN.
The option that says: On Systems Manager Parameter Store, create a SecureString parameter and generate a secure database password. Write an AWS Lambda function to rotate the database password. On AWS CloudFormation, specify a resource for Parameter Store RotationSchedule to rotate the password every 90 days is incorrect. There is no Systems Manager Parameter Store RotationSchedule resource on AWS CloudFormation.
The option that says: Using AWS Secrets Manager, create a secret resource and generate a secure database password. Write an AWS Lambda function to rotate the database password. Create a scheduled rule on Amazon EventBridge to trigger the Lambda function to rotate the database password every 90 days is incorrect. This may work, but it does not use AWS CloudFormation to provision the rotation, which is one of the stated requirements.
The option that says: On Systems Manager Parameter Store, create a SecureString parameter and generate a secure database password. On AWS CloudFormation, create an AWS KMS resource to rotate the database password every 90 days is incorrect. AWS KMS can automatically rotate only the keys stored in KMS; it cannot rotate parameters stored in Systems Manager Parameter Store.
References:
Check out these AWS Secrets Manager and AWS CloudFormation Cheat Sheets:
Check out this AWS Secrets Manager and Systems Manager Parameter Store comparison:
Question 62: Skipped
A company has released a new mobile game and its backend servers are hosted on the company’s on-premises data center. The game logic is exposed using REST APIs that have multiple functions depending on the user state. Access to the backend services is controlled with an API key, while any test traffic is distinguished by a different key. A central file server stores player session data. User traffic is variable throughout the day but the on-premises servers cannot handle traffic during peak hours. The game also has latency issues caused by the slow fetching of player session data. The management tasked the solutions architect to migrate this infrastructure to AWS in order to improve scalability and reduce the latency for data access while keeping the backend API model unchanged.
Which of the following is the recommended solution to meet the company requirements?
Use AWS Lambda functions to run the backend game logic. Expose the REST APIs by using Amazon API Gateway. Use Amazon DynamoDB with auto-scaling to store the player session data.
(Correct)
Use AWS Lambda functions to run the backend game logic. Expose the REST APIs by placing the Lambda functions behind an Application Load Balancer (ALB). Use Amazon DynamoDB with auto-scaling to store the player session data.
Use AWS Lambda functions to run the backend game logic. Expose the REST APIs by using AWS AppSync. Use Amazon Aurora Serverless to store the player session data.
Create a fleet of Amazon EC2 instances to host the backend services. Expose the REST APIs by placing the instances behind a Network Load Balancer (NLB). Use Amazon Aurora Serverless to store the player session data.
Explanation
AWS Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration.
You can create a web API with an HTTP endpoint for your Lambda function by using Amazon API Gateway. API Gateway provides tools for creating and documenting web APIs that route HTTP requests to Lambda functions. You can secure access to your API with authentication and authorization controls. Your APIs can serve traffic over the Internet or can be accessible only within your VPC. Amazon API Gateway invokes your function synchronously with an event that contains a JSON representation of the HTTP request. For custom integration, the event is the body of the request. For a proxy integration, the event has a defined structure.
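A minimal sketch of a Lambda handler behind an API Gateway (REST API) proxy integration; the playerId query parameter and the response body are hypothetical examples, not part of the original scenario.

```python
import json

def lambda_handler(event, context):
    """Minimal handler for an API Gateway proxy integration."""
    # For a proxy integration, API Gateway passes the HTTP details in the event,
    # including query string parameters, headers, and the request body.
    player_id = (event.get("queryStringParameters") or {}).get("playerId", "unknown")

    # ... game logic would go here ...

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"playerId": player_id, "state": "ok"}),
    }
```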
API Gateway allows you to throttle traffic and authorize API calls to ensure that backend operations withstand traffic spikes and backend systems are not unnecessarily called.
You can use Amazon DynamoDB for storing player session data. Gaming companies use Amazon DynamoDB in all parts of game platforms, including game state, player data, session history, and leaderboards. The main benefits that these companies get from DynamoDB are its ability to scale reliably to millions of concurrent users and requests while ensuring consistently low latency—measured in single-digit milliseconds. In addition, as a fully managed service, DynamoDB has no operational overhead. Game developers can focus on developing their games instead of managing databases. Also, as game makers are looking to expand from a single AWS Region to multiple Regions, they can rely on DynamoDB global tables for multi-region, active-active data replication.
Therefore, the correct answer is: Use AWS Lambda functions to run the backend game logic. Expose the REST APIs by using Amazon API Gateway. Use Amazon DynamoDB with auto-scaling to store the player session data. API Gateway tightly integrates with AWS Lambda which enables you to quickly build REST compliant serverless applications. DynamoDB is well-suited for scalability to handle fast access to player session data.
The option that says: Create a fleet of Amazon EC2 instances to host the backend services. Expose the REST APIs by placing the instances behind a Network Load Balancer (NLB). Use Amazon Aurora Serverless to store the player session data is incorrect. Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora. It is recommended for lightly used applications with peaks of 30 minutes to several hours a few times each day or several times per year, such as human resources, budgeting, or operational reporting applications. In addition, DynamoDB is a better fit for low-latency access to player session data.
The option that says: Use AWS Lambda functions to run the backend game logic. Expose the REST APIs by placing the Lambda functions behind an Application Load Balancer (ALB). Use Amazon DynamoDB with auto-scaling to store the player session data is incorrect. Although you can use an ALB with Lambda as the backend service, it is still recommended to use API Gateway to expose your APIs. You have a lot more options and flexibility on API Gateway including the ability to use API keys and add custom headers to each request.
The option that says: Use AWS Lambda functions to run the backend game logic. Expose the REST APIs by using AWS AppSync. Use Amazon Aurora Serverless to store the player session data is incorrect. AWS AppSync is recommended for applications that are written for GraphQL APIs, not REST APIs.
References:
Check out these Amazon API Gateway and Amazon DynamoDB Cheat Sheets:
Question 64: Skipped
A company is running its new web application on a test environment in its on-premises data center. The stateful application is running on a single web server and it connects to a MySQL database that is hosted on a separate server. In a few weeks, the web application is scheduled to be released to the general public and the company is worried about its scalability. The user traffic will be unpredictable so it has been decided to migrate the web application and database to AWS. The company wants to use the Amazon EC2 service for hosting the web application, Amazon Aurora for the database, and Elastic Load Balancing for load distribution.
Which of the following solutions will allow the web and database tier to scale along with user traffic?
Create an Amazon Aurora MySQL database instance and enable Aurora Auto Scaling for the master database. Create an Auto Scaling group of Amazon EC2 instances placed behind a Network Load Balancer with the least outstanding request (LOR) routing algorithm. Ensure that the sticky sessions feature is enabled for the NLB.
Create an Amazon Aurora MySQL database instance. Create an Aurora Replica and enable Aurora Auto Scaling for the replica. Create an Auto Scaling group of Amazon EC2 instances placed behind a Network Load Balancer with the least outstanding request (LOR) routing algorithm. Ensure that the sticky sessions feature is enabled for the NLB.
Create an Amazon Aurora MySQL database instance. Create an Aurora Replica and enable Aurora Auto Scaling for the replica. Create an Auto Scaling group of Amazon EC2 instances placed behind an Application Load Balancer with the round-robin routing algorithm. Ensure that the sticky sessions feature is enabled for the ALB.
(Correct)
Create an Amazon Aurora MySQL database instance and enable Aurora Auto Scaling for the master database. Create an Auto Scaling group of Amazon EC2 instances placed behind an Application Load Balancer with the round-robin routing algorithm. Ensure that the sticky sessions feature is enabled for the ALB.
Explanation
Amazon Aurora (Aurora) is a fully managed relational database engine that's compatible with MySQL and PostgreSQL. Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. To meet your connectivity and workload requirements, Aurora Auto Scaling dynamically adjusts the number of Aurora Replicas provisioned for an Aurora DB cluster using single-master replication. Aurora Auto Scaling is available for both Aurora MySQL and Aurora PostgreSQL. Aurora Auto Scaling enables your Aurora DB cluster to handle sudden increases in connectivity or workload. When the connectivity or workload decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas so that you don't pay for unused provisioned DB instances.
You define and apply a scaling policy to an Aurora DB cluster. The scaling policy defines the minimum and maximum number of Aurora Replicas that Aurora Auto Scaling can manage. Based on the policy, Aurora Auto Scaling adjusts the number of Aurora Replicas up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values. Before you can use Aurora Auto Scaling with an Aurora DB cluster, you must first create an Aurora DB cluster with a primary instance and at least one Aurora Replica.
Aurora Auto Scaling uses a scaling policy to adjust the number of Aurora Replicas in an Aurora DB cluster. Aurora Auto Scaling has the following components:
A service-linked role – a service role to allow scaling of the Aurora Replicas
A target metric – a predefined or custom metric and a target value for the metric. Aurora Auto Scaling creates and manages CloudWatch alarms that trigger the scaling policy
Minimum and maximum capacity – the minimum and maximum number of Aurora Replicas to be managed by Application Auto Scaling
A cooldown period – a period that blocks subsequent scale-in or scale-out requests until it expires
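As an illustrative sketch, Aurora Auto Scaling for the replicas can be configured through the Application Auto Scaling API; the cluster identifier, capacity limits, target value, and cooldowns below are placeholders.

```python
import boto3

appscaling = boto3.client("application-autoscaling")

cluster_resource_id = "cluster:my-aurora-cluster"  # placeholder Aurora cluster ID

# Register the Aurora Replica count as a scalable target (1 to 15 replicas).
appscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=cluster_resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

# Target tracking policy that keeps the average reader CPU around 50 percent.
appscaling.put_scaling_policy(
    PolicyName="aurora-replica-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId=cluster_resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```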
Sticky sessions are a mechanism to route requests to the same target in a target group. This is useful for servers that maintain state information in order to provide a continuous experience to clients. To use sticky sessions, the clients must support cookies. Sticky sessions are helpful if your stateful application is having problems handling user sessions in an Auto Scaling environment.
There are two common routing algorithms for Application Load Balancers (ALB):
- Round Robin
- Least Outstanding Requests (LOR)
With the least outstanding requests (LOR) algorithm, customers can control how requests are routed within a target group. As a new request comes in, the load balancer sends it to the target with the least number of outstanding requests. Targets that are processing long-running requests or that have lower processing capacity are not burdened with more requests, and the load is spread evenly across targets. This also helps newly registered targets take load off of overloaded targets.
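For illustration, both the routing algorithm and sticky sessions are configured as target group attributes on an Application Load Balancer; a hedged boto3 sketch follows, with the target group ARN and cookie duration as placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder target group ARN for the web application's Auto Scaling group.
target_group_arn = (
    "arn:aws:elasticloadbalancing:us-west-1:123456789012:"
    "targetgroup/web-tg/0123456789abcdef"
)

elbv2.modify_target_group_attributes(
    TargetGroupArn=target_group_arn,
    Attributes=[
        # Routing algorithm: "round_robin" or "least_outstanding_requests".
        {"Key": "load_balancing.algorithm.type", "Value": "round_robin"},
        # Sticky sessions using a load-balancer-generated cookie.
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```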
Therefore, the correct answer is: Create an Amazon Aurora MySQL database instance. Create an Aurora Replica and enable Aurora Auto Scaling for the replica. Create an Auto Scaling group of Amazon EC2 instances placed behind an Application Load Balancer with the round-robin routing algorithm. Ensure that the sticky sessions feature is enabled for the ALB.
The option that says: Create an Amazon Aurora MySQL database instance. Create an Aurora Replica and enable Aurora Auto Scaling for the replica. Create an Auto Scaling group of Amazon EC2 instances placed behind a Network Load Balancer with the least outstanding request (LOR) routing algorithm. Ensure that the sticky sessions feature is enabled for the NLB is incorrect. A Network Load Balancer is only recommended if your application expects millions of requests per second. Since there is no mention of this strict requirement in the question, the Application Load Balancer is the recommended choice as it is less expensive and supports Layer 7 features that are suitable for web applications.
The option that says: Create an Amazon Aurora MySQL database instance and enable Aurora Auto Scaling for the master database. Create an Auto Scaling group of Amazon EC2 instances placed behind an Application Load Balancer with the round-robin routing algorithm. Ensure that the sticky sessions feature is enabled for the ALB is incorrect. You cannot set Auto Scaling for the master database on Amazon Aurora. You can only manually resize the instance size of the master node.
The option that says: Create an Amazon Aurora MySQL database instance and enable Aurora Auto Scaling for the master database. Create an Auto Scaling group of Amazon EC2 instances placed behind a Network Load Balancer with the least outstanding request (LOR) routing algorithm. Ensure that the sticky sessions feature is enabled for the NLB is incorrect because it is not possible to enable Auto Scaling for the master database on Amazon Aurora. You can only manually resize the instance size of the master node. Take note that a Network Load Balancer does not support the least outstanding request (LOR) routing algorithm; this algorithm is available on Application Load Balancers instead.
References:
Check out the Amazon Aurora and AWS Elastic Load Balancing Cheat Sheets:
Question 66: Skipped
An enterprise is in the process of integrating the systems of the smaller companies it has acquired in the past few months. The company wants to create an AWS Landing Zone that will allow hundreds of new employees to use their corporate credentials to log in to the AWS Console. The company is using a Microsoft Active Directory (AD) service for user authentication and has an AWS Direct Connect connection to AWS. The newly acquired companies come from a wide range of engineering fields so it is required that the solution will be able to federate third-party services and providers as well as custom applications.
Which of the following implementations will meet the company requirements with the LEAST amount of management overhead?
Configure AWS IAM Identity Center with AWS Organizations to manage SSO access and permissions on AWS. Set up a two-way forest trust relationship between the AWS Directory service and the company Active Directory to allow users to use their corporate credentials when logging in to AWS. Leverage the third-party integration support of AWS IAM Identity Center.
(Correct)
Create an Active Directory Federation Services (AD FS) portal page with the company branding. Integrate third-party applications on this portal with SAML 2.0 support. Use single sign-on with the AD FS to connect the company Active Directory to AWS. Configure the Identity Provider (IdP) to use form-based authentication with the portal page.
Create an Active Directory Federation Services (AD FS) with SAML 2.0 to connect the company Active Directory to AWS. Configure the AD FS to use Regex with the AD naming convention for the security group. This will allow federation on all AWS accounts. Configure single sign-on integrations for third-party applications by adding them to the AD FS server.
Connect the company on-premises Active Directory using the AWS Directory Service AD connector to create a single sign-on experience for users. Configure IAM and service roles to enable federation support. Configure single sign-on integrations for connections with third-party applications.
Explanation
AWS IAM Identity Center (successor to AWS Single Sign-On) expands the capabilities of AWS Identity and Access Management (IAM) to provide a central place that brings together administration of users and their access to AWS accounts and cloud applications. Although the service name AWS Single Sign-On has been retired, the term single sign-on is still used throughout this guide to describe the authentication scheme that allows users to sign in one time to access multiple applications and websites.
With IAM Identity Center, you can manage sign-in security for your workforce by creating or connecting your users and groups to AWS in one place. With multi-account permissions, you can assign your workforce identities access to AWS accounts, and with application assignments, you can grant your users access to software as a service (SaaS) applications and IAM Identity Center enabled applications with a single click.
AWS IAM Identity Center has integration with Microsoft AD through the AWS Directory Service. This means your employees can sign in to your AWS access portal using their corporate Active Directory credentials. To grant Active Directory users access to accounts and applications, you simply add them to the appropriate Active Directory groups. For example, you can grant the DevOps group SSO access to your production AWS accounts. Users added to the DevOps group are then granted SSO access to these AWS accounts automatically. This automation makes it easy to onboard new users and gives existing users access to new accounts and applications quickly.
You can configure one and two-way external and forest trust relationships between your AWS Directory Service for Microsoft Active Directory and on-premises directories, as well as between multiple AWS Managed Microsoft AD directories in the AWS cloud. AWS Managed Microsoft AD supports all three trust relationship directions: Incoming, Outgoing, and Two-way (Bi-directional). AWS Managed Microsoft AD supports both external and forest trusts.
Users in your self-managed Active Directory (AD) can also have SSO access to AWS accounts and cloud applications in the AWS access portal. To do that, AWS Directory Service has the following two options available:
Create a two-way trust relationship – When two-way trust relationships are created between AWS Managed Microsoft AD and a self-managed AD, users in your self-managed AD can sign in with their corporate credentials to various AWS services and business applications. One-way trusts do not work with AWS IAM Identity Center.
AWS IAM Identity Center (successor to AWS Single Sign-On) requires a two-way trust so that it has permission to read user and group information from your domain to synchronize user and group metadata. IAM Identity Center uses this metadata when assigning access to permission sets or applications. User and group metadata is also used by applications for collaboration, like when you share a dashboard with another user or group. The trust from AWS Directory Service for Microsoft Active Directory to your domain permits IAM Identity Center to trust your domain for authentication. The trust in the opposite direction grants AWS permissions to read user and group metadata.
Create an AD Connector – AD Connector is a directory gateway that can redirect directory requests to your self-managed AD without caching any information in the cloud.
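As a rough sketch, the two-way forest trust can be created from the AWS Managed Microsoft AD side through the AWS Directory Service API. The directory ID, domain name, trust password, and DNS forwarder addresses below are placeholders, and a matching trust must still be configured on the on-premises domain.

```python
import boto3

ds = boto3.client("ds")

# Placeholder values for the AWS Managed Microsoft AD directory and the
# on-premises domain; a matching trust must also be created on the
# on-premises Active Directory side.
ds.create_trust(
    DirectoryId="d-1234567890",
    RemoteDomainName="corp.example.com",
    TrustPassword="REPLACE_WITH_TRUST_PASSWORD",
    TrustDirection="Two-Way",
    TrustType="Forest",
    ConditionalForwarderIpAddrs=["10.0.0.10", "10.0.0.11"],
)
```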
Therefore, the correct answer is: Configure AWS IAM Identity Center with AWS Organizations to manage SSO access and permissions on AWS. Set up a two-way forest trust relationship between the AWS Directory service and the company Active Directory to allow users to use their corporate credentials when logging in to AWS. Leverage the third-party integration support of AWS IAM Identity Center.
The option that says: Create an Active Directory Federation Services (AD FS) portal page with the company branding. Integrate third-party applications on this portal with SAML 2.0 support. Use single sign-on with the AD FS to connect the company Active Directory to AWS. Configure the Identity Provider (IdP) to use form-based authentication with the portal page is incorrect. This may be possible, but creating a form-based authentication defeats the purpose of a single sign-on. Also, this requires more additional management to create the AD FS portal page.
The option that says: Connect the company on-premises Active Directory using the AWS Directory Service AD connector to create a single sign-on experience for users. Configure IAM and service roles to enable federation support. Configure single sign-on integrations for connections with third-party applications is incorrect. This does not address the third-party integrations because when using the AD connector for SSO, you cannot use both on-premises AD and third-party integrations at the same time.
The option that says: Create an Active Directory Federation Services (AD FS) with SAML 2.0 to connect the company Active Directory to AWS. Configure the AD FS to use Regex with the AD naming convention for the security group. This will allow federation on all AWS accounts. Configure single sign-on integrations for third-party applications by adding them to the AD FS server as a principal (trusted entity) is incorrect. This is possible but will require more management overhead compared to just using the AWS IAM Identity Center service. Using Regex works well if you have established standard naming conventions; however, you may encounter problems if the acquired companies do not follow your expected naming convention.
References:
Check out this AWS Directory Service Cheat Sheet:
Question 67: Skipped
A fintech startup has several resources provisioned on the AWS cloud. The majority of the company’s compute clusters are composed of an Application Load Balancer (ALB) in front of an Auto Scaling group of On-Demand Amazon EC2 instances. To lower the overall cost, the management wants to have one EC2 instance terminated whenever the overall CPU utilization of the cluster is at 15% or lower.
Which of the following options should the solutions architect implement for a cost-effective and scalable architecture that satisfies the company requirement?
Use CloudWatch for the monitoring and configure the scale-in policy of the Auto Scaling group to terminate one EC2 instance when the CPU utilization is 15% or below.
(Correct)
Configure a monitoring script that sends out an email using SNS when the CPU utilization is less than 15% so the administrator can manually remove an EC2 instance.
Use scheduled actions in the Auto Scaling configuration to automatically terminate EC2 instances when the CPU Utilization hits below 15%.
Use AWS Lambda triggers to send a notification to the Auto Scaling group when the CPU utilization is less than 15% to kick off the scaling in policy to remove the EC2 instance.
Explanation
Target tracking scaling policies simplify how you configure dynamic scaling. You select a predefined metric or configure a customized metric, and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to the fluctuations in the metric due to a fluctuating load pattern and minimizes rapid fluctuations in the capacity of the Auto Scaling group.
For example, you could use target tracking scaling to:
- Configure a target tracking scaling policy to keep the average aggregate CPU utilization of your Auto Scaling group at 50 percent.
- Configure a target tracking scaling policy to keep the request count per target of your Elastic Load Balancing target group at 1000 for your Auto Scaling group.
Therefore, the correct answer is: Use CloudWatch for the monitoring and configure the scale-in policy of the Auto Scaling group to terminate one EC2 instance when the CPU utilization is 15% or below.
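Although the explanation above highlights target tracking, the specific behavior in the correct answer (remove one instance when CPU is at 15% or lower) could be wired up with a simple scaling policy and a CloudWatch alarm, roughly as in the following sketch; the Auto Scaling group name, evaluation periods, and cooldown are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

asg_name = "web-cluster-asg"  # placeholder Auto Scaling group name

# Simple scaling policy that removes one instance when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=asg_name,
    PolicyName="scale-in-on-low-cpu",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=-1,
    Cooldown=300,
)

# CloudWatch alarm on the group's average CPU utilization at 15% or below.
cloudwatch.put_metric_alarm(
    AlarmName="asg-cpu-low",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": asg_name}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=15.0,
    ComparisonOperator="LessThanOrEqualToThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```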
The option that says: Use scheduled actions in the Auto Scaling configuration to automatically terminate EC2 instances when the CPU Utilization hits below 15% is incorrect. You can't use scheduled actions because the CPU usage of the instances can be unpredictable depending on the network traffic.
The option that says: Configure a monitoring script that sends out an email using SNS when the CPU utilization is less than 15% so the administrator can manually remove an EC2 instance is incorrect. You don't have to configure your own monitoring script. Amazon CloudWatch has integration with Amazon SNS to send scaling notifications to you.
The option that says: Use AWS Lambda triggers to send a notification to the Auto Scaling group when the CPU utilization is less than 15% to kick off the scaling in policy to remove the EC2 instance is incorrect. This may be possible but you don't have to create your own Lambda triggers. The Auto Scaling group has built-in functionality to automatically remove an EC2 instance depending on your CPU threshold setting.
References:
Check out this AWS Auto Scaling Cheat Sheet:
Question 68: Skipped
An Internet-of-Things (IoT) company is building a portal that stores data coming from its 20,000 gas sensors. The gas sensors, which have unique IDs, are used to detect a gas leak or other emissions inside the oil facility. Every 15 minutes, the sensors will send a data point throughout the day containing its ID, current gas level data, as well as the timestamp. Each data point contains critical information coming from the gas sensors. The company would like to query the information coming from a particular gas sensor for the past week and would like to delete all data that are older than eight weeks. The application is using a NoSQL database which is why they are using the Amazon DynamoDB service.
How would you implement this in the most cost-effective way?
Use one table every week, with a composite primary key which is the sensor ID as the partition key and the timestamp as the sort key.
(Correct)
Use one table with a primary key which is the sensor ID. Use the timestamp as the hash key.
Use one table with a primary key which is the concatenated value of the sensor ID and the timestamp.
Use one table every week, with a primary key which is the concatenated value of the sensor ID and the timestamp.
Explanation
The gas sensors are generating a large amount of data every week. As mentioned in the question, there are 20,000 gas sensors that send data every 15 minutes, which translates to 1,920,000 records in a single day. With that volume of data, querying a single ever-growing DynamoDB table becomes complex and expensive. In this scenario, you can create a new DynamoDB table every week and define a partition key and a sort key for the table.
When you create a table, in addition to the table name, you must specify the primary key of the table. The primary key uniquely identifies each item in the table, so that no two items can have the same key.
Amazon DynamoDB supports two different kinds of primary keys:
Partition key – A simple primary key, composed of one attribute known as the partition key. DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (physical storage internal to DynamoDB) in which the item will be stored.
Partition key and sort key – Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key value as input to an internal hash function. The output from the hash function determines the partition (physical storage internal to DynamoDB) in which the item will be stored.
Take note that the partition key of an item is also known as its hash attribute. The term hash attribute derives from the use of an internal hash function in DynamoDB that evenly distributes data items across partitions, based on their partition key values.
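A short sketch of how the weekly table could be queried for one sensor's readings over the past week, assuming a table named GasSensorData-2024-W01 with SensorId as the partition key, a numeric Timestamp sort key, and a GasLevel attribute (all hypothetical names).

```python
import time
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

# Hypothetical weekly table with SensorId (partition key) and Timestamp (sort key).
table = dynamodb.Table("GasSensorData-2024-W01")

now = int(time.time())
one_week_ago = now - 7 * 24 * 60 * 60

# Query all readings from a single sensor for the past week.
response = table.query(
    KeyConditionExpression=Key("SensorId").eq("sensor-0042")
    & Key("Timestamp").between(one_week_ago, now)
)
for item in response["Items"]:
    print(item["Timestamp"], item["GasLevel"])
```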
Therefore, the correct answer is: Use one table every week, with a composite primary key which is the sensor ID as the partition key and the timestamp as the sort key.
The option that says: Use one table with a primary key which is the sensor ID. Use the timestamp as the hash key is incorrect. If you only use one table, the table would grow enormous over time and queries would take much longer.
The option that says: Use one table every week, with a primary key which is the concatenated value of the sensor ID and the timestamp is incorrect. You need a fixed value for the primary key so you shouldn't concatenate the timestamp to the sensor ID.
The option that says: Use one table with a primary key which is the concatenated value of the sensor ID and the timestamp is incorrect. If you only use one table, the table would grow enormous over time and queries would take much longer.
References:
Check out this Amazon DynamoDB Cheat Sheet:
Question 73: Skipped
A company has adopted cloud-native computing best practices for its infrastructure. The company started using AWS CloudFormation templates for defining its cloud resources, and the templates are hosted in its private GitHub repository. As the developers continuously update the templates, the company has encountered several downtimes caused by misconfigured templates, wrong executions, or the creation of unnecessary environments. The management wants to streamline the process of testing the CloudFormation templates to prevent these errors. The Solutions Architect has been tasked to create an automated solution.
Which of the following options should be implemented to meet the company's requirements?
Create a pipeline in AWS CodePipeline that is triggered automatically for commits on the private GitHub repository. Have the pipeline create a change set and execute the CloudFormation template. Add an AWS CodeBuild stage on the pipeline to build and run test scripts to verify the new stack.
(Correct)
Create a pipeline in AWS CodePipeline that is triggered automatically for commits on the private GitHub repository. Have the pipeline create a change set from the CloudFormation template and execute it using AWS CodeDeploy. Add an AWS CodeBuild stage on the pipeline to build and run test scripts to verify the new stack.
Write an AWS Lambda function that builds any changes committed on the private GitHub repository. Store the generated artifacts on AWS CodeArtifact. Using AWS CodePipeline, create a change set with the new artifact and execute the AWS CloudFormation template. Add a CodeBuild action to run test scripts that verify the new stack.
Write an AWS Lambda function that syncs the private GitHub repository to AWS CodeCommit. Using AWS CodeDeploy, create a change set and execute the AWS CloudFormation template. Add an AWS CodeBuild stage on the deployment to build and run test scripts to verify the new stack.
Explanation
You can apply continuous delivery practices to your AWS CloudFormation stacks using AWS CodePipeline. AWS CodePipeline is a continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change based on the release process models you define.
With continuous delivery, you can automatically deploy CloudFormation template updates to your pipeline stages for testing and then promote them to production. For example, you could use CodePipeline to model an automated release process that provisions a test stack whenever an updated template is committed to a source repository (Git repositories managed by GitHub, AWS CodeCommit, and Atlassian Bitbucket) or uploaded to an Amazon S3 bucket. You can inspect the test stack and then approve it for the production stage, after which CodePipeline can delete the test stack and create a change set for final approval. When the change set is approved, CodePipeline can execute the change set and deploy the change to production.
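Roughly speaking, the pipeline's CloudFormation actions correspond to creating and then executing a change set. A hedged boto3 sketch of that sequence, with placeholder stack, change set, and template names:

```python
import boto3

cfn = boto3.client("cloudformation")

stack_name = "web-app-test-stack"        # placeholder stack name
change_set_name = "pipeline-change-set"  # placeholder change set name

# Create a change set from the updated template (equivalent to the pipeline's
# CloudFormation "create change set" action).
cfn.create_change_set(
    StackName=stack_name,
    ChangeSetName=change_set_name,
    TemplateURL="https://s3.amazonaws.com/my-artifacts/template.yaml",  # placeholder
    Capabilities=["CAPABILITY_NAMED_IAM"],
    ChangeSetType="UPDATE",
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName=stack_name, ChangeSetName=change_set_name
)

# Review (or approve through the pipeline) and then execute the change set.
cfn.execute_change_set(StackName=stack_name, ChangeSetName=change_set_name)
```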
AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. AWS CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.
Here is an example of a Quick Start from AWS for CI/CD Pipeline for AWS CloudFormation templates:
- A pipeline created by CodePipeline, which is triggered when a commit is made to the referenced branch of the GitHub repository used in the source stage.
- A build project in CodeBuild to run TaskCat and launch AWS CloudFormation templates for testing.
- An AWS Lambda function that merges the source branch of the GitHub repository with the release branch.
- AWS Identity and Access Management (IAM) roles for the Lambda function and the build project.
- An Amazon Simple Storage Service (Amazon S3) bucket to stash the build artifacts temporarily and to store the TaskCat report.
Therefore, the correct answer is: Create a pipeline in AWS CodePipeline that is triggered automatically for commits on the private GitHub repository. Have the pipeline create a change set and execute the CloudFormation template. Add an AWS CodeBuild stage on the pipeline to build and run test scripts to verify the new stack. With AWS CodePipeline, you can create change sets and automatically deploy CloudFormation template updates safely. You can have a CodeBuild stage for building artifacts and testing your new infrastructure.
The option that says: Create a pipeline in AWS CodePipeline that is triggered automatically for commits on the private GitHub repository. Have the pipeline create a change set from the CloudFormation template and execute it using AWS CodeDeploy. Add an AWS CodeBuild stage on the pipeline to build and run test scripts to verify the new stack is incorrect. AWS CodeDeploy is unnecessary as you will not deploy the changes in your production environment. You need a pipeline to execute the CloudFormation change set and CodeBuild project to test your changes.
The option that says: Write an AWS Lambda function that syncs the private GitHub repository to AWS CodeCommit. Using AWS CodeDeploy, create a change set and execute the AWS CloudFormation template. Add an AWS CodeBuild stage on the deployment to build and run test scripts to verify the new stack is incorrect. You don't need a custom Lambda function as AWS CodePipeline supports executions triggered from third-party Git sources.
The option that says: Write an AWS Lambda function that builds any changes committed on the private GitHub repository. Store the generated artifacts on AWS CodeArtifact. Using AWS CodePipeline, create a change set with the new artifact and execute the AWS CloudFormation template. Add a CodeBuild action to run test scripts that verify the new stack is incorrect. Building artifacts may take longer than the 15-minute maximum execution time of Lambda functions. AWS CodeArtifact is used to automatically fetch software packages and dependencies from public artifact repositories. AWS CodeBuild can be used to build and test artifacts before application deployments.
References:
Check out these AWS CodePipeline and AWS CloudFormation Cheat Sheets:
Question 74: Skipped
A company stores confidential financial documents as well as sensitive corporate information in an Amazon S3 bucket. There is a new security policy that prohibits any public S3 objects in the company's S3 bucket. In the event that a public object was identified, the IT Compliance team must be notified immediately and the object's permissions must be remediated automatically. The notification must be sent as soon as a public object was created in the bucket.
What is the MOST suitable solution that should be implemented by the Solutions Architect to comply with this data policy?
Enable object-level logging in the S3 bucket to automatically track S3 actions using CloudTrail. Set up an Amazon EventBridge rule with an SNS Topic to notify the IT Compliance team when a PutObject API call with public-read permission is detected in the CloudTrail logs. Launch another CloudWatch Events rule that invokes an AWS Lambda function to turn the newly uploaded public object to private.
(Correct)
Integrate Amazon Lex with Amazon GuardDuty to detect public objects in the S3 bucket and to automatically update the permission of a public object to private. Associate an SNS Topic to Amazon Lex to notify the IT Compliance team via email if a public object was identified.
Set up a Systems Manager (SSM) Automation document that changes any public object in the bucket to private. Integrate Amazon EventBridge with AWS Lambda to create a scheduled process that checks the S3 bucket every hour. Configure the Lambda function to invoke the SSM Automation document when a public object is identified and to notify the IT Compliance team via email using Amazon SNS.
Automatically track S3 actions using the Trusted Advisor API and AWS Cloud Development Kit (CDK). Set up an Amazon EventBridge rule with an Amazon SNS Topic to notify the IT Compliance team when the Trusted Advisor detected a PutObject API call with public-read permission. Launch another CloudWatch Events rule that invokes an AWS Lambda function to turn the newly uploaded public object to private.
Explanation
Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Amazon S3. CloudTrail captures a subset of API calls for Amazon S3 as events, including calls from the Amazon S3 console and from code calls to the Amazon S3 APIs. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Amazon S3. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in the Event history. Using the information collected by CloudTrail, you can determine the request that was made to Amazon S3, the IP address from which the request was made, who made the request, when it was made, and additional details.
You can also get CloudTrail logs for object-level Amazon S3 actions. To do this, specify the Amazon S3 object for your trail. When an object-level action occurs in your account, CloudTrail evaluates your trail settings. If the event matches the object that you specified in a trail, the event is logged.
Amazon EventBridge is a serverless event bus service that you can use to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your applications, software as a service (SaaS) applications, and AWS services to targets such as AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts. A rule matches incoming events and sends them to targets for processing. A single rule can send an event to multiple targets, which then run in parallel. Rules are based either on an event pattern or a schedule. An event pattern defines the event structure and the fields that a rule matches. Rules that are based on a schedule perform an action at regular intervals.
Hence, the correct answer is: Enable object-level logging in the S3 bucket to automatically track S3 actions using CloudTrail. Set up an Amazon EventBridge rule with an SNS Topic to notify the IT Compliance team when a PutObject API call with public-read permission is detected in the CloudTrail logs. Launch another CloudWatch Events rule that invokes an AWS Lambda function to turn the newly uploaded public object to private.
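A minimal sketch of the remediation Lambda function, assuming the EventBridge rule forwards the CloudTrail PutObject event so the bucket name and object key can be read from the event detail.

```python
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Remediate a public object detected via a CloudTrail PutObject event.

    Assumes an EventBridge rule that matches PutObject calls whose request
    parameters include a public-read ACL and forwards the CloudTrail event.
    """
    detail = event["detail"]
    bucket = detail["requestParameters"]["bucketName"]
    key = detail["requestParameters"]["key"]

    # Reset the object's ACL to private.
    s3.put_object_acl(Bucket=bucket, Key=key, ACL="private")
    return {"remediated": f"s3://{bucket}/{key}"}
```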
The option that says: Set up a Systems Manager (SSM) Automation document that changes any public object in the bucket to private. Integrate Amazon EventBridge with AWS Lambda to create a scheduled process that checks the S3 bucket every hour. Configure the Lambda function to invoke the SSM Automation document when a public object is identified and to notify the IT Compliance team via email using Amazon SNS is incorrect because the requirement states that the notification must be sent as soon as a public object is created in the bucket. Running a process that checks for public objects only every hour will cause delays in detecting a public object in the S3 bucket.
The option that says: Automatically track S3 actions using the Trusted Advisor API and AWS Cloud Development Kit (CDK). Set up an Amazon EventBridge rule with an Amazon SNS Topic to notify the IT Compliance team when the Trusted Advisor detected a PutObject API call with public-read permission. Launch another CloudWatch Events rule that invokes an AWS Lambda function to turn the newly uploaded public object to private is incorrect. Using the Trusted Advisor as a web service to track all S3 actions is not enough as it only provides high-level data about your S3 bucket, excluding the object-level permissions. A better solution is to enable object-level logging in the S3 bucket.
The option that says: Integrate Amazon Lex with Amazon GuardDuty to detect public objects in the S3 bucket and to automatically update the permission of a public object to private. Associate an SNS Topic to Amazon Lex to notify the IT Compliance team via email if a public object was identified is incorrect because Amazon Lex is simply a service for building conversational interfaces into any application using voice and text. In addition, Amazon GuardDuty is just a threat detection service that analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. These two services are not suitable in detecting public objects in your S3 bucket.
References:
Check out this Amazon CloudWatch Cheat Sheet:
Question 75: Skipped
A company launched a high-performance computing (HPC) application inside the VPC of its AWS account. The application is composed of hundreds of private EC2 instances running in a cluster placement group, which allows the instances to communicate with each other at network speeds of up to 10 Gbps. There is also a custom cluster controller EC2 instance that closely controls and monitors the system performance of each instance. The cluster controller has the same instance type and AMI as the other instances. It is configured with a public IP address and runs outside the placement group. The Solutions Architect has been tasked to improve the network performance between the controller instance and the EC2 instances in the placement group.
Which option provides the MOST suitable solution that the Architect must implement to satisfy the requirement while maintaining low-latency network performance?
Terminate the custom cluster controller EC2 instance and stop all of the running instances in the existing placement group. Move the cluster controller instance to the existing placement group and restart all of the instances.
Attach an Elastic IP address to the custom cluster controller instance to increase its network capability to 10 Gbps.
Stop the custom cluster controller instance and move it to the existing placement group.
(Correct)
Terminate the custom cluster controller instance and re-launch it to the existing placement group. Attach an Elastic Network Adapter (ENA) to the cluster controller instance to increase its network performance. Change the placement strategy of the placement group to Spread.
Explanation
Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. They are also recommended when the majority of the network traffic is between the instances in the group. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
A spread placement group is a group of instances that are each placed on distinct racks, with each rack having its own network and power source. Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other. Launching instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same racks. Spread placement groups provide access to distinct racks, and are therefore suitable for mixing instance types or launching instances over time.
You can change the placement group for an instance in any of the following ways:
Move an existing instance to a placement group
Move an instance from one placement group to another
Remove an instance from a placement group
Before you move or remove the instance, the instance must be in the stopped state. You can move or remove an instance using the AWS CLI or an AWS SDK.
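A short sketch of the move using boto3; the instance ID and placement group name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

instance_id = "i-0123456789abcdef0"    # placeholder controller instance ID
placement_group = "hpc-cluster-group"  # placeholder cluster placement group name

# The instance must be in the stopped state before changing its placement.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move the instance into the existing cluster placement group.
ec2.modify_instance_placement(InstanceId=instance_id, GroupName=placement_group)

# Start the instance again; it now shares the placement group's low-latency network.
ec2.start_instances(InstanceIds=[instance_id])
```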
Hence, the correct answer is: Stop the custom cluster controller instance and move it to the existing placement group.
The option that says: Terminate the custom cluster controller EC2 instance and stop all of the running instances in the existing placement group. Move the cluster controller instance to the existing placement group and restart all of the instances is incorrect because you don't need to terminate the controller instance or stop the other running instances in the placement group. You only have to stop the cluster controller instance, move it into the placement group, and start it again.
The option that says: Attach an Elastic IP address to the custom cluster controller instance to increase its network capability to 10 Gbps is incorrect because an Elastic IP is simply a static IPv4 address designed for dynamic cloud computing. It doesn't increase the network bandwidth of the instance to 10 Gbps either.
The option that says: Terminate the custom cluster controller instance and re-launch it to the existing placement group. Attach an Elastic Network Adapter (ENA) to the cluster controller instance to increase its network performance. Change the placement strategy of the placement group to Spread is incorrect because using a Spread placement group will degrade the existing network performance of the architecture. The use of a Cluster placement group must be maintained. And while it is true that the Elastic Network Adapter (ENA) can increase the instance network performance, the described process of moving the custom cluster controller instance to the placement group is still incorrect. You can just stop the instance and directly include it in the placement group. There is no need to terminate the EC2 instance.
References:
Check out this Amazon EC2 Cheat Sheet:
The option that says: Ensure that Multi-AZ is enabled on Amazon RDS for MySQL and create one or more read replicas to ensure quick failover is incorrect. This may be possible since the database is currently on RDS with Multi-AZ. However, RDS automatic database failover can take up to 60 seconds with a single standby instance, and failover still takes noticeable time even with two readable standby instances. A better option is to use RDS Proxy to fulfill the aforementioned requirements.
Parameters stored in Systems Manager are mutable. Any time you use a template containing Systems Manager parameters to create or update your stacks, CloudFormation uses the values of these Systems Manager parameters at the time of the create/update operation. So, as parameters are updated in Systems Manager, you can have the new value of a parameter take effect by simply executing a stack update operation. The Parameters section in the output of the Describe API call will show an additional 'ResolvedValue' field that contains the resolved value of the Systems Manager parameter that was used for the last stack operation.
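For illustration, the ResolvedValue field can be read from the DescribeStacks output with boto3; the stack name below is a placeholder.

```python
import boto3

cfn = boto3.client("cloudformation")

# Placeholder stack name; the stack template references SSM parameter types.
response = cfn.describe_stacks(StackName="my-app-stack")

for param in response["Stacks"][0]["Parameters"]:
    # ResolvedValue is present for SSM parameter types and shows the value
    # that was resolved during the last stack operation.
    print(param["ParameterKey"], param.get("ResolvedValue", param.get("ParameterValue")))
```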