27 Mar 2024 evening study
Question 4: Correct
A media company has a suite of internet-facing web applications hosted in the US West (N. California) Region in AWS. The architecture is composed of several On-Demand Amazon EC2 instances behind an Application Load Balancer, which is configured to use public SSL/TLS certificates. The Application Load Balancer also terminates SSL and accepts incoming HTTPS traffic through the fully qualified domain names (FQDNs) of the applications. A Solutions Architect has been instructed to upgrade the corporate web applications to a multi-region architecture [-> Route53, failover? CloudFormation Stackset deploying to other regions? ] that uses various AWS Regions such as ap-southeast-2, ca-central-1, eu-west-3, and so forth.
Which of the following approaches should the Architect implement to ensure that all HTTPS services continue to work without interruption?
Use AWS KMS in the US West (N. California) Region to request SSL/TLS certificates for each FQDN, which will be used in all Regions. Associate the new certificates with the new Application Load Balancer in each new AWS Region that the Architect will add.
In each new AWS Region, request SSL/TLS certificates using AWS KMS for each FQDN. Associate the new certificates with the corresponding Application Load Balancer of the same AWS Region.
In each new AWS Region, request SSL/TLS certificates using AWS Certificate Manager for each FQDN. Associate the new certificates with the corresponding Application Load Balancer of the same AWS Region.
(Correct)
Use the AWS Certificate Manager service in the US West (N. California) Region to request SSL/TLS certificates for each FQDN, which will be used in all Regions. Associate the new certificates with the new Application Load Balancer in each new AWS Region that the Architect will add.
Explanation
AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.
With AWS Certificate Manager, you can quickly request a certificate, deploy it on ACM-integrated AWS resources, such as Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway, and let AWS Certificate Manager handle certificate renewals. It also enables you to create private certificates for your internal resources and manage the certificate lifecycle centrally. Public and private certificates provisioned through AWS Certificate Manager for use with ACM-integrated services are free. You pay only for the AWS resources you create to run your application. With AWS Certificate Manager Private Certificate Authority, you pay monthly for the operation of the private CA and for the private certificates you issue.
You can use the same SSL certificate from ACM in more than one AWS Region but it depends on whether you’re using Elastic Load Balancing or Amazon CloudFront. To use a certificate with Elastic Load Balancing for the same site (the same fully qualified domain name, or FQDN, or set of FQDNs) in a different Region, you must request a new certificate for each Region in which you plan to use it. To use an ACM certificate with Amazon CloudFront, you must request the certificate in the US East (N. Virginia) region. ACM certificates in this region that are associated with a CloudFront distribution are distributed to all the geographic locations configured for that distribution.
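As a rough sketch of how this looks in practice, the same FQDN's certificate can be requested separately in each target Region with the AWS CLI. The domain name below is a placeholder and DNS validation is assumed:
    # Request one ACM certificate per Region for the same FQDN (hypothetical domain)
    for region in ap-southeast-2 ca-central-1 eu-west-3; do
      aws acm request-certificate \
        --domain-name app.example.com \
        --validation-method DNS \
        --region "$region"
    done
Each request returns a Region-local certificate ARN, which is then attached to the ALB's HTTPS listener in that same Region.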
Hence, the correct answer is the option that says: In each new AWS Region, request SSL/TLS certificates using AWS Certificate Manager for each FQDN. Associate the new certificates with the corresponding Application Load Balancer of the same AWS Region.
The option that says: In each new AWS Region, request SSL/TLS certificates using AWS KMS for each FQDN. Associate the new certificates with the corresponding Application Load Balancer of the same AWS Region is incorrect because AWS KMS is not the right service for generating SSL/TLS certificates. You have to use ACM instead.
The option that says: Use AWS KMS in the US West (N. California) Region to request SSL/TLS certificates for each FQDN, which will be used in all Regions. Associate the new certificates with the new Application Load Balancer in each new AWS Region that the Architect will add is incorrect. You have to use AWS Certificate Manager (ACM) to generate the certificates, not AWS KMS, which is primarily used for data encryption. Moreover, you have to associate certificates that were generated in the same AWS Region where the load balancer is launched.
The option that says: Use the AWS Certificate Manager service in the US West (N. California) Region to request SSL/TLS certificates for each FQDN, which will be used in all Regions. Associate the new certificates with the new Application Load Balancer in each new AWS Region that the Architect will add is incorrect. You can only use the same ACM SSL certificate in more than one AWS Region if you are attaching it to a CloudFront distribution, not to an Application Load Balancer. To use a certificate with Elastic Load Balancing for the same site (the same fully qualified domain name, or FQDN, or set of FQDNs) in a different Region, you must request a new certificate for each Region in which you plan to use it.
References:
Check out these AWS Certificate Manager and Elastic Load Balancer Cheat Sheets:
Question 9: Correct
A company wants to launch its online shopping website to give customers an easy way to purchase the products they need. The proposed setup is to host the application on an AWS Fargate cluster, utilize a Load Balancer to distribute traffic between the Fargate tasks, and use Amazon CloudFront for caching and content delivery. The company wants to ensure that the website complies with industry best practices and protects customers from common “man-in-the-middle” attacks on e-commerce websites, such as DNS spoofing, HTTPS spoofing, or SSL hijacking. [-> WAF Rules]
Which of the following configurations will provide the MOST secure access to the website?
Use Route 53 for domain registration. Use a third-party DNS service that supports DNSSEC for DNS requests using customer-managed keys. Use AWS Certificate Manager (ACM) to generate a valid 2048-bit TLS/SSL certificate for the domain name and configure the Application Load Balancer HTTPS listener to use this TLS/SSL certificate. Use Server Name Indication and HTTP to HTTPS redirection on CloudFront.
Register the domain name on Route 53 and enable DNSSEC validation for all public hosted zones to ensure that all DNS requests have not been tampered with during transit. Use AWS Certificate Manager (ACM) to generate a valid TLS/SSL certificate for the domain name. Configure the Application Load Balancer with an HTTPS listener to use the ACM TLS/SSL certificate. Use Server Name Indication and HTTP to HTTPS redirection on CloudFront.
(Correct)
Register the domain name on Route 53. Since Route 53 only supports DNSSEC for registration, host the company DNS root servers on Amazon EC2 instances running the BIND service. Enable DNSSEC for DNS requests to ensure the replies have not been tampered with. Generate a valid certificate for the website domain name on AWS ACM and configure the Application Load Balancer's HTTPS listener to use this TLS/SSL certificate. Use Server Name Indication and HTTP to HTTPS redirection on CloudFront.
Register the domain name on Route 53. Use a third-party DNS provider that supports the import of customer-managed keys for DNSSEC. Import a 2048-bit TLS/SSL certificate from a third-party certificate service into AWS Certificate Manager (ACM). Configure the Application Load Balancer with an HTTPS listener to use the imported TLS/SSL certificate. Use Server Name Indication and HTTP to HTTPS redirection on CloudFront.
Explanation
Amazon now allows you to enable Domain Name System Security Extensions (DNSSEC) signing for all existing and new public hosted zones, and enable DNSSEC validation for Amazon Route 53 Resolver. Amazon Route 53 DNSSEC provides data origin authentication and data integrity verification for DNS and can help customers meet compliance mandates, such as FedRAMP.
When you enable DNSSEC signing on a hosted zone, Route 53 cryptographically signs each record in that hosted zone. Route 53 manages the zone-signing key, and you can manage the key-signing key in AWS Key Management Service (AWS KMS). Amazon’s domain name registrar, Route 53 Domains, already supports DNSSEC, and customers can now register domains and host their DNS on Route 53 with DNSSEC signing enabled. When you enable DNSSEC validation on the Route 53 Resolver in your VPC, it ensures that DNS responses have not been tampered with in transit. This can prevent DNS Spoofing.
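A hedged sketch of the signing and validation steps described above; the hosted zone ID, KMS key ARN, and VPC ID are placeholders (note that the KMS key used for DNSSEC signing must be an asymmetric key in us-east-1):
    # Create a key-signing key backed by KMS, then enable DNSSEC signing
    aws route53 create-key-signing-key \
      --hosted-zone-id Z0123456789EXAMPLE \
      --key-management-service-arn arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab \
      --name shopKSK --status ACTIVE \
      --caller-reference ksk-2024-03-27
    aws route53 enable-hosted-zone-dnssec --hosted-zone-id Z0123456789EXAMPLE
    # Enable DNSSEC validation on the Route 53 Resolver for a VPC
    aws route53resolver update-resolver-dnssec-config \
      --resource-id vpc-0abc1234 --validation ENABLE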
AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates. Using a valid SSL certificate for your Application Load Balancer ensures that all requests are encrypted in transit and provides protection against SSL hijacking.
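For illustration, attaching an ACM certificate to an ALB comes down to an HTTPS listener that references the certificate ARN; all ARNs below are hypothetical:
    # HTTPS listener on the ALB using the ACM certificate (placeholder ARNs)
    aws elbv2 create-listener \
      --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/shop-alb/0123456789abcdef \
      --protocol HTTPS --port 443 \
      --certificates CertificateArn=arn:aws:acm:us-east-1:111122223333:certificate/1234abcd-12ab-34cd-56ef-1234567890ab \
      --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
      --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/shop-tg/0123456789abcdef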
CloudFront supports Server Name Indication (SNI) for custom SSL certificates, along with the ability to take incoming HTTP requests and redirect them to secure HTTPS requests to ensure that clients are always directed to the secure version of your website.
Therefore, the correct answer is: Register the domain name on Route 53 and enable DNSSEC validation for all public hosted zones to ensure that all DNS requests have not been tampered with during transit. Use AWS Certificate Manager (ACM) to generate a valid TLS/SSL certificate for the domain name. Configure the Application Load Balancer with an HTTPS listener to use the ACM TLS/SSL certificate. Use Server Name Indication and HTTP to HTTPS redirection on CloudFront.
The option that says: Register the domain name on Route 53. Use a third-party DNS provider that supports the import of customer-managed keys for DNSSEC. Import a 2048-bit TLS/SSL certificate from a third-party certificate service into AWS Certificate Manager (ACM). Configure the Application Load Balancer with an HTTPS listener to use the imported TLS/SSL certificate. Use Server Name Indication and HTTP to HTTPS redirection on CloudFront is incorrect. Although this is possible, you don't have to rely on a third-party DNS provider as Route 53 supports DNSSEC signing. Also, ACM can issue a 2048-bit TLS/SSL certificate for free, so you don't have to buy certificates from other providers.
The option that says: Use Route 53 for domain registration. Use a third-party DNS service that supports DNSSEC for DNS requests using customer-managed keys. Use AWS Certificate Manager (ACM) to generate a valid 2048-bit TLS/SSL certificate for the domain name and configure the Application Load Balancer HTTPS listener to use this TLS/SSL certificate. Use Server Name Indication and HTTP to HTTPS redirection on CloudFront is incorrect. This is also possible, but you don't have to rely on a third-party DNS provider as Amazon Route 53 already supports DNSSEC signing.
The option that says: Register the domain name on Route 53. Since Route 53 only supports DNSSEC for registration, host the company DNS root servers on Amazon EC2 instances running the BIND service. Enable DNSSEC for DNS requests to ensure the replies have not been tampered with. Generate a valid certificate for the website domain name on AWS ACM and configure the Application Load Balancer's HTTPS listener to use this TLS/SSL certificate. Use Server Name Indication and HTTP to HTTPS redirection on CloudFront is incorrect as this solution is no longer recommended. This setup was previously used as a workaround when DNSSEC signing was not yet supported natively in Amazon Route 53.
References:
Check out these AWS Cheat Sheets:
Question 11: Correct
A company processes several petabytes of images submitted by users on their photo hosting site every month. Each month, the images are processed in its on-premises data center by a High-Performance Computing (HPC) cluster with a capacity of 5,000 cores and 10 petabytes of data. Processing a month's worth of images by thousands of jobs running in parallel takes about a week, and the processed images are stored on a network file server, which also backs up the data to a disaster recovery site.
The current data center is nearing its capacity, so the users are forced to spread the jobs over the course of the month. This is not ideal for the jobs' requirements, so the Solutions Architect was tasked to design a scalable solution that can exceed the current capacity with the least amount of management overhead while maintaining the current level of durability. [-> S3 - Lambda?]
Which of the following solutions will meet the company's requirements while being cost-effective?
Package the executable file for the job in a Docker image stored on Amazon Elastic Container Registry (Amazon ECR). Run the Docker images on Amazon Elastic Kubernetes Service (Amazon EKS). Auto Scaling can be handled automatically by EKS. Store the raw data temporarily on Amazon EBS SC1 volumes and then send the images to an Amazon S3 bucket after processing.
Create an Amazon SQS queue and submit the list of jobs to be processed. Create an Auto Scaling Group of Amazon EC2 Spot Instances that will process the jobs from the SQS queue. Share the raw data across all the instances using Amazon EFS. Store the processed images in an Amazon S3 bucket for long term storage.
Using a combination of On-demand and Reserved Instances as Task Nodes, create an EMR cluster that will use Spark to pull the raw data from an Amazon S3 bucket. List the jobs that need to be processed by the EMR cluster on a DynamoDB table. Store the processed images on a separate Amazon S3 bucket.
Utilize AWS Batch with Managed Compute Environments to create a fleet using Spot Instances. Store the raw data on an Amazon S3 bucket. Create jobs on AWS Batch Job Queues that will pull objects from the Amazon S3 bucket and temporarily store them to the EC2 EBS volumes for processing. Send the processed images back to another Amazon S3 bucket.
(Correct)
Explanation
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems.
There is no additional charge for AWS Batch. You only pay for the AWS resources (e.g. EC2 instances or Fargate jobs) you create to store and run your batch jobs. From the AWS Batch use cases page, we can see an example similar to this scenario wherein Digital Media and Entertainment companies require highly scalable batch computing resources to enable accelerated and automated processing of data as well as the compilation and processing of files, graphics, and visual effects for high-resolution video content. Use AWS Batch to accelerate content creation, dynamically scale media packaging, and automate asynchronous media supply chain workflows.
In AWS Batch, job queues are mapped to one or more compute environments. Compute environments contain the Amazon ECS container instances that are used to run containerized batch jobs. A specific compute environment can also be mapped to one or many job queues. Within a job queue, the associated compute environments each have an order that's used by the scheduler to determine where jobs that are ready to be run should run.
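A minimal sketch of the Spot-backed setup this answer describes; the subnet, security group, and role names/ARNs below are assumptions:
    # Managed Spot compute environment (placeholder subnets, SGs, and roles)
    aws batch create-compute-environment \
      --compute-environment-name img-spot-ce \
      --type MANAGED \
      --state ENABLED \
      --compute-resources type=SPOT,allocationStrategy=SPOT_CAPACITY_OPTIMIZED,minvCpus=0,maxvCpus=10000,instanceTypes=optimal,subnets=subnet-0abc1234,securityGroupIds=sg-0abc1234,instanceRole=ecsInstanceRole,spotIamFleetRole=arn:aws:iam::111122223333:role/AmazonEC2SpotFleetRole \
      --service-role arn:aws:iam::111122223333:role/AWSBatchServiceRole
    # Job queue mapped to the compute environment
    aws batch create-job-queue \
      --job-queue-name img-processing-queue \
      --state ENABLED --priority 1 \
      --compute-environment-order order=1,computeEnvironment=img-spot-ce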
Therefore, the correct answer is: Utilize AWS Batch with Managed Compute Environments to create a fleet using Spot Instances. Store the raw data on an Amazon S3 bucket. Create jobs on AWS Batch Job Queues that will pull objects from the Amazon S3 bucket and temporarily store them to the EC2 EBS volumes for processing. Send the processed images back to another Amazon S3 bucket.
The option that says: Package the executable file for the job in a Docker image stored on Amazon Elastic Container Registry (Amazon ECR). Run the Docker images on Amazon Elastic Kubernetes Service (Amazon EKS). Auto Scaling can be handled automatically by EKS. Store the raw data temporarily on Amazon EBS SC1 volumes and then send the images to an Amazon S3 bucket after processing is incorrect. Although this is possible, converting the application to a container and deploying it to an EKS cluster will entail a lot of changes for the application. Additionally, since you can’t quickly increase/decrease SC1 EBS volumes, creating a large volume to handle petabytes of data is not cost-effective.
The option that says: Using a combination of On-demand and Reserved Instances as Task Nodes, create an EMR cluster that will use Apache Spark to pull the raw data from an Amazon S3 bucket. List the jobs that need to be processed by the EMR cluster on a DynamoDB table. Store the processed images on a separate Amazon S3 bucket is incorrect as managing the EMR cluster and Apache Spark adds significant management overhead for this solution. There is also an additional cost for the EC2 instances that are constantly running even if there are only a few jobs that need to be run.
The option that says: Create an Amazon SQS queue and submit the list of jobs to be processed. Create an Auto Scaling Group of Amazon EC2 Spot Instances that will process the jobs from the SQS queue. Share the raw data across all the instances using Amazon EFS. Store the processed images in an Amazon S3 bucket for long term storage is incorrect as Amazon EFS is more expensive than storing the raw data on S3 buckets. This is also not efficient as listing the jobs on SQS Queue can cause some to be processed twice, depending on the state of your Spot instances.
References:
Check out this AWS Batch Cheat Sheet:
Question 18: Incorrect
A stock brokerage firm hosts its legacy application on Amazon EC2 in a private subnet of its Amazon VPC. The application is accessed by the employees from their corporate laptops through a proprietary desktop program. The company network is connected to AWS through an AWS Direct Connect (DX) connection to provide fast and reliable access to the private EC2 instances inside the VPC. To comply with the strict security requirements of financial institutions, the firm is required to encrypt its network traffic [-> KMS? VPN Tunnel? ] that flows from the employees' laptops to the resources inside the VPC.
Which of the following solutions will comply with this requirement while maintaining the consistent network performance of Direct Connect?
Using the current Direct Connect connection, create a new private virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN.
(Incorrect)
Using the current Direct Connect connection, create a new public virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN.
(Correct)
Using the current Direct Connect connection, create a new public virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the Internet. Configure the employees’ laptops to connect to this VPN.
Using the current Direct Connect connection, create a new private virtual interface and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the Internet. Configure the employees’ laptops to connect to this VPN.
Explanation
To connect to services such as EC2 using just Direct Connect you need to create a private virtual interface. However, if you want to encrypt the traffic flowing through Direct Connect, you will need to use the public virtual interface of DX to create a VPN connection that will allow access to AWS services such as S3, EC2, and other services.
To connect to AWS resources that are reachable by a public IP address (such as an Amazon Simple Storage Service bucket) or AWS public endpoints, use a public virtual interface. With a public virtual interface, you can:
- Connect to all AWS public IP addresses globally.
- Create public virtual interfaces in any DX location to receive Amazon’s global IP routes.
- Access publicly routable Amazon services in any AWS Region (except for the AWS China Region).
To connect to your resources hosted in an Amazon Virtual Private Cloud (Amazon VPC) using their private IP addresses, use a private virtual interface. With a private virtual interface, you can:
- Connect VPC resources (such as Amazon Elastic Compute Cloud (Amazon EC2) instances or load balancers) on your private IP address or endpoint.
- Connect a private virtual interface to a DX gateway. Then, associate the DX gateway with one or more virtual private gateways in any AWS Region (except the AWS China Region).
- Connect to multiple VPCs in any AWS Region (except the AWS China Region), because a virtual private gateway is associated with a single VPC.
If you want to establish a virtual private network (VPN) connection from your company network to an Amazon Virtual Private Cloud (Amazon VPC) over an AWS Direct Connect (DX) connection, you must use a public virtual interface for your DX connection.
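A sketch of the two pieces involved, with hypothetical IDs and addressing (the public/customer addresses and advertised prefixes must be coordinated with AWS):
    # Public virtual interface on the existing DX connection
    aws directconnect create-public-virtual-interface \
      --connection-id dxcon-fexample \
      --new-public-virtual-interface 'virtualInterfaceName=corp-vpn-vif,vlan=101,asn=65000,amazonAddress=203.0.113.2/30,customerAddress=203.0.113.1/30,routeFilterPrefixes=[{cidr=198.51.100.0/24}]'
    # Site-to-site VPN (dynamic/BGP routing by default) that rides over the public VIF
    aws ec2 create-vpn-connection \
      --type ipsec.1 \
      --customer-gateway-id cgw-0abc1234 \
      --vpn-gateway-id vgw-0abc1234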
Therefore, the correct answer is: Using the current Direct Connect connection, create a new public virtual interface, and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN.
The option that says: Using the current Direct Connect connection, create a new private virtual interface, and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC with the BGP protocol using the DX connection. Configure the company network to route employee traffic to this VPN is incorrect because you must use a public virtual interface for your AWS Direct Connect (DX) connection and not a private one. You won't be able to establish an encrypted VPN along with your DX connection if you create a private virtual interface.
The following options are incorrect because you need to establish the VPN connection through the DX connection, and not over the Internet.
- Using the current Direct Connect connection, create a new public virtual interface, and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the internet. Configure the employees’ laptops to connect to this VPN.
- Using the current Direct Connect connection, create a new private virtual interface, and input the network prefixes that you want to advertise. Create a new site-to-site VPN connection to the VPC over the internet. Configure the employees’ laptops to connect to this VPN.
References:
Check out this AWS Direct Connect Cheat Sheet:
Question 21: Incorrect
ChatGPT - Correct and beyond
A company wants to implement a multi-account strategy [-> Org] that will be distributed across its several research facilities. There will be approximately 50 teams in total that will need their own AWS accounts. A solution is needed to simplify the DNS management as there is only one team that manages all the domains and subdomains for the whole organization. This means that the solution should allow private DNS to be shared among virtual private clouds (VPCs) in different AWS accounts. [-> Transit Gateway?]
Which of the following solutions has the LEAST complex DNS architecture and allows all VPCs to resolve the needed domain names?
On AWS Resource Access Manager (RAM), set up a shared services VPC on your central account. Set up VPC peering from this VPC to each VPC on the other accounts. On Amazon Route 53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this zone. Programmatically associate the VPCs from other accounts with this hosted zone.
(Correct)
On AWS Resource Access Manager (RAM), set up a shared services VPC on your central account. Create a peering from this VPC to each VPC on the other accounts. On Amazon Route 53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this hosted zone. On each of the other AWS Accounts, create a Route 53 private hosted zone and configure the Name Server entry to use the DNS of the central account.
Set up a VPC peering connection among the VPCs of each account. Ensure that each VPC has the attributes enableDnsHostnames and enableDnsSupport set to “TRUE”. On Amazon Route 53, create a private hosted zone associated with the central account’s VPC. Manage all domains and subdomains on this hosted zone. On each of the other AWS Accounts, create a Route 53 private hosted zone and configure the Name Server entry to use the DNS of the central account.
Set up Direct Connect connections among the VPCs of each account using private virtual interfaces. Ensure that each VPC has the attributes enableDnsHostnames and enableDnsSupport set to “FALSE”. On Amazon Route 53, create a private hosted zone associated with the central account’s VPC. Manage all domains and subdomains on this hosted zone. Programmatically associate the VPCs from other accounts with this hosted zone.
(Incorrect)
Explanation
When you create a VPC using Amazon VPC, Route 53 Resolver automatically answers DNS queries for local VPC domain names for EC2 instances (ec2-192-0-2-44.compute-1.amazonaws.com) and records in private hosted zones (acme.example.com). For all other domain names, Resolver performs recursive lookups against public name servers.
You also can integrate DNS resolution between Resolver and DNS resolvers on your network by configuring forwarding rules. Your network can include any network that is reachable from your VPC, such as the following:
- The VPC itself
- Another peered VPC
- An on-premises network that is connected to AWS with AWS Direct Connect, a VPN, or a network address translation (NAT) gateway
VPC sharing allows customers to share subnets with other AWS accounts within the same AWS Organization. This is a very powerful concept that allows for a number of benefits:
- Separation of duties: centrally controlled VPC structure, routing, IP address allocation.
- Application owners continue to own resources, accounts, and security groups.
- VPC sharing participants can reference security group IDs of each other.
- Efficiencies: higher density in subnets, efficient use of VPNs and AWS Direct Connect.
- Hard limits can be avoided, for example, 50 VIFs per AWS Direct Connect connection through simplified network architecture.
- Costs can be optimized through reuse of NAT gateways, VPC interface endpoints, and intra-Availability Zone traffic.
Essentially, we can decouple accounts and networks. In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner. You can simplify network topologies by interconnecting shared Amazon VPCs using connectivity features, such as AWS PrivateLink, AWS Transit Gateway, and Amazon VPC peering.
Therefore, the correct answer is: On AWS Resource Access Manager (RAM), set up a shared services VPC on your central account. Set up VPC peering from this VPC to each VPC on the other accounts. On Amazon Route 53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this zone. Programmatically associate the VPCs from other accounts with this hosted zone.
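The programmatic association mentioned above is a two-step handshake across accounts; a sketch with placeholder zone and VPC IDs:
    # Step 1 - central account (zone owner) authorizes the participant VPC
    aws route53 create-vpc-association-authorization \
      --hosted-zone-id Z0123456789EXAMPLE \
      --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234
    # Step 2 - participant account associates its VPC with the zone
    aws route53 associate-vpc-with-hosted-zone \
      --hosted-zone-id Z0123456789EXAMPLE \
      --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234
    # Optional cleanup of the pending authorization
    aws route53 delete-vpc-association-authorization \
      --hosted-zone-id Z0123456789EXAMPLE \
      --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234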
The option that says: Set up Direct Connect connections among the VPCs of each account using private virtual interfaces. Ensure that each VPC has the attributes enableDnsHostnames and enableDnsSupport set to “FALSE”. On Amazon Route 53, create a private hosted zone associated with the central account’s VPC. Manage all domains and subdomains on this hosted zone. Programmatically associate the VPCs from other accounts with this hosted zone is incorrect. AWS Direct Connect is not a suitable service for connecting the various VPCs. In addition, the attributes enableDnsHostnames and enableDnsSupport are set to “TRUE” by default and are needed for VPC resources to query Route 53 zone entries.
The option that says: Set up a VPC peering connection among the VPCs of each account. Ensure that each VPC has the attributes enableDnsHostnames and enableDnsSupport set to “TRUE”. On Amazon Route 53, create a private hosted zone associated with the central account’s VPC. Manage all domains and subdomains on this hosted zone. On each of the other AWS Accounts, create a Route 53 private hosted zone and configure the Name Server entry to use the DNS of the central account is incorrect. You won't be able to resolve the private hosted zone entries even if you configure your Route 53 zone's NS entry to use the central account's DNS servers.
The option that says: On AWS Resource Access Manager (RAM), set up a shared services VPC on your central account. Create a peering from this VPC to each VPC on the other accounts. On Amazon Route 53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this hosted zone. On each of the other AWS Accounts, create a Route 53 private hosted zone and configure the Name Server entry to use the DNS of the central account is incorrect. Although creating the shared services VPC is a good solution, configuring Route 53 Name Server (NS) records to point to the shared services VPC’s Route 53 is not enough. You need to associate the VPCs from other accounts to the hosted zone on the central account.
References:
Check out these Amazon VPC and Route 53 Cheat Sheets:
Question 29: Incorrect
A retail company hosts its web application on an Auto Scaling group of Amazon EC2 instances deployed across multiple Availability Zones. The Auto Scaling group is configured to maintain a minimum EC2 cluster size and automatically replace unhealthy instances. The EC2 instances are behind an Application Load Balancer so that the load can be spread evenly across all instances. The application target group health check is configured with a fixed HTTP page that queries a dummy item on the database. The web application connects to a Multi-AZ Amazon RDS MySQL instance. A recent outage caused a major loss to the company's revenue. Upon investigation, it was found that the web server metrics are within the normal range but the database CPU usage is very high, causing the EC2 health checks to time out. Because the instances kept failing the health checks, the Auto Scaling group continuously replaced the unhealthy instances, thus causing the downtime.
Which of the following options should the Solution Architect implement to prevent this from happening again and allow the application to handle more traffic in the future? (Select TWO.)
Change the target group health check to use a TCP check on the EC2 instances instead of a page that queries the database. Create an Amazon Route 53 health check for the database dummy item web page to ensure that the application works as expected. Set up an Amazon CloudWatch alarm to send a notification to Admins when the health check fails.
Create an Amazon CloudWatch alarm to monitor the Amazon RDS MySQL instance for high load or impaired status. Set the alarm action to recover the RDS instance. This will automatically reboot the database to reset the queries.
(Incorrect)
Change the target group health check to a simple HTML page instead of a page that queries the database. Create an Amazon Route 53 health check for the database dummy item web page to ensure that the application works as expected. Set up an Amazon CloudWatch alarm to send a notification to Admins when the health check fails.
(Correct)
Reduce the load on the database tier by creating multiple read replicas for the Amazon RDS MySQL Multi-AZ cluster. Configure the web application to use the single reader endpoint of RDS for all read operations.
Reduce the load on the database tier by creating an Amazon ElastiCache cluster to cache frequently requested database queries. Configure the application to use this cache when querying the RDS MySQL instance.
(Correct)
Explanation
Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. Each health check that you create can monitor one of the following:
The health of a specified resource, such as a web server - You can configure a health check that monitors an endpoint that you specify either by IP address or by the domain name. At regular intervals that you specify, Route 53 submits automated requests over the Internet to your application. You can configure the health check to make requests similar to those that your users make, such as requesting a web page from a specific URL.
The status of other health checks - You can create a health check that monitors whether Route 53 considers other health checks healthy or unhealthy. One situation where this might be useful is when you have multiple resources that perform the same function, such as multiple web servers, and your chief concern is whether some minimum number of your resources are healthy.
The status of an Amazon CloudWatch alarm - You can create CloudWatch alarms that monitor the status of CloudWatch metrics, such as the number of throttled read events for an Amazon DynamoDB database or the number of Elastic Load Balancing hosts that are considered healthy.
After you create a health check, you can get the status of the health check, get notifications when the status changes, and configure DNS failover. To improve resiliency and availability, Route 53 doesn't wait for the CloudWatch alarm to go into the ALARM state. The status of a health check changes from healthy to unhealthy based on the data stream and on the criteria in the CloudWatch alarm.
Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks. Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check. If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.
Each health check is executed at the configured interval against all the EC2 instances, so if the health check page involves a database query, there will be several simultaneous queries to the database. This can increase the load on your database tier if there are many EC2 instances and the health check interval is very short.
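A sketch of both changes with hypothetical ARNs, domain, and paths: the target group check is pointed at a static page, and a separate Route 53 health check watches the DB-backed page:
    # Target group health check moved to a static HTML page
    aws elbv2 modify-target-group \
      --target-group-arn arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/web-tg/0123456789abcdef \
      --health-check-path /health.html \
      --health-check-interval-seconds 30
    # External Route 53 health check against the page that queries the database
    aws route53 create-health-check \
      --caller-reference db-page-check-001 \
      --health-check-config Type=HTTPS,FullyQualifiedDomainName=www.example.com,ResourcePath=/db-check,RequestInterval=30,FailureThreshold=3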
Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution. At the same time, it helps remove the complexity associated with deploying and managing a distributed cache environment.
ElastiCache for Memcached has multiple features to enhance reliability for critical production deployments:
- Automatic detection and recovery from cache node failures.
- Automatic discovery of nodes within a cluster enabled for automatic discovery so that no changes need to be made to your application when you add or remove nodes.
- Flexible Availability Zone placement of nodes and clusters.
- Integration with other AWS services such as Amazon EC2, Amazon CloudWatch, AWS CloudTrail, and Amazon SNS to provide a secure, high-performance, managed in-memory caching solution.
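For reference, a minimal Memcached cluster for caching hot product queries might be created like this (the cluster name and node sizing are assumptions):
    aws elasticache create-cache-cluster \
      --cache-cluster-id product-query-cache \
      --engine memcached \
      --cache-node-type cache.t3.medium \
      --num-cache-nodes 2 \
      --az-mode cross-az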
The option that says: Change the target group health check to a simple HTML page instead of a page that queries the database. Create an Amazon Route 53 health check for the database dummy item web page to ensure that the application works as expected. Set up an Amazon CloudWatch alarm to send a notification to Admins when the health check fails is correct. Changing the target group health check to a simple HTML page will reduce the queries to the database tier. The Route 53 health check can act as the “external” check on a specific page that queries the database to ensure that the application is working as expected. The Route 53 health check has an overall lower request count compared to using the target group health check. [-> Route53 Health Check + CloudWatch Alarm]
The option that says: Reduce the load on the database tier by creating an Amazon ElastiCache cluster to cache frequently requested database queries. Configure the application to use this cache when querying the RDS MySQL instance is correct. Since this is a retail web application, most of the queries will be read-intensive as customers are searching for products. ElastiCache is effective at caching frequent requests, which overall improves the application response time and reduces database queries.
The option that says: Reduce the load on the database tier by creating multiple read replicas for the Amazon RDS MySQL Multi-AZ cluster. Configure the web application to use the single reader endpoint of RDS for all read operations is incorrect. This is possible, as creating read replicas is recommended to increase the read performance of an RDS cluster. However, this option does not address the original problem of reducing the number of repetitive queries hitting the database.
The option that says: Change the target group health check to use a TCP check on the EC2 instances instead of a page that queries the database. Create an Amazon Route 53 health check for the database dummy item web page to ensure that the application works as expected. Set up an Amazon CloudWatch alarm to send a notification to Admins when the health check fails is incorrect. An Application Load Balancer does not support a TCP health check. ALB only supports HTTP and HTTPS target health checks.
The option that says: Create an Amazon CloudWatch alarm to monitor the Amazon RDS MySQL instance if it has a high-load or in impaired status. Set the alarm action to recover the RDS instance. This will automatically reboot the database to reset the queries is incorrect. Recovering the database instance results in downtime. If you have the Multi-AZ enabled, the standby database will shoulder all the load causing it to crash too. It is better to scale the database by creating read replicas or adding an ElastiCache cluster in front of it.
References:
Check out these Amazon ElastiCache and Amazon RDS Cheat Sheets:
Question 31: Incorrect
A graphics design startup is using multiple Amazon S3 buckets to store high-resolution media files for their various digital artworks. After securing a partnership deal with a leading media company, the two parties shall be sharing digital resources with one another as part of the contract. The media company frequently performs multiple object retrievals from the S3 buckets every day, which increased the startup's data transfer costs.
As the Solutions Architect, what should you do to help the startup lower their operational costs?
Advise the media company to create their own S3 bucket. Then run the aws s3 sync s3://sourcebucket s3://destinationbucket command to copy the objects from their S3 bucket to the other party's S3 bucket. In this way, future retrievals can be made on the media company's S3 bucket instead.
(Incorrect)
Enable the Requester Pays feature in all of the startup's S3 buckets to make the media company pay the cost of the data transfer from the buckets.
(Correct)
Provide cross-account access for the media company, which has permissions to access contents in the S3 bucket. Cross-account retrieval of S3 objects is charged to the account that made the request.
Create a new billing account for the media company by using AWS Organizations. Apply SCPs on the organization to ensure that each account has access only to its own resources and each other's S3 buckets.
Explanation
In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket. A bucket owner, however, can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.
You must authenticate all requests involving Requester Pays buckets. The request authentication enables Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket. After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-payer in their requests either in the header, for POST, GET and HEAD requests, or as a parameter in a REST request to show that they understand that they will be charged for the request and the data download.
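A sketch of both sides of the arrangement, with a placeholder bucket and key:
    # Bucket owner (startup) turns on Requester Pays
    aws s3api put-bucket-request-payment \
      --bucket startup-media-assets \
      --request-payment-configuration Payer=Requester
    # Requester (media company) must acknowledge the charges on each call
    aws s3api get-object \
      --bucket startup-media-assets \
      --key artworks/hires-001.tif \
      --request-payer requester \
      hires-001.tif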
Hence, the correct answer is to enable the Requester Pays feature in all of the startup's S3 buckets to make the media company pay the cost of the data transfer from the buckets.
The option that says: Advise the media company to create their own S3 bucket. Then run the aws s3 sync s3://sourcebucket s3://destinationbucket command to copy the objects from their S3 bucket to the other party's S3 bucket. In this way, future retrievals can be made on the media company's S3 bucket instead is incorrect because sharing all the assets of the startup with the media company entails a lot of costs, considering that you will be charged for the data transfer made during the sync process.
Creating a new billing account for the media company by using AWS Organizations, then applying SCPs on the organization to ensure that each account has access only to its own resources and each other's S3 buckets is incorrect because AWS Organizations does not create a separate billing account for every account under it. Instead, what AWS Organizations offers is consolidated billing. You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts. Every organization in AWS Organizations has a master account that pays the charges of all the member accounts.
The option that says: Provide cross-account access for the media company, which has permissions to access contents in the S3 bucket. Cross-account retrieval of S3 objects is charged to the account that made the request is incorrect because cross-account access does not shoulder the charges that are made during S3 object requests. Unless Requester Pays is enabled on the bucket, the bucket owner is still the one that is charged.
Reference:
Check out this Amazon S3 Cheat Sheet:
Question 34: Correct
A company wants to host its internal web application in AWS. The front-end uses Docker containers and it connects to a MySQL instance as the backend database. The company plans to use AWS-managed container services [-> ECS] to reduce the overhead in managing the servers. The application should allow employees to access company documents, which are accessed frequently for the first 3 months and then rarely after that [S3 -> IA/Glacier -> Remove after 5 years]. As part of the company policy, these documents must be retained for at least five years. Because this is an internal web application, the company wants to have the lowest possible cost.
Which of the following implementations is the most cost-effective solution?
Deploy the Docker containers using Amazon Elastic Kubernetes Service (EKS) with auto-scaling enabled. Use Amazon EC2 Spot instances for the EKS cluster to further reduce costs. Use On-Demand instances for the Amazon RDS database and its read replicas. Create an encrypted Amazon S3 bucket to store the company documents. Create a bucket lifecycle policy that will move the documents to Amazon S3 Glacier after three months and will delete objects older than five years.
Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 Spot Instances. Ensure that Spot Instance draining is enabled on the ECS agent config. Use Reserved Instances for the Amazon RDS database and its read replicas. Create an encrypted Amazon S3 bucket to store the company documents. Create a bucket lifecycle policy that will move the documents to Amazon S3 Glacier after three months and will delete objects older than five years.
(Correct)
Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 On-Demand instances. Use On-Demand instances as well for the Amazon RDS database and its read replicas. Create an Amazon EFS volume that is mounted on the EC2 instances to store the company documents. Create a cron job that will copy the documents to Amazon S3 Glacier after three months and then create a bucket lifecycle policy that will delete objects older than five years.
Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 Spot Instances. Use Spot instances for the Amazon RDS database and its read replicas. Create an encrypted ECS volume on the EC2 hosts that is shared with the containers to store the company documents. Set up a cron job that will delete the files after five years.
Explanation
A Spot Instance is an unused Amazon EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. The hourly price for a Spot Instance is called a Spot price. The Spot price of each instance type in each Availability Zone is set by Amazon EC2, and adjusted gradually based on the long-term supply of and demand for Spot Instances. You can register Spot Instances to your Amazon ECS clusters. Amazon EC2 terminates, stops, or hibernates your Spot Instance when the Spot price exceeds the maximum price for your request or capacity is no longer available. Amazon EC2 provides a Spot Instance interruption notice, which gives the instance a two-minute warning before it is interrupted. If Amazon ECS Spot Instance draining is enabled on the instance, ECS receives the Spot Instance interruption notice and places the instance in DRAINING status.
When a container instance is set to DRAINING, Amazon ECS prevents new tasks from being scheduled for placement on the container instance. Service tasks on the draining container instance that are in the PENDING state are stopped immediately. If there are container instances in the cluster that are available, replacement service tasks are started on them. Spot Instance draining is disabled by default and must be manually enabled by adding the line ECS_ENABLE_SPOT_INSTANCE_DRAINING=true in your /etc/ecs/ecs.config file.
Within the Spot provisioning model, you can provide an allocation strategy of either “Diversified” or “Lowest Price” which will define how the EC2 Spot Instances are provisioned. The recommended best practice is to select the “Diversified” strategy, to maximize provisioning choices, while reducing the costs. When this is combined with Spot Instance draining, you can allow your Spot instances to drain connections gracefully while having enough time for the cluster to spawn other Spot instance types to handle the load. When configured correctly, you can significantly reduce downtime of Spot instances or eliminate downtime entirely.
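In practice this is a one-line agent setting, typically added through the instance user data; the cluster name below is hypothetical:
    #!/bin/bash
    # EC2 user data for ECS container instances
    echo "ECS_CLUSTER=web-cluster" >> /etc/ecs/ecs.config
    echo "ECS_ENABLE_SPOT_INSTANCE_DRAINING=true" >> /etc/ecs/ecs.config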
Therefore, the correct answer is: Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 Spot Instances. Ensure that Spot Instance draining is enabled on the ECS agent config. Use Reserved Instances for the Amazon RDS database and its read replicas. Create an encrypted Amazon S3 bucket to store the company documents. Create a bucket lifecycle policy that will move the documents to Amazon S3 Glacier after three months and will delete objects older than five years. With diversified Spot Instance types and Spot Instance draining, you can allow your ECS cluster to spawn other EC2 instance types automatically to handle the load at a very low cost. Reserved Instances are a recommended cost-saving measure for RDS instances that will be running continuously for years.
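The lifecycle rule in the correct answer could look roughly like this (bucket name assumed; 90 days is about three months and 1825 days is five years):
    aws s3api put-bucket-lifecycle-configuration \
      --bucket company-documents \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "archive-then-expire",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
          "Expiration": {"Days": 1825}
        }]
      }'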
The option that says: Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 Spot Instances. Use Spot instances for the Amazon RDS database and its read replicas. Create an encrypted ECS volume on the EC2 hosts that is shared with the containers to store the company documents. Set up a cron job that will delete the files after five years is incorrect. Storing company documents on the EC2 instances will require more disk space on instances, which is unnecessary and expensive. Using Spot instances for RDS instances is not recommended as this will cause major downtime or data loss in case AWS terminates your spot instance.
The option that says: Deploy the Docker containers using Amazon Elastic Container Service (ECS) with Amazon EC2 On-Demand instances. Use On-Demand instances as well for the Amazon RDS database and its read replicas. Create an Amazon EFS volume that is mounted on the EC2 instances to store the company documents. Create a cron job that will copy the documents to Amazon S3 Glacier after three months and then create a bucket lifecycle policy that will delete objects older than five years is incorrect. This is possible; however, using EFS volumes is more expensive than simply storing the files on Amazon S3 in the first place.
The option that says: Deploy the Docker containers using Amazon Elastic Kubernetes Service (EKS) with auto-scaling enabled. Use Amazon EC2 Spot instances for the EKS cluster to further reduce costs. Use On-Demand instances for the Amazon RDS database and its read replicas. Create an encrypted Amazon S3 bucket to store the company documents. Create a bucket lifecycle policy that will move the documents to Amazon S3 Glacier after three months and will delete objects older than five years is incorrect. This option is also possible; however, using On-Demand instances for continuously running RDS instances is expensive. You can save costs by using Reserved Instances for Amazon RDS.
References:
Check out this Amazon ECS Cheat Sheet:
Question 36: Correct
A top university has launched its serverless online portal using Lambda and API Gateway in AWS, which enables its students to enroll, manage their class schedules, and see their grades online. After a few weeks, the portal abruptly stopped working and lost all of its data. The university hired an external cybersecurity consultant, and based on the investigation, the outage was due to an SQL injection vulnerability on the portal's login page through which the attacker injected malicious SQL code. You also need to track historical changes to the rules and metrics [-> AWS Config] associated with your firewall.
Which of the following is the most suitable and cost-effective solution to avoid another SQL Injection attack against their infrastructure in AWS? [-> WAF Rules]
Block the IP address of the attacker in the Network Access Control List of your VPC and then set up a CloudFront distribution. Set up AWS WAF to add a web access control list (web ACL) in front of the CloudFront distribution to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
Use AWS WAF to add a web access control list (web ACL) in front of the Lambda functions to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
Use AWS WAF to add a web access control list (web ACL) in front of the API Gateway to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
(Correct)
Create a new Application Load Balancer (ALB) and set up AWS WAF in the load balancer. Place the API Gateway behind the ALB and configure a web access control list (web ACL) in front of the ALB to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
Explanation
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. With AWS Config, you can track changes to WAF web access control lists (web ACLs). For example, you can record the creation and deletion of rules and rule actions, as well as updates to WAF rule configurations.
AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.
In this scenario, the best option is to deploy WAF in front of the API Gateway. Hence the correct answer is the option that says: Use AWS WAF to add a web access control list (web ACL) in front of the API Gateway to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
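A hedged sketch: a regional web ACL using the AWS-managed SQL injection rule group, associated with an API Gateway stage (the names, IDs, and Region below are placeholders):
    # Regional web ACL with the managed SQLi rule group
    aws wafv2 create-web-acl \
      --name portal-acl --scope REGIONAL --region us-east-1 \
      --default-action Allow={} \
      --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=portalAcl \
      --rules '[{"Name":"SQLiRules","Priority":0,"Statement":{"ManagedRuleGroupStatement":{"VendorName":"AWS","Name":"AWSManagedRulesSQLiRuleSet"}},"OverrideAction":{"None":{}},"VisibilityConfig":{"SampledRequestsEnabled":true,"CloudWatchMetricsEnabled":true,"MetricName":"SQLiRules"}}]'
    # Attach the web ACL to the API Gateway stage
    aws wafv2 associate-web-acl \
      --web-acl-arn arn:aws:wafv2:us-east-1:111122223333:regional/webacl/portal-acl/1234abcd-12ab-34cd-56ef-1234567890ab \
      --resource-arn arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod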
The option that says: Use AWS WAF to add a web access control list (web ACL) in front of the Lambda functions to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect because you have to use AWS WAF in front of the API Gateway and not directly on the Lambda functions. AWS Firewall Manager is primarily used to manage your firewall rules across multiple AWS accounts under AWS Organizations and hence, it is not suitable for tracking changes to WAF web access control lists. You should use AWS Config instead.
The option that says: Block the IP address of the attacker in the Network Access Control List of your VPC and then set up a CloudFront distribution. Set up AWS WAF to add a web access control list (web ACL) in front of the CloudFront distribution to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect. Even though it is valid to use AWS WAF with CloudFront, it entails an additional and unnecessary cost to launch a CloudFront distribution for this scenario. There is no requirement that the serverless online portal should be scalable and be accessible around the globe hence, a CloudFront distribution is not necessary.
The option that says: Create a new Application Load Balancer (ALB) and set up AWS WAF in the load balancer. Place the API Gateway behind the ALB and configure a web access control list (web ACL) in front of the ALB to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations is incorrect. Launching a new Application Load Balancer entails additional cost and is not cost-effective. In addition, AWS Firewall Manager is primarily used to centrally manage firewall rules across multiple AWS accounts in your AWS Organization. AWS Config is much more suitable for tracking changes to WAF web access control lists.
References:
Check out this AWS WAF Cheat Sheet:
AWS Security Services Overview - WAF, Shield, CloudHSM, KMS:
Question 37: Correct
A fintech startup has developed a cloud-based payment processing system that accepts credit card payments as well as cryptocurrencies such as Bitcoin, Ripple, and the like. The system is deployed in AWS and uses EC2, DynamoDB, S3, and CloudFront to process the payments. Since they are accepting credit card information from the users, they are required to be compliant with the Payment Card Industry Data Security Standard (PCI DSS). In a recent third-party audit, it was found that the credit card numbers are not properly encrypted and hence, their system failed the PCI DSS compliance [-> HTTPS & field-level encryption] test. You were hired by the fintech startup to solve this issue so they can release the product in the market as soon as possible. In addition, you also have to improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content. [-> Cache-Control max-age]
In this scenario, what is the best option to protect and encrypt the sensitive credit card information of the users and to improve the cache hit ratio of your CloudFront distribution?
Create an origin access control (OAC) and add it to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.
Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
(Correct)
Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
Add a custom SSL certificate in the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.
Explanation
Field-level encryption adds an additional layer of security, along with HTTPS, that lets you protect specific data throughout system processing so that only certain applications can see it. Field-level encryption allows you to securely upload user-submitted sensitive information to your web servers. The sensitive information provided by your clients is encrypted at the edge closer to the user and remains encrypted throughout your entire application stack, ensuring that only applications that need the data—and have the credentials to decrypt it—are able to do so.
To use field-level encryption, you configure your CloudFront distribution to specify the set of fields in POST requests that you want to be encrypted, and the public key to use to encrypt them. You can encrypt up to 10 data fields in a request. Hence, the correct answer for this scenario is the option that says: Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
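As a rough sketch of that configuration (the key file, names, ProviderId, and the CreditCardNumber field pattern below are all hypothetical), the public key and field-level encryption profile could be created with boto3 as follows; a field-level encryption config that references the profile is then attached to the distribution's cache behavior:

import boto3

cf = boto3.client("cloudfront")

# Register the RSA public key that CloudFront will use to encrypt the
# sensitive field at the edge; only the holder of the private key can
# decrypt it later in the application stack.
key = cf.create_public_key(PublicKeyConfig={
    "CallerReference": "cc-key-2024",        # must be unique per request
    "Name": "credit-card-public-key",
    "EncodedKey": open("public_key.pem").read(),
})

# Create a profile that encrypts the credit card number field in
# POST requests (up to 10 fields can be listed).
cf.create_field_level_encryption_profile(FieldLevelEncryptionProfileConfig={
    "Name": "cc-fle-profile",
    "CallerReference": "cc-fle-2024",
    "EncryptionEntities": {
        "Quantity": 1,
        "Items": [{
            "PublicKeyId": key["PublicKey"]["Id"],
            "ProviderId": "payment-app",     # hypothetical provider ID
            "FieldPatterns": {"Quantity": 1, "Items": ["CreditCardNumber"]},
        }],
    },
})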
You can improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content; that is, by improving the cache hit ratio for your distribution. To increase your cache hit ratio, you can configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age. The shorter the cache duration, the more frequently CloudFront forwards another request to your origin to determine whether the object has changed and, if so, to get the latest version.
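For example, with an Amazon S3 origin the directive can be set per object at upload time. A minimal sketch, assuming a hypothetical bucket and key:

import boto3

s3 = boto3.client("s3")

# Upload an object with a long max-age so CloudFront can keep serving it
# from edge caches for a full day before revalidating with the origin.
with open("app.js", "rb") as f:
    s3.put_object(
        Bucket="example-origin-bucket",   # hypothetical bucket name
        Key="static/app.js",
        Body=f,
        ContentType="application/javascript",
        CacheControl="max-age=86400",     # 24 hours, in seconds
    )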
The option that says: Add a custom SSL certificate in the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio is incorrect. Although it provides secure connections to origin servers, it is better to add field-level encryption to protect the credit card information.
The option that says: Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio is incorrect because a Signed URL provides a way to distribute private content, but it doesn't encrypt the sensitive credit card information.
The option that says: Create an origin access control (OAC) and add it to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio is incorrect because OAC is mainly used to restrict access to objects in an S3 bucket, not to encrypt specific fields in a request.
References:
Check out this Amazon CloudFront Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 38: Correct
A company is hosting its production environment in AWS Fargate. To save costs, the Chief Information Officer (CIO) wants to deploy its new development environment workloads on its on-premises servers as this leverages existing capital investments. As the Solutions Architect, you have been tasked by the CIO to provide a solution that will:
have both on-premises and Fargate managed in the same cluster [-> EKS / ECS Anywhere]
easily migrate development environment workloads running on-premises to production environment running in AWS Fargate
ensure consistent tooling and API experience across container-based workloads
Which of the following is the MOST operationally efficient solution that meets these requirements?
Use Amazon EKS Anywhere to simplify on-premises Kubernetes management with default component configurations and automated cluster management tools. This makes it easy to migrate the development workloads running on-premises to EKS in an AWS region on Fargate.
Utilize Amazon ECS Anywhere to streamline software management on-premises and on AWS with a standardized container orchestrator. This makes it easy to migrate the development workloads running on-premises to ECS in an AWS region on Fargate.
(Correct)
Install and configure AWS Outposts in your on-premises data center. Run Amazon EKS Anywhere on AWS Outposts to launch container-based workloads. Migrate development workloads to production that is running on AWS Fargate.
Install and configure AWS Outposts in your on-premises data center. Run Amazon ECS on AWS Outposts to launch the development environment workloads. Migrate development workloads to production that is running on AWS Fargate.
Explanation
Amazon Elastic Container Service (ECS) Anywhere is a feature of Amazon ECS that lets you run and manage container workloads on your infrastructure. This feature helps you meet compliance requirements and scale your business without sacrificing your on-premises investments.
It also ensures consistency, since you use the same Amazon ECS tools on-premises and keep using them when you migrate to AWS.
ECS Anywhere extends the reach of Amazon ECS to provide you with a single management interface for all of your container-based applications, irrespective of the environment they’re running in. As a result, you have a simple, consistent experience when it comes to cluster management, workload scheduling, and monitoring for both the cloud and on-premises. With ECS Anywhere, you do not need to install and maintain any container orchestration software, thus removing the need for your team to learn specialized knowledge domains and skillsets for disparate tooling.
ECS Anywhere makes it easy for you to run your applications in on-premises environments as long as desired and then migrate to the cloud with a single click at any time.
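A minimal boto3 sketch of that registration flow, assuming a hypothetical cluster name and an IAM role named ecsAnywhereRole that is already trusted by ssm.amazonaws.com:

import boto3

ecs = boto3.client("ecs")
ssm = boto3.client("ssm")

# A single ECS cluster can hold EXTERNAL (on-premises), FARGATE, and
# EC2 capacity side by side.
ecs.create_cluster(clusterName="hybrid-cluster")   # hypothetical name

# A hybrid activation produces the credentials the on-premises servers
# use to register themselves with SSM and ECS.
activation = ssm.create_activation(
    Description="ECS Anywhere dev servers",
    IamRole="ecsAnywhereRole",      # hypothetical pre-created role
    RegistrationLimit=10,
)
print(activation["ActivationId"], activation["ActivationCode"])

# On each on-premises server, the activation ID and code are passed to
# the ECS Anywhere install script, which registers the host in the
# cluster as an EXTERNAL container instance.

Development task definitions then declare requiresCompatibilities of EXTERNAL; migrating a workload to production reuses the same cluster and tooling, with the task launched using launchType FARGATE instead (Fargate additionally requires awsvpc networking and task-level CPU and memory settings).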
Therefore, the correct answer is: Utilize Amazon ECS Anywhere to streamline software management on-premises and on AWS with a standardized container orchestrator. This makes it easy to migrate the development workloads running on-premises to ECS in an AWS region on Fargate. A single ECS cluster can contain on-premises compute, EC2 instances, and Fargate capacity at the same time, making it easy to migrate ECS workloads running on-premises to ECS in an AWS Region on Fargate or EC2 in the future if necessary.
The option that says: Use Amazon EKS Anywhere to simplify on-premises Kubernetes management with default component configurations and automated cluster management tools. This makes it easy to migrate the development workloads running on-premises to EKS in an AWS region on Fargate is incorrect because Amazon EKS Anywhere is designed to run Kubernetes clusters on your own on-premises infrastructure, not in the AWS Cloud, so it cannot manage on-premises servers and Fargate in the same cluster. It also does not integrate with the Kubernetes Cluster API Provider for AWS.
The following sets of options are incorrect because AWS Fargate is not available on AWS Outposts, and Amazon EKS Anywhere isn't designed to run on AWS Outposts:
- Install and configure AWS Outposts in your on-premises data center. Run Amazon ECS on AWS Outposts to launch the development environment workloads. Migrate development workloads to production that is running on AWS Fargate.
- Install and configure AWS Outposts in your on-premises data center. Run Amazon EKS Anywhere on AWS Outposts to launch container-based workloads. Migrate development workloads to production that is running on AWS Fargate.
References:
Question 40: Incorrect
A leading financial company is planning to launch its MERN (MongoDB, Express, React, Node.js) application with an Amazon RDS MariaDB database to serve its clients worldwide. The application will run on both on-premises servers and Reserved EC2 instances. To comply with the company's strict security policy, the database credentials must be encrypted both at rest and in transit. These credentials will be used by the application servers to connect to the database. [-> IAM Role on EC2] The Solutions Architect is tasked to manage all aspects of the application architecture and production deployment.
How should the Architect automate the deployment process of the application in the MOST secure manner?
Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to all on-premises servers and EC2 instances. Use Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using Elastic Beanstalk.
Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Attach this IAM policy to the instance profile for CodeDeploy-managed EC2 instances. Associate the same policy as well to the on-premises instances. Using AWS CodeDeploy, launch the application packages to the Amazon EC2 instances and on-premises servers.
Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to the EC2 instances. Create an IAM Service Role that will be associated with the on-premises servers. Deploy the application packages to the EC2 instances and on-premises servers using AWS CodeDeploy.
(Correct)
Upload the database credentials with key rotation in AWS Secrets Manager. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to all on-premises servers and EC2 instances. Use Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using Elastic Beanstalk.
(Incorrect)
Explanation
AWS Systems Manager Parameter Store [-> what's the difference between this and Secrets Manager?] provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud.
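To illustrate (the parameter name and value below are hypothetical), storing and reading such a credential with boto3 looks roughly like this; the SecureString type encrypts the value at rest with KMS, and the API call itself travels over TLS:

import boto3

ssm = boto3.client("ssm")

# Store the database password encrypted at rest (by default with the
# AWS managed key; a customer managed KMS key can be named via KeyId).
ssm.put_parameter(
    Name="/payments/db/password",    # hypothetical parameter name
    Value="s3cr3t-placeholder",      # never hard-code real credentials
    Type="SecureString",
    Overwrite=True,
)

# The application servers read and decrypt the value at startup.
param = ssm.get_parameter(Name="/payments/db/password", WithDecryption=True)
db_password = param["Parameter"]["Value"]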
Servers and virtual machines (VMs) in a hybrid environment require an IAM role to communicate with the Systems Manager service. The role grants AssumeRole trust to the Systems Manager service. You only need to create the service role for a hybrid environment once for each AWS account.
Users in your company or organization who will use Systems Manager on your hybrid machines must be granted permission in IAM to call the SSM API.
Service role: A service role is an AWS Identity and Access Management (IAM) role that grants permissions to an AWS service so that the service can access AWS resources. Only a few Systems Manager scenarios require a service role. When you create a service role for Systems Manager, you choose the permissions to grant in order for it to access or interact with other AWS resources.
Service-linked role: A service-linked role is predefined by Systems Manager and includes all the permissions that the service requires to call other AWS services on your behalf.
If you plan to use Systems Manager to manage on-premises servers and virtual machines (VMs) in what is called a hybrid environment, you must create an IAM role for those resources to communicate with the Systems Manager service.
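A hedged sketch of creating such a service role with boto3 (the role name is hypothetical; AmazonSSMManagedInstanceCore is the AWS managed policy that grants the core SSM agent permissions):

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets Systems Manager assume the role on behalf of
# the registered on-premises machines.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ssm.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="SSMServiceRoleForOnPrem",   # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

iam.attach_role_policy(
    RoleName="SSMServiceRoleForOnPrem",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

The role name is then supplied to a hybrid activation (ssm.create_activation), and the resulting activation code and ID are used when registering each on-premises server with the SSM agent.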
Hence, the correct answer is: Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to the EC2 instances. Create an IAM Service Role that will be associated with the on-premises servers. Deploy the application packages to the EC2 instances and on-premises servers using AWS CodeDeploy.
The option that says: Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to all on-premises servers and EC2 instances. Use Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using Elastic Beanstalk is incorrect. You can't deploy an application to your on-premises servers using Elastic Beanstalk. This is only applicable to your Amazon EC2 instances.
The option that says: Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Attach this IAM policy to the instance profile for CodeDeploy-managed EC2 instances. Associate the same policy as well to the on-premises instances. Using AWS CodeDeploy, launch the application packages to the Amazon EC2 instances and on-premises servers is incorrect. You have to use an IAM role, not an IAM policy, to grant access to AWS Systems Manager Parameter Store. In addition, instance profiles apply only to EC2 instances; on-premises servers cannot use them and must be registered through an IAM service role instead.
The option that says: Upload the database credentials with key rotation in AWS Secrets Manager. Install the AWS SSM agent on all servers. Set up a new IAM role that enables access and decryption of the database credentials from SSM Parameter Store. Associate this role to all on-premises servers and EC2 instances. Use Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using Elastic Beanstalk is incorrect. Although you can store the database credentials in AWS Secrets Manager, you still can't deploy an application to your on-premises servers using Elastic Beanstalk.
References:
Check out this AWS Systems Manager Cheat Sheet: