30 Mar 2024 Morning study
Question 41: Correct
A data analytics startup has been chosen to develop a data analytics system that will track all statistics in the Fédération Internationale de Football Association (FIFA) World Cup, which will also be used by other 3rd-party analytics sites. The system will record, store, and provide statistical data reports about the top scorers, goals scored by each team, average goals, average passes, average yellow/red cards per match, and many other details. FIFA fans all over the world [-> CloudFront CDN] will frequently access the statistics reports every day and thus, the reports should be durably stored, highly available, and highly scalable. [-> DynamoDB] In addition, the data analytics system will allow the users to vote for the best male and female FIFA player as well as the best male and female coach. Due to the popularity of the FIFA World Cup event, it is projected that there will be over 10 million queries on game day, which could spike to 30 million queries over time.
Which of the following is the most cost-effective solution that will meet these requirements?
1. Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Generate the FIFA reports by querying the Read Replica.
3. Configure a daily job that performs a daily table cleanup.
1. Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Set up a batch job that puts reports in an S3 bucket.
3. Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.
(Correct)
1. Launch a Multi-AZ MySQL RDS instance.
2. Query the RDS instance and store the results in a DynamoDB table.
3. Generate reports from DynamoDB table.
4. Delete the old DynamoDB tables every day.
1. Launch a MySQL database in Multi-AZ RDS deployments configuration.
2. Configure the application to generate reports from ElastiCache to improve the read performance of the system.
3. Utilize the default expire parameter for items in ElastiCache.
Explanation
In this scenario, you are required to have the following:
A durable storage for the generated reports.
A database that is highly available and can scale to handle millions of queries.
A Content Delivery Network that can distribute the report files to users all over the world.
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere. It’s a simple storage service that offers industry leading durability, availability, performance, security, and virtually unlimited scalability at very low costs.
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups.
Amazon RDS uses the MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL DB engines' built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. The source DB instance becomes the primary DB instance. Updates made to the primary DB instance are asynchronously copied to the read replica.
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
Hence, the following option is the best solution that satisfies all of these requirements:
1. Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Set up a batch job that puts reports in an S3 bucket.
3. Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.
In the above, S3 provides durable storage, Multi-AZ RDS with Read Replicas provides a scalable and highly available database, and CloudFront provides the CDN.
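As a rough illustration of step 2 of the correct option, the batch job could push each generated report to S3 with boto3. This is a minimal sketch: the bucket name, key prefix, and report payload are placeholders, not values from the scenario.

```python
import boto3

REPORT_BUCKET = "fifa-statistics-reports"  # hypothetical bucket name

s3 = boto3.client("s3")

def publish_report(report_name: str, report_html: str) -> None:
    """Upload a generated report so CloudFront can serve it from the S3 origin."""
    s3.put_object(
        Bucket=REPORT_BUCKET,
        Key=f"reports/{report_name}.html",
        Body=report_html.encode("utf-8"),
        ContentType="text/html",
        CacheControl="max-age=86400",  # aligns with the daily TTL on the CloudFront distribution
    )

# Example usage after querying the RDS Read Replica:
# publish_report("top-scorers", rendered_html)
```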
The following option is incorrect:
1. Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Generate the FIFA reports by querying the Read Replica.
3. Configure a daily job that performs a daily table cleanup.
Although the database is scalable and highly available, this option has neither durable storage for the reports nor a CDN.
The following option is incorrect:
1. Launch a MySQL database in Multi-AZ RDS deployments configuration.
2. Configure the application to generate reports from ElastiCache to improve the read performance of the system.
3. Utilize the default expire parameter for items in ElastiCache.
Although this option improves the read performance of the system, it still lacks durable storage and a CDN.
The following option is incorrect:
1. Launch a Multi-AZ MySQL RDS instance.
2. Query the RDS instance and store the results in a DynamoDB table.
3. Generate reports from DynamoDB table.
4. Delete the old DynamoDB tables every day.
The above is not a cost-effective solution since it maintains both an RDS database and DynamoDB tables.
References:
Check out this Amazon RDS Cheat Sheet:
Question 47: Correct
A company has production, development, and test environments in its software development department, and each environment contains tens to hundreds of EC2 instances, along with other AWS services. Recently, Ubuntu released a series of security patches for a critical flaw that was detected in their OS. Although this is an urgent matter, there is no guarantee yet that these patches will be bug-free and production-ready; hence, the company must immediately patch all of its affected Amazon EC2 instances in all the environments, except for the production environment. The EC2 instances in the production environment will only be patched after it has been verified that the patches work effectively. Each environment also has different baseline patch requirements [-> Patch manager - Tagging EC2 based on env & OS; Create patch baseline; Patch Groups] that need to be satisfied.
Using the AWS Systems Manager service, how should you perform this task with the least amount of effort?
Schedule a maintenance period in AWS Systems Manager Maintenance Windows for each environment, where the period is after business hours so as not to affect daily operations. During the maintenance period, Systems Manager will execute a cron job that will install the required patches for each EC2 instance in each environment. After that, verify in Systems Manager Managed Instances that your environments are fully patched and compliant.
Tag each instance based on its environment and OS. Create various shell scripts for each environment that specifies which patch will serve as its baseline. Using AWS Systems Manager Run Command, place the EC2 instances into Target Groups and execute the script corresponding to each Target Group.
Tag each instance based on its OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and then apply the patches specified in the corresponding patch baseline to each Patch Group. Afterward, verify that the patches have been installed correctly using Patch Compliance. Record the changes to patch and association compliance statuses using AWS Config.
Tag each instance based on its environment and OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and apply the patches specified in the corresponding patch baseline to each Patch Group.
(Correct)
Explanation
AWS Systems Manager Patch Manager automates the process of patching managed instances with security-related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type.
Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling patching to run as a Systems Manager Maintenance Window task. You can also install patches individually or to large groups of instances by using Amazon EC2 tags. For each auto-approval rule that you create, you can specify an auto-approval delay. This delay is the number of days to wait after a patch is released before the patch is automatically approved for patching.
A patch group is an optional means of organizing instances for patching. For example, you can create patch groups for different operating systems (Linux or Windows), different environments (Development, Test, and Production), or different server functions (web servers, file servers, databases). Patch groups can help you avoid deploying patches to the wrong set of instances. They can also help you avoid deploying patches before they have been adequately tested. You create a patch group by using Amazon EC2 tags. Unlike other tagging scenarios across Systems Manager, a patch group must be defined with the tag key: Patch Group. After you create a patch group and tag instances, you can register the patch group with a patch baseline. By registering the patch group with a patch baseline, you ensure that the correct patches are installed during the patching execution.
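A minimal boto3 sketch of this flow, assuming an Ubuntu patch baseline for a hypothetical Development environment; the baseline name, tag value, and approval rule are placeholders.

```python
import boto3

ssm = boto3.client("ssm")

# Create a patch baseline for Ubuntu instances in the Development environment
baseline = ssm.create_patch_baseline(
    Name="dev-ubuntu-baseline",               # hypothetical name
    OperatingSystem="UBUNTU",
    ApprovalRules={
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "PRIORITY", "Values": ["Required", "Important"]}
                    ]
                },
                "ApproveAfterDays": 0,        # approve security patches immediately
            }
        ]
    },
    Description="Baseline for non-production Ubuntu instances",
)

# Register the baseline against the patch group; instances tagged with the
# tag key "Patch Group" and value "Development" will use this baseline.
ssm.register_patch_baseline_for_patch_group(
    BaselineId=baseline["BaselineId"],
    PatchGroup="Development",
)
```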
Hence, the correct answer is: Tag each instance based on its environment and OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and apply the patches specified in the corresponding patch baseline to each Patch Group.
The option that says: Tag each instance based on its environment and OS. Create various shell scripts for each environment that specifies which patch will serve as its baseline. Using AWS Systems Manager Run Command, place the EC2 instances into Target Groups and execute the script corresponding to each Target Group is incorrect as this option takes more effort to perform because you are using Systems Manager Run Command instead of Patch Manager. The Run Command service enables you to automate common administrative tasks and perform ad hoc configuration changes at scale; however, it takes a lot of effort to implement this solution. You can use Patch Manager instead to perform the task required by the scenario since you need to perform this task with the least amount of effort.
The option that says: Tag each instance based on its OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and then apply the patches specified in the corresponding patch baseline to each Patch Group. Afterward, verify that the patches have been installed correctly using Patch Compliance. Record the changes to patch and association compliance statuses using AWS Config is incorrect. You should tag instances based on both the environment they belong to and their OS type, not just the OS type, because the patches to be applied vary between environments. With this option, the Ubuntu EC2 instances in all of your environments, including production, will automatically be patched.
The option that says: Schedule a maintenance period in AWS Systems Manager Maintenance Windows for each environment, where the period is after business hours so as not to affect daily operations. During the maintenance period, Systems Manager will execute a cron job that will install the required patches for each EC2 instance in each environment. After that, verify in Systems Manager Managed Instances that your environments are fully patched and compliant is incorrect because this is not the simplest way to address the issue using AWS Systems Manager. The AWS Systems Manager Maintenance Windows feature lets you define a schedule for when to perform potentially disruptive actions on your instances such as patching an operating system, updating drivers, or installing software or patches. Each Maintenance Window has a schedule, a maximum duration, a set of registered targets (the instances that are acted upon), and a set of registered tasks. Although this solution may work, it entails a lot of configuration and effort to implement.
References:
Check out this AWS Systems Manager Cheat Sheet:
Question 48: Incorrect
A company has several AWS accounts that are managed using AWS Organizations. The company created only one organizational unit (OU) so all child accounts are members of the Production OU. The Solutions Architects control access to certain AWS services using SCPs that define the restricted services. The SCPs are attached at the root of the organization so that they will be applied to all AWS accounts under the organization. The company recently acquired a small business firm and its existing AWS account was invited to join the organization. Upon onboarding, the administrators of the small business firm cannot apply the required AWS Config rules to meet the parent company’s security policies.
Which of the following options will allow the administrators to update the AWS Config rules on their AWS account without introducing long-term management overhead?
Remove the SCPs on the organization’s root and apply them to the Production OU instead. Create a temporary Onboarding OU that has an attached SCP allowing changes to AWS Config. Add the new account to this temporary OU and make the required changes before moving it to Production OU.
(Correct)
Instead of using a “deny list” to AWS services on the organization’s root SCPs, use an “allow list” to allow only the required AWS services. Temporarily add the AWS Config service on the “allow list” for the principals of the new account and make the required changes.
Update the SCPs applied in the root of the AWS organization and remove the rule that restricts changes to the AWS Config service. Deploy a new AWS Service Catalog to the whole organization containing the company’s AWS Config policies.
Add the new account to a temporary Onboarding organization unit (OU) that has an attached SCP allowing changes to AWS Config. Perform the needed changes while on this temporary OU before moving the new account to Production OU.
(Incorrect)
Explanation
AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new AWS accounts and allocate resources, group accounts to organize your workflows, apply policies to accounts or groups for governance, and simplify billing by using a single payment method for all of your accounts.
With AWS Organizations, you can consolidate multiple AWS accounts into an organization that you create and centrally manage. You can create member accounts and invite existing accounts to join your organization. You can organize those accounts into groups and attach policy-based controls.
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines. SCPs are available only in an organization that has all features enabled.
An SCP restricts permissions for IAM users and roles in member accounts, including the member account's root user. Any account has only those permissions allowed by every parent above it. If a permission is blocked at any level above the account, either implicitly (by not being included in an Allow policy statement) or explicitly (by being included in a Deny policy statement), a user or role in the affected account can't use that permission, even if the account administrator attaches the AdministratorAccess IAM policy with */* permissions to the user.
AWS strongly recommends that you don't attach SCPs to the root of your organization without thoroughly testing the impact that the policy has on accounts. Instead, create an OU that you can move your accounts into one at a time, or at least in small numbers, to ensure that you don't inadvertently lock users out of key services.
Therefore, the correct answer is: Remove the SCPs on the organization’s root and apply them to the Production OU instead. Create a temporary Onboarding OU that has an attached SCP allowing changes to AWS Config. Add the new account to this temporary OU and make the required changes before moving it to Production OU. It is not recommended to attach the SCPs to the root of the organization, so it is better to move all the SCPs to the Production OU. This way, the temporary Onboarding OU can have an independent SCP to allow the required changes on AWS Config. Then, you can move the new AWS account to the Production OU.
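The onboarding workflow described above could be scripted with boto3 along these lines; the root ID, OU name, policy ID, and account ID are placeholders, not values from the scenario.

```python
import boto3

org = boto3.client("organizations")

ROOT_ID = "r-examplerootid"          # hypothetical organization root ID
NEW_ACCOUNT_ID = "111111111111"      # hypothetical invited account
CONFIG_ALLOW_SCP = "p-exampleallow"  # hypothetical SCP that allows AWS Config changes

# 1. Create the temporary Onboarding OU under the organization root
onboarding_ou = org.create_organizational_unit(
    ParentId=ROOT_ID, Name="Onboarding"
)["OrganizationalUnit"]

# 2. Attach the SCP that permits AWS Config changes to the Onboarding OU
org.attach_policy(PolicyId=CONFIG_ALLOW_SCP, TargetId=onboarding_ou["Id"])

# 3. Move the new account into the Onboarding OU so its admins can update AWS Config rules
org.move_account(
    AccountId=NEW_ACCOUNT_ID,
    SourceParentId=ROOT_ID,
    DestinationParentId=onboarding_ou["Id"],
)

# Once the Config rules are in place, move the account into the Production OU:
# org.move_account(AccountId=NEW_ACCOUNT_ID,
#                  SourceParentId=onboarding_ou["Id"],
#                  DestinationParentId="ou-prod-example")
```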
The option that says: Update the SCPs applied in the root of the AWS organization and remove the rule that restricts changes to the AWS Config service. Deploy a new AWS Service Catalog to the whole organization containing the company’s AWS Config policies is incorrect. Although AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS, this will cause possible problems in the future for the administrators. Removing the AWS Config restriction on the root of the AWS organization's SCP will allow all Admins on all AWS accounts to manage/change/update their own AWS Config rules.
The option that says: Add the new account to a temporary Onboarding organization unit (OU) that has an attached SCP allowing changes to AWS Config. Perform the needed changes while on this temporary OU before moving the new account to Production OU is incorrect. If the SCP applied on the organization's root has a "deny" permission, all OUs under the organization will inherit that rule. You cannot override an explicit "deny" permission with an explicit "allow" applied to the temporary Onboarding OU.
The option that says: Instead of using a “deny list” to AWS services on the organization’s root SCPs, use an “allow list” to allow only the required AWS services. Temporarily add the AWS Config service on the “allow list” for the principals of the new account and make the required changes is incorrect. This is possible; however, it will cause more management overhead as you will have to update the "allow list" for any service that users may require in the future.
References:
Check out these AWS Organizations and SCP Comparison Cheat Sheets:
Question 50: Correct
A tech company plans to host a website using an Amazon S3 bucket. The solutions architect created a new S3 bucket called "www.tutorialsdojo.com" in the us-west-2 AWS region, enabled static website hosting, and uploaded the static web content files, including the index.html file. The custom domain www.tutorialsdojo.com has been registered using Amazon Route 53 to be associated with the S3 bucket. The next day, a new Route 53 Alias record set was created which points to the S3 website endpoint: http://www.tutorialsdojo.com.s3-website-us-west-2.amazonaws.com. Upon testing, users cannot see any content in the bucket. Neither tutorialsdojo.com nor www.tutorialsdojo.com works properly.
Which of the following is the MOST likely cause of this issue that the Architect should fix?
Route 53 is still propagating the domain name changes. Wait for another 12 hours and then try again.
The site does not work because you have not set a value for the error.html file, which is a required step.
The S3 bucket does not have public read access, which blocks the website visitors from seeing the content.
(Correct)
The site will not work because the URL does not include a file name at the end. This means that you need to use this URL instead:
www.tutorialsdojo.com/index.html
Explanation
You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts. To host a static website, you configure an Amazon S3 bucket for website hosting, and then upload your website content to the bucket. This bucket must have public read access. It is intentional that everyone in the world will have read access to this bucket.
When you configure an Amazon S3 bucket for website hosting, you must give the bucket the same name as the record that you want to use to route traffic to the bucket. For example, if you want to route traffic for example.com to an S3 bucket that is configured for website hosting, the name of the bucket must be example.com.
If you want to route traffic to an S3 bucket that is configured for website hosting but the name of the bucket doesn't appear in the Alias Target list in the Amazon Route 53 console, check the following:
- The name of the bucket exactly matches the name of the record, such as tutorialsdojo.com or www.tutorialsdojo.com.
- The S3 bucket is correctly configured for website hosting.
In this scenario, the static S3 website does not work because the bucket does not have public read access.
Therefore, the correct answer is: The S3 bucket does not have public read access which blocks the website visitors from seeing the content. This is the root cause why the static S3 website is inaccessible.
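For reference, granting public read access to a static website bucket typically means attaching a bucket policy like the sketch below (and relaxing S3 Block Public Access). The bucket name matches the scenario, but treat this as an assumption-laden illustration rather than the exam's official fix.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "www.tutorialsdojo.com"

# Block Public Access must not override the policy for a public website bucket
s3.delete_public_access_block(Bucket=BUCKET)

# Allow anonymous GetObject on every object in the bucket
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(public_read_policy))
```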
The option that says: The site does not work because you have not set a value for the error.html file, which is a required step is incorrect as the error.html is not required and won't affect the availability of the static S3 website.
The option that says: The site will not work because the URL does not include a file name at the end. This means that you need to use this URL instead: www.tutorialsdojo.com/index.html is incorrect as it is not required to manually append the exact filename in S3.
The option that says: Route 53 is still propagating the domain name changes. Wait for another 12 hours and then try again is incorrect as the Route 53 domain name propagation does not take that long. Remember that Amazon Route 53 is designed to propagate updates you make to your DNS records to its worldwide network of authoritative DNS servers within 60 seconds under normal conditions.
References:
Check out this Amazon S3 Cheat Sheet:
Question 52: Correct
A company currently hosts its online immigration system [-> Regional] on one large Amazon EC2 instance with attached EBS volumes to store all of the applicants' data. The registration system accepts the information from the user including documents and photos and then performs automated verification and processing to check if the applicant is eligible for immigration. The immigration system becomes unavailable at times when there is a surge of applicants using the system. The existing architecture needs improvement as it takes a long time for the system to complete the processing and the attached EBS volumes are not enough to store the ever-growing data being uploaded by the users. [-> EFS; EC2; ASG; ALB]
Which of the following options is the recommended option to achieve high availability and more scalable data storage?
Use SNS to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue.
Upgrade your architecture to use an S3 bucket with cross-region replication (CRR) enabled, as the storage service. Set up an SQS queue to distribute the tasks to a group of EC2 instances with Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue. Use CloudFormation to replicate your architecture to another region.
(Correct)
Upgrade to EBS with Provisioned IOPS as your main storage service and change your architecture to use an SQS queue to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue.
Use EBS with Provisioned IOPS to store files, SNS to distribute tasks to a group of EC2 instances working in parallel, and Auto Scaling to dynamically size the group of EC2 instances depending on the number of SNS notifications. Use CloudFormation to replicate your architecture to another region.
Explanation
In this scenario, you need to overhaul the existing immigration service to upgrade its storage and computing capacity. Since EBS Volumes can only provide limited storage capacity and are not scalable, you should use S3 instead. The system goes down at times when there is a surge of requests which indicates that the existing large EC2 instance could not handle the requests any longer. In this case, you should implement a highly-available architecture and a queueing system with SQS and Auto Scaling.
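One way to wire up the queue-length-based scaling mentioned above is a CloudWatch alarm on the queue depth that triggers a simple scaling policy. The group name, queue name, and threshold below are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

ASG_NAME = "immigration-workers"     # hypothetical Auto Scaling group
QUEUE_NAME = "applicant-tasks"       # hypothetical SQS queue

# Simple scaling policy: add two instances when the alarm fires
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-out-on-queue-depth",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Alarm on the number of visible messages waiting in the queue
cloudwatch.put_metric_alarm(
    AlarmName="applicant-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": QUEUE_NAME}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,                    # hypothetical backlog threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```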
The option that says: Upgrade your architecture to use an S3 bucket with cross-region replication (CRR) enabled, as the storage service. Set up an SQS queue to distribute the tasks to a group of EC2 instances with Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue. Use CloudFormation to replicate your architecture to another region is correct. This option provides high availability and scalable data storage with S3. Auto-scaling of EC2 instances reduces the overall processing time and SQS helps in distributing the tasks to a group of EC2 instances.
The option that says: Use EBS with Provisioned IOPS to store files, SNS to distribute tasks to a group of EC2 instances working in parallel, and Auto Scaling to dynamically size the group of EC2 instances depending on the number of SNS notifications. Use CloudFormation to replicate your architecture to another region is incorrect because EBS is not an easily scalable and durable storage solution compared to Amazon S3. Using SQS is more suitable in distributing the tasks to an Auto Scaling group of EC2 instances and not SNS.
The option that says: Use SNS to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue is incorrect because SNS is not a valid choice in this scenario. Using SQS is more suitable in distributing the tasks to an Auto Scaling group of EC2 instances and not SNS.
The option that says: Upgrade to EBS with Provisioned IOPS as your main storage service and change your architecture to use an SQS queue to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue is incorrect. Having a large EBS volume attached to each of the EC2 instances in the Auto Scaling group is not economical, and it will be hard to sync the growing data across these EBS volumes. You should use S3 instead.
References:
Check out this Amazon S3 Cheat Sheet:
Question 56: Correct
A company has launched a company-wide bug bounty program to find and patch security vulnerabilities in its web applications as well as the underlying cloud resources. As the solutions architect, you are focused on checking system vulnerabilities on AWS resources for DDoS attacks [-> Shield advanced]. Due to budget constraints, the company cannot afford to enable AWS Shield Advanced to prevent higher-level attacks. [-> WAF? NACL? ALB + WAF; CloudFront; CloudWatch alerts for CPUUtilization & NetworkIn metrics]
Which of the following are the best techniques to help mitigate Distributed Denial of Service (DDoS) attacks for cloud infrastructure hosted in AWS? (Select TWO.)
Use Reserved EC2 instances to ensure that each instance has the maximum performance possible. Use AWS WAF to protect your web applications from common web exploits that could affect application availability.
Use an Application Load Balancer (ALB) to reduce the risk of overloading your application by distributing traffic across many backend instances. Integrate AWS WAF and the ALB to protect your web applications from common web exploits that could affect application availability.
(Correct)
Use S3 as a POSIX-compliant storage instead of EBS Volumes for storing data. Install the SSM agent to all of your instances and use AWS Systems Manager Patch Manager to automatically patch your instances.
Use an Amazon CloudFront distribution for both static and dynamic content of your web applications. Add CloudWatch alerts to automatically monitor and notify the Operations team about high CPUUtilization and NetworkIn metrics, as well as to trigger Auto Scaling of your EC2 instances.
(Correct)
Add multiple Elastic Network Interfaces to each EC2 instance and use Enhanced Networking to increase the network bandwidth.
Explanation
The following options are the correct answers in this scenario as they can help mitigate the effects of DDoS attacks:
- Use an Amazon CloudFront distribution for both static and dynamic content of your web applications. Add CloudWatch alerts to automatically monitor and notify the Operations team about high CPUUtilization and NetworkIn metrics, as well as to trigger Auto Scaling of your EC2 instances.
- Use an Application Load Balancer (ALB) to reduce the risk of overloading your application by distributing traffic across many backend instances. Integrate AWS WAF and the ALB to protect your web applications from common web exploits that could affect application availability.
Amazon CloudFront is a content delivery network (CDN) service that can be used to deliver your entire website, including static, dynamic, streaming, and interactive content. Persistent TCP connections and variable time-to-live (TTL) can be used to accelerate delivery of content, even if it cannot be cached at an edge location. This allows you to use Amazon CloudFront to protect your web application, even if you are not serving static content. Amazon CloudFront only accepts well-formed connections to prevent many common DDoS attacks like SYN floods and UDP reflection attacks from reaching your origin.
Larger DDoS attacks can exceed the size of a single Amazon EC2 instance. To mitigate these attacks, you will want to consider options for load balancing excess traffic. With Elastic Load Balancing (ELB), you can reduce the risk of overloading your application by distributing traffic across many backend instances. ELB can scale automatically, allowing you to manage larger volumes of unanticipated traffic, like flash crowds or DDoS attacks.
Another way to deal with application layer attacks is to operate at scale. In the case of web applications, you can use ELB to distribute traffic to many Amazon EC2 instances that are overprovisioned or configured to auto scale for the purpose of serving surges of traffic, whether it is the result of a flash crowd or an application layer DDoS attack. Amazon CloudWatch alarms are used to initiate Auto Scaling, which automatically scales the size of your Amazon EC2 fleet in response to events that you define. This protects application availability even when dealing with an unexpected volume of requests.
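Integrating AWS WAF with the ALB, as the correct option describes, comes down to associating a web ACL with the load balancer. A minimal sketch with placeholder ARNs and Region is below.

```python
import boto3

# AWS WAF (WAFv2) must be called in the same Region as the ALB
wafv2 = boto3.client("wafv2", region_name="us-east-1")  # hypothetical Region

WEB_ACL_ARN = "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/app-acl/EXAMPLE"             # placeholder
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/EXAMPLE"  # placeholder

# Attach the web ACL so its rules inspect traffic reaching the ALB
wafv2.associate_web_acl(WebACLArn=WEB_ACL_ARN, ResourceArn=ALB_ARN)
```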
The option that says: Use S3 as a POSIX-compliant storage instead of EBS Volumes for storing data. Install the SSM agent to all of your instances and use AWS Systems Manager Patch Manager to automatically patch your instances is incorrect because using S3 instead of EBS Volumes mainly addresses the scalability of your storage requirements, not the prevention of DDoS attacks. In addition, Amazon S3 is not a POSIX-compliant storage.
The option that says: Add multiple Elastic Network Interfaces to each EC2 instance and use Enhanced Networking to increase the network bandwidth is incorrect. Even if you add multiple ENIs and are using Enhanced Networking to increase the network throughput of the instances, the CPU of the instance will be saturated with the DDoS requests which will cause the application to be unresponsive.
The option that says: Use Reserved EC2 instances to ensure that each instance has the maximum performance possible. Use AWS WAF to protect your web applications from common web exploits that could affect application availability is incorrect because using Reserved EC2 instances does not provide any additional computing performance compared to other EC2 types.
References:
Check out this Amazon CloudFront Cheat Sheet:
Best practices on DDoS Attack Mitigation:
Question 57: Correct
A multinational financial firm plans to do a multi-regional deployment of its cryptocurrency trading application that’s being heavily used in the US and in Europe. The containerized application uses Kubernetes and has Amazon DynamoDB Global Tables as a centralized database to store and sync the data from two regions.
The architecture has distributed computing resources with several public-facing Application Load Balancers (ALBs). The Network team of the firm manages the public DNS internally and wishes to make the application available through an apex domain for easier access. S3 Multi-Region Access Points are also used for object storage workloads and hosting static assets.
Which is the MOST operationally efficient solution that the Solutions Architect should implement to meet the above requirements?
Set up an AWS Transit Gateway with a multicast domain that targets specific ALBs on the required AWS Regions. Create a public record in Amazon Route 53 using the static IP address of the AWS Transit Gateway.
Set up an AWS Global Accelerator, which has several endpoint groups that target specific endpoints and ALBs on the required AWS Regions. Create a public alias record in Amazon Route 53 that points your custom domain name to the DNS name assigned to your accelerator.
(Correct)
Launch an AWS Global Accelerator with several endpoint groups that target the ALBs in all the relevant AWS Regions. Create an Amazon Route 53 Resolver Inbound Endpoint that points your custom domain name to the CNAME assigned to your accelerator.
Launch an AWS Transit Gateway that targets specific ALBs on the required AWS Regions. Create a CNAME record in Amazon Route 53 that directly points your custom domain name to the DNS name assigned to the AWS Transit Gateway.
Explanation
AWS Global Accelerator is a service in which you create accelerators to improve the performance of your applications for local and global users. Depending on the type of accelerator you choose, you can gain additional benefits:
-With a standard accelerator, you can improve the availability of your internet applications that are used by a global audience. With a standard accelerator, Global Accelerator directs traffic over the AWS global network to endpoints in the nearest Region to the client.
-With a custom routing accelerator, you can map one or more users to a specific destination among many destinations.
For standard accelerators, Global Accelerator uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and policies that you configure, which increases the availability of your applications. Endpoints for standard accelerators can be Network Load Balancers, Application Load Balancers, Amazon EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions. The service reacts instantly to changes in health or configuration to ensure that internet traffic from clients is always directed to healthy endpoints.
Custom routing accelerators only support virtual private cloud (VPC) subnet endpoint types and route traffic to private IP addresses in that subnet.
In most scenarios, you can configure DNS to use your custom domain name (such as www.tutorialsdojo.com) with your accelerator instead of using the assigned static IP addresses or the default DNS name. [1] First, using Amazon Route 53 or another DNS provider, create a domain name, and then [2] add or update DNS records with your Global Accelerator IP addresses. Or you can associate your custom domain name with the DNS name for your accelerator. Complete the DNS configuration and wait for the changes to propagate over the internet. Now when a client makes a request using your custom domain name, the DNS server resolves it to the IP addresses in random order or to the DNS name for your accelerator.
[Q] To use your custom domain name with Global Accelerator when you use Route 53 as your DNS service, you [A] create an alias record that points your custom domain name to the DNS name assigned to your accelerator. An alias record is a Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root domain, such as example.com, and for subdomains, such as www.tutorialsdojo.com.
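A sketch of the apex alias record described above, via boto3. The hosted zone ID of the domain's own zone and the accelerator DNS name are placeholders, and Z2BJ6XQ5FK7U4H is used on the assumption that it is Global Accelerator's fixed alias hosted zone ID.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z3EXAMPLE"                                    # hypothetical zone for tutorialsdojo.com
ACCELERATOR_DNS = "a1234567890abcdef.awsglobalaccelerator.com"  # hypothetical accelerator DNS name

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "tutorialsdojo.com",   # apex (root) domain
                    "Type": "A",
                    "AliasTarget": {
                        # Assumed fixed hosted zone ID for Global Accelerator aliases
                        "HostedZoneId": "Z2BJ6XQ5FK7U4H",
                        "DNSName": ACCELERATOR_DNS,
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)
```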
Hence, the correct answer is: Set up an AWS Global Accelerator, which has several endpoint groups that target specific endpoints and ALBs on the required AWS Regions. Create a public alias record in Amazon Route 53 that points your custom domain name to the DNS name assigned to your accelerator.
The option that says: Set up an AWS Transit Gateway with a multicast domain that targets specific ALBs on the required AWS Regions. Create a public record in Amazon Route 53 using the static IP address of the AWS Transit Gateway is incorrect because an AWS Transit Gateway is not meant to be used to distribute traffic to multiple ALBs. In addition, a multicast domain is primarily used in delivering a single stream of data to multiple receiving computers simultaneously. Transit Gateway supports routing multicast traffic between subnets of attached VPCs and not ALBs. You have to use an AWS Global Accelerator instead since you can configure an accelerator to have several endpoint groups that point to multiple ALBs.
The option that says: Launch an AWS Transit Gateway that targets specific ALBs on the required AWS Regions. Create a CNAME record in Amazon Route 53 that directly points your custom domain name to the DNS name assigned to the AWS Transit Gateway is incorrect. As mentioned in the above rationale, an AWS Transit Gateway is not suitable to be used to integrate all the ALBs. Creating a CNAME record is also not right since the scenario explicitly mentioned that you have to use the apex domain. Remember that a CNAME record cannot be used for apex domain configuration in Amazon Route 53. You have to create a public alias record instead.
The option that says: Launch an AWS Global Accelerator with several endpoint groups that target the ALBs in all the relevant AWS Regions. Create an Amazon Route 53 Resolver Inbound Endpoint that points your custom domain name to the CNAME assigned to your accelerator is incorrect. Although the use of AWS Global Accelerator is a valid solution, creating a Route53 Resolver Inbound Endpoint is irrelevant. An Inbound Resolver endpoint simply allows DNS queries to your Amazon VPCs from your on-premises network or another VPC.
References:
Check out this AWS Global Accelerator Cheat Sheet:
Question 58: Incorrect
A media company processes and converts its video collection using the AWS Cloud. The videos are processed by an Auto Scaling group of Amazon EC2 instances which scales based on the number of videos on the Amazon Simple Queue Service (SQS) queue. Each video takes about 20-40 minutes to be processed.
To ensure videos are processed, the management has set a redrive policy on the SQS queue to be used as a dead-letter queue. The visibility timeout has been set to 1 hour and the maxReceiveCount has been set to 1. When there are messages on the dead-letter queue, an Amazon CloudWatch alarm has been set up to notify the development team.
Within a few days of operation, the dead-letter queue received several videos that failed to process. The development team received notifications of messages on the dead-letter queue, but they did not find any operational errors in the application logs.
Which of the following options should the solutions architect implement to help solve the above problem?
Configure a higher delivery delay setting on the Amazon SQS queue. This will give the consumers more time to pick up the messages on the SQS queue.
Reconfigure the SQS redrive policy and set maxReceiveCount to 10. This will allow the consumers to retry the messages before sending them to the dead-letter queue.
(Correct)
The videos were not processed because the Amazon EC2 scale-up process takes too long. Set a minimum number of EC2 instances on the Auto Scaling group to solve this.
Some of the videos took longer than 1 hour to process. Update the visibility timeout for the Amazon SQS queue to 2 hours to solve this problem.
(Incorrect)
Explanation
Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate unconsumed messages to determine why their processing doesn't succeed.
Occasionally, producers and consumers might fail to interpret aspects of the protocol that they use to communicate, causing message corruption or loss. Also, the consumer's hardware errors might corrupt message payload. If a message can't be consumed successfully, you can send it to a dead-letter queue (DLQ). Dead-letter queues let you isolate problematic messages to determine why they are failing.
The [A] Maximum receives value determines [Q] when a message will be sent to the DLQ. [A] If the ReceiveCount for a message exceeds the maximum receive count for the queue, Amazon SQS moves the message to the associated DLQ (with its original message ID).
As you redrive your messages, the [Q] redrive status will [A] show you the most recent message redrive status for your dead-letter queue.
The [Q] redrive policy [A] specifies the source queue, the dead-letter queue, and the conditions under which Amazon SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times. The [Q] maxReceiveCount is [A] the number of times a consumer tries receiving a message from a queue without deleting it before being moved to the dead-letter queue. [Q] Setting the maxReceiveCount to a low value, such as 1 would [A] result in any failure to receive a message to cause the message to be moved to the dead-letter queue. Such failures include network errors and client dependency errors.
The [Q] redrive allow policy [A] specifies which source queues can access the dead-letter queue. This policy applies to a potential dead-letter queue. You can choose whether to allow all source queues, allow specific source queues, or deny all source queues. The default is to allow all source queues to use the dead-letter queue.
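The fix chosen in this question maps to a single SetQueueAttributes call; the queue URL and DLQ ARN below are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs"  # placeholder
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:video-jobs-dlq"              # placeholder

sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={
        # Keep the 1-hour visibility timeout, but allow up to 10 receive attempts
        # before a message is moved to the dead-letter queue.
        "VisibilityTimeout": "3600",
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": DLQ_ARN, "maxReceiveCount": "10"}
        ),
    },
)
```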
Therefore, the correct answer is: Reconfigure the SQS redrive policy and set maxReceiveCount to 10. This will allow the consumers to retry the messages before sending them to the dead-letter queue. This setting ensures that any message that failed to be processed will be sent back to the queue to be picked up by other consumers and re-processed.
The option that says: The videos were not processed because the Amazon EC2 scale-up process takes too long. Set a minimum number of EC2 instances on the Auto Scaling group to solve this is incorrect. The Auto Scaling group responds to the number of messages on the queue, setting a fixed minimum number of instances is not cost-effective when there are no messages on the SQS queue.
The option that says: Some of the videos took longer than 1 hour to process. Update the visibility timeout for the Amazon SQS queue to 2 hours to solve this problem is incorrect. Even though the visibility timeout is set for longer, the videos that failed to process should be retried again by other consumers. That's why a higher maxReceiveCount setting is a much better option.
The option that says: Configure a higher delivery delay setting on the Amazon SQS queue. This will give the consumers more time to pick up the messages on the SQS queue is incorrect. This setting does not affect the videos that were already on the queue, picked up for processing, but failed to process completely.
References:
Check out these Amazon SQS and AWS Auto Scaling Cheat Sheets:
Question 59: Correct
A company runs several clusters of Amazon EC2 instances in AWS. An unusual API activity and port scanning in the VPC have been identified by the security team. They noticed that there are multiple port scans being triggered to the EC2 instances from a specific IP address. To fix the issue immediately, the solutions architect has decided to simply block the offending IP address. The solutions architect is also instructed to fortify their existing cloud infrastructure security from the most frequently occurring network [WAF Rules? NACL ] and transport layer DDoS attacks. [-> AWS advanced shield]
Which of the following is the most suitable method to satisfy the above requirement in AWS?
Deny access from the IP Address block by adding a specific rule to all of the Security Groups. Use a combination of AWS WAF and AWS Config to protect your cloud resources against common web attacks.
Block the offending IP address using Route 53. Use Amazon Macie to automatically discover, classify, and protect sensitive data in AWS, including DDoS attacks.
Change the Windows Firewall settings to deny access from the IP address block. Use Amazon GuardDuty to detect potentially compromised instances or reconnaissance by attackers, and AWS Systems Manager Patch Manager to properly apply the latest security patches to all of your instances.
Deny access from the IP Address block in the Network ACL. Use AWS Shield Advanced to protect your cloud resources.
(Correct)
Explanation
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced.
All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic. Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
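Blocking the offending address at the subnet boundary could look like the sketch below; the network ACL ID, rule number, and IP address are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0example",        # placeholder network ACL attached to the affected subnets
    RuleNumber=50,                      # evaluated before the allow rules with higher numbers
    Protocol="-1",                      # all protocols
    RuleAction="deny",
    Egress=False,                       # inbound rule
    CidrBlock="203.0.113.25/32",        # placeholder offending IP address
)
```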
Therefore the correct answer is: Deny access from the IP Address block in the Network ACL. Use AWS Shield Advanced to protect your cloud resources.
The option that says: Block the offending IP address using Route 53. Use Amazon Macie to automatically discover, classify, and protect sensitive data in AWS, including DDoS attacks is incorrect because Amazon Macie is just a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. It does not provide security against DDoS attacks. In addition, you cannot block the offending IP address using Route 53. You should use Network ACL for this scenario.
The option that says: Change the Windows Firewall settings to deny access from the IP address block. Use Amazon GuardDuty to detect potentially compromised instances or reconnaissance by attackers, and AWS Systems Manager Patch Manager to properly apply the latest security patches to all of your instances is incorrect. You have to use Network ACL to block the specific IP address to your network and not just change the firewall of your Windows server. Amazon GuardDuty and AWS Systems Manager Patch Manager are not suitable to fortify your AWS Cloud against DDoS attacks.
The option that says: Deny access from the IP Address block by adding a specific rule to all of the Security Groups. Use a combination of AWS WAF and AWS Config to protect your cloud resources against common web attacks is incorrect because it is still better to block the offending IP address on the Network ACL level as you cannot directly deny an IP address in your Security Group. AWS WAF and AWS Config are helpful to improve the security of your cloud infrastructure in AWS but these services are not enough to protect your infrastructure against DDoS attacks. You have to use AWS Shield Advanced in this scenario.
References:
Check out these AWS WAF and AWS Shield Cheat Sheets:
Question 64: Correct
A company is hosting a multi-tier web application in AWS. It is composed of an Application Load Balancer and EC2 instances across three Availability Zones. During peak load, its stateless web servers operate at 95% utilization. The system is set up to use Reserved Instances to handle the steady-state load and On-Demand Instances to handle the peak load. Your manager instructed you to review the current architecture and make the necessary changes to improve the system.
Which of the following provides the most cost-effective architecture to allow the application to recover quickly in the event that an Availability Zone is unavailable during peak load?
Launch a Spot Fleet using a diversified allocation strategy, with Auto Scaling enabled on each AZ to handle the peak load instead of On-Demand instances. Retain the current setup for handling the steady state load.
(Correct)
Launch an Auto Scaling group of Reserved instances on each AZ to handle the peak load. Retain the current setup for handling the steady state load.
Use a combination of Reserved and On-Demand instances on each AZ to handle both the steady state and peak load.
Use a combination of Spot and On-Demand instances on each AZ to handle both the steady state and peak load.
Explanation
The scenario requires a cost-effective architecture to allow the application to recover quickly; hence, using an Auto Scaling group is a must to handle the peak load and improve both the availability and scalability of the application.
Setting up a diversified allocation strategy for your Spot Fleet is a best practice to increase the chances that a spot request can be fulfilled by EC2 capacity in the event of an outage in one of the Availability Zones. You can include each AZ available to you in the launch specification. And instead of using the same subnet each time, use three unique subnets (each mapping to a different AZ).
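A trimmed-down RequestSpotFleet call using the diversified strategy might look like this; the AMI, fleet role, subnets, instance type, and capacity are all placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",  # placeholder
        "AllocationStrategy": "diversified",   # spread capacity across the listed pools/AZs
        "TargetCapacity": 6,
        "LaunchSpecifications": [
            # One launch specification per AZ, each with its own subnet
            {"ImageId": "ami-0example", "InstanceType": "m5.large", "SubnetId": "subnet-az-a"},
            {"ImageId": "ami-0example", "InstanceType": "m5.large", "SubnetId": "subnet-az-b"},
            {"ImageId": "ami-0example", "InstanceType": "m5.large", "SubnetId": "subnet-az-c"},
        ],
    }
)
```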
Therefore the correct answer is: Launch a Spot Fleet using a diversified allocation strategy, with Auto Scaling enabled on each AZ to handle the peak load instead of On-Demand instances. Retain the current setup for handling the steady state load. The Spot instances are the most cost-effective for handling the temporary peak loads of the application.
The option that says: Launch an Auto Scaling group of Reserved instances on each AZ to handle the peak load. Retain the current setup for handling the steady state load is incorrect. Even though it uses Auto Scaling, Reserved Instances cost more than Spot instances so it is more suitable to use the latter to handle the peak load.
The following options are incorrect because they did not mention the use of Auto Scaling Groups, which is a requirement for this architecture:
- Use a combination of Spot and On-Demand instances on each AZ to handle both the steady state and peak load.
- Use a combination of Reserved and On-Demand instances on each AZ to handle both the steady state and peak load.
References:
Check out this AWS Billing and Cost Management Cheat Sheet:
Question 65: Correct
A company has several development teams using AWS CodeCommit to store their source code. With the number of code updates every day, the management is having difficulty tracking if the developers are adhering to company security policies. On a recent audit, the security team found several IAM access keys and secret keys in the CodeCommit repository. This is a big security risk so the company wants to have an automated solution that will scan the CodeCommit repositories for committed IAM credentials and delete/disable the IAM keys for those users.
Which of the following options will meet the company requirements?
Using a development instance, use the AWS Systems Manager Run Command to scan the AWS CodeCommit repository for IAM credentials on a daily basis. If credentials are found, rotate them using AWS Secrets Manager. Notify the user of the violation.
Scan the CodeCommit repositories for IAM credentials using Amazon Macie. Using machine learning, Amazon Macie can scan your repository for security violations. If violations are found, invoke an AWS Lambda function to notify the user and delete the IAM keys.
Download and scan the source code from AWS CodeCommit using a custom AWS Lambda function. Schedule this Lambda function to run daily. If credentials are found, notify the user of the violation, generate new IAM credentials and store them in AWS KMS for encryption.
Write a custom AWS Lambda function to search for credentials on new code submissions. Set the function trigger as AWS CodeCommit push events. If credentials are found, notify the user of the violation, and disable the IAM keys.
(Correct)
Explanation
AWS CodeCommit is a version control service hosted by Amazon Web Services that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud.
You can configure a CodeCommit repository so that code pushes or other events trigger actions, such as sending a notification from Amazon Simple Notification Service (Amazon SNS) or invoking a function in AWS Lambda. You can create up to 10 triggers for each CodeCommit repository.
Triggers are commonly configured to:
- Send emails to subscribed users every time someone pushes to the repository.
- Notify an external build system to start a build after someone pushes to the main branch of the repository.
Scenarios like notifying an external build system require writing a Lambda function to interact with other applications. The email scenario simply requires creating an Amazon SNS topic. You can create a trigger for a CodeCommit repository so that events in that repository trigger notifications from an Amazon Simple Notification Service (Amazon SNS) topic.
You can also create an AWS Lambda trigger for a CodeCommit repository so that events in the repository invoke a Lambda function. For example, you can create a Lambda function that will scan the CodeCommit code submissions for IAM credentials, and then send out notifications or perform corrective actions.
When you use the Lambda console to create the function, you can create a CodeCommit trigger for the Lambda function, such as a trigger for all push events.
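A stripped-down sketch of such a Lambda function is shown below. The regex, event parsing, and key-disabling logic are assumptions about one reasonable implementation, not the exam's reference code.

```python
import re
import boto3

codecommit = boto3.client("codecommit")
iam = boto3.client("iam")

ACCESS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")  # matches IAM access key IDs

def handler(event, context):
    record = event["Records"][0]
    repo_name = record["eventSourceARN"].split(":")[5]
    commit_id = record["codecommit"]["references"][0]["commit"]

    # Inspect the files touched by the pushed commit
    diffs = codecommit.get_differences(
        repositoryName=repo_name, afterCommitSpecifier=commit_id
    )["differences"]

    for diff in diffs:
        blob = diff.get("afterBlob")
        if not blob:
            continue  # file was deleted in this commit
        content = codecommit.get_blob(
            repositoryName=repo_name, blobId=blob["blobId"]
        )["content"].decode("utf-8", errors="ignore")

        for access_key_id in set(ACCESS_KEY_PATTERN.findall(content)):
            # Look up the owning user and deactivate the leaked key
            user = iam.get_access_key_last_used(AccessKeyId=access_key_id)["UserName"]
            iam.update_access_key(
                UserName=user, AccessKeyId=access_key_id, Status="Inactive"
            )
            print(f"Disabled leaked key {access_key_id} for {user} "
                  f"found in {blob['path']} of {repo_name}")
```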
Therefore, the correct answer is: Write a custom AWS Lambda function to search for credentials on new code submissions. Set the function trigger as AWS CodeCommit push events. If credentials are found, notify the user of the violation, and disable the IAM keys.
The option that says: Using a development instance, use the AWS Systems Manager Run Command to scan the AWS CodeCommit repository for IAM credentials on a daily basis. If credentials are found, rotate them using AWS Secrets Manager. Notify the user of the violation is incorrect. You cannot rotate IAM keys on AWS Secrets Manager. Using the Run Command on a development instance just for scanning the repository is costly. It is cheaper to just write your own Lambda function to do the scanning.
The option that says: Download and scan the source code from AWS CodeCommit using a custom AWS Lambda function. Schedule this Lambda function to run daily. If credentials are found, notify the user of the violation, generate new IAM credentials and store them in AWS KMS for encryption is incorrect. You store encryption keys on AWS KMS, not IAM keys.
The option that says: Scan the CodeCommit repositories for IAM credentials using Amazon Macie. Using machine learning, Amazon Macie can scan your repository for security violations. If violations are found, invoke an AWS Lambda function to notify the user and delete the IAM keys is incorrect. Amazon Macie is designed to use machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie primarily scans Amazon S3 buckets for data security and data privacy.
References:
Check out the AWS CodeCommit Cheat Sheet:
Question 66: Incorrect
A company hosts its multi-tiered web application on a fleet of Auto Scaling EC2 instances spread across two Availability Zones. The Application Load Balancer is in the public subnets and the Amazon EC2 instances are in the private subnets. After a few weeks of operations, the users are reporting that the web application is not working properly. Upon testing, the Solutions Architect found that the website is accessible and the login is successful. However, when the “find a nearby store” function is clicked on the website, the map loads only about 50% of the time when the page is refreshed. This function involves a third-party RESTful API call to a maps provider. Amazon EC2 NAT instances are used for these outbound API calls.
Which of the following options is the MOST likely reason for this failure and the recommended solution?
One of the subnets in the VPC has a misconfigured Network ACL that blocks outbound traffic to the third-party provider. Update the network ACL to allow this connection and configure IAM permissions to restrict these changes in the future.
The error is caused by a failure in one of their availability zones in the VPC of the third-party provider. Contact the third-party provider support hotline and request for them to fix it.
This error is caused by a failed NAT instance in one of the public subnets. Use NAT Gateways instead of EC2 NAT instances to ensure availability and scalability.
(Correct)
This error is caused by an overloaded NAT instance in one of the subnets. Scale the EC2 NAT instances to larger-sized instances to ensure that they can handle the growing traffic.
(Incorrect)
Explanation
You can use a NAT device to enable instances in a private subnet to connect to the Internet (for example, for software updates) or other AWS services, but prevent the Internet from initiating connections with the instances. A NAT device forwards traffic from the instances in the private subnet to the Internet or other AWS services, and then sends the response back to the instances. When traffic goes to the Internet, the source IPv4 address is replaced with the NAT device’s address, and similarly, when the response traffic goes to those instances, the NAT device translates the address back to those instances’ private IPv4 addresses.
You can either use a managed NAT device offered by AWS, called a NAT gateway, or create your own NAT device on an EC2 instance, referred to here as a NAT instance. A NAT instance's bandwidth depends on its instance size: larger instance sizes provide higher bandwidth. NAT instances are managed by the customer, so if an instance goes down there can be an impact on the availability of your application; you have to detect and fix failed NAT instances yourself.
AWS recommends NAT gateways because they provide better availability and bandwidth than NAT instances. The NAT gateway is a managed service that does not require administration effort on your part. NAT gateways are highly available: in each Availability Zone, they are implemented with redundancy, and because AWS manages them, you do not have to perform maintenance or monitor whether they are up. NAT gateways also scale bandwidth automatically, so you do not have to choose instance types.
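As a rough boto3 sketch (the subnet and route table IDs are placeholders), provisioning one NAT gateway per Availability Zone and pointing each private route table at the gateway in its own AZ removes the single point of failure:

```python
import boto3

ec2 = boto3.client("ec2")

# One public subnet and one private route table per AZ (placeholder IDs).
az_layout = [
    {"public_subnet": "subnet-0aaa1111", "private_route_table": "rtb-0aaa1111"},
    {"public_subnet": "subnet-0bbb2222", "private_route_table": "rtb-0bbb2222"},
]

for az in az_layout:
    # Allocate an Elastic IP and create a NAT gateway in the AZ's public subnet.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId=az["public_subnet"], AllocationId=eip["AllocationId"])
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Send the private subnet's Internet-bound traffic through the NAT gateway in the same AZ.
    # If a default route already exists (for example, pointing at the old NAT instance),
    # use replace_route instead of create_route.
    ec2.create_route(
        RouteTableId=az["private_route_table"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```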
The AWS documentation provides a full feature comparison of NAT gateways and NAT instances.
The correct answer is: This error is caused by a failed NAT instance in one of the public subnets. Use NAT Gateways instead of EC2 NAT instances to ensure availability and scalability. This is the most likely cause: the scenario spans two Availability Zones, and each EC2 NAT instance serves only its own AZ. With roughly 50% of requests failing, the NAT instance in one of the AZs has most likely gone down, and AWS does not automatically recover failed NAT instances. AWS recommends NAT gateways because they provide better availability and bandwidth. Even though a NAT gateway is deployed in a single AZ, AWS implements redundancy within that AZ to keep it available.
The option that says: One of the subnets in the VPC has a misconfigured Network ACL that blocks outbound traffic to the third-party provider. Update the network ACL to allow this connection and configure IAM permissions to restrict these changes in the future is incorrect. A network ACL applies to every subnet associated with it, so a misconfigured outbound rule would affect those subnets consistently and would typically cause the requests to the third-party provider to fail outright rather than only half of the time.
The option that says: The error is caused by a failure in one of their availability zones in the VPC of the third-party provider. Contact the third-party provider support hotline and request for them to fix it is incorrect. If one of the third-party provider's Availability Zones had failed, the provider would stop routing traffic to that AZ, so this intermittent failure is most likely caused by a local issue in your own VPC.
The option that says: This error is caused by an overloaded NAT instance in one of the subnets. Scale the EC2 NAT instances to larger-sized instances to ensure that they can handle the growing traffic is incorrect. If the NAT instances were merely overloaded, you would see inconsistent performance or slow responses for the third-party requests, and the failures would subside during off-peak hours. A steady failure rate of about 50% of requests points to one of the NAT instances being down.
References:
Check out this Amazon VPC Cheat Sheet:
Question 69: Incorrect
An international foreign exchange company has a serverless forex trading application that was built using AWS SAM and is hosted on AWS Serverless Application Repository. They have millions of users worldwide who use their online portal 24/7 to trade currencies. However, they are receiving a lot of complaints that it takes a few minutes for their users to log in to their portal lately, including occasional HTTP 504 errors. As the Solutions Architect, you are tasked to optimize the system and to significantly reduce the time to log in to improve the customers' satisfaction.
Which of the following should you implement in order to improve the performance of the application with minimal cost? (Select TWO.)
Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.
(Correct)
Increase the cache hit ratio of your CloudFront distribution by configuring your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age.
(Incorrect)
Use Lambda@Edge to allow your Lambda functions to customize content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users.
(Correct)
Set up multiple and geographically dispersed VPCs in various AWS regions, then create a transit VPC to connect all of your resources. Deploy the Lambda function in each region using AWS SAM in order to handle the requests faster.
Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.
Explanation
Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers. You can use Lambda functions to change CloudFront requests and responses at the following points:
- After CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards the request to the origin (origin request)
- After CloudFront receives the response from the origin (origin response)
- Before CloudFront forwards the response to the viewer (viewer response)
In the given scenario, you can use Lambda@Edge to allow your Lambda functions to customize the content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users. In addition, you can set up an origin failover by creating an origin group with two origins with one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin fails. This will alleviate the occasional HTTP 504 errors that users are experiencing.
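As a minimal sketch of the Lambda@Edge part (the cookie name and validation logic are assumptions made only for illustration), a viewer-request function can reject unauthenticated requests at the edge instead of sending every request back to the origin:

```python
# Viewer-request Lambda@Edge handler (deployed in us-east-1 and associated with the
# CloudFront distribution). The session-cookie check below is purely illustrative.
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    cookies = headers.get("cookie", [])
    has_session = any("session-token=" in c["value"] for c in cookies)

    if not has_session:
        # Return a response directly from the edge; CloudFront never contacts the origin.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "headers": {
                "content-type": [{"key": "Content-Type", "value": "text/plain"}],
            },
            "body": "Authentication required",
        }

    # Authenticated: forward the request to the origin as usual.
    return request
```

The origin failover piece is configured on the distribution itself: an origin group with a primary and a secondary origin, with CloudFront switching to the secondary when the primary returns the configured 5xx status codes such as 504.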
The option that says: Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user is incorrect. Although this may resolve the performance issue, this solution entails a significant implementation cost since you have to deploy your application to multiple AWS regions. Remember that the scenario asks for a solution that will improve the performance of the application with minimal cost.
The option that says: Increase the cache hit ratio of your CloudFront distribution by configuring your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age is incorrect because improving the cache hit ratio for the CloudFront distribution is irrelevant in this scenario. You can improve your cache performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content. However, take note that the problem in the scenario is the sluggish authentication process of your global users and not just the caching of the static objects.
References:
Check out these Amazon CloudFront and AWS Lambda Cheat Sheets:
Question 72: Correct
An accounting firm hosts a mix of Windows and Linux Amazon EC2 instances in its AWS account. The solutions architect has been tasked to conduct a monthly performance check on all production instances. There are more than 200 On-Demand EC2 instances running in their production environment and it is required to ensure that each instance has a logging feature that collects various system details such as memory usage, disk space, and other metrics. The system logs will be analyzed using AWS Analytics tools and the results will be stored in an S3 bucket.
Which of the following is the most efficient way to collect and analyze logs from the instances with minimal effort?
Enable the Traffic Mirroring feature and install AWS CDK on each On-Demand EC2 instance. Create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Set up CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.
Set up and install AWS Inspector Agent on each On-Demand EC2 instance which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.
Set up and configure a unified CloudWatch Logs agent in each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
(Correct)
Set up and install the AWS Systems Manager Agent (SSM Agent) on each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
Explanation
To collect logs from your Amazon EC2 instances and on-premises servers into CloudWatch Logs, AWS offers both a newer unified CloudWatch agent and an older CloudWatch Logs agent. AWS recommends the unified CloudWatch agent, which has the following advantages:
- You can collect both logs and advanced metrics with the installation and configuration of just one agent.
- The unified agent enables the collection of logs from servers running Windows Server.
- If you are using the agent to collect CloudWatch metrics, the unified agent also enables the collection of additional system metrics, for in-guest visibility.
- The unified agent provides better performance.
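As a sketch (the log file path, log group name, and configuration file location are placeholders), a single unified agent configuration can collect memory and disk metrics alongside a log file; here the configuration is built and written out from Python:

```python
import json

# Minimal unified CloudWatch agent configuration following the agent's documented schema;
# the file path and log group name below are placeholders.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]},
            "disk": {"measurement": ["used_percent"], "resources": ["*"]},
        }
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/messages",
                        "log_group_name": "/ec2/production/system-logs",
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    },
}

# Write the configuration where the agent's fetch-config step can load it
# (the exact path depends on how you start the agent on the instance).
with open("/opt/aws/amazon-cloudwatch-agent/etc/config.json", "w") as f:
    json.dump(agent_config, f, indent=2)
```

In practice, the configuration is typically stored once in SSM Parameter Store and fetched by every instance, which scales better than copying a file to more than 200 instances.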
CloudWatch Logs Insights enables you to interactively search and analyze your log data in Amazon CloudWatch Logs. You can perform queries to help you quickly and effectively respond to operational issues. If an issue occurs, you can use CloudWatch Logs Insights to identify potential causes and validate deployed fixes.
CloudWatch Logs Insights includes a purpose-built query language with a few simple but powerful commands. CloudWatch Logs Insights provides sample queries, command descriptions, query autocompletion, and log field discovery to help you get started quickly. Sample queries are included for several types of AWS service logs.
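For example, a Logs Insights query can also be run programmatically with boto3 (the log group name and query below are illustrative only):

```python
import time
import boto3

logs = boto3.client("logs")

# Illustrative query: the 20 most recent error lines from the fleet's log group.
query = """
fields @timestamp, @logStream, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
"""

start = logs.start_query(
    logGroupName="/ec2/production/system-logs",  # placeholder log group
    startTime=int(time.time()) - 3600,           # last hour
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query completes, then print the results.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result["results"]:
    print({field["field"]: field["value"] for field in row})
```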
Therefore, the correct answer is: Set up and configure a unified CloudWatch Logs agent in each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs, then analyze the log data with CloudWatch Logs Insights.
The option that says: Enable the Traffic Mirroring feature and install AWS CDK on each On-Demand EC2 instance. Create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Set up CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances is incorrect. Although a custom daemon script could be made to work, it entails a lot of effort to implement, as you would have to install the AWS CDK on each instance and develop and maintain a custom monitoring solution. Traffic Mirroring is simply an Amazon VPC feature for copying network traffic from an elastic network interface of an EC2 instance; it does not capture memory or disk metrics. The scenario specifically asks for a solution that can be implemented with minimal effort. In addition, enabling detailed monitoring in CloudWatch is unnecessary and not cost-efficient here, since the requirement can be met using CloudWatch Logs.
The option that says: Setting up and installing the AWS Systems Manager Agent (SSM Agent) on each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs, then analyzing the log data with CloudWatch Logs Insights is incorrect. Although this is also a workable approach, it is more efficient to use the unified CloudWatch agent than the SSM Agent for this purpose. Manually connecting to an instance to view log files and troubleshoot an issue with the SSM Agent is time-consuming; for more efficient instance monitoring, use the CloudWatch agent instead to send the log data to Amazon CloudWatch Logs.
The option that says: Setting up and installing AWS Inspector Agent on each On-Demand EC2 instance which will collect and push data to CloudWatch Logs periodically, then setting up a CloudWatch dashboard to properly analyze the log data of all instances is incorrect. Amazon Inspector is a security assessment service that checks for unintended network accessibility of your EC2 instances and for vulnerabilities on those instances; it does not collect system logs or metrics. Furthermore, a CloudWatch dashboard is not suitable here since it is primarily used to monitor your resources in a single view, even resources spread across different AWS Regions. It is better to use CloudWatch Logs Insights, which lets you interactively search and analyze your log data.
References:
Check out this Amazon CloudWatch Cheat Sheet:
CloudWatch Agent vs SSM Agent vs Custom Daemon Scripts