22 Mar 2024 noon study
Question 13: Skipped
A research company hosts its internal applications inside AWS VPCs in multiple AWS accounts. The internal applications are accessed securely from inside the company network using an AWS Site-to-Site VPN connection. VPC peering connections have been established from the company’s main AWS account to VPCs in the other AWS accounts. The company has recently announced that employees will be allowed to work remotely if they are connected using a VPN. [-> AWS Client VPN on each laptop + VPN endpoint + VPC route configuration] The solutions architect has been tasked with creating a scalable and reliable AWS Client VPN solution that employees can use when working remotely.
Which of the following options is the most cost-effective implementation to meet the company requirements with minimal changes to the current setup?
Install the AWS Client VPN on the company data center. Create AWS Transit Gateway to connect the main VPC and other VPCs into a single hub. Update the VPC route configurations to allow communication with the internal applications.
Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in the same VPC region in the main AWS account. Update the VPC route configurations to allow communication with the internal applications.
(Correct)
Install the AWS Client VPN on the company data center. Configure connectivity between the existing AWS Site-to-Site VPN and client VPN endpoint.
Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in each of the AWS accounts. Update each VPC route configuration to allow communication with the internal applications.
Explanation
AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.
The administrator is responsible for setting up and configuring the service. This involves creating the Client VPN endpoint, associating the target network, configuring the authorization rules, and setting up additional routes (if required).
The client is the end user. This is the person who connects to the Client VPN endpoint to establish a VPN session. The client establishes the VPN session from their local computer or mobile device using an OpenVPN-based VPN client application.
The following are the key components for using AWS Client VPN.
-Client VPN endpoint — Your Client VPN administrator creates and configures a Client VPN endpoint in AWS. Your administrator controls which networks and resources you can access when establishing a VPN connection.
-VPN client application — The software application that you use to connect to the Client VPN endpoint and establish a secure VPN connection.
-Client VPN endpoint configuration file — A configuration file that's provided to you by your Client VPN administrator. The file includes information about the Client VPN endpoint and the certificates required to establish a VPN connection. You load this file into your chosen VPN client application.
The configuration for this scenario includes a target VPC (VPC A) that is peered with an additional VPC (VPC B). We recommend this configuration if you need to give clients access to the resources inside a target VPC and other VPCs that are peered with it (such as VPC B).
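As a rough illustration of that setup, here is a minimal boto3 sketch that creates a Client VPN endpoint in the main account, associates it with a subnet in the main VPC, and adds an authorization rule plus route toward a peered VPC. All ARNs, subnet IDs, and CIDR ranges below are hypothetical placeholders, not values from the scenario.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical certificate ARNs and CIDRs for illustration only.
endpoint = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",  # range assigned to VPN clients; must not overlap the VPCs
    ServerCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/example-server",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-client"
        },
    }],
    ConnectionLogOptions={"Enabled": False},
)
endpoint_id = endpoint["ClientVpnEndpointId"]

# Associate a subnet in the main VPC so remote clients land inside it.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=endpoint_id,
    SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet in the main VPC
)

# Authorize and route traffic toward a peered VPC in another account.
ec2.authorize_client_vpn_ingress(
    ClientVpnEndpointId=endpoint_id,
    TargetNetworkCidr="10.20.0.0/16",  # hypothetical CIDR of a peered VPC
    AuthorizeAllGroups=True,
)
ec2.create_client_vpn_route(
    ClientVpnEndpointId=endpoint_id,
    DestinationCidrBlock="10.20.0.0/16",
    TargetVpcSubnetId="subnet-0123456789abcdef0",
)
```

Because the peered VPCs are already reachable from the main VPC, a single endpoint plus these routes is enough; no endpoint is needed in the other accounts.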
Therefore, the correct answer is: Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in the same VPC region in the main AWS account. Update the VPC route configurations to allow communication with the internal applications. This solution is cost-effective and requires minimal changes to the current network setup.
The option that says: Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in each of the AWS accounts. Update each VPC route configuration to allow communication with the internal applications is incorrect. You only need to create a Client VPN endpoint in the main AWS account, which already has the Site-to-Site VPN connection and the VPC peering connections to the other accounts.
The option that says: Install the AWS Client VPN on the company data center. Create AWS Transit Gateway to connect the main VPC and other VPCs into a single hub. Update the VPC route configurations to allow communication with the internal applications is incorrect. This may be possible; however, it would require reconfiguring the existing VPC peering between the main VPC and the VPCs in the other AWS accounts. Additionally, the AWS Client VPN should be installed on the employee workstations.
The option that says: Install the AWS Client VPN on the company data center. Configure connectivity between the existing AWS Site-to-Site VPN and client VPN endpoint is incorrect. This configuration is not possible; the AWS Client VPN should be installed on the employee workstations, not in the company data center.
References:
Check out these Amazon VPC and VPC Peering Cheat Sheets:
Question 17: Skipped
A top Internet of Things (IoT) company has developed a wrist-worn activity tracker for soldiers deployed in the field. The device acts as a sensor to monitor the health and vital statistics of the wearer. It is expected that thousands of devices will send data to the server every minute, and after 5 years the number will increase to tens of thousands. One of the requirements is that the application should be able to accept the incoming data, run it through ETL to store it in a data warehouse [-> EMR - Redshift], and archive the old data. [-> S3?] The officers in the military headquarters should have a real-time dashboard [-> Kinesis Firehose] to view the sensor data. [-> QuickSight? Kinesis?? WRONG]
[-> Kinesis Firehose -> S3; Lambda -> EMR -> Redshift]
Which of the following options is the most suitable architecture to implement in this scenario?
Store the raw data directly in an Amazon S3 bucket with a lifecycle policy to store in Glacier after a month. Register the S3 bucket as a source on AWS Lake Formation. Launch an EMR cluster to access the data lake, run the data through ETL, and then output it to Amazon Redshift.
Store the data directly to DynamoDB. Launch a data pipeline that starts an EMR cluster using data from DynamoDB and sends the data to S3 and Redshift.
Send the raw data directly to Amazon Kinesis Data Firehose for processing and output the data to an S3 bucket. For archiving, create a lifecycle policy from S3 to Glacier. Use Lambda to process the data through Amazon EMR and send the output to Amazon Redshift.
(Correct)
Leverage Amazon Athena to accept the incoming data and store them using DynamoDB. Set up a cron job that takes data from the DynamoDB table and sends it to an Amazon EMR cluster for ETL, then outputs the result to Amazon Redshift.
Explanation
Whenever there is a requirement for real-time data collection or analysis, always consider Amazon Kinesis.
Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB, New Relic, and Sumo Logic. Kinesis Data Firehose is part of the Kinesis streaming data platform, along with Kinesis Data Streams and Kinesis Video Streams.
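To make the ingestion and archiving side of this concrete, here is a minimal boto3 sketch: a producer pushes a sensor reading to an existing Firehose delivery stream (which delivers to S3), and an S3 lifecycle rule archives the delivered objects to Glacier after 30 days. The stream name, bucket name, and record fields are hypothetical.

```python
import json
import boto3

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

# Send one sensor reading to a hypothetical delivery stream that targets S3.
firehose.put_record(
    DeliveryStreamName="sensor-ingest-stream",
    Record={"Data": json.dumps({
        "device_id": "tracker-001",
        "heart_rate": 72,
        "ts": "2024-03-22T12:00:00Z",
    }).encode() + b"\n"},
)

# Archive the raw objects written by Firehose to Glacier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="sensor-raw-data-bucket",  # hypothetical bucket used as the Firehose destination
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-old-sensor-data",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    }]},
)
```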
Therefore, the correct answer is: Send the raw data directly to Amazon Kinesis Data Firehose for processing and output the data to an S3 bucket. For archiving, create a lifecycle policy from S3 to Glacier. Use Lambda to process the data through Amazon EMR and send the output to Amazon Redshift. Amazon Kinesis will ingest the data in real time, transform it, and output it to an Amazon S3 bucket.
The option that says: Store the data directly to DynamoDB. Launch a data pipeline that starts an EMR cluster using data from DynamoDB and sends the data to S3 and Redshift is incorrect. For the collection of real-time data, AWS recommends Amazon Kinesis for ingestion. After ingestion, you can output the data into several AWS services for further processing.
The option that says: Store the raw data directly in an Amazon S3 bucket with a lifecycle policy to store in Glacier after a month. Register the S3 bucket as a source on AWS Lake Formation. Launch an EMR cluster to access the data lake, run the data through ETL, and then output it to Amazon Redshift is incorrect. Amazon S3 only accepts data over HTTP requests, and the data sent by the IoT devices may arrive in a different format. Amazon Kinesis should be the first service to accept the data, process it, and store it in Amazon S3.
The option that says: Leverage Amazon Athena to accept the incoming data and store them using DynamoDB. Set up a cron job that takes data from the DynamoDB table and sends it to an Amazon EMR cluster for ETL, then outputs the result to Amazon Redshift is incorrect. Amazon Athena is a query service for analyzing data already stored in Amazon S3; it is not suitable for ingesting real-time IoT data.
References:
Check out this Amazon Kinesis Cheat Sheet:
Question 19: Skipped
A leading e-commerce company plans to launch a donation website for all the victims of the recent super typhoon in South East Asia for its Corporate and Social Responsibility program. The company will advertise its program on TV and on social media, which is why they anticipate incoming traffic on their donation website. Donors can send their donations in cash, which can be transferred electronically, or they can simply post their home address where a team of volunteers can pick up their used clothes, canned goods, and other donations. Donors can optionally write a positive and encouraging message to the victims along with their donations. These features of the donation website will eventually result in a high number of write operations on their database tier considering that there are millions of generous donors around the globe who want to help. [-> Aurora? Dynamo DAX?]
[ DynamoDB with provisioned write throughput - SQS - AS EC2]
Which of the following options is the best solution for this scenario?
Use an Oracle database hosted on an extra large Dedicated EC2 instance as your database tier and an SQS queue for buffering the write operations.
Amazon DynamoDB with a provisioned write throughput. Use an SQS queue to buffer the large incoming traffic to your Auto Scaled EC2 instances, which processes and writes the data to DynamoDB.(Correct)
Use DynamoDB as a database storage and a CloudFront web distribution for hosting static resources.
Use an Amazon RDS instance with Provisioned IOPS.
Explanation
In this scenario, the application is write-intensive which means that we have to implement a solution to buffer and scale out the services according to the incoming traffic. SQS can be used to handle the large incoming requests and you can use DynamoDB to store large amounts of data.
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers common constructs such as dead-letter queues and cost allocation tags.
Amazon SQS queues can deliver very high throughput. Standard queues support a nearly unlimited number of transactions per second (TPS) per action. By default, FIFO queues support up to 3,000 messages per second with batching.
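A minimal sketch of the buffering pattern, assuming a hypothetical queue URL, a DynamoDB table named Donations keyed on the donor attribute, and made-up message fields: the web tier only enqueues, while the Auto Scaled EC2 workers drain the queue and write to DynamoDB at the provisioned rate.

```python
import json
import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/donation-queue"  # hypothetical
table = dynamodb.Table("Donations")                                            # hypothetical table

# Web tier: buffer the donation in SQS instead of writing to the database directly.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"donor": "jane", "amount": 50, "message": "Stay strong!"}),
)

# Worker tier (Auto Scaled EC2 instances): drain the queue and write to DynamoDB.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    table.put_item(Item=json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```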
Therefore, the correct answer is: Amazon DynamoDB with a provisioned write throughput. Use an SQS queue to buffer the large incoming traffic to your Auto Scaled EC2 instances, which processes and writes the data to DynamoDB. The SQS queue can act as a buffer to temporarily store all the requests while waiting for them to be processed and written to the DynamoDB table.
The option that says: Use DynamoDB as a database storage and a CloudFront web distribution for hosting static resources is incorrect. CloudFront only caches static content; this option does not provide any buffering for the high number of incoming write operations on the database tier.
The option that says: Use an Oracle database hosted on an extra large Dedicated EC2 instance as your database tier and an SQS queue for buffering the write operations is incorrect. AWS does not recommend hosting a database on EC2 instances. RDS instances are designed for hosting databases.
The option that says: Use an Amazon RDS instance with Provisioned IOPS is incorrect. This may be possible, however, this is significantly more expensive than using a DynamoDB table and adding an SQS queue.
References:
Check out this Amazon SQS Cheat Sheet:
Question 22: Skipped
A company requires regular processing of a massive amount of product catalogs that need to be handled per batch [-> Step Functions to orchestrate batch processing workflows]. The data need to be processed regularly by on-demand workers [-> EC2]. The company instructed its solutions architect to design a workflow orchestration system that will enable them to reprocess failures [-> SQS - DLQ? WRONG] and handle multiple concurrent operations.
What is the MOST suitable solution that the solutions architect should implement in order to manage the state of every workflow?
Store workflow data in an Amazon RDS with AWS Lambda functions polling the RDS database instance for status changes. Set up worker Lambda functions to process the next workflow steps, then use Amazon QuickSight to visualize workflow states directly out of Amazon RDS.
Use Amazon MQ to set up a batch process workflow that handles the processing for a single batch. Develop worker jobs using AWS Lambda functions.
Set up a workflow using AWS Config and AWS Step Functions in order to orchestrate multiple concurrent workflows. Visualize the status of each workflow using the AWS Management Console. Store the historical data in an S3 bucket and visualize the data using Amazon QuickSight.
Implement AWS Step Functions to orchestrate batch processing workflows. Use the AWS Management Console to monitor workflow status and manage failure reprocessing
(Correct)
Explanation
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services, such as AWS Lambda, AWS Fargate, and Amazon SageMaker, into feature-rich applications.
In this scenario, the company can leverage AWS Step Functions to design and manage the workflows required for processing massive amounts of product catalogs. Step Functions can orchestrate these workflows by defining them as a series of steps or states, where each state can represent a specific task in the process, such as data validation, transformation, or loading. Furthermore, AWS Step Functions offers built-in mechanisms for error handling and retry policies, which are essential for managing failures in any of the workflow steps. Through the AWS Management Console, you can actively monitor each workflow's status, seeing real-time updates on successes, failures, and ongoing tasks. This setup makes it much easier for your solutions architect and operations team to quickly spot and troubleshoot any issues.
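The retry and catch behavior described above can be sketched as a small state machine definition. This is only an illustration under assumed names: the Lambda function ARN, role ARN, and state machine name are hypothetical, and a real batch workflow would typically have more states.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical ARNs; the Lambda function would do the actual catalog batch processing.
definition = {
    "StartAt": "ProcessBatch",
    "States": {
        "ProcessBatch": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-catalog-batch",
            "Retry": [{                       # automatic reprocessing of transient failures
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 30,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Catch": [{                       # route permanent failures to a failure state
                "ErrorEquals": ["States.ALL"],
                "Next": "HandleFailure",
            }],
            "End": True,
        },
        "HandleFailure": {"Type": "Fail", "Error": "BatchProcessingFailed"},
    },
}

sfn.create_state_machine(
    name="catalog-batch-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",
)
```

Each execution of this state machine then represents one batch, and failed executions can be inspected and re-run from the console.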
Hence, the correct answer is: Implement AWS Step Functions to orchestrate batch processing workflows. Use the AWS Management Console to monitor workflow status and manage failure reprocessing.
The option that says: Store workflow data in an Amazon RDS with AWS Lambda functions polling the RDS database instance for status changes. Set up worker Lambda functions to process the next workflow steps, then use Amazon QuickSight to visualize workflow states directly out of Amazon RDS is incorrect. While this could work, it involves more overhead in managing database connections and polling mechanisms, making it less efficient for orchestrating complex workflows compared to using a dedicated service like AWS Step Functions.
The option that says: Set up a workflow using AWS Config and AWS Step Functions in order to orchestrate multiple concurrent workflows. Visualize the status of each workflow using the AWS Management Console. Store the historical data in an S3 bucket and visualize the data using Amazon QuickSight is incorrect. AWS Config is primarily a service for auditing and evaluating the configurations of your AWS resources, not for workflow orchestration.
The option that says: Use Amazon MQ to set up a batch process workflow that handles the processing for a single batch. Develop worker jobs using AWS Lambda functions is incorrect. Amazon MQ is simply a message broker service for messaging between software components but does not offer the same level of workflow orchestration or state management capabilities as AWS Step Functions. It's more suited for decoupling applications and integrating different systems rather than managing complex workflows.
References:
Check out this AWS Step Functions Cheat Sheet:
Question 23: Skipped
A digital banking company runs its production workload on the AWS cloud. The company has enabled multi-region support on an AWS CloudTrail trail. As part of the company security policy, the creation of any IAM users must be approved by the security team. When an IAM user is created, all of the permissions from that user must be removed automatically [-> Lambda]. A notification [-> SNS] must then be sent to the security team to approve the user creation.
Which of the following options should the solutions architect implement to meet the company requirements? (Select THREE.)
Configure an event filter in AWS CloudTrail for the CreateUser event and send a notification to an Amazon Simple Notification Service (Amazon SNS) topic.
Use Amazon EventBridge to invoke an AWS Fargate task that will remove permissions on the newly created IAM user.
Create a rule in Amazon EventBridge that will check for patterns in AWS CloudTrail API calls with the CreateUser eventName.
(Correct)
Use Amazon EventBridge to invoke an AWS Step Function state machine that will remove permissions on the newly created IAM user.
(Correct)
Send a message to an Amazon Simple Notification Service (Amazon SNS) topic. Have the security team subscribe to the SNS topic.
(Correct)
Use AWS Audit Manager to continually audit newly created users and send a notification to the security team.
Explanation
Using IAM, you can manage access to AWS services and resources securely. You can create and manage AWS users and groups and use permissions to allow and deny those users and groups access to AWS resources.
You can create an Amazon EventBridge rule with an event pattern that matches a specific IAM API call or multiple IAM API calls. Then, associate the rule with an Amazon Simple Notification Service (Amazon SNS) topic. When the rule runs, an SNS notification is sent to the corresponding subscriptions.
Amazon EventBridge works with AWS CloudTrail, a service that records actions from AWS services. CloudTrail captures API calls made by or on behalf of your AWS account from the EventBridge console and to EventBridge API operations.
Using the information collected by CloudTrail, you can determine what request was made to EventBridge, the IP address from which the request was made, who made the request, when it was made, and more. When an event occurs in EventBridge, CloudTrail records the event in Event history. You can view, search, and download recent events in your AWS account.
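A minimal sketch of the rule and targets, assuming hypothetical names and ARNs for the rule, the state machine that strips permissions, the SNS topic, and the EventBridge execution role:

```python
import json
import boto3

events = boto3.client("events")

# Match the CloudTrail-recorded CreateUser API call.
events.put_rule(
    Name="detect-iam-create-user",
    EventPattern=json.dumps({
        "source": ["aws.iam"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventSource": ["iam.amazonaws.com"], "eventName": ["CreateUser"]},
    }),
    State="ENABLED",
)

# Fan out to the Step Functions state machine (removes permissions) and the
# SNS topic the security team subscribes to.
events.put_targets(
    Rule="detect-iam-create-user",
    Targets=[
        {"Id": "strip-permissions",
         "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:RemoveIamUserPermissions",
         "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeInvokeStepFunctions"},
        {"Id": "notify-security-team",
         "Arn": "arn:aws:sns:us-east-1:111122223333:iam-user-creation-alerts"},
    ],
)
```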
The option that says: Use Amazon EventBridge to invoke an AWS Step Function state machine that will remove permissions on the newly created IAM user is correct. When Amazon EventBridge detects a pattern from the specified events, it can trigger an AWS Step Function that has specific actions.
The option that says: Send a message to an Amazon Simple Notification Service (Amazon SNS) topic. Have the security team subscribe to the SNS topic is correct. If you create a pattern in Amazon EventBridge, it can then send a message to an SNS Topic once the event is detected.
The option that says: Create a rule in Amazon EventBridge that will check for patterns in AWS CloudTrail API calls with the CreateUser eventName is correct. With Amazon EventBridge, you define patterns to scan events that happen in your AWS account, such as creating an IAM user.
The option that says: Configure an event filter in AWS CloudTrail for the CreateUser event and send a notification to an Amazon Simple Notification Service (Amazon SNS) topic is incorrect. You can filter events in CloudTrail by searching for keywords; however, you can't configure CloudTrail itself to send per-event notifications to Amazon SNS. You should use EventBridge for this scenario.
The option that says: Use AWS Audit Manager to continually audit newly created users and send a notification to the security team is incorrect. AWS Audit Manager is used to map compliance requirements to AWS usage data with prebuilt and custom frameworks and automated evidence collection, not for reacting to IAM user creation events.
The option that says: Use Amazon EventBridge to invoke an AWS Fargate task that will remove permissions on the newly created IAM user is incorrect. Although EventBridge can run Amazon ECS tasks (including Fargate) as targets, launching a container task just to strip IAM permissions adds unnecessary overhead compared to invoking a Step Functions state machine as in the correct option.
References:
Check out these Amazon CloudWatch and AWS Simple Notification Service Cheat Sheets:
Question 30: Skipped [review - what's UDP?]
A company has an on-premises data center that is hosting its gaming service. Its primary function is player-matching, and it is accessible to players around the world. The gaming service prioritizes network speed for the users, so all traffic to the servers uses User Datagram Protocol (UDP) [-> ?] [NACL to deny non-UDP traffic]. As more players join, the company is having difficulty scaling its infrastructure, so it plans to migrate the service to the AWS cloud. The Solutions Architect has been tasked with the migration, and AWS Shield Advanced has already been enabled to protect all public-facing resources.
Which of the following actions should the Solutions Architect implement to achieve the company requirements? (Select TWO.) [-> Aurora, ECS / EC2 ?]
[UDP -> NLB ; NACL to deny non-UDP network]
Place the Auto Scaling of Amazon EC2 instances behind an Internet-facing Application Load Balancer (ALB). For the domain name, create an Amazon Route 53 entry that is Aliased to the FQDN of the ALB.
Create an AWS WAF rule that will explicitly block all non-UDP traffic. Ensure that the AWS WAF rule is associated with the load balancer of the EC2 instances.
Set up network ACL rules on the VPC to deny all non-UDP traffic. Ensure that the NACL is associated with the load balancer subnets.
(Correct)
Place the Auto Scaling of Amazon EC2 instances behind a Network Load Balancer (NLB). For the domain name, create an Amazon Route 53 entry that points to the Elastic IP address of the NLB.
(Correct)
Create an Amazon CloudFront distribution and set the Load Balancer as the origin. Use only secure protocols on the distribution origin settings.
Explanation
A Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC based on IP protocol data. Ideal for load balancing of both TCP and UDP traffic, Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.
For UDP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, and destination port. A UDP flow has the same source and destination, so it is consistently routed to a single target throughout its lifetime. Different UDP flows have different source IP addresses and ports so that they can be routed to different targets.
When you create Network Access Control Lists (NACLs), you can specify both allow and deny rules. This is useful if you want to explicitly deny certain types of traffic to your application. For example, you can define IP addresses (as CIDR ranges), protocols, and destination ports that are denied access to the entire subnet. If your application is used only for TCP traffic, you can create a rule to deny all UDP traffic or vice versa. This option is useful when responding to DDoS attacks because it lets you create your own rules to mitigate the attack when you know the source IPs or other signatures.
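A minimal sketch of such NACL rules, assuming a hypothetical NACL ID associated with the NLB subnets and a made-up UDP game port range: allow the game's UDP traffic first, then explicitly deny all TCP.

```python
import boto3

ec2 = boto3.client("ec2")

NACL_ID = "acl-0123456789abcdef0"   # hypothetical NACL attached to the NLB subnets

# Allow the game's UDP port range from anywhere (lower rule number is evaluated first).
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="17",                  # 17 = UDP
    RuleAction="allow",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 27000, "To": 27100},   # hypothetical game port range
)

# Explicitly deny all inbound TCP so only UDP reaches the subnets.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=110,
    Protocol="6",                   # 6 = TCP
    RuleAction="deny",
    Egress=False,
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 0, "To": 65535},
)
```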
If you are subscribed to AWS Shield Advanced, you can register Elastic IPs (EIPs) as Protected Resources. DDoS attacks against EIPs that have been registered as Protected Resources are detected more quickly, which can result in a faster time to mitigate.
The option that says: Place the Auto Scaling of Amazon EC2 instances behind a Network Load Balancer (NLB). For the domain name, create an Amazon Route 53 entry that points to the Elastic IP address of the NLB is correct. The service uses UDP traffic and prioritizes network speed, so a Network Load Balancer is ideal for this scenario. You can create a Route 53 entry pointing to the NLB’s Elastic IP.
The option that says: Set up network ACL rules on the VPC to deny all non-UDP traffic. Ensure that the NACL is associated with the load balancer subnets is correct. Since all traffic to the servers is UDP, you can set NACL to block all non-UDP traffic which can help block attacks such as traffic coming from TCP connections.
The option that says: Place the Auto Scaling of Amazon EC2 instances behind an internet-facing Application Load Balancer (ALB). For the domain name, create an Amazon Route 53 entry that is Aliased to the FQDN of the ALB is incorrect. UDP traffic operates at Layer 4 of the OSI model while an ALB operates at Layer 7. A Network Load Balancer is a better fit for this scenario.
The option that says: Create an AWS WAF rule that will explicitly block all non-UDP traffic. Ensure that the AWS WAF rule is associated with the load balancer of the EC2 instances is incorrect. AWS WAF rules cannot protect a Network Load Balancer yet. It is better to use NACL rules to block the non-UDP traffic.
The option that says: Create an Amazon CloudFront distribution and set the Load Balancer as the origin. Use only secure protocols on the distribution origin settings is incorrect. Although CloudFront provides caching and origin security settings (and AWS Shield Standard is enabled on it by default), CloudFront only serves HTTP/HTTPS requests and cannot carry the game's UDP traffic. For more advanced DDoS mitigation, AWS Shield Advanced offers integration with NACLs that block traffic at the edge of the AWS network.
References:
Check out these Application Load Balancer and AWS Shield Cheat Sheets:
Question 32: Skipped
A retail company runs its two-tier e-commerce website on its on-premises data center. The application runs on a LAMP stack behind a load balancing appliance. The operations team uses SSH to log in to the application servers to deploy software updates and install patches on the system [-> Patch Manager?]. The website has recently been a target of multiple cyber-attacks, such as:
- Distributed Denial of Service (DDoS) attacks [-> Shield Advanced]
- SQL Injection attacks [-> WAF Rules]
- Dictionary attacks to SSH accounts on the web servers [-> Stop SSH]
The solutions architect plans to migrate the whole system to AWS to improve its security and availability. The following approaches are laid out to address the company concerns:
- Fix SQL injection attacks by reviewing existing application code and logic.
- Use the latest Amazon Linux AMIs to ensure that initial security patches are installed.
- Install the AWS Systems Manager agent on the instances to manage OS patching.
Which of the following are additional recommended actions to address the identified attacks while maintaining high availability [-> Multi AZ] and security for the application?
Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Disable remote SSH login to the EC2 instances and use AWS SSM Session Manager instead. Migrate the on-premises MySQL server to an Amazon RDS Single-AZ instance. Create a CloudFront distribution in front of the application servers, and apply AWS WAF rules for the distribution.
Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Enable remote SSH login on the EC2 instances but with limited access from the company IP address only. Migrate the on-premises MySQL server to an Amazon RDS Single-AZ instance. Enable AWS Shield Standard to protect the instances from DDoS attacks.
Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Enable remote SSH login only on a bastion host with limited access from the company IP address. Migrate the on-premises MySQL server to an Amazon RDS Multi-AZ instance. Create a CloudFront distribution in front of the application servers and enable AWS Shield Standard for DDoS protection.
Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Disable remote SSH login to the EC2 instances and use AWS SSM Session Manager instead. Migrate the on-premises MySQL server to an Amazon RDS Multi-AZ instance. Create a CloudFront distribution in front of the application servers, and apply AWS WAF rules for the distribution. Enable AWS Shield Advanced for added protection.
(Correct)
Explanation
AWS Systems Manager Session Manager allows you to manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, on-premises instances, and virtual machines (VMs) through an interactive one-click browser-based shell or through the AWS Command Line Interface (AWS CLI). Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
A distributed denial of service (DDoS) attack is an attack in which multiple compromised systems attempt to flood a target, such as a network or web application, with traffic. A DDoS attack can prevent legitimate users from accessing a service and can cause the system to crash due to the overwhelming traffic volume.
AWS provides two levels of protection against DDoS attacks: AWS Shield Standard and AWS Shield Advanced. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against the most common, frequently occurring network and transport layer DDoS attacks that target your website or applications. For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. When you subscribe to AWS Shield Advanced and add specific resources to be protected, AWS Shield Advanced provides expanded DDoS attack protection for web applications running on the resources.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that control bot traffic and block common attack patterns, such as SQL injection or cross-site scripting. You can also customize rules that filter out specific traffic patterns.
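For the SQL injection part specifically, a minimal sketch of a CloudFront-scoped web ACL using the AWS managed SQLi rule group is shown below. The web ACL name and metric names are hypothetical; the resulting web ACL would then be referenced from the CloudFront distribution's configuration.

```python
import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="ecommerce-web-acl",            # hypothetical name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "block-sql-injection",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }},
        "OverrideAction": {"None": {}},  # keep the managed rules' own block actions
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "sqli-rule"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "ecommerce-web-acl"},
)
```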
Therefore, the correct answer is: Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Disable remote SSH login to the EC2 instances and use AWS SSM Session Manager instead. Migrate the on-premises MySQL server to an Amazon RDS Multi-AZ instance. Create a CloudFront distribution in front of the application servers, and apply AWS WAF rules for the distribution. Enable AWS Shield Advanced for added protection. AWS SSM Session Manager allows secure remote access to your instances without using SSH login. AWS WAF rules can block common web exploits like SQL injection attacks, and AWS Shield Advanced provides added protection against DDoS attacks.
The option that says: Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Enable remote SSH login only on a bastion host with limited access from the company IP address. Migrate the on-premises MySQL server to an Amazon RDS Multi-AZ instance. Create a CloudFront distribution in front of the application servers and enable AWS Shield Standard for DDoS protection is incorrect. You don't need a bastion host if the AWS SSM agent is installed on the instances; you can use SSM Session Manager to log in to the servers. AWS Shield Standard is already enabled by default, so you should enable AWS Shield Advanced for a higher level of protection.
The option that says: Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Enable remote SSH login on the EC2 instances but with limited access from the company IP address only. Migrate the on-premises MySQL server to an Amazon RDS Single-AZ instance. Enable AWS Shield Standard to protect the instances from DDoS attacks is incorrect. Enabling direct SSH access, even when limited to the company IP address, still exposes the servers to SSH-based attacks; with the SSM agent installed, you should use Session Manager instead. Using a Single-AZ RDS instance is also not recommended if you require a highly available database.
The option that says: Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Disable remote SSH login to the EC2 instances and use AWS SSM Session Manager instead. Migrate the on-premises MySQL server to an Amazon RDS Single-AZ instance. Create a CloudFront distribution in front of the application servers, and apply AWS WAF rules for the distribution is incorrect. AWS WAF may protect you from common web attacks, but you still need to enable AWS Shield Advanced for a higher level of protection against DDoS attacks. Using a Single-AZ RDS instance is not recommended if you require a highly available database.
References:
Check out these AWS Systems Manager and AWS WAF Cheat Sheets:
Question 35: Skipped
A supermarket chain is planning to launch an online shopping website to allow its loyal shoppers to buy their groceries online. Since there are a lot of online shoppers at any time of the day, the website should be highly available 24/7 and fault tolerant. Which of the following options provides the best architecture that meets the above requirement? [>1 AZ; AS EC2; RDS Multi AZ]
Deploy the website across 2 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an Amazon RDS database running in a single Reserved EC2 Instance.
Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an RDS instance configured with Multi-AZ Deployments.(Correct)
Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and one RDS instance deployed with Read Replicas in the two separate Availability Zones.
Deploy the website across 2 Availability Zones with Auto Scaled EC2 instances each and an RDS instance deployed with Read Replicas in two separate Availability Zones.
Explanation
For high availability, it is best to always choose an RDS instance configured with Multi-AZ deployments. You should also deploy your application across multiple Availability Zones to improve fault tolerance. AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes.
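As a small illustration of the database tier, here is a hedged boto3 sketch of creating a Multi-AZ RDS instance; the identifier, instance class, and credentials are placeholders only.

```python
import boto3

rds = boto3.client("rds")

# A minimal Multi-AZ MySQL instance; identifiers and credentials are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="shop-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="ChangeMe12345!",   # manage real credentials with Secrets Manager
    MultiAZ=True,                          # synchronous standby in another AZ with automatic failover
)
```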
Hence, the correct answer is the option that says: Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an RDS configured with Multi-AZ Deployments.
The option that says: Deploy the website across 2 Availability Zones with Auto Scaled EC2 instances each and an RDS instance deployed with Read Replicas in two separate Availability Zones is incorrect because a Read Replica is primarily used to improve the scalability of the database. This architecture will neither be highly available 24/7 nor fault-tolerant in the event of an AZ outage.
The option that says: Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and one RDS instance deployed with Read Replicas in the two separate Availability Zones is incorrect. Although the EC2 instances are highly available, the database tier is not. You have to use an Amazon RDS database with Multi-AZ configuration.
The option that says: Deploy the website across 2 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an Amazon RDS database running in a single Reserved EC2 Instance is incorrect. With this architecture, the single Reserved Instance is only deployed in a single AZ. In the event of an AZ outage, the entire database will be unavailable.
References:
Check out this AWS Auto Scaling Cheat Sheet:
Question 38: Skipped
A law firm has decided to use Amazon S3 buckets for storage after an extensive Total Cost of Ownership (TCO) analysis comparing S3 versus acquiring more storage for its on-premises hardware. The attorneys, paralegals, clerks, and other employees of the law firm will be using Amazon S3 buckets to store their legal documents and other media files. For a better user experience, the management wants to implement a single sign-on system in which users can just use their existing Active Directory login to access the S3 storage to avoid having to remember yet another password. [-> STS]
Which of the following options should the solutions architect implement for the above requirement and also provide a mechanism that restricts access for each user to a designated user folder in a bucket? (Select TWO.)
Set up a matching IAM user and IAM Policy for every user in your corporate directory that needs access to a folder in the bucket.
Set up a federation proxy or a custom identity provider and use AWS Security Token Service to generate temporary tokens. Use an IAM Role to enable access to AWS services.
(Correct)
Configure an IAM Policy that restricts access only to the user-specific folders in the Amazon S3 Bucket.
(Correct)
Use Amazon Connect to integrate the on-premises Active Directory with Amazon S3 and AWS IAM.
Configure an IAM user that provides access for the user and an IAM Policy that restricts access only to the user-specific folders in the S3 Bucket.
Explanation
Federation enables you to manage access to your AWS Cloud resources centrally. With federation, you can use single sign-on (SSO) to access your AWS accounts using credentials from your corporate directory. Federation uses open standards, such as Security Assertion Markup Language 2.0 (SAML), to exchange identity and security information between an identity provider (IdP) and an application.
Your users might already have identities outside of AWS, such as in your corporate directory. If those users need to work with AWS resources (or work with applications that access those resources) then those users also need AWS security credentials. You can use an IAM role to specify permissions for users whose identity is federated from your organization or a third-party identity provider (IdP). Setting up an identity provider for federated access is required for integrating your on-premises Active Directory with AWS.
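One possible way the federation-proxy pattern could look is sketched below: after the proxy authenticates the user against Active Directory, it asks STS for temporary credentials scoped by an inline policy that only allows the user's own folder. The bucket name, folder layout, and function name are hypothetical.

```python
import json
import boto3

sts = boto3.client("sts")

BUCKET = "law-firm-documents"            # hypothetical bucket name

def temporary_credentials_for(ad_username: str):
    """Hypothetical helper called by the federation proxy after AD authentication."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # let the user list only their own folder
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{BUCKET}",
                "Condition": {"StringLike": {"s3:prefix": [f"{ad_username}/*"]}},
            },
            {   # read and write objects only under that folder
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{BUCKET}/{ad_username}/*",
            },
        ],
    }
    return sts.get_federation_token(
        Name=ad_username,
        Policy=json.dumps(policy),
        DurationSeconds=3600,
    )["Credentials"]
```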
Therefore, the following options are the correct answers:
- Set up a federation proxy or a custom identity provider and use AWS Security Token Service to generate temporary tokens. Use an IAM Role to enable access to AWS services.
- Configure an IAM Policy that restricts access only to the user-specific folders in the Amazon S3 Bucket.
The option that says: Use Amazon Connect to integrate the on-premises Active Directory with Amazon S3 and AWS IAM is incorrect because Amazon Connect is simply an easy-to-use omnichannel cloud contact center service that helps companies provide superior customer service at a lower cost; it is not a directory integration service.
The option that says: Configure an IAM user that provides access for the user and an IAM Policy that restricts access only to the user-specific folders in the S3 Bucket is incorrect because you have to use an IAM Role, instead of an IAM user, to provide the access needed to your AWS Resources.
The option that says: Set up a matching IAM user and IAM Policy for every user in your corporate directory that needs access to a folder in the bucket is incorrect because you should be creating IAM Roles rather than IAM Users.
References:
Check out this AWS Identity & Access Management (IAM) Cheat Sheet:
Question 40: Skipped (review)
A company deployed a blockchain application in AWS a year ago using AWS OpsWorks. There have been a lot of security patches lately for the underlying Linux servers of the blockchain application, which means that the OpsWorks stack instances should be updated.
In this scenario, which of the following are the best practices when updating an AWS stack? (Select TWO.)
Create and start new instances to replace your current online instances. Then delete the current instances. The new instances will have the latest set of security patches installed during setup.(Correct)
Use WAF to deploy the security patches.
On Windows-based instances, run the Update Dependencies stack command.
Use CloudFormation to deploy the security patches.
Delete the entire stack and create a new one.
Run the Update Dependencies stack command for Linux based instances.(Correct)
Explanation
Linux operating system providers supply regular updates, most of which are operating system security patches but can also include updates to installed packages. You should ensure that your instances' operating systems are current with the latest security patches.
By default, AWS OpsWorks Stacks automatically installs the latest updates during setup, after an instance finishes booting. AWS OpsWorks Stacks does not automatically install updates after an instance is online, to avoid interruptions such as restarting application servers. Instead, you manage updates to your online instances yourself, so you can minimize any disruptions.
AWS recommends that you use one of the following to update your online instances:
- Create and start new instances to replace your current online instances. Then delete the current instances. The new instances will have the latest set of security patches installed during setup.
- On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command, which installs the current set of security patches and other updates on the specified instances.
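A minimal sketch of triggering the second approach above through the API, assuming hypothetical stack and instance IDs:

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")  # OpsWorks Stacks API endpoint

# Stack and instance IDs are placeholders.
opsworks.create_deployment(
    StackId="2f18a1b2-example-stack-id",
    InstanceIds=["4d6d1710-example-instance-id"],
    Command={"Name": "update_dependencies"},   # installs current security patches and package updates
)
```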
The following options are incorrect as these are irrelevant in updating your online instances in OpsWorks:
- Delete the entire stack and create a new one.
- Use CloudFormation to deploy the security patches.
- On Windows-based instances, run the Update Dependencies stack command.
- Use WAF to deploy the security patches.
References:
Check out this AWS OpsWorks Cheat Sheet:
Question 45: Skipped
A retail company has an online shopping website that provides cheap bargains and discounts on various products. The company has recently moved its infrastructure from its previous hosting provider to AWS. The architecture uses an Application Load Balancer (ALB) in front of an Auto Scaling group of Spot and On-Demand EC2 instances. The solutions architect must set up a CloudFront web distribution that uses a custom domain name and the origin should point to the new ALB.
Which of the following options is the correct implementation of an end-to-end HTTPS connection from the origin to the CloudFront viewers?
Use a certificate that is signed by a trusted third-party certificate authority in the ALB, which is then imported into ACM. Set the Viewer Protocol Policy to HTTPS Only in CloudFront, then use an SSL/TLS certificate from a third-party certificate authority which was imported to S3.
Use a certificate that is signed by a trusted third-party certificate authority in the ALB, which is then imported into ACM. Set the Viewer Protocol Policy to Match Viewer to support both HTTP or HTTPS in CloudFront then use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store.
Import a certificate that is signed by a trusted third-party certificate authority, store it to ACM then attach it in your ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store.
(Correct)
Upload a self-signed certificate in the ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store.
Explanation
Remember that there are rules on which type of SSL Certificate to use if you are using an EC2 or an ELB as your origin. This question is about setting up an end-to-end HTTPS connection between the Viewers, CloudFront, and your custom origin, which is an ALB instance.
The certificate issuer you must use depends on whether you want to require HTTPS between viewers and CloudFront or between CloudFront and your origin:
HTTPS between viewers and CloudFront
- You can use a certificate that was issued by a trusted certificate authority (CA) such as Comodo, DigiCert, Symantec or other third-party providers.
- You can use a certificate provided by AWS Certificate Manager (ACM)
HTTPS between CloudFront and a custom origin
- If the origin is not an ELB load balancer, such as Amazon EC2, the certificate must be issued by a trusted CA such as Comodo, DigiCert, Symantec or other third-party providers.
- If your origin is an ELB load balancer, you can also use a certificate provided by ACM.
If you're using your own domain name, such as tutorialsdojo.com, you need to change several CloudFront settings. You also need to use an SSL/TLS certificate provided by AWS Certificate Manager (ACM), or import a certificate from a third-party certificate authority into ACM or the IAM certificate store. Lastly, you should set the Viewer Protocol Policy to HTTPS Only in CloudFront.
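A minimal boto3 sketch of a distribution configured for end-to-end HTTPS is shown below: the viewer policy is https-only, the custom origin (the ALB) is reached over https-only, and an ACM certificate covers the custom domain. The domain, ALB DNS name, and certificate ARN are hypothetical; the CachePolicyId is the AWS managed CachingOptimized policy.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),
    "Aliases": {"Quantity": 1, "Items": ["shop.example.com"]},          # hypothetical custom domain
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "alb-origin",
        "DomainName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",  # hypothetical ALB DNS name
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",      # CloudFront -> ALB over HTTPS
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "alb-origin",
        "ViewerProtocolPolicy": "https-only",          # viewers -> CloudFront over HTTPS
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # AWS managed CachingOptimized policy
    },
    "ViewerCertificate": {
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example",  # hypothetical; must be in us-east-1
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
    "Comment": "End-to-end HTTPS sketch",
    "Enabled": True,
})
```

The certificate attached to the ALB listener (issued by ACM or imported into ACM) covers the CloudFront-to-origin leg.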
Hence, the option that says: Import a certificate that is signed by a trusted third-party certificate authority, store it to ACM then attach it in your ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store is the correct answer in this scenario.
The option that says: Upload a self-signed certificate in the ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store is incorrect because you cannot directly upload a self-signed certificate in your ALB.
The option that says: Use a certificate that is signed by a trusted third-party certificate authority in the ALB, which is then imported into ACM. Set the Viewer Protocol Policy to Match Viewer to support both HTTP or HTTPS in CloudFront then use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store is incorrect because you have to set the Viewer Protocol Policy to HTTPS Only.
The option that says: Use a certificate that is signed by a trusted third-party certificate authority in the ALB, which is then imported into ACM. Set the Viewer Protocol Policy to HTTPS Only in CloudFront, then use an SSL/TLS certificate from a third-party certificate authority which was imported to S3 is incorrect because you cannot use an SSL/TLS certificate from a third-party certificate authority which was imported to S3.
References:
Check out this Amazon CloudFront Cheat Sheet: