Design for new solutions (29%)
Question 1: Correct
ChatGPT (Correct and beyond) -
A company develops Docker containers to host web applications on its on-premises data center. The company wants to migrate its workload to the cloud and use AWS Fargate [-> ECS; Bridge Network mode]. The solutions architect has created the necessary task definition and service for the Fargate cluster. For security requirements, the cluster is placed on a private subnet [NAT Gateway; No need Public IP] in the VPC that has no direct connection outside of the VPC. The following error is received when trying to launch the Fargate task:
CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection
Which of the following options should be able to fix this issue?
Update the AWS Fargate task definition and set the auto-assign public IP option to DISABLED. Launch a NAT gateway on the private subnet of the VPC and update the route table of the private subnet to route requests to the Internet.
Update the AWS Fargate task definition and set the auto-assign public IP option to DISABLED. Launch a NAT gateway on the public subnet of the VPC and update the route table of the private subnet to route requests to the Internet.
(Correct)
This is a limitation of the “awsvpc” network mode. Update the AWS Fargate definition to use the “bridge” network mode instead to allow connections to the Internet.
Update the AWS Fargate task definition and set the auto-assign public IP option to ENABLED. Create a gateway VPC endpoint for Amazon ECR. Update the route table to allow AWS Fargate to pull images on Amazon ECR via the endpoint.
Explanation
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
Fargate allocates the right amount of compute resources, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel providing the tasks and pods their own isolated compute environment.
The CannotPullContainer error (500) is caused by a connection timeout when connecting to Amazon ECR. It indicates that the container image specified when creating the task could not be retrieved.
When a Fargate task is launched, its elastic network interface (ENI) requires a route to the Internet to pull container images. If you receive an error similar to the following when launching a task, it is because a route to the Internet does not exist:
CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection
To resolve this issue, you can:
- For tasks in public subnets, specify ENABLED for Auto-assign public IP when launching the task.
- For tasks in private subnets, specify DISABLED for Auto-assign public IP when launching the task, and configure a NAT gateway in your VPC to route requests to the Internet.
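To make the private-subnet fix concrete, here is a minimal boto3 sketch that launches the Fargate task with the public IP disabled. The cluster, task definition, subnet, and security group IDs are placeholders, and the NAT gateway plus route table changes are made in the VPC itself, not in this call.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Launch the Fargate task in a private subnet. With assignPublicIp disabled,
# the task's ENI reaches Amazon ECR through the NAT gateway configured in the
# private subnet's route table (placeholder IDs below).
response = ecs.run_task(
    cluster="web-app-cluster",          # hypothetical cluster name
    taskDefinition="web-app-task:1",    # hypothetical task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],     # private subnet
            "securityGroups": ["sg-0def5678"],
            "assignPublicIp": "DISABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```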
Therefore, the correct answer is: Update the AWS Fargate task definition and set the auto-assign public IP option to DISABLED. Launch a NAT gateway on the public subnet of the VPC and update the route table of the private subnet to route requests to the Internet. The NAT gateway in the public subnet has a public IP address and a route to the Internet Gateway. The tasks in the private subnet send their Internet-bound traffic to the NAT gateway, allowing them to pull images from Amazon Elastic Container Registry.
The option that says: Update the AWS Fargate task definition and set the auto-assign public IP option to ENABLED. Create a gateway VPC endpoint for Amazon ECR. Update the route table to allow AWS Fargate to pull images on Amazon ECR via the endpoint is incorrect. Since the Fargate tasks are in a private subnet, you should not enable the auto-assign public IP option. Additionally, Amazon ECR requires an interface VPC endpoint, not a gateway VPC endpoint.
The option that says: Update the AWS Fargate task definition and set the auto-assign public IP option to DISABLED. Launch a NAT gateway on the private subnet of the VPC and update the route table of the private subnet to route requests to the Internet is incorrect. The NAT gateway should be placed in a public subnet because it needs a Public IP address and a direct route to the Internet Gateway (IGW). If it is placed on a private subnet, it will have the same routing limitation as those resources in the private subnet.
The option that says: This is a limitation of the “awsvpc” network mode. Update the AWS Fargate definition to use the “bridge” network mode instead to allow connections to the Internet is incorrect. AWS Fargate only supports the “awsvpc” network mode. Each task is allocated its own elastic network interface (ENI) that is used for communication inside the VPC.
Question 2: Correct
A tech company will soon launch a new smartwatch that will collect statistics and usage information from its users. The solutions architect was tasked to design a data storage and retrieval solution [-> S3?] for the receiving application. The application is expected to ingest millions of records per minute [Kinesis? DynamoDB?] from its worldwide user base. For the storage requirements:
- Each record is less than 4 KB in size.
- Data must be stored durably.
- For running the application for a year, the estimated storage requirement is around 10-15 TB.
Which of the following options is the recommended storage solution while being the most cost-effective?
Configure the application to receive the records and set the storage to a DynamoDB table. Configure proper scaling on the DynamoDB table and enable the DynamoDB table Time to Live (TTL) setting to delete records after 120 days.
(Correct)
Use Amazon Kinesis Data Stream to ingest and store the records. Set a custom data retention period of 120 days for the data stream. Send the streamed data to an Amazon S3 bucket for added durability.
Configure the application to ingest the records and store each record on a dedicated Amazon S3 bucket. Ensure that a unique filename is set for each object. Create an S3 bucket lifecycle policy to expire objects that are older than 120 days.
Configure the application to receive the records and store the records to Amazon Aurora Serverless. Write an AWS Lambda function that runs a query to delete records older than 120 days. Schedule the function to run every night.
Explanation
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs.
TTL is useful if you store items that lose relevance after a specific time. The following are example TTL use cases:
-Remove user or sensor data after one year of inactivity in an application.
-Archive expired items to an Amazon S3 data lake via Amazon DynamoDB Streams and AWS Lambda.
-Retain sensitive data for a certain amount of time according to contractual or regulatory obligations.
When enabling TTL on a DynamoDB table, you must identify a specific attribute name that the service will look for when determining if an item is eligible for expiration. After you enable TTL on a table, a per-partition scanner background process automatically and continuously evaluates the expiry status of items in the table.
The scanner background process compares the current time, in Unix epoch time format in seconds, to the value stored in the user-defined attribute of an item. If the attribute is a Number data type, the attribute’s value is a timestamp in Unix epoch time format in seconds, and the timestamp value is older than the current time but not five years older or more (in order to avoid a possible accidental deletion due to a malformed TTL value), then the item is set to expire.
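As a rough illustration of how TTL is wired up, here is a boto3 sketch that enables TTL on a table and writes a record that expires 120 days after ingestion. The table name, attribute names, and item fields are hypothetical, not taken from the scenario.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, telling DynamoDB which attribute holds the expiry
# timestamp (table and attribute names are placeholders).
dynamodb.update_time_to_live(
    TableName="smartwatch-records",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write a record that expires 120 days from now. The TTL attribute must be a
# Number containing a Unix epoch timestamp in seconds.
expires_at = int(time.time()) + 120 * 24 * 60 * 60
dynamodb.put_item(
    TableName="smartwatch-records",
    Item={
        "device_id": {"S": "watch-0001"},
        "recorded_at": {"N": str(int(time.time()))},
        "payload": {"S": "{\"heart_rate\": 72}"},
        "expires_at": {"N": str(expires_at)},
    },
)
```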
Therefore, the correct answer is: Configure the application to receive the records and set the storage to a DynamoDB table. Configure proper scaling on the DynamoDB table and enable the DynamoDB table Time to Live (TTL) setting to delete records after 120 days. DynamoDB has a feature to delete items after a defined timestamp. This is cost-effective because it does not consume any write throughput.
The option that says: Use Amazon Kinesis Data Stream to ingest and store the records. Set a custom data retention period of 120 days for the data stream. Send the streamed data to an Amazon S3 bucket for added durability is incorrect. This may be possible; however, you don't need to store the records on Amazon S3. Amazon Kinesis Data Streams already stores records durably across multiple Availability Zones, so the extra copy in S3 is redundant. Additionally, the extended 120-day retention and every write and retrieval on the Kinesis stream will incur additional costs.
The option that says: Configure the application to receive the records and store the records to Amazon Aurora Serverless. Write an AWS Lambda function that runs a query to delete records older than 120 days. Schedule the function to run every night is incorrect. Since the application will constantly receive records, you won't get the cost-effectiveness benefit of using Amazon Aurora Serverless. The DB will constantly be running anyway.
The option that says: Configure the application to ingest the records and store each record on a dedicated Amazon S3 bucket. Ensure that a unique filename is set for each object. Create an S3 bucket lifecycle policy to expire objects that are older than 120 days is incorrect. This is possible; however, writing millions of very small objects per minute incurs a high per-request cost, and it does not provide the low-latency record retrieval that DynamoDB does.
Question 3: Correct
A company has several IoT enabled devices and sells them to customers around the globe. Every 5 minutes, each IoT device sends back a data file that includes the device status and other information to an Amazon S3 bucket. Every midnight, a Python cron job [-> S3 Event] runs from an Amazon EC2 instance to read and process each data file on the S3 bucket and loads the values on a designated Amazon RDS database. The cron job takes about 10 minutes [-> Lambda] to process a day’s worth of data. After each data file is processed, it is eventually deleted from the S3 bucket. The company wants to expedite the process and access the processed data on the Amazon RDS as soon as possible.
Which of the following actions would you implement to achieve this requirement with the LEAST amount of effort?
Convert the Python script cron job to an AWS Lambda function. Configure the Amazon S3 bucket event notifications to trigger the Lambda function whenever an object is uploaded to the bucket.
(Correct)
Increase the Amazon EC2 instance size and spawn more instances to speed up the processing of the data files. Set the Python script cron job schedule to a 1-minute interval to further improve the access time.
Convert the Python script cron job to an AWS Lambda function. Configure AWS CloudTrail to log data events of the Amazon S3 bucket. Set up an Amazon EventBridge rule to trigger the Lambda function whenever an upload event on the S3 bucket occurs.
Convert the Python script cron job to an AWS Lambda function. Create an Amazon EventBridge rule scheduled at 1-minute intervals and trigger the Lambda function. Create parallel CloudWatch rules that trigger the same Lambda function to further reduce the processing time.
Explanation
The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this configuration in the notification subresource that is associated with a bucket. Amazon S3 event notifications are designed to be delivered at least once. Typically, event notifications are delivered in seconds but can sometimes take a minute or longer.
Currently, Amazon S3 can publish notifications for the following events:
- New object created events — Amazon S3 supports multiple APIs to create objects. You can request a notification when only a specific API is used (for example, s3:ObjectCreated:Put), or you can use a wildcard (for example, s3:ObjectCreated:*) to request a notification when an object is created regardless of the API used.
- Object removal events — Amazon S3 supports deletes of versioned and unversioned objects. For information about object versioning, see Object Versioning and Using versioning.
- Restore object events — Amazon S3 supports the restoration of objects archived to the S3 Glacier storage classes. You request to be notified of object restoration completion by using s3:ObjectRestore:Completed. You use s3:ObjectRestore:Post to request notification of the initiation of a restore.
- Reduced Redundancy Storage (RRS) object lost events — Amazon S3 sends a notification message when it detects that an object of the RRS storage class has been lost.
- Replication events — Amazon S3 sends event notifications for replication configurations that have S3 Replication Time Control (S3 RTC) enabled. It sends these notifications when an object fails replication, when an object exceeds the 15-minute threshold, when an object is replicated after the 15-minute threshold, and when an object is no longer tracked by replication metrics. It publishes a second event when that object replicates to the destination Region.
Enabling notifications is a bucket-level operation; that is, you store notification configuration information in the notification subresource associated with a bucket. After creating or changing the bucket notification configuration, typically you need to wait 5 minutes for the changes to take effect. Amazon S3 supports the following destinations where it can publish events - Amazon Simple Notification Service (Amazon SNS) topic, Amazon Simple Queue Service (Amazon SQS) queue, and AWS Lambda.
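A minimal boto3 sketch of this setup, assuming a hypothetical bucket name and Lambda function ARN, could look like the following; note that the function must also grant Amazon S3 permission to invoke it.

```python
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

BUCKET = "iot-device-files"  # hypothetical bucket name
FUNCTION_ARN = (
    "arn:aws:lambda:us-east-1:111122223333:function:process-device-file"
)  # hypothetical function ARN

# Allow the S3 bucket to invoke the Lambda function.
lam.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="AllowS3Invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{BUCKET}",
)

# Trigger the function for every new object uploaded to the bucket.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": FUNCTION_ARN,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```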
Therefore, the correct answer is: Convert the Python script cron job to an AWS Lambda function. Configure the Amazon S3 bucket event notifications to trigger the Lambda function whenever an object is uploaded to the bucket because this provides the best processing and access time. Each of the data files will be processed almost immediately once uploaded on the S3 bucket.
The option that says: Convert the Python script cron job to an AWS Lambda function. Configure AWS CloudTrail to log data events of the Amazon S3 bucket. Set up an Amazon EventBridge rule to trigger the Lambda function whenever an upload event on the S3 bucket occurs is incorrect. Although this is possible, you do not have to use CloudTrail and CloudWatch Events to satisfy the given requirement. This solution entails a lot of steps. You can simply use the Amazon S3 event notification feature that can trigger the Lambda function directly.
The option that says: Increase the Amazon EC2 instance size and spawn more instances to speed up the processing of the data files. Set the Python script cron job schedule to a 1-minute interval to further improve the access time is incorrect. This solution is unreliable since multiple Amazon EC2 instances can process the same data file at the same time, and because of the limitations of cron, the minimum processing interval is only 1 minute.
The option that says: Convert the Python script cron job to an AWS Lambda function. Create an Amazon EventBridge rule scheduled at 1-minute intervals and trigger the Lambda function. Create parallel CloudWatch rules that trigger the same Lambda function to further reduce the processing time is incorrect. A scheduled Amazon EventBridge (CloudWatch Events) rule can only run at a minimum interval of 1 minute. Using Amazon S3 event notifications as the trigger results in near real-time processing of the data files.
Question 5: Skipped
An international humanitarian aid organization has a requirement to store 20 TB worth of scanned files for their relief operations which can grow to up to a total of 50 TB of data [-> S3]. There is also a requirement to have a website with a search feature [-> CloudSearch] in place that can be used to easily find a certain item through the thousands of scanned files. The new system is expected to run for more than 3 years.
Which of the following is the most cost-effective option in implementing the search feature in their system?
Use EFS to store and serve the scanned files. Install a 3rd-party search software on an Auto Scaling group of On-Demand EC2 Instances and an Elastic Load Balancer.
Use S3 for both storing and searching the scanned files by utilizing the native search capabilities of S3.
Set up a new S3 bucket with standard storage to store and serve the scanned files. Use CloudSearch for query processing and use Elastic Beanstalk to host the website across multiple availability zones.
(Correct)
Design the new system on a CloudFormation template. Use an EC2 instance running NGINX web server and an open source search application. Launch multiple standard EBS volumes with RAID configuration to store the scanned files with a search index.
Explanation
Amazon Simple Storage Service (S3) is an excellent object-based storage that is highly durable and scalable. However, its native search capability is not effective. Hence, you need to have a separate service to handle the search feature.
Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application. Amazon CloudSearch supports 34 languages and popular search features such as highlighting, autocomplete, and geospatial search.
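For illustration, a boto3 sketch of indexing and querying a scanned file's metadata against a CloudSearch domain might look like this; the domain endpoints, document ID, and fields are placeholders, not values from the scenario.

```python
import json
import boto3

# The document and search endpoints come from the CloudSearch domain
# (placeholder endpoints below).
doc_client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://doc-relief-files-xxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)
search_client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-relief-files-xxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

# Index metadata about a scanned file stored in S3.
batch = [{
    "type": "add",
    "id": "scan-000123",
    "fields": {
        "title": "Water purification supplies manifest",
        "s3_key": "scans/2023/scan-000123.pdf",
    },
}]
doc_client.upload_documents(
    documents=json.dumps(batch).encode("utf-8"),
    contentType="application/json",
)

# Query the index from the website's search feature.
results = search_client.search(query="water purification")
print(results["hits"]["found"])
```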
The option that says: Set up a new S3 bucket with standard storage to store and serve the scanned files. Use CloudSearch for query processing and use Elastic Beanstalk to host the website across multiple availability zones is correct because it uses S3 to store the images, which is a durable and scalable solution. It also uses CloudSearch for query processing, and with a multi-AZ implementation, it achieves high availability.
The option that says: Use EFS to store and serve the scanned files. Install a 3rd-party search software on an Auto Scaling group of On-Demand EC2 Instances and an Elastic Load Balancer is incorrect. It is stated in the scenario that the new system is expected to run for more than 3 years which means that using Reserved EC2 instances would be a more cost-effective choice than using On-Demand instances. In addition, purchasing and installing a 3rd-party search software might be more expensive than just using Amazon CloudSearch.
The option that says: Design the new system on a CloudFormation template. Use an EC2 instance running NGINX web server and an open source search application. Launch multiple standard EBS volumes with RAID configuration to store the scanned files with a search index is incorrect because a system composed of RAID configuration of EBS volumes is not a durable and scalable solution compared to S3.
The option that says: Use S3 for both storing and searching the scanned files by utilizing the native search capabilities of S3 is incorrect as the native search capability of S3 is not effective. It is better to use CloudSearch or another service that provides search functionality.
Question 6: Skipped
An e-commerce company is running a three-tier application on AWS. The application includes a web tier as frontend, an application tier as backend, and the database tier that stores the transactions and users' data. The database is currently hosted on an extra-large instance with 128 GB of memory. For the company’s business continuity and disaster recovery plan, the Solutions Architect must ensure a Recovery Time Objective (RTO) of 5 minutes and a Recovery Point Objective (RPO) of 1 hour [-> Warm Standby] on the backup site in the event that the application goes down. There is also a requirement for the backup site to be at least 250 miles away from the primary site.
Which of the following solutions must the Solutions Architect implement to meet the company’s disaster recovery requirements while keeping the cost at a minimum [-> Not active/active] ?
Use a pilot light strategy for the backup region. Configure the primary database to replicate data to a large standby instance in the backup region. In case of a disaster, vertically resize the database instance to meet the full demand. Create an AWS CloudFormation template to quickly provision the same web servers, application servers, and load balancers on the backup region. Update the Amazon Route 53 records to point to the backup region.
On the backup region, create a scaled-down version of the fully functional environment with one EC2 instance of the web server and application server in their own Auto Scaling groups behind Application Load Balancers. Create a standby database instance that replicates data from the primary database. In case of disaster, scale the instances to meet the demand and update the Amazon Route 53 record to point to the backup region.
(Correct)
Create a frequently scheduled backup of the application and database that will be stored on an Amazon S3 bucket. Configure Amazon S3 Cross-Region Replication (CRR) on the bucket to copy the backups to another region. In case of disaster, use an AWS CloudFormation template to quickly replicate the same resources to the backup region and restore the data from the S3 bucket. [Too slow, not meeting 5min RTO]
Use a multi-region strategy for the backup region to comply with the tight RTO and RPO requirements. Create a fully functional web, application, and database tier on the backup region with the same capacity [X] as the primary region. Set the database on the backup region on standby mode. In case of disaster, update the Amazon Route 53 record to point to the backup region.
Explanation
Requirements: RTO <= 5m (>= Warm Standby); RPO <= 1hr (>= Pilot Light) -> Warm Standby
Backup and Restore (RPO: hours; RTO: 24 hours or less): Data and applications are backed up using point-in-time backups into the Disaster Recovery (DR) Region. This data can be restored when necessary to recover from a disaster.
Pilot Light (RPO: minutes; RTO: hours): Data is replicated from one region to another, and a copy of the core workload infrastructure is provisioned. Essential resources for data replication and backup, like databases and object storage, are always on. Other elements, such as application servers, are off until needed.
Warm Standby (RPO: seconds; RTO: minutes): A scaled-down but fully functional version of the workload is always running in the DR Region. Business-critical systems are fully duplicated and always on, but with a scaled-down fleet. The system can be scaled up quickly for recovery.
Multi-region Active-Active (RPO: near zero; RTO: potentially zero): The workload is deployed to, and actively serving traffic from, multiple AWS Regions. This strategy involves synchronizing data across regions, allowing for seamless failover and recovery with minimal to no data loss and downtime.
Having backups and redundant workload components in place is the start of your DR strategy. RTO and RPO are your objectives for the restoration of your workload. Set these based on business needs.
Recovery Time Objective (RTO) is defined by the organization. RTO is the maximum acceptable delay between the interruption of service and restoration of service. This determines what is considered an acceptable time window when service is unavailable.
Recovery Point Objective (RPO) is defined by the organization. RPO is the maximum acceptable amount of time since the last data recovery point. This determines what is considered an acceptable loss of data between the last recovery point and the interruption of service.
When architecting a multi-region disaster recovery strategy for your workload, you should choose one of the following multi-region strategies. They are listed in increasing order of complexity, and decreasing order of RTO and RPO. DR Region refers to an AWS Region other than the one primarily used for your workload (or any AWS Region if your workload is on-premises).
- Backup and restore (RPO in hours, RTO in 24 hours or less): Back up your data and applications using point-in-time backups into the DR Region. Restore this data when necessary to recover from a disaster.
- Pilot light (RPO in minutes, RTO in hours): Replicate your data from one region to another and provision a copy of your core workload infrastructure. Resources required to support data replication and backup such as databases and object storage are always on. Other elements such as application servers are loaded with application code and configurations, but are switched off and are only used during testing or when Disaster Recovery failover is invoked.
- Warm standby (RPO in seconds, RTO in minutes): Maintain a scaled-down but fully functional version of your workload always running in the DR Region. Business-critical systems are fully duplicated and are always on, but with a scaled down fleet. When the time comes for recovery, the system is scaled up quickly to handle the production load.
- Multi-region (multi-site) active-active (RPO near zero, RTO potentially zero): Your workload is deployed to, and actively serving traffic from, multiple AWS Regions. This strategy requires you to synchronize data across Regions.
The difference between Pilot Light and Warm Standby can sometimes be difficult to understand. Both include an environment in your DR Region with copies of your primary region assets. The distinction is that Pilot Light cannot process requests without additional action taken first, while Warm Standby can handle traffic (at reduced capacity levels) immediately. Pilot Light requires you to turn on servers, possibly deploy additional (non-core) infrastructure, and then scale up. Warm Standby only requires you to scale up your resources since all the necessary components are already deployed and running. You can choose between these two disaster recovery strategies based on your RTO and RPO needs.
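As a rough sketch of what the warm standby failover could look like in code, the recovery runbook boils down to scaling up, making the standby database writable, and repointing DNS. The ASG names, DB identifier, hosted zone, and DNS records below are all hypothetical, and promoting a cross-Region read replica is just one common way to implement the standby database described in the correct answer.

```python
import boto3

# Failover runbook sketch for the warm standby Region (us-west-2 assumed).
autoscaling = boto3.client("autoscaling", region_name="us-west-2")
rds = boto3.client("rds", region_name="us-west-2")
route53 = boto3.client("route53")

# 1. Scale the web and app tiers up from the scaled-down fleet.
for asg in ("web-asg-dr", "app-asg-dr"):  # hypothetical ASG names
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg, MinSize=2, DesiredCapacity=4, MaxSize=8
    )

# 2. Promote the cross-Region replica so the DR database accepts writes.
rds.promote_read_replica(DBInstanceIdentifier="orders-db-dr")

# 3. Point the application's DNS record at the DR load balancer.
route53.change_resource_record_sets(
    HostedZoneId="Z123456ABCDEFG",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "dr-alb-123.us-west-2.elb.amazonaws.com"}
                ],
            },
        }]
    },
)
```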
Therefore, the correct answer is: On the backup region, create a scaled-down version of the fully functional environment with one EC2 instance of the web server and application server in their own Auto Scaling groups behind Application Load Balancers. Create a standby database instance that replicates data from the primary database. In case of disaster, scale the instances to meet the demand and update the Amazon Route 53 record to point to the backup region.
The option that says: Create a frequently scheduled backup of the application and database that will be stored on an Amazon S3 bucket. Configure Amazon S3 Cross-Region Replication (CRR) on the bucket to copy the backups to another region. In case of disaster, use an AWS CloudFormation template to quickly replicate the same resources to the backup region and restore the data from the S3 bucket is incorrect. Basically, this is a backup and restore strategy that has RPO in hours and RTO in 24 hours or less. Even with using CloudFormation, provisioning resources and restoring the backups from an Amazon S3 bucket may take a long time. Take note that the required RTO is only 5 minutes.
The option that says: Use a pilot light strategy for the backup region. Configure the primary database to replicate data to a large standby instance in the backup region. In case of a disaster, vertically resize the database instance to meet the full demand. Create an AWS CloudFormation template to quickly provision the same web servers, application servers, and load balancers on the backup region. Update the Amazon Route 53 records to point to the backup region is incorrect. Although this reduces the recovery time because the database instance is already replicated, you may still miss the 5 minute RTO because provisioning the needed servers and load balancers may take several minutes.
The option that says: Use a multi-region strategy for the backup region to comply with the tight RTO and RPO requirements. Create a fully functional web, application, and database tier on the backup region with the same capacity as the primary region. Set the database on the backup region on standby mode. In case of disaster, update the Amazon Route 53 record to point to the backup region is incorrect. Although this meets the RTO and RPO requirements, keeping a multi-region strategy is very expensive. With the given cost-effectiveness requirement, using the warm standby strategy is the most suitable one to use in this scenario.
Question 9: Skipped
An enterprise software company has just recently started [-> CloudFormation] using AWS as their cloud infrastructure. They are building an enterprise proprietary issue tracking system which would be accessed by their customers worldwide. Hence, the CTO carefully instructed you to ensure that the architecture of the issue tracking system is both scalable and highly available to avoid any complaints from the clients. It is expected that the application will have a steady-state usage and the database would be used for online transaction processing (OLTP) [-> ?]. Which of the following would be the best architecture setup to satisfy the above requirement?
Use multiple On-Demand EC2 instances to host the application and a highly scalable DynamoDB for the database. Use ElastiCache for in-memory data caching for your database to improve performance.
Launch an Auto Scaling group of Spot EC2 instances with an ELB in front to handle the load balancing. Leverage on CloudFront in distributing your static content and a RDS instance with Read Replicas.
Use a CloudFormation template to launch an Auto Scaling group of EC2 instances across multiple Availability Zones which are all connected via an ELB to handle the load balancing. Leverage on CloudFront in distributing your static content and a RDS instance with Multi-AZ deployments configuration.(Correct)
Use a Dedicated EC2 instance as the application server and Redshift as a petabyte-scale data warehouse service. Use ElastiCache for in-memory data caching for your database to improve performance.
Explanation
It is recommended to use a CloudFormation template to launch your architecture in AWS. An Auto Scaling group of EC2 instances across multiple Availability Zones with an ELB in front is a highly available and scalable architecture. In addition, leveraging CloudFront to distribute your static content improves the load times of the system. Finally, an RDS instance with a Multi-AZ deployment configuration ensures the availability of your database in case one instance goes down.
Therefore, the correct answer is: Use a CloudFormation template to launch an Auto Scaling group of EC2 instances across multiple Availability Zones which are all connected via an ELB to handle the load balancing. Leverage on CloudFront in distributing your static content and a RDS instance with Multi-AZ deployments configuration. This offers high availability and scalability for the application.
The option that says: Use a Dedicated EC2 instance as the application server and Redshift as a petabyte-scale data warehouse service. Use ElastiCache for in-memory data caching for your database to improve performance is incorrect. Using Dedicated EC2 instances without Auto Scaling is neither a scalable nor a highly available architecture. In addition, Redshift is not suitable for OLTP workloads but rather for OLAP.
The option that says: Launch an Auto Scaling group of Spot EC2 instances [X] with an ELB in front to handle the load balancing. Leverage on CloudFront in distributing your static content and an RDS instance with Read Replicas is incorrect. Spot EC2 instances are not suitable for steady-state usage, and Read Replicas only provide limited availability compared to Multi-AZ deployments.
The option that says: Use multiple On-Demand EC2 instances to host the application and a highly scalable DynamoDB for the database. Use ElastiCache for in-memory data caching for your database to improve performance is incorrect. This does not use Auto Scaling and is not deployed across multiple Availability Zones.
Question 18: Skipped
A popular news website that uses an Oracle database is currently deployed in the company's on-premises network. Due to its growing number of readers, the company decided to move its infrastructure to AWS where they can further improve the performance of the website [-> Database Migration Service]. The company earns from the advertisements placed on the website so you were instructed to ensure that the website remains available in case of database server failures. [-> Multi AZ - Primary/Secondary (Standby) DB] Their team of content writers constantly upload new articles every day including the wee hours of the morning to cover breaking news. [-> no spot instances]
In this scenario, how can you implement a highly available architecture to meet the requirement?
Create an Oracle Real Application Clusters (RAC) in RDS which provides a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for the news website.
Create an Oracle database instance in RDS with Recovery Manager (RMAN) which performs backup and recovery tasks on your database and automates the administration of your backup strategies.
Create an Oracle database in RDS with Multi-AZ deployments.(Correct)
Create an Oracle database in RDS with Read Replicas.
Explanation
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operations without the need for manual administrative intervention.
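For reference, a Multi-AZ Oracle instance can be provisioned with a single boto3 call; the identifier, instance class, and credentials below are placeholders, and the key part is MultiAZ=True.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Provision an Oracle instance with a synchronous standby in another AZ.
# Identifier, class, storage, and credentials are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="news-oracle-db",
    Engine="oracle-se2",
    LicenseModel="license-included",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    MultiAZ=True,  # automatic failover to the standby on failure
)
```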
Therefore, the correct answer is: Create an Oracle database in RDS with Multi-AZ deployments. This ensures high availability even if the primary database instance goes down.
The option that says: Create an Oracle database in RDS with Read Replicas is incorrect because Read Replicas are read-only and do not provide automatic failover; the content writers won't be able to upload their articles in the event that the primary database goes down.
The following options are incorrect because Oracle RMAN and RAC are not supported in RDS:
- Create an Oracle database instance in RDS with Recovery Manager (RMAN) which performs backup and recovery tasks on your database and automates the administration of your backup strategies
- Create an Oracle Real Application Clusters (RAC) in RDS which provides a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for the news website.
Question 24: (review)
A leading commercial bank has multiple AWS accounts that are consolidated using AWS Organizations. They are building an online portal for foreclosed real estate properties that they own. The online portal is designed to use SSL for better security. The bank would like to implement a separation of responsibilities [-> IAM Policy / SCP / ? ] between the DevOps team and their cybersecurity team. The DevOps team is entitled to manage and log in to the EC2 instances while the cybersecurity team has exclusive access to the application's X.509 certificate, which contains the private key and is stored in AWS Certificate Manager (ACM).
Which of the following options would satisfy the company requirements?
Set up a Service Control Policy (SCP) that authorizes access to the certificate store only for the cybersecurity team and then add a configuration to terminate the SSL on the ELB.
Upload the X.509 certificate to an S3 bucket owned by the cybersecurity team and accessible only by the IAM role of the EC2 instances. Use the Systems Manager Session Manager as the HTTPS session manager for the application.
Configure an IAM policy that authorizes access to the certificate store only for the cybersecurity team and then add a configuration to terminate the SSL on the ELB.(Correct)
Use the AWS Config service to configure the EC2 instances to retrieve the X.509 certificate upon boot from a CloudHSM that is managed by the cybersecurity team.
Explanation
In this scenario, the best solution is to set the appropriate IAM policy to both the DevOps and cybersecurity teams and then add a configuration to terminate the SSL on the ELB.
Take note that you can either terminate the SSL on the ELB side or on the EC2 instance. If you choose the former, the X.509 certificate will only be present in the ELB and if you choose the latter, the X.509 certificate will be stored inside the EC2 instance.
Since we don't want the DevOps team to have access to the certificate, it is best to terminate the SSL on the ELB level rather than the EC2.
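A minimal sketch of the separation of duties, assuming hypothetical ARNs: an IAM policy scoped to the certificate that is attached only to the cybersecurity team, and an HTTPS listener on the load balancer that terminates SSL with that certificate.

```python
import json
import boto3

CERT_ARN = "arn:aws:acm:us-east-1:111122223333:certificate/abcd-1234"  # placeholder

# Identity policy attached only to the cybersecurity team: access to the
# certificate in ACM. The DevOps team's policies simply omit ACM actions.
cybersecurity_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["acm:DescribeCertificate", "acm:GetCertificate",
                   "acm:RenewCertificate", "acm:ExportCertificate"],
        "Resource": CERT_ARN,
    }],
}
print(json.dumps(cybersecurity_policy, indent=2))

# Terminate SSL at the load balancer so the private key never reaches the
# EC2 instances managed by the DevOps team (ARNs are placeholders).
elbv2 = boto3.client("elbv2")
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/portal/123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": CERT_ARN}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/portal-tg/456",
    }],
)
```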
Therefore, the correct answer is: Configure an IAM policy that authorizes access to the certificate store only for the cybersecurity team and then add a configuration to terminate the SSL on the ELB.
The option that says: Use the AWS Config service to configure the EC2 instances to retrieve the X.509 certificate upon boot from a CloudHSM that is managed by the cybersecurity team is incorrect. The AWS Config service simply enables you to assess, audit, and evaluate the configurations of your AWS resources. It does not grant any permission or access. In addition, CloudHSM is a managed hardware security module (HSM) in the AWS Cloud that handles encryption keys and not SSL certificates.
The option that says: Upload the X.509 certificate to an S3 bucket owned by the cybersecurity team and accessible only by the IAM role of the EC2 instances. Use the Systems Manager Session Manager as the HTTPS session manager for the application is incorrect. The Systems Manager Session Manager service simply provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys; it does not handle SSL connections. It is also a security risk to store X.509 certificates in an S3 bucket. They should be stored in AWS Certificate Manager.
The option that says: Set up a Service Control Policy (SCP) that authorizes access to the certificate store only for the cybersecurity team and then add a configuration to terminate the SSL on the ELB is incorrect. A service control policy (SCP) simply determines what services and actions can be delegated by administrators to the users and roles in the accounts that the SCP is applied to. It does not grant any permissions, unlike an IAM policy.
Question 32: Skipped
A company runs its internal tool on AWS. It is used for logistics and shipment tracking for the company’s warehouse. With the current system process, the application receives an order and it sends an email to the employees with the information needed for the package shipment. After the employees prepare the order and ship the package, they reply to the email so that the application can mark the order as shipped [-> Step Function] . The company wants to migrate to a serverless application model to stop relying on emails and minimize the operational overhead for the application. [API Gateway, Lambda, Step Functions (Packages of Lambdas; can mark order status), DynamoDB, SQS, SNS]
Which of the following options should the Solutions Architect implement to meet the company requirements?
Store the order information on an Amazon DynamoDB table. Create an AWS Step Functions workflow that will be triggered for every new order. Have the workflow mark the order as “in progress” and print the shipping label for the package. Once the package is scanned and leaves the warehouse, trigger an AWS Lambda function to mark the order as “shipped” and complete the Step Functions workflow.
(Correct)
Store order information on an Amazon SQS queue when a new order is created. Schedule an AWS Lambda function to poll the queue every 5 minutes and start processing if any orders are found. Use another Lambda function to print the shipping labels for the package. Once the package is scanned and leaves the warehouse, use Amazon Pinpoint to send a notification to customers regarding the status of their order.
Create AWS Batch jobs corresponding to different tasks needed to ship a package. Write an AWS Lambda function with AWS Batch as the trigger to create and print the shipping label for the package. Once the package is scanned and leaves the warehouse, trigger another Lambda function to move the AWS Batch job to the next stage of the shipping process.
Use an Amazon EFS volume to store the new order information. Configure an instance [X - not serverless] to pull the order information from the EFS share and print the shipping label for the package. Once the package is scanned and leaves the warehouse, remove the order information on the EFS share by using an Amazon API Gateway call to the instances.
Explanation
AWS Step Functions is a serverless orchestration service that lets you combine AWS Lambda functions and other AWS services to build business-critical applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. As your applications execute, Step Functions maintains the application state, tracking exactly which workflow step your application is in and storing an event log of data that is passed between application components.
Step Functions is based on state machines and tasks. A state machine is a workflow. A task is a state in a workflow that represents a single unit of work that another AWS service performs. Each step in a workflow is a state.
With Step Functions' built-in controls, you examine the state of each step in your workflow to make sure that your application runs in order and as expected. Depending on your use case, you can have Step Functions call AWS services, such as AWS Lambda, to perform tasks.
Step Functions is ideal for coordinating session-based applications. You can use Step Functions to coordinate all of the steps of a checkout process on an e-commerce site, for example. Step Functions can read and write from Amazon DynamoDB as needed to manage inventory records.
You can use Step Functions to make decisions about how best to process data, for example, to do post-processing of groups of satellite images to determine the amount of trees per acre of land. Depending on the size and resolution of the image, this Step Functions workflow will determine whether to use AWS Lambda or AWS Fargate to complete the post-processing of each file in order to optimize runtime and costs.
Using AWS Step Functions, you define your workflows as state machines, which transform complex code into easy-to-understand statements and diagrams. Building apps and confirming that they are implementing your desired functionality is quicker and easier.
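As an illustrative sketch (table, function, and role names are hypothetical), the shipping workflow could be expressed as a state machine that updates DynamoDB, prints the label, and then pauses on a task token until the warehouse scan reports the package as shipped.

```python
import json
import boto3

# Minimal Amazon States Language sketch of the order workflow. The "wait for
# scan" step pauses on a task token that a warehouse-side process returns via
# SendTaskSuccess once the package is scanned out of the warehouse.
definition = {
    "StartAt": "MarkInProgress",
    "States": {
        "MarkInProgress": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:updateItem",
            "Parameters": {
                "TableName": "orders",
                "Key": {"order_id": {"S.$": "$.order_id"}},
                "UpdateExpression": "SET order_status = :s",
                "ExpressionAttributeValues": {":s": {"S": "in progress"}},
            },
            "ResultPath": None,
            "Next": "PrintLabelAndWaitForScan",
        },
        "PrintLabelAndWaitForScan": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "print-shipping-label",  # hypothetical Lambda
                "Payload": {"order_id.$": "$.order_id",
                            "task_token.$": "$$.Task.Token"},
            },
            "ResultPath": None,
            "Next": "MarkShipped",
        },
        "MarkShipped": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:updateItem",
            "Parameters": {
                "TableName": "orders",
                "Key": {"order_id": {"S.$": "$.order_id"}},
                "UpdateExpression": "SET order_status = :s",
                "ExpressionAttributeValues": {":s": {"S": "shipped"}},
            },
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="order-shipping-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/order-workflow-role",  # placeholder
)
```

When the scanner records the outgoing package, the warehouse-side Lambda function would call SendTaskSuccess with the stored task token so the workflow resumes at MarkShipped.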
Therefore, the correct answer is: Store the order information on an Amazon DynamoDB table. Create an AWS Step Functions workflow that will be triggered for every new order. Have the workflow mark the order as “in progress” and print the shipping label for the package. Once the package is scanned and leaves the warehouse, trigger an AWS Lambda function to mark the order as “shipped” and complete the Step Functions workflow. Step Functions is suitable for orchestrating this workflow: it updates a DynamoDB table with the order progress as well as triggers AWS Lambda functions for the various actions.
The option that says: Create AWS Batch jobs corresponding to different tasks needed to ship a package. Write an AWS Lambda function with AWS Batch as the trigger to create and print the shipping label for the package. Once the package is scanned and leaves the warehouse, trigger another Lambda function to move the AWS Batch job to the next stage of the shipping process is incorrect. AWS Batch is not designed to orchestrate a workflow. AWS Batch is used to run batch jobs such as transaction reporting or analysis reporting, which usually run as stand-alone jobs.
The option that says: Store order information on an Amazon SQS queue when a new order is created. Schedule an AWS Lambda function to poll the queue every 5 minutes and start processing if any orders are found. Use another Lambda function to print the shipping labels for the package. Once the package is scanned and leaves the warehouse, use Amazon Pinpoint to send a notification to customers regarding the status of their order is incorrect. This may be possible; however, polling the SQS every 5 minutes is not efficient compared to writing to the DynamoDB table. If there are no orders for the past 5 minutes, the Lambda function will still run.
The option that says: Use an Amazon EFS volume to store the new order information. Configure an instance to pull the order information from the EFS share and print the shipping label for the package. Once the package is scanned and leaves the warehouse, remove the order information on the EFS share by using an Amazon API Gateway call to the instances is incorrect. Using an EFS share will require EC2 instances and will increase the operational overhead needed to manage the infrastructure. This is not a serverless solution.
Question 33: Skipped
A startup is running its customer support application in the AWS cloud. The application is hosted on a set of Auto Scaling Amazon EC2 on-demand instances placed behind an Elastic Load Balancer. The web application runs on large EC2 instance sizes to properly process the high volume of data that are stored in DynamoDB. New application version deployment is done once a week and requires an automated way of creating and testing a new Amazon Machine Image for the application servers. To meet the growing number of support tickets being sent, it was decided that a new video chat feature should be implemented as part of the customer support app, but should be hosted on a different set of servers to allow users to chat with a representative. The startup decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify the management of the app and reduce time-consuming deployment cycles.
What is the most cost-efficient and flexible way to integrate the new video chat module in AWS?
Create two AWS OpsWorks stacks, each with two layers, and two custom recipes.
Create an AWS OpsWorks stack, with one layer, and one custom recipe.
Create two AWS OpsWorks stacks, each with two layers, and one custom recipe.
Create an AWS OpsWorks stack, with two layers, and one custom recipe.(Correct)
Explanation
AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-premises. With OpsWorks Stacks, you can:
- model your application as a stack containing different layers, such as load balancing, database, and application server;
- deploy and configure Amazon EC2 instances in each layer or connect other resources such as Amazon RDS databases;
- set automatic scaling for your servers based on preset schedules or in response to changing traffic levels; and
- use lifecycle hooks to orchestrate changes as your environment scales.
In OpsWorks, you provision a stack and layers.
Stack: the top-level AWS OpsWorks Stacks entity. It represents a set of instances that you want to manage collectively, typically because they have a common purpose such as serving PHP applications. In addition to serving as a container, a stack handles tasks that apply to the group of instances as a whole, such as managing applications and cookbooks. Every stack contains one or more layers.
Layer: a stack component, such as a load balancer or a set of application servers. Each layer in a stack must have at least one instance and can optionally have multiple instances. You must create and configure an appropriate layer and add the instance to the layer.
In the scenario, it tells us that the video chat feature should be implemented as part of the customer support application, but should be hosted on a different set of servers. This means that the chat feature is part of the stack, but should be in a different layer since it will be using a different set of servers. Hence, we have to use one stack and two layers to meet the requirement.
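To make the one-stack, two-layer design concrete, here is a boto3 sketch that creates the stack and its two custom layers sharing one deploy recipe; the IAM role ARNs, cookbook URL, and recipe name are placeholders.

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# One stack for the customer support application (role/profile ARNs and the
# cookbook repository URL are placeholders).
stack = opsworks.create_stack(
    Name="customer-support",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::111122223333:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::111122223333:instance-profile/aws-opsworks-ec2-role",
    UseCustomCookbooks=True,
    CustomCookbooksSource={"Type": "git", "Url": "https://example.com/cookbooks.git"},
)

# Two layers: one for the existing web app servers, one for the new video chat
# servers. Both reuse the same custom deploy recipe.
for name, shortname in (("support-web", "web"), ("video-chat", "chat")):
    opsworks.create_layer(
        StackId=stack["StackId"],
        Type="custom",
        Name=name,
        Shortname=shortname,
        CustomRecipes={"Deploy": ["support_app::deploy"]},  # hypothetical recipe
    )
```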
Therefore, the correct answer is: Create an AWS OpsWorks stack with two layers and one custom recipe. Only one stack would be sufficient and two layers would be required for handling separate requirements. One custom recipe for DynamoDB would be required.
These options are incorrect because two OpsWorks stacks are unnecessary since the new video chat feature is still a part of the customer support website but just deployed on a different set of servers. Hence, this should be deployed on a different layer and not on an entirely different stack.
- Create two AWS OpsWorks stacks, each with two layers, and two custom recipes
- Create two AWS OpsWorks stacks, each with two layers, and one custom recipe
The option that says: Create an AWS OpsWorks stack with one layer and one custom recipe is incorrect. It would be a better solution to create two separate layers: one for the customer support web servers and one for the video chat feature.
Question 37: Skipped (review)
An international insurance company has clients all across the globe. The company has financial files that are stored in an Amazon S3 bucket which is behind CloudFront. At present, their clients can access their data by directly using an S3 URL or using their CloudFront distribution [-> Presigned URL; OAC]. The company wants to deliver their content to a specific client in California and they need to make sure that only that client can access the data.
Which of the following options is a valid solution that meets the above requirements? (Select TWO.)
'A valid solution that meets the above requirements' - meaning it's not a combination of two options; each option must be independently valid.
Create a new S3 bucket in US West (N. California) region and upload the files. Use S3 pre-signed URLs to ensure that only their client can access the files. Remove permission to use Amazon S3 URLs to read the files for anyone else.
(Correct)
Use CloudFront signed URLs to ensure that only their client can access the files. Create an origin access control (OAC) and give it permission to read the files in the bucket. Remove permission to use Amazon S3 URLs to read the files for anyone else.
(Correct)
Use CloudFront signed URLs to ensure that only their client can access the files. Enable field-level encryption in your CloudFront distribution.
Create a new S3 bucket in US West (N. California) region and upload the files. Set up an origin access control (OAC) and give it permission to read the files in the bucket. Enable HTTPS in your CloudFront distribution.
Use CloudFront Signed Cookies to ensure that only their client can access the files. Enable HTTPS in your CloudFront distribution.
Explanation
Many companies that distribute content over the internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee. To securely serve this private content by using CloudFront, you can do the following:
- Require that your users access your private content by using special CloudFront signed URLs or signed cookies.
- Require that your users access your Amazon S3 content by using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn't necessary, but it is recommended to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.
All objects and buckets are private by default. Pre-signed URLs are useful if you want your user or customer to be able to download or upload a specific object in your bucket without requiring them to have AWS security credentials or permissions. You can generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET. If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a pre-signed object URL without writing any code. Anyone who receives a valid pre-signed URL can then programmatically access the object.
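For example, generating a time-limited pre-signed URL for a single object takes one boto3 call; the bucket and key below are placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-1")

# Generate a time-limited link to a single financial file for the client.
# Anyone without the URL, or after it expires, is denied because the bucket
# itself stays private.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "client-financial-files", "Key": "reports/2023-q4.pdf"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```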
Therefore, the correct answers are: Create a new S3 bucket in US West (N. California) region and upload the files. Use S3 pre-signed URLs to ensure that only their client can access the files. Remove permission to use Amazon S3 URLs to read the files for anyone else and Use CloudFront signed URLs to ensure that only their client can access the files. Create an origin access control (OAC) and give it permission to read the files in the bucket. Remove permission to use Amazon S3 URLs to read the files for anyone else. Using pre-signed URLs to the S3 bucket prevents other users from accessing private data that is intended for a specific client. The combination of CloudFront signed URLs and OAC is also a valid, independent solution that meets the requirement.
The option that says: Use CloudFront Signed Cookies to ensure that only their client can access the files. Enable HTTPS in your CloudFront distribution is incorrect. The signed cookies feature is primarily used if you want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of website. In addition, this solution is not complete since the users can bypass the restrictions by simply using the direct S3 URLs.
The option that says: Use CloudFront signed URLs to ensure that only their client can access the files. Enable field-level encryption in your CloudFront distribution is incorrect. Although this solution is valid, the users can still bypass the restrictions in CloudFront by simply connecting to the direct S3 URLs.
The option that says: Create a new S3 bucket in US West (N. California) region and upload the files. Set up an origin access control (OAC) and give it permission to read the files in the bucket. Enable HTTPS in your CloudFront distribution is incorrect. An origin access control (OAC) only ensures that the files can be accessed through CloudFront and not through a direct S3 URL; it does not restrict which client can access them. This could be a valid solution if it also mentioned the use of signed URLs or signed cookies.
Question 39: Skipped
A software development company implements cloud best practices on its AWS infrastructure. The solutions architect has been instructed to manage its AWS cloud infrastructure as code to automate its software build, test, and deploy process. The company would like to have the ability to easily deploy exact copies of different versions of its cloud infrastructure, stage changes into different environments, revert to previous versions, and identify the specific versions running in the VPC. [-> CloudFormation] Plus, all new public-facing applications should also have a global content delivery network (CDN) service. -> [CloudFront]
Which of the following options is the recommended action to meet the company requirement?
Use CloudWatch as the CDN and CloudFormation to manage the cloud architecture.
Use AWS CloudFormation to manage the cloud architecture and CloudFront as the CDN.(Correct)
Use CloudWatch as the CDN and Elastic Beanstalk to deploy and manage the cloud architecture.
Use CloudFront as the CDN and Elastic Beanstalk to deploy and manage the cloud architecture.
Explanation
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This file serves as the single source of truth for your cloud environment.
Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
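As a rough sketch of how infrastructure versions could be deployed programmatically, the following assumes a hypothetical template file (webapp-v2.yaml) that models the stack, including an AWS::CloudFront::Distribution resource for the CDN requirement; it is not the company's actual template.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Hypothetical template file that models the web stack for one version.
with open("webapp-v2.yaml") as f:
    template_body = f.read()

# Each template version can be deployed as its own stack (or used to update an
# existing one), which is how exact copies of a given infrastructure version
# can be stood up, staged into another environment, or rolled back.
cfn.create_stack(
    StackName="webapp-v2",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cfn.get_waiter("stack_create_complete").wait(StackName="webapp-v2")
```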
Therefore, the correct answer is: Use AWS CloudFormation to manage the cloud architecture and CloudFront as the CDN.
The option that says: Use CloudWatch as the CDN and Elastic Beanstalk to deploy and manage the cloud architecture is incorrect. CloudWatch is a monitoring and observability service, not a CDN service.
The option that says: Use CloudWatch as the CDN and CloudFormation to manage the cloud architecture is incorrect. As with the case above, CloudWatch is not a CDN service.
The option that says: Use CloudFront as the CDN and Elastic Beanstalk to deploy and manage the cloud architecture is incorrect. Even though Elastic Beanstalk enables you to quickly deploy and manage applications in the AWS Cloud, it does not let you manage the cloud infrastructure as code. The best option is to use CloudFormation.
References:
Check out this AWS CloudFormation Cheat Sheet:
Elastic Beanstalk vs CloudFormation vs OpsWorks vs CodeDeploy:
Comparison of AWS Services Cheat Sheets:
Question 40: Skipped
A mobile game startup is building an immersive augmented reality (AR), massively multiplayer, first-person online shooter game. All of their servers, databases, and resources are hosted in their cloud infrastructure in AWS. Upon testing the new game, it was noted that the loading times of the game assets and data are quite sluggish, including their static content [-> S3 + CloudFront ?]. You recommended adding caching [-> Redis] to the application to improve load times.
In this scenario, which of the following cache services can you recommend for their gaming applications?
Use CloudFront to distribute their static content and an Apache Ignite ElastiCache [Not a service provided by AWS] as an in-memory data store.
Use CloudFront to distribute their static content and DynamoDB as an in-memory data store. [x - DynamoDB is not in-memory DS]
Use ElastiCache to distribute their static content and CloudFront as an in-memory data store. [x - reversed]
Use CloudFront to distribute their static content and ElastiCache as an in-memory data store.(Correct)
Explanation
Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.
In this scenario, the best option is to use a combination of Amazon CloudFront for caching static content and Amazon ElastiCache as the in-memory data store.
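A minimal cache-aside sketch against an ElastiCache for Redis endpoint, assuming the redis-py client; the endpoint, key names, and the load_profile_from_database stand-in are hypothetical.

```python
import json
import redis  # redis-py client; works against an ElastiCache for Redis endpoint

# Hypothetical ElastiCache endpoint; replace with your cluster's address.
cache = redis.Redis(host="my-game-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

def load_profile_from_database(player_id: str) -> dict:
    # Stand-in for the real database lookup (hypothetical).
    return {"player_id": player_id, "level": 1}

def get_player_profile(player_id: str) -> dict:
    """Cache-aside read: try Redis first, fall back to the database on a miss."""
    key = f"player:{player_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    profile = load_profile_from_database(player_id)
    cache.setex(key, 300, json.dumps(profile))  # cache the result for 5 minutes
    return profile
```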
Therefore the correct answer is: Use CloudFront to distribute their static content and ElastiCache as an in-memory data store.
The option that says: Use ElastiCache to distribute their static content and CloudFront as an in-memory data store is incorrect. ElastiCache is used for caching database queries and is not suitable for distributing static content.
The option that says: Use CloudFront to distribute their static content and an Apache Ignite ElastiCache as an in-memory data store is incorrect. Although Apache Ignite is an in-memory data store, only Redis and Memcached are supported in ElastiCache.
The option that says: Use CloudFront to distribute their static content and DynamoDB as an in-memory data store is incorrect. DynamoDB is a NoSQL database, not an in-memory data store, so it is not suitable for caching in this scenario.
References:
Check out these Amazon Elasticache and Amazon CloudFront Cheat Sheets:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 41: Skipped
A company is migrating a legacy Oracle database [-> Database Migration Service] from their on-premises data center to AWS. It will be deployed in an existing EBS-backed EC2 instance with multiple EBS volumes attached. For the migration, a new volume must be created for the Oracle database and then attached to the instance. This will be used by a financial web application and will primarily store historical financial data that are infrequently accessed. [-> Cold HDD EBS]
Which of the following is the MOST cost-effective and throughput-oriented [-> Cold HDD - defines performance in terms of throughput rather than IOPS] solution that the solutions architect should implement?
Migrate the database using the AWS Application Migration Service and use a Throughput Optimized (st1) EBS volume.
Migrate the database using the AWS Database Migration Service and use a Provisioned IOPS (io1) EBS volume.
Migrate the database using the AWS Database Migration Service and use a Cold HDD (sc1) EBS volume.
(Correct)
Migrate the database using the AWS Application Migration Service and use a General Purpose (gp2) EBS Volume.
Explanation
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases. AWS Database Migration Service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora. With AWS Database Migration Service, you can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3.
Cold HDD volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than Throughput Optimized HDD, Cold HDD is ideal for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, Cold HDD provides inexpensive block storage. Take note that bootable Cold HDD volumes are not supported.
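A short boto3 sketch of provisioning an sc1 volume and attaching it to the existing instance; the size, Availability Zone, instance ID, and device name are assumptions for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 500 GiB Cold HDD (sc1) volume for the infrequently accessed
# historical data; the Availability Zone must match the EC2 instance's AZ.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,
    VolumeType="sc1",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it to the existing EBS-backed instance (instance ID is hypothetical).
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```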
Cold HDD provides the lowest cost HDD volume and is designed for less frequently accessed workloads. Therefore, the correct answer is: Migrate the database using the AWS Database Migration Service and use a Cold HDD (sc1) EBS volume.
The option that says: Migrate the database using the AWS Database Migration Service and use a Provisioned IOPS (io1) EBS volume is incorrect because it costs more than the Cold HDD and thus, not cost-effective for this scenario. It provides the highest performance SSD volume for mission-critical low-latency or high-throughput workloads, which is not needed in the scenario.
The option that says: Migrate the database using the AWS Application Migration Service and use a Throughput Optimized (st1) EBS volume is incorrect. Although it is cheaper than SSD, it is primarily designed and used for frequently accessed throughput-intensive workloads. Cold HDD perfectly fits the description as it is used for their infrequently accessed data and provides the lowest cost, unlike Throughput Optimized HDD. In addition, the AWS Application Migration Service (MGN) is primarily used to migrate on-premises virtual machines from VMware vSphere, Windows Hyper-V, or Microsoft Azure only. You have to use the AWS Database Migration Service instead in this situation.
The option that says: Migrate the database using the AWS Application Migration Service and use a General Purpose (gp2) EBS Volume is incorrect because a General purpose SSD volume costs more and it is mainly used for a wide variety of workloads. It is recommended to be used as system boot volumes, virtual desktops, low-latency interactive apps, and many more. Moreover, you have to use the AWS Database Migration Service instead in this scenario and not AWS Application Migration Service (MGN).
References:
Check out this Amazon EBS Cheat Sheet:
Question 42: Skipped
A company wants to create a new service that will complement the launch of its new product. The site must be highly available and scalable to handle the unpredictable workload, and should also be stateless and REST compliant [API Gateway]. The solution needs to have multiple persistent storage layers for service object metadata [-> DynamoDB] and durable storage for static content [S3]. All requests to the service should be authenticated and securely processed [Cognito]. The company also wants to keep the costs at a minimum.
Which of the following is the recommended solution that will meet the company requirements?
Package the REST service on a Docker-based container and run it using the AWS Fargate service. Create a cross-zone Application Load Balancer in front of the Fargate service. Control user access to the API by using Amazon Cognito user pools. Store service object metadata in an Amazon DynamoDB table with Auto Scaling enabled. Create an encrypted Amazon S3 bucket to store the static content. Generate presigned URLs when referencing objects stored on the S3 bucket.
Package the REST service on a Docker-based container and run it using the AWS Fargate service. Create an Application Load Balancer in front of the Fargate service. Create a custom authenticator that will control access to the API. Store service object metadata in an Amazon DynamoDB table with Auto Scaling enabled. Create an Amazon S3 bucket to store the static content and enable secure-signed requests for the objects. Proxy the data through the REST service.
Configure Amazon API Gateway with the required resources and methods. Create unique Lambda functions to process each resource and configure the API Gateway methods with proxy integration to the respective Lambda functions. Control user access to the API by using Amazon Cognito user pools. Store service object metadata in an Amazon DynamoDB table with Auto Scaling enabled. Create a secured Amazon S3 bucket to store the static content. Generate presigned URLs when referencing objects stored on the S3 bucket.
(Correct)
Configure Amazon API Gateway with the required resources and methods. Create unique Lambda functions to process each resource and configure the API Gateway methods with proxy integration to the respective Lambda functions. Control user access to the API by using the API Gateway custom authorizer. Store service object metadata in an Amazon ElastiCache Multi-AZ cluster. Create a secured Amazon S3 bucket to store the static content. Generate presigned URLs when referencing objects stored on the S3 bucket.
Explanation
Amazon API Gateway Lambda proxy integration is a simple, powerful, and nimble mechanism to build an API with a setup of a single API method. The Lambda proxy integration allows the client to call a single Lambda function in the backend. The function accesses many resources or features of other AWS services, including calling other Lambda functions.
In Lambda proxy integration, when a client submits an API request, API Gateway passes to the integrated Lambda function the raw request as-is, except that the order of the request parameters is not preserved. This request data includes the request headers, query string parameters, URL path variables, payload, and API configuration data. The configuration data can include current deployment stage name, stage variables, user identity, or authorization context (if any).
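A minimal example of what a Lambda proxy-integration handler could look like; the path parameter name and response payload are hypothetical.

```python
import json

def lambda_handler(event, context):
    """Minimal handler for an API Gateway Lambda proxy integration.

    API Gateway passes the raw request (headers, query string, path
    parameters, body) in `event`; the function must return a proxy-format
    response with statusCode, headers, and a string body.
    """
    item_id = (event.get("pathParameters") or {}).get("id", "unknown")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": item_id, "message": "ok"}),
    }
```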
A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Your users can also sign in through social identity providers like Google, Facebook, Amazon, or Apple, and through SAML identity providers. Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through a Software Development Kit (SDK).
User pools provide:
- Sign-up and sign-in services.
- A built-in, customizable web UI to sign in users.
- Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple, as well as sign-in with SAML identity providers from your user pool.
- User directory management and user profiles.
- Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.
- Customized workflows and user migration through AWS Lambda triggers.
Amazon S3 is a simple key-based object store whose scalability and low cost make it ideal for storing large datasets or objects. When finding objects based on attributes or other metadata, a common solution is to build an external index such as a DynamoDB Table that maps queryable attributes to the S3 object key. DynamoDB is a NoSQL data store that can be used for storing the index itself.
One way of securing objects that are shared on S3 buckets is by using presigned URLs. When you create a presigned URL for your object, you must provide your security credentials, a bucket name, an object key, the HTTP method (GET to download the object), and an expiration date and time. The presigned URLs are valid only for the specified duration.
Anyone who receives the presigned URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a presigned URL.
With the above solutions, the correct answer is: Configure Amazon API Gateway with the required resources and methods. Create unique Lambda functions to process each resource and configure the API Gateway methods with proxy integration to the respective Lambda functions. Control user access to the API by using Amazon Cognito user pools. Store service object metadata in an Amazon DynamoDB table with Auto Scaling enabled. Create a secured Amazon S3 bucket to store the static content. Generate presigned URLs when referencing objects stored on the S3 bucket.
The option that says: Package the REST service on a Docker-based container and run it using the AWS Fargate service. Create an Application Load Balancer in front of the Fargate service. Create a custom authenticator that will control access to the API. Store service object metadata in an Amazon DynamoDB table with Auto Scaling enabled. Create an Amazon S3 bucket to store the static content and enable secure-signed requests for the objects. Proxy the data through the REST service is incorrect. Although running Docker-based containers on AWS Fargate is possible, this solution does not offer the lowest possible cost for the given scenario. AWS Lambda is suited for creating serverless/stateless APIs and costs less than AWS Fargate.
The option that says: Package the REST service on a Docker-based container and run it using the AWS Fargate service. Create a cross-zone Application Load Balancer in front of the Fargate service. Control user access to the API by using Amazon Cognito user pools. Store service object metadata in an Amazon DynamoDB table with Auto Scaling enabled. Create an encrypted Amazon S3 bucket to store the static content. Generate presigned URLs when referencing objects stored on the S3 bucket is incorrect. Running a Fargate cluster continuously is more expensive than running Lambda functions, which only run on demand.
The option that says: Configure Amazon API Gateway with the required resources and methods. Create unique Lambda functions to process each resource and configure the API Gateway methods with proxy integration to the respective Lambda functions. Control user access to the API by using the API Gateway custom authorizer. Store service object metadata in an Amazon ElastiCache Multi-AZ cluster. Create a secured Amazon S3 bucket to store the static content. Generate presigned URLs when referencing objects stored on the S3 bucket is incorrect because it is recommended to use Amazon Cognito user pools rather than an API Gateway custom authorizer for controlling user access; user pools are more robust and offer more features because they are designed specifically to handle user access. In addition, Amazon ElastiCache is an in-memory cache, not a persistent storage layer for the service object metadata, so DynamoDB is the better choice.
References:
Check out these AWS Comparison Cheat Sheets:
Question 43: Skipped (review)
An insurance company collects contributions from its clients and invests them in the stock market. Using the on-premises data center, the company ingests raw data feeds from the stock market, transforms it, and sends [ETL ?] it to the internal Apache Kafka cluster for processing. The management wants to send the cluster’s output to Amazon Web Services [On-Prem to AWS -> Direct Connect] by building a scalable and near-real-time solution [-> Kinesis Data Stream ] that will provide the stock market data to its web application. The application is a critical production component so the solution needs to have a consistent high-performance network.
Which of the following actions should the solutions architect implement to fulfill the requirements? (Select THREE.)
Write a Lambda function to process the Amazon Kinesis data stream and write a GraphQL API in AWS AppSync to invoke the function. Send the callback messages to connected clients by using the @connections command for the API.
To have consistent performance, request for an AWS Direct Connect connection from the on-premises data center to the AWS VPC.
(Correct)
Pull the messages from the on-premises Apache Kafka cluster by using a fleet of Amazon EC2 instances in an Auto Scaling Group. Send the data into an Amazon Kinesis Data Stream by using Amazon Kinesis Producer Library.
(Correct)
To have a consistent performance while being cost-effective, configure a Site-to-Site VPN from the on-premises data center to the AWS VPC.
Write a Lambda function to process the Amazon Kinesis data stream and create a WebSocket API in Amazon API Gateway to invoke the function. Send the callback messages to connected clients by using the @connections command for the API.
(Correct)
Fetch the messages from the on-premises Apache Kafka cluster by using a fleet of EC2 instances in an Auto Scaling Group. Send the data into an Amazon Kinesis Data Stream by using Amazon Kinesis Consumer Library.
Explanation
AWS Direct Connect makes it easy to establish a dedicated connection from an on-premises network to one or more VPCs in the same region. Using a private VIF on AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
A producer puts data records into Amazon Kinesis data streams. For example, a web server sending log data to a Kinesis data stream is a producer. A consumer processes the data records from a stream.
To put data into the stream, you must specify the name of the stream, a partition key, and the data blob to be added to the stream.
The partition key is used to determine which shard in the stream the data record is added to. All the data in the shard is sent to the same worker that is processing the shard. [Producer (Stream name, Partition key, data blob) -> put -> Worker (Consumer) ]
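As a simple illustration of the stream name / partition key / data blob triplet (using the plain boto3 PutRecord API rather than the Kinesis Producer Library named in the answer), the stream name and payload below are hypothetical.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Hypothetical stream name and payload. The partition key determines which
# shard receives the record; records with the same key land on the same shard.
record = {"symbol": "AMZN", "price": 181.42, "ts": "2024-01-15T14:30:00Z"}

kinesis.put_record(
    StreamName="stock-market-feed",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["symbol"],
)
```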
A WebSocket API in API Gateway is a collection of WebSocket routes that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can use API Gateway features to help you with all aspects of the API lifecycle, from creation through monitoring your production APIs.
API Gateway WebSocket APIs are bidirectional. A client can send messages to a specific service, and services can independently send messages to clients. This bidirectional behavior enables richer client/service interactions because services can push data to clients without requiring clients to make an explicit request. WebSocket APIs are often used in real-time applications such as chat applications, collaboration platforms, multiplayer games, and financial trading platforms.
You can use the @connections API from your backend service to send a callback message to a connected client, get connection information, or disconnect the client.
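A hedged sketch of the programmatic equivalent using boto3's ApiGatewayManagementApi client; the endpoint URL and connection ID handling are hypothetical.

```python
import json
import boto3

# The endpoint is the WebSocket API's connection-management URL, e.g.
# https://{api-id}.execute-api.{region}.amazonaws.com/{stage} (hypothetical).
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/prod",
)

def push_to_client(connection_id: str, payload: dict) -> None:
    """Send a callback message to one connected WebSocket client
    (the programmatic equivalent of POSTing to @connections)."""
    apigw.post_to_connection(
        ConnectionId=connection_id,
        Data=json.dumps(payload).encode("utf-8"),
    )
```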
Therefore, the correct answers are:
- To have consistent performance, request an AWS Direct Connect connection from the on-premises data center to the AWS VPC.
- Pull the messages from the on-premises Apache Kafka cluster by using a fleet of Amazon EC2 instances in an Auto Scaling Group. Send the data into an Amazon Kinesis Data Stream by using Amazon Kinesis Producer Library. (producer = puts into services; consumer = pull)
- Write a Lambda function to process the Amazon Kinesis data stream and create a WebSocket API in Amazon API Gateway to invoke the function. Send the callback messages to connected clients by using the @connections command for the API.
The option that says: Fetch the messages from the on-premises Apache Kafka cluster by using a fleet of EC2 instances in an Auto Scaling Group. Send the data into an Amazon Kinesis Data Stream by using Amazon Kinesis Consumer Library is incorrect because you should use Amazon Kinesis Producer Library, not Consumer Library. (Send data = Producer)
The option that says: Write a Lambda function to process the Amazon Kinesis data stream and write a GraphQL API in AWS AppSync to invoke the function. Send the callback messages to connected clients by using the @connections command for the API is incorrect because using @connections to have the backend service connect back to the clients is not a feature of the GraphQL API when using AWS AppSync.
The option that says: To have a consistent performance while being cost-effective, configure a Site-to-Site VPN from the on-premises data center to the AWS VPC is incorrect because a Site-to-Site VPN runs over the public Internet, so it does not provide the reliable, consistently high-performance network connection between the on-premises data center and Amazon VPC that the scenario requires.
References:
Check out these AWS Direct Connect and Amazon Kinesis Cheat Sheets:
Question 44: Skipped
A company has three AWS accounts each with its own VPCs. There is a requirement for communication between the AWS resources across the accounts, so VPC peering needs to be configured. Please refer to the figure below for details of each VPC:
VPC-B and VPC-C have matching CIDR blocks. For a short-term requirement, VPC-A needs to communicate only with the database instance in VPC-B with an IP address of 10.0.0.77/32 [On VPC-A, add a static route for VPC-B CIDR (10.0.0.77/32) with the target pcx-aaaabbbb] while being able to communicate with all the resources in VPC-C. The Solutions Architect already created the necessary VPC peering links, but VPC-A cannot effectively communicate with the VPC-B instance. [Need a solution here] The Solutions Architect suspects that the routes on each VPC still need proper configuration.
Which of the following solutions will allow VPC-A to communicate with the database instance in VPC-B while being able to communicate with all resources on VPC-C? [so basically we want VPC-A to connect to B on a single IP and to everything on C; not about connecting B -> C]
Enable dynamic route propagation in VPC-A with the peering targets pcx-aaaabbbb and pcx-aaaacccc respectively. On VPC-B, enable dynamic route propagation with peering target pcx-aaaabbbb and add a network access control list (NACL) that allows only connections to IP address 10.0.0.77/32 from pcx-aaaabbbb. On VPC-C, enable dynamic route propagation with the peering target pcx-aaaacccc.
On VPC-A, add a static route for VPC-B CIDR (10.0.0.0/24) with the target pcx-aaaabbbb and another static route for VPC-C CIDR (10.0.0.0/16) with the target pcx-aaaacccc. On VPC-B, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaabbbb. On VPC-C, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaacccc.
On VPC-A, add a static route for VPC-B CIDR (10.0.0.0/24) with the target pcx-aaaabbbb and another static route for VPC-C CIDR (10.0.0.0/24) with the target pcx-aaaacccc. Add a network access control list (NACL) on VPC-A to deny all connections to VPC-B except for the IP address 10.0.0.77/32. On VPC-B, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaabbbb. On VPC-C, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaacccc.
On VPC-A, add a static route for VPC-B CIDR (10.0.0.77/32) with the target pcx-aaaabbbb and another static route for VPC-C CIDR (10.0.0.0/16) with the target pcx-aaaacccc. On VPC-B, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaabbbb. On VPC-C, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaacccc.
(Correct)
On VPC-A:
10.0.0.77/32 to PCX-AB
10.0.0.0/16 to PCX-AC
On VPC-B & C:
172.16.0.0/24 to PCX-AB & PCX-AC respectively
Explanation
You can configure VPC peering connections to provide access to part of the CIDR block, a specific CIDR block (if the VPC has multiple CIDR blocks), or a specific instance within the peer VPC. In this scenario, a central VPC is peered to two or more VPCs that have overlapping CIDR blocks.
You have a central VPC (VPC A) with one subnet, and you have a VPC peering connection between VPC A and VPC B (pcx-aaaabbbb), and between VPC A and VPC C (pcx-aaaacccc). VPC B and VPC C have matching CIDR blocks. You want to use VPC peering connection pcx-aaaabbbb to route traffic between VPC A and a specific instance in VPC B. All other traffic destined for the 10.0.0.0/16 IP address range is routed through pcx-aaaacccc between VPC A and VPC C. <- summary
VPC route tables use longest prefix match to select the most specific route across the intended VPC peering connection. All other traffic is routed through the next matching route, in this case, across the VPC peering connection pcx-aaaacccc.
If you have a VPC peered with multiple VPCs that have overlapping or matching CIDR blocks, ensure that your route tables are configured to avoid sending response traffic from your VPC to the incorrect VPC. AWS currently does not support unicast reverse path forwarding in VPC peering connections that checks the source IP of packets and routes reply packets back to the source. You still need to configure static routes on VPC-B and VPC-C going to VPC-A, respectively.
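A minimal boto3 sketch of the static routes described above; the route table IDs are hypothetical, while the peering connection IDs and CIDRs follow the scenario.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC-A: the /32 host route to the VPC-B instance wins by longest prefix match;
# everything else in 10.0.0.0/16 goes to VPC-C.
ec2.create_route(RouteTableId="rtb-aaa111", DestinationCidrBlock="10.0.0.77/32",
                 VpcPeeringConnectionId="pcx-aaaabbbb")
ec2.create_route(RouteTableId="rtb-aaa111", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId="pcx-aaaacccc")

# Return routes for VPC-A's CIDR on VPC-B and VPC-C.
ec2.create_route(RouteTableId="rtb-bbb222", DestinationCidrBlock="172.16.0.0/24",
                 VpcPeeringConnectionId="pcx-aaaabbbb")
ec2.create_route(RouteTableId="rtb-ccc333", DestinationCidrBlock="172.16.0.0/24",
                 VpcPeeringConnectionId="pcx-aaaacccc")
```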
Therefore, the correct answer is: On VPC-A, add a static route for VPC-B CIDR (10.0.0.77/32) with the target pcx-aaaabbbb and another static route for VPC-C CIDR (10.0.0.0/16) with the target pcx-aaaacccc. On VPC-B, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaabbbb. On VPC-C, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaacccc. The standard VPC peering configuration will be done for VPC-A and VPC-C. As for VPC-B, only the static route to the specific instance (10.0.0.77/32) should be configured on VPC-A. AWS will handle the longest prefix match to route the traffic.
The option that says: On VPC-A, add a static route for VPC-B CIDR (10.0.0.0/24) with the target pcx-aaaabbbb and another static route for VPC-C CIDR (10.0.0.0/24) with the target pcx-aaaacccc. Add a network access control list (NACL) on VPC-A to deny all connections to VPC-B except for the IP address 10.0.0.77/32. On VPC-B, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaabbbb. On VPC-C, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaacccc is incorrect. This will result in a conflict on the routing configuration on VPC-A. Network ACLs can block connections going out of your VPC, however, they can't redirect connections out to specific targets.
The option that says: Enable dynamic route propagation in VPC-A with the peering targets pcx-aaaabbbb and pcx-aaaacccc respectively. On VPC-B, enable dynamic route propagation with peering target pcx-aaaabbbb and add a network access control list (NACL) that allows only connections to IP address 10.0.0.77/32 from pcx-aaaabbbb. On VPC-C, enable dynamic route propagation with the peering target pcx-aaaacccc is incorrect. Dynamic route propagation is used for Direct Connect connections or Site-to-Site VPNs, not for VPC peering. In this scenario, you want to force a specific route to a specific instance over a specific peering connection, so you need to configure static routes.
The option that says: On VPC-A, add a static route for VPC-B CIDR (10.0.0.0/24) with the target pcx-aaaabbbb and another static route for VPC-C CIDR (10.0.0.0/16) with the target pcx-aaaacccc. On VPC-B, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaabbbb. On VPC-C, add a static route for VPC-A CIDR (172.16.0.0/24) with the target pcx-aaaacccc is incorrect. The route configuration for VPC-A and VPC-C is correct here; however, the route for VPC-B is not. Because of longest prefix match, this will cause traffic destined for other instances in the 10.0.0.0/24 range, which belong to VPC-C, to be routed to VPC-B.
References:
Check out this Amazon VPC Cheat Sheet:
Question 46: Skipped
A multinational healthcare company plans to launch a new MedTech information website. The solutions architect decided to use Amazon CloudFormation to deploy a three-tier web application that consists of a web tier, an application tier, and a database tier that will utilize Amazon DynamoDB for storage. The solutions architect must secure any credentials that are used to access the database tier. [ secret manager?]
Which of the following options will allow the application instances access to the DynamoDB tables [-> IAM Role] without exposing API credentials?
Create an IAM User in the CloudFormation template and assign permissions to read and write from the DynamoDB table. Then retrieve the values of the access and secret keys using CloudFormation's GetAtt function, and pass them to the application instance through user-data.
Have the user enter the access and secret keys of an existing IAM User that has permissions to read and write from the DynamoDB table instead of using the Parameter section in the CloudFormation template.
Create an IAM Role that grants access to the DynamoDB table. Use the AWS::SSM::Parameter resource that creates an SSM parameter in AWS Systems Manager Parameter Store containing the Amazon Resource Name of the IAM role. Have the instance profile property of the application instance reference the role.
Create an IAM Role and assign the required permissions to read and write from the DynamoDB table. Have the instance profile property of the application instance reference the role.(Correct)
Explanation
Applications that run on an EC2 instance must include AWS credentials in their AWS API requests. You could have your developers store AWS credentials directly within the EC2 instance and allow applications in that instance to use those credentials. But developers would then have to manage the credentials and ensure that they securely pass the credentials to each instance and update each EC2 instance when it's time to rotate the credentials. That's a lot of additional work. [Manual approach]
Instead, you can and should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you don't have to distribute long-term credentials (such as a user name and password or access keys) to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources. When you launch an EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests.
An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a role, temporary security credentials are created dynamically and provided to the user. The scenario requires the instance to have access to DynamoDB tables without having to use the API credentials. In such scenarios, always think of creating IAM Roles rather than IAM Users.
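To illustrate why no credentials need to be stored, here is a small boto3 sketch of application code running on an instance that was launched with an instance profile; the table name and items are hypothetical.

```python
import boto3

# No access keys are embedded anywhere: on an EC2 instance launched with an
# instance profile, boto3 automatically picks up the role's temporary
# credentials from the instance metadata service.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Table name is hypothetical for illustration.
table = dynamodb.Table("cms-content-metadata")
table.put_item(Item={"content_id": "article-42", "title": "MedTech launch FAQ"})
response = table.get_item(Key={"content_id": "article-42"})
print(response.get("Item"))
```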
Therefore, the correct answer is: Create an IAM Role and assign the required permissions to read and write from the DynamoDB table. Have the instance profile property of the application instance reference the role. It uses IAM Role with the appropriate permissions to access the resource, and it references that Role in the instance profile property of the application instance.
The option that says: Create an IAM User in the CloudFormation template and assign permissions to read and write from the DynamoDB table. Then retrieve the values of the access and secret keys using CloudFormation's GetAtt function, and pass them to the application instance through user-data is incorrect because you should never expose the Access and Secret Keys while accessing AWS resources, and using IAM Role is a more secure way of accessing the resources than using IAM Users with security credentials.
The option that says: Create an IAM Role that grants access to the DynamoDB table. Use the AWS::SSM::Parameter resource that creates an SSM parameter in AWS Systems Manager Parameter Store containing the Amazon Resource Name of the IAM role. Have the instance profile property of the application instance reference the role is incorrect because storing the ARN of the IAM Role in the AWS Systems Manager Parameter Store is not the proper way to attach the role to the application instance. You have to use the instance profile property (AWS::IAM::InstanceProfile) instead.
The option that says: Have the user enter the access and secret keys of an existing IAM User that has permissions to read and write from the DynamoDB table instead of using the Parameter section in the CloudFormation template is incorrect because you should never expose the Access and Secret Keys while accessing the AWS resources, and using IAM Role is a more secure way of accessing the resources than using IAM Users with security credentials.
References:
Check out this AWS IAM Cheat Sheet:
Question 47: Skipped
A BPO company uses a multitiered, Java-based content management system (CMS) hosted on an on-premises data center. The CMS has a JBoss Application server present in the application tier [-> EC2]. The database tier consists of an Oracle database which is regularly backed up to S3 using the Oracle RMAN backup utility [-> Not Oracle RDS because it doesn't support RMAN backup]. The application's static files and content are kept on a 512 GB Storage Gateway volume [-> EBS] which is attached to the application server via an iSCSI interface. The solutions architect was tasked to create a disaster recovery solution for the application and its data.
Which AWS-based disaster recovery strategy will give you the best RTO?
Provision EC2 servers for both your JBoss application and Oracle database, and then restore the database backups from an S3 bucket. Attach an AWS Storage Gateway running on Amazon EC2 as an iSCSI volume to the JBoss EC2 server to access the static content.
Provision EC2 servers for both your JBoss application and Oracle database, and then restore the database backups from an S3 bucket. Also provision an EBS volume containing static content obtained from Storage Gateway, and attach the volume to the JBoss EC2 server.(Correct)
Use RDS for your Oracle database and EC2 for the JBoss application server. Restore the RMAN Oracle backups from Amazon Glacier, and provision an EBS volume containing static content obtained from Storage Gateway. The volume will be attached to the JBoss EC2 server.
Provision EC2 servers for both your JBoss application and Oracle database, and then restore the database backups from an S3 bucket. Use an AWS Storage Gateway-VTL running on Amazon EC2 as your source for restoring static content.
Explanation
Recovery Manager (RMAN) is an Oracle Database client that performs backup and recovery tasks on your databases and automates the administration of your backup strategies. It greatly simplifies backing up, restoring, and recovering database files.
By using stored volumes, you can store your primary data locally, while asynchronously backing up that data to AWS. Stored volumes provide your on-premises applications with low-latency access to their entire datasets. At the same time, they provide durable, offsite backups. You can create storage volumes and mount them as iSCSI devices from your on-premises application servers. Data written to your stored volumes is stored on your on-premises storage hardware. This data is asynchronously backed up to Amazon S3 as Amazon Elastic Block Store (Amazon EBS) snapshots.
If you are restoring an AWS Storage Gateway volume snapshot, you can choose to restore the snapshot as an AWS Storage Gateway volume or as an Amazon EBS volume. AWS Backup integrates with both services, and any AWS Storage Gateway snapshot can be restored to either an AWS Storage Gateway volume or an Amazon EBS volume.
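A brief boto3 sketch of restoring a Storage Gateway volume snapshot as an EBS volume and attaching it to the JBoss instance; the snapshot ID, instance ID, Availability Zone, volume type, and device name are assumptions for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an EBS volume from the Storage Gateway volume snapshot; the volume
# must be created in the same AZ as the JBoss EC2 instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId="snap-0a1b2c3d4e5f67890",  # hypothetical snapshot ID
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the restored static-content volume to the JBoss application server.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0fedcba9876543210",  # hypothetical instance ID
    Device="/dev/sdg",
)
```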
The option that says: Provision EC2 servers for both your JBoss application and Oracle database, and then restore the database backups from an S3 bucket. Also provision an EBS volume containing static content obtained from Storage Gateway, and attach the volume to the JBoss EC2 server is correct because it deploys the Oracle database on an EC2 instance by restoring the backups from S3 which can provide a faster recovery time, and it generates the EBS volume of static content from Storage Gateway.
The option that says: Use RDS for your Oracle database and EC2 for the JBoss application server. Restore the RMAN Oracle backups from Amazon Glacier, and provision an EBS volume containing static content obtained from Storage Gateway. The volume will be attached to the JBoss EC2 server is incorrect because restoring the backups from Amazon Glacier will be slower than S3 and will not meet the RTO. (Note: RDS is fine)
The option that says: Provision EC2 servers for both your JBoss application and Oracle database, and then restore the database backups from an S3 bucket. Attach an AWS Storage Gateway running on Amazon EC2 as an iSCSI volume to the JBoss EC2 server to access the static content is incorrect because there is no need to run a Storage Gateway on EC2 and attach it as an iSCSI volume; you can simply restore the Storage Gateway volume snapshot as an EBS volume and attach it to the EC2 instance, which gives a better recovery time.
The option that says: Provision EC2 servers for both your JBoss application and Oracle database, and then restore the database backups from an S3 bucket. Use an AWS Storage Gateway-VTL running on Amazon EC2 as your source for restoring static content is incorrect as restoring the content from Virtual Tape Library will not fit into the RTO.
References:
Check out this AWS Storage Gateway Cheat Sheet:
Question 48: Skipped
A company has an Oracle Real Application Clusters (RAC) database on their on-premises data center which they want to migrate to AWS [-> DB migration service; Oracle RDS (wrong because RDS doesn't support RAC)]. The Chief Information Security Officer (CISO) instructed the solutions architects to automate the patch management process of the operating system in which the database runs, as well as to set up scheduled backups to comply with the company's disaster recovery plan.
Which of the following should the solutions architect implement to meet the company requirements with the least amount of effort?
Launch a Lambda function that would automate the creation of snapshots of the database in the EC2 instance. Use the CodeDeploy and CodePipeline service to automate the patch management process of the database.
Migrate the database to Amazon Aurora and enable automated backups for your Aurora RAC cluster. Patching is automatically handled in Aurora during the system maintenance window.
Migrate the database to Amazon RDS which provides a multi-AZ failover feature for your RAC cluster. This will also reduce the RPO and RTO in the event of system failure since RDS offers features such as patch management and maintenance of the underlying host.
Migrate the database to a cluster of EBS-backed Amazon EC2 instances across multiple AZs. Automate the creation of EBS snapshots from EBS volumes of the EC2 instance by using Amazon Data Lifecycle Manager. Install the SSM Agent to the EC2 instance and automate the patch management process using AWS Systems Manager Patch Manager.
(Correct)
Explanation
Amazon RDS does not support certain features in Oracle such as Multitenant Database, Real Application Clusters (RAC), Unified Auditing, Database Vault, and many more.
AWS Systems Manager Patch Manager automates the process of patching managed instances with security-related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.
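A minimal sketch of triggering a patch run with the AWS-RunPatchBaseline document through Run Command (one of several ways Patch Manager can be driven); the instance IDs are hypothetical.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Instance IDs are hypothetical; the instances must be running the SSM Agent
# and have an instance profile that allows Systems Manager access.
ssm.send_command(
    InstanceIds=["i-0aaa1111bbbb2222c", "i-0ddd3333eeee4444f"],
    DocumentName="AWS-RunPatchBaseline",   # managed document used by Patch Manager
    Parameters={"Operation": ["Install"]}, # "Scan" would only report missing patches
)
```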
Hence, the option that says: Migrate the database to a cluster of EBS-backed Amazon EC2 instances across multiple AZs. Automate the creation of EBS snapshots from EBS volumes of the EC2 instance by using Amazon Data Lifecycle Manager. Install the SSM Agent to the EC2 instance and automate the patch management process using AWS Systems Manager Patch Manager is correct. Oracle RAC is supported via the deployment using Amazon EC2. AWS Systems Manager Patch Manager automates the process of patching managed instances with security-related updates.
The following options are both incorrect because an Amazon RDS database does not support Oracle RAC:
1. Migrate the database to Amazon RDS which provides a multi-AZ failover feature for your RAC cluster. This will also reduce the RPO and RTO in the event of system failure since RDS offers features such as patch management and maintenance of the underlying host.
2. Migrate the database to Amazon Aurora and enable automated backups for your Aurora RAC cluster. Patching is automatically handled in Aurora during the system maintenance window.
The option that says: Launch a Lambda function that would automate the creation of snapshots of the database in the EC2 instance. Use the CodeDeploy and CodePipeline service to automate the patch management process of the database is incorrect because CodeDeploy and CodePipeline are CI/CD services and are not suitable for patch management. You should use AWS Systems Manager Patch Manager instead.
References:
Check out this Amazon RDS Cheat Sheet:
Question 49: Skipped
A company is building a new cryptocurrency trading platform that will be hosted on the AWS cloud. The solutions architect needs to set up the designed architecture in a single VPC. The solution should mitigate distributed denial-of-service (DDoS) attacks [-> AWS Shield Advanced] to secure the company’s applications and systems. The solution should also include a notification [-> SNS?] for incoming Layer 3 or Layer 4 attacks such as SYN floods and UDP reflection attacks. The system should also be protected against SQL injection, cross-site scripting [WAF Rules], and other Layer 7 attacks [-> AWS Shield Advanced].
Which of the following solutions should the solutions architect implement together to meet the above requirement? (Select TWO.)
Send network logs to Amazon Fraud Detector to detect DDoS attacks and send notifications to security teams.
Set up rule-based filtering using the AWS Network Firewall service.
Use AWS WAF to define customizable web security rules that control which traffic can access your web applications.
(Correct)
Use AWS Shield Advanced which provides enhanced DDoS attack detection and monitoring for application-layer traffic to your AWS resources.
(Correct)
Place your servers behind a CloudFront web distribution and improve your cache hit ratio.
Explanation
A Distributed Denial of Service (DDoS) attack is a malicious attempt to make a targeted system, such as a website or application, unavailable to end users. To achieve this, attackers use a variety of techniques that consume network or other resources, interrupting access for legitimate end-users.
AWS provides flexible infrastructure and services that help customers implement strong DDoS mitigations and create highly available application architectures that follow AWS Best Practices for DDoS Resiliency. These include services such as Amazon Route 53, Amazon CloudFront, Elastic Load Balancing, and AWS WAF to control and absorb traffic and deflect unwanted requests. These services integrate with AWS Shield, a managed DDoS protection service that provides always-on detection and automatic inline mitigations to safeguard web applications running on AWS.
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.
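As a rough sketch of attaching an AWS managed rule group for SQL injection to a web ACL with the wafv2 API; the ACL name, scope, and metric names are hypothetical, and a real deployment would add further rule groups (for example, for cross-site scripting).

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Scope would be CLOUDFRONT for a CloudFront distribution or REGIONAL for an
# ALB / API Gateway stage; names and metric names below are hypothetical.
wafv2.create_web_acl(
    Name="trading-platform-web-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "tradingPlatformWebAcl",
    },
    Rules=[
        {
            "Name": "aws-managed-sqli",
            "Priority": 1,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqliRuleSet",
            },
        }
    ],
)
```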
In this scenario, AWS Shield Advanced and AWS WAF are the two services that can provide optimal DDoS attack mitigation and protection against Layer 7 security risks to your cloud infrastructure.
Therefore the correct answers are:
- Use AWS Shield Advanced which provides enhanced DDoS attack detection and monitoring for application-layer traffic to your AWS resources.
- Use AWS WAF to define customizable web security rules that control which traffic can access your web applications.
The option that says: Send network logs to Amazon Fraud Detector to detect DDoS attacks and send notifications to security teams is incorrect. Amazon Fraud Detector uses machine learning that automates the detection of potentially fraudulent activities online. It is designed to predict fraudulent transactions based on previous data sets that were used to create a model. It is not designed to detect and mitigate DDoS attacks. AWS Shield is a more suitable service for this scenario.
The option that says: Place your servers behind a CloudFront web distribution and improve your cache hit ratio is incorrect. Although CloudFront can help mitigate DDoS attacks, improving the cache hit ratio of your CloudFront distribution is still not enough to totally protect your infrastructure. This option also fails to mention the geoblocking and HTTPS protocol support features of CloudFront. Using AWS Shield Advanced and AWS WAF will provide more effective protection against DDoS.
The option that says: Set up rule-based filtering using the AWS Network Firewall service is incorrect. AWS Network Firewall provides stateful network traffic filtering for your VPCs, but it does not provide the enhanced DDoS attack detection and notification or the Layer 7 protections (such as blocking SQL injection and cross-site scripting) that the scenario requires. AWS Shield Advanced and AWS WAF are the more suitable services for this case.
References:
Check out these AWS WAF and AWS Shield Cheat Sheets:
Question 53: Skipped (review)
A leading call center company has its headquarters in Seattle. Its corporate web portal is deployed to AWS. The AWS cloud resources are linked to its corporate data center via a link aggregation group (LAG), which terminates at the same AWS Direct Connect endpoint and is connected on a private virtual interface (VIF) in your VPC. The portal must authenticate [Cognito?] against their on-premises LDAP server. Each Amazon S3 bucket can only be accessed by a logged-in user [signed URL or cookie?] if it belongs to that user.
Which of the following options should the solutions architect implement in AWS to meet the company requirements? (Select TWO.)
Authenticate against LDAP using an identity broker you created, and have it call IAM Security Token Service (STS) to retrieve IAM federated user credentials. The application then gets the IAM federated user credentials from the identity broker to access the appropriate S3 bucket.
(Correct)
The application first authenticates against LDAP, and then uses the LDAP credentials to log in to IAM service. Finally, it can now use the IAM temporary credentials to access the appropriate S3 bucket.
Create an identity broker that assumes an IAM role, and retrieve temporary AWS security credentials via IAM Security Token Service (STS). The application gets the AWS temporary security credentials from the identity broker to gain access to the appropriate S3 bucket.
The application first authenticates against LDAP to retrieve the name of an IAM role associated with the user. It then assumes that role via a call to IAM Security Token Service (STS). Afterward, the application can now use the temporary credentials from the role to access the appropriate S3 bucket. [<- more detailed]
(Correct)
Use a Direct Connect Gateway instead of a single Direct Connect connection. Set up a Transit VPC which will authenticate against their on-premises LDAP server.
Explanation
Lightweight Directory Access Protocol (LDAP) is a standard communications protocol used to read and write data to and from Active Directory. You can manage your user identities in an external system outside of AWS and grant users who sign in from those systems access to perform AWS tasks and access your AWS resources. The distinction is where the external system resides—in your data center or an external third party on the web.
For enterprise identity federation, you can authenticate users in your organization's network, and then provide those users access to AWS without creating new AWS identities for them and requiring them to sign in with a separate user name and password. This is known as the single sign-on (SSO) approach to temporary access. AWS STS supports open standards like Security Assertion Markup Language (SAML) 2.0, with which you can use Microsoft AD FS to leverage your Microsoft Active Directory.
This scenario has the following attributes:
- The identity broker application has permissions to access IAM's token service (STS) API to create temporary security credentials.
- The identity broker application is able to verify that employees are authenticated within the existing authentication system.
- Users are able to get a temporary URL that gives them access to the AWS Management Console (which is referred to as single sign-on).
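A small sketch of the STS step an identity broker could perform after the LDAP check, and of how the application would use the returned temporary credentials; the role ARN, session name, and bucket name are hypothetical.

```python
import boto3

sts = boto3.client("sts")

# The identity broker would call this only after the user has authenticated
# against the on-premises LDAP directory. Role ARN and session name are
# hypothetical placeholders.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/portal-user-jdoe",
    RoleSessionName="jdoe-portal-session",
    DurationSeconds=3600,
)["Credentials"]

# Hand the temporary credentials to the application, which uses them to reach
# only the S3 bucket that belongs to this user.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="portal-bucket-jdoe").get("KeyCount"))
```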
The option that says: Authenticate against LDAP using an identity broker you created, and have it call IAM Security Token Service (STS) to retrieve IAM federated user credentials. The application then gets the IAM federated user credentials from the identity broker to access the appropriate S3 bucket is correct because it follows the correct sequence. It develops an identity broker that authenticates users against LDAP, gets the security token from STS, and then accesses the S3 bucket using the IAM federated user credentials.
Likewise, the option that says: The application first authenticates against LDAP to retrieve the name of an IAM role associated with the user. It then assumes that role via a call to IAM Security Token Service (STS). Afterward, the application can now use the temporary credentials from the role to access the appropriate S3 bucket is correct because it follows the correct sequence. It authenticates users using LDAP, gets the security token from STS, and then accesses the S3 bucket using the temporary credentials.
The option that says: Create an identity broker that assumes an IAM role, and retrieve temporary AWS security credentials via IAM Security Token Service (STS). The application gets the AWS temporary security credentials from the identity broker to gain access to the appropriate S3 bucket is incorrect because the users need to be authenticated using LDAP first, not STS. Also, the temporary credentials to log into AWS are provided by STS, not identity broker.
The option that says: The application first authenticates against LDAP, and then uses the LDAP credentials to log in to IAM service. Finally, it can now use the IAM temporary credentials to access the appropriate S3 bucket is incorrect because you cannot use the LDAP credentials to log into IAM.
The option that says: Use a Direct Connect Gateway instead of a single Direct Connect connection. Set up a Transit VPC which will authenticate against their on-premises LDAP server is incorrect because using a Direct Connect Gateway will only improve the availability of your on-premises network connection, and a transit VPC is just a common strategy for connecting multiple, geographically dispersed VPCs and remote networks in order to create a global network transit center. These two things will not meet the requirement.
References:
Check out this AWS IAM Cheat Sheet:
Question 55: Skipped
An advertising company plans to release a new photo-sharing app that will be hosted on the AWS Cloud. The app will store all pictures directly uploaded by users in a single Amazon S3 bucket [-> S3 folders/paths] and users will also be able to view and download their own pictures directly from the Amazon S3 bucket [-> presigned URL / cookies ?]. The solutions architect must ensure the security of the application and it should be able to handle potentially millions of users in the most secure manner.
How should the solutions architect set up the user registration flow in AWS for this mobile app?
Create an IAM user and generate an access key and a secret key to be stored in the mobile app for the IAM user. After applying the appropriate permissions to the S3 bucket policy, use the generated credentials to access S3.
Store user information in Amazon RDS and create an IAM Role with appropriate permissions. Generate new temporary credentials using the AWS Security Token Service 'AssumeRole' function every time the user uses their mobile app and creates new temporary credentials. These credentials will be stored in the mobile app's memory and will be used to access Amazon S3.(Correct)
Create an IAM user, assign appropriate permissions to it, and generate an access key and a secret key that will be stored in the mobile app and used to access Amazon S3.
Generate long-term credentials using AWS STS and apply the appropriate permissions. Store the credentials in the mobile app, and use them to access Amazon S3.
Explanation
In this scenario, the best solution is to use a combination of an IAM Role and STS for authentication. The STS AssumeRole returns a set of temporary security credentials that you can use to access AWS resources that you might not normally have access to. These temporary credentials consist of an access key ID, a secret access key, and a security token. Typically, you use AssumeRole for cross-account access or federation.
Therefore the correct answer is: Store user information in Amazon RDS and create an IAM Role with appropriate permissions. Generate new temporary credentials using the AWS Security Token Service 'AssumeRole' function every time the user uses their mobile app and creates new temporary credentials. These credentials will be stored in the mobile app's memory and will be used to access Amazon S3. It creates an IAM Role with appropriate permissions and then generates temporary security credentials using STS AssumeRole. Then, it generates new credentials when the user runs the app the next time.
The option that says: Create an IAM user and generate an access key and a secret key to be stored in the mobile app for the IAM user. After applying the appropriate permissions to the S3 bucket policy, use the generated credentials to access S3 is incorrect. It suggests creating an IAM User, not the IAM Role - which is not a good solution. You should create an IAM Role so that the app can access the AWS Resource using STS AssumeRole.
The option that says: Generate long-term credentials using AWS STS and apply the appropriate permissions. Store the credentials in the mobile app, and use them to access Amazon S3 is incorrect. You should always grant short-term or temporary credentials for the mobile application. This option recommends creating long-term credentials.
The option that says: Create an IAM user, assign appropriate permissions to it, and generate an access key and a secret key that will be stored in the mobile app and used to access Amazon S3 is incorrect. It does not create the required IAM Role but instead, an IAM user.
References:
Check out this AWS IAM Cheat Sheet:
Question 57: Skipped
As part of the Corporate Social Responsibility of the tech company, the development team created an online learning system for a public university. The application architecture uses an Application Load Balancer in front of two On-Demand EC2 instances located in two Availability Zones. The only remaining requirement is to secure the new website with an HTTPS connection. [-> SSL on ALB]
Which of the following options is the most cost-effective and easiest way to complete the online learning system?
Generate a Public Certificate in ACM. Configure the two EC2 instances to use the Public Certificate to handle HTTPS requests.
Generate a Public Certificate in ACM. Configure the Application Load Balancer to use the Public Certificate to handle HTTPS requests.(Correct)
Generate a Private Certificate in ACM. Configure the two EC2 instances to use the Private Certificate to handle HTTPS requests.
Generate a Private Certificate in ACM. Configure the Application Load Balancer to use the Private Certificate to handle HTTPS requests.
Explanation
With AWS Certificate Manager, you can generate public or private SSL/TLS certificates that you can use to secure your site. Public SSL/TLS certificates provisioned through AWS Certificate Manager are free. You pay only for the AWS resources that you create to run your application. For private certificates, the ACM Private Certificate Authority (CA) is priced along two dimensions: (1) You pay a monthly fee for the operation of each private CA until you delete it and (2) you pay for the private certificates you issue each month.
Public certificates generated from ACM can be used on Amazon CloudFront, Elastic Load Balancing, or Amazon API Gateway but not directly on EC2 instances, unlike private certificates.
Hence, generating a Public Certificate in ACM and configuring the Application Load Balancer to use the Public Certificate to handle HTTPS requests is correct in this scenario as a public certificate does not cost anything and you can configure this certificate with the Application Load Balancer.
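As a rough sketch of how this could be wired up with the AWS SDK (the domain name, load balancer ARN, and target group ARN are placeholders, and DNS validation is assumed), you might request a public certificate and attach it to an HTTPS listener like this:

import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Request a free public certificate from ACM (placeholder domain; DNS validation assumed).
cert = acm.request_certificate(
    DomainName="learning.example.edu",
    ValidationMethod="DNS",
)

# Once the certificate is issued, attach it to an HTTPS listener on the existing ALB.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/learning-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert["CertificateArn"]}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/learning-tg/def456",
    }],
)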
The option that says: Generate a Private Certificate in ACM. Configure the Application Load Balancer to use the Private Certificate to handle HTTPS requests is incorrect because this solution entails an additional cost. Remember that you have to pay a monthly fee for the operation of each private CA until you delete it. A more cost-effective solution is to use a public certificate instead since this is free of charge.
The option that says: Generate a Public Certificate in ACM. Configure the two EC2 instances to use the Public Certificate to handle HTTPS requests is incorrect because you cannot export public certificates from ACM and use them with your EC2 instances. You can only use the public certificate from ACM in Amazon CloudFront, Elastic Load Balancing, and Amazon API Gateway.
The option that says: Generate a Private Certificate in ACM. Configure the two EC2 instances to use the Private Certificate to handle HTTPS requests is incorrect. Although you can export private certificates from ACM and use them with EC2 instances, using this type of certificate still costs more than a public certificate.
References:
Check out this AWS Billing and Cost Management Cheat Sheet:
How can I add certificates for websites to the ELB using Amazon Certificate Manager?
Question 58: Skipped
A company implements best practices and mandates that all of the cloud-related deployments should not be done manually but through the use of CloudFormation. All of the CloudFormation templates should be treated as code and hence, all of them are committed in a private GIT repository. A senior solutions architect has recently left the team. One of the tasks of the junior solutions architect is to handle a distributed system in AWS, in which the architecture is declared in a CloudFormation template. The distributed system needs to be migrated to another VPC and the junior solutions architect tried to read the template to understand the AWS resources that the template will generate. While analyzing the CloudFormation template, he stumbled upon the code below.
What does this code snippet do in CloudFormation?
"SNSTopic" : {
"Type" : "AWS::SNS::Topic",
"Properties" : {
"Subscription" : [{
"Protocol" : "sqs",
"Endpoint" : { "Fn::GetAtt" : [ "TutorialsDojoQueue", "Arn" ] }
}]
}
Creates an SNS topic which allows SQS subscription endpoints.
Creates an SNS topic and then invokes the call to create an SQS queue with a logical resource name of TutorialsDojoQueue.
Creates an SNS topic which allows SQS subscription endpoints to be added as a parameter on the template.
Creates an SNS topic and then adds a subscription using the ARN attribute name for the SQS resource, which is created under the logical name TutorialsDojoQueue.(Correct)
Explanation
AWS CloudFormation provides several built-in functions that help you manage your stacks which are called "intrinsic functions". Use intrinsic functions in your templates to assign values to properties that are not available until runtime.
You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources.
The Fn::GetAtt intrinsic function returns the value of an attribute from a resource in the template. It has two parameters: the logicalNameOfResource and the attributeName. The logical name (also called logical ID) identifies the resource that contains the attribute that you want to use. The attributeName is the name of the resource-specific attribute whose value you want to utilize.
Therefore, the correct answer is: Creates an SNS topic and then adds a subscription using the ARN attribute name for the SQS resource, which is created under the logical name TutorialsDojoQueue. The code snippet creates an SNS topic and then adds a subscription whose endpoint is the ARN of the SQS resource declared under the logical name "TutorialsDojoQueue", retrieved with the GetAtt intrinsic function.
The following options are all incorrect because these options incorrectly described what the code snippet does:
- Creates an SNS topic which allows SQS subscription endpoints to be added as a parameter on the template.
- Creates an SNS topic which allows SQS subscription endpoints.
- Creates an SNS topic and then invokes the call to create an SQS queue with a logical resource name of TutorialsDojoQueue.
References:
Check out this AWS CloudFormation Cheat Sheet:
Question 59: Skipped
A shipping firm runs its web applications on its on-premises data center. The servers have a dependency on non-x86 hardware and the management plans to use AWS to scale its on-premises data storage [Storage Gateway]. However, the backup application is only able to write to POSIX-compatible block-based storage [-> not S3, ]. There is a total of 1,000 TB [-> Gateway-Cached volumes] of data files that need to be mounted to a single folder on the file server. Existing users must also be able to access portions of this data while the backups are taking place. [-> Not Glacier]
Which of the following backup solutions would be most appropriate to meet the above requirements?
Use Amazon Glacier as the target for your data backups.
Provision Gateway Cached Volumes from AWS Storage Gateway.(Correct)
Provision Gateway Stored Volumes from AWS Storage Gateway.
Use Amazon S3 as the target for your data backups.
Explanation
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the AWS Cloud for scalable and cost-effective storage that helps maintain data security. A cached volume gateway supports up to 1,024 TB of total volume storage (32 volumes of up to 32 TB each), whereas a stored volume gateway supports only up to 512 TB in total (32 volumes of up to 16 TB each).
Therefore, the correct answer is: Provision Gateway Cached Volumes from AWS Storage Gateway. Cached volumes support up to 1,024 TB of volume storage per gateway, with frequently accessed data cached on the on-premises appliance while the complete data set is stored durably in AWS.
The option that says: Use Amazon Glacier as the target for your data backups is incorrect. The data stored in Amazon Glacier is not available immediately. Retrieval jobs typically require 3-5 hours to complete.
The option that says: Provision Gateway Stored Volumes from AWS Storage Gateway is incorrect. Gateway stored volumes can only store up to 512 TB worth of data.
The option that says: Use Amazon S3 as the target for your data backups is incorrect. Amazon S3 is an object store and does not provide the POSIX-compatible block-based storage that the backup application requires.
References:
Check out this AWS Storage Gateway Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 60: Skipped
A company uses a CloudFormation script to deploy an online voting application. The app is used for a Nature Photography Contest that accepts high-resolution images, stores them in an S3 bucket, and records a 100-character summary about the image in RDS. The Solutions Architect must ensure that the same online voting application can be deployed once again using the same CloudFormation template for succeeding contests in the future. The photography contest will run for just a month and once it has been concluded, there would be nobody using the online voting application anymore until the next contest [S3 lifecycle, not deleting coz used in next contest]. As preparation for the upcoming events next year, the 100-character summaries should be kept and the S3 bucket, which contains the high-resolution photos, should remain.
Which of the following options is the recommended action to meet the above requirement?
1. Set the DeletionPolicy on the S3 resource to Snapshot. 2. Set the DeletionPolicy on the RDS resource to Snapshot.
1. Enable Cross-Region Replication (CRR) in the S3 bucket to maintain a copy of all the S3 objects. 2. Set the DeletionPolicy for the RDS instance to Snapshot.
1. Set the DeletionPolicy on the S3 resource declaration in the CloudFormation template to Retain. 2. Set the RDS resource declaration DeletionPolicy to Snapshot.
(Correct)
For both the RDS and S3 resource types on the CloudFormation template, set the DeletionPolicy to Retain.
Explanation
With the DeletionPolicy attribute, you can preserve or (in some cases) back up a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default.
Note that this capability also applies to stack update operations that lead to resources being deleted from stacks, for example, if you remove the resource from the stack template and then update the stack with the template. This capability does not apply to resources whose physical instance is replaced during stack update operations. For example, if you edit a resource's properties such that AWS CloudFormation replaces that resource during a stack update.
In this scenario, you need to keep the data on your S3 bucket and RDS, which can be achieved by setting the DeletionPolicy of the S3 bucket to Retain and that of the RDS instance to Snapshot.
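A minimal sketch of what those two attributes could look like, with the template built here as a Python dictionary and deployed through boto3 (all logical IDs and property values are assumptions for illustration only):

import json
import boto3

# DeletionPolicy is set per resource: Retain keeps the S3 bucket (and photos),
# Snapshot takes a final RDS snapshot before the instance is deleted with the stack.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ContestPhotoBucket": {          # hypothetical logical ID
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",
        },
        "ContestSummaryDB": {            # hypothetical logical ID
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.micro",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": "placeholder-only",
            },
        },
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="photo-contest-voting", TemplateBody=json.dumps(template))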
Setting the DeletionPolicy on the S3 resource to Snapshot and for the RDS resource to use Snapshot is incorrect because S3 does not support snapshots. It should be set to Retain.
For both the RDS and S3 resource types on the CloudFormation template, setting the DeletionPolicy to Retain is incorrect. Although you can set Retain on both the DeletionPolicy of the S3 bucket and the RDS database, this entails an unnecessary operating cost since the RDS database will still be running even if it is not used. Take note that the application is meant to run for just a month and, once it has concluded, nobody would be using it until the next contest the following year. The RDS resource should be set to Snapshot.
Enabling Cross-Region Replication (CRR) in the S3 bucket to maintain a copy of all the S3 objects and setting the DeletionPolicy for the RDS instance to Snapshot is incorrect because even though your data will still be available in the other region because of the CRR, the current S3 bucket will still be deleted.
References:
Check out this AWS CloudFormation Cheat Sheet:
Question 61: Skipped
A leading online media company runs a popular sports news website. The solutions architect has been tasked to analyze each web visitor's clickstream data on the website to populate user analytics, which gives insights about the sequence of pages and advertisements the visitor has clicked. The data will be processed in real-time [-> Kinesis] which will then transform the page layout as the visitors click through the web portal to increase user engagement and consequently, increase the revenue for the company.
Which of the following options should the solutions architect implement to meet the above requirements?
Publish web clicks by session to an Amazon SQS queue and periodically drain these events to Amazon RDS then analyze with SQL.
Publish the web clicks to Amazon Timestream. Run custom analysis, SQL queries and apply machine learning to generate relevant reports regarding user behavior.
Push web clicks by session to Amazon Kinesis and analyze behavior using Amazon Kinesis workers.(Correct)
Log clicks in weblogs by URL and store it in Amazon S3, and then analyze with Elastic MapReduce.
Explanation
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and responds instantly instead of having to wait until all your data is collected before the processing can begin.
In this example, a simple HTML page simulates the content of a blog page. As the reader scrolls the simulated blog post, the browser script uses the SDK for JavaScript to record the scroll distance down the page and send that data to Kinesis using the putRecords method of the Kinesis client class. The streaming data captured by Amazon Kinesis Data Streams can then be processed by Amazon EC2 instances and stored in any of several data stores including Amazon DynamoDB and Amazon Redshift.
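A minimal sketch of the producer side using boto3 (the stream name and payload fields are assumptions); the same pattern applies whether the click events come from a browser script or a backend service:

import json
import boto3

kinesis = boto3.client("kinesis")

# One clickstream event per record; partitioning by session keeps a visitor's
# clicks ordered within a shard.
click_event = {
    "session_id": "abc-123",
    "page": "/articles/championship-final",
    "ad_clicked": "sidebar-2",
}

kinesis.put_record(
    StreamName="clickstream",  # hypothetical stream name
    Data=json.dumps(click_event).encode("utf-8"),
    PartitionKey=click_event["session_id"],
)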
Therefore, the correct answer is: Push web clicks by session to Amazon Kinesis and analyze behavior using Amazon Kinesis workers.
The following options are incorrect because SQS with EC2 consumers, batch analysis of weblogs with EMR, and Amazon Timestream are not designed to ingest and analyze clickstream data in real time the way Kinesis is:
- Log clicks in weblogs by URL and store it in Amazon S3, and then analyze with Elastic MapReduce.
- Publish the web clicks to Amazon Timestream. Run custom analysis, SQL queries and apply machine learning to generate relevant reports regarding user behavior.
- Publish web clicks by session to an Amazon SQS queue and periodically drain these events to Amazon RDS then analyze with SQL.
References:
Check out this Amazon Kinesis Cheat Sheet:
Question 62: Skipped
An e-commerce company is having their annual sale event where buyers will be able to purchase goods at a large discount on their e-commerce website. The e-commerce site will receive millions of visitors in a short period of time when the sale begins. The visitors will first login to the site using either their Facebook or Google credentials and add items to their cart. After purchasing, a page will display the cart items along with the discounted prices. The company needs to build a checkout system that can handle the sudden surge of incoming traffic. [ CloudFront -> ALB -> ASG -> EC2 (IAM ROle) -> Cognito ; SQS -> Store into DynamoDB ]
Which of the following is the MOST scalable solution that they should use?
Combine an Elastic Load balancer in front of multiple web servers with CloudFront for fast delivery. The web servers will first authenticate the users by logging into their social media accounts which are integrated in Amazon Cognito. The web servers will process the user's purchases and store them in a DynamoDB table. Use an IAM Role to gain permissions to the DynamoDB table.
Combine an Elastic Load balancer in front of an Auto Scaling group of web servers with CloudFront for fast delivery. The web servers will first authenticate the users by logging into their social media accounts which are integrated in Amazon Cognito, then process the user's purchases and store them into an SQS queue using IAM Roles for EC2 Instances to gain permissions to the queue. Finally, the items from the queue are retrieved by a set of application servers and stored into a DynamoDB table.
(Correct)
Combine an Elastic Load balancer in front of an Auto Scaling group of web servers with CloudFront for fast delivery. The web servers will first authenticate the users by logging into their social media accounts which are integrated in Amazon Lex, then process the user's purchases and store the cart into a Multi-AZ RDS database.
Use the static website hosting feature of Amazon S3 with the Javascript SDK to authenticate the user login with Amazon Cognito. Set up AWS Global Accelerator to deliver the static content stored in the S3 bucket. Store user purchases in a DynamoDB table and use an IAM Role for managing permissions.
Explanation
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
In this scenario, the best solution is to use a combination of CloudFront, Elastic Load Balancer and SQS to provide a highly scalable architecture.
Hence, the correct answer is: Combine an Elastic Load balancer in front of an Auto Scaling group of web servers with CloudFront for fast delivery. The web servers will first authenticate the users by logging into their social media accounts which are integrated in Amazon Cognito, then process the user's purchases and store them into an SQS queue using IAM Roles for EC2 Instances to gain permissions to the queue. Finally, the items from the queue are retrieved by a set of application servers and stored into a DynamoDB table. This is a highly scalable solution and creates an appropriate IAM Role to access the DynamoDB database. In addition, it uses SQS which decouples the application architecture. This will allow the application servers to process the requests.
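A rough sketch of the decoupled checkout flow (the queue URL, table name, and item fields are placeholders): the web tier enqueues the purchase so it can respond quickly, and a separate application-server worker drains the queue into DynamoDB.

import json
from decimal import Decimal

import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/checkout-queue"  # placeholder
orders_table = dynamodb.Table("Orders")  # placeholder table name

# Web tier: enqueue the purchase so the front end can return immediately.
def enqueue_purchase(order: dict) -> None:
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

# Application-server worker: drain the queue and persist orders to DynamoDB.
def process_queue() -> None:
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        # Parse floats as Decimal, since the DynamoDB resource rejects Python floats.
        orders_table.put_item(Item=json.loads(message["Body"], parse_float=Decimal))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])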
The option that says: Combine an Elastic Load balancer in front of an Auto Scaling group of web servers with CloudFront for fast delivery. The web servers will first authenticate the users by logging into their social media accounts which are integrated in Amazon Lex, then process the user's purchases and store the cart into a Multi-AZ RDS database is incorrect because multi-AZ RDS is a more expensive solution when compared to DynamoDB. In addition, Amazon Lex is just a service for building conversational interfaces into any application using voice and text. This is not utilized for user authentication, unlike Amazon Cognito.
The option that says: Use the static website hosting feature of Amazon S3 with the Javascript SDK to authenticate the user login with Amazon Cognito. Set up AWS Global Accelerator to deliver the static content. Store user purchases in a DynamoDB table and use an IAM Role for managing permissions is incorrect. Although this would work, it is not scalable, and storing all the data directly in DynamoDB would consume read and write capacity and increase the cost. Moreover, you cannot use AWS Global Accelerator to deliver the static content stored in the S3 bucket. You have to use Amazon CloudFront instead.
The option that says: Combine an Elastic Load balancer in front of multiple web servers with CloudFront for fast delivery. The web servers will first authenticate the users by logging into their social media accounts which are integrated in Amazon Cognito. The web servers will process the user's purchases and store them in a DynamoDB table. Use an IAM Role to gain permissions to the DynamoDB table is incorrect because it is not scalable and storing all the data directly in DynamoDB would consume read and write capacity and increase the cost. Moreover, the web servers are not placed in an Auto Scaling group, which means that this solution is not scalable.
References:
Check out this Amazon DynamoDB Cheat Sheet:
Question 63: Skipped
A company is planning to launch a mobile app for the Department of Transportation that allows government staff to upload the latest photos [-> S3] of ongoing construction works such as bridges, roads culverts, and dams all over the country. The mobile app should send the photos to a web server hosted on an EC2 instance which then adds a watermark to each photo [-> Lambda] that contains the project details and the date it was taken. The solutions architect must design a solution in which the photos generated by the server will be uploaded to an S3 bucket for durable storage.
Which of the following solutions is a secure architecture and allows the EC2 instance to upload photos to S3?
Set up an IAM role with permissions to list and write objects to the S3 bucket. Attach the IAM role to the EC2 instance which will enable it to retrieve temporary security credentials from the instance metadata and use that access to upload the photos to the S3 bucket.
(Correct)
Set up an IAM user with permissions to list and write objects to the S3 bucket. Launch the instance as the IAM user which will enable the EC2 instance to retrieve temporary security credentials from the instance userdata and use that access to upload the photos to the S3 bucket.
Set up a service control policy (SCP) with permissions to list and write objects to the S3 bucket. Attach the SCP to the EC2 instance which will enable it to retrieve temporary security credentials from the instance metadata and use that access to upload the photos to the S3 bucket.
Set up an IAM service role with permissions to list and write objects to the S3 bucket. Attach the IAM role to the EC2 instance which will enable it to retrieve temporary security credentials from the instance userdata and use that access to upload the photos to the S3 bucket.
Explanation
This question tests your understanding of IAM, specifically on when to use an IAM Role over an SCP. Since the server is running on an EC2 instance and the application makes requests to S3 to store the photos, the more suitable option to use here is an IAM Role.
In addition, don't create an IAM user and pass the user's credentials to the application or embed the credentials in the application. That will create a security risk because if an attacker had unauthorized access to that EC2 instance then the user credentials can easily be acquired and exploited. The better way is to create an IAM role that you can attach to the EC2 instance to give applications running on the instance temporary security credentials which can be used to access other AWS resources such as an S3 bucket. The credentials have the permissions specified in the policies attached to the role.
The option that says: Set up an IAM role with permissions to list and write objects to the S3 bucket. Attach the IAM role to the EC2 instance which will enable it to retrieve temporary security credentials from the instance metadata and use that access to upload the photos to the S3 bucket is correct as it uses an IAM Role and fetches the temporary security credentials from the instance metadata.
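On the instance itself, no keys need to be handled at all; with the role attached, the SDK fetches and refreshes the temporary credentials from the instance metadata automatically. A minimal sketch (the file, bucket, and key names are placeholders):

import boto3

# No access keys are configured anywhere: because the instance profile (IAM role)
# is attached to the EC2 instance, boto3 transparently pulls temporary credentials
# from the instance metadata service and refreshes them before they expire.
s3 = boto3.client("s3")

# Upload a watermarked photo produced by the web server (names are placeholders).
s3.upload_file(
    Filename="/tmp/bridge-42-watermarked.jpg",
    Bucket="dot-construction-photos",
    Key="bridges/bridge-42/2024-05-01.jpg",
)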
The option that says: Set up a service control policy (SCP) with permissions to list and write objects to the S3 bucket. Attach the SCP to the EC2 instance which will enable it to retrieve temporary security credentials from the instance metadata and use that access to upload the photos to the S3 bucket is incorrect. SCPs are attached to AWS Organizations entities (the root, organizational units, or accounts), not to individual EC2 instances, and they only restrict, at the account level of granularity, what services and actions the users, groups, and roles in those accounts can do. SCPs don't grant permissions to any user or role because this is handled through IAM policies.
The option that says: Set up an IAM user with permissions to list and write objects to the S3 bucket. Launch the instance as the IAM user which will enable the EC2 instance to retrieve temporary security credentials from the instance userdata and use that access to upload the photos to the S3 bucket is incorrect as an IAM Role is a better option to use instead of an IAM User. Plus, you should always retrieve the temporary security credentials from the instance metadata and not from the user data.
The option that says: Set up an IAM service role with permissions to list and write objects to the S3 bucket. Attach the IAM role to the EC2 instance which will enable it to retrieve temporary security credentials from the instance userdata and use that access to upload the photos to the S3 bucket is incorrect because although it uses an IAM Role, the temporary security credentials should be retrieved from the instance metadata and not from the user data.
References:
Check out this AWS IAM Cheat Sheet:
Here is a deep dive on IAM Policies:
Question 64: Skipped
A government organization is currently developing a multi-tiered web application prototype that consists of various components for registration, transaction processing, and reporting. All of the components will be using different IP addresses and they are all hosted on one, extra-large EC2 instance as its main server. They will be using S3 as a durable and scalable storage service. For security purposes, the IT manager wants to implement 2 separate SSL certificates for the separate components.
How can the organization achieve this with a single EC2 instance?
Create an EC2 instance with a NAT address.
Launch an on-demand EC2 instance that has multiple network interfaces with multiple elastic IP addresses.
(Correct)
Create an EC2 instance with multiple security groups attached to it which contain separate rules for each IP address, including custom rules in the Network ACL.
Create an EC2 instance that has multiple subnets in two separate Availability Zones attached to it and each will have a separate IP address.
Explanation
You can create a network interface, attach it to an instance, detach it from an instance, and attach it to another instance. The attributes of a network interface follow it as it's attached or detached from an instance and reattached to another instance.
When you move a network interface from one instance to another, network traffic is redirected to the new instance. You can also modify the attributes of your network interface, including changing its security groups and managing its IP addresses. Every instance in a VPC has a default network interface, called the primary network interface (eth0). You cannot detach a primary network interface from an instance. You can create and attach additional network interfaces. The maximum number of network interfaces that you can use varies by instance type.
In this scenario, you basically need to provide multiple IP addresses to a single EC2 instance. This can be easily achieved by using an Elastic Network Interface (ENI). An elastic network interface is a logical networking component in a VPC that represents a virtual network card.
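A rough sketch of adding a second network interface with its own Elastic IP to a running instance (the subnet, security group, and instance IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create an additional network interface in the same subnet as the instance.
eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",   # placeholder
    Groups=["sg-0123456789abcdef0"],       # placeholder security group
    Description="Second interface for the reporting component",
)
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

# Attach it to the existing instance as eth1 (device index 0 is the primary ENI).
ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-0123456789abcdef0",      # placeholder
    DeviceIndex=1,
)

# Give the new interface its own Elastic IP so it can serve a separate SSL certificate.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(AllocationId=eip["AllocationId"], NetworkInterfaceId=eni_id)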
Creating an EC2 instance with multiple security groups attached to it which contain separate rules for each IP address, including custom rules in the Network ACL is incorrect because a security group is mainly used to control the incoming or outgoing traffic to the instance and doesn't provide multiple IP addresses to an EC2 instance.
Creating an EC2 instance that has multiple subnets in two separate Availability Zones attached to it and each will have a separate IP address is incorrect because you cannot place the same EC2 instance in two separate Availability Zones.
Creating an EC2 instance with a NAT address is incorrect because a NAT address doesn't provide multiple IP addresses to an EC2 instance.
References:
Check out this Amazon EC2 Cheat Sheet:
Question 65: Skipped
A company wants to improve data protection for the sensitive information stored on its AWS account - both in transit and at rest. Data protection in transit simply means that the data should be secured while it travels to and from Amazon S3. Data protection at rest means that the stored data on disk must be secured in Amazon S3 data centers. You can protect data in transit by using SSL or by using client-side encryption. To secure data at rest, you can choose from a variety of available Server-Side Encryption in S3.
Which of the following best describes how Amazon S3-Managed Keys (SSE-S3) encryption method works?
In SSE-S3, a randomly generated data encryption key is returned which is used by the client to encrypt the object data.
In SSE-S3, you will be able to manage the customer master keys (CMKs) and Amazon S3 manages the encryption for reading and writing objects in your S3 bucket.
SSE-S3 provides separate permissions to use an API key that provides added protection against unauthorized access of your objects in S3.
SSE-S3 provides strong multi-factor encryption in which each object is encrypted with a unique key. It also encrypts the key itself with a master key that it rotates regularly.(Correct)
Explanation
With Amazon S3 default encryption, you can set the default encryption behavior for an S3 bucket so that all new objects are encrypted when they are stored in the bucket. The objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or AWS KMS keys stored in AWS Key Management Service (SSE-KMS).
When you configure your bucket to use default encryption with SSE-KMS, you can also enable S3 Bucket Keys to decrease request traffic from Amazon S3 to AWS Key Management Service (AWS KMS) and reduce the cost of encryption. When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk and decrypts it when you download the objects.
Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) uses strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it rotates regularly. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
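A short sketch (the bucket name and object key are assumptions) of turning on SSE-S3 as the bucket default and of requesting it explicitly on an upload:

import boto3

s3 = boto3.client("s3")
BUCKET = "sensitive-data-bucket"  # placeholder

# Make SSE-S3 (AES-256 with Amazon S3-managed keys) the default for all new objects.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Individual uploads can also request SSE-S3 explicitly.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/q1.csv",
    Body=b"example,data\n",
    ServerSideEncryption="AES256",
)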
Therefore the correct answer is: SSE-S3 provides strong multi-factor encryption in which each object is encrypted with a unique key. It also encrypts the key itself with a master key that it rotates regularly.
The option that says: SSE-S3 provides separate permissions to use an API key that provides added protection against unauthorized access of your objects in S3 is incorrect. SSE-S3 does not use API keys but rather encryption keys.
The option that says: In SSE-S3, you will be able to manage the customer master keys (CMKs) and Amazon S3 manages the encryption for reading and writing objects in your S3 bucket is incorrect. Customer master keys (CMKs) are being used in SSE-KMS and not in SSE-S3.
The option that says: In SSE-S3, a randomly generated data encryption key is returned which is used by the client to encrypt the object data is incorrect. SSE-S3 does not use a randomly generated data encryption key.
References:
Check out this Amazon S3 Cheat Sheet:
Question 66: Skipped
An electronics and communications company in Japan has several VPCs in the AWS cloud. It uses NAT instances to allow multiple EC2 instances from the private subnet to initiate connections to the Internet while also restricting any requests coming from the outside network. However, there are numerous incidents where the NAT instance is not available, which affects the batch processing of critical applications.
Which is the most suitable solution that provides better availability and bandwidth to the current infrastructure with minimal administrative effort?
Launch two large NAT instances in two separate public subnets and add a route from the private subnet to each NAT instance to make it more fault-tolerant and highly available.
Create an egress-only Internet gateway. Update the route tables of the private subnet to point the Internet traffic to the egress-only Internet gateway.
Launch a larger NAT instance with the enhanced networking feature enabled to improve the availability and performance of the NAT device.
Create a NAT gateway then specify its corresponding subnet and Elastic IP address. Update the route tables of the private subnet to point the Internet traffic to the NAT gateway.
(Correct)
Explanation
You can use a NAT device to enable instances in a private subnet to connect to the internet (for example, for software updates) or other AWS services, but prevent the internet from initiating connections with the instances. A NAT device forwards traffic from the instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances. When traffic goes to the internet, the source IPv4 address is replaced with the NAT device’s address and similarly, when the response traffic goes to those instances, the NAT device translates the address back to those instances’ private IPv4 addresses.
AWS offers two kinds of NAT devices—a NAT gateway or a NAT instance. It is recommended to use NAT gateways, as they provide better availability and bandwidth over NAT instances. The NAT Gateway service is also a managed service that does not require your administration efforts.
A NAT instance is launched from a NAT AMI and you can choose to use a NAT instance for special purposes. However, this type of NAT device is limited and is not highly available compared with a NAT Gateway.
Therefore, the correct answer is: Create a NAT gateway then specify its corresponding subnet and Elastic IP address. Update the route tables of the private subnet to point the Internet traffic to the NAT gateway.
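A minimal sketch of provisioning the NAT gateway and pointing the private subnet at it (the subnet and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT gateway in a PUBLIC subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb2222c",   # public subnet (placeholder)
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send Internet-bound traffic from the private subnet's route table to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0ddd3333eeee4444f",  # private subnet route table (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)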
The option that says: Launch a larger NAT instance with the enhanced networking feature enabled to improve the availability and performance of the NAT device is incorrect. Even if you upgrade your NAT device to a larger instance, it would still be a single component. This means that if your NAT instance goes down, there would be no other instance to handle the requests, which means that your architecture is not highly available. It is better to use a NAT Gateway to provide better availability and bandwidth for your infrastructure.
The option that says: Launch two large NAT instances in two separate public subnets and add a route from the private subnet to each NAT instance to make it more fault-tolerant and highly available is incorrect. Although this solution is indeed highly available and fault-tolerant, this entails a lot of administrative effort to manage those two NAT instances. Hence, it is still better to use the NAT Gateway service since it is a managed service that does not require administrative effort.
The option that says: Create an egress-only Internet gateway. Update the route tables of the private subnet to point the Internet traffic to the egress-only Internet gateway is incorrect because an egress-only Internet gateway is primarily used to handle IPv6 traffic, which is not mentioned in this scenario.
References:
Check out this Amazon VPC Cheat Sheet:
Question 69: Skipped
A global real estate startup is looking for an option of adding a cost-effective location-based alert to their iOS and Android mobile apps. Their users will receive alerts on their mobile device regarding real estate offers in proximity to their current location and the delivery time for the push notifications should be less than a minute. The existing mobile app has an initial 2 million users worldwide and is rapidly growing. What is the most suitable architecture to use in this scenario?
Set up an architecture where the mobile app will send the user's location to an SQS queue and a fleet of On-Demand EC2 instances will retrieve the relevant offers from an Amazon Aurora database. Once the data has been processed, use AWS Device Farm to send out the offers to the mobile app.
Set up an architecture where the mobile app will send the user's location to an SQS queue and a fleet of On-Demand EC2 instances will retrieve the relevant offers from a DynamoDB table. Once the data has been processed, use AWS SNS Mobile Push to send out the offers to the mobile app.
(Correct)
Set up an architecture where there is an Auto Scaling group of On-Demand EC2 instances behind an API Gateway that retrieve the relevant offers from RDS. Once the data has been processed, use AWS AppSync to send out the offers to the mobile app.
Set up an architecture where the mobile app will send the user's location to an API Gateway with Lambda functions which process and retrieve the relevant offers from an RDS database. Once the data has been processed, use Amazon Pinpoint to send out the offers to the mobile app.
Explanation
With Amazon SNS Mobile Push Notifications, you have the ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts.
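A short sketch of the push side (the platform application ARN and device token are placeholders): each mobile device registers a token, which is exchanged for an SNS platform endpoint that the backend can publish to.

import boto3

sns = boto3.client("sns")

# Register the device token (from APNs or FCM) as an SNS platform endpoint.
endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn="arn:aws:sns:us-east-1:111122223333:app/GCM/RealEstateOffers",  # placeholder
    Token="device-registration-token",  # placeholder token from the mobile OS
)

# Push a location-based offer to that specific device.
sns.publish(
    TargetArn=endpoint["EndpointArn"],
    Message="New listing 500m from you: 2BR condo, open house today!",
)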
The option that says: Set up an architecture where the mobile app will send the user's location to an SQS queue and a fleet of On-Demand EC2 instances will retrieve the relevant offers from a DynamoDB table. Once the data has been processed, use AWS SNS Mobile Push to send out the offers to the mobile app is correct because SQS is a highly scalable, cost-effective solution for carrying out utility tasks such as holding the location of millions of users. In addition, it uses a highly scalable DynamoDB table and a cost-effective AWS SNS Mobile Push service to send push notification messages directly to the mobile apps.
The option that says: Set up an architecture where there is an Auto Scaling group of On-Demand EC2 instances behind an API Gateway that retrieve the relevant offers from RDS. Once the data has been processed, use AWS AppSync to send out the offers to the mobile app is incorrect. Although a combination of On-Demand EC2 instances and API Gateway can provide a scalable computing system, it is wrong to use AWS AppSync for push notification to mobile devices. You should use AWS SNS Mobile Push service instead.
The option that says: Set up an architecture where the mobile app will send the user's location to an API Gateway with Lambda functions which process and retrieve the relevant offers from an RDS database. Once the data has been processed, use Amazon Pinpoint to send out the offers to the mobile app is incorrect. Although it is correct to use Amazon Pinpoint to send push notifications, RDS is not a suitable database for the mobile app because it does not scale as well as DynamoDB when processing data from users around the globe. Usually, mobile applications do not have complicated table relationships; hence, it is recommended to use a NoSQL database like DynamoDB. [-> Why DynamoDB]
The option that says: Set up an architecture where the mobile app will send the user's location to an SQS queue and a fleet of On-Demand EC2 instances will retrieve the relevant offers from an Amazon Aurora database. Once the data has been processed, use AWS Device Farm to send out the offers to the mobile app is incorrect because AWS Device Farm is an app testing service and is not used to push notifications to various mobile devices. It only lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real-time.
References:
Check out this Amazon SNS Cheat Sheet:
Question 72: Skipped
A leading media company is building a collaborative news website that is expected to have over 5 million readers per month globally. Each article contains a cover image and has at least 200 words. Based on the trend of their other websites, the new articles are highly browsed in the first 2 months and the authors tend to frequently update the articles on the first month after its publication. The readership is also expected to drop on the 3rd month and the articles are usually rarely accessed after a year [S3 -> ...]. The readers are also leaving a lot of comments within the first 3 months of publishing.
In this scenario, which of the following items can you use to build a durable, highly available, and scalable architecture for the news website? (Select TWO.)
Use Lambda with Auto-Healing enabled. [?]
Use CloudFront as a Content Delivery Network to load the articles much faster anywhere in the globe.(Correct)
Use EBS Volumes in RAID 0 configuration to store the static data such as the cover images and other media. [?]
Use Amazon RDS Multi-AZ deployments with Read Replicas. Use S3 to store the static data such as the cover images and other media.
(Correct)
Launch an RDS Oracle Real Application Clusters (RAC) [-> RAC not supported] with Read Replicas.
Explanation
In this scenario, the main objective is to provide a durable, highly-available and scalable architecture for the website. You can use CloudFront as a CDN, then Amazon RDS with Multi-AZ deployments and Read Replicas to provide scalability and high-availability for the millions of incoming traffic every month. Lastly, you can use an S3 bucket to durably store the images and other static media content of the website.
Therefore, the correct answer is: Use CloudFront as a Content Delivery Network to load the articles much faster anywhere in the globe and Use Amazon RDS Multi-AZ deployments with Read Replicas. Use S3 to store the static data such as the cover images and other media.
The option that says: Launch an RDS Oracle Real Application Clusters (RAC) with Read Replicas is incorrect because Oracle RAC is not supported in RDS.
The option that says: Use Lambda with Auto-Healing enabled is incorrect. AWS Lambda is serverless and does not have an Auto-Healing option; its functions are event-driven and the underlying compute is fully managed by AWS.
The option that says: Use EBS Volumes in RAID 0 configuration to store the static data such as the cover images and other media is incorrect. RAID 0 only stripes data across volumes for performance and provides no redundancy, and even a redundant RAID 1 (mirroring) configuration of EBS volumes is not as durable or as scalable as S3 for serving static content.
References:
Check out these Amazon RDS and Amazon S3 Cheat Sheets:
Question 75: Skipped
A company is implementing cloud best practices for its infrastructure. The Solutions Architect is using AWS CloudFormation templates for infrastructure-as-code of its two-tier web application. The application frontend is hosted on an Auto Scaling group of Amazon EC2 instances while the database is an Amazon RDS for MySQL instance. For security purposes, the database password must be rotated every 60 days. [Secret Manager]
Which of the following solutions is the MOST secure way to store and retrieve the database password for the web application?
On the CloudFormation template, create a database password parameter. Add a UserData [x] property to reference the password parameter in the initialization script of the Auto Scaling group’s launch template using the Ref intrinsic function. Save the password inside the EC2 instance upon its launch. Use the Ref intrinsic function to reference the parameter as the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource.
On the CloudFormation template, create an encrypted parameter using AWS Systems Manager [x] Parameter Store for the database password. Add a UserData property to reference the encrypted parameter in the initialization script of the Auto Scaling group’s launch template. Use the Fn::GetAtt intrinsic function to reference the encrypted parameter as the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource.
On the CloudFormation template, create an AWS Secrets Manager secret resource for the database password. Modify the application to retrieve the database password from Secrets Manager when it launches. Use a dynamic reference for the secret resource to be placed as the value of the MasterUserPassword property of the AWS::RDS::DBInstance resource.
(Correct)
On the CloudFormation template, create an AWS Secrets Manager secret resource for the database password. Add a UserData property to reference the secret resource in the initialization script of the Auto Scaling group’s launch template using the Ref intrinsic function. Use the Ref intrinsic function to reference the secret resource as the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource.
Explanation
AWS Secrets Manager integrates with AWS CloudFormation so you can create and retrieve secrets securely using CloudFormation. This integration makes it easier to automate provisioning your AWS infrastructure. For example, without any code changes, you can generate unique secrets for your resources with every execution of your CloudFormation template. This also improves the security of your infrastructure by storing secrets securely, encrypting automatically, and enabling rotation more easily. Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources.
CloudFormation helps you model your AWS resources as templates and execute these templates to provision AWS resources at scale. Some AWS resources require secrets as part of the provisioning process. For example, to provision a MySQL database, you must provide the credentials for the database superuser. You can use Secrets Manager, the AWS dedicated secrets management service, to create and manage such secrets.
Secrets Manager makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can reference Secrets Manager in your CloudFormation templates to create unique secrets with every invocation of your template. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. Secrets Manager ensures the secret isn’t logged or persisted by CloudFormation by using a dynamic reference to the secret. You can configure Secrets Manager to rotate your secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for all Amazon RDS databases and supports extensibility with AWS Lambda so you can meet your custom rotation requirements.
Dynamic references provide a compact, powerful way for you to specify external values that are stored and managed in other services, such as the Systems Manager Parameter Store, in your stack templates. When you use a dynamic reference, CloudFormation retrieves the value of the specified reference when necessary during stack and change set operations.
Secrets Manager resource types supported in CloudFormation:
- AWS::SecretsManager::Secret — Create a secret and store it in Secrets Manager.
- AWS::SecretsManager::ResourcePolicy — Create a resource-based policy and attach it to a secret. Resource-based policies enable you to control access to secrets.
- AWS::SecretsManager::SecretTargetAttachment — Complete the link between the secret and the database or service whose credentials it stores, which is required before rotation can run.
- AWS::SecretsManager::RotationSchedule — Configure automatic rotation and define the Lambda function that will be used to rotate the secret.
Therefore, the correct answer is: On the CloudFormation template, create an AWS Secrets Manager secret resource for the database password. Modify the application to retrieve the database password from Secrets Manager when it launches. Use a dynamic reference for the secret resource to be placed as the value of the MasterUserPassword property of the AWS::RDS::DBInstance resource. You can use dynamic references in CloudFormation to specify external values that are stored and managed in other services, such as AWS SSM Parameter Store or AWS Secrets Manager. Your application can then retrieve the database password from AWS Secrets Manager whenever it needs to.
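On the application side, retrieving the rotated password at launch might look like the following sketch (the secret name and JSON field names are assumptions; RDS secrets in Secrets Manager conventionally store a JSON document with username, password, and host fields):

import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the current version of the secret at application start-up; after each
# 60-day rotation the same call simply returns the new password.
response = secrets.get_secret_value(SecretId="prod/webapp/mysql")  # placeholder secret name
secret = json.loads(response["SecretString"])

db_user = secret["username"]
db_password = secret["password"]
db_host = secret.get("host", "mydb.example.internal")  # placeholder fallback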
The option that says: On the CloudFormation template, create a database password parameter. Add a UserData property to reference the password parameter in the initialization script of the Auto Scaling group’s launch template using the Ref intrinsic function. Save the password inside the EC2 instance upon its launch. Use the Ref intrinsic function to reference the parameter as the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource is incorrect. Using a normal parameter in CloudFormation is not secure. This will require you to either store the password on the template itself or input it on the AWS web console.
The option that says: On the CloudFormation template, create an AWS Secrets Manager secret resource for the database password. Add a UserData property to reference the secret resource in the initialization script of the Auto Scaling group’s launch template using the Ref intrinsic function. Use the Ref intrinsic function to reference the secret resource as the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource is incorrect. Using the user data scripts to retrieve the database password may expose the password to the environment of the operating system of the EC2 instance. It is more secure to configure the application to retrieve the secret resource when the application launches so it is not exposed to anything outside the application.
The option that says: On the CloudFormation template, create an encrypted parameter using AWS Systems Manager Parameter Store for the database password. Add a UserData property to reference the encrypted parameter in the initialization script of the Auto Scaling group’s launch template. Use the Fn::GetAtt intrinsic function to reference the encrypted parameter as the value of the MasterUserPassword property in the AWS::RDS::DBInstance resource is incorrect. AWS Systems Manager Parameter Store can hold an encrypted password as a SecureString parameter; however, it cannot generate a password for you the way Secrets Manager can, and it does not support automatic rotation of the parameter.
References:
Check out these AWS Secrets Manager and AWS CloudFormation Cheat Sheets:
Question 9: Skipped
A startup is building a mobile app and a custom GraphQL API backend that lets people post photos and videos of road potholes, faulty street lights, bridge damages, and other issues in the public infrastructure with 100-character summaries. The data gathered by the system will be used by the department of public works to facilitate fast resolution. The developers used a javascript-based React Native mobile framework so that it would run on various mobile and tablet devices. The app will be connecting to a custom GraphQL API that will be responsible for storing the photos and videos in an Amazon S3 bucket and will also access a DynamoDB table to store the summaries. The developers have recently deployed the mobile app prototype but it was found that there is an availability issue with the custom GraphQL API. To proceed with the project, the team decided to remove the API and instead, re-model the mobile app so that it will directly connect to both DynamoDB and S3 as well as handle user authentication. [API Gateway, Cognito]
Which of the following options provides the most cost-effective and scalable architecture for this project?
1. Set up a web identity federation using the AssumeRole API of STS and register with social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Create an IAM user for that provider and set up permissions for the IAM user to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS access and secret keys to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
1. Set up a web identity federation using Cognito and social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Configure the IAM role in Cognito to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS access and secret keys to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
1. Set up a web identity federation using the AssumeRoleWithWebIdentity API of STS and register with social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Create an IAM role for that provider and set up permissions for the IAM role to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS access and secret keys to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
1. Set up a web identity federation using the AssumeRoleWithSAML API of STS and register with social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Create an IAM role for that provider and set up permissions for the IAM role to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS temporary security credentials to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
1. Set up a web identity federation using the AssumeRoleWithWebIdentity API of STS and register with social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Create an IAM role for that provider and set up permissions for the IAM role to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS temporary security credentials to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
(Correct)
Explanation
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known external identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP. They can receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application.
In this scenario, you have a mobile app that needs to have access to the DynamoDB and S3 bucket. You can achieve this by using Web Identity Federation with AssumeRoleWithWebIdentity API which provides temporary security credentials and an IAM role.
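A minimal sketch of the exchange (the role ARN, bucket, and the IdP token value are placeholders): the app sends the token it received from the OIDC provider to STS and gets back temporary credentials scoped to the role.

import boto3

sts = boto3.client("sts")

# id_token is the OIDC token returned by Login with Amazon, Google, Facebook, etc.
id_token = "eyJraWQiOi..."  # placeholder token from the identity provider

response = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/PotholeAppFederatedRole",  # placeholder
    RoleSessionName="mobile-user-42",
    WebIdentityToken=id_token,
    DurationSeconds=3600,
)
creds = response["Credentials"]

# Temporary credentials only; no long-term access keys are shipped with the app.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("pothole.jpg", "public-infra-reports", "reports/pothole-42.jpg")  # placeholders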
Thus, the correct answer here is the following option:
1. Set up a web identity federation using the AssumeRoleWithWebIdentity API of STS and register with social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Create an IAM role for that provider and set up permissions for the IAM role to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS temporary security credentials to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
The following option is incorrect because you cannot use AssumeRole API and an IAM user in this scenario:
1. Set up a web identity federation using the AssumeRole API of STS and register with social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Create an IAM user for that provider and set up permissions for the IAM user to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS access and secret keys to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
The following option is incorrect because you should use the AssumeRoleWithWebIdentity API instead of the AssumeRoleWithSAML API, which is intended for SAML 2.0-based federation rather than OIDC social identity providers:
1. Set up a web identity federation using the AssumeRoleWithSAML API of STS and register with social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Create an IAM role for that provider and set up permissions for the IAM role to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS temporary security credentials to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
The following option is incorrect because it is a security risk to store and use the AWS access and secret keys from the mobile app itself:
1. Set up a web identity federation using the AssumeRoleWithWebIdentity API of STS and register with social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Create an IAM role for that provider and set up permissions for the IAM role to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS access and secret keys to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
The following option is incorrect. Even though the use of Cognito is valid, it is wrong to store and use the AWS access and secret keys from the mobile app itself. This is a security risk and you should use the temporary security credentials instead:
1. Set up a web identity federation using Cognito and social identity providers like Amazon, Google, Facebook or any other OpenID Connect (OIDC)-compatible IdP.
2. Configure the IAM role in Cognito to allow access to S3 and DynamoDB.
3. The mobile app will use the AWS access and secret keys to store the photos and videos to an S3 bucket and persist the summaries to the DynamoDB database.
References:
Check out this AWS IAM Cheat Sheet:
Question 15: Skipped
A company has recently finished developing a web application that will soon be put into production. Before it is transferred into the production environment, a final test run must be conducted. Only the employees can access the web app - either from the corporate network or from the Internet. The manager instructed the solutions architect to ensure that the EC2 instance hosting the application server will not be exposed to the Internet. [Private Subnet? SG? NACL? ]
Which of the following options is the recommended implementation to fulfill the company requirements?
1. Use IPsec VPN that would allow your employees to access the network of your application servers.
2. Create a public subnet in your VPC and launch your application servers in it.
1. Configure SSL VPN on the public subnet of your VPC.
2. Install an SSL VPN client software on all employee workstations.
3. Create a private subnet in your VPC and place your application servers in it.
(Correct)
1. Launch an Elastic Load Balancer for your EC2 instances that terminates SSL to them.
2. Create a public subnet in your VPC and launch your application servers in it.
1. Use AWS Direct Connect to hook up your employee workstations to the VPC via a private interface.
2. Create a public subnet and place your application servers in it.
Explanation
In this scenario, you have a web application that is still under development, and access must be limited to employees connecting either from the corporate network or over the public Internet. You can implement an SSL VPN solution in which employees connect and authenticate first, and only then are they granted access to the application. This way, you can launch the web servers in a private subnet and still reach them over the Internet via the VPN.
Therefore, the correct answer is:
1. Configure SSL VPN on the public subnet of your VPC.
2. Install an SSL VPN client software on all employee workstations.
3. Create a private subnet in your VPC and place your application servers in it.
The following option is incorrect. Even though an IPSec VPN may work, your application servers are still exposed since you launched them in a public subnet:
1. Use IPsec VPN that would allow your employees to access the network of your application servers.
2. Create a public subnet in your VPC and launch your application servers in it.
The following option is incorrect because you don't need to set up an AWS Direct Connect connection in order to meet the requirement. Additionally, it is costly to maintain this connection considering that a high-bandwidth link between the customer and AWS is not required:
1. Use AWS Direct Connect to hook up your employee workstations to the VPC via a private interface.
2. Create a public subnet and place your application servers in it.
The following option is incorrect because terminating SSL on the EC2 instances does not meet the requirement:
1. Launch an Elastic Load Balancer for your EC2 instances that terminates SSL to them.
2. Create a public subnet in your VPC and launch your application servers in it.
The application servers are still exposed because they are deployed in a public subnet.
References:
Check out this Amazon VPC Cheat Sheet:
Question 22: Skipped
A company is running hundreds of Linux-based Amazon EC2 instances launched with custom AMIs that are dedicated to specific products and services. As part of the security compliance requirements, vulnerability scanning must be done on all EC2 instances wherein each instance must be scanned and pass a Common Vulnerabilities and Exposures (CVE) assessment [-> Inspector]. Since the development team relies heavily on the custom AMIs for their deployments, the company wants to have an automated process [-> SSM Automation document] to run the security assessment on any new AMIs and properly tag them before they can be used by the developers. To ensure continuous compliance, the security-approved AMIs must also be scanned every 30 days [-> EventBridge (30-day interval cron rule)] to check for new vulnerabilities and apply the necessary patches.
Which of the following steps should the Solutions Architect implement to achieve the security requirements? (Select TWO.)
Write a Lambda function that will create automatic approval rules. Create a parameter on AWS SSM Parameter Store to save the list of all security-approved AMI. Set up a managed rule on AWS Config to continuously scan all running EC2 instances. For any detected vulnerability, run the designated SSM Automation document.
Install the AWS Systems Manager (SSM) agent on all EC2 instances. With the agent running, run a detailed CVE assessment scan on the EC2 instances launched from the AMIs that need scanning.
Create an Assessment template on Amazon Inspector to target the EC2 instances. Run a detailed CVE assessment scan on all running Amazon EC2 instances launched from the AMIs that need scanning.
(Correct)
Check AWS CloudTrail logs to determine the Amazon EC2 instance IDs that were launched from the AMIs that need scanning. Use AWS Config managed rule to run CVE assessment and remediation on the instances.
Develop a Lambda function that will create automatic approval rules. Create a parameter on AWS SSM Parameter Store to save the list of all security-approved AMI. Set up a 30-day interval cron rule on Amazon EventBridge to trigger an AWS SSM Automation document run on all EC2 instances.
(Correct)
Explanation
AWS Systems Manager Automation simplifies common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. Automation enables you to do the following:
- Build automations to configure and manage instances and AWS resources.
- Create custom runbooks or use pre-defined runbooks maintained by AWS.
- Receive notifications about Automation tasks and runbooks by using Amazon EventBridge.
- Monitor Automation progress and details by using the AWS Systems Manager console.
SSM Automation offers one-click automation for simplifying complex tasks such as creating golden Amazon Machine Images (AMIs) and recovering unreachable EC2 instances. For example, you can use the AWS-UpdateLinuxAmi and AWS-UpdateWindowsAmi runbooks to create golden AMIs from a source AMI. You can run custom scripts before and after updates are applied, and you can include or exclude specific packages from being installed.
With Amazon EventBridge, you can create rules that self-trigger on an automated schedule using cron or rate expressions. Rate expressions are simpler to define but don't offer the fine-grained schedule control that cron expressions support. For example, with a cron expression, you can define a rule that triggers at a specified time on a certain day of each week or month. With this, you can schedule AWS SSM Automation documents to run and remediate the vulnerable AMIs.
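A minimal sketch of that schedule with boto3 is shown below; the rule name, IAM role, and the automation-definition ARN are hypothetical and should be verified for your own account and Region:

```python
import boto3

events = boto3.client("events")

# Hypothetical names/ARNs for illustration; verify the automation-definition ARN
# format and the IAM role permissions for your own account and Region.
RULE_NAME = "rescan-approved-amis"
DOC_ARN = "arn:aws:ssm:us-east-1:111122223333:automation-definition/PatchApprovedAMIs:$DEFAULT"
ROLE_ARN = "arn:aws:iam::111122223333:role/EventBridgeSSMAutomationRole"

# A rate expression that fires once every 30 days.
events.put_rule(Name=RULE_NAME, ScheduleExpression="rate(30 days)", State="ENABLED")

# Point the rule at the SSM Automation runbook.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "ssm-automation", "Arn": DOC_ARN, "RoleArn": ROLE_ARN}],
)
```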
You can use Amazon Inspector to conduct a detailed scan for CVE in your fleet of EC2 instances. Amazon Inspector offers predefined software called an agent that you can optionally install in the operating system of the EC2 instances that you want to assess. Amazon Inspector also has rules packages that help verify whether the EC2 instances in your assessment targets are exposed to common vulnerabilities and exposures (CVEs). Attacks can exploit unpatched vulnerabilities to compromise the confidentiality, integrity, or availability of your service or data. The CVE system provides a reference method for publicly known information security vulnerabilities and exposures.
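A minimal boto3 sketch of setting up such an assessment with the Amazon Inspector Classic API, assuming hypothetical target/template names and a placeholder for the Region-specific CVE rules package ARN:

```python
import boto3

inspector = boto3.client("inspector")  # Amazon Inspector Classic API

# Hypothetical names; the CVE rules package ARN differs per Region.
target = inspector.create_assessment_target(assessmentTargetName="ami-fleet")
template = inspector.create_assessment_template(
    assessmentTargetArn=target["assessmentTargetArn"],
    assessmentTemplateName="cve-assessment",
    durationInSeconds=3600,
    rulesPackageArns=["<CVE rules package ARN for your Region>"],
)

# Kick off the CVE scan against the targeted EC2 instances.
run = inspector.start_assessment_run(
    assessmentTemplateArn=template["assessmentTemplateArn"],
    assessmentRunName="cve-assessment-run",
)
print(run["assessmentRunArn"])
```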
The option that says: Develop a Lambda function that will create automatic approval rules. Create a parameter on AWS SSM Parameter Store to save the list of all security-approved AMI. Set up a 30-day interval cron rule on Amazon EventBridge to trigger an AWS SSM Automation document run on all EC2 instances is correct because it satisfies the requirement for updating the security-approved AMIs, along with scheduled patching every 30 days using an SSM Automation document. AWS SSM Automation can automatically create updated AMIs after patches are applied.
The option that says: Create an Assessment template on Amazon Inspector to target the EC2 instances. Run a detailed CVE assessment scan on all running Amazon EC2 instances launched from the AMIs that need scanning is correct because Amazon Inspector can run assessments on target EC2 instances to check if they are exposed to common vulnerabilities and exposures (CVEs).
The option that says: Install the AWS Systems Manager (SSM) agent on all EC2 instances. With the agent running, run a detailed CVE assessment scan on the EC2 instances launched from the AMIs that need scanning is incorrect because the SSM agent cannot run a detailed CVE assessment scan on EC2 instances. You have to use Amazon Inspector to satisfy the given requirement.
The option that says: Write a Lambda function that will create automatic approval rules. Create a parameter on AWS SSM Parameter Store to save the list of all security-approved AMI. Set up a managed rule on AWS Config to continuously scan all running EC2 instances. For any detected vulnerability, run the designated SSM Automation document is incorrect because AWS Config cannot automatically run checks on the operating system of your Amazon EC2 instances. Moreover, the requirement is to run the assessment every 30 days only, not continuously.
The option that says: Check AWS CloudTrail logs to determine the Amazon EC2 instance IDs that were launched from the AMIs that need scanning. Use AWS Config managed rule to run CVE assessment and remediation on the instances is incorrect. Although it is possible to parse the EC2 instance IDs from CloudTrail and determine the vulnerable instances, you still cannot run the CVE assessment in AWS Config for your Amazon EC2 instances. Using Amazon Inspector is the most suitable service to use in running the CVE assessment.
CVE -> Inspector
Regular schedule checking -> EventBridge
References:
Check out the AWS Systems Manager and AWS Inspector Cheat Sheet:
Question 23: Skipped
A company has a large Microsoft Windows Server running on a public subnet. There are EC2 instances hosted on a private subnet that allows Remote Desktop Protocol (RDP) connections to the Windows Server via port 3389. These instances enable the Microsoft Administrators to connect to the public servers and troubleshoot any server failures.
The server must always have the latest operating system upgrades to improve security and it must be accessible at any given point in time. The administrators are tasked to refactor the existing solution and manage the server patching activities effectively, even outside the regular maintenance window.
Which of the following provides the LEAST amount of administrative overhead in managing the server?
Launch the Windows Server on Amazon EC2 instances. Use AWS Systems Manager Patch Manager to manage the patching process for the server. Configure it to automatically apply patches as they become available, ensuring that the server is always up-to-date with the latest operating system upgrades.
(Correct)
Launch the server in Amazon Lightsail with the recommended Amazon AMI. Set up a combination of Amazon EventBridge and AWS Lambda scheduled event to call the Upgrade Operating System API in Amazon Lightsail to apply system updates.
Launch a hardened machine image from the AWS Marketplace and host the server in AWS Cloud9. Set up the AWS Systems Manager Patch Manager to automatically apply system updates. Use Amazon AppStream 2.0 to act as a bastion host.
Launch an AWS AppSync environment with a single EC2 instance that runs the Windows Server. Set up the environment with a custom AMI to utilize a hardened machine image that can be downloaded from AWS Marketplace. Configure the AWS Systems Manager Patch Manager to automatically apply the OS updates.
Explanation
Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
AWS Systems Manager Patch Manager can automate the process of patching managed instances, including both security-related updates and other types of updates. It uses the appropriate built-in mechanism for an operating system type to install updates on a managed node.
Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as optional lists of approved and rejected patches. This ensures that the server is always up-to-date with the latest operating system upgrades.
You can also leverage patch groups to organize instances for patching, such as different environments/tagged instances like development, test, and production.
Furthermore, you can schedule patching to run as a Maintenance Windows task, ensuring that patching activities are managed effectively even outside the regular maintenance window.
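As an illustration, a patch run can also be triggered on demand against the managed instances using the AWS-RunPatchBaseline document; the "Patch Group" tag value used for targeting below is a hypothetical example:

```python
import boto3

ssm = boto3.client("ssm")

# AWS-RunPatchBaseline is the managed document Patch Manager uses under the hood.
# The "Patch Group" tag value is a hypothetical example.
resp = ssm.send_command(
    Targets=[{"Key": "tag:Patch Group", "Values": ["windows-prod"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},  # "Scan" would only report missing patches
    Comment="Ad-hoc OS patching outside the regular maintenance window",
)
print(resp["Command"]["CommandId"])
```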
This solution would provide the least amount of administrative overhead as it automates the patching process, reducing the need for manual intervention.
Hence, the correct answer is: Launch the Windows Server on Amazon EC2 instances. Use AWS Systems Manager Patch Manager to manage the patching process for the server. Configure it to automatically apply patches as they become available, ensuring that the server is always up-to-date with the latest operating system upgrades.
The option that says: Launch an AWS AppSync environment with a single EC2 instance that runs the Windows Server. Set up the environment with a custom AMI to utilize a hardened machine image that can be downloaded from AWS Marketplace. Configure the AWS Systems Manager Patch Manager to automatically apply the OS updates is incorrect because the AWS AppSync service is a fully managed service for developing GraphQL APIs, not for creating an environment with a single Windows Server instance.
The option that says: Launch a hardened machine image from the AWS Marketplace and host the server in AWS Cloud9. Set up the AWS Systems Manager Patch Manager to automatically apply system updates. Use Amazon AppStream 2.0 to act as a bastion host is incorrect because you cannot serve any machine image using AWS Cloud9.
The option that says: Launch the server in Amazon Lightsail with the recommended Amazon AMI. Set up a combination of Amazon EventBridge and AWS Lambda scheduled event to call the Upgrade Operating System API in Amazon Lightsail to apply system updates is incorrect. There is no Upgrade Operating System API call in Amazon Lightsail.
References:
Check out this AWS Systems Manager Cheat Sheet:
Question 24: Skipped
A photo-sharing website uses a CloudFront distribution with a default name (dtut0r1al5doj0.cloudfront.net) to distribute its static contents [-> S3]. It uses an ELB in front of an Auto Scaling group of Spot EC2 instances deployed across two Availability Zones. The website has a poor search ranking in Google as it doesn't use secure HTTPS/SSL on its site. [-> ACM; configure CloudFront to use an SSL/TLS cert by changing the Viewer Protocol Policy to Redirect to HTTPS]
Which of the following are valid options in order to require HTTPS for communication between the viewers and CloudFront? (Select TWO.)
Use a self-signed certificate in the ELB.
Use a self-signed SSL/TLS certificate in the ELB which is stored in a private S3 bucket.
Configure the ELB to use its default SSL/TLS certificate.
Configure CloudFront to use its default SSL/TLS certificate by changing the Viewer Protocol Policy setting for one or more cache behaviors to require HTTPS communication.
(Correct)
Set the Viewer Protocol Policy to use Redirect HTTP to HTTPS or HTTPS Only.
(Correct)
Explanation
If you're using the domain name that CloudFront assigned to your distribution, such as dtut0ria1sd0jo.cloudfront.net, you can change the Viewer Protocol Policy setting for one or more cache behaviors to require HTTPS communication by setting it to either Redirect HTTP to HTTPS or HTTPS Only. In that configuration, CloudFront provides its default SSL/TLS certificate.
If your origin is an Elastic Load Balancing load balancer, you can use a certificate provided by AWS Certificate Manager (ACM). You can also use a certificate that is signed by a trusted third-party certificate authority and imported into ACM. Note that you can't use a self-signed certificate for HTTPS communication between CloudFront and your origin.
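As a sketch of how the viewer-facing setting could be changed programmatically (the distribution ID below is hypothetical), the default cache behavior's viewer protocol policy can be updated in place:

```python
import boto3

cloudfront = boto3.client("cloudfront")
DISTRIBUTION_ID = "E1EXAMPLE"  # hypothetical distribution ID

# Fetch the current configuration and its ETag (required for the update call).
result = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config, etag = result["DistributionConfig"], result["ETag"]

# Require HTTPS between viewers and CloudFront on the default cache behavior.
config["DefaultCacheBehavior"]["ViewerProtocolPolicy"] = "redirect-to-https"  # or "https-only"

cloudfront.update_distribution(Id=DISTRIBUTION_ID, IfMatch=etag, DistributionConfig=config)
```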
Therefore, the following options are correct:
- Set the Viewer Protocol Policy to use Redirect HTTP to HTTPS or HTTPS Only.
- Configure CloudFront to use its default SSL/TLS certificate by changing the Viewer Protocol Policy setting for one or more cache behaviors to require HTTPS communication.
The option that says: Use a self-signed SSL/TLS certificate in the ELB which is stored in a private S3 bucket is incorrect because you don't need to add an SSL certificate if you only require HTTPS for communication between the viewers and CloudFront. You should only do this if you require HTTPS between your origin and CloudFront. In addition, you can't use a self-signed certificate in this scenario even though it is stored in a private S3 bucket. You need to use either a certificate from ACM or a third-party certificate.
The option that says: Use a self-signed certificate in the ELB is incorrect. As explained in the previous paragraph, adding an SSL certificate in the ELB is not required. Additionally, using a self-signed certificate for public websites is not recommended.
The option that says: Configure the ELB to use its default SSL/TLS certificate is incorrect. There is no default SSL certificate in ELB, unlike what we have in CloudFront (*.cloudfront.net). As previously explained, adding an SSL certificate in the ELB is not required.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 30: Skipped
A company has a critical application running on an Auto Scaling group of Amazon EC2 instances. The application CI/CD pipelines are created on AWS CodePipeline and all of the relevant AWS resources are defined in AWS CloudFormation templates. During deployments, the Auto Scaling group spawns new instances and the user data script downloads the new artifact from a central Amazon S3 bucket. With several code updates during the development cycle, a recent update on the CloudFormation templates has caused a major application downtime.
Which of the following solutions should the Solutions Architect implement to reduce the chances of downtime during deployments?
Update the CloudFormation templates to include cfn helper scripts. This will detect and report conditions during deployments to ensure that only healthy deployments are continued. Create test plans for the quality assurance team to ensure that changes are tested on a non-production environment before applying to production.
Set up a blue/green deployment pattern on AWS CodeDeploy using CloudFormation to update the user data deployment scripts. Manually login to the instances and perform tests to verify that the deployment is successful and the application is running as expected.
Check the CloudFormation templates for errors with the help of plugins on the integrated development environment (IDE). Ensure that the templates are valid using AWS CLI. Include cfn helper scripts on the deployment code to detect and report for errors. Deploy on a non-production environment and perform manual testing before applying changes to production.
Add an AWS CodeBuild stage on the deployment pipeline to automatically test on a non-production environment. Leverage change sets on AWS CloudFormation to preview changes before applying to production. Set up a blue/green deployment pattern on AWS CodeDeploy to deploy changes on a separate environment and to quickly rollback if needed.
(Correct)
Explanation
AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools.
You can automate your release process by using AWS CodePipeline to test your code and run your builds with AWS CodeBuild. This involves two main steps:
- Create a continuous delivery (CD) pipeline with CodePipeline that automates builds with CodeBuild.
- Add test and build automation with CodeBuild to an existing pipeline in CodePipeline.
When you need to update a CloudFormation stack, understanding how your changes will affect running resources before you implement them can help you update stacks with confidence. Change sets allow you to preview how proposed changes to a stack might impact your running resources, for example, whether your changes will delete or replace any critical resources. AWS CloudFormation makes the changes to your stack only when you decide to execute the change set, allowing you to decide whether to proceed with your proposed changes or to explore other changes by creating another change set.
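A minimal sketch of that preview workflow with boto3 (the stack name, change set name, and template URL are hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")
STACK, CHANGE_SET = "critical-app-stack", "preview-artifact-update"

cfn.create_change_set(
    StackName=STACK,
    ChangeSetName=CHANGE_SET,
    TemplateURL="https://s3.amazonaws.com/example-bucket/updated-template.yaml",
    Capabilities=["CAPABILITY_IAM"],
)

# In practice, wait for the change set to reach CREATE_COMPLETE before describing it.
details = cfn.describe_change_set(StackName=STACK, ChangeSetName=CHANGE_SET)
for change in details["Changes"]:
    rc = change["ResourceChange"]
    print(rc["Action"], rc["LogicalResourceId"], rc.get("Replacement"))

# Apply only after the review looks safe:
# cfn.execute_change_set(StackName=STACK, ChangeSetName=CHANGE_SET)
```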
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it possible to automate the deployment of code to either Amazon EC2 or on-premises instances. AWS CodeDeploy supports blue/green deployments. AWS CodeDeploy offers two ways to perform blue/green deployments:
- In the first approach, AWS CodeDeploy makes a copy of an Auto Scaling group. It, in turn, provisions new Amazon EC2 instances, deploys the application to these new instances, and then redirects traffic to the newly deployed code.
- In the second approach, you use instance tags or an Auto Scaling group to select the instances that will be used for the green environment. AWS CodeDeploy then deploys the code to the tagged instances.
In the following figure, the release manager uses the workstation instance to push a new version of the application to AWS CodeDeploy and starts a blue-green deployment. AWS CodeDeploy creates a copy of the Auto Scaling group. It launches two new web server instances just like the original two. AWS CodeDeploy installs the new version of the application and then redirects the load balancer to the new instances.
Therefore, the correct answer is: Add an AWS CodeBuild stage on the deployment pipeline to automatically test on a non-production environment. Leverage change sets on AWS CloudFormation to preview changes before applying to production. Set up a blue/green deployment pattern on AWS CodeDeploy to deploy changes on a separate environment and to quickly rollback if needed. With AWS CodeBuild on your pipeline, you can add automated tests to verify that the artifact is working as expected. CloudFormation change sets allow you to preview proposed changes on templates before you apply them. AWS CodeDeploy can use a blue/green deployment strategy to have a separate deployment environment and easy rollback procedures.
The option that says: Update the CloudFormation templates to include cfn helper scripts. This will detect and report conditions during deployments to ensure that only healthy deployments are continued. Create test plans for the quality assurance team to ensure that changes are tested on a non-production environment before applying to production is incorrect. This may be possible, however, it will rely on another team to do the testing. It is better to set up AWS CodeBuild to automate this verification which also reduces any human intervention or errors.
The option that says: Check the CloudFormation templates for errors with the help of plugins on the integrated development environment (IDE). Ensure that the templates are valid using AWS CLI. Include cfn helper scripts on the deployment code to detect and report for errors. Deploy on a non-production environment and perform manual testing before applying changes to production is incorrect. Plugins on IDEs only detect syntax errors in your CloudFormation templates. They won't prevent the user from introducing logical errors in the template that may have drastic effects on the current environment.
The option that says: Set up a blue/green deployment pattern on AWS CodeDeploy using CloudFormation to update the user data deployment scripts. Manually login to the instances and perform tests to verify that the deployment is successful and the application is running as expected is incorrect. This is possible but not recommended since manual intervention is prone to human error. You should create automated testing instead to verify the application for every deployment.
References:
Check out these AWS CodeBuild, AWS CodeDeploy, and AWS CloudFormation Cheat Sheets:
Question 31: Skipped
A company hosts its application on several Amazon EC2 instances inside a VPC. A known security vulnerability was discovered in the outdated operating system [-> Patch Manager] of the company's EC2 fleet. The solutions architect is responsible for mitigating the vulnerability as soon as possible to safeguard the company's systems from various cybersecurity attacks. In addition, it is also required to record all of the changes to patch [-> AWS Config] and association compliance statuses.
Which of the following options is the recommended way to meet the above requirements?
Use AWS Systems Manager State Manager to ensure that OS security patches are installed on the Amazon EC2 instances. Use Amazon OpenSearch service to record, monitor, and visualize the patch statuses of the entire EC2 instance fleet.
Set up Amazon Control Tower to deploy OS security updates to the Amazon EC2 instances. Create an Amazon Managed Grafana dashboard to visualize the security compliance of the EC2 instance fleet.
Create a new AMI that automatically installs the OS security patches every week on the provided maintenance window. Roll out the new AMI to the Amazon EC2 instances fleet.
Use AWS Systems Manager Patch Manager to deploy the OS security patches on the EC2 instances. Use AWS Config to manage, detect and record the security compliance of the EC2 instances.
(Correct)
Explanation
AWS Systems Manager Patch Manager automates the process of patching managed instances with security-related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), Amazon Linux, and Amazon Linux 2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.
Since you are also required to record all of the changes to patch and association compliance statuses, you can use AWS Config to meet this requirement. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
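As a sketch, the patch-compliance side can be recorded through an AWS Config managed rule. The rule name below is user-chosen, and the managed rule identifier is the one this pattern typically relies on, so verify it against the current AWS Config managed rules reference:

```python
import boto3

config = boto3.client("config")

# Managed rule that reports whether SSM-managed instances are compliant with
# their patch baseline. The ConfigRuleName is arbitrary; confirm the
# SourceIdentifier against the AWS Config managed rules reference.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-patch-compliance",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "EC2_MANAGEDINSTANCE_PATCH_COMPLIANCE_STATUS_CHECK",
        },
    }
)
```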
Therefore, the correct answer is: Use AWS Systems Manager Patch Manager to deploy the OS security patches on the EC2 instances. Use AWS Config to manage, detect and record the security compliance of the EC2 instances.
The option that says: Create a new AMI that automatically installs the OS security patches every week on the provided maintenance window. Roll out the new AMI to the Amazon EC2 instances fleet is incorrect. This approach is possible but is not recommended because you won't have control over each particular patch that needs to be installed. And it will be cumbersome to have a rollback operation in case a certain update broke the normal operations of your servers.
The option that says: Set up Amazon Control Tower to deploy OS security updates to the Amazon EC2 instances. Create an Amazon Managed Grafana dashboard to visualize the security compliance of the EC2 instance fleet is incorrect. Amazon Control Tower is used to automate the setup of your multi-account AWS environment, not install security patches.
The option that says: Use AWS Systems Manager State Manager to ensure that OS security patches are installed on the Amazon EC2 instances. Use Amazon OpenSearch service to record, monitor, and visualize the patch statuses of the entire EC2 instance fleet is incorrect. AWS Systems Manager State Manager is designed to be a configuration management service that automates the process of keeping your managed nodes and other AWS resources in a state that you define. Although it may be possible to use it to install security patches, AWS SSM Patch Manager is the recommended service for installing OS updates to your EC2 instances.
References:
Check out this AWS Systems Manager and AWS Config Cheat Sheet:
Question 33: Skipped
A major telecommunications company is planning to set up a disaster recovery solution for its Amazon Redshift cluster which is being used by its online data analytics application. Database encryption is enabled on their clusters using AWS KMS [-> snapshot copy grant - encrypt cross region as KMS is a regional service] and it is required that the recovery site should be at least 500 miles from their primary cloud location. [-> cross-region snapshots]
Which of the following is the most suitable solution to meet these requirements and to make its architecture highly available?
Set up a snapshot copy grant for a master key in the destination region and enable cross-region snapshots in your Redshift cluster to copy snapshots of the cluster to another region.
(Correct)
Create a new AWS CloudFormation stack that will deploy the cluster in another region and will regularly back up the data to an S3 bucket, configured with cross-region replication. In case of an outage in the primary region, just use the snapshot from the S3 bucket and then start the cluster.
Develop a scheduled job using AWS Lambda which will regularly take a snapshot of the Redshift cluster and copy it to another region.
In your Redshift cluster, enable the cross-region snapshot copy feature to copy snapshots to another region.
Explanation
Snapshots are point-in-time backups of a cluster. There are two types of snapshots: automated and manual. Amazon Redshift stores these snapshots internally in Amazon S3 by using an encrypted Secure Sockets Layer (SSL) connection. Amazon Redshift automatically takes incremental snapshots that track changes to the cluster since the previous automated snapshot.
Automated snapshots retain all of the data required to restore a cluster from a snapshot. You can take a manual snapshot any time. When you restore from a snapshot, Amazon Redshift creates a new cluster and makes the new cluster available before all of the data is loaded, so you can begin querying the new cluster immediately. The cluster streams data on demand from the snapshot in response to active queries, then loads the remaining data in the background.
When you launch an Amazon Redshift cluster, you can choose to encrypt it with a master key from the AWS Key Management Service (AWS KMS). AWS KMS keys are specific to a region. If you want to enable cross-region snapshot copy for an AWS KMS-encrypted cluster, you must configure a snapshot copy grant for a master key in the destination region so that Amazon Redshift can perform encryption operations in the destination region.
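A minimal sketch of that setup with boto3, assuming hypothetical Region names, cluster identifier, grant name, and KMS key (the grant is created in the destination Region; the copy is then enabled in the source Region):

```python
import boto3

# Create the snapshot copy grant in the DESTINATION Region with a KMS key from that Region.
redshift_dr = boto3.client("redshift", region_name="us-west-2")  # hypothetical DR Region
redshift_dr.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="<KMS key ID in the destination Region>",
)

# Enable cross-Region snapshot copy on the cluster in the SOURCE Region.
redshift = boto3.client("redshift", region_name="us-east-1")
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster identifier
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
    SnapshotCopyGrantName="dr-copy-grant",
)
```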
Therefore, the correct answer is: Set up a snapshot copy grant for a master key in the destination region and enable cross-region snapshots in your Redshift cluster to copy snapshots of the cluster to another region.
The option that says: Create a new AWS CloudFormation stack that will deploy the cluster in another region and will regularly back up the data to an S3 bucket, configured with cross-region replication. In case of an outage in the primary region, just use the snapshot from the S3 bucket and then start the cluster is incorrect. Using a combination of CloudFormation and a separate S3 bucket entails a lot of configuration and set up compared with just enabling cross-region snapshot copy in your Redshift cluster.
The option that says: Develop a scheduled job using AWS Lambda which will regularly take a snapshot of the Redshift cluster and copy it to another region is incorrect. It is not recommended to use AWS Lambda to copy data on your Redshift cluster to another region. You simply have to enable cross-region snapshot copy in your Redshift cluster in order to meet the requirement.
The option that says: In your Redshift cluster, enable the cross-region snapshot copy feature to copy snapshots to another region is incorrect. Although it is right to use the cross-region snapshot copy feature, you still have to configure a snapshot copy grant for a master key in the destination region so that Amazon Redshift can perform encryption operations in the destination region.
References:
Check out this Amazon Redshift Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 35: Skipped
A small telecommunications company has recently adopted a hybrid cloud architecture with AWS. They are storing static files of their on-premises web application on a 5 TB gateway-stored volume in AWS Storage Gateway, which is attached to the application server via an iSCSI interface [-> EBS]. As part of their disaster recovery plan, they should be able to run the web application on AWS in case their on-premises network encountered any technical issues.
Which of the following options is the MOST suitable solution that you should implement?
Restore the static content by attaching the AWS Storage Gateway to the EC2 instance that hosts the application server.
For the static content, create an EFS file system from the AWS Storage Gateway service and mount it to the EC2 instance where the application server is hosted.
Generate an EBS snapshot of the static content from the AWS Storage Gateway service. Afterward, restore it to an EBS volume that you can then attach to the EC2 instance where the application server is hosted.
(Correct)
Restore the static content from an AWS Storage Gateway to an S3 bucket and link it to the EC2 instance where the app server is running.
Explanation
By using stored volumes, you can store your primary data locally, while asynchronously backing up that data to AWS. Stored volumes provide your on-premises applications with low-latency access to their entire datasets. At the same time, they provide durable, offsite backups. You can create storage volumes and mount them as iSCSI devices from your on-premises application servers. Data written to your stored volumes are stored on your on-premises storage hardware. This data is asynchronously backed up to Amazon S3 as Amazon Elastic Block Store (Amazon EBS) snapshots.
You can restore an Amazon EBS snapshot to an on-premises gateway storage volume if you need to recover a backup of your data. You can also use the snapshot as a starting point for a new Amazon EBS volume, which you can then attach to an Amazon EC2 instance.
Since this is using a Volume Storage Gateway, you have to generate an EBS snapshot and generate an EBS Volume to restore the data.
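A rough boto3 sketch of that restore path, with hypothetical ARNs, IDs, and an Availability Zone (snapshot-completion waiters are omitted for brevity):

```python
import boto3

storagegateway = boto3.client("storagegateway")
ec2 = boto3.client("ec2")

# Take an EBS snapshot of the stored volume (hypothetical volume ARN).
snapshot = storagegateway.create_snapshot(
    VolumeARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678/volume/vol-EXAMPLE",
    SnapshotDescription="DR copy of static web content",
)

# Once the snapshot completes, restore it as an EBS volume in the application
# server's Availability Zone and attach it to the instance (hypothetical IDs).
volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1a",
)
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```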
Therefore, the correct answer is: Generate an EBS snapshot of the static content from the AWS Storage Gateway service. Afterwards, restore it to an EBS volume that you can then attach to the EC2 instance where the application server is hosted.
The option that says: Restore the static content from an AWS Storage Gateway to an S3 bucket and link it to the EC2 instance where the app server is running is incorrect because linking the S3 bucket to the EC2 instance is not a suitable way to restore data from AWS Storage Gateway. You should generate a snapshot first and then create an EBS volume that you can attach to the instance.
The option that says: Restore the static content by attaching the AWS Storage Gateway to the EC2 instance that hosts the application server is incorrect because you cannot directly attach the AWS Storage Gateway to a running EC2 instance which runs your application server. Although you can deploy and activate a volume or tape gateway on EC2, this instance has a different AMI than your application server which runs on a different EC2 instance. You have to generate an EBS Volume first, based on the generated snapshot from AWS Storage Gateway.
The option that says: For the static content, create an EFS file system from the AWS Storage Gateway service and mount it to the EC2 instance where the application server is hosted is incorrect because using EFS in this scenario is not appropriate. You should use EBS volumes to restore your data from Storage Gateway.
References:
Check out this AWS Storage Gateway Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 36: Skipped
A company has several financial applications hosted in AWS that use Amazon S3 buckets to store static data. The Solutions Architect recently discovered that some employees store highly classified data in S3 buckets without proper approval [-> presigned URL?]. To mitigate any security risks, the Architect needs to determine all possible S3 objects that contain personally identifiable information (PII) [-> Macie] and determine whether the data has been accessed [-> CloudTrail]. Due to the sheer volume of data, the Architect must implement an automated solution to accomplish this important task.
Which of the following should the solutions architect implement for this scenario?
Use Amazon GuardDuty to detect personally identifiable information (PII) on the Amazon S3 buckets. Determine if the objects with PII have been recently accessed by tracking the GET API calls in AWS CloudTrail that are used to download these objects.
Install the Amazon Inspector agent on the Amazon S3 buckets. Use AWS CloudTrail to determine if the objects with personally identifiable information (PII) have been recently accessed by tracking the GET API calls that are used to fetch these objects.
Enable Amazon Macie on the S3 buckets to automatically classify the data and detect any objects with personally identifiable information (PII). Determine if the objects with PII have been recently accessed by tracking the GET API calls in AWS CloudTrail.
(Correct)
Detect personally identifiable information (PII) on the specific S3 buckets using Amazon Athena. Set up Amazon CloudWatch to determine if the objects with PII have been recently accessed by tracking the GET API calls that are used to download these objects.
Explanation
Amazon Macie is an ML-powered security service that helps you prevent data loss by automatically discovering, classifying, and protecting sensitive data stored in Amazon S3. Amazon Macie uses machine learning to recognize sensitive data such as personally identifiable information (PII) or intellectual property, assigns a business value, and provides visibility into where this data is stored and how it is being used in your organization.
Amazon Macie continuously monitors data access activity for anomalies and delivers alerts when it detects the risk of unauthorized access or inadvertent data leaks. Amazon Macie has the ability to detect global access permissions inadvertently being set on sensitive data, detect uploading of API keys inside source code, and verify sensitive customer data is being stored and accessed in a manner that meets their compliance standards.
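A minimal sketch of kicking off a one-time Macie classification job on a hypothetical bucket (the account ID and names are placeholders); whether the flagged objects were actually read is then answered from CloudTrail S3 data event logs, which must be enabled separately:

```python
import uuid
import boto3

macie = boto3.client("macie2")

# One-time sensitive-data discovery job on a hypothetical bucket.
macie.create_classification_job(
    clientToken=str(uuid.uuid4()),  # idempotency token
    jobType="ONE_TIME",
    name="pii-scan-financial-buckets",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["example-financial-data-bucket"]}
        ]
    },
)

# GetObject access is determined from CloudTrail S3 data event logs (not shown here),
# which need data event logging enabled for the bucket.
```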
Hence, the correct answer is: Enable Amazon Macie on the S3 buckets to automatically classify the data and detect any objects with personally identifiable information (PII). Determine if the objects with PII have been recently accessed by tracking the GET API calls in AWS CloudTrail.
The option that says: Detect personally identifiable information (PII) on the specific S3 buckets using Amazon Athena. Set up Amazon CloudWatch to determine if the objects with PII have been recently accessed by tracking the GET API calls that are used to download these objects is incorrect because Amazon Athena is not capable of detecting personally identifiable information (PII) in an Amazon S3 bucket. You have to use Amazon Macie instead. In addition, you have to use AWS CloudTrail to track the GET API calls that are used to fetch the objects with PII.
The option that says: Use Amazon GuardDuty to detect personally identifiable information (PII) on the Amazon S3 buckets. Determine if the objects with PII have been recently accessed by tracking the GET API calls in AWS CloudTrail that are used to download these objects is incorrect because GuardDuty is just a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. You have to use Amazon Macie instead.
The option that says: Install the Amazon Inspector agent on the Amazon S3 buckets. Use AWS CloudTrail to determine if the objects with personally identifiable information (PII) have been recently accessed by tracking the GET API calls that are used to fetch these objects is incorrect because you can only install the Amazon Inspector agent on EC2 instances, not on S3 buckets. Amazon Inspector is basically an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. You have to use Amazon Macie instead.
PII -> Macie
CVE -> Inspector
References:
Check out this Amazon Macie Cheat Sheet:
Question 37: Skipped
A leading financial company runs its application in an Amazon ECS Cluster. The application processes a large stream of intraday data [-> DynamoDB Stream] and stores the generated result in a DynamoDB table. To comply with the financial regulatory policy, the solutions architect was tasked to design a system that detects new entries [-> DynamoDB event? (called a DynamoDB stream) - trigger Lambda?] in the DynamoDB table and then automatically runs tests to verify the results using a Lambda function.
Which of the following options can satisfy the company’s requirement with minimal configuration changes?
Migrate the table to Amazon DocumentDB to take advantage of its integration with Amazon EventBridge which can invoke a Lambda function for specific database events.
Set up a DynamoDB stream to detect the new entries and automatically trigger the Lambda function.
(Correct)
Run an AWS Lambda function using Amazon SNS as a trigger each time the ECS Cluster successfully processes financial data.
Detect the new entries in the DynamoDB table using Systems Manager Automation then automatically invoke the Lambda function for processing.
Explanation
Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables.
If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
You can create a Lambda function which can perform a specific action that you specify, such as sending a notification or initiating a workflow. For instance, you can set up a Lambda function to simply copy each stream record to persistent storage, such as EFS or S3, to create a permanent audit trail of write activity in your table.
Suppose you have a mobile gaming app that writes to a TutorialsDojoCourses table. Whenever the TopCourse attribute of the TutorialsDojoCourses table is updated, a corresponding stream record is written to the table's stream. This event could then trigger a Lambda function that posts a congratulatory message on a social media network. (The function would simply ignore any stream records that are not updates to TutorialsDojoCourses or that do not modify the TopCourse attribute.)
Therefore, the correct answer is: Set up a DynamoDB stream to detect the new entries and automatically trigger the Lambda function. In this way, the requirement can be met with a minimal configuration change. DynamoDB streams can be used as an event source to automatically trigger Lambda functions whenever there is a new entry.
The option that says: Run an AWS Lambda function using Amazon SNS as a trigger each time the ECS Cluster successfully processes financial data is incorrect. You don't need to create an SNS topic just to invoke Lambda functions. You can simply enable DynamoDB streams to meet the requirement with less configuration.
The option that says: Migrate the table to Amazon DocumentDB to take advantage of its integration with Amazon EventBridge which can invoke a Lambda function for specific database events is incorrect. The question requires minimal configuration changes, so migrating the database to another service is not recommended as it would require significant effort and changes to the application.
The option that says: Detect the new entries in the DynamoDB table using Systems Manager Automation then automatically invoke the Lambda function for processing is incorrect. The Systems Manager Automation service is primarily used to simplify common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. It does not have the capability to detect new entries in a DynamoDB table.
References:
Check out this Amazon DynamoDB cheat sheet:
Question 40: Skipped
An enterprise has several development and production AWS accounts managed under its AWS Organization. Consolidated billing is enabled in the organization, but the management wants more visibility into AWS Billing and Cost Management. With the sudden increase in Amazon RDS and Amazon DynamoDB costs, the management required all CloudFormation templates to enforce consistent tagging [-> SCP to deny CRUD on resources that don't have tags] with cost center numbers and project ID numbers on all resources that will be provisioned. The management also wants these tags to be enforced on all existing and future DynamoDB and RDS instances. [-> Tag Editor to tag in bulk]
Which of the following options is the recommended strategy to meet the company requirements?
Tag all existing resources in bulk using the Tag Editor. On the Billing and Cost Management page, create new cost allocation tags for the cost center and project ID. Wait for at least 24 hours to allow AWS to propagate the tags and gather cost reports.
On the Billing and Cost Management page, create new cost allocation tags for the cost center and project ID. Wait for at least 24 hours to allow AWS to propagate the tags and gather cost reports. Update existing federated roles to deny users from creating resources that do not have the cost center and project ID tags.
Create an AWS Config rule to check for any untagged resource and send a notification email to the finance team. Write a Lambda function that has a cross-account role to tag all RDS databases and DynamoDB resources on all accounts under the organization. Schedule this function to run every hour.
Tag all existing resources in bulk using the Tag Editor. On the Billing and Cost Management page, create new cost allocation tags for the cost center and project ID. Apply an SCP on the organizational unit that denies users from creating resources that do not have the cost center and project ID tags.
(Correct)
Explanation
Tags are words or phrases that act as metadata that you can use to identify and organize your AWS resources. A resource can have up to 50 user-applied tags. It can also have read-only system tags. Each tag consists of a key and one optional value.
You can add tags to resources when you create the resource. You can use the resource's service console or API to add, change, or remove those tags one resource at a time. To add tags to—or edit or delete tags of—multiple resources at once, use Tag Editor. With Tag Editor, you search for the resources that you want to tag, and then manage tags for the resources in your search results.
For each resource, each tag key must be unique, and each tag key can have only one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs. AWS provides two types of cost allocation tags, AWS generated tags and user-defined tags. AWS, or AWS Marketplace ISV defines, creates, and applies the AWS generated tags for you, and you define, create, and apply user-defined tags. You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report.
User-defined tags are tags that you define, create, and apply to resources. After you have created and applied the user-defined tags, you can activate them by using the Billing and Cost Management console for cost allocation tracking. Cost Allocation Tags appear on the console after you've enabled Cost Explorer, Budgets, AWS Cost and Usage Reports, or legacy reports. After you activate the AWS services, they appear on your cost allocation report. You can then use the tags on your cost allocation report to track your AWS costs. Tags are not applied to resources that were created before the tags were created.
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines. SCPs are available only in an organization that has all features enabled. SCPs aren't available if your organization has enabled only the consolidated billing features.
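As a sketch of how such a tag-enforcing SCP might be created and attached with boto3 (the policy name, tag keys, OU ID, and the chosen actions are hypothetical; confirm that each action supports the aws:RequestTag condition key in the service authorization reference):

```python
import json
import boto3

organizations = boto3.client("organizations")

# Deny creating RDS instances or DynamoDB tables when a required tag is missing
# from the request. The Null operator evaluates to "true" when the tag key is absent.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireCostCenterTag",
            "Effect": "Deny",
            "Action": ["rds:CreateDBInstance", "dynamodb:CreateTable"],
            "Resource": "*",
            "Condition": {"Null": {"aws:RequestTag/cost-center": "true"}},
        },
        {
            "Sid": "RequireProjectIdTag",
            "Effect": "Deny",
            "Action": ["rds:CreateDBInstance", "dynamodb:CreateTable"],
            "Resource": "*",
            "Condition": {"Null": {"aws:RequestTag/project-id": "true"}},
        },
    ],
}

policy = organizations.create_policy(
    Content=json.dumps(scp_document),
    Description="Require cost allocation tags on new RDS and DynamoDB resources",
    Name="require-cost-tags",
    Type="SERVICE_CONTROL_POLICY",
)
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exampleroot-exampleou",  # hypothetical organizational unit ID
)
```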
Therefore, the correct answer is: Tag all existing resources in bulk using the Tag Editor. On the Billing and Cost Management page, create new cost allocation tags for the cost center and project ID. Apply an SCP on the organizational unit that denies users from creating resources that do not have the cost center and project ID tags. Tag Editor allows bulk tagging to easily tag your AWS resources. The SCP rules will enforce the company tagging policy by preventing users from creating resources that do not have the appropriate tags.
The option that says: Tag all existing resources in bulk using the Tag Editor. On the Billing and Cost Management page, create new cost allocation tags for the cost center and project ID. Wait for at least 24 hours to allow AWS to propagate the tags and gather cost reports is incorrect. This solution is incomplete. It does not prevent users from creating untagged resources in the future.
The option that says: Create an AWS Config rule to check for any untagged resource and send a notification email to the finance team. Write a Lambda function that has a cross-account role to tag all RDS databases and DynamoDB resources on all accounts under the organization. Schedule this function to run every hour is incorrect. This solution does not prevent users from creating untagged resources in the future and the AWS Config rule does not automatically tag non-compliant resources.
The option that says: On the Billing and Cost Management page, create new cost allocation tags for the cost center and project ID. Wait for at least 24 hours to allow AWS to propagate the tags and gather cost reports. Update existing federated roles to deny users from creating resources that do not have the cost center and project ID tags is incorrect. This may be possible but it will be cumbersome to edit all IAM policies across all the company AWS accounts. It is better to use an SCP applied at the organization unit level to enforce the tagging policy.
bulk tagging -> TAG Editor
policy to create resources with tagging -> SCP / IAM Policy (?)
References:
Check out this AWS Billing and Cost Management Cheat Sheet:
Question 41: Skipped
A company recently adopted a modern design for its legacy application. The new application is now suitable for native cloud deployments so the CI/CD pipelines need to be updated as well. The following deployment requirements are needed to support the new application:
- The pipeline should support deployments of new versions several times every hour.
- The pipeline should be able to quickly rollback to the previous application version if any problems are encountered on the new version.
Which of the following options is the recommended solution to meet the company requirements?
Reconfigure the pipeline to create a Staging environment on AWS Elastic Beanstalk. Deploy the newer version on the Staging environment. Swap the Staging and Production environment URLs to shift traffic to the newer version.
(Correct)
Package the newer application version on AMIs including the needed configurations. Update the Launch Template of the Auto Scaling group and trigger a scale-out to use this new AMI. Ensure that the configured termination policy is to delete the old instances using the previous AMI.
Package the newer application version on AMIs including the needed configurations. Reconfigure the CI/CD pipeline to deploy this AMI by replacing the current Amazon EC2 instances.
Use Amazon Lightsail to handle the deployment of new Amazon EC2 instances and the needed load balancers. Add an Amazon EC2 user data script to download the latest application artifact from an Amazon S3 bucket. Use a weighted routing policy on Amazon Route 53 to slowly shift traffic to the newer version.
Explanation
AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, Immutable, and Traffic splitting) and options that let you configure the batch size and health check behavior during deployments. By default, your environment uses all-at-once deployments. If you created the environment with the EB CLI and it's a scalable environment, it uses rolling deployments.
You can use the AWS Elastic Beanstalk console to upload an updated source bundle and deploy it to your Elastic Beanstalk environment or redeploy a previously uploaded version. The following list provides summary information about the different deployment policies and adds related considerations.
All at once – The quickest deployment method. Suitable if you can accept a short loss of service and if quick deployments are important to you. With this method, Elastic Beanstalk deploys the new application version to each instance.
Rolling – Avoids downtime and minimizes reduced availability at a cost of a longer deployment time. Suitable if you can't accept any period of completely lost service. With this method, your application is deployed to your environment one batch of instances at a time.
Rolling with additional batch – Avoids any reduced availability at a cost of an even longer deployment time compared to the Rolling method. Suitable if you must maintain the same bandwidth throughout the deployment. With this method, Elastic Beanstalk launches an extra batch of instances, then performs a rolling deployment.
Immutable – A slower deployment method that ensures your new application version is always deployed to new instances instead of updating existing instances. It also has the additional advantage of a quick and safe rollback in case the deployment fails.
Traffic splitting – A canary testing deployment method. Suitable if you want to test the health of your new application version using a portion of incoming traffic while keeping the rest of the traffic served by the old application version.
For deployments that depend on resource configuration changes or a new version that can't run alongside the old version, you can launch a new environment with the new version and perform a CNAME swap for a blue/green deployment.
Because AWS Elastic Beanstalk performs an in-place update when you update your application versions, your application can become unavailable to users for a short period of time. You can avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment and then swap CNAMEs of the two environments to redirect traffic to the new version instantly. With this method, you have two independent environments and you can quickly switch between the versions by swapping the URLs.
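For reference, the URL swap itself is a single API call. A minimal boto3 sketch is shown below, where the environment names are hypothetical stand-ins for the Staging and Production environments; running the same call again rolls traffic back.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Swap the CNAMEs of the two environments so production traffic is now
# served by the environment that currently hosts the new version.
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-staging",         # hypothetical name
    DestinationEnvironmentName="my-app-production"  # hypothetical name
)
```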
Therefore, the correct answer is: Reconfigure the pipeline to create a Staging environment on AWS Elastic Beanstalk. Deploy the newer version on the Staging environment. Swap the Staging and Production environment URLs to shift traffic to the newer version. This is a blue/green deployment on Elastic Beanstalk. You can deploy a newer version without affecting the current version and quickly roll back by just swapping the URLs again.
The option that says: Package the newer application version on AMIs including the needed configurations. Reconfigure the CI/CD pipeline to deploy this AMI by replacing the current Amazon EC2 instances is incorrect. This is a possible deployment procedure, however, the rollback procedure will take too long because it will need to replace the current EC2 instances again.
The option that says: Use Amazon Lightsail to handle the deployment of new Amazon EC2 instances and the needed load balancers. Add an Amazon EC2 user data script to download the latest application artifact from an Amazon S3 bucket. Use a weighted routing policy on Amazon Route 53 to slowly shift traffic to the newer version is incorrect. This is possible, however, is designed to deploy simple web applications in Dev/Test environments, not for production, mission-critical workloads. The rollback for this procedure will take longer too, as there is a need to re-create newer instances again to run the EC2 user data script.
The option that says: Package the newer application version on AMIs including the needed configurations. Update the Launch Template of the Auto Scaling group and trigger a scale-out to use this new AMI. Ensure that the configured termination policy is to delete the old instances using the previous AMI is incorrect. This is also a possible deployment, however, it is not recommended for this scenario. With this solution, you will have to re-deploy the older AMI in case of a rollback, which takes more time compared to just swapping the environment URLs.
Takeaway:
blue/green deployment with Elastic Beanstalk: two environments, swap CNAMEs
References:
Check out the AWS Elastic Beanstalk Cheat Sheet:
Question 43: Skipped
A company wants to improve the security of its cloud resources by ensuring that all running EC2 instances were launched from pre-approved AMIs only [-> Config rules -> Lambda -> SNS], which are set by the Security team [-> Lambda -> SNS - Security team]. Their Development team has an agile CI/CD process which should not be stalled by the new automated solution that they’ll implement. Any new application release must be deployed first before the solution could analyze if it is using a pre-approved AMI or not.
Which of the following options enforces the required controls with the LEAST impact on the development process? (Select TWO.)
Set up the required policies, roles, and permissions to a centralized IT Operations team, which will manually process the security approval steps to ensure that EC2 instances are only launched from pre-approved AMIs.
Set up a scheduled Lambda function to search through the list of running EC2 instances within your VPC and determine if any of these are based on unauthorized AMIs. Afterward, publish a new message to an SNS topic to inform the Security team that this occurred and then terminate the EC2 instance.
(Correct)
Set up AWS Config rules to determine any launches of EC2 instances based on non-approved AMIs and then trigger an AWS Lambda function to automatically terminate the instance. Afterward, publish a message to an SNS topic to inform the Security team about the occurrence.
(Correct)
Set up IAM policies to restrict the ability of users to launch EC2 instances based on a specific set of pre-approved AMIs which were tagged by the Security team.
Set up Amazon Inspector [CVE] to do regular scans using a custom assessment template to determine if the EC2 instance is based upon a pre-approved AMI. Terminate the instances and inform the Security team by email about the security breach.
Explanation
When you run your applications on AWS, you usually use AWS resources, which you must create and manage collectively. As the demand for your application keeps growing, so does your need to keep track of your AWS resources. AWS Config is designed to help you oversee your application resources.
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. With AWS Config, you can do the following:
- Evaluate your AWS resource configurations for desired settings.
- Get a snapshot of the current configurations of the supported resources that are associated with your AWS account.
- Retrieve configurations of one or more resources that exist in your account.
- Retrieve historical configurations of one or more resources.
- Receive a notification whenever a resource is created, modified, or deleted.
- View relationships between resources. For example, you might want to find all resources that use a particular security group.
AWS Config provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resources comply with common best practices. In this scenario, you can use the approved-amis-by-id
AWS managed rule, which checks whether running instances are using specified AMIs. You can also use a Lambda function that is scheduled to run regularly to scan all of the running EC2 instances in your VPC and check if there is an instance that was launched using an unauthorized AMI (a sketch of such a function follows the correct options below). Hence, the following options are the correct answers:
- Set up AWS Config rules to determine any launches of EC2 instances based on non-approved AMIs and then trigger an AWS Lambda function to automatically terminate the instance. Afterward, publish a message to an SNS topic to inform the Security team about the occurrence.
- Set up a scheduled Lambda function to search through the list of running EC2 instances within your VPC and determine if any of these are based on unauthorized AMIs. Afterward, publish a new message to an SNS topic to inform the Security team that this occurred and then terminate the EC2 instance.
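A minimal sketch of the scheduled Lambda function described in the second option is shown below. The approved AMI list and SNS topic ARN are placeholders, and the function assumes it runs with a role allowed to describe and terminate instances and to publish to the topic.

```python
import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

# Hypothetical values: the Security team's approved AMI IDs and SNS topic.
APPROVED_AMIS = {"ami-0123456789abcdef0"}
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-team"

def lambda_handler(event, context):
    offenders = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if instance["ImageId"] not in APPROVED_AMIS:
                    offenders.append(instance["InstanceId"])

    if offenders:
        # Notify the Security team, then terminate the non-compliant instances.
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Instances launched from unapproved AMIs",
            Message=f"Terminating non-compliant instances: {offenders}",
        )
        ec2.terminate_instances(InstanceIds=offenders)

    return {"terminated": offenders}
```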
The option that says: Set up the required policies, roles, and permissions to a centralized IT Operations team, which will manually process the security approval steps to ensure that EC2 instances are only launched from pre-approved AMIs is incorrect because having manual information security approval will impact the development process. A better solution is to implement an automated process using AWS Config and a scheduled AWS Lambda function.
The option that says: Set up IAM policies to restrict the ability of users to launch EC2 instances based on a specific set of pre-approved AMIs which were tagged by the Security team is incorrect because setting up an IAM Policy will totally restrict the development team from launching EC2 instances with unapproved AMIs which could impact their CI/CD process. The scenario clearly says that the solution should not have any interruption in the company's development process.
The option that says: Set up Amazon Inspector to do regular scans using a custom assessment template to determine if the EC2 instance is based upon a pre-approved AMI. Terminate the instances and inform the Security team by email about the security breach is incorrect because the Amazon Inspector service is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It does not have the capability to detect EC2 instances that are using unapproved AMIs, unlike AWS Config.
Takeaway: unapproved AMIs -> AWS Config
CVE -> Inspector
References:
Check out this AWS Config Cheat Sheet:
Question 47: Skipped
A company is running its enterprise resource planning application in AWS that handles supply chain, order management, and delivery tracking. The architecture has a set of RESTful web services that enable third-party companies to search for data that will be consumed by their respective applications. The public web services consist of several AWS Lambda functions. DynamoDB is used for its database tier and is integrated with an Amazon OpenSearch domain, which stores the indexes and supports the search feature. A Solutions Architect has been instructed to ensure that in the event of a failed deployment, there should be no downtime [-> >= standby], and a system should be in place to prevent subsequent deployments. The service must strictly maintain full capacity during API deployment without any reduced compute capacity to avoid degradation of the service.
Among the options below, which can the Architect use to meet the requirements in the MOST efficient way?
Do a blue/green deployment on all upcoming changes using AWS CodeDeploy. Using AWS CloudFormation, launch the Amazon DynamoDB tables, AWS Lambda functions, and Amazon OpenSearch domain in your AWS VPC. Host the web application in AWS Elastic Beanstalk and set the deployment policy to Immutable
.
(Correct)
Do an in-place deployment on all upcoming changes using AWS CodeDeploy. Using AWS SAM, launch the Amazon DynamoDB tables, Lambda functions, and Amazon OpenSearch domain in your AWS VPC. Host the web application in AWS Elastic Beanstalk and set the deployment policy to Rolling
.
Do a blue/green deployment on all upcoming changes using AWS CodeDeploy. Using AWS SAM, launch the DynamoDB tables, Lambda functions, and Amazon OpenSearch domain in your AWS VPC. Host the web application in AWS Elastic Beanstalk and set the deployment policy to All at Once
.
Do a blue/green deployment on all upcoming changes using Amazon Lightsail. Let Amazon Lightsail handle the provisioning of database instances, EC2 instances, and load balancers needed by the web application. Use CloudFormation to deploy the AWS Lambda functions, provision DynamoDB tables, and create an Amazon OpenSearch domain in your VPC.
Explanation
AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, and Immutable) and options that let you configure the batch size and health check behavior during deployments. By default, your environment uses all-at-once deployments. If you created the environment with the EB CLI and it's an automatically scaling environment (you didn't specify the --single
option), it uses rolling deployments.
With rolling deployments, Elastic Beanstalk splits the environment's EC2 instances into batches and deploys the new version of the application to one batch at a time, leaving the rest of the instances in the environment running the old version of the application. During a rolling deployment, some instances serve requests with the old version of the application, while instances in completed batches serve other requests with the new version.
To maintain full capacity during deployments, you can configure your environment to launch a new batch of instances before taking any instances out of service. This option is known as a rolling deployment with an additional batch. When the deployment completes, Elastic Beanstalk terminates the additional batch of instances.
Immutable deployments perform an immutable update to launch a full set of new instances running the new version of the application in a separate Auto Scaling group alongside the instances running the old version. Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new instances don't pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched.
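For reference, switching an existing Elastic Beanstalk environment to the Immutable policy is a single option setting. A hedged boto3 sketch, using a hypothetical environment name:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Set the environment's deployment policy to Immutable so every deployment
# launches a fresh set of instances in a separate Auto Scaling group.
eb.update_environment(
    EnvironmentName="my-app-production",  # hypothetical name
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "Immutable",
        }
    ],
)
```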
Therefore, the correct answer is: Do a blue/green deployment on all upcoming changes using AWS CodeDeploy. Using AWS CloudFormation, launch the Amazon DynamoDB tables, AWS Lambda functions, and Amazon OpenSearch domain in your AWS VPC. Host the web application in AWS Elastic Beanstalk and set the deployment policy to Immutable
.
The option that says: Do a blue/green deployment on all upcoming changes using AWS CodeDeploy. Using AWS SAM, launch the DynamoDB tables, Lambda functions, and Amazon OpenSearch domain in your AWS VPC. Host the web application in AWS Elastic Beanstalk and set the deployment policy to All at Once
is incorrect. This policy deploys the new version to all instances simultaneously which means that the instances in your environment are out of service for a short time while the deployment occurs.
The option that says: Do an in-place deployment on all upcoming changes using AWS CodeDeploy. Using AWS SAM, launch the Amazon DynamoDB tables, Lambda functions, and Amazon OpenSearch domain in your AWS VPC. Host the web application in AWS Elastic Beanstalk and set the deployment policy to Rolling
is incorrect. This policy will deploy the new version in batches where each batch is taken out of service during the deployment phase, reducing your environment's capacity by the number of instances in a batch.
The option that says: Do a blue/green deployment on all upcoming changes using Amazon Lightsail. Let Amazon Lightsail handle the provisioning of database instances, EC2 instances, and load balancers needed by the web application. Use CloudFormation to deploy the AWS Lambda functions, provision DynamoDB tables, and create an Amazon OpenSearch domain in your VPC is incorrect. Amazon Lightsail is a cost-effective solution for deploying simple web applications. AWS recommends it for DEV/Test environments, not for enterprise-ready, mission-critical workloads.
Takeaway:
deploy simple web apps for Dev/Test -> Lightsail (not for enterprise-ready, mission-critical apps)
All at Once -> has downtime
Rolling -> deploys in batches; risk of partially completed rolling deployments; reduces environment capacity by the number of instances in a batch
Immutable -> launches a full new set of instances in a separate ASG, leaving the original instances untouched; if the new instances don't pass health checks, Elastic Beanstalk terminates them, preventing issues caused by partially completed rolling deployments
References:
Check out this AWS Elastic Beanstalk Cheat Sheet:
Question 48: Skipped
A company runs a finance-related application on a fleet of Amazon EC2 instances inside a private subnet of a VPC in AWS. To access the application, the instances are behind an internet-facing Application Load Balancer (ALB). As part of security compliance, the company is required to have a solution that allows it to inspect network payloads [-> Traffic Mirroring on EC2 ENI; NOT VPC flow logs] that are being sent to the application [-> VPC flow logs? WRONG]. Analyzing the network payloads will help in reverse-engineering sophisticated network attacks that the application may experience.
Which of the following options should the solutions architect implement to meet the company requirements?
Create a new AWS web ACL with blank rules and a default “Allow” action. Associate the ALB to this web ACL. Enable logging on web ACL and send them to Amazon CloudWatch Logs for analysis.
Go to the Amazon VPC console and create a VPC flow log. Set the destination of flow log data to an Amazon S3 bucket for analysis.
Go to the Amazon EC2 console and enable “Access logs” for the ALB. Send the ALB access logs to Amazon AppFlow for payload inspection and to an Amazon S3 bucket for long-term storage.
Configure Traffic Mirroring on the elastic network interface of the EC2 instances. Send the mirrored traffic to a monitoring appliance for storage and inspection.
(Correct)
Explanation
Traffic Mirroring is an Amazon VPC feature that you can use to copy network traffic from an elastic network interface of Amazon EC2 instances. You can then send the traffic to out-of-band security and monitoring appliances for:
- Content inspection
- Threat monitoring
- Troubleshooting
The security and monitoring appliances can be deployed as individual instances or as a fleet of instances behind a Network Load Balancer with a UDP listener. Traffic Mirroring supports filters and packet truncation so that you only extract the traffic of interest to monitor by using monitoring tools of your choice.
Traffic Mirroring copies inbound and outbound traffic from the network interfaces that are attached to your Amazon EC2 instances. You can send the mirrored traffic to the network interface of another EC2 instance or a Network Load Balancer that has a UDP listener. The traffic mirror source and the traffic mirror target (monitoring appliance) can be in the same VPC. Or they can be in different VPCs that are connected through intra-Region VPC peering or a transit gateway.
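A minimal sketch of wiring this up with boto3 is shown below. The ENI IDs are hypothetical placeholders for the application instance's interface (source) and the monitoring appliance's interface (target), and the filter simply accepts all inbound traffic so the appliance receives the full payloads.

```python
import boto3

ec2 = boto3.client("ec2")

SOURCE_ENI = "eni-0aaaaaaaaaaaaaaaa"  # hypothetical: application instance ENI
TARGET_ENI = "eni-0bbbbbbbbbbbbbbbb"  # hypothetical: monitoring appliance ENI

# The appliance's ENI (or a Network Load Balancer) becomes the mirror target.
target = ec2.create_traffic_mirror_target(NetworkInterfaceId=TARGET_ENI)

# A filter with an accept-all ingress rule so inbound payloads are mirrored.
mirror_filter = ec2.create_traffic_mirror_filter()
filter_id = mirror_filter["TrafficMirrorFilter"]["TrafficMirrorFilterId"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# The session ties the source ENI to the target and filter.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId=SOURCE_ENI,
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)
```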
Therefore, the correct answer is: Configure Traffic Mirroring on the elastic network interface of the EC2 instances. Send the mirrored traffic to a monitoring appliance for storage and inspection. Traffic Mirroring can copy network traffic from an elastic network interface and send it to a monitoring appliance for inspection.
The option that says: Go to the Amazon VPC console and create a VPC flow log. Set the destination of flow log data to an Amazon S3 bucket for analysis is incorrect. This will log network data on all resources on the VPC, not just the EC2 cluster in question. Additionally, VPC flow log data only contain OSI Layer 4 (Transport) information. This does not include payload contents.
The option that says: Go to the Amazon EC2 console and enable “Access logs” for the ALB. Send the ALB access logs to Amazon AppFlow for payload inspection and to an Amazon S3 bucket for long-term storage is incorrect. Amazon AppFlow is used for transferring data between Software-as-a-Service (SaaS) applications, not for payload inspection. ALB access logs have content similar to HTTP/HTTPS logs; however, they only contain the request path and some headers. The payload itself is not recorded.
The option that says: Create a new AWS web ACL with blank rules and a default “Allow” action. Associate the ALB to this web ACL. Enable logging on web ACL and send them to Amazon CloudWatch Logs for analysis is incorrect. You can capture information about the web requests that are evaluated by AWS WAF and send them to AWS CloudWatch Logs. However, CloudWatch Logs is designed for searching and querying logs. You will only see the details of the request packet and not the payload itself. If you need a more sophisticated analysis and inspection of the request payload, it would be better to use a dedicated monitoring appliance.
Takeaway:
Transfer data between SaaS -> AppFlow
ALB access logs / VPC Flow Logs don't record the payload, just the request path and headers or OSI Layer 4 (Transport) info
References:
Check out this Amazon VPC Cheat Sheet:
Question 50: Skipped
A national library is planning to store around 50 TB of data containing all their books, articles, and other written materials in AWS. One of the requirements is to have a search feature to enable the users to look for their collection on their dynamic website. [-> CloudSearch]
As a Cloud Engineer, what is the most suitable solution that you should implement in AWS to satisfy the needed functionality?
Use Elastic Beanstalk as the deployment service. Deploy the needed AWS resources such as the Multi-AZ RDS for storage and an EC2 instance to host their website.
Use CloudFormation as the deployment service to deploy the needed AWS resources such as an S3 bucket for storage, CloudSearch to provide the needed search functionality, and an EC2 instance to host their website.
(Correct)
Use CodeDeploy as the deployment service to deploy two S3 buckets in which the first one serves as the storage service and the second one for hosting their dynamic website. Use the native search functionality of S3 to satisfy the search feature requirement.
Use AWS CodePipeline as the deployment service to deploy Amazon Kinesis as the storage service and an EC2 instance to serve their website.
Explanation
Amazon S3 offers a highly durable, scalable, and secure destination for backing up and archiving your critical data. You can use S3’s versioning capability to provide even further protection for your stored data. Whether you’re storing pharmaceutical or financial data, or multimedia files such as photos and videos, Amazon S3 can be used as your data lake for big data analytics.
Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application. With Amazon CloudSearch, you can quickly add rich search capabilities to your website or application. You don't need to become a search expert or worry about hardware provisioning, setup, and maintenance.
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
Amazon S3 is great for storage but it lacks an effective search feature. Hence, it is better to use a service like Amazon CloudSearch to fulfill the requirement in the scenario.
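Once the CloudSearch domain has indexed the collection, the website's search feature can query it through the domain's search endpoint. A small boto3 sketch, where the endpoint URL is a placeholder for the domain created by the CloudFormation stack:

```python
import boto3

# The search endpoint is specific to the CloudSearch domain; the value below
# is a hypothetical placeholder (copy the real one from the domain's details).
search_client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-library-xxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

# Run a simple full-text query and print the matching document IDs and fields.
results = search_client.search(query="pride and prejudice", queryParser="simple", size=10)
for hit in results["hits"]["hit"]:
    print(hit["id"], hit.get("fields"))
```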
Therefore, the correct answer is: Use CloudFormation as the deployment service to deploy the needed AWS resources such as an S3 bucket for storage; CloudSearch to provide the needed search functionality, and an EC2 instance to host their website.
The option that says: Use Elastic Beanstalk as the deployment service. Deploy the needed AWS resources such as the Multi-AZ RDS for storage and an EC2 instance to host their website is incorrect. You can use RDS as your database for the application, however, this will be very expensive. It is recommended to use CloudSearch instead to have a cost-effective search solution for the website application.
The option that says: Use AWS CodePipeline as the deployment service to deploy Amazon Kinesis as the storage service and an EC2 instance to serve their website is incorrect. Although AWS CodePipeline is useful for automating release pipelines, it's not designed to handle the complexities of deploying and managing applications. Additionally, though Amazon Kinesis is great for real-time data streaming, it's not an ideal long-term storage service.
The option that says: Use CodeDeploy as the deployment service to deploy two S3 buckets in which the first one serves as the storage service and the second one for hosting their dynamic website. Use the native search functionality of S3 to satisfy the search feature requirement is incorrect. You cannot host the dynamic application on Amazon S3, and there is no built-in search feature on Amazon S3 that will satisfy the application requirements.
References:
Check out this Amazon CloudSearch Cheat Sheet:
Question 51: Skipped (review - memorize)
A company uses an AWS CloudFormation template to deploy its three-tier web application on the AWS Cloud. The CloudFormation template contains a custom AMI value used by the Auto Scaling group of Amazon EC2 instances. Every new version of the application corresponds to a new AMI that needs to be deployed. The company doesn’t want any downtime [-> Immutable? blue/green?] during the deployment process. The Solutions Architect has been tasked to implement a solution that will streamline its AMI deployment process by doing the following steps:
- Update the CloudFormation template to refer to the new AMI.
- Launch new EC2 instances from the new AMI by calling the UpdateStack
API to replace the old EC2 instances.
Which of the following actions should the Solutions Architect take to achieve the above requirements?
Copy the updated template and deploy it to a new CloudFormation stack. After its successful deployment, update the Amazon Route 53 records to point to the new stack and delete the old stack.
Create a new CloudFormation change set to view the changes in the new version of the template. Verify that the correct AMI is listed on the change before executing the change set.
Update the CloudFormation template AWS::AutoScaling::AutoScalingGroup
resource section and specify an UpdatePolicy
attribute with an AutoScalingRollingUpdate
.
(Correct)
Update the CloudFormation template AWS::AutoScaling::LaunchConfiguration
resource section and specify a DeletionPolicy
attribute with MinSuccessfulInstancesPercent
of 50.
Explanation
AWS CloudFormation gives you an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles, by treating infrastructure as code. You can use a template to create, update, and delete an entire stack as a single unit, as often as you need to, instead of managing resources individually.
The AWS::AutoScaling::AutoScalingGroup
resource defines an Amazon EC2 Auto Scaling group, which is a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management.
When you update the launch template for an Auto Scaling group, this update action does not deploy any change across the running Amazon EC2 instances in the Auto Scaling group. All new instances will get the updated configuration, but existing instances continue to run with the configuration that they were originally launched with. This works the same way as any other Auto Scaling group.
You can add an UpdatePolicy
attribute to your stack to perform rolling updates (or replace the group) when a change has been made to the group. Alternatively, you can force a rolling update on your instances at any time after updating the stack by starting an instance refresh.
The UpdatePolicy
on the AutoScalingGroup
will automatically execute the rolling deployment of the new AMI instances when the Cloudformation template is updated.
To specify how AWS CloudFormation handles rolling updates for an Auto Scaling group, use the AutoScalingRollingUpdate
policy. Rolling updates enable you to specify whether AWS CloudFormation updates instances that are in an Auto Scaling group in batches or all at once. For example, suppose you have updated the MaxBatchSize
in your stack template's UpdatePolicy
from 1 to 10. This allows you to perform updates without causing downtime to your currently running application.
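The relevant piece of the template might look like the sketch below, shown as the JSON form of an AWS::AutoScaling::AutoScalingGroup resource built as a Python dict. The batch sizes and pause time are illustrative assumptions, and the Properties block is left as a comment because its contents depend on the stack.

```python
import json

# Illustrative fragment of the stack template's Resources section.
auto_scaling_group = {
    "WebServerGroup": {
        "Type": "AWS::AutoScaling::AutoScalingGroup",
        "Properties": {
            # MinSize, MaxSize, VPCZoneIdentifier, and the launch template
            # referencing the new AMI would go here.
        },
        "UpdatePolicy": {
            "AutoScalingRollingUpdate": {
                "MaxBatchSize": 2,           # replace up to 2 instances per batch
                "MinInstancesInService": 2,  # keep 2 instances serving traffic
                "PauseTime": "PT5M",
                "WaitOnResourceSignals": False,
            }
        },
    }
}
print(json.dumps(auto_scaling_group, indent=2))
```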
Therefore, the correct answer is: Update the CloudFormation template AWS::AutoScaling::AutoScalingGroup
resource section and specify an UpdatePolicy
attribute with an AutoScalingRollingUpdatepolicy
.
The option that says: Create a new CloudFormation change set to view the changes in the new version of the template. Verify that the correct AMI is listed on the change before executing the change set is incorrect. Although this is possible, the existing EC2 instances won't be replaced immediately this way. The change set will only update the Launch Template with the new AMI and will only be applied to the newly spawned instances, excluding the existing ones.
The option that says: Update the CloudFormation template AWS::AutoScaling::LaunchConfiguration
resource section and specify a DeletionPolicy
attribute with MinSuccessfulInstancesPercentof
50 is incorrect. This option should have used a CreationPolicy
attribute instead of a DeletionPolicy
attribute because there is no MinSuccessfulInstancesPercentof
element in the DeletionPolicy
attribute.
The option that says: Copy the updated template and deploy it to a new CloudFormation stack. After its successful deployment, update the Amazon Route 53 records to point to the new stack and delete the old stack is incorrect. Although this is possible, it involves an extra step that requires updating the Route 53 entry for every deployment. This does not satisfy the deployment requirement of just calling the UpdateStack API to replace the instances.
Takeaway: CloudFormation deploy ASG-EC2 stack,
UpdatePolicy
attribute with an AutoScalingRollingUpdate policy
to minimize downtime
References:
Check out the AWS CloudFormation Cheat Sheet:
Question 54: Skipped
A startup currently runs a web application on an extra-large Amazon EC2 instance. The application allows users to upload and download various pdf files from a private Amazon S3 bucket using a pre-signed URL. The web application checks if the file being requested actually exists in the S3 bucket before generating the URL.
In this scenario, how should the solutions architect configure the web application to access the Amazon S3 bucket securely? [-> IAM Role on EC2, app retrieve the temp cred thru curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access]
1. Create an IAM user with the appropriate permissions allowing access and listing of all of the objects of the S3 bucket. Associate the EC2 instance with the IAM user.
2. Program your web application to retrieve the user credentials from the EC2 instance metadata.
1. Store your access keys inside the EC2 instance.
2. Program your web application to retrieve the AWS credentials from the instance to interact with the objects in the S3 bucket.
1. Create an IAM role with a policy that allows listing and uploading of the objects in the S3 bucket. Launch the EC2 instance with the IAM role.
2. Program your web application to retrieve the temporary security credentials from the EC2 instance metadata.
(Correct)
1. Create an IAM role with a policy that allows listing of the objects in the S3 bucket. Launch the EC2 instance with the IAM role.
2. Program your web application to retrieve the temporary security credentials from the EC2 instance user data.
Explanation
Applications that run on an EC2 instance must include AWS credentials in their AWS API requests. You could store AWS credentials directly within the EC2 instance and allow applications in that instance to use those credentials. But you would then have to manage the credentials, ensure that you securely pass them to each instance, and update each EC2 instance when it's time to rotate the credentials. That's a lot of additional work. Instead, you can and should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you don't have to distribute long-term credentials (such as a user name and password or access keys) to an EC2 instance.
Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources. When you launch an EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials to sign API requests.
When the application runs, it obtains temporary security credentials from Amazon EC2 instance metadata. These are temporary security credentials that represent the role and are valid for a limited period of time to access various AWS resources. You can fetch the temporary security credentials from the instance by requesting it from this endpoint:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
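In practice, an AWS SDK such as boto3 resolves the role's temporary credentials from the instance metadata automatically, so the application never stores keys. A hedged sketch of the scenario's flow (the bucket name is hypothetical): verify that the requested file exists, then generate the pre-signed URL.

```python
import boto3
from botocore.exceptions import ClientError

# boto3 picks up the instance profile (IAM role) credentials from the
# instance metadata automatically; no access keys are stored on the instance.
s3 = boto3.client("s3")

BUCKET = "my-pdf-bucket"  # hypothetical bucket name

def get_download_url(key, expires=3600):
    try:
        s3.head_object(Bucket=BUCKET, Key=key)  # confirm the file exists first
    except ClientError:
        return None
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires,
    )
```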
In this particular scenario, you have to use a combination of an IAM Role and the EC2 instance metadata to give the web application the access it needs to the S3 bucket. Hence, the correct answer is the following option:
1. Create an IAM role with a policy that allows listing and uploading of the objects in the S3 bucket. Launch the EC2 instance with the IAM role.
2. Program your web application to retrieve the temporary security credentials from the EC2 instance metadata.
The following option is incorrect because it is a security vulnerability to store the AWS credentials in your EC2 instances:
1. Store your access keys inside the EC2 instance.
2. Program your web application to retrieve the AWS credentials from the instance to interact with the objects in the S3 bucket.
Anyone who has access to that EC2 instance can find your AWS credentials and hence, this is not a recommended option.
The following option is incorrect because you should have used an IAM Role instead of an IAM User:
1. Create an IAM user with the appropriate permissions allowing access and listing of all of the objects of the S3 bucket. Associate the EC2 instance with the IAM user.
2. Program your web application to retrieve the user credentials from the EC2 instance metadata.
The following option is incorrect because you should retrieve the IAM role's credentials from the EC2 instance metadata and not from the user data:
1. Create an IAM role with a policy that allows listing of the objects in the S3 bucket. Launch the EC2 instance with the IAM role.
2. Program your web application to retrieve the temporary security credentials from the EC2 instance user data.
References:
Check out this AWS IAM Cheat Sheet:
Question 55: Skipped
A company has a multi-tier web application hosted in AWS. It leverages Amazon CloudFront to reliably scale and quickly serve requests from users around the world. After several months in operation, the company received user complaints of slow response time from the web application. The monitoring team reported that the CloudFront cache hit ratio metric has been steadily dropping for the past months. Investigation of the requests shows inconsistent query string ordering and queries that contain upper-case or mixed-case letters [-> Lambda@Edge to normalize query]. These requests cause CloudFront to send unnecessary origin queries.
Which of the following actions will increase the cache hit ratio of the CloudFront distribution?
Write a Lambda@Edge function that will normalize the query parameters by sorting them in alphabetical order and converting them into lower case. Deploy this function with the CloudFront distribution and set “viewer request” as the trigger to invoke the function.
(Correct)
Reconfigure the CloudFront distribution to ensure that the “case insensitive” option is enabled for processing query string parameters.
Launch a reverse proxy inside the application VPC to intercept the requests going to the origin instances. Process the query parameters to sort them by name and convert them to lowercase letters before forwarding them to the instances.
Reconfigure the CloudFront distribution to remove the caching behavior based on query string parameters. This will cache the requests regardless of the order or case of the query parameters.
Explanation
For each query string parameter that your web application forwards to CloudFront, CloudFront forwards requests to your origin for every parameter value and caches a separate version of the object for every parameter value. This is true even if your origin always returns the same object regardless of the parameter value. For multiple parameters, the number of requests and the number of objects multiply. For example, if requests for an object include two parameters that each have three different values, CloudFront caches six versions of that object. AWS recommends that you configure CloudFront to cache based only on the query string parameters for which your origin returns different versions, and that you carefully consider the merits of caching based on each parameter.
If you configure CloudFront to cache based on query string parameters, you can improve caching if you do the following:
- Configure CloudFront to forward only the query string parameters for which your origin will return unique objects.
- Use the same case (uppercase or lowercase) for all instances of the same parameter. For example, if one request contains parameter1=A
and another contains parameter1=a
, CloudFront forwards separate requests to your origin when a request contains parameter1=A
and when a request contains parameter1=a
. CloudFront then caches the corresponding objects returned by your origin separately even if the objects are identical. If you use just A or a, CloudFront forwards fewer requests to your origin.
- List parameters in the same order. As with differences in case, if one request for an object contains the query string parameter1=a&parameter2=b
and another request for the same object contains parameter2=b&parameter1=a
, CloudFront forwards both requests to your origin and separately caches the corresponding objects even if they're identical. If you always use the same order for parameters, CloudFront forwards fewer requests to your origin.
Lambda@Edge is an extension of AWS Lambda, a compute service that lets you execute functions that customize the content that CloudFront delivers. You can author Node.js or Python functions in one Region, US-East-1 (N. Virginia), and then execute them in AWS locations globally that are closer to the viewer, without provisioning or managing servers. Lambda@Edge scales automatically, from a few requests per day to thousands per second. Processing requests at AWS locations closer to the viewer instead of on origin servers significantly reduces latency and improves the user experience.
With Lambda@Edge, you can improve your cache hit ratio by making the following changes to query strings before CloudFront forwards requests to your origin (a sketch of such a function follows this list):
- Alphabetize key-value pairs by the name of the parameter.
- Change the case of key-value pairs to lowercase.
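A minimal Python sketch of such a viewer-request function is shown below. It assumes the origin treats parameter names and values case-insensitively, which is what the scenario implies.

```python
from urllib.parse import parse_qsl, urlencode

def lambda_handler(event, context):
    # Viewer-request trigger: normalize the query string before CloudFront
    # checks its cache, so equivalent requests map to a single cache entry.
    request = event["Records"][0]["cf"]["request"]
    params = parse_qsl(request.get("querystring", ""), keep_blank_values=True)
    normalized = sorted((key.lower(), value.lower()) for key, value in params)
    request["querystring"] = urlencode(normalized)
    return request
```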
Therefore, the correct answer is: Write a Lambda@Edge function that will normalize the query parameters by sorting them in alphabetical order and converting them into lower case. Deploy this function with the CloudFront distribution and set “viewer request” as the trigger to invoke the function. With the "viewer request" set as the trigger, the Lambda@Edge function will normalize the query string before CloudFront processes it. CloudFront will then see the matching cache item for the normalized request, thus increasing the cache hit ratio.
The option that says: Reconfigure the CloudFront distribution to remove the caching behavior based on query string parameters. This will cache the requests regardless of the order or case of the query parameters is incorrect. This will ignore any query parameters and will cache all requests which will cause CloudFront to return incorrect cached items to users.
The option that says: Launch a reverse proxy inside the application VPC to intercept the requests going to the origin instances. Process the query parameters to sort them by name and convert them to lowercase letters before forwarding them to the instances is incorrect. This will not increase the cache hit ratio because CloudFront already forwarded the request to the origin as the proxy processes it which defeats the purpose of having a CloudFront cache.
The option that says: Reconfigure the CloudFront distribution to ensure that the “case insensitive” option is enabled for processing query string parameters is incorrect. CloudFront is case-sensitive when caching objects. There is no "case-insensitive" option in CloudFront. You will have to normalize your query parameters if you want all requests to be in lower-case.
References:
Check out these Amazon CloudFront and AWS Lambda Cheat Sheets:
Question 58: Skipped
A leading telecommunications company is moving all of its mission-critical, multi-tier applications to AWS. [-> Migration] At present, their architecture is composed of desktop client applications and several servers that are all located in their on-premises data center. [-> EC2] The application tier is using a MySQL database [-> RDS] that is hosted on a single VM while both the presentation and business logic layers are distributed across multiple VMs. There have been many reports that their users, who access the applications remotely, are experiencing increased connection latency and slow load times.
Which of the following is the MOST cost-effective solution to improve the uptime of the application with MINIMAL change [-> ?] and improve the overall user experience?
Set up a new CloudFront web distribution to improve the overall user experience of your desktop applications. Migrate the MySQL database from your VM to a Redshift cluster [X]. Host the application and presentation layers in ECS containers behind a Network Load Balancer.
Use Amazon AppStream 2.0 [?] to centrally manage your desktop applications and improve the overall user experience. Migrate the MySQL database from your VM to Amazon Aurora. Host the application and presentation layers in an Auto Scaling group on EC2 instances behind an Application Load Balancer.
(Correct)
Use Amazon ElastiCache to improve the overall user experience of your desktop applications. Directly migrate the MySQL database from your VM to a DynamoDB database [X]. Host the application and presentation layers in AWS Fargate containers behind an Application Load Balancer.
Using Amazon WorkSpaces [?], set up and allocate a workspace for each user to improve the overall user experience. Migrate the MySQL database from your VM to a self-hosted MySQL database in a large EC2 instance. Host the application and presentation layers in Amazon ECS containers behind an Application Load Balancer.
Explanation
Amazon AppStream 2.0 is a fully managed application streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer. You can easily scale to any number of users across the globe without acquiring, provisioning, and operating hardware or infrastructure. AppStream 2.0 is built on AWS, so you benefit from a data center and network architecture designed for the most security-sensitive organizations. Each user has a fluid and responsive experience with your applications, including GPU-intensive 3D design and engineering ones, because your applications run on virtual machines (VMs) optimized for specific use cases and each streaming session automatically adjusts to network conditions.
Enterprises can use AppStream 2.0 to simplify application delivery and complete their migration to the cloud. Educational institutions can provide every student access to the applications they need for class on any computer. Software vendors can use AppStream 2.0 to deliver trials, demos, and training for their applications with no downloads or installations. They can also develop a full software-as-a-service (SaaS) solution without rewriting their application.
Therefore, the correct answer is the option that says: Use Amazon AppStream 2.0 to centrally manage your desktop applications and improve the overall user experience. Migrate the MySQL database from your VM to Amazon Aurora. Host the application and presentation layers in an Auto Scaling group on EC2 instances behind an Application Load Balancer.
The option that says: Set up a new CloudFront web distribution to improve the overall user experience of your desktop applications. Migrate the MySQL database from your VM to a Redshift cluster. Host the application and presentation layers in ECS containers behind a Network Load Balancer is incorrect. Amazon Redshift is a data warehouse built for analytical workloads, so it is not a suitable migration target for a transactional MySQL database.
The option that says: Use Amazon ElastiCache to improve the overall user experience of your desktop applications. Directly migrate the MySQL database from your VM to a DynamoDB database. Host the application and presentation layers in AWS Fargate containers behind an Application Load Balancer is incorrect. You cannot directly migrate your MySQL database, which is relational in type, to DynamoDB which is a NoSQL database or non-relational. The use of Amazon ElastiCache alone will not significantly improve the overall user experience since this service is primarily used for caching.
The option that says: Using Amazon WorkSpaces, set up and allocate a workspace for each user to improve the overall user experience. Migrate the MySQL database from your VM to a self-hosted MySQL database in a large EC2 instance. Host the application and presentation layers in Amazon ECS containers behind an Application Load Balancer is incorrect. Amazon WorkSpaces is primarily used as a managed, secure Desktop-as-a-Service (DaaS) solution and it costs a lot to implement. Take note that in the scenario, it was mentioned that they are using desktop applications but there was no mention about the need for Windows or Linux desktops which is why the use of Amazon WorkSpaces is incorrect.
References:
Check out this Amazon RDS Cheat Sheet:
Question 59: Skipped
A company is building an innovative AI-powered traffic monitoring portal and uses AWS to host its cloud infrastructure. For the initial deployment, the application would be used by an entire city. The application should be highly available and fault-tolerant to avoid unnecessary downtime.
[-> EC2, ASG, ALB]
Which of the following options is the MOST suitable architecture that you should implement?
Use ElastiCache for the database caching of the portal. Launch an Auto Scaling group of EC2 instances on four Availability Zones. Attach an application load balancer to the Auto Scaling Group. Use a MySQL RDS instance with Multi-AZ deployments configuration and Read Replicas. Use Route 53 and create a CNAME that points to the ELB.
Launch an Auto Scaling group of EC2 instances on two Availability Zones. Attach an application load balancer to the Auto Scaling Group. Use a MySQL RDS instance with Multi-AZ deployments configuration. Use Route 53 and create an A record that points to the ELB.
Launch an Auto Scaling group of EC2 instances on three Availability Zones. Attach an application load balancer to the Auto Scaling Group. Use Amazon Aurora with Aurora Replicas as the database tier. Use Route 53 and create an Alias record that points to the ELB.
(Correct)
Use DynamoDB as the database of the portal. Launch an Auto Scaling group of EC2 instances on four Availability Zones. Attach an application load balancer to the Auto Scaling Group. Use Route 53 and create an A record that points to the ELB.
Explanation
Suppose that you start out running your app or website on a single EC2 instance, and over time, traffic increases to the point that you require more than one instance to meet the demand. You can launch multiple EC2 instances from your AMI and then use Elastic Load Balancing to distribute incoming traffic for your application across these EC2 instances. This increases the availability of your application. Placing your instances in multiple Availability Zones also improves the fault tolerance in your application. If one Availability Zone experiences an outage, traffic is routed to the other Availability Zone.
You can use Amazon EC2 Auto Scaling to maintain a minimum number of running instances for your application at all times. Amazon EC2 Auto Scaling can detect when your instance or application is unhealthy and replace it automatically to maintain the availability of your application. You can also use Amazon EC2 Auto Scaling to scale your Amazon EC2 capacity up or down automatically based on demand, using criteria that you specify.
In this scenario, all of the options are highly available architectures. The main difference here is how they use Route 53.
The correct answer is the option that says: Launch an Auto Scaling group of EC2 instances on three Availability Zones. Attach an application load balancer to the Auto Scaling Group. Use Amazon Aurora with Aurora Replicas as the database tier. Use Route 53 and create an Alias record that points to the ELB. It uses the correct type of record in Route 53, which is an alias record, to point to the ELB.
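For reference, creating the alias record programmatically looks like the sketch below. The hosted zone ID, record name, load balancer DNS name, and the load balancer's canonical hosted zone ID are all hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Upsert an alias A record that points the portal's domain at the ALB.
route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",  # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "portal.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        # The load balancer's canonical hosted zone ID,
                        # available from the ELB console or API.
                        "HostedZoneId": "Z2EXAMPLE1234",
                        "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)
```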
The option that says: Launch an Auto Scaling group of EC2 instances on two Availability Zones. Attach an application load balancer to the Auto Scaling Group. Use a MySQL RDS instance with Multi-AZ deployments configuration. Use Route 53 and create an A record that points to the ELB is incorrect. You need to create an Alias record with the DNS name and not a normal A-record that points to an IP address.
The option that says: Use DynamoDB as the database of the portal. Launch an Auto Scaling group of EC2 instances on four Availability Zones. Attach an application load balancer to the Auto Scaling Group. Use Route 53 and create an A record that points to the ELB is incorrect. To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias resource record set that points to your load balancer.
The option that says: Use ElastiCache for the database caching of the portal. Launch an Auto Scaling group of EC2 instances on four Availability Zones. Attach an application load balancer to the Auto Scaling Group. Use a MySQL RDS instance with Multi-AZ deployments configuration and Read Replicas. Use Route 53 and create a CNAME that points to the ELB is incorrect. You have to use an Alias record to route to the ELB and not a CNAME record.
References:
Check out this Amazon Route 53 Cheat Sheet:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 64: Skipped
An Amazon partner company plans to host its application on a fleet of Amazon EC2 instances in an Auto Scaling group on a public subnet inside a VPC. A single security group is associated with all the EC2 instances. On a private subnet in the same region, an Amazon Aurora MySQL DB Cluster is created to be accessed by the application. A different security group is associated with the DB Cluster. The solutions architect has been tasked to provide access from the application to the DB Cluster. [-> DBSG - INbound to AppSG; AppSG - Outbound to DB SG at port 3306]
Which of the following options is the recommended implementation to meet the application requirements while providing the least-privilege permissions? (Select TWO.)
To restrict communication in a different subnet, update the outbound Network Access Control List (NACL) of the public subnet to allow access to the private subnet CIDR with the default Aurora port 3306.
On the Amazon EC2 instances’ security group, create an outbound rule with the destination as the DB cluster’s security group using the default Aurora port 3306.
(Correct)
On the Amazon EC2 instances’ security group, create an inbound rule with the source as the DB cluster’s security group using the default Aurora port 3306.
On the Amazon Aurora cluster’s security group, create an inbound rule with the source as the Amazon EC2 instances’ security group using the default Aurora port 3306.
(Correct)
On the Amazon Aurora cluster’s security group, create an outbound rule with the destination as the Amazon EC2 instances’ security group using the default Aurora port 3306.
To restrict communication in a different subnet, update the inbound Network Access Control List (NACL) of the private subnet to allow access from the public subnet CIDR with default Aurora port 3306.
Explanation
A security group controls the traffic that is allowed to reach and leave the resources that it is associated with. For example, after you associate a security group with an EC2 instance, it controls the inbound and outbound traffic for the instance.
When you create a VPC, it comes with a default security group. You can create additional security groups for each VPC. You can associate a security group only with resources in the VPC for which it is created. For each security group, you add rules that control the traffic based on protocols and port numbers. There are separate sets of rules for inbound traffic and outbound traffic.
The rules of a security group control the inbound traffic that's allowed to reach the resources that are associated with the security group. The rules also control the outbound traffic that's allowed to leave them.
You can add or remove rules for a security group (also referred to as authorizing or revoking inbound or outbound access). A rule applies either to inbound traffic (ingress) or outbound traffic (egress). You can grant access to a specific source or destination.
Security groups are stateful. For example, if you send a request from an instance, the response traffic for that request is allowed to reach the instance regardless of the inbound security group rules. Responses to allowed inbound traffic are allowed to leave the instance, regardless of the outbound rules.
The option that says: On the Amazon EC2 instances’ security group, create an outbound rule with the destination as the DB cluster’s security group using the default Aurora port 3306 is correct. The application on the EC2 instance will initiate the connection to the DB cluster, thus, an outbound rule is needed on the instances security group with the destination set as the DB cluster security group.
The option that says: On the Amazon Aurora cluster’s security group, create an inbound rule with the source as the Amazon EC2 instances’ security group using the default Aurora port 3306 is correct. The DB cluster will be receiving requests from the EC2 instances, thus, an inbound rule is needed, and the source is the EC2 instances security group.
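A minimal boto3 sketch of the two correct rules is shown below; the security group IDs are hypothetical placeholders for the instances' and the DB cluster's security groups.

```python
import boto3

ec2 = boto3.client("ec2")

APP_SG = "sg-0aaaaaaaaaaaaaaaa"  # hypothetical: EC2 instances' security group
DB_SG = "sg-0bbbbbbbbbbbbbbbb"   # hypothetical: Aurora cluster's security group

# Inbound rule on the DB cluster's security group: allow 3306 from the app SG.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": APP_SG}],
    }],
)

# Outbound rule on the instances' security group: allow 3306 to the DB SG.
ec2.authorize_security_group_egress(
    GroupId=APP_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": DB_SG}],
    }],
)
```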
The option that says: On the Amazon EC2 instances’ security group, create an inbound rule with the source as the DB cluster’s security group using the default Aurora port 3306 is incorrect. The application on the EC2 instance will initiate the connection to the DB cluster, thus, an outbound rule is needed on the instances security group.
The option that says: On the Amazon Aurora cluster’s security group, create an outbound rule with the destination as the Amazon EC2 instances’ security group using the default Aurora port 3306 is incorrect. You don't need to add an outbound rule on the DB security because the DB clusters do not initiate connections. Any request received with the inbound rule will be automatically allowed to reply to the source.
The option that says: To restrict communication in a different subnet, update the outbound Network Access Control List (NACL) of the public subnet to allow access to the private subnet CIDR with the default Aurora port 3306 is incorrect. While this may work, you should not use a CIDR range because it would also open communication for other EC2 instances in the subnet. It is recommended to reference security group IDs in this scenario.
The option that says: To restrict communication in a different subnet, update the inbound Network Access Control List (NACL) of the private subnet to allow access from the public subnet CIDR with default Aurora port 3306 is incorrect. Likewise, this may work, but the CIDR range would also open communication for other EC2 instances in the subnet. It is recommended to reference security group IDs in this scenario.
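As a quick illustration of the two correct rules above, the following boto3 sketch adds the inbound rule on the DB cluster's security group and the outbound rule on the EC2 instances' security group. The security group IDs are placeholders, not values from the scenario.

import boto3

ec2 = boto3.client("ec2")

APP_SG_ID = "sg-0aaaaaaaaaaaaaaa1"  # EC2 instances' security group (placeholder)
DB_SG_ID = "sg-0bbbbbbbbbbbbbbb2"   # Aurora cluster's security group (placeholder)

# Inbound rule on the Aurora security group: allow port 3306
# only from the EC2 instances' security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": APP_SG_ID}],
    }],
)

# Outbound rule on the EC2 instances' security group: allow traffic
# to the Aurora security group on port 3306.
ec2.authorize_security_group_egress(
    GroupId=APP_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": DB_SG_ID}],
    }],
)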
References:
Check out this Amazon VPC Cheat Sheet:
Check out this Security Group vs. NACL comparison:
Question 1: Skipped
A consumer goods company runs its e-commerce website entirely on its on-premises data center with high-resolution photos and videos. Due to the unprecedented growth of their popular product, they are expecting an increase in incoming traffic to their website across the globe [-> CloudFront] in the coming days. The CTO requested to urgently do the necessary architectural changes to be able to handle the demand. The solutions architect suggested migrating the application to AWS, but the CTO decided that they need at least 3 months to implement a hybrid cloud architecture.
What could the solutions architect do with the current on-premises website to help offload some of the traffic and scale out to meet the demand in a cost-effective way? [-> Storage Gateway? S3?]
Launch a CloudFront web distribution with the URL of the on-premises web application as the origin. Offload the DNS to AWS to handle CloudFront traffic.
(Correct)
Rehost the website to an S3 bucket with website hosting enabled. Create a CloudFront distribution with the S3 endpoint as the origin. Set up Origin Shield and launch a CloudFront Function to offload the DNS to AWS to handle CloudFront traffic.
Use AWS Transit Gateway to establish a dedicated connection with the on-premises website and to manage and configure the servers with AWS App Runner to meet the demand.
Replicate the current web infrastructure of the on-premises website on AWS. Offload the DNS to Route 53 and configure weight-based DNS routing to send 50% of the traffic to AWS.
Explanation
Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay) so that content is delivered with the best possible performance.
- If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately.
- If the content is not in that edge location, CloudFront retrieves it from an origin that you've defined—such as an Amazon S3 bucket, a MediaPackage channel, or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.
You can use CloudFront with the on-premises website as the origin. CloudFront is a highly available, scalable service that can cache frequently accessed files from the website and can significantly improve load times.
Therefore, the correct answer is: Launch a CloudFront web distribution with the URL of the on-premises web application as the origin. Offload the DNS to AWS to handle CloudFront traffic.
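A minimal boto3 sketch of such a distribution, assuming the on-premises site is reachable at shop.example.com (a placeholder hostname) over HTTPS; the DNS offload mentioned above would then point the custom domain at this distribution:

import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),       # any unique string
    "Comment": "CDN in front of the on-premises website",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "onprem-origin",
            "DomainName": "shop.example.com",  # placeholder on-premises hostname
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "https-only",
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "onprem-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "MinTTL": 0,
        "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "all"}},
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
    },
})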
The option that says: Use AWS Transit Gateway to establish a dedicated connection with the on-premises website and to manage and configure the servers with AWS App Runner to meet the demand is incorrect since you cannot create a hybrid dedicated connection to your on-premises network by using an AWS Transit Gateway alone. You have to use AWS Direct Connect for this to work. AWS App Runner is simply a fully managed container application service that lets you build, deploy, and run containerized web applications and API services. The scenario didn't mention any container-based application; thus, AWS App Runner is irrelevant in this case.
The option that says: Rehost the website to an S3 bucket with website hosting enabled. Create a CloudFront distribution with the S3 endpoint as the origin. Set up Origin Shield and launch a CloudFront Function to offload the DNS to AWS to handle CloudFront traffic is incorrect. While S3 can be used for hosting websites, it's only suitable for static websites and simple landing pages, not for full-blown web applications that require dynamic content processing and user interaction. Lastly, the CloudFront function is not capable of DNS offloading, which is typically handled by DNS management services like Route 53.
The option that says: Replicate the current web infrastructure of the on-premises website on AWS. Offload the DNS to Route 53 and configure weight-based DNS routing to send 50% of the traffic to AWS is incorrect as this option is time-consuming and you don't have enough time to replicate the entire architecture of the on-premises website.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 7: Skipped
A multinational investment bank has multiple cloud architectures across the globe. The company has a VPC in the US East region for their East Coast office and another VPC in the US West for their West Coast office. There is a requirement to establish a low latency, high-bandwidth connection between their on-premises data center in Texas and both of their VPCs in AWS. [-> Transit Gateway? Wrong - Connect On Prem to > 1 VPC -> Direct Connect Gateway + n x Virtual Private Gateways + Private Virtual Interfaces ( if 1 VPC, Direct Connect)]
Which of the following options should the solutions architect implement to achieve the requirement in a cost-effective manner?
Set up an AWS VPN managed connection between the VPC in US East region and the on-premises data center in Texas.
Establish a Direct Connect connection between the VPC in US East region and the on-premises data center in Texas, and then establish another Direct Connect connection between the VPC in US West region and the on-premises data center.
Set up an AWS Direct Connect Gateway with two virtual private gateways. Launch and connect the required Private Virtual Interfaces to the Direct Connect Gateway.
(Correct)
Set up two separate VPC peering connections for the two VPCs and for the on-premises data center.
Explanation
You can use an AWS Direct Connect gateway to connect your AWS Direct Connect connection over a private virtual interface to one or more VPCs in your account that are located in the same or different regions. You associate a Direct Connect gateway with the virtual private gateway for the VPC, and then create a private virtual interface for your AWS Direct Connect connection to the Direct Connect gateway. You can attach multiple private virtual interfaces to your Direct Connect gateway. A Direct Connect gateway is a globally available resource. You can create the Direct Connect gateway in any public region and access it from all other public regions.
Hence, the correct answer is: Set up an AWS Direct Connect Gateway with two virtual private gateways. Launch and connect the required Private Virtual Interfaces to the Direct Connect Gateway.
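A rough boto3 outline of the pieces involved; the connection ID, virtual private gateway IDs, VLAN, and ASN below are placeholders for illustration only:

import boto3

dx = boto3.client("directconnect")

# One Direct Connect gateway, shared by both VPCs.
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
)["directConnectGateway"]
gw_id = gw["directConnectGatewayId"]

# Associate the virtual private gateway of each VPC (US East and US West).
for vgw_id in ["vgw-east-placeholder", "vgw-west-placeholder"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        virtualGatewayId=vgw_id,
    )

# One private virtual interface on the existing Direct Connect connection,
# attached to the Direct Connect gateway.
dx.create_private_virtual_interface(
    connectionId="dxcon-placeholder",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "corp-private-vif",
        "vlan": 101,
        "asn": 65000,
        "directConnectGatewayId": gw_id,
    },
)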
The option that says: Establish a Direct Connect connection between the VPC in US East region and the on-premises data center in Texas, and then establish another Direct Connect connection between the VPC in US West region and the on-premises data center is incorrect because establishing two separate Direct Connect connections is expensive and hence, not a cost-effective option. It is better to establish a Direct Connect gateway instead which uses one Direct Connect connection to integrate the 2 VPCs and the on-premises data center.
Setting up an AWS VPN managed connection between the VPC in US East region and the on-premises data center in Texas is incorrect because a VPN Connection is a more suitable solution for low to modest bandwidth requirements which can tolerate the inherent variability in Internet-based connectivity.
Setting up two separate VPC peering connections for the two VPCs and for the on-premises data center is incorrect because VPC peering is used to connect two VPCs together, not to connect a VPC to your on-premises data center.
References:
Check out this AWS Direct Connect Cheat Sheet:
Question 12: Skipped
A data analytics company is running a Redshift data warehouse for one of its major clients. In compliance with the Business Continuity Program of the client, they need to provide a Recovery Point Objective of 24 hours and a Recovery Time Objective of 1 hour [-> pilot light?]. The data warehouse should be available even in the event that the entire AWS Region is down [-> cross region]. Which of the following is the most suitable configuration for this scenario?
Configure Redshift to have automatic snapshots and do a cross-region snapshot copy to automatically replicate the current production cluster to the disaster recovery region.
(Correct)
Enable Redshift replication from the cluster running in the primary region to the cluster running in the secondary region. Change the DNS endpoint to the secondary cluster's primary node in case of system failures in the primary region.
Configure Redshift to use Cross-Region Replication (CRR) and in case of system failure, failover to the backup region and manually copy the snapshot from the primary region to the secondary region.
No additional configuration needed. Redshift is configured with automatic snapshot by default.
Explanation
When automated snapshots are enabled for a cluster, Amazon Redshift periodically takes snapshots of that cluster, usually every eight hours or following every 5 GB per node of data changes, or whichever comes first. Automated snapshots are enabled by default when you create a cluster. These snapshots are deleted at the end of a retention period. The default retention period is one day, but you can modify it by using the Amazon Redshift console or programmatically by using the Amazon Redshift API.
When you enable Amazon Redshift to automatically copy snapshots to another region, you specify the destination region where you want snapshots to be copied. In the case of automated snapshots, you can also specify the retention period that they should be kept in the destination region. After an automated snapshot is copied to the destination region and it reaches the retention time period there, it is deleted from the destination region, keeping your snapshot usage low. You can change this retention period if you need to keep the automated snapshots for a shorter or longer period of time in the destination region.
Therefore the correct answer is: Configure Redshift to have automatic snapshots and do a cross-region snapshot copy to automatically replicate the current production cluster to the disaster recovery region. The automatic snapshots feature is automatically enabled by default. You can then configure Amazon Redshift to automatically copy snapshots (automated or manual) for a cluster to another AWS Region.
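Cross-Region snapshot copy is a single API call on the existing cluster. A boto3 sketch, with the cluster name and Regions as placeholders:

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")  # primary Region

# Copy automated (and manual) snapshots to the DR Region. With a 24-hour RPO,
# keeping one day of automated snapshot copies in the destination Region is enough.
redshift.enable_snapshot_copy(
    ClusterIdentifier="prod-dw-cluster",   # placeholder cluster name
    DestinationRegion="us-west-2",         # disaster recovery Region
    RetentionPeriod=1,                     # days to keep automated copies
)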
The option that says: Configure Redshift to use Cross-Region Replication (CRR) and in case of system failure, failover to the backup region and manually copy the snapshot from the primary region to the secondary region is incorrect. Amazon Redshift only has cross-region backup feature (using snapshots), not Cross-Region Replication.
The option that says: Enable Redshift replication from the cluster running in the primary region to the cluster running in the secondary region. Change the DNS endpoint to the secondary cluster's primary node in case of system failures in the primary region is incorrect. Amazon Redshift only has cross-region backup feature (using snapshots); it can't replicate directly to another cluster in another region.
The option that says: No additional configuration needed. Redshift is configured with automatic snapshot by default is incorrect. Even though the automatic snapshots feature is enabled by default, cross-region snapshot copy is not.
Redshift -
automatic snapshot (enabled by default) (similar to EC2)
cross-region snapshot copy to auto replicate cluster to disaster region (similar to EC2)
References:
Check out this Amazon Redshift Cheat Sheet:
Question 13: Skipped
A research company hosts its internal applications inside AWS VPCs in multiple AWS Accounts. The internal applications are accessed securely from inside the company network using an AWS Site-to-Site VPN connection. VPC peering connections have been established from the company’s main AWS account to VPCs in other AWS Accounts. The company has recently announced that employees will be allowed to work remotely if they are connected using a VPN. The solutions architect has been tasked to create a scalable and reliable AWS Client VPN solution that employees can use when working remotely.
Which of the following options is the most cost-effective implementation to meet the company requirements with minimal changes to the current setup?
Install the AWS Client VPN on the company data center. Create AWS Transit Gateway to connect the main VPC and other VPCs into a single hub. Update the VPC route configurations to allow communication with the internal applications.
Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in the same VPC region in the main AWS account. Update the VPC route configurations to allow communication with the internal applications.
(Correct)
Install the AWS Client VPN on the company data center. Configure connectivity between the existing AWS Site-to-Site VPN and client VPN endpoint.
Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in each of the AWS accounts. Update each VPC route configuration to allow communication with the internal applications.
Explanation
AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.
The administrator is responsible for setting up and configuring the service. This involves creating the Client VPN endpoint, associating the target network, configuring the authorization rules, and setting up additional routes (if required).
The client is the end user. This is the person who connects to the Client VPN endpoint to establish a VPN session. The client establishes the VPN session from their local computer or mobile device using an OpenVPN-based VPN client application.
The following are the key components for using AWS Client VPN.
- Client VPN endpoint — Your Client VPN administrator creates and configures a Client VPN endpoint in AWS. Your administrator controls which networks and resources you can access when establishing a VPN connection.
- VPN client application — The software application that you use to connect to the Client VPN endpoint and establish a secure VPN connection.
- Client VPN endpoint configuration file — A configuration file that's provided to you by your Client VPN administrator. The file includes information about the Client VPN endpoint and the certificates required to establish a VPN connection. You load this file into your chosen VPN client application.
The configuration for this scenario includes a target VPC (VPC A) that is peered with an additional VPC (VPC B). We recommend this configuration if you need to give clients access to the resources inside a target VPC and other VPCs that are peered with it (such as VPC B).
Therefore, the correct answer is: Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in the same VPC region in the main AWS account. Update the VPC route configurations to allow communication with the internal applications. This solution is cost-effective and requires minimal changes to the current network setup.
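A condensed boto3 sketch of that single Client VPN endpoint in the main account; the certificate ARNs, subnet ID, and CIDR ranges are placeholders for illustration only:

import boto3

ec2 = boto3.client("ec2")

endpoint = ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/16",   # address pool handed to remote clients
    ServerCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/server-cert",
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn":
                "arn:aws:acm:us-east-1:111122223333:certificate/client-ca",
        },
    }],
    ConnectionLogOptions={"Enabled": False},
    Description="Remote-worker VPN into the main VPC",
)
endpoint_id = endpoint["ClientVpnEndpointId"]

# Associate a subnet in the main VPC; a route to that VPC's CIDR is added automatically.
ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=endpoint_id,
    SubnetId="subnet-main-placeholder",
)

# Authorize access to the main VPC and the peered VPC CIDRs.
for cidr in ["10.0.0.0/16", "10.1.0.0/16"]:
    ec2.authorize_client_vpn_ingress(
        ClientVpnEndpointId=endpoint_id,
        TargetNetworkCidr=cidr,
        AuthorizeAllGroups=True,
    )

# Peered VPCs need an explicit route through the associated subnet.
ec2.create_client_vpn_route(
    ClientVpnEndpointId=endpoint_id,
    DestinationCidrBlock="10.1.0.0/16",          # peered VPC CIDR (placeholder)
    TargetVpcSubnetId="subnet-main-placeholder",
)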
The option that says: Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in each of the AWS accounts. Update each VPC route configuration to allow communication with the internal applications is incorrect. You only need to create a Client VPN endpoint on the main VPC account that has Site-to-Site VPN.
The option that says: Install the AWS Client VPN on the company data center. Create AWS Transit Gateway to connect the main VPC and other VPCs into a single hub. Update the VPC route configurations to allow communication with the internal applications is incorrect. This may be possible, however, this will require reconfiguring the existing VPC peering between the main VPC and the other VPCs in other AWS accounts. Additionally, the AWS Client VPN should be installed on the employee workstations.
The option that says: Install the AWS Client VPN on the company data center. Configure connectivity between the existing AWS Site-to-Site VPN and client VPN endpoint is incorrect. This configuration is not possible; the AWS Client VPN software should be installed on the employee workstations, not in the company data center.
References:
Check out these Amazon VPC and VPC Peering Cheat Sheets:
Question 17: Skipped
A top Internet of Things (IoT) company has developed a wrist-worn activity tracker for soldiers deployed in the field. The device acts as a sensor to monitor the health and vital statistics of the wearer. It is expected that there would be thousands of devices that will send data to the server every minute and after 5 years, the number will increase to tens of thousands. One of the requirements is that the application should be able to accept the incoming data, run it through ETL to store in a data warehouse, and archive the old data. The officers in the military headquarters should have a real-time dashboard to view the sensor data.
Which of the following options is the most suitable architecture to implement in this scenario?
Store the raw data directly in an Amazon S3 bucket with a lifecycle policy to transition objects to Glacier after a month. Register the S3 bucket as a source on AWS Lake Formation. Launch an EMR cluster to access the data lake, run the data through ETL, and then output it to Amazon Redshift.
Store the data directly to DynamoDB. Launch a data pipeline that starts an EMR cluster using data from DynamoDB and sends the data to S3 and Redshift.
Send the raw data directly to Amazon Kinesis Data Firehose for processing and output the data to an S3 bucket. For archiving, create a lifecycle policy from S3 to Glacier. Use Lambda to process the data through Amazon EMR and send the output to Amazon Redshift.
(Correct)
Leverage Amazon Athena to accept the incoming data and store them using DynamoDB. Set up a cron job that takes data from the DynamoDB table and sends it to an Amazon EMR cluster for ETL, then outputs the result to Amazon Redshift.
Explanation
Whenever there is a requirement for real-time data collection or analysis, always consider Amazon Kinesis.
Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Splunk, and any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, Dynatrace, LogicMonitor, MongoDB, New Relic, and Sumo Logic. Kinesis Data Firehose is part of the Kinesis streaming data platform, along with Kinesis Data Streams and Kinesis Video Streams.
Therefore, the correct answer is: Send the raw data directly to Amazon Kinesis Data Firehose for processing and output the data to an S3 bucket. For archiving, create a lifecycle policy from S3 to Glacier. Use Lambda to process the data through Amazon EMR and send the output to Amazon Redshift. Amazon Kinesis will ingest the data in real-time, transform it, and output it to an Amazon S3 bucket.
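From the device (or an ingestion gateway in front of the devices), publishing a reading is a single Firehose call. A small sketch, assuming a delivery stream named soldier-vitals-stream (a placeholder) that is already configured to deliver to the S3 bucket:

import json
import boto3

firehose = boto3.client("firehose")

reading = {
    "device_id": "tracker-000123",   # placeholder device identifier
    "heart_rate": 72,
    "body_temp_c": 36.7,
    "timestamp": "2024-01-01T00:00:00Z",
}

# Firehose buffers the records and delivers them to the configured S3 bucket,
# where the EMR ETL job and the S3-to-Glacier lifecycle policy take over.
firehose.put_record(
    DeliveryStreamName="soldier-vitals-stream",   # placeholder stream name
    Record={"Data": (json.dumps(reading) + "\n").encode("utf-8")},
)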
The option that says: Store the data directly to DynamoDB. Launch a data pipeline that starts an EMR cluster using data from DynamoDB and sends the data to S3 and Redshift is incorrect. For the collection of real-time data, AWS recommends Amazon Kinesis for ingestion. After ingestion, you can output the data into several AWS services for further processing.
The option that says: Store the raw data directly in an Amazon S3 bucket with a lifecycle policy to transition objects to Glacier after a month. Register the S3 bucket as a source on AWS Lake Formation. Launch an EMR cluster to access the data lake, run the data through ETL, and then output it to Amazon Redshift is incorrect. Amazon S3 can only accept data using HTTP requests, and the data sent by the IoT devices may be in a different format. For ingesting the data, Amazon Kinesis should be the first service to accept it, process it, and store it in Amazon S3.
The option that says: Leverage Amazon Athena to accept the incoming data and store them using DynamoDB. Set up a cron job that takes data from the DynamoDB table and sends it to an Amazon EMR cluster for ETL, then outputs the result to Amazon Redshift is incorrect. Amazon Athena is a query service for analyzing data in Amazon S3 and is not suitable for ingesting real-time IoT data.
References:
Check out this Amazon Kinesis Cheat Sheet:
Question 19: Skipped
A leading e-commerce company plans to launch a donation website for all the victims of the recent super typhoon in South East Asia for its Corporate and Social Responsibility program. The company will advertise its program on TV and on social media, which is why they anticipate incoming traffic on their donation website. Donors can send their donations in cash, which can be transferred electronically, or they can simply post their home address where a team of volunteers can pick up their used clothes, canned goods, and other donations. Donors can optionally write a positive and encouraging message to the victims along with their donations. These features of the donation website will eventually result in a high number of write operations on their database tier considering that there are millions of generous donors around the globe who want to help.
Which of the following options is the best solution for this scenario?
Use an Oracle database hosted on an extra large Dedicated EC2 instance as your database tier and an SQS queue for buffering the write operations.
Amazon DynamoDB with a provisioned write throughput. Use an SQS queue to buffer the large incoming traffic to your Auto Scaled EC2 instances, which process and write the data to DynamoDB.
(Correct)
Use DynamoDB as a database storage and a CloudFront web distribution for hosting static resources.
Use an Amazon RDS instance with Provisioned IOPS.
Explanation
In this scenario, the application is write-intensive which means that we have to implement a solution to buffer and scale out the services according to the incoming traffic. SQS can be used to handle the large incoming requests and you can use DynamoDB to store large amounts of data.
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers common constructs such as dead-letter queues and cost allocation tags.
Amazon SQS queues can deliver very high throughput. Standard queues support a nearly unlimited number of transactions per second (TPS) per action. By default, FIFO queues support up to 3,000 messages per second with batching.
Therefore, the correct answer is: Amazon DynamoDB with a provisioned write throughput. Use an SQS queue to buffer the large incoming traffic to your Auto Scaled EC2 instances, which process and write the data to DynamoDB. The SQS queue can act as a buffer to temporarily store all the requests while waiting for them to be processed and written to the DynamoDB table.
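A simplified sketch of the buffering pattern: the web tier enqueues each donation, and the Auto Scaled workers drain the queue and write to DynamoDB. The queue URL and table name are placeholders:

import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/donations-queue"  # placeholder

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("Donations")   # placeholder table name

# Web tier: buffer the write in SQS instead of hitting the database directly.
def enqueue_donation(donation: dict) -> None:
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(donation))

# Worker tier (Auto Scaled EC2 instances): drain the queue and persist the items.
def process_batch() -> None:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        table.put_item(Item=json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])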
The option that says: Use DynamoDB as a database storage and a CloudFront web distribution for hosting static resources is incorrect. CloudFront is used for caching static content, not for hosting the static website resources. More importantly, this option does not provide any mechanism to buffer the high volume of write operations.
The option that says: Use an Oracle database hosted on an extra large Dedicated EC2 instance as your database tier and an SQS queue for buffering the write operations is incorrect. AWS does not recommend hosting a database on EC2 instances. RDS instances are designed for hosting databases.
The option that says: Use an Amazon RDS instance with Provisioned IOPS is incorrect. This may be possible, however, this is significantly more expensive than using a DynamoDB table and adding an SQS queue.
References:
Check out this Amazon SQS Cheat Sheet:
Question 22: Skipped
A company requires regular processing of a massive amount of product catalogs that need to be handled per batch. The data need to be processed regularly by on-demand workers. The company instructed its solutions architect to design a workflow orchestration system that will enable them to reprocess failures and handle multiple concurrent operations.
What is the MOST suitable solution that the solutions architect should implement in order to manage the state of every workflow?
Store workflow data in an Amazon RDS with AWS Lambda functions polling the RDS database instance for status changes. Set up worker Lambda functions to process the next workflow steps, then use Amazon QuickSight to visualize workflow states directly out of Amazon RDS.
Use Amazon MQ to set up a batch process workflow that handles the processing for a single batch. Develop worker jobs using AWS Lambda functions.
Set up a workflow using AWS Config and AWS Step Functions in order to orchestrate multiple concurrent workflows. Visualize the status of each workflow using the AWS Management Console. Store the historical data in an S3 bucket and visualize the data using Amazon QuickSight.
Implement AWS Step Functions to orchestrate batch processing workflows. Use the AWS Management Console to monitor workflow status and manage failure reprocessing
(Correct)
Explanation
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services, such as AWS Lambda, AWS Fargate, and Amazon SageMaker, into feature-rich applications.
In this scenario, the company can leverage AWS Step Functions to design and manage the workflows required for processing massive amounts of product catalogs. Step Functions can orchestrate these workflows by defining them as a series of steps or states, where each state can represent a specific task in the process, such as data validation, transformation, or loading. Furthermore, AWS Step Functions offers built-in mechanisms for error handling and retry policies, which are essential for managing failures in any of the workflow steps. Through the AWS Management Console, you can actively monitor each workflow's status, seeing real-time updates on successes, failures, and ongoing tasks. This setup makes it much easier for your solutions architect and operations team to quickly spot and troubleshoot any issues.
Hence, the correct answer is: Implement AWS Step Functions to orchestrate batch processing workflows. Use the AWS Management Console to monitor workflow status and manage failure reprocessing.
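A stripped-down state machine definition showing the built-in Retry/Catch behavior described above, wrapped in a boto3 call. The Lambda ARN, role ARN, and state names are placeholders for illustration only; each concurrent batch would simply be another execution of this state machine.

import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "Comment": "Process one catalog batch with automatic retries",
    "StartAt": "ProcessBatch",
    "States": {
        "ProcessBatch": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-batch",
            "Retry": [{
                "ErrorEquals": ["States.ALL"],
                "IntervalSeconds": 30,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "BatchFailed"}],
            "End": True,
        },
        "BatchFailed": {"Type": "Fail", "Cause": "Batch processing failed after retries"},
    },
}

sfn.create_state_machine(
    name="catalog-batch-processor",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/stepfunctions-exec-role",
)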
The option that says: Store workflow data in an Amazon RDS with AWS Lambda functions polling the RDS database instance for status changes. Set up worker Lambda functions to process the next workflow steps, then use Amazon QuickSight to visualize workflow states directly out of Amazon RDS is incorrect. While this could work, it involves more overhead in managing database connections and polling mechanisms, making it less efficient for orchestrating complex workflows compared to using a dedicated service like AWS Step Functions.
The option that says: Set up a workflow using AWS Config and AWS Step Functions in order to orchestrate multiple concurrent workflows. Visualize the status of each workflow using the AWS Management Console. Store the historical data in an S3 bucket and visualize the data using Amazon QuickSight is incorrect. AWS Config is primarily a service for auditing and evaluating the configurations of your AWS resources, not for workflow orchestration.
The option that says: Use Amazon MQ to set up a batch process workflow that handles the processing for a single batch. Develop worker jobs using AWS Lambda functions is incorrect. Amazon MQ is simply a message broker service for messaging between software components but does not offer the same level of workflow orchestration or state management capabilities as AWS Step Functions. It's more suited for decoupling applications and integrating different systems rather than managing complex workflows.
References:
Check out this AWS Step Functions Cheat Sheet:
Question 23: Skipped
A digital banking company runs its production workload on the AWS cloud. The company has enabled multi-region support on an AWS CloudTrail trail. As part of the company security policy, the creation of any IAM users must be approved by the security team. When an IAM user is created, all of the permissions from that user must be removed automatically. A notification must then be sent to the security team to approve the user creation.
Which of the following options should the solutions architect implement to meet the company requirements? (Select THREE.)
Configure an event filter in AWS CloudTrail for the CreateUser event and send a notification to an Amazon Simple Notification Service (Amazon SNS) topic.
Use Amazon EventBridge to invoke an AWS Fargate task that will remove permissions on the newly created IAM user.
Create a rule in Amazon EventBridge that will check for patterns in AWS CloudTrail API calls with the CreateUser eventName.
(Correct)
Use Amazon EventBridge to invoke an AWS Step Function state machine that will remove permissions on the newly created IAM user.
(Correct)
Send a message to an Amazon Simple Notification Service (Amazon SNS) topic. Have the security team subscribe to the SNS topic.
(Correct)
Use AWS Audit Manager to continually audit newly created users and send a notification to the security team.
Explanation
Using IAM, you can manage access to AWS services and resources securely. You can create and manage AWS users and groups and use permissions to allow and deny those users and groups access to AWS resources.
You can create an Amazon EventBridge rule with an event pattern that matches a specific IAM API call or multiple IAM API calls. Then, associate the rule with an Amazon Simple Notification Service (Amazon SNS) topic. When the rule runs, an SNS notification is sent to the corresponding subscriptions.
Amazon EventBridge works with AWS CloudTrail, a service that records actions from AWS services. CloudTrail captures API calls made by or on behalf of your AWS account from the EventBridge console and to EventBridge API operations.
Using the information collected by CloudTrail, you can determine what request was made to EventBridge, the IP address from which the request was made, who made the request, when it was made, and more. When an event occurs in EventBridge, CloudTrail records the event in Event history. You can view, search, and download recent events in your AWS account.
The option that says: Use Amazon EventBridge to invoke an AWS Step Function state machine that will remove permissions on the newly created IAM user is correct. When Amazon EventBridge detects a pattern from the specified events, it can trigger an AWS Step Function that has specific actions.
The option that says: Send a message to an Amazon Simple Notification Service (Amazon SNS) topic. Have the security team subscribe to the SNS topic is correct. If you create a pattern in Amazon EventBridge, it can then send a message to an SNS Topic once the event is detected.
The option that says: Create a rule in Amazon EventBridge that will check for patterns in AWS CloudTrail API calls with the CreateUser eventName is correct. With Amazon EventBridge, you define patterns to scan events that happen in your AWS account, such as creating an IAM user.
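Tying the three correct pieces together, a boto3 sketch of the EventBridge rule and its two targets; all ARNs below are placeholders:

import json
import boto3

events = boto3.client("events")

# Match the CreateUser API call recorded by AWS CloudTrail.
pattern = {
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventSource": ["iam.amazonaws.com"], "eventName": ["CreateUser"]},
}

events.put_rule(Name="detect-iam-createuser", EventPattern=json.dumps(pattern))

# Target 1: Step Functions state machine that strips the new user's permissions.
# Target 2: SNS topic the security team subscribes to for approval requests.
events.put_targets(
    Rule="detect-iam-createuser",
    Targets=[
        {
            "Id": "remediate",
            "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:strip-iam-permissions",
            "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-invoke-sfn",
        },
        {
            "Id": "notify",
            "Arn": "arn:aws:sns:us-east-1:111122223333:security-approvals",
        },
    ],
)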
The option that says: Configure an event filter in AWS CloudTrail for the CreateUser event and send a notification to an Amazon Simple Notification Service (Amazon SNS) topic is incorrect. You can filter events on CloudTrail by searching for keywords, however, you can't configure it to send notifications to Amazon SNS. You should use EventBridge for this scenario.
The option that says: Use AWS Audit Manager to continually audit newly created users and send a notification to the security team is incorrect. AWS Audit Manager is used to map compliance requirements to AWS usage data with prebuilt and custom frameworks and automated evidence collection, not for scanning IAM user creation events.
The option that says: Use Amazon EventBridge to invoke an AWS Fargate task that will remove permissions on the newly created IAM user is incorrect. Although Amazon EventBridge can run Amazon ECS tasks (including Fargate) as rule targets, launching a container just to strip an IAM user's permissions adds unnecessary overhead compared to the Step Functions state machine used in this scenario.
References:
Check out these Amazon CloudWatch and Amazon Simple Notification Service Cheat Sheets:
Question 30: Skipped
A company has an on-premises data center that is hosting its gaming service. Its primary function is player-matching and is accessible from players around the world. The gaming service prioritizes network speed for the users so all traffic to the servers uses User Datagram Protocol (UDP). As more players join, the company is having difficulty scaling its infrastructure so it plans to migrate the service to the AWS cloud. The Solutions Architect has been tasked with the migration and AWS Shield Advanced has been enabled already to protect all public-facing resources.
Which of the following actions should the Solutions Architect implement to achieve the company requirements? (Select TWO.)
Place the Auto Scaling of Amazon EC2 instances behind an Internet-facing Application Load Balancer (ALB). For the domain name, create an Amazon Route 53 entry that is Aliased to the FQDN of the ALB.
Create an AWS WAF rule that will explicitly block all non-UDP traffic. Ensure that the AWS WAF rule is associated with the load balancer of the EC2 instances.
Set up network ACL rules on the VPC to deny all non-UDP traffic. Ensure that the NACL is associated with the load balancer subnets.
(Correct)
Place the Auto Scaling of Amazon EC2 instances behind a Network Load Balancer (NLB). For the domain name, create an Amazon Route 53 entry that points to the Elastic IP address of the NLB.
(Correct)
Create an Amazon CloudFront distribution and set the Load Balancer as the origin. Use only secure protocols on the distribution origin settings.
Explanation
A Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC based on IP protocol data. Ideal for load balancing of both TCP and UDP traffic, Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.
For UDP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, and destination port. A UDP flow has the same source and destination, so it is consistently routed to a single target throughout its lifetime. Different UDP flows have different source IP addresses and ports so that they can be routed to different targets.
When you create Network Access Control Lists (NACLs), you can specify both allow and deny rules. This is useful if you want to explicitly deny certain types of traffic to your application. For example, you can define IP addresses (as CIDR ranges), protocols, and destination ports that are denied access to the entire subnet. If your application is used only for TCP traffic, you can create a rule to deny all UDP traffic or vice versa. This option is useful when responding to DDoS attacks because it lets you create your own rules to mitigate the attack when you know the source IPs or other signatures.
If you are subscribed to AWS Shield Advanced, you can register Elastic IPs (EIPs) as Protected Resources. DDoS attacks against EIPs that have been registered as Protected Resources are detected more quickly, which can result in a faster time to mitigate.
The option that says: Place the Auto Scaling of Amazon EC2 instances behind a Network Load Balancer (NLB). For the domain name, create an Amazon Route 53 entry that points to the Elastic IP address of the NLB is correct. The service uses UDP traffic and prioritizes network speed, so a Network Load Balancer is ideal for this scenario. You can create a Route 53 entry pointing to the NLB’s Elastic IP.
The option that says: Set up network ACL rules on the VPC to deny all non-UDP traffic. Ensure that the NACL is associated with the load balancer subnets is correct. Since all traffic to the servers is UDP, you can set NACL to block all non-UDP traffic which can help block attacks such as traffic coming from TCP connections.
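A boto3 sketch of the NACL rules for the load balancer subnets. The NACL ID and the game's UDP port are placeholders; on a custom NACL, anything not explicitly allowed falls through to the final deny-all (*) rule:

import boto3

ec2 = boto3.client("ec2")
NACL_ID = "acl-placeholder"   # NACL associated with the NLB subnets (placeholder)
GAME_PORT = 27015             # placeholder UDP port used by the game service

# Inbound: allow only UDP traffic to the game port; TCP and other protocols
# are dropped by the implicit deny-all rule.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Egress=False,
    Protocol="17",            # 17 = UDP
    RuleAction="allow",
    CidrBlock="0.0.0.0/0",
    PortRange={"From": GAME_PORT, "To": GAME_PORT},
)

# Outbound: allow UDP responses back to the players' ephemeral source ports.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Egress=True,
    Protocol="17",
    RuleAction="allow",
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)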
The option that says: Place the Auto Scaling of Amazon EC2 instances behind an internet-facing Application Load Balancer (ALB). For the domain name, create an Amazon Route 53 entry that is Aliased to the FQDN of the ALB is incorrect. UDP traffic operates at Layer 4 of the OSI model while an ALB operates at Layer 7. A Network Load Balancer is a better fit for this scenario.
The option that says: Create an AWS WAF rule that will explicitly block all non-UDP traffic. Ensure that the AWS WAF rule is associated with the load balancer of the EC2 instances is incorrect. AWS WAF rules cannot protect a Network Load Balancer yet. It is better to use NACL rules to block the non-UDP traffic.
The option that says: Create an Amazon CloudFront distribution and set the Load Balancer as the origin. Use only secure protocols on the distribution origin settings is incorrect. CloudFront only serves HTTP and HTTPS content, so it cannot distribute the UDP traffic used by the game service, and its baseline DDoS protection (AWS Shield Standard) is already enabled by default. For more advanced DDoS mitigation, AWS Shield Advanced offers integration with NACLs that can block traffic at the edge of the AWS network.
References:
Check out these Application Load Balancer and AWS Shield Cheat Sheets:
Question 32: Skipped
A retail company runs its two-tier e-commerce website on its on-premises data center. The application runs on a LAMP stack behind a load balancing appliance. The operations team uses SSH to login to the application servers to deploy software updates and install patches on the system. The website has been a target of multiple cyber-attacks recently such as:
- Distributed Denial of Service (DDoS) attacks
- SQL Injection attacks
- Dictionary attacks to SSH accounts on the web servers
The solutions architect plans to migrate the whole system to AWS to improve its security and availability. The following approaches are laid out to address the company concerns:
- Fix SQL injection attacks by reviewing existing application code and logic.
- Use the latest Amazon Linux AMIs to ensure that initial security patches are installed.
- Install the AWS Systems Manager agent on the instances to manage OS patching.
Which of the following are additional recommended actions to address the identified attacks while maintaining high availability and security for the application?
Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Disable remote SSH login to the EC2 instances and use AWS SSM Session Manager instead. Migrate the on-premises MySQL server to an Amazon RDS Single-AZ instance. Create a CloudFront distribution in front of the application servers, and apply AWS WAF rules for the distribution.
Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Enable remote SSH login on the EC2 instances but with limited access from the company IP address only. Migrate the on-premises MySQL server to an Amazon RDS Single-AZ instance. Enable AWS Shield Standard to protect the instances from DDoS attacks.
Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Enable remote SSH login only on a bastion host with limited access from the company IP address. Migrate the on-premises MySQL server to an Amazon RDS Multi-AZ instance. Create a CloudFront distribution in front of the application servers and enable AWS Shield Standard for DDoS protection.
Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Disable remote SSH login to the EC2 instances and use AWS SSM Session Manager instead. Migrate the on-premises MySQL server to an Amazon RDS Multi-AZ instance. Create a CloudFront distribution in front of the application servers, and apply AWS WAF rules for the distribution. Enable AWS Shield Advanced for added protection.
(Correct)
Explanation
AWS Systems Manager Session Manager allows you to manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, on-premises instances, and virtual machines (VMs) through an interactive one-click browser-based shell or through the AWS Command Line Interface (AWS CLI). Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.
A distributed denial of service (DDoS) attack is an attack in which multiple compromised systems attempt to flood a target, such as a network or web application, with traffic. A DDoS attack can prevent legitimate users from accessing a service and can cause the system to crash due to the overwhelming traffic volume.
AWS provides two levels of protection against DDoS attacks: AWS Shield Standard and AWS Shield Advanced. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against the most common, frequently occurring network and transport layer DDoS attacks that target your website or applications. For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. When you subscribe to AWS Shield Advanced and add specific resources to be protected, AWS Shield Advanced provides expanded DDoS attack protection for web applications running on the resources.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that control bot traffic and block common attack patterns, such as SQL injection or cross-site scripting. You can also customize rules that filter out specific traffic patterns.
Therefore, the correct answer is: Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Disable remote SSH login to the EC2 instances and use AWS SSM Session Manager instead. Migrate the on-premises MySQL server to an Amazon RDS Multi-AZ instance. Create a CloudFront distribution in front of the application servers, and apply AWS WAF rules for the distribution. Enable AWS Shield Advanced for added protection. AWS SSM Session manager allows secure remote access to your instances without using SSH login. AWS WAF rules can block common web exploits like SQL injection attacks and AWS Shield Advanced provides added protection for DDoS attacks.
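For the SQL injection concern specifically, here is a hedged boto3 sketch of a WAFv2 web ACL that uses the AWS managed SQLi rule group. It is scoped for CloudFront, so the client must be created in us-east-1, and the returned ARN would then be referenced in the distribution's web ACL setting; the names are placeholders:

import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

resp = wafv2.create_web_acl(
    Name="ecommerce-waf",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-sqli-rules",
        "Priority": 0,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesSQLiRuleSet",
            },
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "sqli-rules",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ecommerce-waf",
    },
)
web_acl_arn = resp["Summary"]["ARN"]   # attach this ARN to the CloudFront distribution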
The option that says: Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Enable remote SSH login only on a bastion host with limited access from the company IP address. Migrate the on-premises MySQL server to an Amazon RDS Multi-AZ instance. Create a CloudFront distribution in front of the application servers and enable AWS Shield Standard for DDoS protection is incorrect. You don't need to use a bastion host if you have the AWS SSM agent installed on the instances; you can use SSM Session Manager to log in to the servers. AWS Shield Standard is already enabled by default, but you should enable AWS Shield Advanced for a higher level of protection.
The option that says: Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Enable remote SSH login on the EC2 instances but with limited access from the company IP address only. Migrate the on-premises MySQL server to an Amazon RDS Single-AZ instance. Enable AWS Shield Standard to protect the instances from DDoS attacks is incorrect. You don't need to enable remote SSH login if you have the AWS SSM agent installed on the instances; you can use SSM Session Manager instead. AWS Shield Standard is already enabled by default, and using a Single-AZ RDS instance is not recommended if you require a highly available database.
The option that says: Use an Application Load Balancer to spread the load on a cluster of Amazon EC2 instances. Disable remote SSH login to the EC2 instances and use AWS SSM Session Manager instead. Migrate the on-premises MySQL server to an Amazon RDS Single-AZ instance. Create a CloudFront distribution in front of the application servers, and apply AWS WAF rules for the distribution is incorrect. AWS WAF may protect you from common web attacks, but you still need to enable AWS Shield Advanced for a higher level of protection against DDoS attacks. Using a Single-AZ RDS instance is not recommended if you require a highly available database.
References:
Check out these AWS Systems Manager and AWS WAF Cheat Sheets:
Question 35: Skipped
A supermarket chain is planning to launch an online shopping website to allow its loyal shoppers to buy their groceries online. Since there are a lot of online shoppers at any time of the day, the website should be highly available 24/7 and fault tolerant. Which of the following options provides the best architecture that meets the above requirement?
Deploy the website across 2 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an Amazon RDS database running in a single Reserved EC2 Instance.
Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an RDS configured with Multi-AZ Deployments.
(Correct)
Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and one RDS instance deployed with Read Replicas in the two separate Availability Zones.
Deploy the website across 2 Availability Zones with Auto Scaled EC2 instances each and an RDS instance deployed with Read Replicas in two separate Availability Zones.
Explanation
For high availability, it is best to always choose an RDS instance configured with Multi-AZ deployments. You should also deploy your application across multiple Availability Zones to improve fault tolerance. AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes.
Hence, the correct answer is the option that says: Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an RDS configured with Multi-AZ Deployments.
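The database half of that architecture comes down to a single create call. A boto3 sketch with placeholder sizing and credentials:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="shop-db",            # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.m6g.large",            # placeholder instance class
    AllocatedStorage=100,
    MultiAZ=True,                              # synchronous standby in another AZ
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",  # use Secrets Manager in practice
)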
The option that says: Deploy the website across 2 Availability Zones with Auto Scaled EC2 instances each and an RDS instance deployed with Read Replicas in two separate Availability Zones is incorrect because a Read Replica is primarily used to improve the scalability of the database. This architecture will neither be highly available 24/7 nor fault-tolerant in the event of an AZ outage.
The option that says: Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and one RDS instance deployed with Read Replicas in the two separate Availability Zones is incorrect. Although the EC2 instances are highly available, the database tier is not. You have to use an Amazon RDS database with Multi-AZ configuration.
The option that says: Deploy the website across 2 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an Amazon RDS database running in a single Reserved EC2 Instance is incorrect. With this architecture, the single Reserved Instance is only deployed in a single AZ. In the event of an AZ outage, the entire database will be unavailable.
References:
Check out this AWS Auto Scaling Cheat Sheet:
Question 38: Skipped
A law firm has decided to use Amazon S3 buckets for storage after an extensive Total Cost of Ownership (TCO) analysis comparing S3 versus acquiring more storage for its on-premises hardware. The attorneys, paralegals, clerks, and other employees of the law firm will be using Amazon S3 buckets to store their legal documents and other media files. For a better user experience, the management wants to implement a single sign-on system in which the user can just use their existing Active Directory login to access the S3 storage to avoid having to remember yet another password.
Which of the following options should the solutions architect implement for the above requirement and also provide a mechanism that restricts access for each user to a designated user folder in a bucket? (Select TWO.)
Set up a matching IAM user and IAM Policy for every user in your corporate directory that needs access to a folder in the bucket.
Set up a federation proxy or a custom identity provider and use AWS Security Token Service to generate temporary tokens. Use an IAM Role to enable access to AWS services.
(Correct)
Configure an IAM Policy that restricts access only to the user-specific folders in the Amazon S3 Bucket.
(Correct)
Use Amazon Connect to integrate the on-premises Active Directory with Amazon S3 and AWS IAM.
Configure an IAM user that provides access for the user and an IAM Policy that restricts access only to the user-specific folders in the S3 Bucket.
Explanation
Federation enables you to manage access to your AWS Cloud resources centrally. With federation, you can use single sign-on (SSO) to access your AWS accounts using credentials from your corporate directory. Federation uses open standards, such as Security Assertion Markup Language 2.0 (SAML), to exchange identity and security information between an identity provider (IdP) and an application.
Your users might already have identities outside of AWS, such as in your corporate directory. If those users need to work with AWS resources (or work with applications that access those resources) then those users also need AWS security credentials. You can use an IAM role to specify permissions for users whose identity is federated from your organization or a third-party identity provider (IdP). Setting up an identity provider for federated access is required for integrating your on-premises Active Directory with AWS.
Therefore, the following options are the correct answers:
- Set up a federation proxy or a custom identity provider and use AWS Security Token Service to generate temporary tokens. Use an IAM Role to enable access to AWS services.
- Configure an IAM Policy that restricts access only to the user-specific folders in the Amazon S3 Bucket.
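Inside the federation proxy, each authenticated Active Directory user would receive temporary credentials scoped to their own folder. A boto3 sketch, with the bucket name, role ARN, and folder layout as assumptions:

import json
import boto3

sts = boto3.client("sts")
BUCKET = "lawfirm-documents"   # placeholder bucket name

def credentials_for(username: str) -> dict:
    # Session policy that narrows the role's permissions to the user's folder.
    scoped_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{BUCKET}/home/{username}/*",
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{BUCKET}",
                "Condition": {"StringLike": {"s3:prefix": [f"home/{username}/*"]}},
            },
        ],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/s3-federated-access",  # placeholder role
        RoleSessionName=username,
        Policy=json.dumps(scoped_policy),
        DurationSeconds=3600,
    )
    return resp["Credentials"]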
The option that says: Use Amazon Connect to integrate the on-premises Active Directory with Amazon S3 and AWS IAM is incorrect because Amazon Connect is simply an easy-to-use omnichannel cloud contact center service that helps companies provide superior customer service at a lower cost; it is not used for identity federation.
The option that says: Configure an IAM user that provides access for the user and an IAM Policy that restricts access only to the user-specific folders in the S3 Bucket is incorrect because you have to use an IAM Role, instead of an IAM user, to provide the access needed to your AWS Resources.
The option that says: Set up a matching IAM user and IAM Policy for every user in your corporate directory that needs access to a folder in the bucket is incorrect because you should be creating IAM Roles rather than IAM Users.
References:
Check out this AWS Identity & Access Management (IAM) Cheat Sheet:
Question 40: Skipped
A company deployed a blockchain application in AWS a year ago using AWS OpsWorks. There have been a lot of security patches lately for the underlying Linux servers of the blockchain application, which means that the OpsWorks stack instances should be updated.
In this scenario, which of the following are the best practices when updating an AWS OpsWorks stack? (Select TWO.)
Create and start new instances to replace your current online instances. Then delete the current instances. The new instances will have the latest set of security patches installed during setup.
(Correct)
Use WAF to deploy the security patches.
On Windows-based instances, run the Update Dependencies stack command.
Use CloudFormation to deploy the security patches.
Delete the entire stack and create a new one.
Run the Update Dependencies stack command for Linux-based instances.
(Correct)
Explanation
Linux operating system providers supply regular updates, most of which are operating system security patches but can also include updates to installed packages. You should ensure that your instances' operating systems are current with the latest security patches.
By default, AWS OpsWorks Stacks automatically installs the latest updates during setup, after an instance finishes booting. AWS OpsWorks Stacks does not automatically install updates after an instance is online, to avoid interruptions such as restarting application servers. Instead, you manage updates to your online instances yourself, so you can minimize any disruptions.
AWS recommends that you use one of the following to update your online instances:
- Create and start new instances to replace your current online instances. Then delete the current instances. The new instances will have the latest set of security patches installed during setup.
- On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command, which installs the current set of security patches and other updates on the specified instances.
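The second approach maps to a single deployment command. A boto3 sketch with a placeholder stack ID; as noted above, this applies to Linux instances in Chef 11.10 or older stacks:

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Run the Update Dependencies stack command on the online Linux instances
# in the stack to install the current set of security patches.
opsworks.create_deployment(
    StackId="11111111-2222-3333-4444-555555555555",  # placeholder stack ID
    Command={"Name": "update_dependencies"},
)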
The following options are incorrect as these are irrelevant in updating your online instances in OpsWorks:
- Delete the entire stack and create a new one.
- Use CloudFormation to deploy the security patches.
- On Windows-based instances, run the Update Dependencies stack command.
- Use WAF to deploy the security patches.
References:
Check out this AWS OpsWorks Cheat Sheet:
Question 45: Skipped
A retail company has an online shopping website that provides cheap bargains and discounts on various products. The company has recently moved its infrastructure from its previous hosting provider to AWS. The architecture uses an Application Load Balancer (ALB) in front of an Auto Scaling group of Spot and On-Demand EC2 instances. The solutions architect must set up a CloudFront web distribution that uses a custom domain name and the origin should point to the new ALB.
Which of the following options is the correct implementation of an end-to-end HTTPS connection from the origin to the CloudFront viewers?
Use a certificate that is signed by a trusted third-party certificate authority in the ALB, which is then imported into ACM. Set the Viewer Protocol Policy to HTTPS Only in CloudFront, then use an SSL/TLS certificate from a third-party certificate authority which was imported to S3.
Use a certificate that is signed by a trusted third-party certificate authority in the ALB, which is then imported into ACM. Set the Viewer Protocol Policy to Match Viewer to support both HTTP or HTTPS in CloudFront then use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store.
Import a certificate that is signed by a trusted third-party certificate authority, store it to ACM then attach it in your ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store.
(Correct)
Upload a self-signed certificate in the ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store.
Explanation
Remember that there are rules on which type of SSL Certificate to use if you are using an EC2 or an ELB as your origin. This question is about setting up an end-to-end HTTPS connection between the Viewers, CloudFront, and your custom origin, which is an ALB instance.
The certificate issuer you must use depends on whether you want to require HTTPS between viewers and CloudFront or between CloudFront and your origin:
HTTPS between viewers and CloudFront
- You can use a certificate that was issued by a trusted certificate authority (CA) such as Comodo, DigiCert, Symantec or other third-party providers.
- You can use a certificate provided by AWS Certificate Manager (ACM)
HTTPS between CloudFront and a custom origin
- If the origin is not an ELB load balancer, such as Amazon EC2, the certificate must be issued by a trusted CA such as Comodo, DigiCert, Symantec or other third-party providers.
- If your origin is an ELB load balancer, you can also use a certificate provided by ACM.
If you're using your own domain name, such as tutorialsdojo.com, you need to change several CloudFront settings. You also need to use an SSL/TLS certificate provided by AWS Certificate Manager (ACM), or import a certificate from a third-party certificate authority into ACM or the IAM certificate store. Lastly, you should set the Viewer Protocol Policy to HTTPS Only in CloudFront.
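As an illustration of the import step, the sketch below imports a certificate issued by a third-party CA into ACM with boto3. The file names are placeholders; the certificate for CloudFront is imported in us-east-1 because CloudFront only uses ACM certificates from that region (a certificate for the ALB would be imported in the ALB's own region):

import boto3

acm = boto3.client("acm", region_name="us-east-1")  # CloudFront requires ACM certificates in us-east-1

# File names are placeholders referring to a certificate issued by a third-party CA.
with open("certificate.pem", "rb") as cert, \
     open("private_key.pem", "rb") as key, \
     open("chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

print("Imported certificate ARN:", response["CertificateArn"])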
Hence, the option that says: Import a certificate that is signed by a trusted third-party certificate authority, store it to ACM then attach it in your ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store is the correct answer in this scenario.
The option that says: Upload a self-signed certificate in the ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store is incorrect because you cannot directly upload a self-signed certificate in your ALB.
The option that says: Use a certificate that is signed by a trusted third-party certificate authority in the ALB, which is then imported into ACM. Set the Viewer Protocol Policy to Match Viewer to support both HTTP or HTTPS in CloudFront then use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store is incorrect because you have to set the Viewer Protocol Policy to HTTPS Only.
The option that says: Use a certificate that is signed by a trusted third-party certificate authority in the ALB, which is then imported into ACM. Set the Viewer Protocol Policy to HTTPS Only in CloudFront, then use an SSL/TLS certificate from a third-party certificate authority which was imported to S3 is incorrect because you cannot use an SSL/TLS certificate from a third-party certificate authority which was imported to S3.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 46: Skipped
A large media company based in Los Angeles, California runs a MySQL RDS instance inside an AWS VPC. The company runs a custom analytics application on its on-premises data center that will require read-only access to the database. The company wants to have a read replica of a running MySQL RDS instance inside of AWS cloud to its on-premises data center and use it as an endpoint for the analytics application.
Which of the following options is the most secure way of performing this replication?
Create a Data Pipeline that exports the MySQL data each night and securely downloads the data from an S3 HTTPS endpoint. Use mysqldump to transfer the database from the Amazon S3 to the on-premises MySQL instance and start the replication.
Create an IPSec VPN connection using either OpenVPN or VPN/VGW through the Virtual Private Cloud service. Prepare an instance of MySQL running external to Amazon RDS. Configure the MySQL DB instance to be the replication source. Use mysqldump to transfer the database from the Amazon RDS instance to the on-premises MySQL instance and start the replication from the Amazon RDS Read Replica.
(Correct)
Configure the RDS instance as the master and enable replication over the open Internet using an SSL endpoint to the on-premises server. Use mysqldump to transfer the database from the Amazon S3 to the on-premises MySQL instance and start the replication.
RDS cannot replicate to an on-premises database server. Instead, configure the RDS instance to replicate to an EC2 instance with core MySQL and then configure replication over a secure VPN/VPG connection.
Explanation
Amazon supports Internet Protocol security (IPsec) VPN connections. IPsec is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a data stream. Data transferred between your VPC and datacenter routes over an encrypted VPN connection to help maintain the confidentiality and integrity of data in transit. An Internet gateway is not required to establish a hardware VPN connection.
You can set up replication between an Amazon RDS MySQL (or MariaDB) DB instance that is running in AWS and a MySQL (or MariaDB) instance in your on-premises data center. Replication to an instance of MySQL running external to Amazon RDS is only supported during the time it takes to export a database from a MySQL DB instance.
To allow communication between RDS and your on-premises network, you must first set up a VPN or an AWS Direct Connect connection. Once that is done, follow the steps below to perform the replication:
Prepare an instance of MySQL running external to Amazon RDS.
Configure the MySQL DB instance to be the replication source.
Use mysqldump to transfer the database from the Amazon RDS instance to the instance external to Amazon RDS (e.g., an on-premises server).
Start replication to the instance running external to Amazon RDS.
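The sketch below illustrates only the final step, assuming the VPN is already established, the mysqldump output has already been imported into the on-premises server, and the binary log coordinates were captured during the export. Host names, credentials, and coordinates are placeholders, and the exact SQL syntax depends on the MySQL version:

# A minimal sketch of starting replication on the on-premises (external) replica.
import pymysql

replica = pymysql.connect(host="onprem-mysql.example.local",
                          user="admin", password="replica-admin-password")

with replica.cursor() as cur:
    # On MySQL 8.0.23+ use CHANGE REPLICATION SOURCE TO / START REPLICA instead.
    cur.execute("""
        CHANGE MASTER TO
            MASTER_HOST = 'mydb.abcdefghij.us-east-1.rds.amazonaws.com',
            MASTER_USER = 'repl_user',
            MASTER_PASSWORD = 'repl_password',
            MASTER_LOG_FILE = 'mysql-bin-changelog.000123',
            MASTER_LOG_POS = 120
    """)
    cur.execute("START SLAVE")
replica.commit()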
The option that says: Create an IPSec VPN connection using either OpenVPN or VPN/VGW through the Virtual Private Cloud service. Prepare an instance of MySQL running external to Amazon RDS. Configure the MySQL DB instance to be the replication source. Use mysqldump to transfer the database from the Amazon RDS instance to the on-premises MySQL instance and start the replication from the Amazon RDS Read Replica is correct because it is feasible to set up the secure IPSec VPN connection between the on-premises server and AWS VPC using the VPN/Gateways.
The option that says: Configure the RDS instance as the master and enable replication over the open Internet using an SSL endpoint to the on-premises server. Use mysqldump to transfer the database from the Amazon S3 to the on-premises MySQL instance and start the replication is incorrect because an SSL endpoint cannot be utilized here as it is only used to securely access the database.
The option that says: RDS cannot replicate to an on-premises database server. Instead, configure the RDS instance to replicate to an EC2 instance with core MySQL and then configure replication over a secure VPN/VPG connection is incorrect because its premise is false: Amazon RDS for MySQL can replicate to an on-premises MySQL instance. In addition, replicating only to an EC2 instance inside AWS would not give the analytics application the required on-premises read endpoint.
The option that says: Create a Data Pipeline that exports the MySQL data each night and securely downloads the data from an S3 HTTPS endpoint. Use mysqldump to transfer the database from the Amazon S3 to the on-premises MySQL instance and start the replication is incorrect because Data Pipeline is for batch jobs and is not suitable for this scenario.
References:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 47: Skipped
A travel and tourism company has multiple AWS accounts that are assigned to various departments. The marketing department stores the images and media files that are used in its marketing campaigns on an encrypted Amazon S3 bucket in its AWS account. The marketing team wants to share this S3 bucket so that the management team can review the files.
The solutions architect created an IAM role named mgmt_reviewer in the Management AWS account as well as a custom AWS Key Management Service (AWS KMS) key in the Marketing AWS account, which is associated with the S3 bucket. However, users from the Management account receive an Access Denied error when they assume the IAM role and try to access the objects in the S3 bucket.
Which of the following options should the solutions architect implement to make sure that the users on the Management AWS account can access the Marketing team's S3 bucket with the minimum required permissions? (Select THREE.)
Add an Amazon S3 bucket policy that includes read permission. Ensure that the Principal is set to the Marketing team’s AWS account ID.
Update the custom AWS KMS key policy in the Marketing account to include decrypt permission for the mgmt_reviewer IAM role.
(Correct)
Ensure that the mgmt_reviewer IAM role on the Management account has full permissions to access the S3 bucket. Add a decrypt permission for the custom KMS key on the IAM policy.
Update the custom AWS KMS key policy in the Marketing account to include decrypt permission for the Management team’s AWS account ID.
Ensure that the mgmt_reviewer IAM role policy includes read permissions to the Amazon S3 bucket and a decrypt permission to the custom AWS KMS key.
(Correct)
Add an Amazon S3 bucket policy that includes read permission. Ensure that the Principal is set to the Management team’s AWS account ID.
(Correct)
Explanation
An AWS account—for example, Account A—can grant another AWS account, Account B, permission to access its resources, such as buckets and objects. Account B can then delegate those permissions to users in its account. Account A can also directly grant a user in Account B permissions using a bucket policy. But the user will still need permission from the parent account, Account B, to which the user belongs, even if Account B does not have permission from Account A. As long as the user has permission from both the resource owner and the parent account, the user will be able to access the resource.
In Amazon S3, you can grant users in another AWS account (Account B) granular cross-account access to objects owned by your account (Account A).
Depending on the type of access that you want to provide, use one of the following solutions to grant cross-account access to objects:
- AWS Identity and Access Management (IAM) policies and resource-based bucket policies for programmatic-only access to S3 bucket objects
- IAM policies and resource-based Access Control Lists (ACLs) for programmatic-only access to S3 bucket objects
- Cross-account IAM roles for programmatic and console access to S3 bucket objects
If the requester is an IAM principal, then the AWS account that owns the principal must grant the S3 permissions through an IAM policy. Based on your specific use case, the bucket owner must also grant permissions through a bucket policy or ACL. After access is granted, programmatic access of cross-account buckets is the same as accessing the same account buckets.
To grant access to an AWS KMS-encrypted bucket in Account A to a user in Account B, you must have these permissions in place:
- The bucket policy in Account A must grant access to Account B.
- The AWS KMS key policy in Account A must grant access to the user in Account B.
- The AWS Identity and Access Management (IAM) policy in Account B must grant user access to the bucket and the AWS KMS key in Account A.
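To make the first two requirements concrete, here is a rough boto3 sketch as applied in the Marketing account. The account ID, role ARN, and bucket name are placeholders, and the KMS statement is only printed because it must be merged into the key's existing policy rather than replace it:

import json
import boto3

MGMT_ACCOUNT_ID = "222222222222"          # placeholder Management account ID
MGMT_ROLE_ARN = "arn:aws:iam::222222222222:role/mgmt_reviewer"
BUCKET = "marketing-media-bucket"          # placeholder bucket name

# Bucket policy statement in the Marketing account: read-only access for the Management account.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowManagementAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MGMT_ACCOUNT_ID}:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))

# Statement to merge into the existing KMS key policy: lets the mgmt_reviewer role decrypt.
kms_statement = {
    "Sid": "AllowMgmtReviewerDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": MGMT_ROLE_ARN},
    "Action": "kms:Decrypt",
    "Resource": "*",
}
print(json.dumps(kms_statement, indent=2))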
The option that says: Add an Amazon S3 bucket policy that includes read permission. Ensure that the Principal is set to the Management team’s AWS account ID is correct. The S3 bucket policy must allow read permission from the Management team. Thus, the Principal on the bucket policy should be set to the Management team's account ID.
The option that says: Update the custom AWS KMS key policy in the Marketing account to include decrypt permission for the mgmt_reviewer IAM role is correct. The decrypt permission is needed by the mgmt_reviewer IAM role to use the KMS key to decrypt the objects on the S3 bucket.
The option that says: Ensure that the mgmt_reviewer IAM role policy includes read permissions to the Amazon S3 bucket and a decrypt permission to the custom AWS KMS key is correct. This is needed so that all users assuming this role will have permission to access the S3 bucket and use the KMS key to decrypt the S3 objects.
The option that says: Ensure that the mgmt_reviewer IAM role on the Management account has full permissions to access the S3 bucket. Add a decrypt permission for the custom KMS key on the IAM policy is incorrect. The Management team only requires read permissions on the S3 bucket. Unless required, you should not grant full permission access to the S3 bucket.
The option that says: Add an Amazon S3 bucket policy that includes read permission. Ensure that the Principal is set to the Marketing team’s AWS account ID is incorrect. The Marketing team already owns the S3 bucket. The bucket policy Principal should be set to the Management team's account ID.
The option that says: Update the custom AWS KMS key policy in the Marketing account to include decrypt permission for the Management team’s AWS account ID is incorrect. The policy on the KMS key should include the mgmt_reviewer IAM role ARN, not the Management team's account ID.
References:
Check out these Amazon S3 and AWS Key Management Service Cheat Sheets:
Question 49: Skipped
A leading media company in the country is building a voting system for a popular singing competition show on national TV. The viewers who watch the performances can visit the company’s dynamic website to vote for their favorite singer. After the show has finished, it is expected that the site will receive millions of visitors who would like to cast their votes. Web visitors should log in using their social media accounts and then submit their votes. The webpage will display the winner after the show, as well as the vote total for each singer. The solutions architect is tasked to build the voting site and ensure that it can handle the rapid influx of incoming traffic in the most cost-effective way possible.
Which of the following architectures should you use to meet the requirement?
Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Use Amazon Cognito for user authentication. The web servers will process the user's vote and pass the result in an SQS queue. Set up an IAM Role to grant the EC2 instances permissions to write to the SQS queue. A group of EC2 instances will then retrieve and process the items from the queue. Finally, store the results in a DynamoDB table.
(Correct)
Use a CloudFront web distribution and an Application Load balancer in front of an Auto Scaling group of EC2 instances. The servers will first authenticate the user using IAM and then process the user's vote which will then be stored to RDS.
Use a CloudFront web distribution and deploy the website using S3 hosting feature. Write a custom NodeJS application to authenticate the user using STS and AssumeRole API. Setup an IAM Role to grant permission to store the user's vote to a DynamoDB table.
Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Develop a custom authentication service using STS and AssumeRoleWithSAML API. The servers will process the user's vote and store the result in a DynamoDB table.
Explanation
For User authentication, you can use Amazon Cognito. To host the static assets of the website, you can use CloudFront. Considering that there would be millions of voters and data to be stored, it is best to use DynamoDB which can automatically scale.
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
Therefore, the correct answer is: Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Use Amazon Cognito for user authentication. The web servers will process the user's vote and pass the result in an SQS queue. Set up an IAM Role to grant the EC2 instances permissions to write to the SQS queue. A group of EC2 instances will then retrieve and process the items from the queue. Finally, store the results in a DynamoDB table.
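As a rough sketch of the queue-based part of this architecture (the queue URL, table name, and key schema are assumptions), the web tier enqueues each vote and a worker tier increments a DynamoDB counter:

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/votes-queue"  # placeholder
TABLE = "SingerVotes"                                                        # placeholder

sqs = boto3.client("sqs")
dynamodb = boto3.client("dynamodb")

# Web tier: enqueue the authenticated user's vote.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="singer-42")

# Worker tier: drain the queue and atomically increment the vote counter.
messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=10).get("Messages", [])
for msg in messages:
    dynamodb.update_item(
        TableName=TABLE,
        Key={"SingerId": {"S": msg["Body"]}},
        UpdateExpression="ADD VoteCount :one",
        ExpressionAttributeValues={":one": {"N": "1"}},
    )
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])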
The option that says: Use a CloudFront web distribution and an Application Load balancer in front of an Auto Scaling group of EC2 instances. The servers will first authenticate the user using IAM and then process the user's vote which will then be stored to RDS is incorrect. By default, you can't use IAM alone to set up social media account registration to your website. You have to use Amazon Cognito. It is also more suitable to use DynamoDB instead of an RDS database since this is only a simple voting application that doesn't warrant a complex table relationship. DynamoDB is a fully managed database that automatically scales, unlike RDS. It can store and accommodate millions of data from the users who cast their votes more effectively than RDS.
The option that says: Use a CloudFront web distribution and deploy the website using S3 hosting feature. Write a custom NodeJS application to authenticate the user using STS and AssumeRole API. Setup an IAM Role to grant permission to store the user's vote to a DynamoDB table is incorrect. The AssumeRole API is not suitable for user authentication that uses a web identity provider such as Amazon Cognito, Login with Amazon, Facebook, Google, or any social media identity provider. You can use the AssumeRoleWithWebIdentity API or, better yet, Amazon Cognito instead.
The option that says: Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Develop a custom authentication service using STS and AssumeRoleWithSAML API. The servers will process the user's vote and store the result in a DynamoDB table is incorrect. The AssumeRoleWithSAML API is primarily used for SAML 2.0-based federation, not for sign-in with social identity providers.
References:
Check out these Amazon Cognito Cheat Sheets:
Question 50: Skipped
A FinTech startup has recently consolidated its multiple AWS accounts using AWS Organizations. It currently has two teams in its organization, a security team and a development team. The former is responsible for protecting their cloud infrastructure and making sure that all of their resources are compliant, while the latter is responsible for developing new applications that are deployed to EC2 instances. The security team is required to set up a system that will check if all of the running EC2 instances are using an approved AMI. However, the solution should not stop the development team from deploying an EC2 instance running on a non-approved AMI; any intervention is only allowed once the deployment has been completed. In addition, they have to set up a notification system that reports the compliance state of the resources.
Which of the following options is the most suitable solution that the security team should implement?
Create and assign an SCP and an IAM policy that restricts the AWS accounts and the development team from launching an EC2 instance using an unapproved AMI. Create a CloudWatch alarm that will automatically notify the security team if there are non-compliant EC2 instances running in their VPCs.
Use an AWS Config Managed Rule and specify a list of approved AMI IDs. This rule will check whether running EC2 instances are using specified AMIs. Configure AWS Config to stream configuration changes and notifications to an Amazon SNS topic which will send a notification for non-compliant instances.
(Correct)
Set up a Trusted Advisor check that will verify whether the running EC2 instances in your VPCs are using approved AMIs. Create a CloudWatch alarm and integrate it with the Trusted Advisor metrics that will check all of the AMIs being used by your EC2 instances and that will send a notification if there is a running instance which uses an unapproved AMI.
Use the Amazon Inspector service to automatically check all of the AMIs that are being used by your EC2 instances. Set up an SNS topic that will send a notification to both the security and development teams if there is a non-compliant EC2 instance running in their VPCs.
Explanation
When you run your applications on AWS, you usually use AWS resources, which you must create and manage collectively. As the demand for your application keeps growing, so does your need to keep track of your AWS resources. AWS Config is designed to help you oversee your application resources.
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. With AWS Config, you can do the following:
- Evaluate your AWS resource configurations for desired settings.
- Get a snapshot of the current configurations of the supported resources that are associated with your AWS account.
- Retrieve configurations of one or more resources that exist in your account.
- Retrieve historical configurations of one or more resources.
- Receive a notification whenever a resource is created, modified, or deleted.
- View relationships between resources. For example, you might want to find all resources that use a particular security group.
AWS Config provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resources comply with common best practices. In this scenario, you can use the approved-amis-by-id AWS managed rule, which checks whether running instances are using specified AMIs.
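A minimal boto3 sketch of deploying this managed rule is shown below; the rule name and AMI IDs are placeholders. Notifications for non-compliant resources are then delivered through the SNS topic configured on the AWS Config delivery channel:

import json
import boto3

config = boto3.client("config")

# The AMI IDs below are placeholders for the company's approved images.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-approved-amis",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "APPROVED_AMIS_BY_ID",
        },
        "InputParameters": json.dumps(
            {"amiIds": "ami-0123456789abcdef0,ami-0fedcba9876543210"}
        ),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)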
Therefore, the correct answer is: Use an AWS Config Managed Rule and specify a list of approved AMI IDs. This rule will check whether running EC2 instances are using specified AMIs. Configure AWS Config to stream configuration changes and notifications to an Amazon SNS topic which will send a notification for non-compliant instances.
The option that says: Create and assign an SCP and an IAM policy that restricts the AWS accounts and the development team from launching an EC2 instance using an unapproved AMI. Create a CloudWatch alarm that will automatically notify the security team if there are non-compliant EC2 instances running in their VPCs is incorrect. Setting up an SCP and IAM Policy will totally restrict the development team from launching EC2 instances with unapproved AMIs. The scenario clearly says that the solution should not have this kind of restriction.
The option that says: Use the Amazon Inspector service to automatically check all of the AMIs that are being used by your EC2 instances. Set up an SNS topic that will send a notification to both the security and development teams if there is a non-compliant EC2 instance running in their VPCs is incorrect. The Amazon Inspector service is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It does not have the capability to detect EC2 instances that are using unapproved AMIs, unlike AWS Config.
The option that says: Set up a Trusted Advisor check that will verify whether the running EC2 instances in your VPCs are using approved AMIs. Create a CloudWatch alarm and integrate it with the Trusted Advisor metrics that will check all of the AMIs being used by your EC2 instances and that will send a notification if there is a running instance which uses an unapproved AMI is incorrect. AWS Trusted Advisor is primarily used to check if your cloud infrastructure is in compliance with the best practices and recommendations across five categories: cost optimization, security, fault tolerance, performance, and service limits. Its security checks for EC2 do not cover checking the individual AMIs that are being used by your EC2 instances.
References:
Check out this AWS Config Cheat Sheet:
Question 55: Skipped
A company runs hundreds of Amazon EC2 instances inside a VPC. Whenever an EC2 error is encountered, the solutions architect performs manual steps in order to regain access to the impaired instance. The management wants to automatically recover impaired EC2 instances in the VPC. The goal is to automatically fix an instance that has become unreachable due to network misconfigurations, RDP issues, firewall settings, and many others to meet the compliance requirements.
Which of the following options is the most suitable solution that the solutions architect should implement to meet the above requirements?
To meet the compliance requirements, use a combination of AWS Config and the AWS Systems Manager Session Manager to self-diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the recovery process by setting up a monitoring system using CloudWatch, AWS Lambda, and the AWS Systems Manager Run Command that will automatically monitor and recover impaired EC2 instances.
To meet the compliance requirements, use a combination of AWS Config and the AWS Systems Manager State Manager to self-diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the recovery process by using AWS Systems Manager Maintenance Windows.
Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using the Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document.
(Correct)
Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using AWS Lambda and a custom script.
Explanation
EC2Rescue can help you diagnose and troubleshoot problems on Amazon EC2 Linux and Windows Server instances. You can run the tool manually, or you can run the tool automatically by using Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document. The AWSSupport-ExecuteEC2Rescue document is designed to perform a combination of Systems Manager actions, AWS CloudFormation actions, and Lambda functions that automate the steps normally required to use EC2Rescue.
Systems Manager Automation simplifies common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. Automation enables you to do the following:
- Build Automation workflows to configure and manage instances and AWS resources.
- Create custom workflows or use pre-defined workflows maintained by AWS.
- Receive notifications about Automation tasks and workflows by using Amazon CloudWatch Events.
- Monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager console.
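As an illustration, the runbook can be started with a single API call. The instance ID below is a placeholder, and the parameter name follows the AWSSupport-ExecuteEC2Rescue documentation at the time of writing:

import boto3

ssm = boto3.client("ssm")

# Start the EC2Rescue automation against an impaired instance (placeholder instance ID).
execution = ssm.start_automation_execution(
    DocumentName="AWSSupport-ExecuteEC2Rescue",
    Parameters={"UnreachableInstanceId": ["i-0123456789abcdef0"]},
)
print("Automation execution ID:", execution["AutomationExecutionId"])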
Therefore, the correct answer is: Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using the Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document.
The option that says: To meet the compliance requirements, use a combination of AWS Config and the AWS Systems Manager State Manager to self-diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the recovery process by using AWS Systems Manager Maintenance Windows is incorrect. AWS Config is a service which is primarily used to assess, audit, and evaluate the configurations of your AWS resources but not to diagnose and troubleshoot problems in your EC2 instances. In addition, AWS Systems Manager State Manager is primarily used as a secure and scalable configuration management service that automates the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define but does not help you in troubleshooting your EC2 instances.
The option that says: To meet the compliance requirements, use a combination of AWS Config and the AWS Systems Manager Session Manager to self-diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the recovery process by setting up a monitoring system using CloudWatch, AWS Lambda, and the AWS Systems Manager Run Command that will automatically monitor and recover impaired EC2 instances is incorrect. Just like the other option, AWS Config does not help you troubleshoot the problems in your EC2 instances. The AWS Systems Manager Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys for your EC2 instances, but it does not provide the capability of helping you diagnose and troubleshoot problems in your instance like what the EC2Rescue tool can do. In addition, setting up CloudWatch, AWS Lambda, and the AWS Systems Manager Run Command to automatically monitor and recover impaired EC2 instances is an operational overhead; the same result can be achieved more easily with AWS Systems Manager Automation.
The option that says: Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using AWS Lambda and a custom script is incorrect because while it is technically possible to use AWS Lambda to automate the running of the EC2Rescue tool, this approach requires significant development effort to create and maintain the custom script. On the other hand, using the existing AWSSupport-ExecuteEC2Rescue automation document in AWS Systems Manager can be a more straightforward solution. This document, maintained by AWS, can be triggered automatically, reducing the need for custom scripting and the associated development and maintenance effort.
References:
Check out this AWS Systems Manager Cheat Sheet:
Question 57: Skipped
A company recently switched to using Amazon CloudFront for its content delivery network. The development team already made the preparations necessary to optimize the application performance for global users. The company’s content management system (CMS) serves both dynamic and static content. The dynamic content is served from a fleet of Amazon EC2 instances behind an application load balancer (ALB) while the static assets are served from an Amazon S3 bucket. The ALB is configured as the default origin of the CloudFront distribution. An Origin Access Control (OAC) was created and applied to the S3 bucket policy to allow access only from the CloudFront distribution. When testing the CMS webpage, requests for the static assets return a 404 error.
Which of the following solutions must be implemented to solve this error? (Select TWO.)
Edit the CloudFront distribution and create another origin for serving the static assets.
(Correct)
Replace the CloudFront distribution with AWS Global Accelerator. Configure the AWS Global Accelerator with multiple endpoint groups that target endpoints on all AWS Regions. Use the accelerator’s static IP address to create a record in Amazon Route 53 for the apex domain.
Update the application load balancer listener to check for HEADER condition if the request is from CloudFront and forward it to the Amazon S3 bucket.
Update the application load balancer listener and create a new path-based rule for the static assets so that it will forward requests to the Amazon S3 bucket.
Update the CloudFront distribution and create a new behavior that will forward to the origin of the static assets based on path pattern.
(Correct)
Explanation
Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay) so that content is delivered with the best possible performance.
You create a CloudFront distribution to tell CloudFront where you want the content to be delivered from and the details about how to track and manage content delivery. Then CloudFront uses computers—edge servers—that are close to your viewers to deliver that content quickly when someone wants to see it or use it.
In general, if you’re using an Amazon S3 bucket as the origin for a CloudFront distribution, you can either allow everyone to have access to the files there or you can restrict access. To restrict access to content that you serve from Amazon S3 buckets, follow these steps:
Create a special CloudFront user called an origin access control (OAC) and associate it with your distribution.
Configure your S3 bucket permissions so that CloudFront can use the OAC to access the files in your bucket and serve them to your users. Make sure that users can’t use a direct URL to the S3 bucket to access a file there.
After you take these steps, users can only access your files through CloudFront, not directly from the S3 bucket.
You can configure a single CloudFront web distribution to serve different types of requests from multiple origins. For example, if you are building a website that serves static content from an Amazon Simple Storage Service (Amazon S3) bucket and dynamic content from a load balancer, you can serve both types of content from a CloudFront web distribution.
Follow these steps to configure a CloudFront web distribution to serve static content from an S3 bucket and dynamic content from a load balancer:
Open your web distribution from the CloudFront console.
Choose the Origins tab.
Create one origin for your S3 bucket and another origin for your load balancer. Note: If you're using a custom origin server or an S3 website endpoint, you must enter the origin's domain name into the Origin Domain Name field.
From your distribution, choose the Behaviors tab.
Create a behavior that specifies a path pattern to route all static content requests to the S3 bucket. For example, you can set the "images/*.jpg" path pattern to route all requests for ".jpg" files in the images directory to the S3 bucket.
Edit the Default (*) path pattern behavior and set its Origin as your load balancer.
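For orientation only, the fragment below sketches the two cache behaviors as they would appear inside a CloudFront DistributionConfig. The origin IDs and path pattern are assumptions, and a real configuration requires many more fields:

# Trimmed sketch of the path-based routing pieces of a CloudFront DistributionConfig,
# assuming an S3 origin named "s3-static-assets" and an ALB origin named "alb-dynamic".
static_assets_behavior = {
    "PathPattern": "static/*",            # requests under /static/* go to the S3 origin
    "TargetOriginId": "s3-static-assets",
    "ViewerProtocolPolicy": "redirect-to-https",
}

default_behavior = {
    "TargetOriginId": "alb-dynamic",      # everything else goes to the load balancer
    "ViewerProtocolPolicy": "redirect-to-https",
}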
Therefore, the correct answers are:
- Edit the CloudFront distribution and create another origin for serving the static assets.
- Update the CloudFront distribution and create a new behavior that will forward to the origin of the static assets based on path pattern.
The option that says: Update the application load balancer listener and create a new path-based rule for the static assets so that it will forward requests to the Amazon S3 bucket is incorrect. The OAC is used to allow only CloudFront to access the static assets on the S3 bucket. Using the load balancer to forward the requests will result in the access denied error.
The option that says: Replace the CloudFront distribution with AWS Global Accelerator. Configure the AWS Global Accelerator with multiple endpoint groups that target endpoints on all AWS Regions. Use the accelerator’s static IP address to create a record in Amazon Route 53 for the apex domain is incorrect. AWS Global Accelerator is a network layer service that improves the availability and performance of your applications used by a wide global audience. This service is not a substitute for a CDN service like Amazon CloudFront.
The option that says: Update the application load balancer listener to check for HEADER condition if the request is from CloudFront and forward it to the Amazon S3 bucket is incorrect. Although an ALB can inspect the HTTP Header requests, using the load balancer to forward the requests will result in the access denied error because of the OAC on the S3 bucket policy.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 58: Skipped
A company has several applications written in TypeScript and Python hosted on the AWS cloud. The company uses an automated deployment solution for its applications using AWS CloudFormation templates and AWS CodePipeline. The company recently acquired a new business unit that uses Python scripts to deploy applications on AWS. The developers from the new business are having difficulty migrating their deployments to AWS CloudFormation because they need to learn a new domain-specific language and their old Python scripts require programming loops, which are not supported in CloudFormation.
Which of the following is the recommended solution to address the developers’ concerns and help them update their deployment procedures?
Write new CloudFormation templates for the deployments of the new business unit. Extract parts of the Python scripts to be added as EC2 user data. Deploy the CloudFormation templates using the AWS Cloud Development Kit (AWS CDK). Add a stage on AWS CodePipeline to integrate AWS CDK using the templates for the application deployment.
Create a standard deployment process for the company and the new business unit by leveraging a third-party resource provisioning engine on AWS CodeBuild. Add a stage on AWS CodePipeline to integrate AWS CodeBuild on the application deployment.
Write TypeScript or Python code that will define AWS resources. Convert these codes to AWS CloudFormation templates by using AWS Cloud Development Kit (AWS CDK). Create CloudFormation stacks using AWS CDK. Create an AWS CodeBuild job that includes AWS CDK and add this stage on AWS CodePipeline.
(Correct)
Import the Python scripts on AWS OpsWorks, which can then be integrated with AWS CodePipeline. Ask the developers to write Chef recipes that can run the Python scripts for application deployment on AWS.
Explanation
AWS Cloud Development Kit (AWS CDK) is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
AWS CloudFormation enables you to:
- Create and provision AWS infrastructure deployments predictably and repeatedly.
- Leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling.
- Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
- Use a template file to create and delete a collection of resources together as a single unit (a stack).
Use the AWS CDK to define your cloud resources in a familiar programming language. The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net. Developers can use one of the supported programming languages to define reusable cloud components known as Constructs.
The AWS Cloud Development Kit (AWS CDK) lets you define your cloud infrastructure as code in one of five supported programming languages. It is intended for moderately to highly experienced AWS users. An AWS CDK app is an application written in TypeScript, JavaScript, Python, Java, or C# that uses the AWS CDK to define AWS infrastructure. An app defines one or more stacks. Stacks (equivalent to AWS CloudFormation stacks) contain constructs, each of which defines one or more concrete AWS resources, such as Amazon S3 buckets, Lambda functions, Amazon DynamoDB tables, and so on.
Constructs (as well as stacks and apps) are represented as types in your programming language of choice. You instantiate constructs within a stack to declare them to AWS and connect them to each other using well-defined interfaces.
The AWS CDK lets you easily define applications in the AWS Cloud using your programming language of choice. To deploy your applications, you can use AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. Together, they allow you to build what's called a deployment pipeline for your application.
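To show why this removes the need to learn the CloudFormation template language, here is a small illustrative CDK v2 app in Python (the team list and construct IDs are made up) that uses an ordinary loop, something a raw template cannot express:

from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StaticAssetsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Buckets are created in a loop, which plain CloudFormation templates do not support.
        for team in ["marketing", "finance", "engineering"]:
            s3.Bucket(self, f"{team}-assets-bucket")

app = App()
StaticAssetsStack(app, "StaticAssetsStack")
app.synth()  # emits a CloudFormation template under cdk.out/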
Therefore, the correct answer is: Write TypeScript or Python code that will define AWS resources. Convert these codes to AWS CloudFormation templates by using AWS Cloud Development Kit (AWS CDK). Create CloudFormation stacks using AWS CDK. Create an AWS CodeBuild job that includes AWS CDK and add this stage on AWS CodePipeline. With this solution, the developers no longer need to learn the AWS CloudFormation specific language as they can continue writing TypeScript or Python scripts. The AWS CDK stacks can be converted to AWS CloudFormation templates which can be integrated into the company deployment process.
The option that says: Write new CloudFormation templates for the deployments of the new business unit. Extract parts of the Python scripts to be added as EC2 user data. Deploy the CloudFormation templates using the AWS Cloud Development Kit (AWS CDK). Add a stage on AWS CodePipeline to integrate AWS CDK using the templates for the application deployment is incorrect. This is possible but you don't have to write new CloudFormation templates. AWS CDK can convert the TypeScript and Python code to create AWS CDK stacks and convert them to CloudFormation templates.
The option that says: Create a standard deployment process for the company and the new business unit by leveraging a third-party resource provisioning engine on AWS CodeBuild. Add a stage on AWS CodePipeline to integrate AWS CodeBuild on the application deployment is incorrect. You don't have to rely on third-party resources to standardize the deployment process. AWS CDK can help in creating CloudFormation templates based on the TypeScript or Python code.
The option that says: Import the Python scripts on AWS OpsWorks which can then be integrated with AWS CodePipeline. Ask the developers to write Chef recipes that can run the Python scripts for application deployment on AWS is incorrect. This is not recommended because the developers will need to learn a new domain-specific language to write Chef recipes.
References:
Check out these AWS CloudFormation and AWS CodePipeline Cheat Sheets:
Question 63: Skipped
A government technology agency has recently hired a team to build a mobile tax app that allows users to upload their tax deductions and income records using their devices. The app would also allow users to view or download their uploaded files later on. These files are confidential, tax-related documents that need to be stored in a single, secure S3 bucket. The mobile app's design is to allow the users to upload, view, and download their files directly from an Amazon S3 bucket via the mobile app. Since this app will be used by potentially hundreds of thousands of taxpayers in the country, the solutions architect must ensure that proper user authentication and security features are in place.
Which of the following options should the solutions architect implement in the infrastructure when a new user registers on the app?
Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses his/her mobile app, create temporary credentials using the 'AssumeRole' function in STS. Store these credentials in the mobile app's memory and use them to access the S3 bucket. Generate new credentials the next time the user runs the mobile app.
(Correct)
Create an IAM user then assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.
Use Amazon DynamoDB to record the user's information and when the user uses the mobile app, create access credentials using STS with appropriate permissions. Store these credentials in the mobile app's memory and use them to access the S3 bucket every time the user runs the app.
Explanation
This scenario requires the mobile application to have access to the S3 bucket. The mobile app might potentially have millions of tax-paying users that will upload their documents to S3. In this scenario where mobile applications need to access AWS Resources, always think about using STS actions such as "AssumeRole", "AssumeRoleWithSAML", and "AssumeRoleWithWebIdentity".
You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences:
- Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them.
- Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permission to do so.
You can let users sign in using a well-known third-party identity provider such as Login with Amazon, Facebook, Google, or any OpenID Connect (OIDC) 2.0 compatible provider. You can exchange the credentials from that provider for temporary permissions to use resources in your AWS account. This is known as the web identity federation approach to temporary access. When you use web identity federation for your mobile or web application, you don't need to create custom sign-in codes or manage your own user identities. Using web identity federation helps you keep your AWS account secure because you don't have to distribute long-term security credentials, such as IAM user access keys, with your application.
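A simplified sketch of the temporary-credential flow is shown below. The role ARN, bucket, and object keys are placeholders, and in practice the AssumeRole call happens on a trusted backend, never inside the mobile app itself:

import boto3

# A trusted backend assumes the role on behalf of an authenticated user and hands only
# the short-lived credentials to the mobile app (all identifiers are placeholders).
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/TaxAppS3AccessRole",
    RoleSessionName="user-12345",
    DurationSeconds=3600,
)["Credentials"]

# The mobile app then uses the temporary keys to reach the S3 bucket directly.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("w2_form.pdf", "tax-documents-bucket", "user-12345/w2_form.pdf")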
The option that says: Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses his/her mobile app, create temporary credentials using the 'AssumeRole' function in STS. Store these credentials in the mobile app's memory and use them to access the S3 bucket. Generate new credentials the next time the user runs the mobile app is correct because it creates an IAM Role with the required permissions and generates temporary security credentials using STS "AssumeRole" function. Furthermore, it generates new credentials when the user runs the app the next time around.
The option that says: Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3 is incorrect because you should never store credentials inside a mobile app for security purposes to avoid the risk of exposing any access keys or passwords. You should instead grant temporary credentials for the mobile app. In addition, you cannot create long-term credentials using AWS STS, as this service can only generate temporary access tokens.
The option that says: Use Amazon DynamoDB to record the user's information and when the user uses the mobile app, create access credentials using STS with appropriate permissions. Store these credentials in the mobile app's memory and use them to access the S3 bucket every time the user runs the app is incorrect. Even though the setup is similar to the previous option and uses DynamoDB, it is still wrong to store long-term credentials in a mobile app as it is a security risk. In addition, it does not create an IAM Role with proper permissions, which is an essential step.
The option that says: Create an IAM user then assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3 is incorrect because it creates an IAM User and not an IAM Role. You should create an IAM Role so that the app can access the AWS Resource via the "AssumeRole" action in STS.
References:
Check out this AWS IAM Cheat Sheet:
Question 65: Skipped
A company manages more than 50 AWS accounts under its AWS Organization. All AWS accounts deploy resources on a single AWS region only. To enable routing across all accounts, each VPC has a Transit Gateway Attachment to a centralized AWS Transit Gateway. Each VPC also has an internet gateway and NAT gateway to provide outbound internet connectivity for its resources. As a security requirement, the company must have a centrally managed rule-based filtering solution for outbound internet traffic on all AWS accounts under its organization. It is expected that peak outbound traffic for each Availability Zone will not exceed 25 Gbps.
Which of the following options should the solutions architect implement to fulfill the company requirements?
Create a dedicated VPC for outbound internet traffic with a NAT gateway on it. Connect this VPC to the existing AWS Transit Gateway. On this VPC, create an Auto Scaling group of Amazon EC2 instances running with an open-source internet proxy software for rule-based filtering across all AZ in the region. Configure the route tables on each VPC to point to this Auto Scaling group.
Provision an Auto Scaling group of Amazon EC2 instances with network-optimized instance type on each AWS account. Install an open-source internet proxy software for rule-based filtering. Configure the route tables on each VPC to point to the Auto Scaling group.
Create a dedicated VPC for outbound internet traffic with a NAT gateway on it. Connect this VPC to the existing AWS Transit Gateway. Configure an AWS Network Firewall firewall for the rule-based filtering. Modify all the default routes in each account to point to the Network Firewall endpoint.
(Correct)
Use AWS Network Firewall service to create firewall rule groups and firewall policies for rule-based filtering. Attach the firewall policy to a new Network Firewall firewall on each account. Modify all the default routes in each account to point to their corresponding Network Firewall firewall.
Explanation
AWS Network Firewall is a stateful, managed, network firewall and intrusion detection and prevention service for your virtual private cloud (VPC) that you created in Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at the perimeter of your VPC. This includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect.
Once AWS Network Firewall is deployed, you will see a firewall endpoint in each firewall subnet. A firewall endpoint is similar to an interface endpoint, and it shows up as a vpce-id in your VPC route table target selection. You have multiple deployment models for Network Firewall.
For a centralized egress deployment model, an AWS Transit Gateway is a prerequisite. AWS Transit Gateway acts as a network hub and simplifies the connectivity between VPCs. For this model, we have a dedicated, central egress VPC which has a NAT gateway configured in a public subnet with access to the IGW.
Traffic originating from spoke VPCs is forwarded to the inspection VPC for processing. It is then forwarded to the central egress VPC using a default route in the Transit Gateway firewall route table. The default route is set to target the central egress VPC attachment (pointing to the AWS Network Firewall endpoint).
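Pointing a default route at the firewall endpoint can be done with a standard EC2 route call, as in this sketch (the route table and endpoint IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# In the inspection/egress VPC, point the default route of the protected route table
# at the firewall endpoint created by AWS Network Firewall for this Availability Zone.
ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId="vpce-0a1b2c3d4e5f67890",  # AWS Network Firewall endpoint (placeholder)
)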
Therefore, the correct answer is: Create a dedicated VPC for outbound internet traffic with a NAT gateway on it. Connect this VPC to the existing AWS Transit Gateway. Configure an AWS Network Firewall firewall for the rule-based filtering. Modify all the default routes in each account to point to the Network Firewall endpoint. This solution provides a dedicated VPC for rule-based inspection and controlling of egress traffic. Please check the references section for more details.
The option that says: Use AWS Network Firewall service to create firewall rule groups and firewall policies for rule-based filtering. Attach the firewall policy to a new Network Firewall firewall on each account. Modify all the default routes in each account to point to their corresponding Network Firewall firewall is incorrect. For centralized rule-based filtering with a Network Firewall, you will need an AWS Transit Gateway to act as a network hub and allow the connectivity between VPCs.
The option that says: Provision an Auto Scaling group of Amazon EC2 instances with network-optimized instance type on each AWS account. Install an open-source internet proxy software for rule-based filtering. Configure the route tables on each VPC to point to the Auto Scaling group is incorrect. This will be difficult to manage since you will have a proxy cluster essentially on each AWS account.
The option that says: Create a dedicated VPC for outbound internet traffic with a NAT gateway on it. Connect this VPC to the existing AWS Transit Gateway. On this VPC, create an Auto Scaling group of Amazon EC2 instances running with an open-source internet proxy software for rule-based filtering across all AZ in the region. Configure the route tables on each VPC to point to this Auto Scaling group is incorrect. This may be possible but is not recommended. AWS Network Firewall can offer centralized rule-based traffic filtering and inspection across your VPCs.
References:
Check out this AWS Transit Gateway Cheat Sheet:
Question 69: Skipped
A company wants to have a secure content management solution that can be accessed by its external custom applications via API calls. The solutions architect has been instructed to create the infrastructure design. The solution should enable users to upload documents as well as download a specific version or the latest version of a document. There is also a requirement to enable customer administrators to simply submit an API call that can roll back changes to existing files sent to the system.
Which of the following options is the MOST secure and suitable solution that the solutions architect should implement?
Use Amazon WorkDocs for document storage and utilize its user access management, version control, and built-in encryption. Integrate the Amazon WorkDocs Content Manager to the external custom applications. Develop a rollback feature to replace the current document version with the previous version from Amazon WorkDocs.
(Correct)
Use Amazon EFS for object storage and enable data encryption in transit with TLS. Store unique customer managed keys in AWS KMS. Set up IAM roles and IAM access policies for EFS to specify separate encryption keys for each customer application. Utilize file locking and file versioning features in EFS to roll back changes to existing files stored in the CMS.
Use S3 with Server Access Logging enabled. Set up an IAM role and access policy for each customer application. Use client-side encryption to encrypt customer files then share the Customer Master Key (CMK) ID and the client-side master key to all customers in order to access the CMS.
Use Amazon S3 with Versioning and Server Access Logging enabled. Set up an IAM role and access policy for each customer application. Encrypt all documents using client-side encryption for enhanced data security. Share the encryption keys to all customers to unlock the documents. Develop a rollback feature to replace the current document version with the previous version from Amazon S3.
Explanation
Amazon WorkDocs is a fully managed, secure content creation, storage, and collaboration service. With Amazon WorkDocs, you can easily create, edit, and share content, and because it’s stored centrally on AWS, access it from anywhere on any device. Amazon WorkDocs makes it easy to collaborate with others, and lets you easily share content, provide rich feedback, and collaboratively edit documents. You can use Amazon WorkDocs to retire legacy file share infrastructure by moving file shares to the cloud. Amazon WorkDocs lets you integrate with your existing systems, and offers a rich API so that you can develop your own content-rich applications. Amazon WorkDocs is built on AWS, where your content is secured on the world's largest cloud infrastructure.
Amazon WorkDocs Content Manager is a high-level utility tool that uploads content or downloads it from an Amazon WorkDocs site. It can be used for both administrative and user applications. For user applications, a developer must construct the Amazon WorkDocs Content Manager with anonymous AWS credentials and an authentication token. For administrative applications, the Amazon WorkDocs client must be initialized with AWS Identity and Access Management (IAM) credentials. In addition, the authentication token must be omitted in subsequent API calls.
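As a loose sketch of how an administrative application might enumerate versions before rolling a document back, the boto3 call below lists a document's versions. The document ID is a placeholder, and the exact rollback flow depends on the application:

import boto3

workdocs = boto3.client("workdocs")

DOCUMENT_ID = "d-1234567890abcdef"  # placeholder WorkDocs document ID

# List the versions of a document; a rollback feature could download the previous
# version's source and upload it again as a new version.
versions = workdocs.describe_document_versions(
    DocumentId=DOCUMENT_ID,
    Fields="SOURCE",  # include signed source URLs in the response
)["DocumentVersions"]

for version in versions:
    print(version["Id"], version.get("Status"), version.get("CreatedTimestamp"))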
Hence, the correct answer in this scenario is: Use Amazon WorkDocs for document storage and utilize its user access management, version control, and built-in encryption. Integrate the Amazon WorkDocs Content Manager to the external custom applications. Develop a rollback feature to replace the current document version with the previous version from Amazon WorkDocs.
The option that says: Use Amazon S3 with Versioning and Server Access Logging enabled. Set up an IAM role and access policy for each customer application. Encrypt all documents using client-side encryption for enhanced data security. Share the encryption keys to all customers to unlock the documents. Develop a rollback feature to replace the current document version with the previous version from Amazon S3 is incorrect. Although Amazon S3 with Versioning could technically support this workload, sharing the encryption keys with all customers just to unlock the documents is a security risk.
The option that says: Use S3 with Server Access Logging enabled. Set up an IAM role and access policy for each customer application. Use client-side encryption to encrypt customer files then share the Customer Master Key (CMK) ID and the client-side master key to all customers in order to access the CMS is incorrect because the S3 bucket only has Server Access Logging enabled, not Versioning, so previous document versions cannot be restored. In addition, it is a security risk to share the Customer Master Key (CMK) ID and the client-side master key with all customers.
The option that says: Use Amazon EFS for object storage and enable data encryption in transit with TLS. Store unique customer managed keys in AWS KMS. Set up IAM roles and IAM access policies for EFS to specify separate encryption keys for each customer application. Utilize file locking and file versioning features in EFS to roll back changes to existing files stored in the CMS is incorrect because EFS is primarily used as a file system and not for object storage. Although EFS has file locking capabilities, it does not have file versioning features.
References:
Question 71: Skipped
A logistics company is running its business application on Amazon EC2 instances. The web application is running on an Auto Scaling group of EC2 instances behind an Application Load Balancer. The self-managed MySQL database is also running on a large EC2 instance to handle the heavy I/O operations needed by the application. The application is able to handle the amount of traffic during normal hours. However, the performance slows down significantly during the last four days of the month as more users run their month-end reports simultaneously. The Solutions Architect was tasked to improve the performance of the application, especially during the peak days.
Which of the following should the Solutions Architect implement to improve the application performance with the LEAST impact on availability?
Migrate the Amazon EC2 database instance to Amazon RDS for MySQL. Add more read replicas to the database cluster during the end of the month to handle the spike in traffic.
(Correct)
Create Amazon CloudWatch metrics based on EC2 instance CPU usage or response time on the ALB. Trigger an AWS Lambda function to change the instance size, type, and the allocated IOPS of the EBS volumes based on the breached threshold.
Convert all EBS volumes of the EC2 instances to GP2 volumes to improve I/O performance. Scale up the EC2 instances into bigger instance types. Pre-warm the Application Load Balancer to handle sudden spikes in traffic.
Take a snapshot of the EBS volumes with I/O heavy operations and replace them with Provisioned IOPS volumes during the end of the month. Revert to the old EBS volume type afterward to save on costs.
Explanation
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.
Amazon RDS supports the most demanding database applications. You can choose between two SSD-backed storage options: one optimized for high-performance OLTP applications, and the other for cost-effective general-purpose use. You can scale your database's compute and storage resources with only a few mouse clicks or an API call, often with no downtime.
Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.
You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Because read replicas can be promoted to master status, they are useful as part of a sharding implementation.
In this scenario, the web tier is already in an Auto Scaling group, which means the database read operations are the likely bottleneck, especially during month-end when the reports are generated. This can be solved by creating RDS Read Replicas.
Therefore, the correct answer is: Migrate the Amazon EC2 database instance to Amazon RDS for MySQL. Add more read replicas to the database cluster during the end of the month to handle the spike in traffic.
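To make the month-end scale-out concrete, here is a minimal boto3 sketch that adds a read replica to an existing RDS for MySQL instance and retrieves the endpoint to which reporting queries can be routed. The instance identifiers, instance class, and region are assumed placeholders, not values from the scenario.

```python
"""Minimal sketch: add a read replica to an RDS for MySQL instance before
month-end reporting, then read its endpoint once it is available.
Identifiers, region, and instance class are hypothetical placeholders."""
import boto3

rds = boto3.client("rds", region_name="us-east-1")

SOURCE_DB = "app-mysql-primary"            # assumed source DB instance identifier
REPLICA_DB = "app-mysql-replica-month-end" # assumed replica identifier

# Create the read replica (asynchronous replication from the source).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier=REPLICA_DB,
    SourceDBInstanceIdentifier=SOURCE_DB,
    DBInstanceClass="db.r6g.large",        # sized for the reporting workload
)

# Wait until the replica is available, then fetch its endpoint so the
# month-end reporting queries can target it instead of the primary.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=REPLICA_DB)
replica = rds.describe_db_instances(DBInstanceIdentifier=REPLICA_DB)["DBInstances"][0]
print("Route month-end read traffic to:", replica["Endpoint"]["Address"])

# After the reporting period, the replica can be removed to save cost:
# rds.delete_db_instance(DBInstanceIdentifier=REPLICA_DB, SkipFinalSnapshot=True)
```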
The option that says: Convert all EBS volumes of the EC2 instances to GP2 volumes to improve I/O performance. Scale up the EC2 instances into bigger instance types. Pre-warm the Application Load Balancer to handle sudden spikes in traffic is incorrect. The web tier is already in an Auto Scaling group and the database read operations are the likely bottleneck, so scaling up to bigger EC2 instances will only increase costs unnecessarily. An Application Load Balancer also scales automatically and does not need to be pre-warmed.
The option that says: Create Amazon CloudWatch metrics based on EC2 instance CPU usage or response time on the ALB. Trigger an AWS Lambda function to change the instance size, type, and the allocated IOPS of the EBS volumes based on the breached threshold is incorrect. This will cause significant interruption to the application: changing the instance type requires stopping the EC2 database instance, and EBS volume modifications take time to complete.
The option that says: Take a snapshot of the EBS volumes with I/O heavy operations and replace them with Provisioned IOPS volumes during the end of the month. Revert to the old EBS volume type afterward to save on costs is incorrect. Taking an application-consistent snapshot requires pausing disk writes, during which the database stops responding to write requests. Swapping disk types every month is not ideal either, as replacing the volumes causes downtime for the application during the switch. A better solution for heavy read operations is to provision an Amazon RDS database with Read Replicas.
References:
Check out this Amazon RDS Cheat Sheet:
Question 72: Skipped
A tech company uses AWS CloudFormation to deploy a three-tier web application that consists of a web tier, application tier, and database tier. The application will utilize an Amazon DynamoDB table for database storage. All resources will be created using a CloudFormation template.
Which of the following options would allow the application instances access to the DynamoDB tables without exposing the API credentials?
Launch an IAM Role that has the required permissions to read and write from the required DynamoDB table. Associate the Role to the application instances by referencing it to the AWS::IAM::InstanceRoleName property.
Launch an IAM Role that has the required permissions to read and write from the DynamoDB table. Reference the IAM Role as a property inside the AWS::IAM::InstanceProfile of the application instance.
(Correct)
Launch an IAM user in the CloudFormation template that has permissions to read and write from the DynamoDB table. Use the GetAtt function to retrieve the Access and secret keys and pass them to the web application instance through the use of its instance user-data.
In the CloudFormation template, use the Parameter section to have the user input the AWS Access and Secret Keys from an already created IAM user that has the permissions required to interact with the DynamoDB table.
Explanation
Identity and Access Management is an AWS service that you can use to manage users and their permissions in AWS. You can use IAM with AWS CloudFormation to specify what AWS CloudFormation actions users can perform, such as viewing stack templates, creating stacks, or deleting stacks. Furthermore, anyone managing AWS CloudFormation stacks will require permissions to resources within those stacks. For example, if users want to use AWS CloudFormation to launch, update, or terminate Amazon EC2 instances, they must have permission to call the relevant Amazon EC2 actions.
The instance profile contains the role and can provide the role's temporary credentials to an application that runs on the instance. Those temporary credentials can then be used in the application's API calls to access resources and to limit access to only those resources that the role specifies. Note that only one role can be assigned to an EC2 instance at a time, and all applications on the instance share the same role and permissions.
Using roles in this way has several benefits. Because role credentials are temporary and rotated automatically, you don't have to manage credentials, and you don't have to worry about long-term security risks. In addition, if you use a single role for multiple instances, you can make a change to that one role and the change is propagated automatically to all the instances.
An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. If you use the AWS Management Console to create a role for Amazon EC2, the console automatically creates an instance profile and gives it the same name as the role.
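Outside of CloudFormation, the same role-to-instance-profile-to-instance relationship can be sketched with the AWS SDK. The boto3 snippet below is only an illustration of that relationship; the role name, profile name, managed policy, and instance ID are assumed placeholders and are not part of the question.

```python
"""Minimal sketch of the role / instance-profile relationship that the
CloudFormation AWS::IAM::InstanceProfile resource models.
Names, policy ARN, and instance ID are hypothetical placeholders."""
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Role that EC2 is allowed to assume.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="AppDynamoDBRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(
    RoleName="AppDynamoDBRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",  # scope down in practice
)

# 2. The instance profile is the container that carries the role to EC2.
iam.create_instance_profile(InstanceProfileName="AppDynamoDBProfile")
iam.add_role_to_instance_profile(InstanceProfileName="AppDynamoDBProfile",
                                 RoleName="AppDynamoDBRole")

# 3. Associate the profile with a running application instance; processes on
#    the instance then receive temporary credentials automatically.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "AppDynamoDBProfile"},
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
```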
Hence, the correct answer is: Launch an IAM Role that has the required permissions to read and write from the DynamoDB table. Reference the IAM Role as a property inside the AWS::IAM::InstanceProfile of the application instance.
The option that says: Launch an IAM Role that has the required permissions to read and write from the required DynamoDB table. Associate the Role to the application instances by referencing it to the AWS::IAM::InstanceRoleName property is incorrect because you have to use an AWS::IAM::InstanceProfile resource instead. Take note that there is no AWS::IAM::InstanceRoleName resource or property in CloudFormation.
The option that says: In the CloudFormation template, use the Parameter section to have the user input the AWS Access and Secret Keys from an already created IAM user that has the permissions required to interact with the DynamoDB table is incorrect because it is a security risk to include the IAM access keys in a CloudFormation template. You have to use an IAM Role instead.
The option that says: Launch an IAM user in the CloudFormation template that has permissions to read and write from the DynamoDB table. Use the GetAtt function to retrieve the Access and secret keys and pass them to the web application instance through the use of its instance user-data is incorrect because it is inappropriate to use an IAM User to provide an EC2 instance the required access to the DynamoDB table. Attaching an IAM Role to the EC2 instances is the most suitable solution in this scenario.
References:
Check out this AWS CloudFormation Cheat Sheet:
ChatGPT (Wrong) -
ChatGPT (Correct):
ChatGPT (Correct):
ChatGPT (Correct):
ChatGPT (Correct) -
ChatGPT (Correct) -
ChatGPT (Correct) -
ChatGPT second try (Inaccurate) -
ChatGPT (Correct) -
ChatGPT (Correct) -
Each instance in a stack must be a member of at least one layer, except for registered instances. You cannot configure an instance directly, except for some basic settings such as the SSH key and hostname.
ChatGPT (Wrong) -
ChatGPT (Open -> Options, wrong) -
ChatGPT (Correct Perfectly) -
ChatGPT Explanation:
ChatGPT (correct and beyond) -
ChatGPT -
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
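As a quick illustration, Multi-AZ is simply a provisioning option and the application keeps using a single endpoint before and after a failover. The boto3 sketch below uses placeholder identifiers and credentials only.

```python
"""Minimal sketch: provision a Multi-AZ RDS for MySQL instance and read the
single endpoint the application connects to. Identifiers and credentials
are hypothetical placeholders."""
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="orders-mysql",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder; use Secrets Manager in practice
    MultiAZ=True,                           # standby is created in another AZ automatically
)

rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="orders-mysql")

db = rds.describe_db_instances(DBInstanceIdentifier="orders-mysql")["DBInstances"][0]
# This endpoint stays the same after a Multi-AZ failover, so the application
# configuration does not need to change.
print(db["Endpoint"]["Address"], db["Endpoint"]["Port"])
```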
Using roles to grant permissions to applications that run on EC2 instances requires a bit of extra configuration. An application running on an EC2 instance is abstracted from AWS by the virtualized operating system. Because of this extra separation, an additional step is needed to assign an AWS role and its associated permissions to an EC2 instance and make them available to its applications. This extra step is the creation of an instance profile that is attached to the instance.