22 Mar 2024 night study
Question 46: Skipped (review)
A large media company based in Los Angeles, California runs a MySQL RDS instance inside an AWS VPC. The company runs a custom analytics application in its on-premises data center that requires read-only access to the database. The company wants to replicate the running MySQL RDS instance in the AWS Cloud to a read replica in its on-premises data center and use it as the endpoint for the analytics application. [-> Read replica of the RDS? Wrong, because the read replica needs to be on-prem]
Which of the following options is the most secure way of performing this replication?
Create a Data Pipeline that exports the MySQL data each night and securely downloads the data from an S3 HTTPS endpoint. Use mysqldump to transfer the database from Amazon S3 to the on-premises MySQL instance and start the replication.
Create an IPSec VPN connection using either OpenVPN or VPN/VGW through the Virtual Private Cloud service. Prepare an instance of MySQL running external to Amazon RDS. Configure the MySQL DB instance to be the replication source. Use mysqldump to transfer the database from the Amazon RDS instance to the on-premises MySQL instance and start the replication from the Amazon RDS Read Replica.
(Correct)
Configure the RDS instance as the master and enable replication over the open Internet using an SSL endpoint to the on-premises server. Use mysqldump to transfer the database from Amazon S3 to the on-premises MySQL instance and start the replication.
RDS cannot replicate to an on-premises database server. Instead, configure the RDS instance to replicate to an EC2 instance with core MySQL and then configure replication over a secure VPN/VPG connection.
Explanation
Amazon supports Internet Protocol security (IPsec) VPN connections. IPsec is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a data stream. Data transferred between your VPC and datacenter routes over an encrypted VPN connection to help maintain the confidentiality and integrity of data in transit. An Internet gateway is not required to establish a hardware VPN connection.
You can set up replication between an Amazon RDS MySQL (or MariaDB) DB instance running in AWS and a MySQL (or MariaDB) instance in your on-premises data center. Replication to an instance of MySQL running external to Amazon RDS is only supported during the time it takes to export a database from a MySQL DB instance.
To allow communication between RDS and your on-premises network, you must first set up a VPN or an AWS Direct Connect connection. Once that is done, follow the steps below to perform the replication (a rough command-level sketch of the mysqldump and replication steps appears after the list):
Prepare an instance of MySQL running external to Amazon RDS.
Configure the MySQL DB instance to be the replication source.
Use mysqldump to transfer the database from the Amazon RDS instance to the instance external to Amazon RDS (e.g., the on-premises server).
Start replication to the instance running external to Amazon RDS.
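A rough command-level sketch of the mysqldump transfer and the replication start, run over the VPN link. The hostnames, credentials, database name, and binlog coordinates below are made-up placeholders, not values from the question:

```python
import subprocess

# Hypothetical endpoints and credentials -- placeholders only.
RDS_HOST = "mediadb.abc123.us-west-2.rds.amazonaws.com"
ONPREM_HOST = "onprem-mysql.example.local"
REPL_USER, REPL_PASS = "repl_user", "repl_password"

# Dump the database from the Amazon RDS instance (reached over the IPsec VPN).
subprocess.run(
    ["mysqldump", "-h", RDS_HOST, "-u", "admin", "-pAdminPassw0rd",  # placeholder credentials
     "--single-transaction", "--databases", "mediadb",
     "--result-file", "mediadb.sql"],
    check=True,
)

# Load the dump into the on-premises MySQL instance.
with open("mediadb.sql") as dump:
    subprocess.run(
        ["mysql", "-h", ONPREM_HOST, "-u", "root", "-pRootPassw0rd"],
        stdin=dump, check=True,
    )

# Point the on-premises instance at the RDS source and start replication.
# The binlog file/position would be captured from the RDS instance
# (e.g. via SHOW MASTER STATUS) when the dump is taken; these are placeholders.
change_master = (
    f"CHANGE MASTER TO MASTER_HOST='{RDS_HOST}', MASTER_USER='{REPL_USER}', "
    f"MASTER_PASSWORD='{REPL_PASS}', "
    "MASTER_LOG_FILE='mysql-bin-changelog.000001', MASTER_LOG_POS=4; "
    "START SLAVE;"
)
subprocess.run(
    ["mysql", "-h", ONPREM_HOST, "-u", "root", "-pRootPassw0rd", "-e", change_master],
    check=True,
)
```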
The option that says: Create an IPSec VPN connection using either OpenVPN or VPN/VGW through the Virtual Private Cloud service. Prepare an instance of MySQL running external to Amazon RDS. Configure the MySQL DB instance to be the replication source. Use mysqldump to transfer the database from the Amazon RDS instance to the on-premises MySQL instance and start the replication from the Amazon RDS Read Replica is correct because it is feasible to set up the secure IPSec VPN connection between the on-premises server and AWS VPC using the VPN/Gateways.
The option that says: Configure the RDS instance as the master and enable replication over the open Internet using an SSL endpoint to the on-premises server. Use mysqldump to transfer the database from Amazon S3 to the on-premises MySQL instance and start the replication is incorrect because an SSL endpoint cannot be utilized here; it is only used to securely access the database.
The option that says: RDS cannot replicate to an on-premises database server. Instead, configure the RDS instance to replicate to an EC2 instance with core MySQL and then configure replication over a secure VPN/VPG connection is incorrect because you do not need to establish a secure VPN/VPG connection in the first place as EC2 and RDS are both in AWS Cloud.
The option that says: Create a Data Pipeline that exports the MySQL data each night and securely downloads the data from an S3 HTTPS endpoint. Use mysqldump to transfer the database from Amazon S3 to the on-premises MySQL instance and start the replication is incorrect because Data Pipeline is for batch jobs and is not suitable for this scenario.
Question 47: Skipped
ChatGPT: Perfectly Correct
A travel and tourism company has multiple AWS accounts that are assigned to various departments. The marketing department stores the images and media files that are used in its marketing campaigns on an encrypted Amazon S3 bucket in its AWS account. The marketing team wants to share this S3 bucket so that the management team can review the files.
The solutions architect created an IAM role named mgmt_reviewer in the Management AWS account, as well as a custom AWS Key Management Service (AWS KMS) key in the Marketing AWS account that is associated with the S3 bucket. However, users from the Management account received an Access Denied error when they assumed the IAM role [STS:AssumeRole policy? wrong] and tried to access the objects in the S3 bucket.
Which of the following options should the solutions architect implement to make sure that the users on the Management AWS account can access the Marketing team's S3 bucket with the minimum required permissions? (Select THREE.)
Add an Amazon S3 bucket policy that includes read permission. Ensure that the Principal is set to the Marketing team’s AWS account ID.
Update the custom AWS KMS key policy in the Marketing account to include decrypt permission for the mgmt_reviewer IAM role. [why?]
(Correct)
Ensure that the mgmt_reviewer IAM role on the Management account has full permissions to access the S3 bucket. Add a decrypt permission for the custom KMS key on the IAM policy.
Update the custom AWS KMS key policy in the Marketing account to include decrypt permission for the Management team’s AWS account ID.
Ensure that the mgmt_reviewer IAM role policy includes read permissions to the Amazon S3 bucket and a decrypt permission to the custom AWS KMS key.
(Correct)
Add an Amazon S3 bucket policy that includes read permission. Ensure that the Principal is set to the Management team’s AWS account ID.
(Correct)
Explanation
An AWS account—for example, Account A—can grant another AWS account, Account B, permission to access its resources, such as buckets and objects. Account B can then delegate those permissions to users in its account. Account A can also directly grant a user in Account B permissions using a bucket policy. But the user will still need permission from the parent account, Account B, to which the user belongs, even if Account B does not have permission from Account A. As long as the user has permission from both the resource owner and the parent account, the user will be able to access the resource.
In Amazon S3, you can grant users in another AWS account (Account B) granular cross-account access to objects owned by your account (Account A).
Depending on the type of access that you want to provide, use one of the following solutions to grant cross-account access to objects:
- AWS Identity and Access Management (IAM) policies and resource-based bucket policies for programmatic-only access to S3 bucket objects
- IAM policies and resource-based Access Control Lists (ACLs) for programmatic-only access to S3 bucket objects
- Cross-account IAM roles for programmatic and console access to S3 bucket objects
If the requester is an IAM principal, then the AWS account that owns the principal must grant the S3 permissions through an IAM policy. Based on your specific use case, the bucket owner must also grant permissions through a bucket policy or ACL. After access is granted, programmatic access of cross-account buckets is the same as accessing the same account buckets.
To grant access to an AWS KMS-encrypted bucket in Account A to a user in Account B, you must have these permissions in place:
- The bucket policy in Account A must grant access to Account B.
- The AWS KMS key policy in Account A must grant access to the user in Account B.
- The AWS Identity and Access Management (IAM) policy in Account B must grant user access to the bucket and the AWS KMS key in Account A.
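As a rough illustration of those three pieces, here is a boto3 sketch from the Marketing (bucket-owning) account's side. The account IDs, role name, bucket name, and key ID are made-up placeholders:

```python
import json
import boto3

# Placeholder identifiers -- not values from the question.
MARKETING_BUCKET = "marketing-campaign-assets"
MARKETING_ACCOUNT_ID = "111122223333"
MGMT_ACCOUNT_ID = "222233334444"
MGMT_ROLE_ARN = f"arn:aws:iam::{MGMT_ACCOUNT_ID}:role/mgmt_reviewer"
KMS_KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"

# 1) Bucket policy in the Marketing account: read access for the Management account.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MGMT_ACCOUNT_ID}:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{MARKETING_BUCKET}",
                     f"arn:aws:s3:::{MARKETING_BUCKET}/*"],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=MARKETING_BUCKET,
                                     Policy=json.dumps(bucket_policy))

# 2) KMS key policy in the Marketing account: kms:Decrypt for the mgmt_reviewer role.
#    put_key_policy replaces the whole policy, so keep the account-admin statement too.
kms_key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Principal": {"AWS": f"arn:aws:iam::{MARKETING_ACCOUNT_ID}:root"},
         "Action": "kms:*", "Resource": "*"},
        {"Effect": "Allow",
         "Principal": {"AWS": MGMT_ROLE_ARN},
         "Action": "kms:Decrypt", "Resource": "*"},
    ],
}
boto3.client("kms").put_key_policy(KeyId=KMS_KEY_ID, PolicyName="default",
                                   Policy=json.dumps(kms_key_policy))

# 3) The mgmt_reviewer role in the Management account still needs its own IAM policy
#    granting s3:GetObject/s3:ListBucket on the bucket and kms:Decrypt on this key.
```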
The option that says: Add an Amazon S3 bucket policy that includes read permission. Ensure that the Principal is set to the Management team’s AWS account ID is correct. The S3 bucket policy must allow read permission from the Management team. Thus, the Principal on the bucket policy should be set to the Management team's account ID.
The option that says: Update the custom AWS KMS key policy in the Marketing account to include decrypt permission for the mgmt_reviewer IAM role is correct. The decrypt permission is needed by the mgmt_reviewer IAM role to use the KMS key to decrypt the objects on the S3 bucket.
The option that says: Ensure that the mgmt_reviewer IAM role policy includes read permissions to the Amazon S3 bucket and a decrypt permission to the custom AWS KMS key is correct. This is needed so that all users assuming this role will have permission to access the S3 bucket and use the KMS key to decrypt the S3 objects.
The option that says: Ensure that the mgmt_reviewer IAM role on the Management account has full permissions to access the S3 bucket. Add a decrypt permission for the custom KMS key on the IAM policy is incorrect. The Management team only requires read permissions on the S3 bucket. Unless required, you should not grant full permission access to the S3 bucket.
The option that says: Add an Amazon S3 bucket policy that includes read permission. Ensure that the Principal is set to the Marketing team’s AWS account ID is incorrect. The Marketing team already owns the S3 bucket. The bucket policy Principal should be set to the Management team's account ID.
The option that says: Update the custom AWS KMS key policy in the Marketing account to include decrypt permission for the Management team’s AWS account ID is incorrect. The policy on the KMS key should include the mgmt_reviewer IAM role ARN, not the Management team's account ID.
Question 49: Skipped
A leading media company in the country is building a voting system for a popular singing competition show on national TV. The viewers who watch the performances can visit the company’s dynamic website to vote for their favorite singer. After the show has finished, it is expected that the site will receive millions of visitors who would like to cast their votes. Web visitors should log in using their social media accounts and then submit their votes. The webpage will display the winner after the show, as well as the vote total for each singer. The solutions architect is tasked to build the voting site and ensure that it can handle the rapid influx of incoming traffic in the most cost-effective way possible. [-> Kinesis Firehose, S3, ... wrong?]
Which of the following architectures should you use to meet the requirement?
Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Use Amazon Cognito for user authentication. The web servers will process the user's vote and pass the result in an SQS queue. Set up an IAM Role to grant the EC2 instances permissions to write to the SQS queue. A group of EC2 instances will then retrieve and process the items from the queue. Finally, store the results in a DynamoDB table.
(Correct)
Use a CloudFront web distribution and an Application Load balancer in front of an Auto Scaling group of EC2 instances. The servers will first authenticate the user using IAM and then process the user's vote which will then be stored to RDS.
Use a CloudFront web distribution and deploy the website using S3 hosting feature. Write a custom NodeJS application to authenticate the user using STS and AssumeRole API. Setup an IAM Role to grant permission to store the user's vote to a DynamoDB table.
Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Develop a custom authentication service using STS and AssumeRoleWithSAML API. The servers will process the user's vote and store the result in a DynamoDB table.
Explanation
For User authentication, you can use Amazon Cognito. To host the static assets of the website, you can use CloudFront. Considering that there would be millions of voters and data to be stored, it is best to use DynamoDB which can automatically scale.
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
Therefore, the correct answer is: Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Use Amazon Cognito for user authentication. The web servers will process the user's vote and pass the result in an SQS queue. Set up an IAM Role to grant the EC2 instances permissions to write to the SQS queue. A group of EC2 instances will then retrieve and process the items from the queue. Finally, store the results in a DynamoDB table.
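A minimal sketch of the back-end consumer described in that answer, assuming a queue named votes-queue and a DynamoDB table named VoteTotals (both names are made up for illustration). The web tier pushes one SQS message per vote; the worker fleet drains the queue and keeps a running total per singer:

```python
import boto3

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("VoteTotals")   # assumed table, partition key: singer_id
queue_url = sqs.get_queue_url(QueueName="votes-queue")["QueueUrl"]

while True:
    # Long-poll the queue for up to 10 vote messages at a time.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        singer_id = msg["Body"]                    # message body carries the singer's ID
        # Atomically increment that singer's running total.
        table.update_item(
            Key={"singer_id": singer_id},
            UpdateExpression="ADD vote_count :one",
            ExpressionAttributeValues={":one": 1},
        )
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

In this architecture the worker EC2 instances would receive these permissions from the IAM role attached to their instance profile rather than from stored access keys.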
The option that says: Use a CloudFront web distribution and an Application Load balancer in front of an Auto Scaling group of EC2 instances. The servers will first authenticate the user using IAM and then process the user's vote which will then be stored to RDS is incorrect. By default, you can't use IAM alone to set up social media account registration to your website. You have to use Amazon Cognito. It is also more suitable to use DynamoDB instead of an RDS database since this is only a simple voting application that doesn't warrant a complex table relationship. DynamoDB is a fully managed database that automatically scales, unlike RDS. It can store and accommodate millions of data from the users who cast their votes more effectively than RDS.
The option that says: Use a CloudFront web distribution and deploy the website using S3 hosting feature. Write a custom NodeJS application to authenticate the user using STS and AssumeRole API. Setup an IAM Role to grant permission to store the user's vote to a DynamoDB table is incorrect. The AssumeRole API is not suitable for user authentication that uses a web identity provider such as Amazon Cognito, Login with Amazon, Facebook, Google, or any social media identity provider. You can use the AssumeRoleWithWebIdentity API or, better yet, Amazon Cognito instead.
The option that says: Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Develop a custom authentication service using STS and AssumeRoleWithSAML API. The servers will process the user's vote and store the result in a DynamoDB table is incorrect. The AssumeRoleWithSAML API is primarily used for SAML 2.0-based federation, not for sign-in with social media identity providers.
Question 50: Skipped
A FinTech startup has recently consolidated its multiple AWS accounts using AWS Organizations. It currently has two teams in its organization, a security team and a development team. The former is responsible for protecting their cloud infrastructure and making sure that all of their resources are compliant, while the latter is responsible for developing new applications that are deployed to EC2 instances. The security team is required to set up a system that will check if all of the running EC2 instances are using an approved AMI. However, the solution should not stop the development team from deploying an EC2 instance running on a non-approved AMI [-> Config Managed Rule; not policy nor restriction; SNS to security team]. The disruption is only allowed once the deployment has been completed. In addition, they have to set up a notification system that sends the compliance state of the resources to determine whether they are compliant.
Which of the following options is the most suitable solution that the security team should implement?
Create and assign an SCP and an IAM policy that restricts the AWS accounts and the development team from launching an EC2 instance using an unapproved AMI. Create a CloudWatch alarm that will automatically notify the security team if there are non-compliant EC2 instances running in their VPCs.
Use an AWS Config Managed Rule and specify a list of approved AMI IDs. This rule will check whether running EC2 instances are using specified AMIs. Configure AWS Config to stream configuration changes and notifications to an Amazon SNS topic which will send a notification for non-compliant instances.
(Correct)
Set up a Trusted Advisor check that will verify whether the running EC2 instances in your VPCs are using approved AMIs. Create a CloudWatch alarm and integrate it with the Trusted Advisor metrics that will check all of the AMIs being used by your EC2 instances and that will send a notification if there is a running instance which uses an unapproved AMI.
Use the Amazon Inspector service to automatically check all of the AMIs that are being used by your EC2 instances. Set up an SNS topic that will send a notification to both the security and development teams if there is a non-compliant EC2 instance running in their VPCs.
Explanation
When you run your applications on AWS, you usually use AWS resources, which you must create and manage collectively. As the demand for your application keeps growing, so does your need to keep track of your AWS resources. AWS Config is designed to help you oversee your application resources.
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. With AWS Config, you can do the following:
- Evaluate your AWS resource configurations for desired settings.
- Get a snapshot of the current configurations of the supported resources that are associated with your AWS account.
- Retrieve configurations of one or more resources that exist in your account.
- Retrieve historical configurations of one or more resources.
- Receive a notification whenever a resource is created, modified, or deleted.
- View relationships between resources. For example, you might want to find all resources that use a particular security group.
AWS Config provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resources comply with common best practices. In this scenario, you can use the approved-amis-by-id AWS managed rule, which checks whether running instances are using specified AMIs.
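A hedged boto3 sketch of that setup; the AMI IDs, bucket name, topic ARN, and account ID are placeholders, and the delivery channel assumes a configuration recorder already exists:

```python
import boto3

config = boto3.client("config")

# Managed rule APPROVED_AMIS_BY_ID flags running instances whose AMI is not in the list,
# without blocking the development team from launching them.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-amis-by-id",
        "Source": {"Owner": "AWS", "SourceIdentifier": "APPROVED_AMIS_BY_ID"},
        "InputParameters": '{"amiIds": "ami-0123456789abcdef0,ami-0fedcba9876543210"}',
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)

# Stream configuration changes and compliance notifications to an SNS topic
# that the security team subscribes to.
config.put_delivery_channel(
    DeliveryChannel={
        "name": "default",
        "s3BucketName": "config-history-bucket",                                 # placeholder
        "snsTopicARN": "arn:aws:sns:us-east-1:111122223333:config-compliance",   # placeholder
    }
)
```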
Therefore, the correct answer is: Use an AWS Config Managed Rule and specify a list of approved AMI IDs. This rule will check whether running EC2 instances are using specified AMIs. Configure AWS Config to stream configuration changes and notifications to an Amazon SNS topic which will send a notification for non-compliant instances.
The option that says: Create and assign an SCP and an IAM policy that restricts the AWS accounts and the development team from launching an EC2 instance using an unapproved AMI. Create a CloudWatch alarm that will automatically notify the security team if there are non-compliant EC2 instances running in their VPCs is incorrect. Setting up an SCP and IAM Policy will totally restrict the development team from launching EC2 instances with unapproved AMIs. The scenario clearly says that the solution should not have this kind of restriction.
The option that says: Use the Amazon Inspector service to automatically check all of the AMIs that are being used by your EC2 instances. Set up an SNS topic that will send a notification to both the security and development teams if there is a non-compliant EC2 instance running in their VPCs is incorrect. The Amazon Inspector service is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It does not have the capability to detect EC2 instances that are using unapproved AMIs, unlike AWS Config.
The option that says: Set up a Trusted Advisor check that will verify whether the running EC2 instances in your VPCs are using approved AMIs. Create a CloudWatch alarm and integrate it with the Trusted Advisor metrics that will check all of the AMIs being used by your EC2 instances and that will send a notification if there is a running instance which uses an unapproved AMI is incorrect. AWS Trusted Advisor is primarily used to check if your cloud infrastructure is in compliance with the best practices and recommendations across five categories: cost optimization, security, fault tolerance, performance, and service limits. Their security checks for EC2 do not cover the checking of individual AMIs that are being used by your EC2 instances.
Question 55: Skipped
A company runs hundreds of Amazon EC2 instances inside a VPC. Whenever an EC2 error is encountered, the solutions architect performs manual steps in order to regain access to the impaired instance. The management wants to automatically recover impaired EC2 instances in the VPC [-> ASG?]. The goal is to automatically fix an instance that has become unreachable due to network misconfigurations, RDP issues, firewall settings, and many others to meet the compliance requirements.
Which of the following options is the most suitable solution that the solutions architect should implement to meet the above requirements?
To meet the compliance requirements, use a combination of AWS Config and the AWS Systems Manager Session Manager to self-diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the recovery process by setting up a monitoring system using CloudWatch, AWS Lambda, and the AWS Systems Manager Run Command that will automatically monitor and recover impaired EC2 instances.
To meet the compliance requirements, use a combination of AWS Config and the AWS Systems Manager State Manager to self-diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the recovery process by using AWS Systems Manager Maintenance Windows.
Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using the Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document.
(Correct)
Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using AWS Lambda and a custom script.
Explanation
EC2Rescue can help you diagnose and troubleshoot problems on Amazon EC2 Linux and Windows Server instances. You can run the tool manually, or you can run the tool automatically by using Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document. The AWSSupport-ExecuteEC2Rescue document is designed to perform a combination of Systems Manager actions, AWS CloudFormation actions, and Lambda functions that automate the steps normally required to use EC2Rescue.
Systems Manager Automation simplifies common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. Automation enables you to do the following:
- Build Automation workflows to configure and manage instances and AWS resources.
- Create custom workflows or use pre-defined workflows maintained by AWS.
- Receive notifications about Automation tasks and workflows by using Amazon CloudWatch Events.
- Monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager console.
Therefore, the correct answer is: Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using the Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document.
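A short boto3 sketch of triggering that runbook. The instance ID is a placeholder, and the UnreachableInstanceId parameter name should be checked against the current schema of the AWSSupport-ExecuteEC2Rescue document:

```python
import boto3

ssm = boto3.client("ssm")

# Kick off the EC2Rescue automation against an impaired instance.
execution = ssm.start_automation_execution(
    DocumentName="AWSSupport-ExecuteEC2Rescue",
    Parameters={"UnreachableInstanceId": ["i-0123456789abcdef0"]},   # placeholder instance ID
)

# Poll the execution status (Pending / InProgress / Success / Failed).
status = ssm.get_automation_execution(
    AutomationExecutionId=execution["AutomationExecutionId"]
)["AutomationExecution"]["AutomationExecutionStatus"]
print(status)
```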
The option that says: To meet the compliance requirements, use a combination of AWS Config and the AWS Systems Manager State Manager to self-diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the recovery process by using AWS Systems Manager Maintenance Windows is incorrect. AWS Config is a service which is primarily used to assess, audit, and evaluate the configurations of your AWS resources but not to diagnose and troubleshoot problems in your EC2 instances. In addition, AWS Systems Manager State Manager is primarily used as a secure and scalable configuration management service that automates the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define but does not help you in troubleshooting your EC2 instances.
The option that says: To meet the compliance requirements, use a combination of AWS Config and the AWS Systems Manager Session Manager to self-diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Automate the recovery process by setting up a monitoring system using CloudWatch, AWS Lambda, and the AWS Systems Manager Run Command that will automatically monitor and recover impaired EC2 instances is incorrect. Just like the other option, AWS Config does not help you troubleshoot the problems in your EC2 instances. AWS Systems Manager Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys for your EC2 instances, but it does not provide the capability of diagnosing and troubleshooting problems in your instance the way the EC2Rescue tool can. In addition, setting up CloudWatch, AWS Lambda, and the AWS Systems Manager Run Command to automatically monitor and recover impaired EC2 instances is operational overhead that can be avoided by using AWS Systems Manager Automation.
The option that says: Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using AWS Lambda and a custom script is incorrect because while it is technically possible to use AWS Lambda to automate the running of the EC2Rescue tool, this approach requires significant development effort to create and maintain the custom script. On the other hand, using the existing AWSSupport-ExecuteEC2Rescue automation document in AWS Systems Manager can be a more straightforward solution. This document, maintained by AWS, can be triggered automatically, reducing the need for custom scripting and the associated development and maintenance effort.
Question 57: Skipped
ChatGPT - Perfectly Correct
A company recently switched to using Amazon CloudFront for its content delivery network. The development team already made the preparations necessary to optimize the application performance for global users. The company’s content management system (CMS) serves both dynamic and static content. The dynamic content is served from a fleet of Amazon EC2 instances behind an application load balancer (ALB) while the static assets are served from an Amazon S3 bucket. The ALB is configured as the default origin of the CloudFront distribution. An Origin Access Control (OAC) was created and applied to the S3 bucket policy to allow access only from the CloudFront distribution. Upon testing the CMS webpage, the static assets return an error 404 message. [-> bucket policy, public access, etc...?]
Which of the following solutions must be implemented to solve this error? (Select TWO.)
Edit the CloudFront distribution and create another origin for serving the static assets.
(Correct)
Replace the CloudFront distribution with AWS Global Accelerator. Configure the AWS Global Accelerator with multiple endpoint groups that target endpoints on all AWS Regions. Use the accelerator’s static IP address to create a record in Amazon Route 53 for the apex domain.
Update the application load balancer listener to check for HEADER condition if the request is from CloudFront and forward it to the Amazon S3 bucket.
Update the application load balancer listener and create a new path-based rule for the static assets so that it will forward requests to the Amazon S3 bucket.
Update the CloudFront distribution and create a new behavior that will forward to the origin of the static assets based on path pattern.
(Correct)
Explanation
Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay) so that content is delivered with the best possible performance.
You create a CloudFront distribution to tell CloudFront where you want the content to be delivered from and the details about how to track and manage content delivery. Then CloudFront uses computers—edge servers—that are close to your viewers to deliver that content quickly when someone wants to see it or use it.
In general, if you’re using an Amazon S3 bucket as the origin for a CloudFront distribution, you can either allow everyone to have access to the files there or you can restrict access. To restrict access to content that you serve from Amazon S3 buckets, follow these steps:
Create a CloudFront origin access control (OAC) and associate it with your distribution.
Configure your S3 bucket permissions so that CloudFront can use the OAC to access the files in your bucket and serve them to your users. Make sure that users can’t use a direct URL to the S3 bucket to access a file there.
After you take these steps, users can only access your files through CloudFront, not directly from the S3 bucket.
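For reference, the S3 bucket policy that lets the OAC-signed requests in typically looks like the sketch below; the bucket name, account ID, and distribution ID are placeholders:

```python
import json
import boto3

# Placeholders -- substitute the real bucket, account, and distribution IDs.
BUCKET = "cms-static-assets"
ACCOUNT_ID = "111122223333"
DISTRIBUTION_ID = "E1234567890ABC"

oac_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringEquals": {
            "AWS:SourceArn": f"arn:aws:cloudfront::{ACCOUNT_ID}:distribution/{DISTRIBUTION_ID}"
        }},
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(oac_bucket_policy))
```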
You can configure a single CloudFront web distribution to serve different types of requests from multiple origins. For example, if you are building a website that serves static content from an Amazon Simple Storage Service (Amazon S3) bucket and dynamic content from a load balancer, you can serve both types of content from a CloudFront web distribution.
Follow these steps to configure a CloudFront web distribution to serve static content from an S3 bucket and dynamic content from a load balancer:
Open your web distribution from the CloudFront console.
Choose the Origins tab.
Create one origin for your S3 bucket and another origin for your load balancer. Note: If you're using a custom origin server or an S3 website endpoint, you must enter the origin's domain name into the Origin Domain Name field.
From your distribution, choose the Behaviors tab.
Create a behavior that specifies a path pattern to route all static content requests to the S3 bucket. For example, you can set the "images/*.jpg" path pattern to route all requests for ".jpg" files in the images directory to the S3 bucket.
Edit the Default (*) path pattern behavior and set its Origin as your load balancer.
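A hedged boto3 sketch of those console steps. The distribution ID, bucket domain, OAC ID, and path pattern are placeholders; the cache policy ID is the managed CachingOptimized policy:

```python
import boto3

cloudfront = boto3.client("cloudfront")
DIST_ID = "E1234567890ABC"                                   # placeholder distribution ID

# update_distribution needs the full current config plus its ETag.
current = cloudfront.get_distribution_config(Id=DIST_ID)
config, etag = current["DistributionConfig"], current["ETag"]

# Add the S3 bucket as a second origin, accessed through the existing OAC.
config["Origins"]["Items"].append({
    "Id": "static-assets-s3",
    "DomainName": "cms-static-assets.s3.us-east-1.amazonaws.com",   # placeholder bucket
    "OriginAccessControlId": "E0EXAMPLEOAC",                        # placeholder OAC ID
    "S3OriginConfig": {"OriginAccessIdentity": ""},
})
config["Origins"]["Quantity"] += 1

# Route static asset requests to the S3 origin by path pattern; the Default (*)
# behavior keeps pointing at the ALB origin for dynamic content.
config.setdefault("CacheBehaviors", {"Quantity": 0})
config["CacheBehaviors"].setdefault("Items", [])
config["CacheBehaviors"]["Items"].append({
    "PathPattern": "static/*",
    "TargetOriginId": "static-assets-s3",
    "ViewerProtocolPolicy": "redirect-to-https",
    "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",   # managed CachingOptimized policy
    "Compress": True,
})
config["CacheBehaviors"]["Quantity"] = len(config["CacheBehaviors"]["Items"])

cloudfront.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)
```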
Therefore, the correct answers are:
- Edit the CloudFront distribution and create another origin for serving the static assets.
- Update the CloudFront distribution and create a new behavior that will forward to the origin of the static assets based on path pattern.
The option that says: Update the application load balancer listener and create a new path-based rule for the static assets so that it will forward requests to the Amazon S3 bucket is incorrect. The OAC is used to allow only CloudFront to access the static assets on the S3 bucket. Using the load balancer to forward the requests will result in the access denied error.
The option that says: Replace the CloudFront distribution with AWS Global Accelerator. Configure the AWS Global Accelerator with multiple endpoint groups that target endpoints on all AWS Regions. Use the accelerator’s static IP address to create a record in Amazon Route 53 for the apex domain is incorrect. AWS Global Accelerator is a network layer service that improves the availability and performance of your applications used by a wide global audience. This service is not a substitute for a CDN service like Amazon CloudFront.
The option that says: Update the application load balancer listener to check for HEADER condition if the request is from CloudFront and forward it to the Amazon S3 bucket is incorrect. Although an ALB can inspect the HTTP Header requests, using the load balancer to forward the requests will result in the access denied error because of the OAC on the S3 bucket policy.
Takeaway: - Static and dynamic content from Cloudfront -> 2 Origins + behavior based on path pattern
Question 58: Skipped
A company has several applications written in TypeScript and Python hosted on the AWS cloud. The company uses an automated deployment solution for its applications using AWS CloudFormation templates and AWS CodePipeline. The company recently acquired a new business unit that uses Python scripts to deploy applications on AWS. The developers from the new business are having difficulty migrating their deployments to AWS CloudFormation because they need to learn a new domain-specific language and their old Python scripts require programming loops, which are not supported in CloudFormation. [-> Cloud Development Kit CDK - CodeBuild - CodePipeline]
Which of the following is the recommended solution to address the developers’ concerns and help them update their deployment procedures?
Write new CloudFormation templates for the deployments of the new business unit. Extract parts of the Python scripts to be added as EC2 user data. Deploy the CloudFormation templates using the AWS Cloud Development Kit (AWS CDK). Add a stage on AWS CodePipeline to integrate AWS CDK using the templates for the application deployment.
Create a standard deployment process for the company and the new business unit by leveraging a third-party resource provisioning engine on AWS CodeBuild. Add a stage on AWS CodePipeline to integrate AWS CodeBuild on the application deployment.
Write TypeScript or Python code that will define AWS resources. Convert these codes to AWS CloudFormation templates by using AWS Cloud Development Kit (AWS CDK). Create CloudFormation stacks using AWS CDK. Create an AWS CodeBuild job that includes AWS CDK and add this stage on AWS CodePipeline.
(Correct)
Import the Python scripts on AWS OpsWorks which can then be integrated with AWS CodePipeline. Ask the developers to write Chef recipes that can run the Python scripts for application deployment on AWS.
Explanation
AWS Cloud Development Kit (AWS CDK) is a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
AWS CloudFormation enables you to:
- Create and provision AWS infrastructure deployments predictably and repeatedly.
- Leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling.
- Build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.
- Use a template file to create and delete a collection of resources together as a single unit (a stack).
Use the AWS CDK to define your cloud resources in a familiar programming language. The AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.Net. Developers can use one of the supported programming languages to define reusable cloud components known as Constructs.
The AWS Cloud Development Kit (AWS CDK) lets you define your cloud infrastructure as code in one of five supported programming languages. It is intended for moderately to highly experienced AWS users. An AWS CDK app is an application written in TypeScript, JavaScript, Python, Java, or C# that uses the AWS CDK to define AWS infrastructure. An app defines one or more stacks. Stacks (equivalent to AWS CloudFormation stacks) contain constructs, each of which defines one or more concrete AWS resources, such as Amazon S3 buckets, Lambda functions, Amazon DynamoDB tables, and so on.
Constructs (as well as stacks and apps) are represented as types in your programming language of choice. You instantiate constructs within a stack to declare them to AWS and connect them to each other using well-defined interfaces.
The AWS CDK lets you easily define applications in the AWS Cloud using your programming language of choice. To deploy your applications, you can use AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. Together, they allow you to build what's called a deployment pipeline for your application.
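As a rough illustration of why this addresses the developers' concern, a small CDK app in Python can use an ordinary loop to declare resources, which the CloudFormation template language cannot express directly (the stack and bucket names below are made up):

```python
from aws_cdk import App, Stack, aws_s3 as s3
from constructs import Construct

class MediaAssetsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A plain Python loop declaring one bucket per environment --
        # the kind of logic the old deployment scripts relied on.
        for env_name in ["dev", "staging", "prod"]:
            s3.Bucket(self, f"AssetsBucket-{env_name}", versioned=True)

app = App()
MediaAssetsStack(app, "MediaAssetsStack")
app.synth()   # emits the equivalent CloudFormation template
```

Running cdk synth or cdk deploy (for example, from a CodeBuild stage in the pipeline) turns this code into the CloudFormation stacks that the existing deployment process already understands.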
Therefore, the correct answer is: Write TypeScript or Python code that will define AWS resources. Convert these codes to AWS CloudFormation templates by using AWS Cloud Development Kit (AWS CDK). Create CloudFormation stacks using AWS CDK. Create an AWS CodeBuild job that includes AWS CDK and add this stage on AWS CodePipeline. With this solution, the developers no longer need to learn the AWS CloudFormation specific language as they can continue writing TypeScript or Python scripts. The AWS CDK stacks can be converted to AWS CloudFormation templates which can be integrated into the company deployment process.
The option that says: Write new CloudFormation templates for the deployments of the new business unit. Extract parts of the Python scripts to be added as EC2 user data. Deploy the CloudFormation templates using the AWS Cloud Development Kit (AWS CDK). Add a stage on AWS CodePipeline to integrate AWS CDK using the templates for the application deployment is incorrect. This is possible but you don't have to write new CloudFormation templates. AWS CDK can convert the TypeScript and Python code to create AWS CDK stacks and convert them to CloudFormation templates.
The option that says: Create a standard deployment process for the company and the new business unit by leveraging a third-party resource provisioning engine on AWS CodeBuild. Add a stage on AWS CodePipeline to integrate AWS CodeBuild on the application deployment is incorrect. You don't have to rely on third-party resources to standardize the deployment process. AWS CDK can help in creating CloudFormation templates based on the TypeScript or Python code.
The option that says: Import the Python scripts on AWS OpsWorks which can then be integrated with AWS CodePipeline. Ask the developers to write Chef recipes that can run the Python scripts for application deployment on AWS is incorrect. This is not recommended because the developers will need to learn a new domain-specific language to write Chef recipes.
Takeaway:
Cloud Development Kit (CDK) = CloudFormation in programming language
(Not SDK - Software Dev Kit)
Question 63: Skipped
A government technology agency has recently hired a team to build a mobile tax app that allows users to upload their tax deductions and income records using their devices. The app would also allow users to view or download their uploaded files later on. These files are confidential, tax-related documents [-> PII, Macie] that need to be stored in a single, secure S3 bucket. The mobile app's design is to allow the users to upload, view, and download their files directly from an Amazon S3 bucket via the mobile app. Since this app will be used by potentially hundreds of thousands of taxpayers in the country, the solutions architect must ensure that proper user authentication and security features are in place. [-> Cognito; STS?]
Which of the following options should the solutions architect implement in the infrastructure when a new user registers on the app?
Create a set of long-term credentials (x) using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses his/her mobile app, create temporary credentials using the 'AssumeRole' function in STS. Store these credentials in the mobile app's memory and use them to access the S3 bucket. Generate new credentials the next time the user runs the mobile app.
(Correct)
Create an IAM user (x) then assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3.
Use Amazon DynamoDB to record the user's information and when the user uses the mobile app, create access credentials using STS with appropriate permissions. Store these credentials in the mobile app's memory (x) and use them to access the S3 bucket every time the user runs the app.
Explanation
This scenario requires the mobile application to have access to the S3 bucket. The mobile app might potentially have millions of tax-paying users that will upload their documents to S3. In this scenario where mobile applications need to access AWS Resources, always think about using STS actions such as "AssumeRole", "AssumeRoleWithSAML", and "AssumeRoleWithWebIdentity".
You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences:
- Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them.
- Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permission to do so.
You can let users sign in using a well-known third-party identity provider such as Login with Amazon, Facebook, Google, or any OpenID Connect (OIDC)-compatible provider. You can exchange the credentials from that provider for temporary permissions to use resources in your AWS account. This is known as the web identity federation approach to temporary access. When you use web identity federation for your mobile or web application, you don't need to create custom sign-in codes or manage your own user identities. Using web identity federation helps you keep your AWS account secure because you don't have to distribute long-term security credentials, such as IAM user access keys, with your application.
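A simplified boto3 sketch of that flow: the backend assumes the pre-created role on the user's behalf and hands the short-lived credentials to the app. The role ARN, session name, bucket, and file names are placeholders:

```python
import boto3

sts = boto3.client("sts")

# Assume the pre-created IAM role and obtain temporary credentials.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/TaxAppS3AccessRole",   # placeholder role
    RoleSessionName="mobile-user-42",
    DurationSeconds=3600,
)
creds = resp["Credentials"]   # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# The mobile app uses the short-lived credentials to reach the S3 bucket;
# once they expire, it simply requests a new set.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("tax_return_2023.pdf", "tax-documents-bucket",
               "user-42/tax_return_2023.pdf")
```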
The option that says: Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses his/her mobile app, create temporary credentials using the 'AssumeRole' function in STS. Store these credentials in the mobile app's memory and use them to access the S3 bucket. Generate new credentials the next time the user runs the mobile app is correct because it creates an IAM Role with the required permissions and generates temporary security credentials using STS "AssumeRole" function. Furthermore, it generates new credentials when the user runs the app the next time around.
The option that says: Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3 is incorrect because you should never store credentials inside a mobile app for security purposes to avoid the risk of exposing any access keys or passwords. You should instead grant temporary credentials for the mobile app. In addition, you cannot create long-term credentials using AWS STS, as this service can only generate temporary access tokens.
The option that says: Use Amazon DynamoDB to record the user's information and when the user uses the mobile app, create access credentials using STS with appropriate permissions. Store these credentials in the mobile app's memory and use them to access the S3 bucket every time the user runs the app is incorrect. Even though the setup is similar to the previous option and uses DynamoDB, it is still wrong to store long-term credentials in a mobile app as it is a security risk. In addition, it does not create an IAM Role with proper permissions, which is an essential step.
The option that says: Create an IAM user then assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app and use these credentials to access Amazon S3 is incorrect because it creates an IAM User and not an IAM Role. You should create an IAM Role so that the app can access the AWS Resource via the "AssumeRole" action in STS.