25 Mar 2024 noon study
Question 12: Skipped
A company is using Microsoft Active Directory to manage all employee accounts and devices. The IT department instructed the solutions architect to implement a single sign-on [-> SSO, STS; Directory Service; Enable with Managed Microsoft AD] feature to allow the employees to use their existing Windows account password to connect and use the various AWS resources.
Which of the following options is the recommended way to extend the current Active Directory domain to AWS?
Use Amazon Cognito to authorize users to your applications using direct sign-in or through third-party apps, and access your apps' backend resources in AWS.
Create users and groups with AWS IAM Identity Center along with AWS Organizations to help you manage SSO access and user permissions across all the AWS accounts.
Use IAM Roles to set up cross-account access and delegate access to resources that are in your AWS account.
Use AWS Directory Service to integrate your AWS resources with the existing Active Directory using trust relationship. Enable single sign-on using Managed Microsoft AD.
(Correct)
Explanation
Because the company is using Microsoft Active Directory already, you can use AWS Directory Service for Microsoft AD to create secure Windows trusts between your on-premises Microsoft Active Directory domains and your AWS Microsoft AD domain in the AWS Cloud. By setting up a trust relationship, you can integrate SSO to the AWS Management Console and the AWS Command Line Interface (CLI), as well as your Windows-based workloads.
AWS Directory Service helps you to set up and run a standalone AWS Managed Microsoft AD directory hosted in the AWS Cloud. You can also use AWS Directory Service to connect your AWS resources with an existing on-premises Microsoft Active Directory. To configure AWS Directory Service to work with your on-premises Active Directory, you must first set up trust relationships to extend authentication from on-premises to the cloud.
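As a rough sketch of what the trust-relationship step can look like with the AWS Directory Service API (boto3), assuming a placeholder directory ID, domain name, trust password, and DNS forwarder address; the matching trust must also be created on the on-premises domain controller:

import boto3

ds = boto3.client("ds")  # AWS Directory Service API

# All IDs, domain names, and addresses below are placeholders for illustration only.
response = ds.create_trust(
    DirectoryId="d-1234567890",              # your AWS Managed Microsoft AD
    RemoteDomainName="corp.example.com",     # on-premises AD domain
    TrustPassword="TrustPassword123!",       # must match the password set on the on-premises side
    TrustDirection="Two-Way",
    TrustType="Forest",
    ConditionalForwarderIpAddrs=["10.0.0.10"],  # on-premises DNS server
)
print(response["TrustId"])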
Therefore, the correct answer is: Use AWS Directory Service to integrate your AWS resources with the existing Active Directory using trust relationship. Enable single sign-on using Managed Microsoft AD.
The option that says: Use Amazon Cognito to authorize users to your applications using direct sign-in or through third-party apps, and access your apps' backend resources in AWS is incorrect because Cognito is primarily used for federation to your web and mobile apps running on AWS. It allows you to authenticate users through social identity providers. But since the company is already using Microsoft AD, AWS Directory Service is the better choice here.
The option that says: Create users and groups with AWS IAM Identity Center along with AWS Organizations to help you manage SSO access and user permissions across all the AWS accounts is incorrect because using the AWS IAM Identity Center service alone is not enough to meet the requirement. Although it can help you manage SSO access and user permissions across all your AWS accounts in AWS Organizations, you still have to use the AWS Directory Service to integrate your on-premises Microsoft AD. AWS IAM Identity Center integrates with Microsoft AD using AWS Directory Service so there is no need to create users and groups.
The option that says: Use IAM Roles to set up cross-account access and delegate access to resources that are in your AWS account is incorrect because setting up cross-account access allows you to share resources in one AWS account with users in a different AWS account. Since the company is already using Microsoft AD then, the better choice to use here is the AWS Directory Service.
References:
Check out this AWS Directory Service Cheat Sheet:
Question 19: Skipped
A company located on the west coast of North America plans to release a new online service for its customers. The company already created a new VPC in the us-west-1 region where they will launch the Amazon EC2 instances that will host the web application. The application must be highly-available and must dynamically scale based on user traffic [-> ASG, EC2, ALB, multi-AZ]. In addition, the company wants to have a disaster recovery site in the us-east-1 region [-> pilot light?] that will act as a passive backup of the running application.
Which of the following options should the Solutions Architect implement in order to achieve the requirements?
Configure an Inter-Region VPC peering between the us-west-1 VPC and a new VPC in the us-east-1 region. Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs of both regions and place it behind the ALB.
Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs and place it behind the ALB. Set up the same configuration to the us-east-1 region VPC. Create record entries in Amazon Route 53 pointing to the ALBs with health check enabled and a failover routing policy.
(Correct)
Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs and place it behind the ALB. Set up the same configuration to the us-east-1 region VPC. Create separate record entries for each region’s ALB on Amazon Route 53 and enable health checks to ensure high-availability for both regions.
Configure an Inter-Region VPC peering between the us-west-1 VPC and a new VPC in the us-east-1 region. Create an Application Load Balancer (ALB) that spans multiple Availability Zones (AZs) on both VPCs. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs of both regions and place it behind the ALB. Create an Alias record entry in Amazon Route 53 that points to the DNS name of the ALB.
Explanation
On Amazon Route 53, after you create a hosted zone for your domain, such as tutorialsdojo.com, you create records to tell the Domain Name System (DNS) how you want traffic to be routed for that domain. You can create a record that points to the DNS name of your Application Load Balancer on AWS.
When you create a record, you choose a routing policy, which determines how Amazon Route 53 responds to queries:
Simple routing policy – Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website.
Failover routing policy – Use when you want to configure active-passive failover.
Geolocation routing policy – Use when you want to route traffic based on the location of your users.
Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
Latency routing policy – Use when you have resources in multiple AWS Regions, and you want to route traffic to the region that provides the best latency.
Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify.
You can use Route 53 health checks to configure active-active and active-passive failover configurations. You configure active-active failover using any routing policy (or combination of routing policies) other than failover, and you configure active-passive failover using the failover routing policy.
Use an active-passive failover configuration when you want a primary resource or group of resources to be available the majority of the time, and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. When responding to queries, Route 53 includes only the “healthy primary” resources. If all the primary resources are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS queries.
This way, you can create a failover routing policy that will direct traffic to the backup region when your primary region fails.
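A hedged sketch of what the failover record setup could look like with boto3 (the hosted zone ID, record name, ALB DNS names, and ALB hosted zone IDs are placeholders):

import boto3

route53 = boto3.client("route53")

def upsert_failover_record(failover_role, alb_dns_name, alb_zone_id):
    # Creates or updates an alias A record with a failover routing policy.
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.tutorialsdojo.com",
                    "Type": "A",
                    "SetIdentifier": f"{failover_role}-record",
                    "Failover": failover_role,          # "PRIMARY" or "SECONDARY"
                    "AliasTarget": {
                        "HostedZoneId": alb_zone_id,    # region-specific ELB hosted zone ID
                        "DNSName": alb_dns_name,
                        "EvaluateTargetHealth": True,   # fail over when the ALB reports unhealthy targets
                    },
                },
            }]
        },
    )

upsert_failover_record("PRIMARY", "uswest1-alb-123.us-west-1.elb.amazonaws.com", "ZEXAMPLEUSW1")
upsert_failover_record("SECONDARY", "useast1-alb-456.us-east-1.elb.amazonaws.com", "ZEXAMPLEUSE1")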
Amazon EC2 Auto Scaling integrates with Elastic Load Balancing, enabling you to attach one or more Application Load Balancers, Network Load Balancers, or Gateway Load Balancers with multiple target groups in front of your Auto Scaling group.
Creating an Application Load Balancer on each region with an Auto Scaling group that spans multiple Availability Zones ensures that your application will be highly available and will scale based on the user traffic.
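For illustration only, a minimal boto3 sketch of creating such an Auto Scaling group behind an ALB target group in us-west-1 (all names, subnets, and ARNs are hypothetical; the same setup would be repeated in us-east-1):

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg-us-west-1",
    LaunchTemplate={"LaunchTemplateName": "web-app-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",   # subnets in different AZs of the VPC
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-west-1:111122223333:targetgroup/web-tg/abc123"],
    HealthCheckType="ELB",                             # use the ALB health checks to replace instances
    HealthCheckGracePeriod=300,
)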
Therefore, the correct answer is: Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs and place it behind the ALB. Set up the same configuration to the us-east-1 region VPC. Create record entries in Amazon Route 53 pointing to the ALBs with health check enabled and a failover routing policy.
The option that says: Configure an Inter-Region VPC peering between the us-west-1 VPC and a new VPC in the us-east-1 region. Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs of both regions and place it behind the ALB is incorrect because an Auto Scaling group cannot span AZs across multiple regions, and an Application Load Balancer cannot serve traffic to EC2 instances in a different region even with Inter-Region VPC peering.
The option that says: Configure an Inter-Region VPC peering between the us-west-1 VPC and a new VPC in the us-east-1 region. Create an Application Load Balancer (ALB) that spans multiple Availability Zones (AZs) on both VPCs. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs of both regions and place it behind the ALB. Create an Alias record entry in Amazon Route 53 that points to the DNS name of the ALB is incorrect because an Application Load Balancer cannot span multiple regions, only multiple AZs within the same region. An Auto Scaling group also cannot span multiple regions as it can only deploy EC2 instances in one region.
The option that says: Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs and place it behind the ALB. Set up the same configuration to the us-east-1 region VPC. Create separate record entries for each region’s ALB on Amazon Route 53 and enable health checks to ensure high-availability for both regions is incorrect. Although this setup is possible, it does not mention the routing policy to be used on Amazon Route 53. The question requires that the second region acts as a passive backup, which means only the main region receives all the traffic so you need to specifically use a failover routing policy in Amazon Route 53.
References:
Check out the Amazon Route 53 Cheat Sheet:
Question 20: Skipped
A company has a team of data analysts that uploads generated data points to an Amazon S3 bucket. The data points are used by other departments, so the objects on this primary S3 bucket need to be replicated to other S3 buckets on several AWS Accounts owned by the company. The Solutions Architect created an AWS Lambda function that is triggered by S3 PUT events on the primary bucket. This Lambda function will replicate the newly uploaded object to other destination buckets. [-> Bucket lifecycle, replication? ] Since there will be thousands of object uploads on the primary bucket every day, the company is concerned that this Lambda function may affect other critical Lambda functions because of the regional concurrency limit in AWS Lambda. The replication of the objects does not need to happen in real-time. The company needs to ensure that this Lambda function will not affect the execution of other critical Lambda functions.
Which of the following options will meet the requirements in the LEAST amount of development effort?
Implement an exponential backoff algorithm in the new Lambda function to ensure that it will not run if the concurrency limit is reached. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to check if the concurrency limit is reached.
Set the execution timeout of the new Lambda function to 5 minutes. This will allow it to wait for other Lambda function executions to finish in case the concurrency limit is reached. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to check if the concurrency limit is reached.
Configure a reserved concurrency limit for the new function to ensure that its executions will not exceed this limit. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to ensure that the concurrency limit is not being reached.
(Correct)
Decouple the Amazon S3 event notifications and send the events to an Amazon SQS queue in a separate AWS account. Create the new Lambda function on this account too. Invoke the Lambda function whenever an event message is received in the SQS queue.
Explanation
Concurrency is the number of requests that your function is serving at any given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency. Concurrency is subject to a Regional quota that is shared by all functions in a Region.
There are two types of concurrency available:
Reserved concurrency – Reserved concurrency creates a pool of requests that can only be used by its function, and also prevents its function from using unreserved concurrency.
Provisioned concurrency – Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond to your function's invocations.
In a single account with the default concurrency limit of 1000 concurrent executions, when other services invoke Lambda function concurrently, there is the possibility for two issues to pop up:
- One or more of these services could invoke enough functions to consume a majority of the available concurrency capacity. This could cause others to be starved for it, causing failed invocations.
- A service could consume too much concurrent capacity and cause a downstream service or database to be overwhelmed, which could cause failed executions.
For Lambda functions that are launched in a VPC, you have the potential to consume the available IP addresses in a subnet or the maximum number of elastic network interfaces to which your account has access. One way to solve both of these problems is by applying a concurrency limit to the Lambda functions in an account.
You can set a concurrency limit on individual Lambda functions in an account. The concurrency limit that you set reserves a portion of your account level concurrency for a given function. All of your functions’ concurrent executions count against this account-level limit by default.
If you set a concurrency limit for a specific function then that function’s concurrency limit allocation is deducted from the shared pool and assigned to that specific function. AWS also reserves 100 units of concurrency for all functions that don’t have a specified concurrency limit set. This helps to make sure that future functions have capacity to be consumed.
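A minimal sketch of how this could be configured with boto3, assuming a hypothetical function name, reserved limit, and SNS topic for the alarm action:

import boto3

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

# Cap the replication function at 50 concurrent executions so it cannot starve other functions.
lambda_client.put_function_concurrency(
    FunctionName="s3-object-replicator",
    ReservedConcurrentExecutions=50,
)

# Alarm on the Throttles metric to know when the function hits its reserved limit.
cloudwatch.put_metric_alarm(
    AlarmName="s3-object-replicator-throttles",
    Namespace="AWS/Lambda",
    MetricName="Throttles",
    Dimensions=[{"Name": "FunctionName", "Value": "s3-object-replicator"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:lambda-throttle-alerts"],
)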
Therefore, the correct answer is: Configure a reserved concurrency limit for the new function to ensure that its executions will not exceed this limit. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to ensure that the concurrency limit is not being reached.
The option that says: Set the execution timeout of the new Lambda function to 5 minutes. This will allow it to wait for other Lambda function executions to finish in case the concurrency limit is reached. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to check if the concurrency limit is reached is incorrect. This will still invoke the new Lambda function, and it will cause more problems because the function keeps consuming a concurrency slot during the wait time. Other Lambda functions won't be able to execute if all of the account's concurrent executions are consumed.
The option that says: Decouple the Amazon S3 event notifications and send the events to an Amazon SQS queue in a separate AWS account. Create the new Lambda function on this account too. Invoke the Lambda function whenever an event message is received in the SQS queue is incorrect. This is possible and you will have the concurrency limit on the separate AWS account all for the new Lambda function. However, this requires more work and the creation of another AWS account. Setting a concurrency limit is recommended as it can be used to limit the number of executions of a particular function.
The option that says: Implement an exponential backoff algorithm in the new Lambda function to ensure that it will not run if the concurrency limit is being reached. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to check if the concurrency limit is reached is incorrect. This will require you to write a backoff algorithm to check the concurrency limit. The function needs to execute in order to run the backoff algorithm which defeats the purpose of limiting concurrency.
References:
Check out the AWS Lambda Cheat Sheet:
Question 25: Skipped
A call center company uses its custom application to process and store call recordings in its on-premises data center. The recordings are stored on an NFS share [-> EFS?]. An offshore team is contracted to transcribe about 2% of the call recordings to be used for quality assurance purposes. It could take up to 3 days before the recordings are completely transcribed [-> AWS Translate? Transcribe]. The application that processes the calls and manages the transcription queue [-> SQS? Lambda] is hosted on Linux servers. A web portal is available for the quality assurance team to review the call recordings. After 90 days, the recordings are sent to an offsite location for long-term storage [-> S3 Glacier?]. The company plans to migrate the system to the AWS cloud to reduce storage costs and automate the transcription of the recordings.
Which of the following options is the recommended solution to meet the company’s requirements?
Store all recordings in an Amazon S3 bucket. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an AWS Lambda trigger to start a transcription job using AWS IQ. Create an Auto Scaling group of Amazon EC2 instances to host the web portal. Provision an Application Load Balancer in front of the Auto Scaling group.
Store all recordings in an Amazon S3 bucket and send the object key to an Amazon SQS queue. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an Auto Scaling group of Amazon EC2 instances to push the recordings to Amazon Translate for transcription. Set the Auto Scaling policy based on the number of objects on the SQS queue. Update the web portal so it can be hosted on an Amazon S3 bucket, Amazon API Gateway, and AWS Lambda.
Store all recordings in an Amazon S3 bucket. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an AWS Lambda trigger to start a transcription job using Amazon Transcribe. Update the web portal so it can be hosted on an Amazon S3 bucket, Amazon API Gateway, and AWS Lambda.
(Correct)
Create an Auto Scaling group of Amazon EC2 instances to host the web portal. Provision an Application Load Balancer in front of the Auto Scaling group. Store all recordings in an Amazon EFS share that is mounted on all instances. After 90 days, archive all call recordings using AWS Backup and use Amazon Transcribe to transcribe the recordings.
Explanation
Amazon Transcribe is an AWS service that makes it easy for customers to convert speech to text. Using Automatic Speech Recognition (ASR) technology, customers can choose to use Amazon Transcribe for a variety of business applications, including transcription of voice-based customer service calls, generation of subtitles on audio/video content, and conduct (text-based) content analysis on audio/video content.
Amazon Transcribe analyzes audio files that contain speech and uses advanced machine-learning techniques to transcribe the voice data into text. You can then use the transcription as you would any text document.
Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, automate subtitling, and generate metadata for media assets to create a fully searchable archive. Amazon Transcribe automatically adds speaker diarization, punctuation, and formatting so that the output closely matches the quality of manual transcription at a fraction of the time and expense. Speech-to-text processing can be applied to live audio streams or batch audio content for transcription.
To transcribe an audio file, Amazon Transcribe uses three operations:
StartTranscriptionJob
– Starts a batch job to transcribe the speech in an audio file to text.
ListTranscriptionJobs
– Returns a list of transcription jobs that have been started. You can specify the status of the jobs that you want the operation to return. For example, you can get a list of all pending jobs or a list of completed jobs.
GetTranscriptionJob
– Returns the result of a transcription job. The response contains a link to a JSON file containing the results.
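For example, a Lambda function triggered by S3 PUT events could start a batch job with StartTranscriptionJob roughly like this (the output bucket, media format, and job-naming scheme are assumptions for illustration):

import boto3

transcribe = boto3.client("transcribe")

def handler(event, context):
    # Sketch of a Lambda handler invoked by an S3 PUT event notification.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    transcribe.start_transcription_job(
        TranscriptionJobName=key.replace("/", "-"),      # job names cannot contain slashes
        LanguageCode="en-US",
        MediaFormat="mp3",                               # assumed format of the call recordings
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        OutputBucketName="call-transcripts-bucket",      # placeholder bucket for the JSON results
    )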
To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:
Transition actions—Define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.
Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your behalf. The lifecycle expiration costs depend on when you choose to expire objects.
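A minimal sketch of the 90-day transition rule described in the correct answer, using boto3 with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Move every recording in the bucket to S3 Glacier 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="call-recordings-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-recordings-after-90-days",
            "Filter": {"Prefix": ""},           # apply to all objects in the bucket
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)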
Therefore, the correct answer is: Store all recordings in an Amazon S3 bucket. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an AWS Lambda trigger to start a transcription job using Amazon Transcribe. Update the web portal so it can be hosted on an Amazon S3 bucket, Amazon API Gateway, and AWS Lambda. Amazon S3 and Glacier offer very cheap object storage for recordings. Amazon Transcribe offers speech-to-text services that can quickly transcribe recordings. Amazon S3, API Gateway, and Lambda are cheap and scalable ways to host the web portal.
The option that says: Store all recordings in an Amazon S3 bucket. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an AWS Lambda trigger to start a transcription job using AWS IQ. Create an Auto Scaling group of Amazon EC2 instances to host the web portal. Provision an Application Load Balancer in front of the Auto Scaling group is incorrect because AWS IQ is just a freelancing platform that provides hands-on help from AWS experts; it does not have a feature to automate a transcription job. Amazon Transcribe is the correct service here because it automates the transcription process, which is a requirement of the company.
The option that says: Create an Auto Scaling group of Amazon EC2 instances to host the web portal. Provision an Application Load Balancer in front of the Auto Scaling group. Store all recordings in an Amazon EFS share that is mounted on all instances. After 90 days, archive all call recordings using AWS Backup and use Amazon Transcribe to transcribe the recordings is incorrect because Amazon Transcribe is primarily used to transcribe recordings stored on Amazon S3, not on Amazon EFS. Storing the call recordings on Amazon S3 is also cheaper compared to Amazon EFS.
The option that says: Store all recordings in an Amazon S3 bucket and send the object key to an Amazon SQS queue. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an Auto Scaling group of Amazon EC2 instances to push the recordings to Amazon Translate for transcription. Set the Auto Scaling policy based on the number of objects on the SQS queue. Update the web portal so it can be hosted on an Amazon S3 bucket, Amazon API Gateway, and AWS Lambda is incorrect because Amazon Translate is only a text translation service that uses advanced machine learning technologies to provide high-quality translation on demand. It is not used for transcribing voice messages to text.
References:
Check out the Amazon Transcribe and Amazon S3 Cheat Sheets:
Question 26: Skipped
A company has several resources in its production environment that is shared among various business units of the company. A single business unit may have one or more AWS accounts that have resources in the production environment. There were a lot of incidents in which the developers from a specific business unit accidentally terminated the Amazon EC2 instances, Amazon EKS clusters, and Amazon Aurora Serverless databases which are owned by another business unit. The solutions architect has been tasked to come up with a solution to only allow a specific business unit that owns the EC2 instances, and other AWS resources, to terminate their own resources. [-> Organization; OU; SCP; resource-level access]
Which of the following is the most suitable multi-account strategy implementation to meet the company requirements?
Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create a Service Control Policy in the production account which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the SCP to the OUs, which will then be automatically inherited by its member accounts.
Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create an IAM Role in the production account for each business unit which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances that it owns. Create an AWSServiceRoleForOrganizations
service-linked role for the individual member accounts of the OU to enable trusted access.
Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to individual Organization Units (OU). Create an IAM Role in the production account which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the IAM policy to every member accounts of the OU.
(Correct)
Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create a Service Control Policy in the production account for each business unit which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances that it owns. Provide the cross-account access and the SCP to the individual member accounts to tightly control who can terminate the EC2 instances.
Explanation
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. As an administrator of an organization, you can create accounts in your organization and invite existing accounts to join the organization.
You can use organizational units (OUs) to group accounts together to administer as a single unit. This greatly simplifies the management of your accounts. For example, you can attach a policy-based control to an OU, and all accounts within the OU automatically inherit the policy. You can create multiple OUs within a single organization, and you can create OUs within other OUs. Each OU can contain multiple accounts, and you can move accounts from one OU to another. However, OU names must be unique within a parent OU or root.
Resource-level permissions refer to the ability to specify which resources users are allowed to perform actions on. Amazon EC2 has partial support for resource-level permissions. This means that for certain Amazon EC2 actions, you can control when users are allowed to use those actions based on conditions that have to be fulfilled, or specific resources that users are allowed to use. For example, you can grant users permissions to launch instances, but only of a specific type, and only using a specific AMI.
The scenario on this question has a lot of AWS Accounts that need to be managed. AWS Organization solves this problem and provides you with control by assigning the different business units as individual Organization Units (OU). Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. However, SCPs alone are not sufficient for allowing access to the accounts in your organization. Attaching an SCP to an AWS Organizations entity just defines a guardrail for what actions the principals can perform. You still need to attach identity-based or resource-based policies to principals or resources in your organization's accounts to actually grant permission to them.
Since SCPs only allow or deny the use of an AWS service, you don't want to block OUs from completely using the EC2 service. Thus, you will need to provide cross-account access and the IAM policy to every member accounts of the OU.
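As one possible illustration of such a resource-level IAM policy, the sketch below assumes the instances are tagged with a BusinessUnit tag (a tagging scheme not stated in the scenario) so that a business unit can terminate only its own instances:

import boto3, json

iam = boto3.client("iam")

# Hypothetical policy: allow terminating only EC2 instances tagged with this business unit.
terminate_own_instances_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:TerminateInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/BusinessUnit": "finance"}
        },
    }],
}

iam.create_policy(
    PolicyName="finance-terminate-own-ec2",
    PolicyDocument=json.dumps(terminate_own_instances_policy),
)
# This policy would then be attached to the cross-account IAM Role assumed by the finance OU's accounts.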
Hence, the correct answer is: Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to individual Organization Units (OU). Create an IAM Role in the production account which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the IAM policy to every member accounts of the OU.
The option that says: Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create an IAM Role in the production account for each business unit which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances that it owns. Create an AWSServiceRoleForOrganizations
service-linked role for the individual member accounts of the OU to enable trusted access is incorrect because AWSServiceRoleForOrganizations service-linked role is primarily used to only allow AWS Organizations to create service-linked roles for other AWS services. This service-linked role is present in all organizations and not just in a specific OU.
The following options are incorrect because an SCP policy simply specifies the services and actions that users and roles can use in the accounts:
1. Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create a Service Control Policy in the production account for each business unit which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances that it owns. Provide the cross-account access and the SCP to the individual member accounts to tightly control who can terminate the EC2 instances.
2. Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create a Service Control Policy in the production account which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the SCP to the OUs, which will then be automatically inherited by its member accounts.
SCPs are similar to IAM permission policies except that they don't grant any permissions.
References:
Check out this AWS Organizations Cheat Sheet:
Service Control Policies (SCP) vs IAM Policies:
Comparison of AWS Services Cheat Sheets:
Question 29: Skipped
A multinational software provider in the US hosts both its development and test environments in the AWS cloud. The CTO decided to use separate AWS accounts to host each environment. The solutions architect has enabled Consolidated Billing to link each account's bill to a Master AWS account. To make sure that each account is kept within the budget, the administrators in the master account must have the power to stop, delete, and/or terminate resources in both the development and test environment AWS accounts.
Which of the following options is the recommended action to meet the requirements for this scenario?
By linking all accounts under Consolidated Billing, you will be able to provide IAM users in the master account access to Dev and Test account resources.
First, create IAM users in the master account. Then in the Dev and Test accounts, generate cross-account roles that have full admin permissions while granting access for the master account.
(Correct)
In the master account, you are to create IAM users and a cross-account role that has full admin permissions to the Dev and Test accounts.
IAM users with full admin permissions will be created in the master account. In both Dev and Test accounts, generate cross-account roles that would grant the master account access to Dev and Test account resources through permissions inherited from the master account.
Explanation
You can share resources in one account with users in a different account. By setting up cross-account access in this way, you don't need to create individual IAM users in each account. In addition, users don't have to sign out of one account and sign in to another in order to access resources that are in different AWS accounts.
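A rough sketch of how an administrator in the master account could use such a cross-account role, assuming a placeholder role ARN in the Dev account and a placeholder instance ID:

import boto3

sts = boto3.client("sts")

# The MasterAccountAdmin role lives in the Dev account and trusts the master account.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::222233334444:role/MasterAccountAdmin",
    RoleSessionName="master-admin-session",
)
creds = assumed["Credentials"]

# Use the temporary credentials to act inside the Dev account, e.g. stop an instance.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])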
Therefore, the correct answer is: First, create IAM users in the master account. Then in the Dev and Test accounts, generate cross-account roles that have full admin permissions while granting access for the master account. The cross-account role is created in Dev and Test accounts, and the users are created in the Master account that are given that role.
The option that says: In the master account, you are to create IAM users and a cross-account role that has full admin permissions to the Dev and Test accounts is incorrect. A cross-account role should be created in Dev and Test accounts, not Master account.
The option that says: IAM users with full admin permissions will be created in the master account. In both Dev and Test accounts, generate cross-account roles that would grant the master account access to Dev and Test account resources through permissions inherited from the master account is incorrect. The permissions cannot be inherited from one AWS account to another.
The option that says: By linking all accounts under Consolidated Billing, you will be able to provide IAM users in the master account access to Dev and Test account resources is incorrect. Consolidated billing does not give access to resources in this fashion.
References:
Check out this AWS IAM Cheat Sheet:
Question 34: Skipped
A logistics company is developing a new application that will be used for all its departments. All of the company's AWS accounts are under OrganizationA in its AWS Organizations. A certain feature of the application must allow AWS resource access from a third-party account [Another AWS Acc] which is under AWS Organizations named OrganizationB. The company wants to follow security best practices and grant "least privilege" access [-> IAM analyzer? IAM Role? - External ID from AWS B + IAM Role from AWS A] using API or CLI to the third-party account.
Which of the following options is the recommended way to securely allow OrganizationB to access AWS resources on OrganizationA?
The logistics company should create an IAM role and attach an IAM policy allowing only the required access. The third-party account should then use AWS STS to assume the IAM role’s Amazon Resource Name (ARN) when requesting access to OrganizationA’s AWS resources.
The logistics company must create an IAM user with an IAM policy allowing only the required access. The logistics company should then send the AWS credentials to the third-party account to allow login and perform only specific tasks.
The third-party account should create an External ID that will be given to OrganizationA. The logistics company should then create an IAM role with the required access and put the External ID in the IAM role’s trust policy. The third-party account should use the IAM role’s ARN and External ID when requesting access to OrganizationA’s AWS resources.
(Correct)
The third-party AWS Organization must integrate with the AWS Identity Center of the logistics company. Then create custom IAM policies for the third-party account to only access specific resources under OrganizationA.
Explanation
At times, you need to give a third-party access to your AWS resources (delegate access). One important aspect of this scenario is the External ID, optional information that you can use in an IAM role trust policy to designate who can assume the role. The external ID allows the user that is assuming the role to assert the circumstances in which they are operating. It also provides a way for the account owner to permit the role to be assumed only under specific circumstances.
In a multi-tenant environment where you support multiple customers with different AWS accounts, AWS recommends using one external ID per AWS account. This ID should be a random string generated by the third party.
To require that the third party provides an external ID when assuming a role, update the role's trust policy with the external ID of your choice. To provide an external ID when you assume a role, use the AWS CLI or AWS API to assume that role.
For example, let's say that you decide to hire a third-party company called Example Corp to monitor your AWS account and help optimize costs. In order to track your daily spending, Example Corp needs to access your AWS resources. Example Corp also monitors many other AWS accounts for other customers.
Do not give Example Corp access to an IAM user and its long-term credentials in your AWS account. Instead, use an IAM role and its temporary security credentials. An IAM role provides a mechanism to allow a third party to access your AWS resources without needing to share long-term credentials.
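A minimal sketch of creating such a role with an External ID condition in its trust policy, using boto3 with hypothetical account ID and external ID values:

import boto3, json

iam = boto3.client("iam")

# Placeholder third-party (OrganizationB) account ID and the external ID it supplied.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "orgb-3f9c2a"}},
    }],
}

iam.create_role(
    RoleName="OrganizationB-access-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# A least-privilege permissions policy would then be attached to this role.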
Therefore, the correct answer is: The third-party account should create an External ID that will be given to OrganizationA. The logistics company should then create an IAM role with the required access and put the External ID in the IAM role’s trust policy. The third-party account should use the IAM role’s ARN and External ID when requesting access to OrganizationA’s AWS resources. An External ID in IAM role's trust policy is a recommended best practice to ensure that the external party can assume your role only when it is acting on your behalf.
The option that says: The third-party AWS Organization must integrate with the AWS Identity Center of the logistics company. Then create custom IAM policies for the third-party account to only access specific resources under OrganizationA is incorrect. This is not recommended as it adds unnecessary complexity, and there is a possibility that OrganizationB users could log in and view the accounts in OrganizationA.
The option that says: The logistics company must create an IAM user with an IAM policy allowing only the required access. The logistics company should then send the AWS credentials to the third-party account to allow login and perform only specific tasks is incorrect. This is possible but not recommended because AWS credentials may get leaked or exposed. It is recommended to use STS to generate a temporary token when requesting access to AWS resources.
The option that says: The logistics company should create an IAM role and attach an IAM policy allowing only the required access. The third-party account should then use AWS STS to assume the IAM role’s Amazon Resource Name (ARN) when requesting access to OrganizationA’s AWS resources is incorrect. This is possible but not the most secure way among the options. Without the External ID on the IAM role's trust policy, it could be possible that other AWS accounts can assume that IAM role.
References:
Check out these AWS Organizations and AWS IAM Cheat Sheets:
Question 51: Skipped
A global enterprise web application is using a private S3 bucket, named MANILATECH-CONFIG, which has Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3) to store its configuration files for different regions in North America, Latin America, Europe, and Asia. There have been a lot of database changes and feature toggle switching for the past few weeks. Your CTO assigned you the task of enabling versioning on this bucket to track any changes made to the configuration files and have the ability to use the old settings if needed [-> S3 Versioning]. In the coming days ahead, a new region in Oceania will be supported by the web application and thus, a new configuration file will be added soon. Currently, there are already four files in the bucket, namely: MNL-NA.config, MNL-LA.config, MNL-EUR.config, and MNL-ASIA.config which are updated regularly. As instructed, you enabled the versioning in the bucket and after a few days, the new MNL-O.config configuration file for the Oceania region has been uploaded. A week after, a configuration has been done on MNL-NA.config, MNL-LA.config, and MNL-O.config files. [-> 2 versions each for MNL-O, MNL-NA, and MNL-LA (MNL-O's first version ID is not null because it was created *AFTER* versioning was enabled); the first version ID of MNL-NA and MNL-LA has a value of null; 1 version each for MNL-EUR and MNL-ASIA, with a version ID of null]
In this scenario, which of the following is correct about files inside the MANILATECH-CONFIG S3 bucket? (Select TWO.)
The latest Version ID of MNL-NA.config and MNL-LA.config has a value of null.
The first Version ID of MNL-NA.config and MNL-LA.config has a value of 1.
There would be two available versions for each of the MNL-NA.config, MNL-LA.config, and MNL-O.config files. The first Version ID of MNL-NA.config and MNL-LA.config has a value of null.
(Correct)
The MNL-EUR.config and MNL-ASIA.config files will have a Version ID of null.
(Correct)
The MNL-EUR.config and MNL-ASIA.config files will have a Version ID of 1.
Explanation
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.
In this scenario, we have an initial 4 files in the MANILATECH-CONFIG bucket: MNL-NA.config, MNL-LA.config, MNL-EUR.config, and MNL-ASIA.config. Then, the Versioning feature was enabled which caused all of the 4 existing files to have a Version ID of null. This new configuration will enable the new files that will be added to have an alphanumeric VERSION ID, as well as any new updates for the first 4 files. Hence, when a new MNL-O.config configuration file was added, its Version ID was an alphanumeric key since this file was uploaded after the Versioning feature was enabled.
A week after, a new update has been done on the 3 configuration files only (MNL-NA.config, MNL-LA.config, and MNL-O.config files). Take note that at this point, there are NO changes made on the MNL-EUR.config and MNL-ASIA.config files, which is why their first (and latest) version ID will still remain as null since there were no new updates made yet.
However, MNL-NA.config and MNL-LA.config have a first Version ID of null, and their second Version ID is an alphanumeric key. For the MNL-O.config file, the first Version ID is already an alphanumeric key since this file was created after Versioning was enabled.
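To make this concrete, a small boto3 sketch of enabling versioning and inspecting the resulting version IDs (the bucket name is lowercased here because S3 bucket names must be lowercase):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="manilatech-config",
    VersioningConfiguration={"Status": "Enabled"},
)

# After the later updates, listing versions shows the behavior described above:
# pre-existing objects keep VersionId "null" for their first version, while new
# uploads and new versions get alphanumeric IDs.
versions = s3.list_object_versions(Bucket="manilatech-config", Prefix="MNL-")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])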
Therefore, the correct answers are:
- There would be two available versions for each of the MNL-NA.config, MNL-LA.config, and MNL-O.config files. The first Version ID of MNL-NA.config and MNL-LA.config has a value of null.
- The MNL-EUR.config and MNL-ASIA.config files will have a Version ID of null
The option that says: The first Version ID of MNL-NA.config and MNL-LA.config has a value of 1 is incorrect. The first VERSION ID of these files would be null since they were already existing when the S3 Versioning was enabled.
The option that says: The MNL-EUR.config and MNL-ASIA.config files will have a Version ID of 1 is incorrect because the Version ID for these files is null.
The option that says: The latest Version ID of MNL-NA.config and MNL-LA.config has a value of null is incorrect because the latest VERSION ID value for these 2 files would be an alphanumeric value and not null.
Takeaway:
Objects that already exist in the bucket when versioning is enabled keep a first Version ID of null.
Objects uploaded (and new object versions created) after versioning is enabled get an alphanumeric Version ID.
References:
Check out this Amazon S3 Cheat Sheet:
Question 54: Skipped
A telecommunications company has several Amazon EC2 instances inside an AWS VPC. To improve data leak protection, the company wants to restrict the internet connectivity of its EC2 instances [-> NAT Gateway]. The EC2 instances that are launched on a public subnet should be able to access product updates and patches from the Internet. The packages are accessible through the third-party provider via their URLs. The company wants to explicitly deny any other outbound connections [-> NACL? ] from the VPC instances to hosts on the Internet.
Filter by URLs? -> Not SG / NACL / Route Tables
Which of the following options would the solutions architect consider implementing to meet the company requirements?
Move all instances from the public subnets to the private subnets. Additionally, remove the default routes from your routing tables and replace them instead with routes that specify your package locations.
Use network ACL rules that allow network access to your specific package destinations. Add an implicit deny for all other cases.
Create security groups with the appropriate outbound access rules that will let you retrieve software packages from the Internet.
You can use a forward web proxy server in your VPC and manage outbound access using URL-based rules. Default routes are also removed.
(Correct)
Explanation
A forward proxy server acts as an intermediary for requests from internal users and servers, often caching content to speed up subsequent requests. Companies usually implement proxy solutions to provide URL and web content filtering, IDS/IPS, data loss prevention, monitoring, and advanced threat protection. AWS customers often use a VPN or AWS Direct Connect connection to leverage existing corporate proxy server infrastructure, or build a forward proxy farm on AWS using software such as Squid proxy servers with internal Elastic Load Balancing (ELB).
You can limit outbound web connections from your VPC to the internet, using a web proxy (such as a squid server) with custom domain whitelists or DNS content filtering services. The solution is scalable, highly available, and deploys in a fully automated way.
Therefore, the correct answer is: You can use a forward web proxy server in your VPC and manage outbound access using URL-based rules. Default routes are also removed. A proxy server filters requests from the client, and allows only those that are related to the product updates, and in this case helps filter all other requests except the ones for the product updates.
The option that says: Move all instances from the public subnets to the private subnets. Additionally, remove the default routes from your routing tables and replace them instead with routes that specify your package locations is incorrect. Even though moving the instances to a private subnet is a good idea, a routing table has no filtering logic; it only routes traffic toward targets such as an Internet gateway and cannot restrict access by URL.
The option that says: Using network ACL rules that allow network access to your specific package destinations then adding an implicit deny for all other cases is incorrect. NACLs cannot filter requests based on URLs.
The option that says: Creating security groups with the appropriate outbound access rules that will let you retrieve software packages from the Internet is incorrect. A security group cannot filter requests based on URLs.
References:
Check out this Amazon VPC Cheat Sheet:
Question 68: Skipped
A leading fast-food chain has recently adopted a hybrid cloud infrastructure that extends its data centers into AWS Cloud. The solutions architect has been tasked to allow on-premises users, who are already signed in using their corporate accounts, to manage AWS resources without creating separate IAM users for each of them. [-> IAM Roles, SSO with Cognito? SAML IDP, STS + IAM Identity Center] This is to avoid having two separate login accounts and memorizing multiple credentials.
Which of the following is the best way to handle user authentication in this hybrid architecture?
Retrieve AWS temporary security credentials with Web Identity Federation using STS and AssumeRoleWithWebIdentity to enable users to log in to the AWS console.
Authenticate using your on-premises SAML 2.0-compliant identity provider (IDP), retrieve temporary credentials using STS, and grant federated access to the AWS console via the AWS IAM Identity Center.
(Correct)
Authenticate through your on-premises SAML 2.0-compliant identity provider (IDP) using STS and AssumeRoleWithWebIdentity to retrieve temporary security credentials, which enables your users to log in to the AWS console using a browser.
Integrate the company’s authentication process with Amazon AppFlow and allow Amazon STS to retrieve temporary AWS credentials using OAuth 2.0 to enable your members to log in to the AWS Console.
Explanation
In this scenario, you need to provide temporary access to AWS resources to the existing users, but you should not create new IAM users for them to avoid having to maintain two login accounts. This means that you need to set up a single-sign on authentication for your users so they only need to sign-in once in their on-premises network and can also access the AWS cloud at the same time.
You can use a role to configure your SAML 2.0-compliant identity provider (IdP) and AWS to permit your federated users to access the AWS Management Console. The role grants the user permission to carry out tasks in the console.
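For reference, a hedged sketch of the STS call involved (AssumeRoleWithSAML), with placeholder role/provider ARNs and a placeholder SAML assertion obtained from the corporate IdP:

import boto3

# AssumeRoleWithSAML is called without AWS credentials; the SAML assertion issued by the
# on-premises IdP after corporate sign-in is the proof of identity.
sts = boto3.client("sts")

base64_encoded_saml_assertion = "<base64-encoded SAML response from the corporate IdP>"

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/ADFS-Administrators",
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorporateADFS",
    SAMLAssertion=base64_encoded_saml_assertion,
)
temporary_credentials = response["Credentials"]   # used for federated console/CLI access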
Therefore, the correct answer is: Authenticate using your on-premises SAML 2.0-compliant identity provider (IDP), retrieve temporary credentials using STS, and grant federated access to the AWS console via the AWS IAM Identity Center. This gives users federated access to AWS resources through a SAML 2.0 identity provider, and it uses the on-premises single sign-on (SSO) endpoint to authenticate users, which gives them access tokens prior to providing the federated access.
The option that says: Integrate the company’s authentication process with Amazon AppFlow and allow Amazon STS to retrieve temporary AWS credentials using OAuth 2.0 to enable your members to log in to the AWS Console is incorrect. Amazon AppFlow is used to securely transfer data between Software-as-a-Service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services. It is not used for authentication. Additionally, OAuth 2.0 is not applicable in this scenario. We are not using Web Identity Federation as it is used with public identity providers such as Facebook, Google, etc.
The option that says: Authenticate through your on-premises SAML 2.0-compliant identity provider (IDP) using STS and AssumeRoleWithWebIdentity to retrieve temporary security credentials, which enables your users to log in to the AWS console using a browser is incorrect. The use of AssumeRoleWithWebIdentity is wrong because that API is only for Web Identity Federation (Facebook, Google, and other social logins). Even though it mentions a SAML 2.0 identity provider, the requirement is to provide single sign-on to users, which means that the users should not sign in to the AWS console using separate security credentials but through their corporate identity provider.
The option that says: Retrieve AWS temporary security credentials with Web Identity Federation using STS and AssumeRoleWithWebIdentity to enable users to log in to the AWS console is incorrect. We are not using Web Identity Federation as it is used with public identity providers such as Facebook, Google, etc.
References:
Check out this AWS IAM Cheat Sheet:
Question 4: Skipped
A tech company is about to undergo a financial audit. It has been planned to use a third-party web application that needs to have certain AWS access to issue several API commands. It will discover Amazon EC2 resources running within the enterprise's account. The company has internal security policies that require any outside access to its environment to conform to the principles of least privilege. The solutions architect must ensure that the credentials used by the third-party vendor cannot be used by any other third party. The third-party vendor also has an AWS account where it runs its web application and it already provided a unique customer ID, including their AWS account number. [-> External ID + AWS Role Policy]
Which of the following options would allow the solutions architect to give permissions to the third-party vendor in compliance with the company requirements?
Create an IAM user in the enterprise account that has permissions allowing only the actions required by the third-party application. Also generate a new access key and secret key from the user to be given to the third-party provider.
Create a new IAM role for the 3rd-party vendor. Add a permission policy that only allows the actions required by the third party application. Also, add a trust policy with a Condition
element for the ExternalId
context key. The Condition must test the ExternalId
context key to ensure that it matches the unique customer ID from the 3rd party vendor.
(Correct)
Use Amazon Connect to allow the third-party application to access your AWS resources. In the AWS Connect configuration, input the ExternalId
context key to ensure that it matches the unique customer ID of the 3rd party vendor.
Provide your own access key and secret key to the third-party software.
Explanation
At times, you need to give third party access to your AWS resources (delegate access). One important aspect of this scenario is the External ID, optional information that you can use in an IAM role trust policy to designate who can assume the role.
To use an external ID, update a role trust policy with the external ID of your choice. Then, when someone uses the AWS CLI or AWS API to assume that role, they must provide the external ID.
For example, let's say that you decide to hire a third-party company called Boracay Corp to monitor your AWS account and help optimize costs. In order to track your daily spending, Boracay Corp needs to access your AWS resources. Boracay Corp also monitors many other AWS accounts for other customers.
Do not give Boracay Corp access to an IAM user and its long-term credentials in your AWS account. Instead, use an IAM role and its temporary security credentials. An IAM role provides a mechanism to allow a third party to access your AWS resources without needing to share long-term credentials (for example, an IAM user's access key).
You can use an IAM role to establish a trusted relationship between your AWS account and the Boracay Corp account. After this relationship is established, a member of the Boracay Corp account can call the AWS STS AssumeRole API to obtain temporary security credentials. The Boracay Corp members can then use the credentials to access AWS resources in your account.
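A minimal sketch of what the vendor-side call could look like, with a placeholder role ARN and external ID that must match the customer's trust policy:

import boto3

# Run by the third-party vendor (e.g. the auditing web application) from its own AWS account.
sts = boto3.client("sts")

assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ThirdPartyAuditRole",
    RoleSessionName="audit-discovery",
    ExternalId="unique-customer-id-12345",   # must match the ExternalId condition in the role's trust policy
)
creds = assumed["Credentials"]

# Discover EC2 resources in the enterprise account using the temporary credentials.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
instances = ec2.describe_instances()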
When a user, a resource, an application, or any service needs to access any AWS service or resource, always opt to create an appropriate role that has the least privileged access or only the required access, rather than using any other credentials such as keys.
Therefore, the correct answer is: Create a new IAM role for the 3rd-party vendor. Add a permission policy that only allows the actions required by the third party application. Also, add a trust policy with a Condition
element for the ExternalId
context key. The Condition must test the ExternalId
context key to ensure that it matches the unique customer ID from the 3rd party vendor.
The option that says: Use Amazon Connect to allow the third-party application to access your AWS resources. In the AWS Connect configuration, input the ExternalId
context key to ensure that it matches the unique customer ID of the 3rd party vendor is incorrect because Amazon Connect is simply an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost. You should use an IAM Role in this scenario instead of Amazon Connect.
The option that says: Provide your own access key and secret key to the third-party software is incorrect because you should never share your access and secret keys.
The option that says: Create an IAM user in the enterprise account that has permissions allowing only the actions required by the third-party application and generating a new access key and secret key from the user to be given to the third-party provider is incorrect because the third party would have to store the user's long-term access keys, which can be leaked or compromised. Creating an appropriate role with temporary credentials is always the better solution than creating a user.
References:
Check out this AWS IAM Cheat Sheet:
(Explain Lambda Concurrency with magic pizza analogy)