Design Solutions for Organizational Complexity (26%)
Question 7: Incorrect
A multinational consumer goods corporation structured its AWS accounts to use AWS Organizations, which consolidates billing for the multiple AWS accounts of its various Business Units (BUs), namely the Beauty products, Baby products, Health products, and Home Care products units. A Solutions Architect for the Baby products business unit has purchased 10 Reserved Instances for their new Supply Chain application, which will go live 3 months from now. However, they do not want their Reserved Instance (RI) discounts to be shared with the other business units.
Which of the following options is the most suitable solution for this scenario?
Set the Reserved Instance (RI) sharing to private on the AWS account of the Baby products business unit.
Remove the AWS account of the Baby products business unit out of the AWS Organization.
Turn off the Reserved Instance (RI) sharing on the master account for all of the member accounts in the Baby products business unit.
(Correct)
Since the Baby products business unit is part of an AWS Organization, the Reserved Instances will always be shared across other member accounts. There is no way to disable this setting.
(Incorrect)
Explanation
For billing purposes, the consolidated billing feature of AWS Organizations treats all the accounts in the organization as one account. This means that all accounts in the organization can receive the hourly cost-benefit of Reserved Instances that are purchased by any other account. In the payer account, you can turn off Reserved Instance discount sharing on the Preferences page on the Billing and Cost Management console.
The master account of an organization can turn off Reserved Instance (RI) sharing for member accounts in that organization. This means that Reserved Instances are not shared between that member account and other member accounts. You can change this preference multiple times. Each estimated bill is computed using the last set of preferences. However, take note that turning off Reserved Instance sharing can result in a higher monthly bill.
Hence, the correct answer is: Turn off the Reserved Instance (RI) sharing on the master account for all of the member accounts in the Baby products business unit.
The option that says: Set the Reserved Instance (RI) sharing to private on the AWS account of the Baby products business unit is incorrect because there is no "private" option in the RI and Savings Plan discount sharing settings in the Billing Management Console. By default, a member account doesn't have the capability to turn off RI sharing on its own account.
The option that says: Remove the AWS account of the Baby products business unit out of the AWS Organization is incorrect because removing the Baby products business unit account from the AWS Organization is not the optimal solution to prevent the other account from sharing its RI discounts. You can simply turn off the Reserved Instance discount sharing in the payer account.
The option that says: Since the Baby product business unit is part of an AWS Organization, the Reserved Instances will always be shared across other member accounts. There is no way to disable this setting is incorrect because this statement is false. There is certainly a way to disable the current setting by simply turning off RI sharing.
Question 15: Incorrect
An IT consulting company has multiple AWS accounts for its teams and departments that have been grouped into several organizational units (OUs) using AWS Organizations. The lead solutions architect received a report from the security team that there was a suspected breach in one of the environments wherein a third-party AWS account was suddenly added to the AWS Organization without any prior approval. The external account has high-level access privileges to the accounts that the company owns. Fortunately, no detrimental action was performed yet.
Which of the following actions should the solutions architect take to properly set up a monitoring system that notifies for any changes to the company AWS accounts? (Select TWO.)
Create a trail in Amazon CloudTrail to capture all API calls to your AWS Organizations, including calls from the AWS Organizations console and from code calls to the AWS Organizations APIs. Use Amazon EventBridge and SNS to raise events when administrator-specified actions occur in an organization and send a notification to you.
(Correct)
Set up a CloudWatch Dashboard to monitor any changes to your organizations and create an SNS topic that would send you a notification.
Configure AWS Control Tower to manage and monitor all child accounts under the organization. Use Amazon Inspector to analyze any possible breach and notify the administrators using AWS SNS.
(Incorrect)
Monitor all changes to your organization using Systems Manager and use Amazon EventBridge to notify you of any new activity to your account.
Use AWS Config to monitor the compliance of your AWS Organizations. Set up an SNS Topic or Amazon EventBridge that will send alerts to you for any changes.
(Correct)
Explanation
AWS Organizations can work with Amazon EventBridge (formerly CloudWatch Events) to raise events when administrator-specified actions occur in an organization. For example, because of the sensitivity of such actions, most administrators would want to be warned every time someone creates a new account in the organization or when an administrator of a member account attempts to leave the organization. You can configure EventBridge rules that look for these actions and then send the generated events to administrator-defined targets. Targets can be an Amazon SNS topic that emails or text messages its subscribers. Combining this with Amazon CloudTrail, you can set an event to trigger whenever a matching API call is received.
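For illustration only, a rule along these lines could match sensitive Organizations API calls recorded by CloudTrail and publish them to an SNS topic. The rule name, event names, and topic ARN are placeholders; note that AWS Organizations is a global service whose CloudTrail events are delivered in the US East (N. Virginia) Region.

```
# Match Organizations API calls (recorded via CloudTrail) that add accounts to the organization
aws events put-rule \
  --name org-membership-changes \
  --event-pattern '{
    "source": ["aws.organizations"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventName": ["CreateAccount", "InviteAccountToOrganization", "AcceptHandshake"]}
  }'

# Send matched events to an SNS topic that the administrators subscribe to
aws events put-targets \
  --rule org-membership-changes \
  --targets "Id"="notify-admins","Arn"="arn:aws:sns:us-east-1:111122223333:org-change-alerts"
```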
Multi-account, multi-region data aggregation in AWS Config enables you to aggregate AWS Config data from multiple accounts and regions into a single account. Multi-account, multi-region data aggregation is useful for central IT administrators to monitor compliance for multiple AWS accounts in the enterprise. An aggregator is a new resource type in AWS Config that collects AWS Config data from multiple source accounts and regions. Create an aggregator in the Region where you want to see the aggregated AWS Config data. While creating an aggregator, you can choose to add either individual account IDs or your organization.
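As a minimal sketch, an organization-wide aggregator could be created from the management account roughly as follows. The aggregator name and role ARN are placeholders; the role must be one that AWS Config can assume to read the organization's accounts.

```
# Aggregate AWS Config data from every account and Region in the organization
aws configservice put-configuration-aggregator \
  --configuration-aggregator-name org-wide-aggregator \
  --organization-aggregation-source "RoleArn=arn:aws:iam::111122223333:role/aws-config-org-role,AllAwsRegions=true"
```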
Therefore, the following options are the correct answers:
- Create a trail in Amazon CloudTrail to capture all API calls to your AWS Organizations, including calls from the AWS Organizations console and from code calls to the AWS Organizations APIs. Use Amazon EventBridge and SNS to raise events when administrator-specified actions occur in an organization and send a notification to you.
- Use AWS Config to monitor the compliance of your AWS Organizations. Set up an SNS Topic or Amazon EventBridge that will send alerts to you for any changes.
The option that says: Monitor all changes to your organization using Systems Manager and use Amazon EventBridge to notify you of any new activity to your account is incorrect. AWS Systems Manager is a collection of capabilities for configuring and managing your Amazon EC2 instances, on-premises servers and virtual machines, and other AWS resources at scale. This can't be used to monitor the changes to the setup of AWS Organizations.
The option that says: Setting up a CloudWatch Dashboard to monitor any changes to your organizations and creating an SNS topic that would send you a notification is incorrect because a CloudWatch Dashboard is primarily used to monitor your AWS resources and not the configuration of your AWS Organizations. Although you can enable sharing of all CloudWatch Events across all accounts in your organization, this can't be used to monitor if there is a new AWS account added to your AWS Organizations. Amazon EventBridge, on its own, is primarily used to monitor your AWS resources and the applications you run on AWS in real time.
The option that says: Configure AWS Control Tower to manage and monitor all child accounts under the organization. Use Amazon Inspector to analyze any possible breach and notify the administrators using AWS SNS is incorrect. AWS Control Tower can send logs to CloudWatch and CloudTrail for monitoring AWS accounts in the organization. However, Amazon Inspector is used as an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities, not for monitoring events on individual AWS accounts.
Question 16: Incorrect
A media company hosts its entire infrastructure on the AWS cloud. There is a requirement to copy information to or from the shared resources from another AWS account. The solutions architect has to provide the other account access to several AWS resources such as Amazon S3, AWS KMS, and Amazon ES in the form of a list of AWS account ID numbers. In addition, the user in the other account should still work in the trusted account and there is no need to give up his or her user permissions in place of the role permissions. The solutions architect must also set up a solution that continuously assesses, audits, and monitors the policy configurations.
Which of the following is the MOST suitable type of policy that you should use in this scenario?
Set up a service-linked role with a service control policy. Use AWS Systems Manager rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration.
Set up cross-account access with a user-based policy configuration. Use AWS Config rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration.
(Incorrect)
Set up a service-linked role with an identity-based policy. Use AWS Systems Manager rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration.
Set up cross-account access with a resource-based Policy. Use AWS Config rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration.
(Correct)
Explanation
For some AWS services, you can grant cross-account access to your resources. To do this, you attach a policy directly to the resource that you want to share, instead of using a role as a proxy. The resource that you want to share must support resource-based policies. Unlike a user-based policy, a resource-based policy specifies who (in the form of a list of AWS account ID numbers) can access that resource.
Cross-account access with a resource-based policy has some advantages over a role. With a resource that is accessed through a resource-based policy, the user still works in the trusted account and does not have to give up his or her user permissions in place of the role permissions. In other words, the user continues to have access to resources in the trusted account at the same time as he or she has access to the resource in the trusting account. This is useful for tasks such as copying information to or from the shared resource in the other account.
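As a minimal sketch of a resource-based policy, the S3 bucket policy below grants a specific AWS account ID read and write access to objects in a shared bucket. The bucket name and account ID are made up for illustration.

```
aws s3api put-bucket-policy \
  --bucket shared-reports-bucket \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "AllowCrossAccountObjectAccess",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::222233334444:root"},
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::shared-reports-bucket/*"
    }]
  }'
```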
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
Hence, the option that says: Set up cross-account access with a resource-based Policy. Use AWS Config rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration is correct.
The option that says: Set up cross-account access with a user-based policy configuration. Use AWS Config rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration is incorrect because a user-based policy maps the access to a certain IAM user and not to a certain AWS resource.
The option that says: Set up a service-linked role with an identity-based policy. Use AWS Systems Manager rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration is incorrect because a service-linked role is just a unique type of IAM role that is linked directly to an AWS service. In addition, it is the AWS Config service, and not the AWS Systems Manager, that enables you to assess, audit, and evaluate the configurations of your AWS resources.
The option that says: Set up a service-linked role with a service control policy. Use AWS Systems Manager rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration is incorrect because a service control policy is primarily used in AWS Organizations and not for cross-account access. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. This is not suitable for providing access to your resources to other AWS accounts, unlike cross-account access. You should also use AWS Config, and not AWS Systems Manager, to periodically audit changes to the IAM policy.
Question 35: Correct
A company has created multiple accounts in AWS to support the rapid growth of its cloud services. The multiple accounts are used to separate its various departments such as finance, human resources, engineering, and many others. Each account is managed by a Systems Administrator who has root access for that specific account only. There is a requirement to centrally manage policies across multiple AWS accounts by allowing or denying particular AWS services for individual accounts, or for groups of accounts.
Which is the most suitable solution that you should implement with the LEAST amount of complexity?
Use AWS Organizations and Service Control Policies to control the list of AWS services that can be used by each member account.
(Correct)
Provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from each department whose identity is federated from your organization or a third-party identity provider.
Connect all departments by setting up cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access.
Set up AWS Organizations and Organizational Units (OU) to connect all AWS accounts of each department. Create a custom IAM Policy to allow or deny the use of certain AWS services for each account.
Explanation
AWS Organizations offers policy-based management for multiple AWS accounts. With Organizations, you can create groups of accounts, automate account creation, and apply and manage policies for those groups. Organizations enables you to centrally manage policies across multiple accounts, without requiring custom scripts and manual processes. It allows you to create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts.
Remember that AWS Organizations does not replace associating IAM policies with users, groups, and roles within an AWS account. Hence, you still need to set up appropriate IAM policies for your root and member accounts.
IAM policies let you allow or deny access to AWS services (such as Amazon S3), individual AWS resources (such as a specific S3 bucket), or individual API actions (such as s3:CreateBucket). An IAM policy can be applied only to IAM users, groups, or roles, and it can never restrict the root identity of the AWS account.
By contrast, AWS Organizations lets you use service control policies (SCPs) to allow or deny access to particular AWS services for individual AWS accounts, or for groups of accounts within an organizational unit (OU). The specified actions from an attached SCP affect all IAM users, groups, and roles for an account, including the root account identity.
When you apply an SCP to an OU or an individual AWS account, you choose to either enable (whitelist) or disable (blacklist) the specified AWS service. Access to any service that isn’t explicitly allowed by the SCPs associated with an account, its parent OUs, or the master account is denied to the AWS accounts or OUs associated with the SCP. When an SCP is applied to an OU, it is inherited by all of the AWS accounts in that OU.
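To illustrate, an SCP that blocks a particular service could be created and attached to an OU roughly as follows. The policy name, the blocked service, and the policy and OU IDs are placeholders.

```
# Create an SCP that denies all Amazon SQS actions
aws organizations create-policy \
  --name DenySQS \
  --type SERVICE_CONTROL_POLICY \
  --description "Block Amazon SQS for this group of accounts" \
  --content '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"sqs:*","Resource":"*"}]}'

# Attach the resulting policy to an organizational unit
aws organizations attach-policy \
  --policy-id p-examplepolicyid \
  --target-id ou-exampleouid
```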
Therefore, the correct answer is: Use AWS Organizations and Service Control Policies to control the list of AWS services that can be used by each member account.
The option that says: Setting up AWS Organizations and Organizational Units (OU) to connect all AWS accounts of each department and creating a custom IAM Policy to allow or deny the use of certain AWS services for each account is incorrect. Although it is correct to use AWS Organizations, this option is incorrect about IAM Policy. It is the Service Control Policy (SCP) which enables you to allow or deny the use of certain AWS services for each account, and not the IAM Policy.
The option that says: Connecting all departments by setting up cross-account access to each of the AWS accounts of the company, then creating and attaching IAM policies to your resources based on their respective departments to control access is incorrect. Although you can set up cross-account access to each department, this entails a lot of configuration compared with using AWS Organizations and Service Control Policies (SCPs). Cross-account access would be a more suitable choice if you only have two accounts to manage, but not for multiple accounts.
The option that says: Providing access to externally authenticated users via Identity Federation and setting up an IAM role to specify permissions for users from each department whose identity is federated from your organization or a third-party identity provider is incorrect. This option is focused on the Identity Federation authentication set up for your AWS accounts but not the IAM policy management for multiple AWS accounts. A combination of AWS Organizations and Service Control Policies (SCPs) is a better choice compared to this option.
Question 43: Correct
A multinational financial company has a suite of web applications hosted in multiple VPCs in various AWS regions. As part of their security compliance, the company’s Solutions Architect has been tasked to set up a logging solution to track all of the changes made to their AWS resources in all regions, which host their enterprise accounting systems. The company is using different AWS services such as Amazon EC2 instances, Amazon S3 buckets, CloudFront web distributions, and AWS IAM. The logging solution must ensure the security, integrity, and durability of your log data in order to pass the compliance requirements. In addition, it should provide an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and API calls.
In this scenario, which of the following options is the best solution to use?
Create a new Amazon CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
Create a new AWS CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
(Correct)
Create a new AWS CloudTrail trail in a new S3 bucket using the AWS CLI and also pass the --no-include-global-service-events and --is-multi-region-trail parameter then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
Create a new Amazon CloudWatch trail in a new S3 bucket using the AWS CLI and also pass the --include-global-service-events parameter then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
Explanation
The company requires a secure and durable logging solution that will track all of the activities of all AWS resources (such as EC2, S3, CloudFront, and IAM) in all regions. CloudTrail can be used for this case with the multi-region trail enabled. However, the trail will only cover the activities of the regional services (EC2, S3, RDS, etc.) and not those of global services such as IAM, CloudFront, AWS WAF, and Route 53 unless global service events are also included.
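A minimal sketch of such a trail using the AWS CLI is shown below. The trail name, bucket name, and KMS key alias are placeholders, and the bucket policy, MFA Delete, and access controls still have to be configured separately.

```
# Multi-region trail that also records global service events, with KMS-encrypted log files
aws cloudtrail create-trail \
  --name org-audit-trail \
  --s3-bucket-name audit-logs-bucket \
  --is-multi-region-trail \
  --include-global-service-events \
  --kms-key-id alias/cloudtrail-logs

# Trails do not record events until logging is started
aws cloudtrail start-logging --name org-audit-trail
```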
The option that says: Create a new AWS CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is correct because it provides security, integrity, and durability to your log data. In addition, it has the --include-global-service-events parameter enabled which will also include activity from global services such as IAM, Route 53, AWS WAF, and CloudFront.
The option that says: Create a new Amazon CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch.
The option that says: Create a new Amazon CloudWatch trail in a new S3 bucket using the AWS CLI and also pass the --include-global-service-events parameter then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch. In addition, the --is-multi-region-trail parameter is also missing in this setup.
The option that says: Create a new AWS CloudTrail trail in a new S3 bucket using the AWS CLI and also pass the --no-include-global-service-events and --is-multi-region-trail parameter then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect. The --is-multi-region-trail parameter alone is not enough as you also need to add the --include-global-service-events parameter to track the global service events. The --no-include-global-service-events parameter actually prevents CloudTrail from publishing events from global services such as IAM to the log files.
Question 49: Correct
A company runs a sports web portal that covers the latest cricket news in Australia. The solutions architect manages the main AWS account which has resources in multiple AWS regions. The web portal is hosted on a fleet of on-demand EC2 instances and an RDS database which are also deployed to other AWS regions. The IT Security Compliance Officer has given the solutions architect the task of developing a reliable and durable logging solution to track changes made to all of your EC2, IAM, and RDS resources in all of the AWS regions. The solution must ensure the integrity and confidentiality of the log data.
Which of the following solutions would be the best option to choose?
Create a new trail in CloudTrail and assign it a new S3 bucket to store the logs. Configure AWS SNS to send delivery notifications to your management system. Secure the S3 bucket that stores your logs using IAM roles and S3 bucket policies.
Create three new CloudTrail trails, each with its own S3 bucket to store the logs: one for the AWS Management console, one for AWS SDKs, and one for command line tools. Then create IAM roles and S3 bucket policies for the S3 buckets storing your logs.
Create a new trail in AWS CloudTrail with the global services option selected, and create one new Amazon S3 bucket to store the logs. Create IAM roles, S3 bucket policies, and enable Multi Factor Authentication (MFA) Delete on the S3 bucket storing your logs.
(Correct)
Create a new trail in AWS CloudTrail with the global services option selected, and assign it an existing S3 bucket to store the logs. Create S3 ACLs and enable Multi Factor Authentication (MFA) delete on the S3 bucket storing your logs.
Explanation
For most services, events are recorded in the region where the action occurred to its respective AWS CloudTrail. For global services such as AWS Identity and Access Management (IAM), AWS STS, Amazon CloudFront, and Route 53, events are delivered to any trail that includes global services (IncludeGlobalServiceEvents flag). AWS CloudTrail service should be your top choice for the scenarios where the application is tracking the changes made by any AWS service, resource, or API.
Therefore, the correct answer is: Create a new trail in AWS CloudTrail with the global services option selected, and create one new Amazon S3 bucket to store the logs. Create IAM roles, S3 bucket policies, and enable Multi Factor Authentication (MFA) Delete on the S3 bucket storing your logs. This solution uses AWS CloudTrail with the global services option (the IncludeGlobalServiceEvents flag) enabled, a single new S3 bucket secured with IAM roles and bucket policies to preserve confidentiality, and MFA Delete on the S3 bucket to maintain data integrity.
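MFA Delete can only be enabled on a versioned bucket by the bucket owner's root credentials, for example along these lines; the bucket name and the MFA device serial and code are placeholders.

```
aws s3api put-bucket-versioning \
  --bucket cloudtrail-logs-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456"
```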
The option that says: Create a new trail in AWS CloudTrail with the global services option selected, and assign it an existing S3 bucket to store the logs. Create S3 ACLs and enable Multi Factor Authentication (MFA) delete on the S3 bucket storing your logs is incorrect. Because an existing S3 bucket is reused, other users may already have access to it, so confidentiality is not maintained, and the solution does not use IAM roles.
The option that says: Create three new CloudTrail trails, each with its own S3 bucket to store the logs: one for the AWS Management console, one for AWS SDKs, and one for command line tools. Then create IAM roles and S3 bucket policies for the S3 buckets storing your logs is incorrect. Although it uses AWS CloudTrail, the Global Option is not enabled, and three S3 buckets are not needed.
The option that says: Create a new trail in CloudTrail and assign it a new S3 bucket to store the logs. Configure AWS SNS to send delivery notifications to your management system. Secure the S3 bucket that stores your logs using IAM roles and S3 bucket policies is incorrect. Although it uses AWS CloudTrail, the Global Option is not enabled.
Question 54: Incorrect
A company has a fitness tracking app that accompanies its smartwatch. The primary customers are North American and Asian users. The application is read-heavy as it pings the servers at regular intervals for user authorization. The company wants the infrastructure to have the following capabilities:
- The application must be fault-tolerant to problems in any Region.
- The database writes must be highly available in a single Region.
- The application tier must be able to read the database on multiple Regions.
- The application tier must be resilient in each Region.
- Relational database semantics must be reflected in the application.
Which of the following options must the Solutions Architect implement to meet the company requirements? (Select TWO.)
Create a geoproximity routing policy on Amazon Route 53 to control traffic and direct users to their closest regional endpoint. Combine this with a multivalue answer routing policy with health checks to direct users to a healthy region at any given time.
(Incorrect)
Create a geolocation routing policy on Amazon Route 53 to point the global users to their designated regions. Combine this with a failover routing policy with health checks to direct users to a healthy region at any given time.
(Correct)
Deploy the application tier on an Auto Scaling group of EC2 instances for each Region in an active-active configuration. Create a cluster of Amazon Aurora global database in both Regions. Configure the application to use the in-Region Aurora database endpoint for the read/write operations. Create snapshots of the application servers regularly. Store the snapshots in Amazon S3 buckets in both regions.
(Correct)
Deploy the application tier on an Auto Scaling group of EC2 instances for each Region. Create an RDS for MySQL database on each region. Configure the application to perform read/write operations on the local RDS. Enable cross-Region replication for the database servers. Create snapshots of the application and database servers regularly. Store the snapshots in Amazon S3 buckets in both regions.
Deploy the application tier on an Auto Scaling group of EC2 instances for each Region. Create an RDS for MySQL database on each region. Configure the application to perform read/write operations on the local RDS. Enable Multi-AZ failover support for the EC2 Auto Scaling group and the RDS database. Enable cross-Region replication for the database servers.
Explanation
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.
By using an Amazon Aurora global database, you can have a single Aurora database that spans multiple AWS Regions to support your globally distributed applications.
An Aurora global database consists of one primary AWS Region where your data is mastered, and up to five read-only secondary AWS Regions. You issue write operations directly to the primary DB cluster in the primary AWS Region. Aurora replicates data to the secondary AWS Regions using dedicated infrastructure, with latency typically under a second.
You can also change the configuration of your Aurora global database while it's running to support various use cases. For example, you might want the read/write capabilities to move from one Region to another, say, in different time zones, to 'follow the sun.' Or, you might need to respond to an outage in one Region. With Aurora global database, you can promote one of the secondary Regions to the primary role to take full read/write workloads in under a minute.
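A rough sketch of setting this up with the AWS CLI is shown below, assuming an existing Aurora MySQL cluster in the primary Region; the identifiers, Regions, and source cluster ARN are placeholders, and read-replica DB instances would still need to be added to the secondary cluster before it can serve reads.

```
# Wrap the existing primary cluster in a global database
aws rds create-global-cluster \
  --global-cluster-identifier fitness-global-db \
  --source-db-cluster-identifier arn:aws:rds:us-west-2:111122223333:cluster:fitness-primary

# Add a read-only secondary cluster in another Region
aws rds create-db-cluster \
  --region ap-northeast-1 \
  --db-cluster-identifier fitness-secondary \
  --engine aurora-mysql \
  --global-cluster-identifier fitness-global-db
```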
On Amazon Route 53, after you create a hosted zone for your domain, such as tutorialsdojo.com, you can create records to tell the Domain Name System (DNS) how you want traffic to be routed for that domain. You can create a record that points to the DNS name of your Application Load Balancer on AWS.
When you create a record, you choose a routing policy, which determines how Amazon Route 53 responds to queries:
Simple routing policy – Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website.
Failover routing policy – Use when you want to configure active-passive failover.
Geolocation routing policy – Use when you want to route traffic based on the location of your users.
Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
Latency routing policy – Use when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the best latency.
Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify.
You can use Route 53 health checks to configure active-active and active-passive failover configurations. You configure active-active failover using any routing policy (or combination of routing policies) other than failover, and you configure active-passive failover using the failover routing policy.
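As a simplified illustration, one geolocation record for Asian users with a health check attached might look like the following; the hosted zone ID, health check ID, and endpoint names are placeholders, and a full active-passive design would layer failover records beneath the geolocation records.

```
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0EXAMPLE12345 \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "app.tutorialsdojo.com",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": "asia-users",
        "GeoLocation": {"ContinentCode": "AS"},
        "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        "ResourceRecords": [{"Value": "ap-endpoint.tutorialsdojo.com"}]
      }
    }]
  }'
```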
The option that says: Create a geolocation routing policy on Amazon Route 53 to point the global users to their designated regions. Combine this with a failover routing policy with health checks to direct users to a healthy region at any given time is correct. You can use a geolocation routing policy to direct the North American users to your servers in the North America region and configure failover routing to the Asia region in case the North America region fails. You can configure the same for the Asian users pointed to the Asia region servers and have the North America region as its backup.
The option that says: Deploy the application tier on an Auto Scaling group of EC2 instances for each Region in an active-active configuration. Create a cluster of Amazon Aurora global database in both Regions. Configure the application to use the in-Region Aurora database endpoint for the read/write operations. Create snapshots of the application servers regularly. Store the snapshots in Amazon S3 buckets in both regions is correct. The Amazon Aurora global database solves the problem on read/write as well as syncing the data across all the regions. With both regions in an active-active configuration, each region can accept the traffic from users around the world.
The option that says: Create a geoproximity routing policy on Amazon Route 53 to control traffic and direct users to their closest regional endpoint. Combine this with a multivalue answer routing policy with health checks to direct users to a healthy region at any given time is incorrect. Geoproximity routing policy is good to control the user traffic to specific regions. However, a multivalue answer routing policy may cause the users to be randomly sent to other healthy regions that may be far away from the user’s location.
The option that says: Deploy the application tier on an Auto Scaling group of EC2 instances for each Region. Create an RDS for MySQL database on each region. Configure the application to perform read/write operations on the local RDS. Enable cross-Region replication for the database servers. Create snapshots of the application and database servers regularly. Store the snapshots in Amazon S3 buckets in both regions is incorrect. You can’t sync two MySQL master databases that both accept writes on their respective regions.
The option that says: Deploy the application tier on an Auto Scaling group of EC2 instances for each Region. Create an RDS for MySQL database on each region. Configure the application to perform read/write operations on the local RDS. Enable Multi-AZ failover support for the EC2 Auto Scaling group and the RDS database. Enable cross-Region replication for the database servers is incorrect. You can’t sync two MySQL master databases that both accept writes on their respective regions. Enabling Multi-AZ on the RDS MySQL server does not protect you from AWS Regional failures.
Question 60: Incorrect
A government agency has multiple VPCs in various AWS regions across the United States that need to be linked up to an on-premises central office network in Washington, D.C. The central office requires inter-region VPC access over a private network that is dedicated to each region for enhanced security and more predictable data transfer performance. Your team is tasked to quickly build this network mesh and to minimize the management overhead to maintain these connections.
Which of the following options is the most secure, highly available, and durable solution that you should use to set up this kind of interconnectivity?
Implement a hub-and-spoke network topology in each region that routes all traffic through a network transit center using AWS Transit Gateway. Route traffic between VPCs and the on-premises network over AWS Site-to-Site VPN.
(Incorrect)
Utilize AWS Direct Connect Gateway for inter-region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.
(Correct)
Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public Internet.
Create a link aggregation group (LAG) in the central office network to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Use AWS Direct Connect Gateway to achieve inter-region VPC access to all of your AWS resources. Create a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway.
Explanation
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS to achieve higher privacy benefits, additional data transfer bandwidth, and more predictable data transfer performance. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. Virtual interfaces can be reconfigured at any time to meet your changing needs. You can use an AWS Direct Connect gateway to connect your AWS Direct Connect connection over a private virtual interface to one or more VPCs in your account that are located in the same or different Regions. You associate a Direct Connect gateway with the virtual private gateway for the VPC. Then, create a private virtual interface for your AWS Direct Connect connection to the Direct Connect gateway. You can attach multiple private virtual interfaces to your Direct Connect gateway.
With Direct Connect Gateway, you no longer need to establish multiple BGP sessions for each VPC; this reduces your administrative workload as well as the load on your network devices.
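A rough outline with the AWS CLI could look like the following; the gateway ID, virtual private gateway ID, connection ID, VLAN, and ASN are placeholders.

```
# Create the Direct Connect gateway
aws directconnect create-direct-connect-gateway \
  --direct-connect-gateway-name central-office-dxgw

# Associate it with a VPC's virtual private gateway (repeat per VPC/Region)
aws directconnect create-direct-connect-gateway-association \
  --direct-connect-gateway-id 11112222-3333-4444-5555-666677778888 \
  --virtual-gateway-id vgw-0123456789abcdef0

# Create a private virtual interface on the Direct Connect connection that points to the gateway
aws directconnect create-private-virtual-interface \
  --connection-id dxcon-example \
  --new-private-virtual-interface "virtualInterfaceName=central-office-vif,vlan=101,asn=65000,directConnectGatewayId=11112222-3333-4444-5555-666677778888"
```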
Therefore, the correct answer is: Utilize AWS Direct Connect Gateway for inter-region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.
The option that says: Create a link aggregation group (LAG) in the central office network to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Use AWS Direct Connect Gateway to achieve inter-region VPC access to all of your AWS resources. Create a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway is incorrect. You only need to create private virtual interfaces to the Direct Connect gateway since you are only connecting to resources inside a VPC. Using a link aggregation group (LAG) is also irrelevant in this scenario because it is just a logical interface that uses the Link Aggregation Control Protocol (LACP) to aggregate multiple connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection.
The option that says: Implement a hub-and-spoke network topology in each region that routes all traffic through a network transit center using AWS Transit Gateway. Route traffic between VPCs and the on-premises network over AWS Site-to-Site VPN is incorrect since the scenario requires a service that can provide a dedicated network between the VPCs and the on-premises network, as well as enhanced privacy and predictable data transfer performance. Simply using AWS Transit Gateway will not fulfill the conditions above. This option is best suited for customers who want to leverage AWS-provided, automated high availability network connectivity features and also optimize their investments in third-party product licensing such as VPN software.
The option that says: Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public internet is incorrect. This solution would require a lot of manual setup and management overhead to successfully build a functional, error-free inter-region VPC network compared with just using a Direct Connect Gateway. Although inter-Region VPC peering provides a cost-effective way to share resources between regions or replicate data for geographic redundancy, its connections are not dedicated and highly available.
Question 61: Correct
A company is using AWS Organizations to manage their multi-account and multi-region AWS infrastructure. They are currently doing large-scale automation for their key daily processes to save costs. One of these key processes is sharing specified AWS resources, which an organizational account owns, with other AWS accounts of the company using AWS RAM. There is already an existing service which was previously managed by a separate organization account moderator, who also maintained the specific configuration details.
In this scenario, what could be a simple and effective solution that would allow the service to perform its tasks on the organization accounts on the moderator's behalf?
Configure a service-linked role for AWS RAM and modify the permissions policy to specify what the role can and cannot do. Lastly, modify the trust policy of the role so that other processes can utilize AWS RAM.
Enable cross-account access with AWS Organizations in the Resource Access Manager Console. Mirror the configuration changes that were performed by the account that previously managed this service.
Use trusted access by running the enable-sharing-with-aws-organizations command in the AWS RAM CLI. Mirror the configuration changes that were performed by the account that previously managed this service.
(Correct)
Attach an IAM role on the service detailing all the allowed actions that it will be able to perform. Install an SSM agent in each of the worker VMs. Use AWS Systems Manager to build automation workflows that involve the daily key processes.
Explanation
AWS Resource Access Manager (AWS RAM) enables you to share specified AWS resources that you own with other AWS accounts. To enable trusted access with AWS Organizations:
From the AWS RAM CLI, use the enable-sharing-with-aws-organizations command.
When trusted access is enabled, the IAM service-linked role AWSServiceRoleForResourceAccessManager can be created in the organization's accounts; it uses the AWSResourceAccessManagerServiceRolePolicy managed policy.
You can use trusted access to enable an AWS service that you specify, called the trusted service, to perform tasks in your organization and its accounts on your behalf. This involves granting permissions to the trusted service but does not otherwise affect the permissions for IAM users or roles. When you enable access, the trusted service can create an IAM role called a service-linked role in every account in your organization. That role has a permissions policy that allows the trusted service to do the tasks that are described in that service's documentation. This enables you to specify settings and configuration details that you would like the trusted service to maintain in your organization's accounts on your behalf.
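A minimal sketch of the commands involved is shown below; the share name, resource ARN, and organization/OU identifiers are placeholders.

```
# Run once from the organization's management account to enable trusted access
aws ram enable-sharing-with-aws-organizations

# Shares can then use the organization or an OU as a principal
aws ram create-resource-share \
  --name shared-subnets \
  --resource-arns arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0 \
  --principals arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-exampleouid
```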
Therefore, the correct answer is: Use trusted access by running the enable-sharing-with-aws-organizations command in the AWS RAM CLI. Mirror the configuration changes that were performed by the account that previously managed this service.
The option that says: Attach an IAM role on the service detailing all the allowed actions that it will be able to perform. Install an SSM agent in each of the worker VMs. Use AWS Systems Manager to build automation workflows that involve the daily key processes is incorrect because this is not the simplest way to automate the interaction of AWS RAM with AWS Organizations. AWS Systems Manager is a tool that helps with the automation of EC2 instances, on-premises servers, and other virtual machines. It might not support all the services being used by the key processes.
The option that says: Configure a service-linked role for AWS RAM and modify the permissions policy to specify what the role can and cannot do. Lastly, modify the trust policy of the role so that other processes can utilize AWS RAM is incorrect. This is not the simplest solution for integrating AWS RAM and AWS Organizations since using AWS Organization's trusted access will create the service-linked role for you. Also, the trust policy of a service-linked role cannot be modified. Only the linked AWS service can assume a service-linked role, which is why you cannot modify the trust policy of a service-linked role.
The option that says: Enable cross-account access with AWS Organizations in the Resource Access Manager Console. Mirror the configuration changes that were performed by the account that previously managed this service is incorrect because you should enable trusted access to AWS RAM, not cross-account access.
Question 62: Correct
A company has just launched a new central employee registry application that contains all of the public employee registration information of each staff member of the company. The application has a microservices architecture running in Docker containers in a single AWS Region. The management teams from other departments who have their servers located in different VPCs need to connect to the central registry application to continue their work. The Solutions Architect must ensure that the traffic to the application does not traverse the public Internet. The IT Security team must also be notified of any denied requests and be able to view the corresponding source IP.
How will the Architect implement the architecture of the new application given these circumstances?
Use AWS Direct Connect to create a dedicated connection between the central VPC and each of the teams' VPCs. Enable the VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to a CloudWatch Logs group. Set up an Amazon CloudWatch Logs subscription that streams the log data to the IT Security account.
Set up an IPSec Tunnel between the central VPC and each of the teams' VPCs. Create VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Create a CloudWatch Logs subscription that streams the log data to the IT Security account.
Link each of the teams' VPCs to the central VPC using VPC Peering. Create VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Set up a CloudWatch Logs subscription that streams the log data to the IT Security account.
(Correct)
Set up a Transit VPC by using third-party marketplace VPN appliances running on an On-Demand Amazon EC2 instance that dynamically routes the VPN connections to the virtual private gateways (VGWs) attached to each VPC. Set up an AWS Config rule on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Set up a CloudWatch Logs subscription that streams the log data to the IT Security account.
Explanation
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination.
Flow logs can help you with a number of tasks, such as:
- Diagnosing overly restrictive security group rules
- Monitoring the traffic that is reaching your instance
- Determining the direction of the traffic to and from the network interfaces
Flow log data is collected outside of the path of your network traffic, and therefore does not affect network throughput or latency. You can create or delete flow logs without any risk of impact to network performance.
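A rough sketch of the peering and flow log commands is shown below; the VPC IDs, account ID, peering connection ID, log group name, and IAM role are placeholders, and the route tables on both sides still need routes that point at the peering connection.

```
# Request a peering connection from the central VPC to a team VPC in another account
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0centralexample \
  --peer-vpc-id vpc-0teamexample \
  --peer-owner-id 222233334444

# Accept the request from the team account
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0exampleid

# Capture only rejected traffic from the central VPC into CloudWatch Logs
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0centralexample \
  --traffic-type REJECT \
  --log-group-name central-vpc-rejected-traffic \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/flow-logs-role
```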
Hence, the correct answer is: Link each of the teams' VPCs to the central VPC using VPC Peering. Create VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Set up a CloudWatch Logs subscription that streams the log data to the IT Security account.
The option that says: Set up an IPSec Tunnel between the central VPC and each of the teams' VPCs. Create VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Create a CloudWatch Logs subscription that streams the log data to the IT Security account is incorrect. It is mentioned in the scenario that the traffic to the application must not traverse the public Internet. Since an IPSec tunnel uses the Internet to transfer data from your VPC to a specified destination, this solution is definitely incorrect.
The option that says: Use AWS Direct Connect to create a dedicated connection between the central VPC and each of the teams' VPCs. Enable the VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to a CloudWatch Logs group. Set up an Amazon CloudWatch Logs subscription that streams the log data to the IT Security account is incorrect. You cannot set up Direct Connect between different VPCs. AWS Direct Connect is primarily used to set up a dedicated connection between your on-premises data center and your Amazon VPC.
The option that says: Set up a Transit VPC by using third-party marketplace VPN appliances running on an On-Demand Amazon EC2 instance that dynamically routes the VPN connections to the virtual private gateways (VGWs) attached to each VPC. Set up an AWS Config rule on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Set up a CloudWatch Logs subscription that streams the log data to the IT Security account is incorrect. An AWS Config rule is not capable of capturing the source IP of the incoming requests. A VPN appliance is using the public Internet to transfer data. Thus, it violates the requirement of ensuring that the data is securely within the AWS network.
Question 74: Correct
The department of education just recently decided to leverage the AWS cloud infrastructure to supplement its current on-premises network. They are building a new learning portal that teaches kids basic computer science concepts and provides innovative gamified courses for teenagers where they can gain higher rankings, power-ups and badges. A Solutions Architect is instructed to build a highly available cloud infrastructure in AWS with multiple Availability Zones. The department wants to increase the application’s reliability and gain actionable insights using application logs. A Solutions Architect needs to aggregate logs, automate log analysis for errors and immediately notify the IT Operations team when errors breached a certain threshold.
Which of the following is the MOST suitable solution that the Architect should implement?
Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon EventBridge. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon Athena to monitor the metric filter and immediately notify the IT Operations team for any issues.
Download and install the Amazon Managed Service for Prometheus in the on-premises servers and send the logs to AWS Lambda to turn log data into numerical metrics that identify and measure application errors. Write the processed metrics back to the time series database in Prometheus. Create a CloudWatch Alarm that monitors the metric and immediately notifies the IT Operations team for any issues.
Download and install the Amazon Kinesis agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon QuickSight to monitor the metric filter in CloudWatch and immediately notify the IT Operations team for any issues.
Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Create a CloudWatch Alarm that monitors the metric filter and immediately notify the IT Operations team for any issues.
(Correct)
Explanation
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. The CloudWatch home page automatically displays metrics about every AWS service you use. You can additionally create custom dashboards to display metrics about your custom applications and display custom collections of metrics that you choose.
After the CloudWatch Logs agent begins publishing log data to Amazon CloudWatch, you can begin searching and filtering the log data by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on. You can use any type of CloudWatch statistic, including percentile statistics when viewing these metrics or setting alarms.
You can create alarms that watch metrics and send notifications or automatically make changes to the resources you are monitoring when a threshold is breached. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances and then use this data to determine whether you should launch additional instances to handle the increased load. You can also use this data to stop under-used instances to save money. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
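Putting these pieces together, the metric filter and alarm might be created roughly as follows; the log group name, metric namespace, threshold, and SNS topic ARN are placeholders.

```
# Turn log lines containing "ERROR" into a custom metric
aws logs put-metric-filter \
  --log-group-name learning-portal-app-logs \
  --filter-name application-errors \
  --filter-pattern "ERROR" \
  --metric-transformations metricName=AppErrors,metricNamespace=LearningPortal,metricValue=1

# Alarm and notify the IT Operations SNS topic when errors breach the threshold
aws cloudwatch put-metric-alarm \
  --alarm-name learning-portal-error-breach \
  --namespace LearningPortal \
  --metric-name AppErrors \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 10 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:it-ops-alerts
```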
Hence, the correct answer is: Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Create a CloudWatch Alarm that monitors the metric filter and immediately notify the IT Operations team for any issues.
The option that says: Download and install the Amazon Kinesis agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon QuickSight to monitor the metric filter in CloudWatch and immediately notify the IT Operations team for any issues is incorrect. You have to use an Amazon CloudWatch agent instead of an Amazon Kinesis agent to send the logs to Amazon CloudWatch Logs. It is also better to use CloudWatch Alarms to monitor the metric filter than to use Amazon QuickSight.
The option that says: Download and install the Amazon Managed Service for Prometheus in the on-premises servers and send the logs to AWS Lambda to turn log data into numerical metrics that identify and measure application errors. Write the processed metrics back to the time series database in Prometheus. Create a CloudWatch Alarm that monitors the metric and immediately notifies the IT Operations team for any issues is incorrect. Amazon Managed Service for Prometheus is a serverless, Prometheus-compatible monitoring service designed for container metrics, not for ingesting application logs from on-premises servers. You also don't have to create a custom Lambda function to process the logs; a CloudWatch Logs metric filter can turn the log data into numerical metrics directly.
The option that says: Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon EventBridge. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon Athena to monitor the metric filter and immediately notify the IT Operations team for any issues is incorrect. You have to send the logs to Amazon CloudWatch Logs, not Amazon EventBridge (formerly CloudWatch Events). It is also better to use a CloudWatch Alarm to monitor the metric filter and immediately notify the IT Operations team for any issues, since Amazon Athena is a query service and cannot send notifications.
References:
Check out this Amazon CloudWatch Cheat Sheet:
Question 75: Incorrect
An IT consultancy company has multiple offices located in San Francisco, Frankfurt, Tokyo, and Manila. The company is using AWS Organizations to easily manage its several AWS accounts which are being used by its regional offices and subsidiaries. A new AWS account was recently added to a specific organizational unit (OU) which is responsible for the overall systems administration. The solutions architect noticed that the account is using a root-created Amazon ECS Cluster with an attached service-linked role. For regulatory purposes, the solutions architect created a custom SCP that would deny the new account from performing certain actions in relation to using ECS. However, after applying the policy, the new account could still perform the actions that it was supposed to be restricted from doing.
Which of the following is the most likely reason for this problem?
The default SCP grants all permissions attached to every root, OU, and account. To apply stricter permissions, this policy is required to be modified.
The ECS service is being run outside the jurisdiction of the organization. SCPs affect only the principals that are managed by accounts that are part of the organization.
There is an SCP attached to a higher-level OU that permits the actions of the service-linked role. This permission would therefore be inherited by the current OU, and override the SCP placed by the administrator.
(Incorrect)
SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs.
(Correct)
Explanation
Users and roles must still be granted permissions using IAM permission policies attached to them or to groups. The SCPs filter the permissions granted by such policies, and the user can't perform any actions that the applicable SCPs don't allow. Actions allowed by the SCPs can be used if they are granted to the user or role by one or more IAM permission policies.
When you attach SCPs to the root, OUs, or directly to accounts, all policies that affect a given account are evaluated together using the same rules that govern IAM permission policies:
- Any action that has an explicit Deny in an SCP can't be delegated to users or roles in the affected accounts. An explicit Deny statement overrides any Allow that other SCPs might grant.
- Any action that has an explicit Allow in an SCP (such as the default "*" SCP or any other SCP that calls out a specific service or action) can be delegated to users and roles in the affected accounts.
- Any action that isn't explicitly allowed by an SCP is implicitly denied and can't be delegated to users or roles in the affected accounts.
By default, an SCP named FullAWSAccess is attached to every root, OU, and account. This default SCP allows all actions and all services. So in a new organization, until you start creating or manipulating the SCPs, all of your existing IAM permissions continue to operate as they did. As soon as you apply a new or modified SCP to a root or OU that contains an account, the permissions that your users have in that account become filtered by the SCP. Permissions that used to work might now be denied if they're not allowed by the SCP at every level of the hierarchy down to the specified account.
As stated in the documentation of AWS Organizations, SCPs DO NOT affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs.
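To make the mechanics concrete, here is a minimal boto3 sketch of how such an SCP could be created and attached, assuming a hypothetical policy name, a placeholder member account ID, and an illustrative set of ECS actions. Note that even after attaching it, actions taken through a service-linked role would not be restricted.

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP that denies a few ECS actions; the names and the
# action list are illustrative only.
deny_ecs_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "ecs:DeleteCluster",
            "ecs:DeregisterContainerInstance",
        ],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="DenyEcsDestructiveActions",
    Description="Deny selected ECS actions for regulated accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_ecs_scp),
)

# Attach the SCP to the new member account (placeholder account ID).
# Even with this attached, actions performed through a service-linked
# role are NOT restricted by the SCP.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",
)
```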
The option that says: The default SCP grants all permissions attached to every root, OU, and account. To apply stricter permissions, this policy is required to be modified is incorrect. The scenario already implied that the administrator created a Deny policy. By default, an SCP named FullAWSAccess is attached to every root, OU, and account. This default SCP allows all actions and all services. However, you specify a Deny policy if you want to create a blacklist that blocks all access to the specified services and actions. The explicit Deny on specific actions in the blacklist policy overrides the Allow in any other policy, such as the one in the default SCP.
The option that says: There is an SCP attached to a higher-level OU that permits the actions of the service-linked role. This permission would therefore be inherited by the current OU, and override the SCP placed by the administrator is incorrect because even if a higher-level OU has an SCP attached with an Allow policy for the service, the current setup should still restrict access to the service. A new Deny SCP attached to the new account's OU still takes effect despite a pre-existing Allow policy in the same or a higher-level OU, because an explicit Deny overrides an Allow.
The option that says: The ECS service is being run outside the jurisdiction of the organization. SCPs affect only the principals that are managed by accounts that are part of the organization is incorrect because the service-linked role must have been created within the organization, most notably by the root account of the organization. It also does not make sense if we make the assumption that the service is indeed outside of the organization's jurisdiction because the Principal element of a policy specifies which entity will have limited permissions. But the scenario tells us that it should be the new account that is denied certain actions, not the service itself.
References:
Service Control Policies (SCP) vs IAM Policies:
Comparison of AWS Services Cheat Sheets:
Question 12: Skipped
A company is using Microsoft Active Directory to manage all employee accounts and devices. The IT department instructed the solutions architect to implement a single sign-on feature to allow the employees to use their existing Windows account password to connect and use the various AWS resources.
Which of the following options is the recommended way to extend the current Active Directory domain to AWS?
Use Amazon Cognito to authorize users to your applications using direct sign-in or through third-party apps, and access your apps' backend resources in AWS.
Create users and groups with AWS IAM Identity Center along with AWS Organizations to help you manage SSO access and user permissions across all the AWS accounts.
Use IAM Roles to set up cross-account access and delegate access to resources that are in your AWS account.
Use AWS Directory Service to integrate your AWS resources with the existing Active Directory using trust relationship. Enable single sign-on using Managed Microsoft AD.
(Correct)
Explanation
Because the company is using Microsoft Active Directory already, you can use AWS Directory Service for Microsoft AD to create secure Windows trusts between your on-premises Microsoft Active Directory domains and your AWS Microsoft AD domain in the AWS Cloud. By setting up a trust relationship, you can integrate SSO to the AWS Management Console and the AWS Command Line Interface (CLI), as well as your Windows-based workloads.
AWS Directory Service helps you to set up and run a standalone AWS Managed Microsoft AD directory hosted in the AWS Cloud. You can also use AWS Directory Service to connect your AWS resources with an existing on-premises Microsoft Active Directory. To configure AWS Directory Service to work with your on-premises Active Directory, you must first set up trust relationships to extend authentication from on-premises to the cloud.
Therefore, the correct answer is: Use AWS Directory Service to integrate your AWS resources with the existing Active Directory using trust relationship. Enable single sign-on using Managed Microsoft AD.
The option that says: Use Amazon Cognito to authorize users to your applications using direct sign-in or through third-party apps, and access your apps' backend resources in AWS is incorrect because Cognito is primarily used for federation to your web and mobile apps running on AWS. It allows you to authenticate users through social identity providers. But since the company is already using Microsoft AD, AWS Directory Service is the better choice here.
The option that says: Create users and groups with AWS IAM Identity Center along with AWS Organizations to help you manage SSO access and user permissions across all the AWS accounts is incorrect because using the AWS IAM Identity Center service alone is not enough to meet the requirement. Although it can help you manage SSO access and user permissions across all your AWS accounts in AWS Organizations, you still have to use the AWS Directory Service to integrate your on-premises Microsoft AD. AWS IAM Identity Center integrates with Microsoft AD using AWS Directory Service so there is no need to create users and groups.
The option that says: Use IAM Roles to set up cross-account access and delegate access to resources that are in your AWS account is incorrect because setting up cross-account access allows you to share resources in one AWS account with users in a different AWS account. Since the company is already using Microsoft AD then, the better choice to use here is the AWS Directory Service.
References:
Check out this AWS Directory Service Cheat Sheet:
Question 19: Skipped
A company located on the west coast of North America plans to release a new online service for its customers. The company already created a new VPC in the us-west-1 region where they will launch the Amazon EC2 instances that will host the web application. The application must be highly-available and must dynamically scale based on user traffic. In addition, the company wants to have a disaster recovery site in the us-east-1 region that will act as a passive backup of the running application.
Which of the following options should the Solutions Architect implement in order to achieve the requirements?
Configure an Inter-Region VPC peering between the us-west-1 VPC and a new VPC in the us-east-1 region. Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs of both regions and place it behind the ALB.
Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs and place it behind the ALB. Set up the same configuration to the us-east-1 region VPC. Create record entries in Amazon Route 53 pointing to the ALBs with health check enabled and a failover routing policy.
(Correct)
Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs and place it behind the ALB. Set up the same configuration to the us-east-1 region VPC. Create separate record entries for each region’s ALB on Amazon Route 53 and enable health checks to ensure high-availability for both regions.
Configure an Inter-Region VPC peering between the us-west-1 VPC and a new VPC in the us-east-1 region. Create an Application Load Balancer (ALB) that spans multiple Availability Zones (AZs) on both VPCs. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs of both regions and place it behind the ALB. Create an Alias record entry in Amazon Route 53 that points to the DNS name of the ALB.
Explanation
On Amazon Route 53, after you create a hosted zone for your domain, such as tutorialsdojo.com, you create records to tell the Domain Name System (DNS) how you want traffic to be routed for that domain. You can create a record that points to the DNS name of your Application Load Balancer on AWS.
When you create a record, you choose a routing policy, which determines how Amazon Route 53 responds to queries:
Simple routing policy – Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website.
Failover routing policy – Use when you want to configure active-passive failover.
Geolocation routing policy – Use when you want to route traffic based on the location of your users.
Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
Latency routing policy – Use when you have resources in multiple AWS Regions, and you want to route traffic to the region that provides the best latency.
Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify.
You can use Route 53 health checks to configure active-active and active-passive failover configurations. You configure active-active failover using any routing policy (or combination of routing policies) other than failover, and you configure active-passive failover using the failover routing policy.
Use an active-passive failover configuration when you want a primary resource or group of resources to be available the majority of the time, and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. When responding to queries, Route 53 includes only the healthy primary resources. If all the primary resources are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS queries.
This way, you can create a failover routing policy that will direct traffic to the backup region when your primary region fails.
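The sketch below shows how the primary and secondary failover alias records could be created with boto3. The hosted zone ID, domain name, and ALB DNS names are placeholders; the ALB hosted zone IDs are assumed regional values you would look up for your own load balancers.

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical hosted zone and ALB values; the ALB hosted zone IDs and DNS
# names come from the load balancers created in each region.
HOSTED_ZONE_ID = "Z123EXAMPLE"
records = [
    ("PRIMARY", "Z368ELLRRE2KJ0", "primary-alb-123.us-west-1.elb.amazonaws.com"),
    ("SECONDARY", "Z35SXDOTRQ7X7K", "dr-alb-456.us-east-1.elb.amazonaws.com"),
]

changes = []
for failover_role, alb_zone_id, alb_dns in records:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": failover_role.lower(),
            "Failover": failover_role,
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": alb_dns,
                # Lets Route 53 use the ALB target health instead of a
                # separate health check resource.
                "EvaluateTargetHealth": True,
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": changes},
)
```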
Amazon EC2 Auto Scaling integrates with Elastic Load Balancing to enable you to insert one or more Application Load Balancer, Network Load Balancer, or Gateway Load Balancer with multiple target groups in front of your Auto Scaling group.
Creating an Application Load Balancer on each region with an Auto Scaling group that spans multiple Availability Zones ensures that your application will be highly available and will scale based on the user traffic.
Therefore, the correct answer is: Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs and place it behind the ALB. Set up the same configuration to the us-east-1 region VPC. Create record entries in Amazon Route 53 pointing to the ALBs with health check enabled and a failover routing policy.
The option that says: Configure an Inter-Region VPC peering between the us-west-1 VPC and a new VPC in the us-east-1 region. Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs of both regions and place it behind the ALB is incorrect because an Auto Scaling group cannot span AZs in multiple regions, and the Application Load Balancer cannot serve traffic to EC2 instances in a different region even with Inter-Region VPC peering.
The option that says: Configure an Inter-Region VPC peering between the us-west-1 VPC and a new VPC in the us-east-1 region. Create an Application Load Balancer (ALB) that spans multiple Availability Zones (AZs) on both VPCs. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs of both regions and place it behind the ALB. Create an Alias record entry in Amazon Route 53 that points to the DNS name of the ALB is incorrect because an Application Load Balancer cannot span multiple regions, only multiple AZs in the same region. An Auto Scaling group also cannot span multiple regions as it can only deploy EC2 instances in one region.
The option that says: Create an Application Load Balancer (ALB) in the us-west-1 region that spans multiple Availability Zones (AZs) of the VPC. Create an Auto Scaling group that will deploy EC2 instances across the multiple AZs and place it behind the ALB. Set up the same configuration to the us-east-1 region VPC. Create separate record entries for each region’s ALB on Amazon Route 53 and enable health checks to ensure high-availability for both regions is incorrect. Although this setup is possible, it does not mention the routing policy to be used on Amazon Route 53. The question requires that the second region acts as a passive backup, which means only the main region receives all the traffic so you need to specifically use a failover routing policy in Amazon Route 53.
References:
Check out the Amazon Route 53 Cheat Sheet:
Question 20: Skipped
A company has a team of data analysts that uploads generated data points to an Amazon S3 bucket. The data points are used by other departments, so the objects on this primary S3 bucket need to be replicated to other S3 buckets on several AWS Accounts owned by the company. The Solutions Architect created an AWS Lambda function that is triggered by S3 PUT events on the primary bucket. This Lambda function will replicate the newly uploaded object to other destination buckets. Since there will be thousands of object uploads on the primary bucket every day, the company is concerned that this Lambda function may affect other critical Lambda functions because of the regional concurrency limit in AWS Lambda. The replication of the objects does not need to happen in real-time. The company needs to ensure that this Lambda function will not affect the execution of other critical Lambda functions.
Which of the following options will meet the requirements in the LEAST amount of development effort?
Implement an exponential backoff algorithm in the new Lambda function to ensure that it will not run if the concurrency limit is reached. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to check if the concurrency limit is reached.
Set the execution timeout of the new Lambda function to 5 minutes. This will allow it to wait for other Lambda function executions to finish in case the concurrency limit is reached. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to check if the concurrency limit is reached.
Configure a reserved concurrency limit for the new function to ensure that its executions will not exceed this limit. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to ensure that the concurrency limit is not being reached.
(Correct)
Decouple the Amazon S3 event notifications and send the events to an Amazon SQS queue in a separate AWS account. Create the new Lambda function on this account too. Invoke the Lambda function whenever an event message is received in the SQS queue.
Explanation
Concurrency is the number of requests that your function is serving at any given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency. Concurrency is subject to a Regional quota that is shared by all functions in a Region.
There are two types of concurrency available:
Reserved concurrency – Reserved concurrency creates a pool of requests that can only be used by its function, and also prevents its function from using unreserved concurrency.
Provisioned concurrency – Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond to your function's invocations.
In a single account with the default concurrency limit of 1,000 concurrent executions, when other services invoke Lambda functions concurrently, there is the possibility for two issues to pop up:
- One or more of these services could invoke enough functions to consume a majority of the available concurrency capacity. This could cause others to be starved for it, causing failed invocations.
- A service could consume too much concurrent capacity and cause a downstream service or database to be overwhelmed, which could cause failed executions.
For Lambda functions that are launched in a VPC, you have the potential to consume the available IP addresses in a subnet or the maximum number of elastic network interfaces to which your account has access. One way to solve both of these problems is by applying a concurrency limit to the Lambda functions in an account.
You can set a concurrency limit on individual Lambda functions in an account. The concurrency limit that you set reserves a portion of your account level concurrency for a given function. All of your functions’ concurrent executions count against this account-level limit by default.
If you set a concurrency limit for a specific function then that function’s concurrency limit allocation is deducted from the shared pool and assigned to that specific function. AWS also reserves 100 units of concurrency for all functions that don’t have a specified concurrency limit set. This helps to make sure that future functions have capacity to be consumed.
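As an illustration, the following boto3 sketch reserves concurrency for the replication function and adds a Throttles alarm. The function name, reserved limit, alarm threshold, and SNS topic ARN are assumptions for the example, not values from the scenario.

```python
import boto3

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

FUNCTION_NAME = "s3-object-replicator"  # hypothetical function name

# Reserve a slice of the account-level concurrency for the replication
# function so it can never consume more than 100 concurrent executions.
lambda_client.put_function_concurrency(
    FunctionName=FUNCTION_NAME,
    ReservedConcurrentExecutions=100,
)

# Alarm on the function's Throttles metric so the team knows when the
# reserved limit is being hit (placeholder SNS topic).
cloudwatch.put_metric_alarm(
    AlarmName=f"{FUNCTION_NAME}-throttles",
    Namespace="AWS/Lambda",
    MetricName="Throttles",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:lambda-alerts"],
)
```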
Therefore, the correct answer is: Configure a reserved concurrency limit for the new function to ensure that its executions will not exceed this limit. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to ensure that the concurrency limit is not being reached.
The option that says: Set the execution timeout of the new Lambda function to 5 minutes. This will allow it to wait for other Lambda function executions to finish in case the concurrency limit is reached. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to check if the concurrency limit is reached is incorrect. This will still invoke the new Lambda function, and it will cause more problems because the function keeps consuming concurrency for the entire wait time. Other Lambda functions won't be able to execute if the shared concurrency pool is exhausted.
The option that says: Decouple the Amazon S3 event notifications and send the events to an Amazon SQS queue in a separate AWS account. Create the new Lambda function on this account too. Invoke the Lambda function whenever an event message is received in the SQS queue is incorrect. This is possible, and the new Lambda function would have the separate account's entire concurrency limit to itself. However, this requires more work and the creation of another AWS account. Setting a reserved concurrency limit is recommended as it can be used to limit the number of concurrent executions of a particular function.
The option that says: Implement an exponential backoff algorithm in the new Lambda function to ensure that it will not run if the concurrency limit is reached. Use Amazon CloudWatch alarms to monitor the Throttles metric for Lambda functions to check if the concurrency limit is reached is incorrect. This will require you to write a backoff algorithm that checks the concurrency limit. The function still needs to be invoked in order to run the backoff algorithm, which defeats the purpose of limiting its concurrency.
References:
Check out the AWS Lambda Cheat Sheet:
Question 25: Skipped
A call center company uses its custom application to process and store call recordings in its on-premises data center. The recordings are stored on an NFS share. An offshore team is contracted to transcribe about 2% of the call recordings to be used for quality assurance purposes. It could take up to 3 days before the recordings are completely transcribed. The application that processes the calls and manages the transcription queue is hosted on Linux servers. A web portal is available for the quality assurance team to review the call recordings. After 90 days, the recordings are sent to an offsite location for long-term storage. The company plans to migrate the system to the AWS cloud to reduce storage costs and automate the transcription of the recordings.
Which of the following options is the recommended solution to meet the company’s requirements?
Store all recordings in an Amazon S3 bucket. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an AWS Lambda trigger to start a transcription job using AWS IQ. Create an Auto Scaling group of Amazon EC2 instances to host the web portal. Provision an Application Load Balancer in front of the Auto Scaling group.
Store all recordings in an Amazon S3 bucket and send the object key to an Amazon SQS queue. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an Auto Scaling group of Amazon EC2 instances to push the recordings to Amazon Translate for transcription. Set the Auto Scaling policy based on the number of objects on the SQS queue. Update the web portal so it can be hosted on an Amazon S3 bucket, Amazon API Gateway, and AWS Lambda.
Store all recordings in an Amazon S3 bucket. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an AWS Lambda trigger to start a transcription job using Amazon Transcribe. Update the web portal so it can be hosted on an Amazon S3 bucket, Amazon API Gateway, and AWS Lambda.
(Correct)
Create an Auto Scaling group of Amazon EC2 instances to host the web portal. Provision an Application Load Balancer in front of the Auto Scaling group. Store all recordings in an Amazon EFS share that is mounted on all instances. After 90 days, archive all call recordings using AWS Backup and use Amazon Transcribe to transcribe the recordings.
Explanation
Amazon Transcribe is an AWS service that makes it easy for customers to convert speech to text. Using Automatic Speech Recognition (ASR) technology, customers can choose to use Amazon Transcribe for a variety of business applications, including transcription of voice-based customer service calls, generation of subtitles on audio/video content, and conduct (text-based) content analysis on audio/video content.
Amazon Transcribe analyzes audio files that contain speech and uses advanced machine-learning techniques to transcribe the voice data into text. You can then use the transcription as you would any text document.
Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, automate subtitling, and generate metadata for media assets to create a fully searchable archive. Amazon Transcribe automatically adds speaker diarization, punctuation, and formatting so that the output closely matches the quality of manual transcription at a fraction of the time and expense. Speech-to-text processing can be applied to live audio streams or batch audio content for transcription.
To transcribe an audio file, Amazon Transcribe uses three operations:
- StartTranscriptionJob – Starts a batch job to transcribe the speech in an audio file to text.
- ListTranscriptionJobs – Returns a list of transcription jobs that have been started. You can specify the status of the jobs that you want the operation to return. For example, you can get a list of all pending jobs or a list of completed jobs.
- GetTranscriptionJob – Returns the result of a transcription job. The response contains a link to a JSON file containing the results.
To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:
Transition actions – Define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them or archive objects to the S3 Glacier storage class one year after creating them.
Expiration actions – Define when objects expire. Amazon S3 deletes expired objects on your behalf. The lifecycle expiration costs depend on when you choose to expire objects.
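A minimal sketch of the correct option's moving parts is shown below: a lifecycle rule that archives recordings to S3 Glacier after 90 days, and a Lambda handler (triggered by S3 PUT events) that starts an Amazon Transcribe job. The bucket name and media format are assumptions.

```python
import uuid
import boto3

s3 = boto3.client("s3")
transcribe = boto3.client("transcribe")

BUCKET = "call-recordings-example"  # hypothetical bucket name

# One-time setup: archive recordings to S3 Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-90-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)

def handler(event, context):
    """Lambda handler fired by S3 PUT events; starts a transcription job."""
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        transcribe.start_transcription_job(
            TranscriptionJobName=f"call-{uuid.uuid4()}",
            LanguageCode="en-US",
            MediaFormat="wav",           # assumes recordings are WAV files
            Media={"MediaFileUri": f"s3://{BUCKET}/{key}"},
            OutputBucketName=BUCKET,     # write the JSON transcript back
        )
```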
Therefore, the correct answer is: Store all recordings in an Amazon S3 bucket. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an AWS Lambda trigger to start a transcription job using Amazon Transcribe. Update the web portal so it can be hosted on an Amazon S3 bucket, Amazon API Gateway, and AWS Lambda. Amazon S3 and Glacier offer very cheap object storage for recordings. Amazon Transcribe offers speech-to-text services that can quickly transcribe recordings. Amazon S3, API Gateway, and Lambda are cheap and scalable ways to host the web portal.
The option that says: Store all recordings in an Amazon S3 bucket. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an AWS Lambda trigger to start a transcription job using AWS IQ. Create an Auto Scaling group of Amazon EC2 instances to host the web portal. Provision an Application Load Balancer in front of the Auto Scaling group is incorrect because AWS IQ is just a freelancing platform that provides hands-on help from AWS experts; it does not have a feature to automate a transcription job. Amazon Transcribe is the correct service for automating the transcription of the recordings, which is a requirement of the company.
The option that says: Create an Auto Scaling group of Amazon EC2 instances to host the web portal. Provision an Application Load Balancer in front of the Auto Scaling group. Store all recordings in an Amazon EFS share that is mounted on all instances. After 90 days, archive all call recordings using AWS Backup and use Amazon Transcribe to transcribe the recordings is incorrect because Amazon Transcribe transcribes recordings that are stored in Amazon S3, not in Amazon EFS. Storing the call recordings in Amazon S3 is also cheaper compared to Amazon EFS.
The option that says: Store all recordings in an Amazon S3 bucket and send the object key to an Amazon SQS queue. Create an S3 lifecycle policy to move objects older than 90 days to Amazon S3 Glacier. Create an Auto Scaling group of Amazon EC2 instances to push the recordings to Amazon Translate for transcription. Set the Auto Scaling policy based on the number of objects on the SQS queue. Update the web portal so it can be hosted on an Amazon S3 bucket, Amazon API Gateway, and AWS Lambda is incorrect because Amazon Translate is only a text translation service that uses advanced machine learning technologies to provide high-quality translation on demand. It is not used for transcribing voice messages to text.
References:
Check out the Amazon Transcribe and Amazon S3 Cheat Sheets:
Question 26: Skipped
A company has several resources in its production environment that is shared among various business units of the company. A single business unit may have one or more AWS accounts that have resources in the production environment. There were a lot of incidents in which the developers from a specific business unit accidentally terminated the Amazon EC2 instances, Amazon EKS clusters, and Amazon Aurora Serverless databases which are owned by another business unit. The solutions architect has been tasked to come up with a solution to only allow a specific business unit that owns the EC2 instances, and other AWS resources, to terminate their own resources.
Which of the following is the most suitable multi-account strategy implementation to meet the company requirements?
Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create a Service Control Policy in the production account which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the SCP to the OUs, which will then be automatically inherited by its member accounts.
Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create an IAM Role in the production account for each business unit which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances that it owns. Create an AWSServiceRoleForOrganizations service-linked role for the individual member accounts of the OU to enable trusted access.
Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to individual Organization Units (OU). Create an IAM Role in the production account which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the IAM policy to every member account of the OU.
(Correct)
Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create a Service Control Policy in the production account for each business unit which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances that it owns. Provide the cross-account access and the SCP to the individual member accounts to tightly control who can terminate the EC2 instances.
Explanation
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. As an administrator of an organization, you can create accounts in your organization and invite existing accounts to join the organization.
You can use organizational units (OUs) to group accounts together to administer as a single unit. This greatly simplifies the management of your accounts. For example, you can attach a policy-based control to an OU, and all accounts within the OU automatically inherit the policy. You can create multiple OUs within a single organization, and you can create OUs within other OUs. Each OU can contain multiple accounts, and you can move accounts from one OU to another. However, OU names must be unique within a parent OU or root.
Resource-level permissions refer to the ability to specify which resources users are allowed to perform actions on. Amazon EC2 has partial support for resource-level permissions. This means that for certain Amazon EC2 actions, you can control when users are allowed to use those actions based on conditions that have to be fulfilled, or specific resources that users are allowed to use. For example, you can grant users permissions to launch instances, but only of a specific type, and only using a specific AMI.
The scenario on this question has a lot of AWS Accounts that need to be managed. AWS Organization solves this problem and provides you with control by assigning the different business units as individual Organization Units (OU). Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. However, SCPs alone are not sufficient for allowing access to the accounts in your organization. Attaching an SCP to an AWS Organizations entity just defines a guardrail for what actions the principals can perform. You still need to attach identity-based or resource-based policies to principals or resources in your organization's accounts to actually grant permission to them.
Since SCPs only allow or deny the use of an AWS service, you don't want to completely block the OUs from using the EC2 service. Thus, you will need to provide cross-account access and the IAM policy to every member account of the OU.
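For illustration, the boto3 sketch below creates such a cross-account IAM role in the production account: the trust policy lets a hypothetical member account assume it, and the permission policy uses a resource-level condition on an assumed BusinessUnit tag so that only instances owned by that business unit can be terminated. All names, account IDs, and tag values are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only principals from the business unit's member account
# (placeholder account ID) may assume this role in the production account.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Permission policy: terminate only EC2 instances tagged as owned by the
# business unit (the tag key/value are illustrative).
terminate_own_instances = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:TerminateInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/BusinessUnit": "baby-products"}
        },
    }],
}

iam.create_role(
    RoleName="BabyProductsEc2Terminator",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="BabyProductsEc2Terminator",
    PolicyName="TerminateOwnInstancesOnly",
    PolicyDocument=json.dumps(terminate_own_instances),
)
```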
Hence, the correct answer is: Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to individual Organization Units (OU). Create an IAM Role in the production account which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the IAM policy to every member account of the OU.
The option that says: Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create an IAM Role in the production account for each business unit which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances that it owns. Create an AWSServiceRoleForOrganizations service-linked role for the individual member accounts of the OU to enable trusted access is incorrect because the AWSServiceRoleForOrganizations service-linked role is primarily used to only allow AWS Organizations to create service-linked roles for other AWS services. This service-linked role is present in all organizations and not just in a specific OU.
The following options are incorrect because an SCP policy simply specifies the services and actions that users and roles can use in the accounts:
1. Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create a Service Control Policy in the production account for each business unit which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances that it owns. Provide the cross-account access and the SCP to the individual member accounts to tightly control who can terminate the EC2 instances.
2. Use AWS Organizations to centrally manage all of your accounts. Group your accounts, which belong to a specific business unit, to an individual Organization Unit (OU). Create a Service Control Policy in the production account which has a policy that allows access to the EC2 instances including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the SCP to the OUs, which will then be automatically inherited by its member accounts.
SCPs are similar to IAM permission policies except that they don't grant any permissions.
References:
Check out this AWS Organizations Cheat Sheet:
Service Control Policies (SCP) vs IAM Policies:
Comparison of AWS Services Cheat Sheets:
Question 29: Skipped
A multinational software provider in the US hosts both of its development and test environments in the AWS cloud. The CTO decided to use separate AWS accounts for hosting each environment. The solutions architect has enabled Consolidated Billing to link each account's bill to a master AWS account. To make sure that each account is kept within the budget, the administrators in the master account must have the power to stop, delete, and/or terminate resources in both the development and test environment AWS accounts.
Which of the following options is the recommended action to meet the requirements for this scenario?
By linking all accounts under Consolidated Billing, you will be able to provide IAM users in the master account access to Dev and Test account resources.
First, create IAM users in the master account. Then in the Dev and Test accounts, generate cross-account roles that have full admin permissions while granting access for the master account.(Correct)
In the master account, you are to create IAM users and a cross-account role that has full admin permissions to the Dev and Test accounts.
IAM users with full admin permissions will be created in the master account. In both Dev and Test accounts, generate cross-account roles that would grant the master account access to Dev and Test account resources through permissions inherited from the master account.
Explanation
With cross-account access, you share resources in one account with users in a different account. By setting up cross-account access in this way, you don't need to create individual IAM users in each account. In addition, users don't have to sign out of one account and sign in to another in order to access resources that are in different AWS accounts.
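A minimal sketch of the member-account side of this setup is shown below, assuming placeholder account IDs and role names: the Dev (or Test) account creates a role that trusts the master account and attaches the AWS managed AdministratorAccess policy to it.

```python
import json
import boto3

iam = boto3.client("iam")  # run with credentials for the Dev (or Test) account

MASTER_ACCOUNT_ID = "111122223333"  # placeholder master account ID

# The Dev/Test account trusts the master account to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MASTER_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="MasterAccountAdmin",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Full admin permissions so master-account administrators can stop,
# delete, or terminate resources in this account.
iam.attach_role_policy(
    RoleName="MasterAccountAdmin",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```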
Therefore, the correct answer is: First, create IAM users in the master account. Then in the Dev and Test accounts, generate cross-account roles that have full admin permissions while granting access for the master account. The cross-account roles are created in the Dev and Test accounts, and the IAM users created in the master account are granted permission to assume those roles.
The option that says: In the master account, you are to create IAM users and a cross-account role that has full admin permissions to the Dev and Test accounts is incorrect. A cross-account role should be created in the Dev and Test accounts, not in the master account.
The option that says: IAM users with full admin permissions will be created in the master account. In both Dev and Test accounts, generate cross-account roles that would grant the master account access to Dev and Test account resources through permissions inherited from the master account is incorrect. The permissions cannot be inherited from one AWS account to another.
The option that says: By linking all accounts under Consolidated Billing, you will be able to provide IAM users in the master account access to Dev and Test account resources is incorrect. Consolidated billing does not give access to resources in this fashion.
References:
Check out this AWS IAM Cheat Sheet:
Question 34: Skipped
A logistics company is developing a new application that will be used for all its departments. All of the company's AWS accounts are under OrganizationA in its AWS Organizations. A certain feature of the application must allow AWS resource access from a third-party account which is under AWS Organizations named OrganizationB. The company wants to follow security best practices and grant "least privilege" access using API or CLI to the third-party account.
Which of the following options is the recommended way to securely allow OrganizationB to access AWS resources on OrganizationA?
The logistics company should create an IAM role and attach an IAM policy allowing only the required access. The third-party account should then use AWS STS to assume the IAM role’s Amazon Resource Name (ARN) when requesting access to OrganizationA’s AWS resources.
The logistics company must create an IAM user with an IAM policy allowing only the required access. The logistics company should then send the AWS credentials to the third-party account to allow login and perform only specific tasks.
The third-party account should create an External ID that will be given to OrganizationA. The logistics company should then create an IAM role with the required access and put the External ID in the IAM role’s trust policy. The third-party account should use the IAM role’s ARN and External ID when requesting access to OrganizationA’s AWS resources.
(Correct)
The third-party AWS Organization must integrate with the AWS Identity Center of the logistics company. Then create custom IAM policies for the third-party account to only access specific resources under OrganizationA.
Explanation
At times, you need to give a third-party access to your AWS resources (delegate access). One important aspect of this scenario is the External ID, optional information that you can use in an IAM role trust policy to designate who can assume the role. The external ID allows the user that is assuming the role to assert the circumstances in which they are operating. It also provides a way for the account owner to permit the role to be assumed only under specific circumstances.
In a multi-tenant environment where you support multiple customers with different AWS accounts, AWS recommends using one external ID per AWS account. This ID should be a random string generated by the third party.
To require that the third party provides an external ID when assuming a role, update the role's trust policy with the external ID of your choice. To provide an external ID when you assume a role, use the AWS CLI or AWS API to assume that role.
For example, let's say that you decide to hire a third-party company called Example Corp to monitor your AWS account and help optimize costs. In order to track your daily spending, Example Corp needs to access your AWS resources. Example Corp also monitors many other AWS accounts for other customers.
Do not give Example Corp access to an IAM user and its long-term credentials in your AWS account. Instead, use an IAM role and its temporary security credentials. An IAM role provides a mechanism to allow a third party to access your AWS resources without needing to share long-term credentials.
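The sketch below shows both halves of this pattern with boto3, using placeholder account IDs, role names, and an example external ID: OrganizationA creates the role with the external ID condition in its trust policy, and OrganizationB assumes it by passing the same ExternalId.

```python
import json
import boto3

# --- In the logistics company's account (OrganizationA): ---
iam = boto3.client("iam")

EXTERNAL_ID = "org-b-7f3c9a"          # random string supplied by the third party
THIRD_PARTY_ACCOUNT = "444455556666"  # placeholder OrganizationB account ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{THIRD_PARTY_ACCOUNT}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam.create_role(
    RoleName="OrgBThirdPartyAccess",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# ... attach a least-privilege permission policy to the role here ...

# --- In the third party's account (OrganizationB): ---
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/OrgBThirdPartyAccess",
    RoleSessionName="org-b-session",
    ExternalId=EXTERNAL_ID,  # the request is rejected without the matching ID
)["Credentials"]
```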
Therefore, the correct answer is: The third-party account should create an External ID that will be given to OrganizationA. The logistics company should then create an IAM role with the required access and put the External ID in the IAM role’s trust policy. The third-party account should use the IAM role’s ARN and External ID when requesting access to OrganizationA’s AWS resources. An External ID in IAM role's trust policy is a recommended best practice to ensure that the external party can assume your role only when it is acting on your behalf.
The option that says: The third-party AWS Organization must integrate with the AWS Identity Center of the logistics company. Then create custom IAM policies for the third-party account to only access specific resources under OrganizationA is incorrect. This is not recommended as it adds unnecessary complexity, and there is a possibility that OrganizationB users could log in and view the accounts in OrganizationA.
The option that says: The logistics company must create an IAM user with an IAM policy allowing only the required access. The logistics company should then send the AWS credentials to the third-party account to allow login and perform only specific tasks is incorrect. This is possible but not recommended because AWS credentials may get leaked or exposed. It is recommended to use STS to generate a temporary token when requesting access to AWS resources.
The option that says: The logistics company should create an IAM role and attach an IAM policy allowing only the required access. The third-party account should then use AWS STS to assume the IAM role’s Amazon Resource Name (ARN) when requesting access to OrganizationA’s AWS resources is incorrect. This is possible but not the most secure way among the options. Without the External ID on the IAM role's trust policy, it could be possible that other AWS accounts can assume that IAM role.
References:
Check out these AWS Organizations and AWS IAM Cheat Sheets:
Question 51: Skipped
A global enterprise web application is using a private S3 bucket, named MANILATECH-CONFIG, which has Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3) to store its configuration files for different regions in North America, Latin America, Europe, and Asia. There have been a lot of database changes and feature toggle switches for the past few weeks. Your CTO assigned you the task of enabling versioning on this bucket to track any changes made to the configuration files and have the ability to restore the old settings if needed. In the coming days, a new region in Oceania will be supported by the web application, so a new configuration file will be added soon. Currently, there are already four files in the bucket, namely: MNL-NA.config, MNL-LA.config, MNL-EUR.config, and MNL-ASIA.config, which are updated regularly. As instructed, you enabled versioning on the bucket, and after a few days, the new MNL-O.config configuration file for the Oceania region was uploaded. A week after that, configuration changes were made to the MNL-NA.config, MNL-LA.config, and MNL-O.config files.
In this scenario, which of the following is correct about files inside the MANILATECH-CONFIG S3 bucket? (Select TWO.)
The latest Version ID of MNL-NA.config and MNL-LA.config has a value of null.
The first Version ID of MNL-NA.config and MNL-LA.config has a value of 1.
There would be two available versions for each of the MNL-NA.config, MNL-LA.config, and MNL-O.config files. The first Version ID of MNL-NA.config and MNL-LA.config has a value of null.
(Correct)
The MNL-EUR.config and MNL-ASIA.config files will have a Version ID of null.(Correct)
The MNL-EUR.config and MNL-ASIA.config files will have a Version ID of 1.
Explanation
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.
In this scenario, we have an initial 4 files in the MANILATECH-CONFIG bucket: MNL-NA.config, MNL-LA.config, MNL-EUR.config, and MNL-ASIA.config. Then, the Versioning feature was enabled, which caused all of the 4 existing files to have a Version ID of null. From that point on, any newly added file, as well as any new update to the first 4 files, receives an alphanumeric Version ID. Hence, when the new MNL-O.config configuration file was added, its Version ID was an alphanumeric key since this file was uploaded after the Versioning feature was enabled.
A week after, a new update was made to only 3 of the configuration files (MNL-NA.config, MNL-LA.config, and MNL-O.config). Take note that at this point, NO changes were made to the MNL-EUR.config and MNL-ASIA.config files, which is why their first (and latest) Version ID remains null.
However, MNL-NA.config and MNL-LA.config each now have a first Version ID of null and a second Version ID that is an alphanumeric key. For the MNL-O.config file, the first Version ID is already an alphanumeric key since this file was created after Versioning was enabled.
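This behavior can be reproduced with a short boto3 sketch, assuming a lowercase placeholder bucket name (actual S3 bucket names cannot contain uppercase letters): enable versioning on the existing bucket, then list the versions of one of the pre-existing files.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "manilatech-config"  # placeholder; S3 bucket names must be lowercase

# Enable versioning on the existing bucket. Objects uploaded before this
# call keep a single version whose VersionId is the literal string "null".
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Inspect the versions of one of the pre-existing configuration files.
response = s3.list_object_versions(Bucket=BUCKET, Prefix="MNL-NA.config")
for version in response.get("Versions", []):
    # After the file is updated post-versioning, this prints one alphanumeric
    # VersionId (latest) and one "null" VersionId (the original object).
    print(version["Key"], version["VersionId"], version["IsLatest"])
```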
Therefore, the correct answers are:
- There would be two available versions for each of the MNL-NA.config, MNL-LA.config, and MNL-O.config files. The first Version ID of MNL-NA.config and MNL-LA.config has a value of null.
- The MNL-EUR.config and MNL-ASIA.config files will have a Version ID of null
The option that says: The first Version ID of MNL-NA.config and MNL-LA.config has a value of 1 is incorrect. The first Version ID of these files would be null since they already existed when S3 Versioning was enabled.
The option that says: The MNL-EUR.config and MNL-ASIA.config files will have a Version ID of 1 is incorrect because the Version ID for these files is null.
The option that says: The latest Version ID of MNL-NA.config and MNL-LA.config has a value of null is incorrect because the latest Version ID for these 2 files would be an alphanumeric value and not null.
References:
Check out this Amazon S3 Cheat Sheet:
Question 54: Skipped
A telecommunications company has several Amazon EC2 instances inside an AWS VPC. To improve data leak protection, the company wants to restrict the internet connectivity of its EC2 instances. The EC2 instances that are launched on a public subnet should be able to access product updates and patches from the Internet. The packages are accessible through the third-party provider via their URLs. The company wants to explicitly deny any other outbound connections from the VPC instances to hosts on the Internet.
Which of the following options would the solutions architect consider implementing to meet the company requirements?
Move all instances from the public subnets to the private subnets. Additionally, remove the default routes from your routing tables and replace them instead with routes that specify your package locations.
Use network ACL rules that allow network access to your specific package destinations. Add an implicit deny for all other cases.
Create security groups with the appropriate outbound access rules that will let you retrieve software packages from the Internet.
You can use a forward web proxy server in your VPC and manage outbound access using URL-based rules. Default routes are also removed.(Correct)
Explanation
A forward proxy server acts as an intermediary for requests from internal users and servers, often caching content to speed up subsequent requests. Companies usually implement proxy solutions to provide URL and web content filtering, IDS/IPS, data loss prevention, monitoring, and advanced threat protection. AWS customers often use a VPN or AWS Direct Connect connection to leverage existing corporate proxy server infrastructure, or build a forward proxy farm on AWS using software such as Squid proxy servers with internal Elastic Load Balancing (ELB).
You can limit outbound web connections from your VPC to the internet by using a web proxy (such as a Squid server) with custom domain whitelists or DNS content filtering services. The solution is scalable, highly available, and deploys in a fully automated way.
Therefore, the correct answer is: You can use a forward web proxy server in your VPC and manage outbound access using URL-based rules. Default routes are also removed. The forward proxy server filters requests from the clients and, in this case, allows only the outbound requests that are related to the product update URLs while denying everything else.
The option that says: Move all instances from the public subnets to the private subnets. Additionally, remove the default routes from your routing tables and replace them instead with routes that specify your package locations is incorrect. Even though moving the instances to private subnets is a good idea, a route table has no filtering logic; it only routes traffic between the subnets and the Internet gateway, so it cannot allow or deny traffic based on URLs.
The option that says: Using network ACL rules that allow network access to your specific package destinations then adding an implicit deny for all other cases is incorrect. Network ACLs operate on IP addresses, protocols, and ports; they cannot filter requests based on URLs.
The option that says: Creating security groups with the appropriate outbound access rules that will let you retrieve software packages from the Internet is incorrect. A security group cannot filter requests based on URLs.
References:
Check out this Amazon VPC Cheat Sheet:
Question 68: Skipped
A leading fast-food chain has recently adopted a hybrid cloud infrastructure that extends its data centers into AWS Cloud. The solutions architect has been tasked to allow on-premises users, who are already signed in using their corporate accounts, to manage AWS resources without creating separate IAM users for each of them. This is to avoid having two separate login accounts and memorizing multiple credentials.
Which of the following is the best way to handle user authentication in this hybrid architecture?
Retrieve AWS temporary security credentials with Web Identity Federation using STS and AssumeRoleWithWebIdentity to enable users to log in to the AWS console.
Authenticate using your on-premises SAML 2.0-compliant identity provider (IDP), retrieve temporary credentials using STS, and grant federated access to the AWS console via the AWS IAM Identity Center.
(Correct)
Authenticate through your on-premises SAML 2.0-compliant identity provider (IDP) using STS and AssumeRoleWithWebIdentity to retrieve temporary security credentials, which enables your users to log in to the AWS console using a browser.
Integrate the company’s authentication process with Amazon AppFlow and allow Amazon STS to retrieve temporary AWS credentials using OAuth 2.0 to enable your members to log in to the AWS Console.
Explanation
In this scenario, you need to provide temporary access to AWS resources for the existing users, but you should not create new IAM users for them, to avoid having to maintain two login accounts. This means that you need to set up single sign-on (SSO) authentication so that your users only need to sign in once on their on-premises network and can access the AWS Cloud with the same corporate identity.
You can use a role to configure your SAML 2.0-compliant identity provider (IdP) and AWS to permit your federated users to access the AWS Management Console. The role grants the user permission to carry out tasks in the console.
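As a hedged sketch of the STS side of this flow (the role ARN, SAML provider ARN, and assertion below are placeholders), the federation endpoint or application exchanges the IdP's SAML assertion for temporary credentials:

import boto3

# Placeholder values: the role and SAML provider ARNs configured in IAM,
# plus the base64-encoded SAML assertion returned by the corporate IdP.
ROLE_ARN = "arn:aws:iam::111122223333:role/FederatedConsoleAccess"
PROVIDER_ARN = "arn:aws:iam::111122223333:saml-provider/CorporateIdP"
saml_assertion = "<base64-encoded SAML assertion from the IdP>"

sts = boto3.client("sts")
response = sts.assume_role_with_saml(
    RoleArn=ROLE_ARN,
    PrincipalArn=PROVIDER_ARN,
    SAMLAssertion=saml_assertion,
)

# Temporary credentials that back the federated console session.
credentials = response["Credentials"]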
Therefore, the correct answer is: Authenticate using your on-premises SAML 2.0-compliant identity provider (IDP), retrieve temporary credentials using STS, and grant federated access to the AWS console via the AWS IAM Identity Center. This gives federated users access to AWS resources through a SAML 2.0 identity provider and uses the on-premises single sign-on (SSO) endpoint to authenticate users, which gives them access tokens before the federated access is granted.
The option that says: Integrate the company’s authentication process with Amazon AppFlow and allow Amazon STS to retrieve temporary AWS credentials using OAuth 2.0 to enable your members to log in to the AWS Console is incorrect. Amazon AppFlow is used to securely transfer data between Software-as-a-Service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services. It is not used for authentication. Additionally, OAuth 2.0 is not applicable in this scenario. We are not using Web Identity Federation as it is used with public identity providers such as Facebook, Google, etc.
The option that says: Authenticate through your on-premises SAML 2.0-compliant identity provider (IDP) using STS and AssumeRoleWithWebIdentity to retrieve temporary security credentials, which enables your users to log in to the AWS console using a browser is incorrect. The use of AssumeRoleWithWebIdentity is wrong because that API is only for Web Identity Federation (Facebook, Google, and other social logins). Even though the option mentions a SAML 2.0 identity provider, the requirement is to provide single sign-on, which means the users should not sign in to the AWS console using separate security credentials but through their corporate identity provider.
The option that says: Retrieve AWS temporary security credentials with Web Identity Federation using STS and AssumeRoleWithWebIdentity to enable users to log in to the AWS console is incorrect. We are not using Web Identity Federation as it is used with public identity providers such as Facebook, Google, etc.
References:
Check out this AWS IAM Cheat Sheet:
Question 4: Skipped
A tech company is about to undergo a financial audit. The company plans to use a third-party web application that needs certain AWS access to issue several API commands that discover Amazon EC2 resources running within the enterprise's account. The company has internal security policies that require any outside access to its environment to conform to the principle of least privilege. The solutions architect must ensure that the credentials used by the third-party vendor cannot be used by any other third party. The third-party vendor also has an AWS account where it runs its web application, and it has already provided a unique customer ID as well as its AWS account number.
Which of the following options would allow the solutions architect to give permissions to the third-party vendor in compliance with the company requirements?
Create an IAM user in the enterprise account that has permissions allowing only the actions required by the third-party application. Also generate a new access key and secret key from the user to be given to the third-party provider.
Create a new IAM role for the 3rd-party vendor. Add a permission policy that only allows the actions required by the third party application. Also, add a trust policy with a Condition element for the ExternalId context key. The Condition must test the ExternalId context key to ensure that it matches the unique customer ID from the 3rd party vendor.
(Correct)
Use Amazon Connect to allow the third-party application to access your AWS resources. In the AWS Connect configuration, input the ExternalId context key to ensure that it matches the unique customer ID of the 3rd party vendor.
Provide your own access key and secret key to the third-party software.
Explanation
At times, you need to give a third party access to your AWS resources (delegate access). One important aspect of this scenario is the external ID: an optional piece of information that you can use in an IAM role trust policy to designate who can assume the role.
To use an external ID, update a role trust policy with the external ID of your choice. Then, when someone uses the AWS CLI or AWS API to assume that role, they must provide the external ID.
For example, let's say that you decide to hire a third-party company called Boracay Corp to monitor your AWS account and help optimize costs. In order to track your daily spending, Boracay Corp needs to access your AWS resources. Boracay Corp also monitors many other AWS accounts for other customers.
Do not give Boracay Corp access to an IAM user and its long-term credentials in your AWS account. Instead, use an IAM role and its temporary security credentials. An IAM role provides a mechanism to allow a third party to access your AWS resources without needing to share long-term credentials (for example, an IAM user's access key).
You can use an IAM role to establish a trusted relationship between your AWS account and the Boracay Corp account. After this relationship is established, a member of the Boracay Corp account can call the AWS STS AssumeRole API to obtain temporary security credentials. The Boracay Corp members can then use the credentials to access AWS resources in your account.
When a user, a resource, an application, or any service needs to access any AWS service or resource, always opt to create an appropriate role that has the least privileged access or only the required access, rather than using any other credentials such as keys.
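A hedged sketch of this setup in boto3 (the account ID, external ID, and role name are placeholders): the trust policy only lets the vendor assume the role when the agreed external ID is supplied.

import json
import boto3

VENDOR_ACCOUNT_ID = "444455556666"              # placeholder vendor account ID
EXTERNAL_ID = "unique-customer-id-from-vendor"  # placeholder external ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="ThirdPartyDiscoveryRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# A scoped permission policy (for example, ec2:Describe* only) is then attached
# to the role, and the vendor calls sts.assume_role(..., ExternalId=EXTERNAL_ID).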
Therefore, the correct answer is: Create a new IAM role for the 3rd-party vendor. Add a permission policy that only allows the actions required by the third party application. Also, add a trust policy with a Condition element for the ExternalId context key. The Condition must test the ExternalId context key to ensure that it matches the unique customer ID from the 3rd party vendor.
The option that says: Use Amazon Connect to allow the third-party application to access your AWS resources. In the AWS Connect configuration, input the ExternalId context key to ensure that it matches the unique customer ID of the 3rd party vendor is incorrect because Amazon Connect is simply an easy to use omnichannel cloud contact center that helps companies provide superior customer service at a lower cost. You should use an IAM Role in this scenario instead of Amazon Connect.
The option that says: Provide your own access key and secret key to the third-party software is incorrect because you should never share your access and secret keys.
The option that says: Create an IAM user in the enterprise account that has permissions allowing only the actions required by the third-party application and generating a new access key and secret key from the user to be given to the third-party provider is incorrect because sharing long-term access keys means that anyone who obtains them can reuse them, which violates the requirement that the credentials cannot be used by any other third party. Creating an appropriate IAM role with an external ID is always the better solution than creating a user and handing out its keys.
References:
Check out this AWS IAM Cheat Sheet:
Question 8: Skipped
A large software company has an on-premises LDAP server and a web application hosted on its VPC in AWS. The solutions architect has established an IPSec VPN connection between the AWS VPC and the company’s on-premises network. The company wants to enable employees to access the web application and other AWS resources using the same corporate account used inside the company network.
Which of the following actions should the solutions architect implement to achieve the company requirements? (SELECT TWO.)
Launch an identity broker that authenticates against LDAP server and then calls STS to get IAM federated user credentials. Configure the web application to call the identity broker that you created to get IAM federated user credentials with access to the appropriate AWS service.(Correct)
Integrate the on-premises LDAP server with IAM so the users can log into IAM using their corporate LDAP credentials. Once authenticated, they can use the temporary credentials to access any AWS resource.
Configure the web application to authenticate against the on-premises LDAP server and retrieve the name of an IAM role associated with the user. The application then calls the STS to assume that IAM role. The application can use the temporary credentials to access any AWS resource.
(Correct)
Create an identity broker that authenticates against STS to assume an IAM role to generate temporary AWS security credentials. For user authentication, configure the web application to call the identity broker to get AWS temporary security credentials.
Explanation
If your identity store is not compatible with SAML 2.0, then you can build a custom identity broker application to perform a similar function. The broker application authenticates users, requests temporary credentials for users from AWS, and then provides them to the user to access AWS resources.
To enable corporate employees to access the company's AWS resources, you can develop a custom identity broker application. The application verifies that employees are signed into the existing identity and authentication system of the company (which might use LDAP, Active Directory, or another system). The identity broker application then obtains temporary security credentials for the employees. To get temporary security credentials, the identity broker application calls either the AssumeRole or GetFederationToken actions in STS to obtain temporary security credentials.
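For example, a broker could issue scoped temporary credentials with GetFederationToken after a successful LDAP login; the user name, bucket, and session policy below are hypothetical:

import json
import boto3

sts = boto3.client("sts")

# Hypothetical scope-down policy for the federated session.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/*",
    }],
}

# Called by the identity broker only after the user has authenticated against LDAP.
response = sts.get_federation_token(
    Name="ldap-user-jdoe",
    Policy=json.dumps(session_policy),
    DurationSeconds=3600,
)
temporary_credentials = response["Credentials"]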
The option that says: Configure the web application to authenticate against the on-premises LDAP server and retrieve the name of an IAM role associated with the user. The application then calls the STS to assume that IAM role. The application can use the temporary credentials to access any AWS resource is correct as it properly authenticates users using LDAP, gets a temporary security token from STS, and then accesses the required AWS resources using those temporary credentials.
The option that says: Launch an identity broker that authenticates against LDAP server and then calls STS to get IAM federated user credentials. Configure the web application to call the identity broker that you created to get IAM federated user credentials with access to the appropriate AWS service is correct as it properly provides access to the users using STS and IAM. You can develop an identity broker that authenticates users against LDAP and then gets a temporary security token from STS, which can be used to access AWS resources under the IAM federated user credentials.
The option that says: Create an identity broker that authenticates against STS to assume an IAM role to generate temporary AWS security credentials. For user authentication, configure the web application to call the identity broker to get AWS temporary security credentials is incorrect as the users need to be authenticated using LDAP first and not via STS. In addition, the temporary credentials to log into AWS should be provided by STS and not the identity broker.
The option that says: Integrate the on-premises LDAP server with IAM so the users can log into IAM using their corporate LDAP credentials. Once authenticated, they can use the temporary credentials to access any AWS resource is incorrect as you cannot use the LDAP credentials to log into IAM.
Reference:
Check out this AWS IAM Cheat Sheet:
Question 11: Skipped
A retail company has several subsidiaries with offices located in different countries in Southeast Asia. Each subsidiary has an AWS account that is used for hosting the company retail website, which is customized per country. The parent company wants to have better control on all the AWS accounts as well as visibility on the costs incurred for each account. The Solutions Architect has been tasked to implement a solution that will satisfy the following requirements:
- Provide a cost breakdown report for each subsidiary AWS account.
- Have a single AWS invoice for all the subsidiary AWS accounts.
- Provide full administration privileges on each subsidiary AWS account, regardless of the parent company’s policy.
- Have the ability to restrict the services and features that can be used on each subsidiary AWS account, as defined by the parent company’s policy.
Which of the following actions should the Solutions Architect take in order to fulfill the requirements? (Select TWO.)
Define service quotas that will restrict services and features depending on the permissions set by the parent company policy. Apply this service quota to each subsidiary AWS account.
Create an AWS account for the parent company and create a single AWS Organization with the Consolidated Billing features set. Invite each of the subsidiary AWS accounts to join the AWS Organization of the parent company.
Create an AWS account for the parent company and create an AWS organization for each of the subsidiaries. Invite each of the subsidiary AWS Accounts to join their respective AWS organization on the parent company.
Create an AWS Organization on the parent company's AWS account and invite all the subsidiary AWS accounts. Ensure that All features set is enabled.
(Correct)
Define Service Control Policy (SCP) documents to only allow services and features defined by the parent company policy. Apply the necessary SCP for each subsidiary AWS account.
(Correct)
Explanation
AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new AWS accounts and allocate resources, group accounts to organize your workflows, apply policies to accounts or groups for governance, and simplify billing by using a single payment method for all of your accounts.
AWS Organizations has two available feature sets:
All features – The default feature set that is available to AWS Organizations. It includes all the functionality of consolidated billing, plus advanced features such as applying Service control policies (SCPs) to restrict the services and actions that users (including the root user) and roles in an account can access.
Consolidated billing – This feature set provides shared billing functionality, but doesn't include the more advanced features of AWS Organizations. For example, you can't enable other AWS services to integrate with your organization to work across all of its accounts or use policies to restrict what users and roles in different accounts can do. To use the advanced AWS Organizations features, you must enable all features in your organization.
Once you’ve created the organization and verified your email, you can create or invite other accounts into your organization, categorize the accounts into Organizational Units (OUs), create service control policies (SCPs), and take advantage of the Organization's features from supported AWS services.
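A rough boto3 sketch of these steps (the allowed services and the member account ID are placeholders): create the organization with all features, define an SCP, and attach it to a subsidiary account.

import json
import boto3

org = boto3.client("organizations")

# Create the organization with all features, not consolidated billing only.
org.create_organization(FeatureSet="ALL")

# Placeholder SCP listing the services permitted by the parent company policy.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:*", "s3:*", "cloudwatch:*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="SubsidiaryAllowedServices",
    Description="Services permitted by the parent company policy",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to a subsidiary member account (placeholder account ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",
)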
The consolidated billing feature in AWS Organizations is used to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd (AISPL) accounts. Every organization in AWS Organizations has a management account that pays the charges of all the member accounts.
Consolidated billing has the following benefits:
One bill – You get one bill for multiple accounts.
Easy tracking – You can track the charges across multiple accounts and download the combined cost and usage data.
Combined usage – You can combine the usage across all accounts in the organization to share the volume pricing discounts, Reserved Instance discounts, and Savings Plans. This can result in a lower charge for your project, department, or company than with individual standalone accounts.
No extra fee – Consolidated billing is offered at no additional cost.
With consolidated billing, the management account is billed for all charges of the member accounts. However, unless the organization is changed to support all features in the organization (not consolidated billing features only) and member accounts are explicitly restricted by policies, each member account is otherwise independent of the other member accounts. For example, the owner of a member account can sign up for AWS services, access resources, and use AWS Premium Support unless the management account restricts those actions. Each account owner continues to use their own IAM user name and password, with account permissions assigned independently of other accounts in the organization.
The option that says: Create an AWS Organization on the parent company's AWS account and invite all the subsidiary AWS accounts. Ensure that All features set is enabled. is correct. Consolidated billing allows the management account owner to have only one invoice for all accounts in the organization. And by default, each member account is independent of the other member accounts, so each subsidiary has full administration privileges unless controlled by the parent account.
The option that says: Define Service Control Policy (SCP) documents to only allow services and features defined by the parent company policy. Apply the necessary SCP for each subsidiary AWS account is correct. This satisfies the requirement for restricting access to the subsidiary AWS accounts as defined by the parent AWS account.
The option that says: Create an AWS account for the parent company and create an AWS Organization for each of the subsidiaries. Invite each of the subsidiary AWS accounts to join their respective AWS organization on the parent company is incorrect. You only have to create a single organization and link the member accounts.
The option that says: Define service quotas that will restrict services and features depending on the permissions set by the parent company policy. Apply this service quota to each subsidiary AWS account is incorrect. Applying service quota will not restrict the member accounts from using AWS services or features that are not permitted by the parent account. Service quota only restricts how much you can use for a particular service.
The option that says: Create an AWS account for the parent company and create a single AWS Organization with the Consolidated Billing features set. Invite each of the subsidiary AWS accounts to join the AWS Organization of the parent company is incorrect. Although creating an AWS organization is necessary, using only the Consolidated Billing features set is not enough to satisfy the requirements. Even though "All Features" is enabled by default, this will be overridden if you enable only the "Consolidated Billing" feature. This means that you cannot use the SCP to your member AWS accounts anymore. You need to enable "All features" on the AWS Organization to be able to create and apply SCP for each subsidiary.
References:
Check out this AWS Organizations Cheat Sheet:
Question 19: Skipped
A multinational manufacturing company has multiple AWS accounts in multiple AWS regions across North America, Europe, and Asia. The solutions architect has been tasked to set up AWS Organizations to centrally manage policies and have full administrative control across the multiple AWS accounts owned by the company, without requiring custom scripts and manual processes.
Which of the following options is the recommended implementation to achieve this requirement with the LEAST effort?
Use AWS Control Tower from the master account and enroll all the member AWS accounts of the company. AWS Control Tower will automatically provision the needed IAM permissions to have full administrative control across all member accounts.
Set up AWS Organizations by sending an invitation to the master account of your organization from each of the member accounts of the company. Create an OrganizationAccountAccessRole IAM role in the member account and grant permission to the master account to assume the role.
Set up AWS Organizations by establishing cross-account access from the master account to all member AWS accounts of the company. The master account will automatically have full administrative control across all member accounts.
Set up AWS Organizations by sending an invitation to all member accounts of the company from the master account of your organization. Create an OrganizationAccountAccessRole IAM role in the member account and grant permission to the master account to assume the role.
(Correct)
Explanation
After you create an Organization and verify that you own the email address associated with the master account, you can invite existing AWS accounts to join your organization. When you invite an account, AWS Organizations sends an invitation to the account owner, who decides whether to accept or decline the invitation. You can use the AWS Organizations console to initiate and manage invitations that you send to other accounts. You can send an invitation to another account only from the master account of your organization.
If you are the administrator of an AWS account, you also can accept or decline an invitation from an organization. If you accept, your account becomes a member of that organization. Your account can join only one organization, so if you receive multiple invitations to join, you can accept only one.
When an invited account joins your organization, you do not automatically have full administrator control over the account, unlike created accounts. If you want the master account to have full administrative control over an invited member account, you must create the OrganizationAccountAccessRole IAM role in the member account and grant permission to the master account to assume the role.
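A minimal sketch of that role, created in the invited member account (the management account ID is a placeholder):

import json
import boto3

MANAGEMENT_ACCOUNT_ID = "999988887777"  # placeholder management (master) account ID

# Trust policy that lets principals in the management account assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MANAGEMENT_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="OrganizationAccountAccessRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="OrganizationAccountAccessRole",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)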
Therefore, the correct answer is: Set up AWS Organizations by sending an invitation to all member accounts of the company from the master account of your organization. Create an OrganizationAccountAccessRole IAM role in the member account and grant permission to the master account to assume the role.
The option that says: Set up AWS Organizations by establishing cross-account access from the master account to all member AWS accounts of the company. The master account will automatically have full administrative control across all member accounts is incorrect. Cross-account access is primarily used for scenarios where you need to grant your IAM users permission to switch to roles within your AWS account or to roles defined in other AWS accounts that you own.
The option that says: Set up AWS Organizations by sending an invitation to the master account of your organization from each of the member accounts of the company. Create an OrganizationAccountAccessRole IAM role in the member account and grant permission to the master account to assume the role is incorrect. It entails a lot of effort to send an individual invitation to the master account from each of the member accounts of the company. It's stated in the scenario that you should achieve this requirement with the LEAST effort, and you can do this by sending an invitation to all member accounts of the company from the master account of your organization.
The option that says: Use AWS Control Tower from the master account and enroll all the member AWS accounts of the company. AWS Control Tower will automatically provision the needed IAM permissions to have full administrative control across all member accounts is incorrect. AWS Control Tower can be used to set up and manage multiple AWS accounts. However, it will not automatically provision IAM permissions for all member accounts.
References:
Check out this AWS Organizations Cheat Sheet:
Question 25: Skipped
A multinational corporation has recently acquired a smaller company. The solutions architect was instructed to consolidate the multiple AWS accounts of both entities using AWS Organizations. The solutions architect has set up the required service control policies (SCPs) to simplify the process of controlling access permissions for each individual account and Organizational Unit (OU). However, one account is having trouble creating a new S3 bucket, and the solutions architect must investigate the cause of this issue. The account has the following SCP attached:
Each IAM user of the account has the following IAM policy attached:
Based on the provided SCP and IAM policy, which of the following options could be the possible root cause of this problem?
The IAM policy is the root cause because you have denied user permissions to execute any S3-related actions.
The SCP is the root cause since it does not explicitly allow the required action that would enable the account to create an S3 bucket.
(Correct)
The SCP is the root cause because it does not support whitelisting actions of the AWS resources.
Both the IAM policy and the SCP are the problem. The SCP should explicitly allow S3 bucket creation in its policy and the IAM policy should exactly match the permissions of the SCP.
Explanation
A service control policy (SCP) is a policy that specifies the services and actions that users and roles can use in the specified AWS accounts. SCPs are similar to IAM permission policies except that they don't grant any permissions. Instead, SCPs specify the maximum permissions for an organization, organizational unit (OU), or account. When you attach an SCP to your organization root or an OU, the SCP limits permissions for entities in member accounts. Even if a user is granted full administrator permissions with an IAM permission policy, any access that is not explicitly allowed or that is explicitly denied by the SCPs affecting that account is blocked.
For example, if you assign an SCP that allows only database service access to your "database" account, then any user, group, or role in that account is denied access to any other service's operations. SCPs are available only when you enable all features in your organization.
By default, an SCP named FullAWSAccess is attached to every root, OU, and account. This default SCP allows all actions and all services. So in a new organization, until you start creating or manipulating the SCPs, all of your existing IAM permissions continue to operate as they did. As soon as you apply a new or modified SCP to a root or OU that contains an account, the permissions that your users have in that account become filtered by the SCP. Permissions that used to work might now be denied if they're not allowed by the SCP at every level of the hierarchy down to the specified account.
Therefore, the correct answer is: The SCP is the root cause since it does not explicitly allow the required action that would enable the account to create an S3 bucket. The default FullAWSAccess SCP was replaced, which means that you need to explicitly allow S3 actions for the account to be able to create buckets. By removing the default FullAWSAccess SCP, all actions for all services are now implicitly denied. To use SCPs as a whitelist, you must replace the AWS-managed FullAWSAccess SCP with an SCP that explicitly permits only those services and actions that you want to allow. Your custom SCP then overrides the implicit Deny with an explicit Allow for only those actions that you want to permit.
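As a rough illustration (the allowed services are arbitrary examples, not the actual SCP from the scenario), a replacement whitelist SCP that would let the account create S3 buckets again could look like this:

scp_whitelist = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowListedServicesOnly",
        "Effect": "Allow",
        "Action": ["s3:*", "ec2:*"],
        "Resource": "*",
    }],
}
# Anything not listed stays implicitly denied for the account because the
# default FullAWSAccess SCP has been replaced. The IAM user still needs an
# IAM policy that grants s3:CreateBucket for the request to succeed.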
The option that says: The SCP is the root cause because it does not support whitelisting actions of the AWS resources is incorrect. The SCP format is correct, and it definitely does support the whitelisting feature.
The option that says: The IAM policy is the root cause because you have denied user permissions to execute any S3-related actions is incorrect. The IAM policy allows the user to perform all actions under Amazon S3. The included Deny statement only affects actions on AWS services other than S3 and, therefore, should not hinder you from creating S3 buckets. Take note that NotAction/NotResource is an advanced policy element that explicitly matches everything except the list of actions/resources that you specified.
The option that says: Both the IAM policy and the SCP are the problem. The SCP should explicitly allow S3 bucket creation in its policy and the IAM policy should exactly match the permissions of the SCP is incorrect. The IAM policy does not necessarily need to match the service control policy. Although it is true that you would have to explicitly allow S3 bucket creation on the SCP, an SCP does not grant any permissions. Users and roles must still be granted permissions with appropriate IAM permission policies. A user without any IAM permission policies has no access at all, even if the applicable SCPs allow all services and all actions.
References:
Service Control Policies (SCP) vs IAM Policies:
Question 27: Skipped
A company uses Amazon WorkSpaces to improve the productivity and security of its remote workers. Hundreds of remote workers log in to the virtual desktop service using the Amazon WorkSpaces client application on a regular basis. Users have reported that they cannot log in to their virtual desktops even though they have the correct credentials.
Upon investigation, the Solutions Architect discovered that the filesystem storing the user profiles has reached its capacity, which is the reason why users cannot establish a new session in Amazon WorkSpaces. The environment is configured with a 10 TB Amazon FSx for Windows File Server file system to store the user profiles.
Which of the following options should the Solutions Architect implement to solve the issue and prevent it from happening again?
Create a new Amazon FSx for Windows File Server file system with a larger capacity. Create a script to copy all user profiles to the new file system. Create an Amazon CloudWatch metric to monitor the FreeStorageCapacity of the filesystem and send a notification via Amazon SNS before it reaches capacity.
Create an Amazon CloudWatch Alarm using the FreeStorageCapacity metric to monitor the file system. Once triggered, use AWS Step Functions as the target. Run the Step Functions state machine to create a new Amazon FSx for Windows File Server file system and migrate the user profiles.
Create an Amazon CloudWatch Alarm to monitor the FreeStorageCapacity metric of the file system. Write an AWS Lambda Function to increase the capacity of the Amazon FSx for Windows File Server file system using the update-file-system command. Utilize Amazon EventBridge to invoke this Lambda function when the metric threshold is reached.
(Correct)
From the Amazon FSx console, select the desired file system to edit its attributes. Enable the option Dynamically Allocate to allow the file system to scale depending on the size of the data stored. This will present a large capacity drive to Amazon WorkSpaces clients and will grow automatically as users add more data to their profiles.
Explanation
Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system. With file storage on Amazon FSx, the code, applications, and tools that Windows developers and administrators use today can continue to work unchanged. Windows applications and workloads ideal for Amazon FSx include business applications, home directories, web serving, content management, data analytics, software build setups, and media processing workloads.
As you need additional storage, you can increase the storage capacity that is configured on your FSx for Windows File Server file system. You can do so using the Amazon FSx console, the Amazon FSx API, or the AWS Command Line Interface (AWS CLI).
You can only increase the amount of storage capacity for a file system; you cannot decrease storage capacity. When you increase the storage capacity of your Amazon FSx file system, behind the scenes, Amazon FSx adds a new, larger set of disks to your file system. Amazon FSx then runs a storage optimization process in the background to transparently migrate data from the old disks to the new disks.
Amazon FSx follows a four-step process when it increases a file system's storage capacity.
EventBridge (CloudWatch Events) helps you to respond to state changes in your AWS resources. With EventBridge (CloudWatch Events), you can create rules that match selected events in the stream and route them to your AWS Lambda function to take action.
Using the update-file-system CLI command or the equivalent UpdateFileSystem API call in an AWS SDK, you can programmatically increase the size of the FSx file system. You can use Amazon CloudWatch to monitor the metrics of the file system and trigger a Lambda function to perform the action.
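A hedged sketch of such a Lambda function (the file system ID and the 20% growth factor are assumptions for illustration):

import boto3

fsx = boto3.client("fsx")

FILE_SYSTEM_ID = "fs-0123456789abcdef0"  # placeholder FSx file system ID

def lambda_handler(event, context):
    # Look up the current storage capacity of the FSx for Windows file system.
    file_system = fsx.describe_file_systems(
        FileSystemIds=[FILE_SYSTEM_ID]
    )["FileSystems"][0]
    current_capacity = file_system["StorageCapacity"]

    # Grow by 20%; Amazon FSx requires the new value to be at least 10% larger.
    new_capacity = int(current_capacity * 1.2)
    fsx.update_file_system(
        FileSystemId=FILE_SYSTEM_ID,
        StorageCapacity=new_capacity,
    )
    return {"previous_gib": current_capacity, "requested_gib": new_capacity}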
Therefore, the correct answer is: Create an Amazon CloudWatch Alarm to monitor the FreeStorageCapacity metric of the file system. Write an AWS Lambda Function to increase the capacity of the Amazon FSx for Windows File Server file system using the update-file-system command. Utilize Amazon EventBridge to invoke this Lambda function when the metric threshold is reached.
The option that says: Create a new Amazon FSx for Windows File Server file system with a larger capacity. Create a script to copy all user profiles to the new file system. Create an Amazon CloudWatch metric to monitor the FreeStorageCapacity of the filesystem and send a notification via Amazon SNS before it reaches capacity is incorrect. You don't have to manually copy all user data to a new volume. You can increase the file system and Amazon FSx automatically migrates the data to a larger volume in the background.
The option that says: Create an Amazon CloudWatch Alarm using the FreeStorageCapacity metric to monitor the file system. Once triggered, use AWS Step Functions as the target. Run the Step Functions state machine to create a new Amazon FSx for Windows File Server file system and migrate the user profiles is incorrect. You don't need a Step Functions state machine to migrate the user profiles. Amazon FSx automatically migrates the data in the background when you increase the file system size.
The option that says: From the Amazon FSx console, select the desired file system to edit its attributes. Enable the option Dynamically Allocate to allow the file system to scale depending on the size of the data stored. This will present a large capacity drive to Amazon WorkSpaces clients and will grow automatically as users add more data to their profiles is incorrect. There is no option to Dynamically Allocate the file system size. You can manually adjust the file system size using the Amazon FSx console, the Amazon FSx API, or the AWS CLI.
References:
Check out these Amazon FSx and Amazon CloudWatch Cheat Sheets:
Question 34: Skipped
A leading financial company owns multiple AWS accounts that are consolidated under one AWS Organization. To properly manage all of the resources in your organization, the solutions architect has been tasked to ensure that the tags are always added when users create any resources across all the accounts.
Which of the following options are the recommended actions to achieve the company requirements? (Select TWO.)
Set up AWS Systems Manager Automation to automatically add tags to your provisioned resources.
Set up the CloudFormation Resource Tags property to apply tags to certain resource types upon creation.
(Correct)
Set up AWS generated tags by activating it in the Billing and Cost Management console of the member account.
Set up AWS Config to add the corresponding tags to your resources right from the very moment that they are created.
Set up AWS Service Catalog to tag the provisioned resources with corresponding unique identifiers for portfolio, product, and users.
(Correct)
Explanation
AWS offers a variety of tools to help you implement proactive tag governance practices by ensuring that tags are consistently applied when resources are created.
AWS CloudFormation provides a common language for provisioning all the infrastructure resources in your cloud environment. CloudFormation templates are simple text files that create AWS resources in an automated and secure manner. When you create AWS resources using AWS CloudFormation templates, you can use the CloudFormation Resource Tags property to apply tags to certain resource types upon creation.
AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application environments. AWS Service Catalog enables a self-service capability for users, allowing them to provision the services they need while also helping you to maintain consistent governance – including the application of required tags and tag values.
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources. When you create IAM policies, you can specify resource-level permissions, which include specific permissions for creating and deleting tags. In addition, you can include condition keys, such as aws:RequestTag and aws:TagKeys, which will prevent resources from being created if specific tags or tag values are not present.
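A minimal sketch of an IAM policy statement using these condition keys (the tag keys and value are hypothetical); it allows launching EC2 instances only when the required tags are supplied at creation time:

require_tags_on_launch = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {"aws:RequestTag/CostCenter": "analytics"},
            "ForAllValues:StringEquals": {"aws:TagKeys": ["CostCenter", "Owner"]},
        },
    }],
}
# Note: RunInstances also needs permissions on related resources (AMI, subnet,
# security group, and so on); this fragment only shows the tagging condition.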
Therefore, the correct answers are:
- Set up AWS Service Catalog to tag the provisioned resources with corresponding unique identifiers for portfolio, product, and users.
- Set up the CloudFormation Resource Tags property to apply tags to certain resource types upon creation.
The option that says: Set up AWS Config to add the corresponding tags to your resources right from the very moment that they are created is incorrect. Although you can use AWS Config to determine if your resources have tags or not, it does not have the capability to immediately add the corresponding tags to your resources across multiple AWS accounts by default. You usually issue the TagResource AWS Config API action to tag a resource in your current AWS account and not for multiple accounts. AWS Config supports Multi-Account Multi-Region Data Aggregation but you have to manually create an Aggregator, which is not mentioned in this option. You can use AWS Service Catalog in conjunction with AWS Config to satisfy the requirement.
The option that says: Set up AWS generated tags by activating it in the Billing and Cost Management console of the member account is incorrect. Although you can use the AWS generated tags feature in this scenario, you have to activate it using the master account and not on the member account.
The option that says: Set up AWS Systems Manager Automation to automatically add tags to your provisioned resources is incorrect because you cannot automatically add tags to your provisioned resources using AWS Systems Manager Automation.
References:
Check out this AWS Service Catalog Cheat Sheet:
Question 72: Skipped
A leading insurance firm has several new members in its development team. The solutions architect was instructed to provision access for certain IAM users who perform application development tasks in the VPC. The access should allow the users to create and configure various AWS resources, such as deploying Windows EC2 servers. In addition, the users should have permissions in AWS Organizations to view information about their organization, including the master account email and organization limitations.
Which of the following should the solutions architect implement to follow the standard security advice of granting the least privilege?
Create a new IAM role and attach the AdministratorAccess AWS managed policy to it. Assign the IAM Role to the IAM users.
Attach the PowerUserAccess AWS managed policy to the IAM users.
(Correct)
Attach the AdministratorAccess AWS managed policy to the IAM users.
Create a new IAM role and attach the SystemAdministrator AWS managed policy to it. Assign the IAM Role to the IAM users.
Explanation
AWS managed policies for job functions are designed to closely align to common job functions in the IT industry. You can use these policies to easily grant the permissions needed to carry out the tasks expected of someone in a specific job function. These policies consolidate permissions for many services into a single policy that's easier to work with than having permissions scattered across many policies.
There are a lot of available AWS Managed Policies that you can directly attach to your IAM Users, such as Administrator, Billing, Database Administrator, Data Scientist, Developer Power User, Network Administrator, Security Auditor, System Administrator and many others.
For Administrators, you can use the AWS managed policy name: AdministratorAccess if you want to provision full access to a specific IAM User. This will enable the user to delegate permissions to every service and resource in AWS as this policy grants all actions for all AWS services and for all resources in the account.
For Developer Power Users, you can use the AWS managed policy name: PowerUserAccess if you have users who perform application development tasks. This policy will enable them to create and configure resources and services that support AWS aware application development. The first statement of this policy uses the NotAction element to allow all actions for all AWS services and for all resources except AWS Identity and Access Management and AWS Organizations. The second statement grants IAM permissions to create a service-linked role. This is required by some services that must access resources in another service, such as an Amazon S3 bucket. It also grants Organizations permissions to view information about the user's organization, including the master account email and organization limitations.
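Attaching the managed policy is a one-line operation; for example, with boto3 (the user name is a placeholder):

import boto3

iam = boto3.client("iam")

# Attach the AWS managed PowerUserAccess policy to a developer's IAM user.
iam.attach_user_policy(
    UserName="dev-user-1",
    PolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)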
Therefore, the correct answer is: Attach the PowerUserAccess AWS managed policy to the IAM users.
The options that say: Attach the AdministratorAccess AWS managed policy to the IAM users and Create a new IAM role and attach the AdministratorAccess AWS managed policy to it. Assign the IAM Role to the IAM users are incorrect. Although an AdministratorAccess policy can meet the requirement, it is more suitable to attach a PowerUserAccess to the IAM users since this policy can provide the required access. Take note that you have to follow the standard security best practice of granting the least privilege. In addition, a managed policy can be directly attached to your IAM Users, which is one of the reasons why the latter option is incorrect.
The option that says: Create a new IAM role and attach the SystemAdministrator AWS managed policy to it. Assign the IAM Role to the IAM users is incorrect because the SystemAdministrator managed policy does not have AWS Organizations permissions to view information about the user's organization such as the master account email or the organization limitations. In this scenario, you have to use PowerUserAccess instead.
References:
Check out this AWS IAM Cheat Sheet:
Question 2: Skipped
An AWS Partner company hosts all its infrastructure on the AWS cloud. All resources are currently deployed in the us-east-1 region. The company plans to expand its business to include deployments in Europe and Asia. The solutions architect has been tasked to provision the needed resources on multiple regions across multiple AWS accounts under the company’s AWS Organization.
Which of the following options is the recommended solution to meet the company requirements?
Write infrastructure-as-code to maintain consistency. Create nested stacks with AWS CloudFormation templates and use global parameters to specify which target region and accounts to provision the needed resources.
Write infrastructure-as-code to maintain consistency. Use AWS Organizations to centrally orchestrate the deployment of AWS CloudFormation template from the central account. Use CloudFormation StackSets to simplify permissions and automatic provisioning of resources across multiple regions and accounts.
(Correct)
Use AWS Organizations to centrally manage the deployment of AWS CloudFormation template from the central account. Use AWS Control Tower as an orchestration layer deploying resources across multiple accounts and regions.
Write infrastructure-as-code to maintain consistency. Create AWS CloudFormation templates and create IAM policies to control multiple accounts. Use regional parameters when deploying CloudFormation templates across multiple regions to provision the needed resources.
Explanation
AWS CloudFormation StackSets allow you to roll out CloudFormation stacks over multiple AWS accounts and in multiple Regions with just a couple of clicks. With AWS Organizations, you can centrally manage multiple AWS accounts across diverse business needs including billing, access control, compliance, security and resource sharing.
Using AWS Organizations you can centrally orchestrate any AWS CloudFormation enabled service across multiple AWS accounts and regions. For example, you can deploy your centralized AWS Identity and Access Management (IAM) roles, provision Amazon Elastic Compute Cloud (Amazon EC2) instances or AWS Lambda functions across AWS Regions and accounts in your organization. CloudFormation StackSets simplify the configuration of cross-accounts permissions and allow for automatic creation and deletion of resources when accounts are joining or are removed from your Organization.
You can get started by enabling data sharing between CloudFormation and Organizations from the StackSets console. Once done, you will be able to use StackSets in the Organizations master account to deploy stacks to all accounts in your organization or in specific organizational units (OUs).
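A hedged boto3 sketch of this flow (the stack set name, template file, OU ID, and regions are placeholders):

import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder template file describing the resources to provision.
with open("app-infrastructure.yaml") as template_file:
    template_body = template_file.read()

cloudformation.create_stack_set(
    StackSetName="global-app-stack",
    TemplateBody=template_body,
    PermissionModel="SERVICE_MANAGED",  # let AWS Organizations handle cross-account permissions
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Deploy stack instances to every account in the target OU, across three regions.
cloudformation.create_stack_instances(
    StackSetName="global-app-stack",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-examp-12345678"]},
    Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
)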
Therefore, the correct answer is: Write infrastructure-as-code to maintain consistency. Use AWS Organizations to centrally orchestrate the deployment of AWS CloudFormation template from the central account. Use CloudFormation StackSets to simplify permissions and automatic provisioning of resources across multiple regions and accounts. With AWS Organizations, you can centrally orchestrate any AWS CloudFormation enabled service across multiple AWS accounts and regions.
The option that says: Write infrastructure-as-code to maintain consistency. Create AWS CloudFormation templates and create IAM policies to control multiple accounts. Use regional parameters when deploying CloudFormation templates across multiple regions to provision the needed resources is incorrect. This is not recommended as it entails more operational overhead. It is recommended to use CloudFormation StackSets to deploy stack across multiple regions and accounts in a single operation.
The option that says: Use AWS Organizations to centrally manage the deployment of AWS CloudFormation template from the central account. Use AWS Control Tower as an orchestration layer deploying resources across multiple accounts and regions is incorrect. It is possible to use AWS Control Tower as an orchestration layer for multi-account environments. However, for this solution to work, you still need to create CloudFormation StackSets for deployment on multiple regions.
The option that says: Write infrastructure-as-code to maintain consistency. Create nested stacks with AWS CloudFormation templates and use global parameters to specify which target region and accounts to provision the needed resources is incorrect. This is not possible; if you want to deploy CloudFormation stacks to multiple regions and accounts in a single operation, you should use CloudFormation StackSets.
References:
Check out these AWS Organizations and AWS CloudFormation Cheat Sheets:
Question 25: Skipped
A large company has multiple AWS accounts with multiple IAM Users that launch different types of Amazon EC2 instances and EBS volumes every day. As a result, most accounts quickly hit the service limit and IAM users can no longer create any new instances. When cleaning up the AWS accounts, the solutions architect noticed that the majority of the instances and volumes are untagged. Therefore, it is difficult to pinpoint the owner of these resources and verify if they are safe to terminate. Because of this, the management had issued a new protocol that requires adding a predefined set of tags before anyone can launch their EC2 instances.
Which of the following options is the simplest way to enforce this new requirement?
Configure AWS Organizations to group different accounts into separate Organizational Units (OU) depending on the business function. Create a Service Control Policy that restricts launching any AWS resources without a tag by including the Condition element in the policy which uses the ForAllValues qualifier and the aws:TagKeys condition. This policy will require its principals to tag resources during creation. Apply the SCP to the OU which will automatically cascade the policy to individual member accounts.
(Correct)
Apply an IAM policy to the individual member accounts of the OU that includes a Condition element in the policy containing the ForAllValues qualifier and the aws:TagKeys condition. This policy will require its principals to attach specific tags to their resources during creation.
Configure AWS Organizations to group different accounts into separate Organizational Units (OU) depending on the business function. Create a rule in AWS Config requiring users to tag specific resources and raise an alert whenever the rule is violated. The Config Rule should allow a user to launch EC2 instances only if the user adds all the tags defined in the rule. If the user applies any other tag then the action is denied.
Configure AWS Organizations to group different accounts into separate Organizational Units (OU) depending on the business function. Create a rule using AWS Systems Manager requiring users to tag specific resources and raise an alert whenever the rule is violated. This will allow a user to launch EC2 instances only if certain tags were defined. If the user applies any other tag then the action is denied.
Explanation
You can specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources. Using this principle, you can require users to tag specific resources by applying conditions to their IAM policy. Using AWS Organizations, you can consolidate all of your AWS accounts and group the business units into separate Organizational Units (OUs) with a custom service control policy (SCP).
You can configure your IAM policy to allow a user to launch an EC2 instance and create an EBS volume only if the user applies all the tags that are defined in the policy using the ForAllValues qualifier. If the user applies any tag that's not included in the policy then the action is denied. To enforce case sensitivity, use the condition aws:TagKeys.
You can use organizational units (OUs) to group accounts together to administer as a single unit. This greatly simplifies the management of your accounts. For example, you can attach a policy-based control to an OU, and all accounts within the OU automatically inherit the policy. You can create multiple OUs within a single organization, and you can create OUs within other OUs.
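One common deny-based variant of this pattern, sketched below as an SCP statement (the Owner tag key is a hypothetical example), blocks launching instances or creating volumes whenever the required tag is missing; a ForAllValues:StringEquals condition on aws:TagKeys can additionally restrict which tag keys are accepted:

deny_untagged_resources_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUntaggedEc2Resources",
        "Effect": "Deny",
        "Action": ["ec2:RunInstances", "ec2:CreateVolume"],
        "Resource": [
            "arn:aws:ec2:*:*:instance/*",
            "arn:aws:ec2:*:*:volume/*",
        ],
        "Condition": {
            # Deny the request when the Owner tag is not supplied at creation.
            "Null": {"aws:RequestTag/Owner": "true"},
        },
    }],
}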
Therefore, the correct answer is: Configure AWS Organizations to group different accounts into separate Organizational Units (OU) depending on the business function. Create a Service Control Policy that restricts launching any AWS resources without a tag by including the Condition element in the policy which uses the ForAllValues qualifier and the aws:TagKeys condition. This policy will require its principals to tag resources during creation. Apply the SCP to the OU which will automatically cascade the policy to individual member accounts.
The option that says: Apply an IAM policy to the individual member accounts of the OU that includes a Condition element in the policy containing the ForAllValues qualifier and the aws:TagKeys condition. This policy will require its principals to attach specific tags to their resources during creation is incorrect as it requires a lot of effort to implement the policy in each and every AWS account of the organization. You should set up AWS Organizations, group different accounts into separate Organizational Units (OU), and use SCP instead.
The option that says: Configure AWS Organizations to group different accounts into separate Organizational Units (OU) depending on the business function. Create a rule in AWS Config requiring users to tag specific resources and raise an alert whenever the rule is violated. The Config Rule should allow a user to launch EC2 instances only if the user adds all the tags defined in the rule. If the user applies any other tag then the action is denied is incorrect. AWS Config only audits and evaluates if your instance and volume configurations match the rules you have created. Unlike an IAM policy, it does not permit nor restrict users from performing certain actions.
The option that says: Configure AWS Organizations to group different accounts into separate Organizational Units (OU) depending on the business function. Create a rule using AWS Systems Manager requiring users to tag specific resources and raise an alert whenever the rule is violated. This will allow a user to launch EC2 instances only if certain tags were defined. If the user applies any other tag then the action is denied is incorrect because you cannot create a rule using AWS Systems Manager requiring users to tag specific resources and raise an alert whenever the rule is violated. You have to use an IAM policy in order to do this.
References:
Check out this AWS IAM Cheat Sheet:
Service Control Policies (SCP) vs IAM Policies:
Comparison of AWS Services Cheat Sheets:
Question 70: Skipped
A multinational bank has recently set up AWS Organizations to manage its several AWS accounts from their various business units. The Senior Solutions Architect attached the SCP below to an Organizational Unit (OU) to define the services that its member accounts can use:
In one of the member accounts under that OU, an IAM user tried to create a new S3 bucket but was getting a permission denied error.
Which of the following options is the most likely cause of this issue?
An IAM policy that allows the use of S3 and EC2 services should be the one attached in the OU instead of an SCP.
The IAM user in the member account does not have IAM policies that explicitly grant EC2 or S3 service actions.
(Correct)
You should use the root user of the account to be able to create the new S3 bucket.
Accounts within the OU do not automatically inherit the policy attached to it. You still have to manually attach the SCP to the individual AWS accounts of the OU.
Explanation
A service control policy (SCP) determines what services and actions can be delegated by administrators to the users and roles in the accounts that the SCP is applied to. An SCP does not grant any permissions. Instead, SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU). The SCP limits permissions for entities in member accounts, including each AWS account root user.
If the SCP allows the actions for a service, the administrator of the account can grant permissions for those actions to the users and roles in that account, and the users and roles can perform those actions if the administrators grant them those permissions. If the SCP denies actions for a service, the administrators in that account can't effectively grant permissions for those actions, and the users and roles in the account can't perform the actions even if an administrator grants them.
Users and roles must still be granted permissions using IAM permission policies attached to them or to groups. The SCPs filter the permissions granted by such policies, and the user can't perform any actions that the applicable SCPs don't allow. Actions allowed by the SCPs can be used only if they are granted to the user or role by one or more IAM permission policies. Take note that in this scenario, the IAM user in the member account does not have any IAM policy that explicitly grants EC2 or S3 service actions.
Hence, the correct answer is: The IAM user in the member account does not have IAM policies that explicitly grant EC2 or S3 service actions.
The option that says: Accounts within the OU do not automatically inherit the policy attached to it. You still have to manually attach the SCP to the individual AWS accounts of the OU is incorrect because an SCP attached to an OU is automatically inherited by all accounts within that same OU. The main cause of this issue is that the account is missing an IAM policy that explicitly grants EC2 or S3 service actions to the IAM user.
The option that says: An IAM policy that allows the use of S3 and EC2 services should be the one attached in the OU instead of an SCP is incorrect because you cannot directly assign an IAM policy to an OU. In addition, there is no attached IAM policy that allows EC2 or S3 service actions to the IAM user.
The option that says: You should use the root user of the account to be able to create the new S3 bucket is incorrect because SCPs affect the root user along with all IAM users and standard IAM roles in any affected account. The issue lies in the missing IAM policy of the account and not with the SCP, OU, or its AWS Organizations settings.
References:
Check out these AWS Cheat Sheets:
AWS Identity and Access Management (IAM) is integrated with AWS CloudTrail, a service that logs AWS events made by or on behalf of your AWS account. CloudTrail logs authenticated AWS API calls as well as AWS sign-in events, and collects this event information in files that are delivered to Amazon S3 buckets.