25 Mar 2024 morning study
Question 43: Correct
A multinational financial company has a suite of web applications hosted in multiple VPCs in various AWS regions. As part of their security compliance, the company’s Solutions Architect has been tasked to set up a logging solution to track all of the changes made to their AWS resources in all regions, which host their enterprise accounting systems. The company is using different AWS services such as Amazon EC2 instances, Amazon S3 buckets, CloudFront web distributions, and AWS IAM. The logging solution must ensure the security, integrity, and durability of your log data in order to pass the compliance requirements. [-> CloudTrail] In addition, it should provide an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and API calls.
In this scenario, which of the following options is the best solution to use?
Create a new Amazon CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
Create a new AWS CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
(Correct)
Create a new AWS CloudTrail trail in a new S3 bucket using the AWS CLI and also pass the --no-include-global-service-events and --is-multi-region-trail parameter then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
Create a new Amazon CloudWatch trail in a new S3 bucket using the AWS CLI and also pass the --include-global-service-events parameter then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
Explanation
The company requires a secure and durable logging solution that tracks the activities of all AWS resources (such as EC2, S3, CloudFront, and IAM) in all regions. CloudTrail can be used for this case with a multi-region trail enabled. However, a multi-region trail by itself only covers the activities of regional services (EC2, S3, RDS, etc.); global services such as IAM, CloudFront, AWS WAF, and Route 53 are only captured when global service events are included.
The option that says: Create a new AWS CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is correct because it provides security, integrity, and durability to your log data. In addition, it has the --include-global-service-events parameter enabled, which will also include activity from global services such as IAM, Route 53, AWS WAF, and CloudFront.
The option that says: Create a new Amazon CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch.
The option that says: Create a new Amazon CloudWatch trail in a new S3 bucket using the AWS CLI and also pass the --include-global-service-events parameter then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch. In addition, the --is-multi-region-trail parameter is also missing in this setup.
The option that says: Create a new AWS CloudTrail trail in a new S3 bucket using the AWS CLI and also pass the --no-include-global-service-events and --is-multi-region-trail parameter then encrypt log files using KMS encryption. Enable Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect. The --is-multi-region-trail parameter alone is not enough; you also need to add the --include-global-service-events parameter to track the global service events. The --no-include-global-service-events parameter actually prevents CloudTrail from publishing events from global services such as IAM to the log files.
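For reference, a minimal AWS CLI sketch of the correct option; the trail name, bucket name, KMS alias, and account/MFA details are hypothetical, and the S3 bucket is assumed to already exist with the required CloudTrail bucket policy and a KMS key policy that allows CloudTrail to use the key.

# Create a multi-region trail that also records global service events (IAM, CloudFront, etc.)
aws cloudtrail create-trail \
    --name compliance-trail \
    --s3-bucket-name compliance-trail-logs \
    --is-multi-region-trail \
    --include-global-service-events \
    --kms-key-id alias/cloudtrail-logs

# Start delivering log files to the bucket
aws cloudtrail start-logging --name compliance-trail

# Enable MFA Delete on the log bucket (must be run by the root user with its MFA device)
aws s3api put-bucket-versioning \
    --bucket compliance-trail-logs \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456"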
Question 49: Correct
A company runs a sports web portal that covers the latest cricket news in Australia. The solutions architect manages the main AWS account which has resources in multiple AWS regions. The web portal is hosted on a fleet of on-demand EC2 instances and an RDS database which are also deployed to other AWS regions. The IT Security Compliance Officer has given the solutions architect the task of developing a reliable and durable logging solution to track changes made to all of your EC2, IAM, and RDS resources in all of the AWS regions. The solution must ensure the integrity and confidentiality of the log data.
Which of the following solutions would be the best option to choose?
Create a new trail in CloudTrail and assign it a new S3 bucket to store the logs. Configure AWS SNS to send delivery notifications to your management system. Secure the S3 bucket that stores your logs using IAM roles and S3 bucket policies.
Create three new CloudTrail trails, each with its own S3 bucket to store the logs: one for the AWS Management console, one for AWS SDKs, and one for command line tools. Then create IAM roles and S3 bucket policies for the S3 buckets storing your logs.
Create a new trail in AWS CloudTrail with the global services option selected, and create one new Amazon S3 bucket to store the logs. Create IAM roles, S3 bucket policies, and enable Multi Factor Authentication (MFA) Delete on the S3 bucket storing your logs.
(Correct)
Create a new trail in AWS CloudTrail with the global services option selected, and assign it an existing S3 bucket to store the logs. Create S3 ACLs and enable Multi Factor Authentication (MFA) delete on the S3 bucket storing your logs.
Explanation
For most services, events are recorded in the AWS Region where the action occurred. For global services such as AWS Identity and Access Management (IAM), AWS STS, Amazon CloudFront, and Route 53, events are delivered to any trail that includes global services (the IncludeGlobalServiceEvents flag). AWS CloudTrail should be your top choice for scenarios where you need to track the changes made by any AWS service, resource, or API.
Therefore, the correct answer is: Create a new trail in AWS CloudTrail with the global services option selected, and create one new Amazon S3 bucket to store the logs. Create IAM roles, S3 bucket policies, and enable Multi Factor Authentication (MFA) Delete on the S3 bucket storing your logs. It uses AWS CloudTrail with the global services option (IncludeGlobalServiceEvents flag) enabled, a single new S3 bucket protected by IAM roles and bucket policies for confidentiality, and MFA Delete on that bucket to maintain data integrity.
The option that says: Create a new trail in AWS CloudTrail with the global services option selected, and assign it an existing S3 bucket to store the logs. Create S3 ACLs and enable Multi Factor Authentication (MFA) delete on the S3 bucket storing your logs is incorrect. Because an existing S3 bucket is used, other users may already have access to it, so confidentiality is not maintained, and the solution does not use IAM roles.
The option that says: Create three new CloudTrail trails, each with its own S3 bucket to store the logs: one for the AWS Management console, one for AWS SDKs, and one for command line tools. Then create IAM roles and S3 bucket policies for the S3 buckets storing your logs is incorrect. Although it uses AWS CloudTrail, the Global Option is not enabled, and three S3 buckets are not needed.
The option that says: Create a new trail in CloudTrail and assign it a new S3 bucket to store the logs. Configure AWS SNS to send delivery notifications to your management system. Secure the S3 bucket that stores your logs using IAM roles and S3 bucket policies is incorrect. Although it uses AWS CloudTrail, the Global Option is not enabled.
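As a hedged sketch of the S3 side of the chosen answer (the bucket name and account ID below are made up), the standard CloudTrail bucket policy only lets the CloudTrail service write log files, while everything else is controlled through IAM roles, bucket policies, and MFA Delete:

aws s3api put-bucket-policy --bucket cricket-portal-trail-logs --policy '{
  "Version": "2012-10-17",
  "Statement": [
    { "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::cricket-portal-trail-logs" },
    { "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::cricket-portal-trail-logs/AWSLogs/111122223333/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } }
  ]
}'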
Question 54: Incorrect
A company has a fitness tracking app that accompanies its smartwatch. The primary customers are North American and Asian users. The application is read-heavy [-> Read Replicas] as it pings the servers at regular intervals for user authorization [-> Cognito]. The company wants the infrastructure to have the following capabilities:
- The application must be fault-tolerant to problems in any Region. [-> Multi-region]
- The database writes must be highly-available in a single Region. [ -> Read Replica ?]
- The application tier must be able to read the database on multiple Regions. [-> Read Replica]
- The application tier must be resilient in each Region. [-> multi-region?]
- Relational database semantics must be reflected in the application. [ -> RDS, Aurora]
Which of the following options must the Solutions Architect implement to meet the company requirements? (Select TWO.)
Create a geoproximity routing policy on Amazon Route 53 to control traffic and direct users to their closest regional endpoint. Combine this with a multivalue answer routing policy with health checks to direct users to a healthy region at any given time.
(Incorrect)
Create a geolocation routing policy on Amazon Route 53 to point the global users to their designated regions. Combine this with a failover answer routing policy with health checks to direct users to a healthy region at any given time.
(Correct)
Deploy the application tier on an Auto Scaling group of EC2 instances for each Region in an active-active configuration. Create a cluster of Amazon Aurora global database in both Regions. Configure the application to use the in-Region Aurora database endpoint for the read/write operations. Create snapshots of the application servers regularly. Store the snapshots in Amazon S3 buckets in both regions.
(Correct)
Deploy the application tier on an Auto Scaling group of EC2 instances for each Region. Create an RDS for MySQL database on each region. Configure the application to perform read/write operations on the local RDS. Enable cross-Region replication for the database servers. Create snapshots of the application and database servers regularly. Store the snapshots in Amazon S3 buckets in both regions.
Deploy the application tier on an Auto Scaling group of EC2 instances for each Region. Create an RDS for MySQL database on each region. Configure the application to perform read/write operations on the local RDS. Enable Multi-AZ failover support for the EC2 Auto Scaling group and the RDS database. Enable cross-Region replication for the database servers.
Explanation
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.
By using an Amazon Aurora global database, you can have a single Aurora database that spans multiple AWS Regions to support your globally distributed applications.
An Aurora global database consists of one primary AWS Region where your data is mastered, and up to five read-only secondary AWS Regions. You issue write operations directly to the primary DB cluster in the primary AWS Region. Aurora replicates data to the secondary AWS Regions using dedicated infrastructure, with latency typically under a second.
You can also change the configuration of your Aurora global database while it's running to support various use cases. For example, you might want the read/write capabilities to move from one Region to another, say, in different time zones, to 'follow the sun.' Or, you might need to respond to an outage in one Region. With Aurora global database, you can promote one of the secondary Regions to the primary role to take full read/write workloads in under a minute.
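As a rough CLI sketch of provisioning such a global cluster (the identifiers, Regions, and credentials below are illustrative assumptions; DB instances would still be added to each cluster afterwards with create-db-instance):

# Create the global database container
aws rds create-global-cluster \
    --global-cluster-identifier fitness-global \
    --engine aurora-mysql

# Primary (read/write) cluster in the North America Region
aws rds create-db-cluster \
    --region us-west-2 \
    --db-cluster-identifier fitness-na \
    --engine aurora-mysql \
    --master-username admin --master-user-password 'example-password' \
    --global-cluster-identifier fitness-global

# Read-only secondary cluster in the Asia Pacific Region (credentials are inherited from the primary)
aws rds create-db-cluster \
    --region ap-northeast-1 \
    --db-cluster-identifier fitness-ap \
    --engine aurora-mysql \
    --global-cluster-identifier fitness-global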
On Amazon Route 53, after you create a hosted zone for your domain, such as tutorialsdojo.com, you can create records to tell the Domain Name System (DNS) how you want traffic to be routed for that domain. You can create a record that points to the DNS name of your Application Load Balancer on AWS.
When you create a record, you choose a routing policy, which determines how Amazon Route 53 responds to queries:
Simple routing policy – Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website.
Failover routing policy – Use when you want to configure active-passive failover.
Geolocation routing policy – Use when you want to route traffic based on the location of your users.
Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
Latency routing policy – Use when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the best latency.
Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify.
You can use Route 53 health checks to configure active-active and active-passive failover configurations. You configure active-active failover using any routing policy (or combination of routing policies) other than failover, and you configure active-passive failover using the failover routing policy.
The option that says: Create a geolocation routing policy on Amazon Route 53 to point the global users to their designated regions. Combine this with a failover answer routing policy with health checks to direct users to a healthy region at any given time is correct. You can use a geolocation routing policy to direct the North American users to your servers in the North America region and configure failover routing to the Asia region in case the North America region fails. You can configure the same for the Asian users, pointing them to the Asia region servers with the North America region as their backup.
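A compressed Route 53 sketch of that two-tier setup, assuming hypothetical zone, record, health-check, and load balancer names (the Asia branch would mirror the North America one, and a default geolocation record with CountryCode "*" would normally be added for unmatched locations):

# Geolocation record for North America that points at a name which is itself
# a PRIMARY/SECONDARY failover pair backed by health checks
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Changes": [
        { "Action": "CREATE", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "geo-north-america",
            "GeoLocation": { "ContinentCode": "NA" },
            "ResourceRecords": [ { "Value": "na.app.example.com" } ] } },
        { "Action": "CREATE", "ResourceRecordSet": {
            "Name": "na.app.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "na-primary", "Failover": "PRIMARY",
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            "ResourceRecords": [ { "Value": "us-alb-123.us-west-2.elb.amazonaws.com" } ] } },
        { "Action": "CREATE", "ResourceRecordSet": {
            "Name": "na.app.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "na-secondary", "Failover": "SECONDARY",
            "ResourceRecords": [ { "Value": "ap-alb-456.ap-northeast-1.elb.amazonaws.com" } ] } }
      ] }'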
The option that says: Deploy the application tier on an Auto Scaling group of EC2 instances for each Region in an active-active configuration. Create a cluster of Amazon Aurora global database in both Regions. Configure the application to use the in-Region Aurora database endpoint for the read/write operations. Create snapshots of the application servers regularly. Store the snapshots in Amazon S3 buckets in both regions is correct. The Amazon Aurora global database solves the problem on read/write as well as syncing the data across all the regions. With both regions in an active-active configuration, each region can accept the traffic from users around the world.
The option that says: Create a geoproximity routing policy on Amazon Route 53 to control traffic and direct users to their closest regional endpoint. Combine this with a multivalue answer routing policy with health checks to direct users to a healthy region at any given time is incorrect. Geoproximity routing policy is good to control the user traffic to specific regions. However, a multivalue answer routing policy may cause the users to be randomly sent to other healthy regions that may be far away from the user’s location.
The option that says: Deploy the application tier on an Auto Scaling group of EC2 instances for each Region. Create an RDS for MySQL database on each region. Configure the application to perform read/write operations on the local RDS. Enable cross-Region replication for the database servers. Create snapshots of the application and database servers regularly. Store the snapshots in Amazon S3 buckets in both regions is incorrect. You can’t sync two MySQL master databases that both accept writes on their respective regions.
The option that says: Deploy the application tier on an Auto Scaling group of EC2 instances for each Region. Create an RDS for MySQL database on each region. Configure the application to perform read/write operations on the local RDS. Enable Multi-AZ failover support for the EC2 Auto Scaling group and the RDS database. Enable cross-Region replication for the database servers is incorrect. You can’t sync two MySQL master databases that both accept writes on their respective regions. Enabling Multi-AZ on the RDS MySQL server does not protect you from AWS Regional failures.
Question 60: Incorrect
A government agency has multiple VPCs in various AWS regions across the United States that need to be linked up to an on-premises [-> Transit Gateway / Direct Connect / VPN...? ] central office network in Washington, D.C. The central office requires inter-region VPC access over a private network [-> Transit Gateway?] that is dedicated to each region for enhanced security and more predictable data transfer performance. Your team is tasked to quickly build this network mesh and to minimize the management overhead to maintain these connections.
Which of the following options is the most secure, highly available, and durable solution that you should use to set up this kind of interconnectivity?
Implement a hub-and-spoke network topology in each region that routes all traffic through a network transit center using AWS Transit Gateway. Route traffic between VPCs and the on-premise network over AWS Site-to-Site VPN.
(Incorrect)
Utilize AWS Direct Connect Gateway for inter-region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.
(Correct)
Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public Internet.
Create a link aggregation group (LAG) in the central office network to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Use AWS Direct Connect Gateway to achieve inter-region VPC access to all of your AWS resources. Create a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway.
Explanation
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS to achieve higher privacy benefits, additional data transfer bandwidth, and more predictable data transfer performance. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. Virtual interfaces can be reconfigured at any time to meet your changing needs. You can use an AWS Direct Connect gateway to connect your AWS Direct Connect connection over a private virtual interface to one or more VPCs in your account that are located in the same or different Regions. You associate a Direct Connect gateway with the virtual private gateway for the VPC. Then, create a private virtual interface for your AWS Direct Connect connection to the Direct Connect gateway. You can attach multiple private virtual interfaces to your Direct Connect gateway.
With Direct Connect Gateway, you no longer need to establish multiple BGP sessions for each VPC; this reduces your administrative workload as well as the load on your network devices.
Therefore, the correct answer is: Utilize AWS Direct Connect Gateway for inter-region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.
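In rough CLI terms, the chosen setup could be wired up as follows; the gateway name, IDs, VLAN, and ASNs are invented for illustration, and the association step is repeated for the virtual private gateway of each VPC:

# Create the Direct Connect gateway (the Amazon-side ASN is a private ASN of your choosing)
aws directconnect create-direct-connect-gateway \
    --direct-connect-gateway-name central-office-dxgw \
    --amazon-side-asn 64512

# Associate the virtual private gateway of a VPC with the Direct Connect gateway
aws directconnect create-direct-connect-gateway-association \
    --direct-connect-gateway-id 11112222-3333-4444-5555-666677778888 \
    --virtual-gateway-id vgw-0a1b2c3d4e5f67890

# Create a private virtual interface on the physical connection, attached to the gateway
aws directconnect create-private-virtual-interface \
    --connection-id dxcon-fexample1 \
    --new-private-virtual-interface virtualInterfaceName=central-office-vif,vlan=101,asn=65001,directConnectGatewayId=11112222-3333-4444-5555-666677778888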
The option that says: Create a link aggregation group (LAG) in the central office network to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Use AWS Direct Connect Gateway to achieve inter-region VPC access to all of your AWS resources. Create a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway is incorrect. You only need to create private virtual interfaces to the Direct Connect gateway since you are only connecting to resources inside a VPC. Using a link aggregation group (LAG) is also irrelevant in this scenario because it is just a logical interface that uses the Link Aggregation Control Protocol (LACP) to aggregate multiple connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection.
The option that says: Implement a hub-and-spoke network topology in each region that routes all traffic through a network transit center using AWS Transit Gateway. Route traffic between VPCs and the on-premise network over AWS Site-to-Site VPN is incorrect since the scenario requires a service that can provide a dedicated network between the VPCs and the on-premises network, as well as enhanced privacy and predictable data transfer performance. Simply using AWS Transit Gateway will not fulfill the conditions above. This option is best suited for customers who want to leverage AWS-provided, automated high availability network connectivity features and also optimize their investments in third-party product licensing such as VPN software.
The option that says: Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public internet is incorrect. This solution would require a lot of manual setup and management overhead to successfully build a functional, error-free inter-region VPC network compared with just using a Direct Connect Gateway. Although inter-region VPC peering provides a cost-effective way to share resources between regions or replicate data for geographic redundancy, its connections are neither dedicated nor as highly available as the scenario requires.
Question 61: Correct
A company is using AWS Organizations to manage their multi-account and multi-region AWS infrastructure. They are currently doing large-scale automation for their key daily processes to save costs. One of these key processes is sharing specified AWS resources [-> enable-sharing-with-aws-organizations command], which an organizational account owns, with other AWS accounts of the company using AWS RAM [-> AWS RAM CLI]. There is already an existing service which was previously managed by a separate organization account moderator, who also maintained the specific configuration details.
In this scenario, what could be a simple and effective solution that would allow the service to perform its tasks on the organization accounts on the moderator's behalf?
Configure a service-linked role for AWS RAM and modify the permissions policy to specify what the role can and cannot do. Lastly, modify the trust policy of the role so that other processes can utilize AWS RAM.
Enable cross-account access with AWS Organizations in the Resource Access Manager Console. Mirror the configuration changes that were performed by the account that previously managed this service.
Use trusted access by running the enable-sharing-with-aws-organizations command in the AWS RAM CLI. Mirror the configuration changes that were performed by the account that previously managed this service.
(Correct)
Attach an IAM role on the service detailing all the allowed actions that it will be able to perform. Install an SSM agent in each of the worker VMs. Use AWS Systems Manager to build automation workflows that involve the daily key processes.
Explanation
AWS Resource Access Manager (AWS RAM) enables you to share specified AWS resources that you own with other AWS accounts. To enable trusted access with AWS Organizations:
From the AWS RAM CLI, use the enable-sharing-with-aws-organizations command.
Name of the IAM service-linked role that can be created in accounts when trusted access is enabled: AWSResourceAccessManagerServiceRolePolicy.
You can use trusted access to enable an AWS service that you specify, called the trusted service, to perform tasks in your organization and its accounts on your behalf. This involves granting permissions to the trusted service but does not otherwise affect the permissions for IAM users or roles. When you enable access, the trusted service can create an IAM role called a service-linked role in every account in your organization. That role has a permissions policy that allows the trusted service to do the tasks that are described in that service's documentation. This enables you to specify settings and configuration details that you would like the trusted service to maintain in your organization's accounts on your behalf.
Therefore, the correct answer is: Use trusted access by running the enable-sharing-with-aws-organizations command in the AWS RAM CLI. Mirror the configuration changes that were performed by the account that previously managed this service.
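A minimal sketch of that step, run from the organization's management account (the second call is just an optional check that ram.amazonaws.com now shows up as a trusted service):

# Enable AWS RAM to share resources with the entire organization
aws ram enable-sharing-with-aws-organizations

# Optional: confirm trusted access is now enabled for AWS RAM
aws organizations list-aws-service-access-for-organization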
The option that says: Attach an IAM role on the service detailing all the allowed actions that it will be able to perform. Install an SSM agent in each of the worker VMs. Use AWS Systems Manager to build automation workflows that involve the daily key processes is incorrect because this is not the simplest way to automate the interaction of AWS RAM with AWS Organizations. AWS Systems Manager is a tool that helps with the automation of EC2 instances, on-premises servers, and other virtual machines. It might not support all the services being used by the key processes.
The option that says: Configure a service-linked role for AWS RAM and modify the permissions policy to specify what the role can and cannot do. Lastly, modify the trust policy of the role so that other processes can utilize AWS RAM is incorrect. This is not the simplest solution for integrating AWS RAM and AWS Organizations since using AWS Organizations' trusted access will create the service-linked role for you. Also, the trust policy of a service-linked role cannot be modified; only the linked AWS service can assume a service-linked role.
The option that says: Enable cross-account access with AWS Organizations in the Resource Access Manager Console. Mirror the configuration changes that were performed by the account that previously managed this service is incorrect because you should enable trusted access to AWS RAM, not cross-account access.
Question 62: Correct
A company has just launched a new central employee registry application that contains all of the public employee registration information of each staff of the company. The application has a microservices architecture running in Docker in a single AWS Region. The management teams from other departments who have their servers located in different VPCs need to connect to the central repository application [-> ECR] to continue their work. The Solutions Architect must ensure that the traffic to the application does not traverse the public Internet. [-> NAT Gateway on public subnet? Transit Gateway? VPC Peering] The IT Security team must also be notified of any denied requests and be able to view the corresponding source IP. [-> CloudWatch ? VPC Flow Logs; CloudWatch Logs subscription]
How will the Architect implement the architecture of the new application given these circumstances?
Use AWS Direct Connect to create a dedicated connection between the central VPC and each of the teams' VPCs. Enable the VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to a CloudWatch Logs group. Set up an Amazon CloudWatch Logs subscription that streams the log data to the IT Security account.
Set up an IPSec Tunnel between the central VPC and each of the teams' VPCs. Create VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Create a CloudWatch Logs subscription that streams the log data to the IT Security account.
Link each of the teams' VPCs to the central VPC using VPC Peering. Create VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Set up a CloudWatch Logs subscription that streams the log data to the IT Security account.
(Correct)
Set up a Transit VPC by using third-party marketplace VPN appliances running on an On-Demand Amazon EC2 instance that dynamically routes the VPN connections to the virtual private gateways (VGWs) attached to each VPC. Set up an AWS Config rule on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Set up a CloudWatch Logs subscription that streams the log data to the IT Security account.
Explanation
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination.
Flow logs can help you with a number of tasks, such as:
- Diagnosing overly restrictive security group rules
- Monitoring the traffic that is reaching your instance
- Determining the direction of the traffic to and from the network interfaces
Flow log data is collected outside of the path of your network traffic, and therefore does not affect network throughput or latency. You can create or delete flow logs without any risk of impact to network performance.
Hence, the correct answer is: Link each of the teams' VPCs to the central VPC using VPC Peering. Create VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Set up a CloudWatch Logs subscription that streams the log data to the IT Security account.
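A condensed CLI sketch of the moving parts in that answer; the VPC IDs, Region, log group, IAM role, and the IT Security account's destination ARN are placeholders, and the peering connection still has to be accepted and added to the route tables on both sides:

# Peer a team VPC with the central VPC (add --peer-owner-id if the team VPC is in another account)
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0aaa1111bbbb22222 \
    --peer-vpc-id vpc-0ccc3333dddd44444 \
    --peer-region us-east-1

# Capture only rejected traffic (including source IPs) into CloudWatch Logs
aws ec2 create-flow-logs \
    --resource-type VPC --resource-ids vpc-0aaa1111bbbb22222 \
    --traffic-type REJECT \
    --log-group-name central-vpc-rejected-traffic \
    --deliver-logs-permission-arn arn:aws:iam::111122223333:role/flow-logs-role

# Stream the rejected-traffic log group to a destination owned by the IT Security account
aws logs put-subscription-filter \
    --log-group-name central-vpc-rejected-traffic \
    --filter-name to-it-security \
    --filter-pattern "" \
    --destination-arn arn:aws:logs:us-east-1:999988887777:destination:it-security-rejects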
The option that says: Set up an IPSec Tunnel between the central VPC and each of the teams' VPCs. Create VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Create a CloudWatch Logs subscription that streams the log data to the IT Security account is incorrect. It is mentioned in the scenario that the traffic to the application must not traverse the public Internet. Since an IPSec tunnel uses the Internet to transfer data from your VPC to a specified destination, this solution is definitely incorrect.
The option that says: Use AWS Direct Connect to create a dedicated connection between the central VPC and each of the teams' VPCs. Enable the VPC Flow Logs on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to a CloudWatch Logs group. Set up an Amazon CloudWatch Logs subscription that streams the log data to the IT Security account is incorrect. You cannot set up Direct Connect between different VPCs. AWS Direct Connect is primarily used to set up a dedicated connection between your on-premises data center and your Amazon VPC.
The option that says: Set up a Transit VPC by using third-party marketplace VPN appliances running on an On-Demand Amazon EC2 instance that dynamically routes the VPN connections to the virtual private gateways (VGWs) attached to each VPC. Set up an AWS Config rule on each VPC to capture rejected traffic requests, including the source IPs, that will be delivered to an Amazon CloudWatch Logs group. Set up a CloudWatch Logs subscription that streams the log data to the IT Security account is incorrect. An AWS Config rule is not capable of capturing the source IP of the incoming requests. A VPN appliance is using the public Internet to transfer data. Thus, it violates the requirement of ensuring that the data is securely within the AWS network.
Question 74: Correct
The department of education recently decided to leverage the AWS cloud infrastructure to supplement its current on-premises network [-> Direct Connect ?]. They are building a new learning portal that teaches kids basic computer science concepts and provides innovative gamified courses for teenagers where they can gain higher rankings, power-ups, and badges. A Solutions Architect is instructed to build a highly available cloud infrastructure in AWS with multiple Availability Zones. The department wants to increase the application’s reliability and gain actionable insights using application logs. The Solutions Architect needs to aggregate logs, automate log analysis [-> CloudWatch Log Agents in on-prem servers; CloudWatch] for errors, and immediately notify [-> CloudWatch Alarm, SNS] the IT Operations team when errors breach a certain threshold.
Which of the following is the MOST suitable solution that the Architect should implement?
Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon EventBridge. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon Athena to monitor the metric filter and immediately notify the IT Operations team for any issues.
Download and install the Amazon Managed Service for Prometheus in the on-premises servers and send the logs to AWS Lambda to turn log data into numerical metrics that identify and measure application errors. Write the processed metrics back to the time series database in Prometheus. Create a CloudWatch Alarm that monitors the metric and immediately notifies the IT Operations team for any issues.
Download and install the Amazon Kinesis agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon QuickSight to monitor the metric filter in CloudWatch and immediately notify the IT Operations team for any issues.
Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Create a CloudWatch Alarm that monitors the metric filter and immediately notify the IT Operations team for any issues.
(Correct)
Explanation
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. The CloudWatch home page automatically displays metrics about every AWS service you use. You can additionally create custom dashboards to display metrics about your custom applications and display custom collections of metrics that you choose.
After the CloudWatch Logs agent begins publishing log data to Amazon CloudWatch, you can begin searching and filtering the log data by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on. You can use any type of CloudWatch statistic, including percentile statistics when viewing these metrics or setting alarms.
You can create alarms that watch metrics and send notifications or automatically make changes to the resources you are monitoring when a threshold is breached. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances and then use this data to determine whether you should launch additional instances to handle the increased load. You can also use this data to stop under-used instances to save money. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.
Hence, the correct answer is: Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Create a CloudWatch Alarm that monitors the metric filter and immediately notify the IT Operations team for any issues.
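A hedged sketch of the metric-filter-plus-alarm half of that solution; the log group, filter pattern, metric names, threshold, and SNS topic ARN are assumptions:

# Turn matching "ERROR" log lines into a numeric CloudWatch metric
aws logs put-metric-filter \
    --log-group-name learning-portal-app \
    --filter-name application-errors \
    --filter-pattern "ERROR" \
    --metric-transformations metricName=AppErrorCount,metricNamespace=LearningPortal,metricValue=1

# Alarm when the error count breaches the threshold and notify IT Operations through SNS
aws cloudwatch put-metric-alarm \
    --alarm-name learning-portal-error-alarm \
    --namespace LearningPortal --metric-name AppErrorCount \
    --statistic Sum --period 60 --evaluation-periods 1 \
    --threshold 10 --comparison-operator GreaterThanOrEqualToThreshold \
    --treat-missing-data notBreaching \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:it-operations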
The option that says: Download and install the Amazon Kinesis agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon QuickSight to monitor the metric filter in CloudWatch and immediately notify the IT Operations team for any issues is incorrect. You have to use an Amazon CloudWatch agent instead of an Amazon Kinesis agent to send the logs to Amazon CloudWatch Logs. It is also better to use CloudWatch Alarms to monitor the metric filter than to use Amazon QuickSight.
The option that says: Download and install the Amazon Managed Service for Prometheus in the on-premises servers and send the logs to AWS Lambda to turn log data into numerical metrics that identify and measure application errors. Write the processed metrics back to the time series database in Prometheus. Create a CloudWatch Alarm that monitors the metric and immediately notifies the IT Operations team for any issues is incorrect. Amazon Managed Service for Prometheus is a serverless, Prometheus-compatible monitoring service for container metrics. You don't have to create a custom Lambda function to process the logs. Amazon Managed Service for Prometheus is integrated with CloudWatch to monitor metrics and Logs.
The option that says: Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon EventBridge. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon Athena to monitor the metric filter and immediately notify the IT Operations team for any issues is incorrect. You have to send the logs to CloudWatch Logs, not to Amazon EventBridge (formerly CloudWatch Events). It is also better to use a CloudWatch alarm to monitor the metric filter and immediately notify the IT Operations team for any issues.
Question 75: Incorrect
ChatGPT - Perfectly Correct
An IT consultancy company has multiple offices located in San Francisco, Frankfurt, Tokyo, and Manila. The company is using AWS Organizations to easily manage its several AWS accounts which are being used by its regional offices and subsidiaries. A new AWS account was recently added to a specific organizational unit (OU) which is responsible for the overall systems administration. The solutions architect noticed that the account is using a root-created Amazon ECS Cluster with an attached service-linked role. For regulatory purposes, the solutions architect created a custom SCP that would deny the new account from performing certain actions in relation to using ECS. However, after applying the policy, the new account could still perform the actions that it was supposed to be restricted from doing.
Which of the following is the most likely reason for this problem?
The default SCP grants all permissions attached to every root, OU, and account. To apply stricter permissions, this policy is required to be modified.
The ECS service is being run outside the jurisdiction of the organization. SCPs affect only the principals that are managed by accounts that are part of the organization.
There is an SCP attached to a higher-level OU that permits the actions of the service-linked role. This permission would therefore be inherited by the current OU, and override the SCP placed by the administrator.
(Incorrect)
SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs.
(Correct)
Explanation
Users and roles must still be granted permissions using IAM permission policies attached to them or to groups. The SCPs filter the permissions granted by such policies, and the user can't perform any actions that the applicable SCPs don't allow. Actions allowed by the SCPs can be used if they are granted to the user or role by one or more IAM permission policies.
When you attach SCPs to the root, OUs, or directly to accounts, all policies that affect a given account are evaluated together using the same rules that govern IAM permission policies:
- Any action that has an explicit Deny in an SCP can't be delegated to users or roles in the affected accounts. An explicit Deny statement overrides any Allow that other SCPs might grant.
- Any action that has an explicit Allow in an SCP (such as the default "*" SCP or by any other SCP that calls out a specific service or action) can be delegated to users and roles in the affected accounts.
- Any action that isn't explicitly allowed by an SCP is implicitly denied and can't be delegated to users or roles in the affected accounts.
By default, an SCP named FullAWSAccess is attached to every root, OU, and account. This default SCP allows all actions and all services. So in a new organization, until you start creating or manipulating the SCPs, all of your existing IAM permissions continue to operate as they did. As soon as you apply a new or modified SCP to a root or OU that contains an account, the permissions that your users have in that account become filtered by the SCP. Permissions that used to work might now be denied if they're not allowed by the SCP at every level of the hierarchy down to the specified account.
As stated in the documentation of AWS Organizations, SCPs DO NOT affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs.
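For context, the Deny SCP described in the scenario might be created and attached roughly as follows (the policy name, ECS actions, policy ID, and OU ID are illustrative); as the documentation above notes, even with this policy attached, the ECS service-linked role itself is not restricted:

# Create the SCP that denies the selected ECS actions
aws organizations create-policy \
    --name deny-ecs-actions \
    --type SERVICE_CONTROL_POLICY \
    --description "Deny selected ECS actions for the sysadmin OU" \
    --content '{
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Deny",
          "Action": [ "ecs:CreateCluster", "ecs:DeleteCluster", "ecs:RunTask" ],
          "Resource": "*" }
      ] }'

# Attach it to the organizational unit that contains the new account
aws organizations attach-policy \
    --policy-id p-examplepolicy1 \
    --target-id ou-examp-12345678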
The option that says: The default SCP grants all permissions attached to every root, OU, and account. To apply stricter permissions, this policy is required to be modified is incorrect. The scenario already implied that the administrator created a Deny policy. By default, an SCP named FullAWSAccess is attached to every root, OU, and account. This default SCP allows all actions and all services. However, you specify a Deny policy if you want to create a blacklist that blocks all access to the specified services and actions. The explicit Deny on specific actions in the blacklist policy overrides the Allow in any other policy, such as the one in the default SCP.
The option that says: There is an SCP attached to a higher-level OU that permits the actions of the service-linked role. This permission would therefore be inherited by the current OU, and override the SCP placed by the administrator is incorrect because even if a higher-level OU has an SCP attached with an Allow policy for the service, the current setup should still have restricted access to the service. Creating and attaching a new Deny SCP to the new account's OU will not be affected by the pre-existing Allow policy in the same OU.
The option that says: The ECS service is being run outside the jurisdiction of the organization. SCPs affect only the principals that are managed by accounts that are part of the organization is incorrect because the service-linked role must have been created within the organization, most notably by the root account of the organization. It also does not make sense if we make the assumption that the service is indeed outside of the organization's jurisdiction because the Principal element of a policy specifies which entity will have limited permissions. But the scenario tells us that it should be the new account that is denied certain actions, not the service itself.
AWS Identity and Access Management (IAM) is integrated with AWS CloudTrail, a service that logs AWS events made by or on behalf of your AWS account. CloudTrail logs authenticated AWS API calls and also AWS sign-in events, and collects this event information in files that are delivered to Amazon S3 buckets.