23 Mar 2024 Morning study
Question 65: Skipped
A company manages more than 50 AWS accounts under its AWS Organization. All AWS accounts deploy resources in a single AWS Region only. To enable routing across all accounts, each VPC has a Transit Gateway attachment to a centralized AWS Transit Gateway. Each VPC also has an internet gateway and NAT gateway to provide outbound internet connectivity for its resources. As a security requirement, the company must have a centrally managed, rule-based filtering solution for outbound internet traffic on all AWS accounts under its organization [-> Dedicated VPC + NAT]. It is expected that peak outbound traffic for each Availability Zone will not exceed 25 Gbps. [-> Network Firewall rule-based filtering]
Which of the following options should the solutions architect implement to fulfill the company requirements?
Create a dedicated VPC for outbound internet traffic with a NAT gateway on it. Connect this VPC to the existing AWS Transit Gateway. On this VPC, create an Auto Scaling group of Amazon EC2 instances running with an open-source internet proxy software for rule-based filtering across all AZ in the region. Configure the route tables on each VPC to point to this Auto Scaling group.
Provision an Auto Scaling group of Amazon EC2 instances with network-optimized instance type on each AWS account. Install an open-source internet proxy software for rule-based filtering. Configure the route tables on each VPC to point to the Auto Scaling group.
Create a dedicated VPC for outbound internet traffic with a NAT gateway on it. Connect this VPC to the existing AWS Transit Gateway. Configure an AWS Network Firewall firewall for the rule-based filtering. Modify all the default routes in each account to point to the Network Firewall endpoint.
(Correct)
Use AWS Network Firewall service to create firewall rule groups and firewall policies for rule-based filtering. Attach the firewall policy to a new Network Firewall firewall on each account. Modify all the default routes in each account to point to their corresponding Network Firewall firewall.
Explanation
AWS Network Firewall is a stateful, managed network firewall and intrusion detection and prevention service [IDS/IPS] for your virtual private cloud (VPC). With Network Firewall, you can filter traffic at the perimeter of your VPC. This includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect.
Once AWS Network Firewall is deployed, you will see a firewall endpoint in each firewall subnet. A firewall endpoint is similar to an interface endpoint, and it shows up as a vpce- ID in your VPC route table target selection. There are multiple deployment models for Network Firewall.
For a centralized egress deployment model, an AWS Transit Gateway is a prerequisite. AWS Transit Gateway acts as a network hub and simplifies the connectivity between VPCs. For this model, we have a dedicated, central egress VPC which has a NAT gateway configured in a public subnet with access to IGW.
Traffic originating from the spoke VPCs is forwarded to the inspection VPC for processing. It is then forwarded to the central egress VPC using a default route in the Transit Gateway firewall route table. The default route targets the central egress VPC attachment (pointing to the AWS Network Firewall endpoint).
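The hop-by-hop routing intent described above can be sketched as data. This is only an illustration of the centralized egress model; every resource ID below is a hypothetical placeholder, not a real AWS resource.

```python
# Sketch of the default-route chain in the centralized egress model.
# All resource IDs (tgw-..., vpce-..., nat-..., igw-...) are hypothetical.

# Spoke VPC route table: everything internet-bound goes to the Transit Gateway.
spoke_routes = {"0.0.0.0/0": "tgw-0abc1234"}

# Transit Gateway route table for spoke attachments: default route to the
# central egress VPC attachment, where the firewall endpoint lives.
tgw_routes = {"0.0.0.0/0": "tgw-attach-egress"}

# Inside the egress VPC:
#   TGW-attachment subnet -> firewall endpoint (inspection)
#   firewall subnet       -> NAT gateway (after inspection)
#   public subnet         -> internet gateway
egress_vpc_routes = {
    "tgw-subnet":      {"0.0.0.0/0": "vpce-firewall"},
    "firewall-subnet": {"0.0.0.0/0": "nat-0def5678"},
    "public-subnet":   {"0.0.0.0/0": "igw-0aaa9999"},
}

def egress_path():
    """Follow the default route hop by hop, from a spoke VPC to the internet."""
    return [
        spoke_routes["0.0.0.0/0"],
        tgw_routes["0.0.0.0/0"],
        egress_vpc_routes["tgw-subnet"]["0.0.0.0/0"],
        egress_vpc_routes["firewall-subnet"]["0.0.0.0/0"],
        egress_vpc_routes["public-subnet"]["0.0.0.0/0"],
    ]

print(" -> ".join(egress_path()))
```

Return traffic needs the mirror-image routes (NAT gateway back through the firewall endpoint to the Transit Gateway), which this one-way sketch omits.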
Therefore, the correct answer is: Create a dedicated VPC for outbound internet traffic with a NAT gateway on it. Connect this VPC to the existing AWS Transit Gateway. Configure an AWS Network Firewall firewall for the rule-based filtering. Modify all the default routes in each account to point to the Network Firewall endpoint. This solution provides a dedicated VPC for rule-based inspection and controlling of egress traffic. Please check the references section for more details.
The option that says: Use AWS Network Firewall service to create firewall rule groups and firewall policies for rule-based filtering. Attach the firewall policy to a new Network Firewall firewall on each account. Modify all the default routes in each account to point to their corresponding Network Firewall firewall is incorrect. For centralized rule-based filtering with a Network Firewall, you will need an AWS Transit Gateway to act as a network hub and allow the connectivity between VPCs.
The option that says: Provision an Auto Scaling group of Amazon EC2 instances with network-optimized instance type on each AWS account. Install an open-source internet proxy software for rule-based filtering. Configure the route tables on each VPC to point to the Auto Scaling group is incorrect. This will be difficult to manage since you will essentially have a separate proxy cluster in each AWS account, rather than a single centrally managed solution.
The option that says: Create a dedicated VPC for outbound internet traffic with a NAT gateway on it. Connect this VPC to the existing AWS Transit Gateway. On this VPC, create an Auto Scaling group of Amazon EC2 instances running with an open-source internet proxy software for rule-based filtering across all AZ in the region. Configure the route tables on each VPC to point to this Auto Scaling group is incorrect. This may be possible but is not recommended. AWS Network Firewall can offer centralized rule-based traffic filtering and inspection across your VPCs.
Takeaway:
Centrally managed rule-based filtering of outbound traffic across all VPCs -> dedicated egress VPC (NAT gateway + Network Firewall) behind a Transit Gateway
References:
Check out this AWS Transit Gateway Cheat Sheet:
Question 69: Skipped
A company wants to have a secure content management solution that can be accessed by its external custom applications via API calls. [-> API Gateway] The solutions architect has been instructed to create the infrastructure design. The solution should enable users to upload documents as well as download a specific version or the latest version of a document. [->S3] There is also a requirement to enable customer administrators to simply submit an API call that can roll back changes to existing files sent to the system. [-> ?]
Which of the following options is the MOST secure and suitable solution that the solutions architect should implement?
Use Amazon WorkDocs for document storage and utilize its user access management, version control, and built-in encryption. Integrate the Amazon WorkDocs Content Manager to the external custom applications. Develop a rollback feature to replace the current document version with the previous version from Amazon WorkDocs.
(Correct)
Use Amazon EFS for object storage and enable data encryption in transit with TLS. Store unique customer managed keys in AWS KMS. Set up IAM roles and IAM access policies for EFS to specify separate encryption keys for each customer application. Utilize file locking and file versioning features in EFS to roll back changes to existing files stored in the CMS.
Use S3 with Server Access Logging enabled. Set up an IAM role and access policy for each customer application. Use client-side encryption to encrypt customer files then share the Customer Master Key (CMK) ID and the client-side master key to all customers in order to access the CMS.
Use Amazon S3 with Versioning and Server Access Logging enabled. Set up an IAM role and access policy for each customer application. Encrypt all documents using client-side encryption for enhanced data security. Share the encryption keys to all customers to unlock the documents. Develop a rollback feature to replace the current document version with the previous version from Amazon S3.
Explanation
Amazon WorkDocs is a fully managed, secure content creation, storage, and collaboration service. With Amazon WorkDocs, you can easily create, edit, and share content, and because it’s stored centrally on AWS, access it from anywhere on any device. Amazon WorkDocs makes it easy to collaborate with others, and lets you easily share content, provide rich feedback, and collaboratively edit documents. You can use Amazon WorkDocs to retire legacy file share infrastructure by moving file shares to the cloud. Amazon WorkDocs lets you integrate with your existing systems, and offers a rich API so that you can develop your own content-rich applications. Amazon WorkDocs is built on AWS, where your content is secured on the world's largest cloud infrastructure.
Amazon WorkDocs Content Manager is a high-level utility tool that uploads content or downloads it from an Amazon WorkDocs site. It can be used for both administrative and user applications. For user applications, a developer must construct the Amazon WorkDocs Content Manager with anonymous AWS credentials and an authentication token. For administrative applications, the Amazon WorkDocs client must be initialized with AWS Identity and Access Management (IAM) credentials. In addition, the authentication token must be omitted in subsequent API calls.
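The rollback feature mentioned in the correct answer could be built around a version-selection helper like the sketch below. The field names (Id, CreatedTimestamp, Status) mirror the version metadata a WorkDocs DescribeDocumentVersions call returns, but the helper itself is a hypothetical illustration operating on plain dicts so the logic is self-contained.

```python
def pick_rollback_version(versions):
    """Return the document version to restore for a rollback: the newest
    ACTIVE version that is NOT the current (latest) one, or None if there
    is no earlier version to fall back to."""
    active = [v for v in versions if v.get("Status") == "ACTIVE"]
    ordered = sorted(active, key=lambda v: v["CreatedTimestamp"], reverse=True)
    return ordered[1] if len(ordered) > 1 else None

# Example version history (timestamps as ISO strings for simplicity):
history = [
    {"Id": "v1", "CreatedTimestamp": "2024-03-01T09:00:00Z", "Status": "ACTIVE"},
    {"Id": "v2", "CreatedTimestamp": "2024-03-10T09:00:00Z", "Status": "ACTIVE"},
    {"Id": "v3", "CreatedTimestamp": "2024-03-20T09:00:00Z", "Status": "ACTIVE"},
]
print(pick_rollback_version(history)["Id"])  # v2 -- the version before the latest
```

The actual rollback would then re-upload the selected version's content as a new version via the Content Manager, so the history stays intact.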
Hence, the correct answer in this scenario is: Use Amazon WorkDocs for document storage and utilize its user access management, version control, and built-in encryption. Integrate the Amazon WorkDocs Content Manager to the external custom applications. Develop a rollback feature to replace the current document version with the previous version from Amazon WorkDocs.
The option that says: Use Amazon S3 with Versioning and Server Access Logging enabled. Set up an IAM role and access policy for each customer application. Encrypt all documents using client-side encryption for enhanced data security. Share the encryption keys to all customers to unlock the documents. Develop a rollback feature to replace the current document version with the previous version from Amazon S3 is incorrect. Although you can use Amazon S3, sharing the encryption keys to all customers just to unlock the documents is a security risk.
The option that says: Use S3 with Server Access Logging enabled. Set up an IAM role and access policy for each customer application. Use client-side encryption to encrypt customer files then share the Customer Master Key (CMK) ID and the client-side master key to all customers in order to access the CMS is incorrect because the S3 bucket being used has only enabled Server Access Logging and not Versioning. In addition, it is also a security risk to share the Customer Master Key (CMK) ID and the client-side master key to all customers.
The option that says: Use Amazon EFS for object storage and enable data encryption in transit with TLS. Store unique customer managed keys in AWS KMS. Set up IAM roles and IAM access policies for EFS to specify separate encryption keys for each customer application. Utilize file locking and file versioning features in EFS to roll back changes to existing files stored in the CMS is incorrect because EFS is primarily used as a file system and not for object storage. Although EFS has file locking capabilities, it does not have file versioning features.
Takeaway:
File sharing, rollback, rich API -> WorkDocs
References:
Question 71: Skipped
A logistics company is running its business application on Amazon EC2 instances. The web application is running on an Auto Scaling group of EC2 instances behind an Application Load Balancer. The self-managed MySQL database is also running on a large EC2 instance to handle the heavy I/O operations needed by the application. The application is able to handle the amount of traffic during normal hours. However, the performance slows down significantly during the last four days of the month as more users run their month-end reports [-> heavy read - RDS Read Replicas] simultaneously. The Solutions Architect was tasked to improve the performance of the application, especially during the peak days. [-> RDS?]
Which of the following should the Solutions Architect implement to improve the application performance with the LEAST impact on availability?
Migrate the Amazon EC2 database instance to Amazon RDS for MySQL. Add more read replicas to the database cluster during the end of the month to handle the spike in traffic.
(Correct)
Create Amazon CloudWatch metrics based on EC2 instance CPU usage or response time on the ALB. Trigger an AWS Lambda function to change the instance size, type, and the allocated IOPS of the EBS volumes based on the breached threshold.
Convert all EBS volumes of the EC2 instances to GP2 volumes to improve I/O performance. Scale up the EC2 instances into bigger instance types. Pre-warm the Application Load Balancer to handle sudden spikes in traffic.
Take a snapshot of the EBS volumes with I/O heavy operations and replace them with Provisioned IOPS volumes during the end of the month. Revert to the old EBS volume type afterward to save on costs.
Explanation
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.
Amazon RDS supports the most demanding database applications. You can choose between two SSD-backed storage options: one optimized for high-performance OLTP applications, and the other for cost-effective general-purpose use. You can scale your database's compute and storage resources with only a few mouse clicks or an API call, often with no downtime.
Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.
You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Because read replicas can be promoted to master status, they are useful as part of a sharding implementation.
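Routing read queries to the replicas, as described above, is usually done in the application layer. A minimal sketch of such read/write splitting follows; the endpoint hostnames are hypothetical, and in RDS for MySQL each read replica gets its own endpoint.

```python
import itertools

# Hypothetical endpoints -- replace with your real RDS endpoints.
WRITER_ENDPOINT = "reports-db.abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINTS = [
    "reports-db-rr1.abc123.us-east-1.rds.amazonaws.com",
    "reports-db-rr2.abc123.us-east-1.rds.amazonaws.com",
]
_readers = itertools.cycle(READER_ENDPOINTS)  # simple round-robin over replicas

def endpoint_for(sql: str) -> str:
    """Send read-only statements to a replica; everything else to the writer."""
    first_word = sql.lstrip().split(None, 1)[0].upper() if sql.strip() else ""
    if first_word in ("SELECT", "SHOW", "EXPLAIN"):
        return next(_readers)
    return WRITER_ENDPOINT
```

Replication to read replicas is asynchronous, so this split suits workloads like month-end reports that can tolerate slight replication lag.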
In this scenario, the Amazon EC2 instances are already in an Auto Scaling group, which means that the database read operations are the likely bottleneck, especially during month-end when the reports are generated. This can be solved by creating RDS Read Replicas.
Therefore, the correct answer is: Migrate the Amazon EC2 database instance to Amazon RDS for MySQL. Add more read replicas to the database cluster during the end of the month to handle the spike in traffic.
The option that says: Convert all EBS volumes of the EC2 instances to GP2 volumes to improve I/O performance. Scale up the EC2 instances into bigger instance types. Pre-warm the Application Load Balancer to handle sudden spikes in traffic is incorrect. The Amazon EC2 instances are already in an Auto Scaling group, which means that the database read operations are the likely bottleneck, so changing to bigger EC2 instances will just increase costs unnecessarily.
The option that says: Create Amazon CloudWatch metrics based on EC2 instance CPU usage or response time on the ALB. Trigger an AWS Lambda function to change the instance size, type, and the allocated IOPS of the EBS volumes based on the breached threshold is incorrect. This will cause a lot of interruption to the application when you change the EBS volumes and the instance type of the EC2 database instance. You will have to reboot the database instance when you change the instance type.
The option that says: Take a snapshot of the EBS volumes with I/O heavy operations and replace them with Provisioned IOPS volumes during the end of the month. Revert to the old EBS volume type afterward to save on costs is incorrect. Creating snapshots will temporarily pause disk write operations, during which the database will stop responding to write requests. Changing the disk types every month is not ideal as this also causes downtime on the application during the switch. A better solution for heavy read operations is to provision an Amazon RDS database with Read Replicas.
References:
Check out this Amazon RDS Cheat Sheet:
Question 72: Skipped
A tech company uses AWS CloudFormation to deploy a three-tier web application that consists of a web tier, application tier, and database tier. The application will utilize an Amazon DynamoDB table for database storage. All resources will be created using a CloudFormation template.
Which of the following options would allow the application instances access to the DynamoDB tables without exposing the API credentials? [-> IAM Role - InstanceProfile]
Launch an IAM Role that has the required permissions to read and write from the required DynamoDB table. Associate the Role to the application instances by referencing it to the AWS::IAM::InstanceRoleName property.
Launch an IAM Role that has the required permissions to read and write from the DynamoDB table. Reference the IAM Role as a property inside the AWS::IAM::InstanceProfile of the application instance.
(Correct)
Launch an IAM user in the CloudFormation template that has permissions to read and write from the DynamoDB table. Use the GetAtt function to retrieve the Access and secret keys and pass them to the web application instance through the use of its instance user-data.
In the CloudFormation template, use the Parameter section to have the user input the AWS Access and Secret Keys from an already created IAM user that has the permissions required to interact with the DynamoDB table.
Explanation
Identity and Access Management is an AWS service that you can use to manage users and their permissions in AWS. You can use IAM with AWS CloudFormation to specify what AWS CloudFormation actions users can perform, such as viewing stack templates, creating stacks, or deleting stacks. Furthermore, anyone managing AWS CloudFormation stacks will require permissions to resources within those stacks. For example, if users want to use AWS CloudFormation to launch, update, or terminate Amazon EC2 instances, they must have permission to call the relevant Amazon EC2 actions.
The instance profile contains the role and can provide the role's temporary credentials to an application that runs on the instance. Those temporary credentials can then be used in the application's API calls to access resources and to limit access to only those resources that the role specifies. Note that only one role can be assigned to an EC2 instance at a time, and all applications on the instance share the same role and permissions.
Using roles in this way has several benefits. Because role credentials are temporary and rotated automatically, you don't have to manage credentials, and you don't have to worry about long-term security risks. In addition, if you use a single role for multiple instances, you can make a change to that one role and the change is propagated automatically to all the instances.
An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. If you use the AWS Management Console to create a role for Amazon EC2, the console automatically creates an instance profile and gives it the same name as the role.
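The role-inside-instance-profile relationship described above can be sketched as the Python-dict equivalent of a JSON CloudFormation template. Resource names and the policy scope below are illustrative, not from the question.

```python
# Minimal sketch: an IAM role referenced inside AWS::IAM::InstanceProfile,
# which the EC2 instance then references -- no credentials in the template.
template = {
    "Resources": {
        "AppRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                # Allow EC2 to assume this role.
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                # Inline policy granting read/write on the app's table.
                "Policies": [{
                    "PolicyName": "DynamoDBReadWrite",
                    "PolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{
                            "Effect": "Allow",
                            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                       "dynamodb:UpdateItem", "dynamodb:Query"],
                            "Resource": {"Fn::GetAtt": ["AppTable", "Arn"]},
                        }],
                    },
                }],
            },
        },
        "AppInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {"Roles": [{"Ref": "AppRole"}]},  # role goes in here
        },
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
                # ImageId, InstanceType, etc. omitted for brevity.
            },
        },
    },
}
```

The application on the instance then picks up the role's temporary credentials automatically (e.g., via the default credential chain of an AWS SDK); no keys are stored anywhere.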
Hence, the correct answer is: Launch an IAM Role that has the required permissions to read and write from the DynamoDB table. Reference the IAM Role as a property inside the AWS::IAM::InstanceProfile of the application instance.
The option that says: Launch an IAM Role that has the required permissions to read and write from the required DynamoDB table. Associate the Role to the application instances by referencing it to the AWS::IAM::InstanceRoleName property is incorrect because you have to use the AWS::IAM::InstanceProfile instead. Take note that there is no InstanceRoleName property in IAM.
The option that says: In the CloudFormation template, use the Parameter section to have the user input the AWS Access and Secret Keys from an already created IAM user that has the permissions required to interact with the DynamoDB table is incorrect because it is a security risk to include the IAM access keys in a CloudFormation template. You have to use an IAM Role instead.
The option that says: Launch an IAM user in the CloudFormation template that has permissions to read and write from the DynamoDB table. Use the GetAtt function to retrieve the Access and secret keys and pass them to the web application instance through the use of its instance user-data is incorrect because it is inappropriate to use an IAM User to provide an EC2 instance the required access to the DynamoDB table. Attaching an IAM Role to the EC2 instances is the most suitable solution in this scenario.
References:
Check out this AWS CloudFormation Cheat Sheet:
Using roles to grant permissions to applications that run on EC2 instances requires a bit of extra configuration. An application running on an EC2 instance is abstracted from AWS by the virtualized operating system. Because of this extra separation, an additional step is needed to assign an AWS role and its associated permissions to an EC2 instance and make them available to its applications. This extra step is the creation of an instance profile that is attached to the instance.