Troubleshooting and Optimization
Question 1: Correct
A CRM application is hosted on Amazon EC2 instances with the database tier using DynamoDB. The customers have raised privacy and security concerns regarding sending and receiving data across the public internet.
As a developer associate, which of the following would you suggest as an optimal solution for providing communication between EC2 instances and DynamoDB without using the public internet?
Create an Internet Gateway to provide the necessary communication channel between EC2 instances and DynamoDB
Configure VPC endpoints for DynamoDB that will provide required internal access without using public internet
(Correct)
Create a NAT Gateway to provide the necessary communication channel between EC2 instances and DynamoDB
The firm can use a virtual private network (VPN) to route all DynamoDB network traffic through their own corporate network infrastructure
Explanation
Correct option:
Configure VPC endpoints for DynamoDB that will provide required internal access without using public internet
When you create a VPC endpoint for DynamoDB, any requests to a DynamoDB endpoint within the Region (for example, dynamodb.us-west-2.amazonaws.com) are routed to a private DynamoDB endpoint within the Amazon network. You don't need to modify your applications running on EC2 instances in your VPC. The endpoint name remains the same, but the route to DynamoDB stays entirely within the Amazon network, and does not access the public internet. You use endpoint policies to control access to DynamoDB. Traffic between your VPC and the AWS service does not leave the Amazon network.
Using Amazon VPC Endpoints to Access DynamoDB: via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
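For illustration, a minimal boto3 sketch of creating a gateway endpoint for DynamoDB; the VPC ID, route table ID, and Region below are placeholders, not values from the scenario:
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Gateway endpoints for DynamoDB attach to route tables, so traffic to
# dynamodb.us-west-2.amazonaws.com stays on the Amazon network
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC ID
    ServiceName="com.amazonaws.us-west-2.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table ID
)
print(response["VpcEndpoint"]["VpcEndpointId"])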
Incorrect options:
The firm can use a virtual private network (VPN) to route all DynamoDB network traffic through their own corporate network infrastructure - You can address the requested security concerns by using a virtual private network (VPN) to route all DynamoDB network traffic through your own corporate network infrastructure. However, this approach can introduce bandwidth and availability challenges and hence is not an optimal solution here.
Create a NAT Gateway to provide the necessary communication channel between EC2 instances and DynamoDB - You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances. NAT Gateway is not useful here since the instance and DynamoDB are present in AWS network and do not need NAT Gateway for communicating with each other.
Create an Internet Gateway to provide the necessary communication channel between EC2 instances and DynamoDB - An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. Using an Internet Gateway would imply that the EC2 instances are connecting to DynamoDB using the public internet. Therefore, this option is incorrect.
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
Question 8: Skipped
An application running on EC2 instances processes messages from an SQS queue. However, sometimes the messages are not processed and they end up in errors. These messages need to be isolated for further processing and troubleshooting.
Which of the following options will help achieve this?
Reduce the VisibilityTimeout
Use DeleteMessage
Increase the VisibilityTimeout
Implement a Dead-Letter Queue
(Correct)
Explanation
Correct option:
Implement a Dead-Letter Queue - Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn't succeed. Amazon SQS does not create the dead-letter queue automatically. You must first create the queue before using it as a dead-letter queue.
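As a rough sketch (queue names and ARNs are placeholders), attaching an already-created queue as a dead-letter queue comes down to setting a redrive policy on the source queue:
import json
import boto3

sqs = boto3.client("sqs")

redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
    "maxReceiveCount": "5",  # move a message to the DLQ after 5 failed receives
}

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",
    Attributes={"RedrivePolicy": json.dumps(redrive_policy)},
)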
Incorrect options:
Increase the VisibilityTimeout - When a consumer receives and processes a message from a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message. Immediately after a message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. Increasing visibility timeout will not help in troubleshooting the messages running into error or isolating them from the rest. Hence this is an incorrect option for the current use case.
Use DeleteMessage - Deletes the specified message from the specified queue. This will not help understand the reason for error or isolate messages ending with the error.
Reduce the VisibilityTimeout - As explained above, VisibilityTimeout makes sure that the message is not read by any other consumer while it is being processed by one consumer. By reducing the VisibilityTimeout, more consumers will receive the same failed message. Hence, this is an incorrect option for this use case.
Question 9: Skipped
You are a development team lead setting up limited permissions for other IAM users. On the AWS Management Console, you created a dev group to which new developers will be added, and on your workstation, you configured a developer profile. You would like to test that this user cannot terminate instances.
Which of the following options would you execute?
Use the AWS CLI --test option
Retrieve the policy using the EC2 metadata service and use the IAM policy simulator
Using the CLI, create a dummy EC2 and delete it using another CLI call
Use the AWS CLI --dry-run option
(Correct)
Explanation
Correct option:
Use the AWS CLI --dry-run option: The --dry-run option checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation, otherwise, it is UnauthorizedOperation.
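The CLI's --dry-run flag corresponds to the DryRun parameter in the SDKs. A minimal boto3 sketch (the instance ID is a placeholder):
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

try:
    # DryRun=True checks permissions without actually terminating anything
    ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=True)
except ClientError as e:
    code = e.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("The developer has permission to terminate instances")
    elif code == "UnauthorizedOperation":
        print("The developer cannot terminate instances, as intended")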
Incorrect options:
Use the AWS CLI --test option - This is a made-up option and has been added as a distractor.
Retrieve the policy using the EC2 metadata service and use the IAM policy simulator - EC2 metadata service is used to retrieve dynamic information such as instance-id, local-hostname, public-hostname. This cannot be used to check whether you have the required permissions for the action.
Using the CLI, create a dummy EC2 and delete it using another CLI call - This would actually execute the calls rather than merely test the permissions, and the permissions that apply to the dummy instance may differ from those on the instances you care about. Even if the permissions were identical, this approach is not as elegant as using the --dry-run option.
References:
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html
https://docs.aws.amazon.com/cli/latest/reference/ec2/terminate-instances.html
Question 12: Skipped
A serverless application built on AWS processes customer orders 24/7 using an AWS Lambda function and communicates with an external vendor's HTTP API for payment processing. The development team wants to notify the support team in near real-time using an existing Amazon Simple Notification Service (Amazon SNS) topic, but only when the external API error rate exceeds 5% of the total transactions processed in an hour.
As an AWS Certified Developer Associate, which option will you suggest as the most efficient solution?
Configure and push high-resolution custom metrics to CloudWatch that record the failures of the external payment processing API calls. Create a CloudWatch alarm that sends a notification via the existing SNS topic when the error rate exceeds the specified rate
(Correct)
Configure CloudWatch metrics with detailed monitoring for the external payment processing API calls. Create a CloudWatch alarm that sends a notification via the existing SNS topic when the error rate exceeds the specified rate
Log the results of payment processing API calls to Amazon CloudWatch. Leverage Amazon CloudWatch Metric Filter to look at the CloudWatch logs. Set up the Lambda function to check the output from CloudWatch Metric Filter on a schedule and send notification via the existing SNS topic when the error rate exceeds the specified rate
Log the results of payment processing API calls to Amazon CloudWatch. Leverage Amazon CloudWatch Logs Insights to query the CloudWatch logs. Set up the Lambda function to check the output from CloudWatch Logs Insights on a schedule and send notification via the existing SNS topic when the error rate exceeds the specified rate
Explanation
Correct option:
Configure and push high-resolution custom metrics to CloudWatch that record the failures of the external payment processing API calls. Create a CloudWatch alarm that sends a notification via the existing SNS topic when the error rate exceeds the specified rate
You can publish your own metrics, known as custom metrics, to CloudWatch using the AWS CLI or an API.
Each metric is one of the following:
Standard resolution, with data having a one-minute granularity
High resolution, with data at a granularity of one second
Metrics produced by AWS services are standard resolution by default. When you publish a custom metric, you can define it as either standard resolution or high resolution. When you publish a high-resolution metric, CloudWatch stores it with a resolution of 1 second, and you can read and retrieve it with a period of 1 second, 5 seconds, 10 seconds, 30 seconds, or any multiple of 60 seconds.
High-resolution metrics can give you more immediate insight into your application's sub-minute activity. Keep in mind that every PutMetricData call for a custom metric is charged, so calling PutMetricData more often on a high-resolution metric can lead to higher charges.
You can create metric and composite alarms in Amazon CloudWatch. For the given use case, you can set up a CloudWatch metric alarm that watches the custom metric that captures the API errors and then triggers the alarm when the API error rate exceeds the 5% threshold. The alarm then sends a notification via the existing SNS topic.
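A minimal sketch of publishing one such data point with boto3 (the namespace and metric name are placeholders):
import boto3

cloudwatch = boto3.client("cloudwatch")

# Record one failed call to the external payment API as a
# high-resolution (1-second) custom metric
cloudwatch.put_metric_data(
    Namespace="PaymentApp",
    MetricData=[
        {
            "MetricName": "ExternalApiErrors",
            "Value": 1.0,
            "Unit": "Count",
            "StorageResolution": 1,  # 1 = high resolution, 60 = standard
        }
    ],
)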
Incorrect options:
Configure CloudWatch metrics with detailed monitoring for the external payment processing API calls. Create a CloudWatch alarm that sends a notification via the existing SNS topic when the error rate exceeds the specified rate - CloudWatch provides two categories of monitoring: basic monitoring and detailed monitoring. Detailed monitoring options differ based on the services that offer it. For example, Amazon EC2 detailed monitoring provides more frequent metrics, published at one-minute intervals, instead of the five-minute intervals used in Amazon EC2 basic monitoring. Detailed monitoring is offered by only some services. As explained above, you need to use custom metrics to capture data for the external payment processing API calls since detailed monitoring for the standard CloudWatch metrics cannot be used for this scenario.
Log the results of payment processing API calls to Amazon CloudWatch. Leverage Amazon CloudWatch Logs Insights to query the CloudWatch logs. Set up the Lambda function to check the output from CloudWatch Logs Insights on a schedule and send notification via the existing SNS topic when the error rate exceeds the specified rate - CloudWatch Logs Insights enables you to interactively search and analyze your log data in Amazon CloudWatch Logs. You can perform queries to help you more efficiently and effectively respond to operational issues. This option is not the right fit for the given use case since Lambda cannot monitor the output of the CloudWatch Logs Insights on a real-time basis since it is being invoked on a schedule. Also, it is not an efficient solution since Lambda will need significant custom code to parse and compute the external API error rate from the CloudWatch Logs Insights data.
Log the results of payment processing API calls to Amazon CloudWatch. Leverage Amazon CloudWatch Metric Filter to look at the CloudWatch logs. Set up the Lambda function to check the output from CloudWatch Metric Filter on a schedule and send notification via the existing SNS topic when the error rate exceeds the specified rate - You can search and filter the log data coming into CloudWatch Logs by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on. This option is not the best fit for the given use case since Lambda cannot monitor the output of the CloudWatch Metric Filter on a real-time basis since it is being invoked on a schedule. Also, it is not an efficient solution since Lambda will need significant custom code to parse and compute the external API error rate from the CloudWatch Metric Filter data.
References:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-push-custom-metrics/
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html
Question 13: Skipped
A company uses AWS CodeDeploy to deploy applications from GitHub to EC2 instances running Amazon Linux. The deployment process uses a file called appspec.yml for specifying deployment hooks. A final lifecycle event should be specified to verify the deployment success.
Which of the following hook events should be used to verify the success of the deployment?
AllowTraffic
ApplicationStart
ValidateService
(Correct)
AfterInstall
Explanation
Correct option:
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
An EC2/On-Premises deployment hook is executed once per deployment to an instance. You can specify one or more scripts to run in a hook.
ValidateService: ValidateService is the last deployment lifecycle event. It is used to verify the deployment was completed successfully.
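For illustration, a ValidateService hook could run a small script like the following sketch, which assumes the application answers health checks on localhost; a non-zero exit code fails the lifecycle event:
#!/usr/bin/env python3
import sys
import urllib.request

try:
    # Placeholder health-check endpoint served by the deployed application
    with urllib.request.urlopen("http://localhost/health", timeout=5) as resp:
        sys.exit(0 if resp.status == 200 else 1)
except Exception:
    sys.exit(1)  # any failure marks the deployment as unsuccessful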
Incorrect options:
AfterInstall - You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions
ApplicationStart - You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop
AllowTraffic - During this deployment lifecycle event, internet traffic is allowed to access instances after a deployment. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts
Question 15: Skipped
A data analytics company is processing real-time Internet-of-Things (IoT) data via Kinesis Producer Library (KPL) and sending the data to a Kinesis Data Streams driven application. The application has halted data processing because of a ProvisionedThroughputExceeded exception.
Which of the following actions would help in addressing this issue? (Select two)
Use Amazon SQS instead of Kinesis Data Streams
Use Kinesis enhanced fan-out for Kinesis Data Streams
Configure the data producer to retry with an exponential backoff
(Correct)
Increase the number of shards within your data streams to provide enough capacity
(Correct)
Use Amazon Kinesis Agent instead of Kinesis Producer Library (KPL) for sending data to Kinesis Data Streams
Explanation
Correct option:
Configure the data producer to retry with an exponential backoff
Increase the number of shards within your data streams to provide enough capacity
Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources.
The capacity limits of an Amazon Kinesis data stream are defined by the number of shards within the data stream. The limits can be exceeded by either data throughput or the number of PUT records. While the capacity limits are exceeded, the put data call will be rejected with a ProvisionedThroughputExceeded exception.
If this is due to a temporary rise of the data stream’s input data rate, retry (with exponential backoff) by the data producer will eventually lead to the completion of the requests.
If this is due to a sustained rise of the data stream’s input data rate, you should increase the number of shards within your data stream to provide enough capacity for the put data calls to consistently succeed.
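A minimal sketch of producer-side retries with exponential backoff using boto3 (the stream name is a placeholder):
import time
import boto3

kinesis = boto3.client("kinesis")

def put_with_backoff(data: bytes, partition_key: str, max_attempts: int = 5):
    for attempt in range(max_attempts):
        try:
            return kinesis.put_record(
                StreamName="iot-stream",  # placeholder stream name
                Data=data,
                PartitionKey=partition_key,
            )
        except kinesis.exceptions.ProvisionedThroughputExceededException:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
    raise RuntimeError("put_record failed after all retries")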
Incorrect options:
Use Amazon Kinesis Agent instead of Kinesis Producer Library (KPL) for sending data to Kinesis Data Streams - Kinesis Agent works with data producers. Using Kinesis Agent instead of KPL will not help as the constraint is the capacity limit of the Kinesis Data Stream.
Use Amazon SQS instead of Kinesis Data Streams - This is a distractor as using SQS will not help address the ProvisionedThroughputExceeded exception for the Kinesis Data Stream. This option does not address the issues in the use-case.
Use Kinesis enhanced fan-out for Kinesis Data Streams - You should use enhanced fan-out if you have, or expect to have, multiple consumers retrieving data from a stream in parallel. Therefore, using enhanced fan-out will not help address the ProvisionedThroughputExceeded exception as the constraint is the capacity limit of the Kinesis Data Stream.
References:
https://aws.amazon.com/kinesis/data-streams/
https://aws.amazon.com/kinesis/data-streams/faqs/
Question 21: Skipped
An e-commerce company uses AWS CloudFormation to implement Infrastructure as Code for the entire organization. Maintaining resources as CloudFormation stacks has greatly reduced the effort needed to manage and maintain them. However, a few teams have been complaining of failing stack updates owing to out-of-band fixes made directly to the stack resources.
Which of the following is the best solution that can help in keeping the CloudFormation stack and its resources in sync with each other?
Use CloudFormation in Elastic Beanstalk environment to reduce direct changes to CloudFormation resources
Use Drift Detection feature of CloudFormation
(Correct)
Use Tag feature of CloudFormation to monitor the changes happening on specific resources
Use Change Sets feature of CloudFormation
Explanation
Correct option:
Use Drift Detection feature of CloudFormation
Drift detection enables you to detect whether a stack's actual configuration differs, or has drifted, from its expected configuration. Use CloudFormation to detect drift on an entire stack, or individual resources within the stack. A resource is considered to have drifted if any of its actual property values differ from the expected property values. This includes if the property or resource has been deleted. A stack is considered to have drifted if one or more of its resources have drifted.
To determine whether a resource has drifted, CloudFormation determines the expected resource property values, as defined in the stack template and any values specified as template parameters. CloudFormation then compares those expected values with the actual values of those resource properties as they currently exist in the stack. A resource is considered to have drifted if one or more of its properties have been deleted, or had their value changed.
You can then take corrective action so that your stack resources are again in sync with their definitions in the stack template, such as updating the drifted resources directly so that they agree with their template definition. Resolving drift helps to ensure configuration consistency and successful stack operations.
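A minimal boto3 sketch of running drift detection on a stack (the stack name is a placeholder):
import time
import boto3

cfn = boto3.client("cloudformation")

# Kick off drift detection and poll until it completes
detection_id = cfn.detect_stack_drift(StackName="prod-stack")["StackDriftDetectionId"]

while True:
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

print(status["StackDriftStatus"])  # e.g. IN_SYNC or DRIFTED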
Incorrect options:
Use CloudFormation in Elastic Beanstalk environment to reduce direct changes to CloudFormation resources - An Elastic Beanstalk environment provides full access to the resources it creates, so out-of-band edits to those resources remain possible. This option therefore does not solve the issue described in the given use case.
Use Tag feature of CloudFormation to monitor the changes happening on specific resources - Tags help you identify and categorize the resources created as part of CloudFormation template. This feature is not helpful for the given use case.
Use Change Sets feature of CloudFormation - When you need to update a stack, understanding how your changes will affect running resources before you implement them can help you update stacks with confidence. Change sets allow you to preview how proposed changes to a stack might impact your running resources, for example, whether your changes will delete or replace any critical resources. AWS CloudFormation makes the changes to your stack only when you decide to execute the change set, allowing you to decide whether to proceed with your proposed changes or explore other changes by creating another change set. Change sets are not useful for the given use-case.
References:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/detect-drift-stack.html
Question 22: Skipped
A company uses Amazon Simple Email Service (SES) to cost-effectively send subscription emails to its customers. Intermittently, the SES service throws the error: Throttling – Maximum sending rate exceeded.
As a developer associate, which of the following would you recommend to fix this issue?
Configure Timeout mechanism for each request made to the SES service
Implement retry mechanism for all 4xx errors to avoid throttling error
Use Exponential Backoff technique to introduce delay in time before attempting to execute the operation again
(Correct)
Raise a service request with Amazon to increase the throttling limit for the SES API
Explanation
Correct option:
Use Exponential Backoff technique to introduce delay in time before attempting to execute the operation again - A “Throttling – Maximum sending rate exceeded” error is retriable. This error is different than other errors returned by Amazon SES. A request rejected with a “Throttling” error can be retried at a later time and is likely to succeed.
Retries are “selfish.” In other words, when a client retries, it spends more of the server's time to get a higher chance of success. Where failures are rare or transient, that's not a problem. This is because the overall number of retried requests is small, and the tradeoff of increasing apparent availability works well. When failures are caused by overload, retries that increase load can make matters significantly worse. They can even delay recovery by keeping the load high long after the original issue is resolved.
The preferred solution is to use a backoff. Instead of retrying immediately and aggressively, the client waits some amount of time between tries. The most common pattern is an exponential backoff, where the wait time is increased exponentially after every attempt.
A variety of factors can affect your send rate, e.g. message size, network performance or Amazon SES availability. The advantage of the exponential backoff approach is that your application will self-tune and it will call Amazon SES at close to the maximum allowed rate.
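A rough sketch of retrying SES sends with exponential backoff and jitter (the message details are placeholders):
import random
import time
import boto3
from botocore.exceptions import ClientError

ses = boto3.client("ses")

def send_with_backoff(message_kwargs: dict, max_attempts: int = 5):
    for attempt in range(max_attempts):
        try:
            return ses.send_email(**message_kwargs)
        except ClientError as e:
            if e.response["Error"]["Code"] != "Throttling":
                raise  # only throttling errors are worth retrying
            # Sleep a random amount, capped at 1s, 2s, 4s, ...
            time.sleep(random.uniform(0, 2 ** attempt))
    raise RuntimeError("send_email failed after all retries")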
Incorrect options:
Configure Timeout mechanism for each request made to the SES service - Requests can be configured to time out if they do not complete successfully within a given time, which frees up the database, application, and any other resource that would otherwise keep waiting. However, a timeout does not fix the underlying problem: the throttling error signifies high load on SES, and the fix is to reduce the rate of requests, which a timeout alone does not achieve.
Raise a service request with Amazon to increase the throttling limit for the SES API - If the throttling error were persistent, it would indicate consistently high load on the system, and increasing the throttling limit would be the right solution. But the error here is only intermittent, signifying that decreasing the rate of requests will handle the error.
Implement retry mechanism for all 4xx errors to avoid throttling error - 4xx status codes indicate that there was a problem with the client request. Common client request errors include providing invalid credentials and omitting required parameters. When you get a 4xx error, you need to correct the problem and resubmit a properly formed client request. Throttling is a server error and not a client error, hence retry on 4xx errors does not make sense here.
References:
https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/
Question 32: Skipped
A Developer is configuring Amazon EC2 Auto Scaling group to scale dynamically.
Which metric below is NOT part of Target Tracking Scaling Policy?
ASGAverageCPUUtilization
ASGAverageNetworkOut
ApproximateNumberOfMessagesVisible
(Correct)
ALBRequestCountPerTarget
Explanation
Correct option:
ApproximateNumberOfMessagesVisible - This is a CloudWatch Amazon SQS queue metric. The number of messages in a queue might not change proportionally to the size of the Auto Scaling group that processes messages from the queue. Hence, this metric does not work for target tracking.
Incorrect options:
With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value.
It is important to note that a target tracking scaling policy assumes that it should scale out your Auto Scaling group when the specified metric is above the target value. You cannot use a target tracking scaling policy to scale out your Auto Scaling group when the specified metric is below the target value.
ASGAverageCPUUtilization - This is a predefined metric for target tracking scaling policy. This represents the Average CPU utilization of the Auto Scaling group.
ASGAverageNetworkOut - This is a predefined metric for target tracking scaling policy. This represents the Average number of bytes sent out on all network interfaces by the Auto Scaling group.
ALBRequestCountPerTarget - This is a predefined metric for target tracking scaling policy. This represents the Number of requests completed per target in an Application Load Balancer target group.
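For illustration, a boto3 sketch of attaching a target tracking policy that uses one of these predefined metrics (the group and policy names are placeholders):
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization near 50%
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-50-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)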
Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
Question 34: Skipped
Your team lead has asked you to learn AWS CloudFormation to create a collection of related AWS resources and provision them in an orderly fashion. You decide to provide AWS-specific parameter types to catch invalid values.
When specifying parameters which of the following is not a valid Parameter type?
DependentParameter
(Correct)
String
AWS::EC2::KeyPair::KeyName
CommaDelimitedList
Explanation
Correct option:
AWS CloudFormation gives developers and businesses an easy way to create a collection of related AWS and third-party resources and provision them in an orderly and predictable fashion.
Parameter types enable CloudFormation to validate inputs earlier in the stack creation process.
CloudFormation currently supports the following parameter types:
String – A literal string
Number – An integer or float
List<Number> – An array of integers or floats
CommaDelimitedList – An array of literal strings that are separated by commas
AWS::EC2::KeyPair::KeyName – An Amazon EC2 key pair name
AWS::EC2::SecurityGroup::Id – A security group ID
AWS::EC2::Subnet::Id – A subnet ID
AWS::EC2::VPC::Id – A VPC ID
List<AWS::EC2::VPC::Id> – An array of VPC IDs
List<AWS::EC2::SecurityGroup::Id> – An array of security group IDs
List<AWS::EC2::Subnet::Id> – An array of subnet IDs
DependentParameter
In CloudFormation, parameters are all independent and cannot depend on each other. Therefore, this is an invalid parameter type.
Incorrect options:
String
CommaDelimitedList
AWS::EC2::KeyPair::KeyName
As mentioned in the explanation above, these are valid parameter types.
Reference:
https://aws.amazon.com/blogs/devops/using-the-new-cloudformation-parameter-types/
Question 36: Skipped
As an AWS Certified Developer Associate, you have been hired to work with the development team at a company to create a REST API using the serverless architecture.
Which of the following solutions will you choose to move the company to the serverless architecture paradigm?
Route 53 with EC2 as backend
Public-facing Application Load Balancer with ECS on Amazon EC2
API Gateway exposing Lambda Functionality
(Correct)
Fargate with Lambda at the front
Explanation
Correct option:
API Gateway exposing Lambda Functionality
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services.
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
API Gateway can expose Lambda functionality through RESTful APIs. Both are serverless options offered by AWS and hence the right choice for this scenario, considering all the functionality they offer.
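As a minimal sketch, a Lambda function behind an API Gateway proxy integration only needs to return a response object in the expected shape:
import json

def handler(event, context):
    # API Gateway (proxy integration) passes the HTTP request in `event`
    # and expects a dict with statusCode, headers, and body
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from a serverless REST API"}),
    }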
Incorrect options:
Fargate with Lambda at the front - Lambda cannot directly handle RESTful API requests. You can invoke a Lambda function over HTTPS by defining a custom RESTful API using Amazon API Gateway. So, Fargate with Lambda as the front-facing service is a wrong combination, though both Fargate and Lambda are serverless.
Public-facing Application Load Balancer with ECS on Amazon EC2 - ECS on Amazon EC2 does not come under serverless and hence cannot be considered for this use case.
Route 53 with EC2 as backend - Amazon EC2 is not a serverless service and hence cannot be considered for this use case.
References:
https://aws.amazon.com/serverless/
https://aws.amazon.com/api-gateway/
Question 37: Skipped
You create an Auto Scaling group to work with an Application Load Balancer. The scaling group is configured with a minimum size value of 5, a maximum value of 20, and the desired capacity value of 10. One of the 10 EC2 instances has been reported as unhealthy.
Which of the following actions will take place?
The ASG will terminate the EC2 Instance
(Correct)
The ASG will detach the EC2 instance from the group, and leave it running
The ASG will format the root EBS drive on the EC2 instance and run the User Data again
The ASG will keep the instance running and re-start the application
Explanation
Correct option:
The ASG will terminate the EC2 Instance
To maintain the same number of instances, Amazon EC2 Auto Scaling performs a periodic health check on running instances within an Auto Scaling group. When it finds that an instance is unhealthy, it terminates that instance and launches a new one. Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance.
Incorrect options:
The ASG will detach the EC2 instance from the group, and leave it running - The goal of the auto-scaling group is to get rid of the bad instance and replace it
The ASG will keep the instance running and re-start the application - The ASG does not have control of your application
The ASG will format the root EBS drive on the EC2 instance and run the User Data again - This will not happen; the ASG does not format the root EBS volume of an instance, and User Data runs only once, at the instance's first boot.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-terminate-instance
Question 38: Skipped
While troubleshooting, a developer realized that the Amazon EC2 instance is unable to connect to the Internet using the Internet Gateway.
Which conditions should be met for Internet connectivity to be established? (Select two)
The route table in the instance’s subnet should have a route to an Internet Gateway
(Correct)
The network ACLs associated with the subnet must have rules to allow inbound and outbound traffic
(Correct)
The subnet has been configured to be Public and has no access to the internet
The instance's subnet is associated with multiple route tables with conflicting configurations
The instance's subnet is not associated with any route table
Explanation
Correct options:
The network ACLs associated with the subnet must have rules to allow inbound and outbound traffic - The network access control lists (ACLs) that are associated with the subnet must have rules to allow inbound and outbound traffic on port 80 (for HTTP traffic) and port 443 (for HTTPS traffic). This is a necessary condition for Internet Gateway connectivity.
The route table in the instance’s subnet should have a route to an Internet Gateway - A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed. The route table in the instance’s subnet should have a route defined to the Internet Gateway.
Incorrect options:
The instance's subnet is not associated with any route table - This is an incorrect statement. A subnet is implicitly associated with the main route table if it is not explicitly associated with a particular route table. So, a subnet is always associated with some route table.
The instance's subnet is associated with multiple route tables with conflicting configurations - This is an incorrect statement. A subnet can only be associated with one route table at a time.
The subnet has been configured to be Public and has no access to the internet - This is an incorrect statement. Public subnets have access to the internet via an Internet Gateway.
Reference:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html
Question 40: Skipped
A company is using a Border Gateway Protocol (BGP) based AWS VPN connection to connect from its on-premises data center to Amazon EC2 instances in the company’s account. The development team can access an EC2 instance in subnet A but is unable to access an EC2 instance in subnet B in the same VPC.
Which logs can be used to verify whether the traffic is reaching subnet B?
Subnet logs
VPN logs
VPC Flow Logs
(Correct)
BGP logs
Explanation
Correct option:
VPC Flow Logs - VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination.
You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored.
Flow log data for a monitored network interface is recorded as flow log records, which are log events consisting of fields that describe the traffic flow.
To create a flow log, you specify:
The resource for which to create the flow log
The type of traffic to capture (accepted traffic, rejected traffic, or all traffic)
The destinations to which you want to publish the flow log data
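A minimal boto3 sketch of creating such a flow log for a subnet (all IDs, names, and ARNs are placeholders):
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["subnet-0123456789abcdef0"],  # e.g. subnet B
    ResourceType="Subnet",
    TrafficType="ALL",  # accepted, rejected, or all traffic
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)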
Incorrect options:
VPN logs
Subnet logs
BGP logs
These three options are incorrect and have been added as distractors.
Reference:
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
Question 41: Skipped
Recently in your organization, the AWS X-Ray SDK was bundled into each Lambda function to record outgoing calls for tracing purposes. When your team leader goes to the X-Ray service in the AWS Management Console to get an overview of the information collected, they discover that no data is available.
What is the most likely reason for this issue?
X-Ray only works with AWS Lambda aliases
Fix the IAM Role
(Correct)
Change the security group rules
Enable X-Ray sampling
Explanation
Correct option:
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.
Fix the IAM Role
Create an IAM role with write permissions and assign it to the resources running your application. You can use AWS Identity and Access Management (IAM) to grant X-Ray permissions to users and compute resources in your account. Verifying that these permissions are properly configured should be one of the first troubleshooting steps, before exploring other options.
Here is an example of X-Ray Read-Only permissions via an IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "xray:GetSamplingRules",
        "xray:GetSamplingTargets",
        "xray:GetSamplingStatisticSummaries",
        "xray:BatchGetTraces",
        "xray:GetServiceGraph",
        "xray:GetTraceGraph",
        "xray:GetTraceSummaries",
        "xray:GetGroups",
        "xray:GetGroup"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Another example of write permissions for using X-Ray via an IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords",
        "xray:GetSamplingRules",
        "xray:GetSamplingTargets",
        "xray:GetSamplingStatisticSummaries"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Incorrect options:
Enable X-Ray sampling - If permissions are not configured correctly, sampling will not work either, so this option is not correct.
X-Ray only works with AWS Lambda aliases - This is not true, aliases are pointers to specific Lambda function versions. To use the X-Ray SDK on Lambda, bundle it with your function code each time you create a new version.
Change the security group rules - You grant permissions to your Lambda function to access other resources using an IAM role and not via security groups.
Reference:
https://docs.aws.amazon.com/xray/latest/devguide/security_iam_troubleshoot.html
Question 43: Skipped
The development team at a retail company is gearing up for the upcoming Thanksgiving sale and wants to make sure that the application's serverless backend running via Lambda functions does not hit latency bottlenecks as a result of the traffic spike.
As a Developer Associate, which of the following solutions would you recommend to address this use-case?
Add an Application Load Balancer in front of the Lambda functions
No need to make any special provisions as Lambda is automatically scalable because of its serverless nature
Configure Application Auto Scaling to manage Lambda reserved concurrency on a schedule
Configure Application Auto Scaling to manage Lambda provisioned concurrency on a schedule
(Correct)
Explanation
Correct option:
Configure Application Auto Scaling to manage Lambda provisioned concurrency on a schedule
Concurrency is the number of requests that a Lambda function is serving at any given time. If a Lambda function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency.
When Lambda functions scale in response to a traffic spike, the portion of requests served by newly initialized instances experiences higher latency than the rest. To enable your function to scale without fluctuations in latency, use provisioned concurrency. By allocating provisioned concurrency before an increase in invocations, you can ensure that all requests are served by initialized instances with very low latency.
You can configure Application Auto Scaling to manage provisioned concurrency on a schedule or based on utilization. Use scheduled scaling to increase provisioned concurrency in anticipation of peak traffic. To increase provisioned concurrency automatically as needed, use the Application Auto Scaling API to register a target and create a scaling policy.
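A rough boto3 sketch of scheduled scaling for provisioned concurrency (the function name, alias, date, and capacities are placeholders):
import boto3

aas = boto3.client("application-autoscaling")

# Register the function alias as a scalable target
aas.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:orders-backend:prod",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=10,
    MaxCapacity=500,
)

# Raise provisioned concurrency ahead of the anticipated traffic spike
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="sale-scale-up",
    ResourceId="function:orders-backend:prod",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="at(2024-11-28T00:00:00)",
    ScalableTargetAction={"MinCapacity": 300},
)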
Incorrect options:
Configure Application Auto Scaling to manage Lambda reserved concurrency on a schedule - To ensure that a function can always reach a certain level of concurrency, you can configure the function with reserved concurrency. When a function has reserved concurrency, no other function can use that concurrency. More importantly, reserved concurrency also limits the maximum concurrency for the function, and applies to the function as a whole, including versions and aliases.
You cannot configure Application Auto Scaling to manage Lambda reserved concurrency on a schedule.
Add an Application Load Balancer in front of the Lambda functions - This is a distractor as just adding the Application Load Balancer will not help in scaling the Lambda functions to address the surge in traffic.
No need to make any special provisions as Lambda is automatically scalable because of its serverless nature - It's true that Lambda is serverless, however, due to the surge in traffic the Lambda functions can still hit the concurrency limits. So this option is incorrect.
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
Question 44: Skipped
A junior developer has been asked to configure access to an Amazon EC2 instance hosting a web application. The developer has configured a new security group to permit incoming HTTP traffic from 0.0.0.0/0 and retained the default outbound rules. A custom Network Access Control List (NACL) associated with the instance's subnet is configured to permit incoming HTTP traffic from 0.0.0.0/0 and also retains its default outbound rules.
Which of the following solutions would you suggest if the EC2 instance needs to accept and respond to requests from the internet?
Outbound rules need to be configured both on the security group and on the NACL for sending responses to the Internet Gateway
The configuration is complete on the EC2 instance for accepting and responding to requests
An outbound rule on the security group has to be configured, to allow the response to be sent to the client on the HTTP port
An outbound rule must be added to the Network ACL (NACL) to allow the response to be sent to the client on the ephemeral port range
(Correct)
Explanation
Correct option:
An outbound rule must be added to the Network ACL (NACL) to allow the response to be sent to the client on the ephemeral port range
Security groups are stateful, so allowing inbound traffic to the necessary ports enables the connection. Network ACLs are stateless, so you must allow both inbound and outbound traffic. By default, each custom Network ACL denies all inbound and outbound traffic until you add rules.
To enable the connection to a service running on an instance, the associated network ACL must allow both:
Inbound traffic on the port that the service is listening on
Outbound traffic to ephemeral ports
When a client connects to a server, a random port from the ephemeral port range (1024-65535) becomes the client's source port.
The designated ephemeral port becomes the destination port for return traffic from the service. Outbound traffic to the ephemeral port must be allowed in the network ACL.
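For illustration, adding the required outbound NACL rule with boto3 might look like the following sketch (the NACL ID and rule number are placeholders):
import boto3

ec2 = boto3.client("ec2")

# Allow outbound TCP responses to the client's ephemeral port range
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100,
    Protocol="6",            # TCP
    RuleAction="allow",
    Egress=True,             # this is an outbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)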
Incorrect options:
The configuration is complete on the EC2 instance for accepting and responding to requests - As explained above, this is an incorrect statement.
An outbound rule on the security group has to be configured, to allow the response to be sent to the client on the HTTP port - Security groups are stateful. Therefore you don't need a rule that allows responses to inbound traffic.
Outbound rules need to be configured both on the security group and on the NACL for sending responses to the Internet Gateway - Security Groups are stateful. Hence, return traffic is automatically allowed, so there is no need to configure an outbound rule on the security group.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/resolve-connection-sg-acl-inbound/
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports
Question 46: Skipped
A company has created an Amazon S3 bucket that holds customer data. The team lead has just enabled access logging to this bucket. The bucket size has grown substantially after starting access logging. Since no new files have been added to the bucket, the perplexed team lead is looking for an answer.
Which of the following reasons explains this behavior?
S3 access logging is pointing to the same bucket and is responsible for the substantial growth of bucket size
(Correct)
A DDoS attack on your S3 bucket can potentially blow up the size of data in the bucket if the bucket security is compromised during the attack
Object Encryption has been enabled and each object is stored twice as part of this configuration
Erroneous Bucket policies for batch uploads can sometimes be responsible for the exponential growth of S3 Bucket size
Explanation
Correct option:
S3 access logging is pointing to the same bucket and is responsible for the substantial growth of bucket size - When your source bucket and target bucket are the same bucket, additional logs are created for the logs that are written to the bucket. The extra logs about logs might make it harder to find the log that you are looking for. This configuration would drastically increase the size of the S3 bucket.
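A minimal boto3 sketch of the fix, pointing access logs at a separate target bucket (bucket names are placeholders):
import boto3

s3 = boto3.client("s3")

# Logging to a *different* bucket avoids the logs-about-logs growth
s3.put_bucket_logging(
    Bucket="customer-data",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "customer-data-access-logs",
            "TargetPrefix": "logs/",
        }
    },
)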
Incorrect options:
Erroneous Bucket policies for batch uploads can sometimes be responsible for the exponential growth of S3 Bucket size - This is an incorrect statement. A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. A bucket policy, for batch processes or normal processes, will not increase the size of the bucket or the objects in it.
A DDoS attack on your S3 bucket can potentially blow up the size of data in the bucket if the bucket security is compromised during the attack - This is an incorrect statement. AWS handles DDoS attacks on all of its managed services, and a DDoS attack will not increase the size of the bucket.
Object Encryption has been enabled and each object is stored twice as part of this configuration - This is an incorrect statement. Enabling encryption does not store each object twice or increase the bucket's size, let alone cause the steady growth seen in the current scenario.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html
Question 53: Skipped
After a code review, a developer has been asked to make his publicly accessible S3 buckets private, and enable access to objects with a time-bound constraint.
Which of the following options will address the given use-case?
Share pre-signed URLs with resources that need access
(Correct)
Use Routing policies to re-route unintended access
It is not possible to implement time constraints on Amazon S3 Bucket access
Use Bucket policy to block the unintended access
Explanation
Correct option:
Share pre-signed URLs with resources that need access - All objects by default are private, with the object owner having permission to access the objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects. When you create a pre-signed URL for your object, you must provide your security credentials, specify a bucket name, an object key, specify the HTTP method (GET to download the object), and expiration date and time. The pre-signed URLs are valid only for the specified duration.
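A minimal boto3 sketch of generating a time-bound pre-signed URL (the bucket and key are placeholders):
import boto3

s3 = boto3.client("s3")

# The URL grants GET access to one object and expires after an hour
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "private-reports", "Key": "q3/report.pdf"},
    ExpiresIn=3600,  # seconds
)
print(url)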
Incorrect options:
Use Bucket policy to block the unintended access - A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Bucket policy can be used to block off unintended access, but it's not possible to provide time-based access, as is the case in the current use case.
Use Routing policies to re-route unintended access - There is no such facility directly available with Amazon S3.
It is not possible to implement time constraints on Amazon S3 Bucket access - This is an incorrect statement. As explained above, it is possible to give time-bound access permissions on S3 buckets and objects.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-bucket-policy.html
Question 57: Skipped
The development team behind a social gaming mobile app wants to simplify the user sign up process for the app. The team is looking for a fully managed, scalable solution for user management in anticipation of the rapid growth that the app foresees.
As a Developer Associate, which of the following solutions would you suggest so that it requires the LEAST amount of development effort?
Create a custom solution using Lambda and DynamoDB to facilitate sign up and user management for the mobile app
Create a custom solution using EC2 and DynamoDB to facilitate sign up and user management for the mobile app
Use Cognito User pools to facilitate sign up and user management for the mobile app
(Correct)
Use Cognito Identity pools to facilitate sign up and user management for the mobile app
Explanation
Correct option:
Use Cognito User pools to facilitate sign up and user management for the mobile app
Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party such as Facebook, Amazon, Google or Apple.
A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito, or federate through a third-party identity provider (IdP). Whether your users sign-in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK.
Cognito is fully managed by AWS and works out of the box so it meets the requirements for the given use-case.
Incorrect options:
Use Cognito Identity pools to facilitate sign up and user management for the mobile app - You can use Identity pools to grant your users access to other AWS services. With an identity pool, your users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB. Identity pools support anonymous guest users, as well as the specific identity providers that you can use to authenticate users for identity pools.
Exam Alert:
Create a custom solution with EC2 and DynamoDB to facilitate sign up and user management for the mobile app
Create a custom solution with Lambda and DynamoDB to facilitate sign up and user management for the mobile app
As the problem statement mentions that the solution needs to be fully managed and should require the least amount of development effort, so you cannot use EC2 or Lambda functions with DynamoDB to create a custom solution.
Reference:
https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
Question 61: Skipped
You have created a continuous delivery service model with automated steps using AWS CodePipeline. Your pipeline uses your code, maintained in a CodeCommit repository, AWS CodeBuild, and AWS Elastic Beanstalk to automatically deploy your code every time there is a code change. However, the deployment to Elastic Beanstalk is taking a very long time due to resolving dependencies on all of your 100 target EC2 instances.
Which of the following actions should you take to improve performance with limited code changes?
Store the dependencies in S3, to be used while deploying to Beanstalk
Bundle the dependencies in the source code in CodeCommit
Bundle the dependencies in the source code during the build stage of CodeBuild
(Correct)
Create a custom platform for Elastic Beanstalk
Explanation
Correct option:
Bundle the dependencies in the source code during the build stage of CodeBuild
AWS CodeBuild is a fully managed build service. There are no servers to provision and scale, or software to install, configure, and operate.
A typical application build process includes phases like preparing the environment, updating the configuration, downloading dependencies, running unit tests, and finally, packaging the built artifact.
Downloading dependencies is a critical phase in the build process. These dependent files can range in size from a few KBs to multiple MBs. Because most of the dependent files do not change frequently between builds, you can noticeably reduce your build time by caching dependencies.
Bundling the dependencies during the build stage means that the bundle deployed to Elastic Beanstalk already contains both the dependencies and the code, hence speeding up the deployment time to Elastic Beanstalk.
Incorrect options:
Bundle the dependencies in the source code in CodeCommit - This is not the best practice and could make the CodeCommit repository huge.
Store the dependencies in S3, to be used while deploying to Beanstalk - This option acts as a distractor. S3 can be used as a storage location for your source code, logs, and other artifacts that are created when you use Elastic Beanstalk. Dependencies are used during the process of building code, not while deploying to Beanstalk.
Create a custom platform for Elastic Beanstalk - This is a more advanced feature that requires code changes, so does not fit the use-case.
Reference:
https://aws.amazon.com/blogs/devops/how-to-enable-caching-for-aws-codebuild/
Question 62: Skipped
A company has a cloud system in AWS with components that send and receive messages using SQS queues. While reviewing the system you see that it processes a lot of information and would like to be aware of any limits of the system.
Which of the following represents the maximum number of messages that can be stored in an SQS queue?
no limit
(Correct)
10000000
10000
100000
Explanation
Correct option:
"no limit": There are no message limits for storing in SQS, but 'in-flight messages' do have limits. Make sure to delete messages after you have processed them. There can be a maximum of approximately 120,000 inflight messages (received from a queue by a consumer, but not yet deleted from the queue).
Incorrect options:
"10000"
"100000"
"10000000"
These three options contradict the details provided in the explanation above, so these are incorrect.
Reference:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-limits.html
Question 64: Skipped
An Accounting firm extensively uses Amazon EBS volumes for persistent storage of application data of Amazon EC2 instances. The volumes are encrypted to protect the critical data of the clients. As part of managing the security credentials, the project manager has come across a policy snippet that looks like the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow for use of this Key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/UserRole"
      },
      "Action": [
        "kms:GenerateDataKeyWithoutPlaintext",
        "kms:Decrypt"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Allow for EC2 Use",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/UserRole"
      },
      "Action": [
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "ec2.us-west-2.amazonaws.com"
        }
      }
    }
  ]
}
Which of the following options are correct regarding the policy?
The first statement provides the security group the ability to generate a data key and decrypt that data key from the CMK when necessary
The second statement in this policy provides the security group (mentioned in first statement of the policy), the ability to create, list, and revoke grants for Amazon EC2
The first statement provides a specified IAM principal the ability to generate a data key and decrypt that data key from the CMK when necessary
(Correct)
The second statement in the policy mentions that all the resources stated in the first statement can take the specified role which will provide the ability to create, list, and revoke grants for Amazon EC2
Explanation
Correct option:
The first statement provides a specified IAM principal the ability to generate a data key and decrypt that data key from the CMK when necessary - To create and use an encrypted Amazon Elastic Block Store (EBS) volume, you need permissions to use Amazon EBS. The key policy associated with the CMK would need to include these permissions. The above policy is an example of one such policy.
In this CMK policy, the first statement provides a specified IAM principal the ability to generate a data key and decrypt that data key from the CMK when necessary. These two APIs are necessary to encrypt the EBS volume while it’s attached to an Amazon Elastic Compute Cloud (EC2) instance.
The second statement in this policy provides the specified IAM principal the ability to create, list, and revoke grants for Amazon EC2. Grants are used to delegate a subset of permissions to AWS services, or other principals, so that they can use your keys on your behalf. In this case, the condition policy explicitly ensures that only Amazon EC2 can use the grants. Amazon EC2 will use them to re-attach an encrypted EBS volume back to an instance if the volume gets detached due to a planned or unplanned outage. These events will be recorded within AWS CloudTrail when, and if, they do occur for your auditing.
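As a hedged illustration of the grants the second statement permits, the boto3 sketch below shows what a CreateGrant call looks like; the key ID and grantee principal are hypothetical placeholders:

import boto3

kms = boto3.client("kms")

# Sketch of a grant that delegates decrypt permissions on the CMK;
# EC2 uses grants like this to re-attach encrypted EBS volumes on your behalf.
kms.create_grant(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical CMK ID
    GranteePrincipal="arn:aws:iam::111122223333:role/UserRole",
    Operations=["Decrypt", "GenerateDataKeyWithoutPlaintext"],
)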
Incorrect options:
The first statement provides the security group the ability to generate a data key and decrypt that data key from the CMK when necessary
The second statement in this policy provides the security group (mentioned in the first statement of the policy), the ability to create, list, and revoke grants for Amazon EC2
The second statement in the policy mentions that all the resources stated in the first statement can take the specified role which will provide the ability to create, list, and revoke grants for Amazon EC2
These three options contradict the explanation provided above, so these options are incorrect.
Reference:
https://d0.awsstatic.com/whitepapers/aws-kms-best-practices.pdf
Practice 3
Question 1: Skipped
As a developer, you are looking at creating a custom configuration for Amazon EC2 instances running in an Auto Scaling group. The solution should allow the group to auto-scale based on the metric of 'average RAM usage' for your Amazon EC2 instances.
Which option provides the best solution?
Create a custom alarm for your ASG and make your instances trigger the alarm using PutAlarmData API
Enable detailed monitoring for EC2 and ASG to get the RAM usage data and create a CloudWatch Alarm on top of it
Migrate your application to AWS Lambda
Create a custom metric in CloudWatch and make your instances send data to it using PutMetricData. Then, create an alarm based on this metric
(Correct)
Explanation
Correct option:
Create a custom metric in CloudWatch and make your instances send data to it using PutMetricData. Then, create an alarm based on this metric - You can create a custom CloudWatch metric for your EC2 Linux instance statistics by creating a script through the AWS Command Line Interface (AWS CLI). Then, you can monitor that metric by pushing it to CloudWatch.
You can publish your own metrics to CloudWatch using the AWS CLI or an API. Metrics produced by AWS services are standard resolution by default. When you publish a custom metric, you can define it as either standard resolution or high resolution. When you publish a high-resolution metric, CloudWatch stores it with a resolution of 1 second, and you can read and retrieve it with a period of 1 second, 5 seconds, 10 seconds, 30 seconds, or any multiple of 60 seconds.
High-resolution metrics can give you more immediate insight into your application's sub-minute activity. But, every PutMetricData call for a custom metric is charged, so calling PutMetricData more often on a high-resolution metric can lead to higher charges.
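A minimal boto3 sketch of publishing such a custom metric follows; the namespace, dimension value, and the get_ram_usage_percent() helper are hypothetical:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish the instance's memory utilization as a custom metric; an alarm on
# the average of this metric can then drive the Auto Scaling policy.
# Add "StorageResolution": 1 to the metric entry for a high-resolution metric.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",  # hypothetical custom namespace
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
        "Value": get_ram_usage_percent(),  # hypothetical helper
        "Unit": "Percent",
    }],
)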
Incorrect options:
Create a custom alarm for your ASG and make your instances trigger the alarm using PutAlarmData API - This solution will not work; your instances would need to be aware of each other's RAM utilization to know when the average RAM is high enough to trigger the alarm.
Enable detailed monitoring for EC2 and ASG to get the RAM usage data and create a CloudWatch Alarm on top of it - Enabling detailed monitoring only changes the frequency at which EC2 sends metric data to CloudWatch, from 5-minute to 1-minute intervals. It does not add a RAM metric; you would still need to create and collect the custom metric you wish to track.
Migrate your application to AWS Lambda - This option has been added as a distractor. You cannot use Lambda for the given use-case.
References:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-monitoring.html#CloudWatchAlarm
Question 3: Skipped
A junior developer working on ECS instances terminated a container instance in Amazon Elastic Container Service (Amazon ECS) as per instructions from the team lead. But the container instance continues to appear as a resource in the ECS cluster.
As a Developer Associate, which of the following solutions would you recommend to fix this behavior?
The container instance has been terminated with AWS CLI, whereas, for ECS instances, Amazon ECS CLI should be used to avoid any synchronization issues
You terminated the container instance while it was in the STOPPED state, which led to this synchronization issue
(Correct)
You terminated the container instance while it was in the RUNNING state, which led to this synchronization issue
Custom software on the container instance could have failed, resulting in the container hanging in an unhealthy state until restarted again
Explanation
Correct option:
You terminated the container instance while it was in the STOPPED state, which led to this synchronization issue - If you terminate a container instance while it is in the STOPPED state, that container instance isn't automatically removed from the cluster. You will need to deregister your container instance in the STOPPED state by using the Amazon ECS console or AWS Command Line Interface. Once deregistered, the container instance will no longer appear as a resource in your Amazon ECS cluster.
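A hedged boto3 sketch of the deregistration step is shown below; the cluster name and container instance ARN are hypothetical:

import boto3

ecs = boto3.client("ecs")

# Deregister the stale container instance so it no longer appears in the cluster
ecs.deregister_container_instance(
    cluster="MainCluster",  # hypothetical cluster name
    containerInstance="arn:aws:ecs:us-east-1:111122223333:container-instance/MainCluster/abcd1234",  # hypothetical
    force=True,  # deregister even if tasks are still associated with it
)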
Incorrect options:
You terminated the container instance while it was in the RUNNING state, which led to this synchronization issue - This is an incorrect statement. If you terminate a container instance in the RUNNING state, that container instance is automatically removed, or deregistered, from the cluster.
The container instance has been terminated with AWS CLI, whereas, for ECS instances, Amazon ECS CLI should be used to avoid any synchronization issues - This is incorrect and has been added as a distractor.
Custom software on the container instance could have failed, resulting in the container hanging in an unhealthy state until restarted again - This is an incorrect statement. It is already mentioned in the question that the developer has terminated the instance.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/deregister-ecs-instance/
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_instances.html
Question 5: Skipped
A banking application needs to send real-time alerts and notifications based on any updates from the backend services. The company wants to avoid implementing complex polling mechanisms for these notifications.
Which of the following types of APIs supported by the Amazon API Gateway is the right fit?
REST or HTTP APIs
HTTP APIs
WebSocket APIs
(Correct)
REST APIs
Explanation
Correct option:
WebSocket APIs
In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms.
For example, you could build a serverless application using an API Gateway WebSocket API and AWS Lambda to send and receive messages to and from individual users or groups of users in a chat room. Or you could invoke backend services such as AWS Lambda, Amazon Kinesis, or an HTTP endpoint based on message content.
You can use API Gateway WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following:
Chat applications
Real-time dashboards such as stock tickers
Real-time alerts and notifications
API Gateway provides WebSocket API management functionality such as the following:
Monitoring and throttling of connections and messages
Using AWS X-Ray to trace messages as they travel through the APIs to backend services
Easy integration with HTTP/HTTPS endpoints
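Once a client is connected, the backend can push data through the API Gateway management endpoint. The boto3 sketch below illustrates this; the endpoint URL and connection ID are hypothetical values that would be captured at $connect time:

import boto3

# The management endpoint is the WebSocket API's HTTPS callback URL
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.us-east-1.amazonaws.com/prod",  # hypothetical
)

# Push a notification to a single connected client, no polling required
apigw.post_to_connection(
    ConnectionId="ExAmPlEcOnNeCtIoNiD=",  # hypothetical connection ID
    Data=b'{"alert": "New update from the backend"}',
)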
Incorrect options:
REST or HTTP APIs
REST APIs - An API Gateway REST API is made up of resources and methods. A resource is a logical entity that an app can access through a resource path. A method corresponds to a REST API request that is submitted by the user of your API and the response returned to the user.
For example, /incomes could be the path of a resource representing the income of the app user. A resource can have one or more operations that are defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the reported expenses incurred by the caller.
HTTP APIs - HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. You can use HTTP APIs to send requests to AWS Lambda functions or to any publicly routable HTTP endpoint.
For example, you can create an HTTP API that integrates with a Lambda function on the backend. When a client calls your API, API Gateway sends the request to the Lambda function and returns the function's response to the client.
A server-push mechanism is not possible with REST or HTTP APIs, so none of these options fit the requirement.
Question 8: Skipped
Your company runs business logic on smaller software components that perform various functions. Some functions process information in a few seconds while others seem to take a long time to complete. Your manager asked you to decouple components that take a long time to ensure software applications stay responsive under load. You decide to configure Amazon Simple Queue Service (SQS) to work with your Elastic Beanstalk configuration.
Which of the following Elastic Beanstalk environments should you choose to meet this requirement?
Single Instance Worker node
Load-balancing, Autoscaling environment
Dedicated worker environment
(Correct)
Single Instance with Elastic IP
Explanation
Correct option:
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
Dedicated worker environment - If your AWS Elastic Beanstalk application performs operations or workflows that take a long time to complete, you can offload those tasks to a dedicated worker environment. Decoupling your web application front end from a process that performs blocking operations is a common way to ensure that your application stays responsive under load.
A long-running task is anything that substantially increases the time it takes to complete a request, such as processing images or videos, sending emails, or generating a ZIP archive. These operations can take only a second or two to complete, but a delay of a few seconds is a lot for a web request that would otherwise complete in less than 500 ms.
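As a rough sketch of how a worker environment consumes work, the snippet below shows a minimal HTTP handler, assuming Flask is available: the worker tier's daemon reads messages from the SQS queue and POSTs each message body to the application's configured HTTP path, and a 200 response tells the daemon the message was processed successfully.

from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])  # assuming the default worker HTTP path
def handle_task():
    payload = request.get_data(as_text=True)  # the SQS message body
    process_long_running_task(payload)        # hypothetical long-running work
    return "", 200  # 200 signals success; the daemon then deletes the message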
Incorrect options:
Single Instance Worker node - Worker machines in Kubernetes are called nodes. Amazon EKS worker nodes are standard Amazon EC2 instances. EKS worker nodes are not to be confused with the Elastic Beanstalk worker environment. Since we are talking about the Elastic Beanstalk environment, this is not the correct choice.
Load-balancing, Autoscaling environment - A load-balancing and autoscaling environment uses the Elastic Load Balancing and Amazon EC2 Auto Scaling services to provision the Amazon EC2 instances that are required for your deployed application. Amazon EC2 Auto Scaling automatically starts additional instances to accommodate increasing load on your application. If your application requires scalability with the option of running in multiple Availability Zones, use a load-balancing, autoscaling environment. This is not the right environment for the given use-case since it will add costs to the overall solution.
Single Instance with Elastic IP - A single-instance environment contains one Amazon EC2 instance with an Elastic IP address. A single-instance environment doesn't have a load balancer, which can help you reduce costs compared to a load-balancing, autoscaling environment. This is not a highly available architecture, because if that one instance goes down then your application is down. This is not recommended for production environments.
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html
Question 9: Skipped
Your company hosts a static website on Amazon Simple Storage Service (S3) written in HTML5. The website targets aviation enthusiasts and it has grown a worldwide audience with hundreds of thousands of visitors accessing the website now on a monthly basis. While users in the United States have a great user experience, users from other parts of the world are experiencing slow responses and lag.
Which service can mitigate this issue?
Use Amazon S3 Transfer Acceleration
Use Amazon CloudFront
(Correct)
Use Amazon S3 Caching
Use Amazon ElastiCache for Redis
Explanation
Correct option:
Use Amazon CloudFront
Storing your static content with S3 provides a lot of advantages. But to help optimize your application’s performance and security while effectively managing cost, AWS recommends that you also set up Amazon CloudFront to work with your S3 bucket to serve and protect the content. CloudFront is a content delivery network (CDN) service that delivers static and dynamic web content, video streams, and APIs around the world, securely and at scale. By design, delivering data out of CloudFront can be more cost-effective than delivering it from S3 directly to your users.
By caching your content in Edge Locations, CloudFront reduces the load on your S3 bucket and helps ensure a faster response for your users when they request content. In addition, data transfer out for content by using CloudFront is often more cost-effective than serving files directly from S3, and there is no data transfer fee from S3 to CloudFront.
A security feature of CloudFront is Origin Access Identity (OAI), which restricts access to an S3 bucket and its content to only CloudFront and operations it performs.
Incorrect options:
Use Amazon ElastiCache for Redis - Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals). ElastiCache is often used with databases that need millisecond latency. For the current scenario, we do not need a caching layer since the data load is not that heavy.
Use Amazon S3 Caching - This is a made-up option, given as a distractor.
Use Amazon S3 Transfer Acceleration - Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. However, S3 Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. Each time S3 Transfer Acceleration is used to upload an object, AWS checks whether S3 Transfer Acceleration is likely to be faster than a regular Amazon S3 transfer. If it finds that S3 Transfer Acceleration might not be significantly faster, AWS shifts back to normal Amazon S3 transfer mode. So, this is not the right option for our use case.
References:
https://aws.amazon.com/elasticache/
Question 16: Skipped
A developer is working on an AWS Lambda function that reads data from Amazon S3 objects and writes the data to an Amazon DynamoDB table. Although the function triggers successfully from an S3 event notification upon object creation, it encounters a failure while attempting to write data to the DynamoDB table.
What is the probable reason for the failure?
The Lambda function's reserved concurrency limit has been exceeded
DynamoDB table does not have a Gateway VPC Endpoint, which is required by the Lambda function for a successful write
The Lambda function's provisioned concurrency limit has been exceeded
The Lambda function does not have IAM permissions to write to DynamoDB
(Correct)
Explanation
Correct option:
The Lambda function does not have IAM permissions to write to DynamoDB
You need to use an identity-based policy that allows read and write access to a specific Amazon DynamoDB table. To use this policy, attach the policy to a Lambda service role. A service role is a role that you create in your account to allow a service to perform actions on your behalf. That service role must include AWS Lambda as the principal in the trust policy. The role is then used to grant a Lambda function access to a DynamoDB table. By using an IAM policy and role to control access, you don’t need to embed credentials in code and can tightly control which services the Lambda function can access.
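A hedged boto3 sketch of attaching such an identity-based policy to the function's execution role follows; the role name, table ARN, and policy name are hypothetical:

import json
import boto3

iam = boto3.client("iam")

# Inline policy granting write access to one DynamoDB table
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:BatchWriteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/MyTable",  # hypothetical
    }],
}

iam.put_role_policy(
    RoleName="my-lambda-execution-role",  # hypothetical execution role
    PolicyName="dynamodb-write-access",
    PolicyDocument=json.dumps(policy),
)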
Incorrect options:
The Lambda function's provisioned concurrency limit has been exceeded
The Lambda function's reserved concurrency limit has been exceeded
Reserved concurrency – Reserved concurrency guarantees the maximum number of concurrent instances for the function. When a function has reserved concurrency, no other function can use that concurrency. There is no charge for configuring reserved concurrency for a function.
Provisioned concurrency – Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond immediately to your function's invocations. Note that configuring provisioned concurrency incurs charges to your AWS account.
Neither reserved concurrency nor provisioned concurrency has any relevance to the given use case. Both options have been added as distractors.
DynamoDB table does not have a Gateway VPC Endpoint, which is required by the Lambda function for a successful write - Gateway endpoints provide reliable connectivity to Amazon S3 and DynamoDB without requiring an internet gateway or a NAT device for your VPC. Gateway endpoints do not enable AWS PrivateLink. This option acts as a distractor since the Lambda function is not provisioned within a VPC by default, so there is no need for a Gateway VPC Endpoint to access DynamoDB.
Question 19: Skipped
You are working for a shipping company that is automating the creation of ECS clusters with an Auto Scaling Group, using an AWS CloudFormation template that accepts the cluster name as its parameter. Initially, you launched the template with the input value 'MainCluster', which deployed five instances across two Availability Zones. The second time, you launched the template with the input value 'SecondCluster'. However, the instances created in the second run were also launched in 'MainCluster', even though a different cluster name was specified.
What is the root cause of this issue?
The cluster name Parameter has not been updated in the file /etc/ecs/ecs.config during bootstrap
(Correct)
The EC2 instance is missing IAM permissions to join the other clusters
The security groups on the EC2 instance are pointing to the wrong ECS cluster
The ECS agent Docker image must be re-built to connect to the other clusters
Explanation
Correct option:
The cluster name Parameter has not been updated in the file /etc/ecs/ecs.config during bootstrap - In the ecs.config file you have to configure the parameter ECS_CLUSTER='your_cluster_name' to register the container instance with a cluster named 'your_cluster_name'.
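For illustration, a bootstrap step equivalent to the fix might look like the Python sketch below, which appends the parameterized cluster name to the agent configuration file (the cluster name is hypothetical):

# Run during instance bootstrap, before the ECS agent starts, so the agent
# registers the instance with the intended cluster instead of the default.
with open("/etc/ecs/ecs.config", "a") as config_file:
    config_file.write("ECS_CLUSTER=SecondCluster\n")  # hypothetical cluster name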
Incorrect options:
The EC2 instance is missing IAM permissions to join the other clusters - EC2 instances are getting registered to the first cluster, so permissions are not an issue here and hence this statement is an incorrect choice for the current use case.
The ECS agent Docker image must be re-built to connect to the other clusters - Since the first set of instances got created from the template without any issues, there is no issue with the ECS agent here.
The security groups on the EC2 instance are pointing to the wrong ECS cluster - Security groups govern the rules about incoming network traffic to your ECS container instances; they do not determine cluster membership. The issue here is not about network access, and hence this is a wrong choice for the current use case.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
Question 24: Skipped
An IT company has migrated to a serverless application stack on the AWS Cloud with the compute layer being implemented via Lambda functions. The engineering managers would like to actively troubleshoot any failures in the Lambda functions.
As a Developer Associate, which of the following solutions would you suggest for this use-case?
Use CloudWatch Events to identify and notify any failures in the Lambda code
Use CodeCommit to identify and notify any failures in the Lambda code
The developers should insert logging statements in the Lambda function code which are then available via CloudWatch logs
(Correct)
Use CodeDeploy to identify and notify any failures in the Lambda code
Explanation
Correct option:
"The developers should insert logging statements in the Lambda function code which are then available via CloudWatch logs"
When you invoke a Lambda function, two types of error can occur. Invocation errors occur when the invocation request is rejected before your function receives it. Function errors occur when your function's code or runtime returns an error. Depending on the type of error, the type of invocation, and the client or service that invokes the function, the retry behavior, and the strategy for managing errors varies.
Lambda function failures are commonly caused by:
Permissions issues
Code issues
Network issues
Throttling
Invoke API 500 and 502 errors
You can insert logging statements into your code to help you validate that your code is working as expected. Lambda automatically integrates with CloudWatch Logs and pushes all logs from your code to a CloudWatch Logs group associated with a Lambda function, which is named /aws/lambda/<function name>.
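A minimal sketch of such logging in a Python handler is shown below; the do_work() helper is hypothetical:

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # These statements land in the function's CloudWatch Logs group
    logger.info("Received event: %s", event)
    try:
        result = do_work(event)  # hypothetical business logic
        logger.info("Processing succeeded: %s", result)
        return result
    except Exception:
        logger.exception("Processing failed")  # records the stack trace for troubleshooting
        raise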
Incorrect options:
"Use CloudWatch Events to identify and notify any failures in the Lambda code" - Typically Lambda functions are triggered as a response to a CloudWatch Event. CloudWatch Events cannot identify and notify failures in the Lambda code.
"Use CodeCommit to identify and notify any failures in the Lambda code"
"Use CodeDeploy to identify and notify any failures in the Lambda code"
AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers.
Neither CodeCommit nor CodeDeploy can identify and notify failures in the Lambda code.
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html
Question 28: Skipped
A developer from your team has configured the load balancer to route traffic equally between instances or across Availability Zones. However, Elastic Load Balancing (ELB) routes more traffic to one instance or Availability Zone than the others.
Why is this happening and how can it be fixed? (Select two)
Sticky sessions are enabled for the load balancer
(Correct)
After you disable an Availability Zone, the targets in that Availability Zone remain registered with the load balancer, thereby receiving random bursts of traffic
Instances of a specific capacity type aren’t equally distributed across Availability Zones
(Correct)
There could be short-lived TCP connections between clients and instances
For Application Load Balancers, cross-zone load balancing is disabled by default
Explanation
Correct option:
Sticky sessions are enabled for the load balancer - This can be the reason for potential unequal traffic routing by the load balancer. Sticky sessions are a mechanism to route requests to the same target in a target group. This is useful for servers that maintain state information in order to provide a continuous experience to clients. To use sticky sessions, the clients must support cookies.
When a load balancer first receives a request from a client, it routes the request to a target, generates a cookie named AWSALB that encodes information about the selected target, encrypts the cookie, and includes the cookie in the response to the client. The client should include the cookie that it receives in subsequent requests to the load balancer. When the load balancer receives a request from a client that contains the cookie, if sticky sessions are enabled for the target group and the request goes to the same target group, the load balancer detects the cookie and routes the request to the same target.
If you use duration-based session stickiness, configure an appropriate cookie expiration time for your specific use case. If you set session stickiness from individual applications, use session cookies instead of persistent cookies where possible.
Instances of a specific capacity type aren’t equally distributed across Availability Zones - A Classic Load Balancer with HTTP or HTTPS listeners might route more traffic to higher-capacity instance types. This distribution aims to prevent lower-capacity instance types from having too many outstanding requests. It’s a best practice to use similar instance types and configurations to reduce the likelihood of capacity gaps and traffic imbalances.
A traffic imbalance might also occur if you have instances of similar capacities running on different Amazon Machine Images (AMIs). In this scenario, the imbalance of the traffic in favor of higher-capacity instance types is desirable.
Incorrect options:
There could be short-lived TCP connections between clients and instances - This is an incorrect statement. It is long-lived TCP connections between clients and instances that can lead to unequal distribution of traffic by the load balancer, since such connections persist to the same instances and new instances take longer to reach connection equilibrium. Be sure to check your metrics for long-lived TCP connections that might be causing routing issues in the load balancer.
For Application Load Balancers, cross-zone load balancing is disabled by default - This is an incorrect statement. With Application Load Balancers, cross-zone load balancing is always enabled.
After you disable an Availability Zone, the targets in that Availability Zone remain registered with the load balancer, thereby receiving random bursts of traffic - This is an incorrect statement. After you disable an Availability Zone, the targets in that Availability Zone remain registered with the load balancer. However, even though they remain registered, the load balancer does not route traffic to them.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/elb-fix-unequal-traffic-routing/
Question 31: Skipped
You are running a cloud file storage website with an Internet-facing Application Load Balancer, which routes requests from users over the internet to 10 registered Amazon EC2 instances. Users are complaining that your website always asks them to re-authenticate when they switch pages. You are puzzled because this behavior is not seen on your local machine or in the dev environment.
What could be the reason?
The Load Balancer does not have stickiness enabled
(Correct)
The EC2 instances are logging out the users because the instances never have access to the client IPs because of the Load Balancer
The Load Balancer does not have TLS enabled
Application Load Balancer is in slow-start mode, which gives ALB a little more time to read and write session data
Explanation
Correct option:
The Load Balancer does not have stickiness enabled - Sticky sessions are a mechanism to route requests to the same target in a target group. This is useful for servers that maintain state information to provide a continuous experience to clients. To use sticky sessions, the clients must support cookies.
When a load balancer first receives a request from a client, it routes the request to a target, generates a cookie named AWSALB that encodes information about the selected target, encrypts the cookie, and includes the cookie in the response to the client. The client should include the cookie that it receives in subsequent requests to the load balancer. When the load balancer receives a request from a client that contains the cookie, if sticky sessions are enabled for the target group and the request goes to the same target group, the load balancer detects the cookie and routes the request to the same target.
Incorrect options:
Application Load Balancer is in slow-start mode, which gives ALB a little more time to read and write session data - This is an invalid statement. The load balancer serves as a single point of contact for clients and distributes incoming traffic across its healthy registered targets. By default, a target starts to receive its full share of requests as soon as it is registered with a target group and passes an initial health check. Using slow start mode gives targets time to warm up before the load balancer sends them a full share of requests. This does not help in session management.
The EC2 instances are logging out the users because the instances never have access to the client IPs because of the Load Balancer - This is an incorrect statement. Elastic Load Balancing stores the IP address of the client in the X-Forwarded-For request header and passes the header to the server. If needed, the server can read IP addresses from this data.
The Load Balancer does not have TLS enabled - To use an HTTPS listener, you must deploy at least one SSL/TLS server certificate on your load balancer. The load balancer uses a server certificate to terminate the front-end connection and then decrypt requests from clients before sending them to the targets. This does not help in session management.
Question 34: Skipped
An IT company has its serverless stack integrated with AWS X-Ray. The developer at the company has noticed a high volume of data going into X-Ray and the AWS monthly usage charges have skyrocketed as a result. The developer has requested changes to mitigate the issue.
As a Developer Associate, which of the following solutions would you recommend to obtain tracing trends while reducing costs with minimal disruption?
Implement a network sampling rule
Enable X-Ray sampling
(Correct)
Use Filter Expressions in the X-Ray console
Custom configuration for the X-Ray agents
Explanation
Correct option:
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.
Enable X-Ray sampling
To ensure efficient tracing and provide a representative sample of the requests that your application serves, the X-Ray SDK applies a sampling algorithm to determine which requests get traced. By default, the X-Ray SDK records the first request each second, and five percent of any additional requests. X-Ray sampling is enabled directly from the AWS console, hence your application code does not need to change.
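Beyond the defaults, you can define custom sampling rules. The boto3 sketch below creates one; the rule name, service name, and rates are hypothetical choices:

import boto3

xray = boto3.client("xray")

# Trace the first request each second (the reservoir) plus 5% of the rest
xray.create_sampling_rule(
    SamplingRule={
        "RuleName": "low-volume-tracing",  # hypothetical rule name
        "Priority": 100,
        "FixedRate": 0.05,                 # sample 5% beyond the reservoir
        "ReservoirSize": 1,                # always trace 1 request per second
        "ServiceName": "orders-service",   # hypothetical service name
        "ServiceType": "*",
        "Host": "*",
        "HTTPMethod": "*",
        "URLPath": "*",
        "ResourceARN": "*",
        "Version": 1,
    }
)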
Incorrect options:
Use Filter Expressions in the X-Ray console - When you choose a time period of traces to view in the X-Ray console, you might get more results than the console can display. You can narrow the results to just the traces that you want to find by using a filter expression. This option is not correct because it does not reduce the volume of data sent into the X-Ray console.
Custom configuration for the X-Ray agents - You cannot do a custom configuration of the X-Ray agents; instead, you can define custom sampling rules. So this option is incorrect.
Implement a network sampling rule - This option has been added as a distractor.
References:
https://docs.aws.amazon.com/xray/latest/devguide/xray-console-sampling.html
Question 41: Skipped
A developer wants to enable X-Ray tracing on an on-premises Linux server running a custom application that is accessed through Amazon API Gateway.
What is the most efficient solution that requires minimal configuration?
Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service
(Correct)
Install and run the CloudWatch Unified Agent on the on-premises servers to capture and relay the X-Ray data to the X-Ray service using the PutTraceSegments API call
Configure a Lambda function to analyze the incoming traffic data on the on-premises servers and then relay the X-Ray data to the X-Ray service using the PutTelemetryRecords API call
Install and run the X-Ray SDK on the on-premises servers to capture and relay the data to the X-Ray service
Explanation
Correct option:
Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service
The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service.
To run the X-Ray daemon locally, on-premises, or on other AWS services, download it, run it, and then give it permission to upload segment documents to X-Ray.
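For context, application code instrumented with the X-Ray SDK for Python simply points at the locally running daemon; a sketch follows, assuming the aws-xray-sdk package is installed (the service name is hypothetical):

from aws_xray_sdk.core import xray_recorder

# The daemon listens on UDP port 2000 by default and relays segments to X-Ray
xray_recorder.configure(
    service="on-prem-custom-app",   # hypothetical service name
    daemon_address="127.0.0.1:2000",
)

segment = xray_recorder.begin_segment("process-request")
# ... application work to be traced ...
xray_recorder.end_segment()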
Incorrect options:
Install and run the X-Ray SDK on the on-premises servers to capture and relay the data to the X-Ray service - As mentioned above, you need to run the X-Ray daemon on the on-premises servers and give it the required permission to upload X-Ray data to the X-Ray service. So this option is incorrect.
Install and run the CloudWatch Unified Agent on the on-premises servers to capture and relay the X-Ray data to the X-Ray service using the PutTraceSegments API call - This option has been added as a distractor. CloudWatch Agent cannot relay X-Ray data to the X-Ray service using the PutTraceSegments API call.
Configure a Lambda function to analyze the incoming traffic data on the on-premises servers and then relay the X-Ray data to the X-Ray service using the PutTelemetryRecords API call - This option is incorrect as the Lambda function cannot process the X-Ray data for an on-premises instance and then relay it to the X-Ray service.
Reference:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html
Question 44: Skipped
A telecommunications company that provides internet service for mobile device users maintains over 100 c4.large instances in the us-east-1 region. The EC2 instances run complex algorithms. The manager would like to track CPU utilization of the EC2 instances as frequently as every 10 seconds.
Which of the following represents the BEST solution for the given use-case?
Simply get it from the CloudWatch Metrics
Enable EC2 detailed monitoring
Create a high-resolution custom metric and push the data using a script triggered every 10 seconds
(Correct)
Open a support ticket with AWS
Explanation
Correct option:
Create a high-resolution custom metric and push the data using a script triggered every 10 seconds
Using high-resolution custom metrics, your applications can publish metrics to CloudWatch with 1-second resolution. You can watch the metrics scroll across your screen seconds after they are published, and you can set up high-resolution CloudWatch Alarms that evaluate as frequently as every 10 seconds. High-resolution alarms allow you to react and take action faster, and they support the same actions available with standard 1-minute alarms.
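A hedged boto3 sketch of a 10-second alarm on such a metric follows; the alarm name, namespace, and threshold are hypothetical, and the metric itself must be published with StorageResolution=1:

import boto3

cloudwatch = boto3.client("cloudwatch")

# High-resolution alarm evaluating 10-second periods on a custom metric
cloudwatch.put_metric_alarm(
    AlarmName="HighCPU-10s",   # hypothetical alarm name
    Namespace="Custom/EC2",    # hypothetical custom namespace
    MetricName="CPUUtilization",
    Statistic="Average",
    Period=10,                 # high-resolution alarms support 10-second periods
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)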
Incorrect options:
Enable EC2 detailed monitoring - As part of basic monitoring, Amazon EC2 sends metric data to CloudWatch in 5-minute periods. To send metric data for your instance to CloudWatch in 1-minute periods, you can enable detailed monitoring on the instance, however, this comes at an additional cost.
Simply get it from the CloudWatch Metrics - You can get CPU utilization data from the standard CloudWatch metrics, but basic monitoring data is available only at 5-minute intervals and detailed monitoring data at 1-minute intervals, neither of which meets the 10-second requirement.
Open a support ticket with AWS - This option has been added as a distractor.
Question 52: Skipped
As a Senior Developer, you manage 10 Amazon EC2 instances that make read-heavy database requests to Amazon RDS for PostgreSQL. You need to make this architecture resilient for disaster recovery.
Which of the following features will help you prepare for database disaster recovery? (Select two)
Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups in a single AWS Region
(Correct)
Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups across multiple Regions
Use RDS Provisioned IOPS (SSD) Storage in place of General Purpose (SSD) Storage
Use database cloning feature of the RDS DB cluster
Use cross-Region Read Replicas
(Correct)
Explanation
Correct option:
Use cross-Region Read Replicas
In addition to using Read Replicas to reduce the load on your source DB instance, you can also use Read Replicas to implement a DR solution for your production DB environment. If the source DB instance fails, you can promote your Read Replica to a standalone source server. Read Replicas can also be created in a different Region than the source database. Using a cross-Region Read Replica can help ensure that you get back up and running if you experience a regional availability issue.
Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups in a single AWS Region
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for MariaDB, MySQL, Oracle, and PostgreSQL DB instances use Amazon's failover technology.
The automated backup feature of Amazon RDS enables point-in-time recovery for your database instance. Amazon RDS will backup your database and transaction logs and store both for a user-specified retention period. If it’s a Multi-AZ configuration, backups occur on the standby to reduce I/O impact on the primary. Automated backups are limited to a single AWS Region while manual snapshots and Read Replicas are supported across multiple Regions.
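A hedged boto3 sketch of creating a cross-Region Read Replica follows; the client is created in the DR Region, and all identifiers are hypothetical:

import boto3

# Client in the destination (DR) Region
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",  # hypothetical replica name
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:orders-db",  # hypothetical source ARN
    SourceRegion="us-east-1",  # lets boto3 presign the cross-Region request
)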
Incorrect options:
Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that creates backups across multiple Regions - This is an incorrect statement. Automated backups are limited to a single AWS Region while manual snapshots and Read Replicas are supported across multiple Regions.
Use RDS Provisioned IOPS (SSD) Storage in place of General Purpose (SSD) Storage - Amazon RDS Provisioned IOPS Storage is an SSD-backed storage option designed to deliver fast, predictable, and consistent I/O performance. This storage type enhances the performance of the RDS database, but this isn't a disaster recovery option.
Use database cloning feature of the RDS DB cluster - This option has been added as a distractor. Database cloning is only available for Aurora and not for RDS.
References:
https://aws.amazon.com/rds/features/
https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/
Question 55: Skipped
You have an Auto Scaling group configured to a minimum capacity of 1 and a maximum capacity of 5, designed to launch EC2 instances across 3 Availability Zones. During a low utilization period, an entire Availability Zone went down and your application experienced downtime.
What can you do to ensure that your application remains highly available?
Configure ASG fast failover
Enable RDS Multi-AZ
Increase the minimum instance capacity of the Auto Scaling Group to 2
(Correct)
Change the scaling metric of auto-scaling policy to network bytes
Explanation
Correct option:
Increase the minimum instance capacity of the Auto Scaling Group to 2 -
You configure the size of your Auto Scaling group by setting the minimum, maximum, and desired capacity. The minimum and maximum capacity are required to create an Auto Scaling group, while the desired capacity is optional. If you do not define your desired capacity upfront, it defaults to your minimum capacity.
Since a minimum capacity of 1 was defined, an instance was launched in only one AZ. This AZ went down, taking the application with it. If the minimum capacity had been set to 2, the Auto Scaling group would have balanced the instances across Availability Zones and launched 2 instances, one in each AZ, making the architecture resilient to an AZ outage and hence highly available.
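A minimal boto3 sketch of the fix follows; the group name is hypothetical:

import boto3

autoscaling = boto3.client("autoscaling")

# Raise the minimum capacity so at least two instances run, spread across AZs
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    MinSize=2,
)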
Incorrect options:
Change the scaling metric of auto-scaling policy to network bytes - With target tracking scaling policies, you select a scaling metric and set a target value. You can use predefined or customized metrics. Setting the metric to network bytes will not help in this context since the instances have to be spread across different AZs for high availability. The optimal way of achieving that is by defining the minimum and maximum instance capacities, as discussed above.
Configure ASG fast failover - This is a made-up option, given as a distractor.
Enable RDS Multi-AZ - This configuration will make your database highly available. But for the current scenario, you will need to have more than 1 instance in separate availability zones to keep the application highly available.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-maintain-instance-levels.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
Question 62: Skipped
Your team lead has requested a code review of your code for the Lambda functions. Your code is written in Python and makes use of the Amazon Simple Storage Service (S3) to upload logs to an S3 bucket. After the review, your team lead has recommended reuse of the execution context to improve the Lambda performance.
Which of the following actions will help you implement the recommendation?
Enable X-Ray integration
Use environment variables to pass operational parameters
Move the Amazon S3 client initialization out of your function handler
(Correct)
Assign more RAM to the function
Explanation
Correct option:
Move the Amazon S3 client initialization out of your function handler - AWS best practices for Lambda suggest taking advantage of execution context reuse to improve the performance of your functions. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves execution time and cost. To avoid potential data leaks across invocations, don’t use the execution context to store user data, events, or other information with security implications.
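A minimal sketch of the recommended pattern in Python follows; the bucket name is hypothetical:

import boto3

# Initialized once per execution environment and reused across invocations
s3 = boto3.client("s3")

def lambda_handler(event, context):
    # The client above is not re-created on every invocation, saving init time
    s3.put_object(
        Bucket="my-log-bucket",  # hypothetical bucket name
        Key=f"logs/{context.aws_request_id}.log",
        Body=str(event),
    )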
Incorrect options:
Use environment variables to pass operational parameters - This is one of the suggested best practices for Lambda. By using environment variables to pass operational parameters you can avoid hard-coding useful information. But, this is not the right answer for the current use-case, since it talks about reusing context.
Assign more RAM to the function - Increasing RAM can speed up execution. But, in the current question, the reviewer has specifically recommended reusing the execution context. Hence, this is not the right answer.
Enable X-Ray integration - You can use AWS X-Ray to visualize the components of your application, identify performance bottlenecks, and troubleshoot requests that resulted in an error. Your Lambda functions send trace data to X-Ray, and X-Ray processes the data to generate a service map and searchable trace summaries. This is a useful tool for troubleshooting. But, for the current use-case, we already know the bottleneck that needs to be fixed and that is the context reuse.
References:
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html
Question 1: Skipped
Your company manages hundreds of EC2 instances running on Linux OS. The instances are configured in several Availability Zones in the eu-west-3 region. Your manager has requested that you collect system memory metrics on all EC2 instances using a script.
Which of the following solutions will help you collect this data?
Use a cron job on the instances that pushes the EC2 RAM statistics as a Custom metric into CloudWatch
(Correct)
Extract RAM statistics using the instance metadata
Extract RAM statistics from the standard CloudWatch metrics for EC2 instances
Extract RAM statistics using X-Ray
Explanation
Correct option:
"Use a cron job on the instances that pushes the EC2 RAM statistics as a Custom metric into CloudWatch"
The Amazon CloudWatch Monitoring Scripts for Amazon Elastic Compute Cloud (Amazon EC2) Linux-based instances demonstrate how to produce and consume Amazon CloudWatch custom metrics. These Perl scripts comprise a fully functional example that reports memory, swap, and disk space utilization metrics for a Linux instance. You can set a cron schedule for metrics reported to CloudWatch and report memory utilization to CloudWatch every x minutes.
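As an alternative to the Perl scripts, a hedged Python sketch of such a script follows; it parses /proc/meminfo on a Linux instance and pushes the value as a custom metric (the namespace is hypothetical), and a cron entry would run it on a schedule:

#!/usr/bin/env python3
import boto3

def ram_used_percent():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])  # values are reported in kB
    return 100 * (1 - info["MemAvailable"] / info["MemTotal"])

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",  # hypothetical custom namespace
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Value": ram_used_percent(),
        "Unit": "Percent",
    }],
)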
Incorrect options:
"Extract RAM statistics using the instance metadata" - Instance metadata is data about your instance that you can use to configure or manage the running instance. Instance metadata is divided into categories, for example, hostname, events, and security groups. The instance metadata can only provide the ID of the RAM disk specified at launch time. So this option is incorrect.
"Extract RAM statistics from the standard CloudWatch metrics for EC2 instances" - Amazon EC2 sends metrics to Amazon CloudWatch. By default, each data point covers the 5 minutes that follow the start time of activity for the instance. If you've enabled detailed monitoring, each data point covers the next minute of activity from the start time. The standard CloudWatch metrics don't have any metrics for memory utilization details.
"Extract RAM statistics using X-Ray" - AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.
X-Ray cannot be used to extract RAM statistics for EC2 instances.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
Question 10: Skipped
A company wants to add geospatial capabilities to the cache layer, along with query capabilities and an ability to horizontally scale. The company uses Amazon RDS as the database tier.
Which solution is optimal for this use-case?
Leverage the capabilities offered by ElastiCache for Redis with cluster mode enabled
(Correct)
Use CloudFront caching to cater to demands of increasing workloads
Migrate to Amazon DynamoDB to utilize the automatically integrated DynamoDB Accelerator (DAX) along with query capability features
Leverage the capabilities offered by ElastiCache for Redis with cluster mode disabled
Explanation
Correct option:
Leverage the capabilities offered by ElastiCache for Redis with cluster mode enabled
You can use Amazon ElastiCache to accelerate your high volume application workloads by caching your data in-memory providing sub-millisecond data retrieval performance. When used in conjunction with any database including Amazon RDS or Amazon DynamoDB, ElastiCache can alleviate the pressure associated with heavy request loads, increase overall application performance and reduce costs associated with scaling for throughput on other databases.
Amazon ElastiCache makes it easy to deploy and manage a highly available and scalable in-memory data store in the cloud. Among the open source in-memory engines available for use with ElastiCache is Redis, which added powerful geospatial capabilities in its newer versions.
You can leverage ElastiCache for Redis with cluster mode enabled to enhance reliability and availability with little change to your existing workload. Cluster Mode comes with the primary benefit of horizontal scaling up and down of your Redis cluster, with almost zero impact on the performance of the cluster.
Enabling Cluster Mode provides a number of additional benefits in scaling your cluster. In short, it allows you to scale in or out the number of shards (horizontal scaling) versus scaling up or down the node type (vertical scaling). This means that Cluster Mode can scale to very large amounts of storage (potentially 100s of terabytes) across up to 90 shards, whereas a single node can only store as much data in memory as the instance type has capacity for.
Cluster Mode also allows for more flexibility when designing new workloads with unknown storage requirements or heavy write activity. In a read-heavy workload, one can scale a single shard by adding read replicas, up to five, but a write-heavy workload can benefit from additional write endpoints when cluster mode is enabled.
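A hedged sketch of the geospatial capabilities follows, assuming the redis-py package and a hypothetical cluster-mode-enabled configuration endpoint:

import redis

# Cluster mode enabled clusters are addressed via their configuration endpoint
r = redis.RedisCluster(
    host="my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com",  # hypothetical endpoint
    port=6379,
)

# Store points of interest, then query everything within 200 km of a point
r.geoadd("stores", (13.361389, 38.115556, "palermo"))
r.geoadd("stores", (15.087269, 37.502669, "catania"))
print(r.georadius("stores", 15.0, 37.5, 200, unit="km"))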
Incorrect options:
Leverage the capabilities offered by ElastiCache for Redis with cluster mode disabled - For a production workload, you should consider using a configuration that includes replication to enhance the protection of your data. Also, only vertical scaling is possible when cluster mode is disabled. The use case mentions horizontal scaling as a requirement, hence disabling cluster mode is not an option.
Use CloudFront caching to cater to demands of increasing workloads - One of the purposes of using CloudFront is to reduce the number of requests that your origin server must respond to directly. With CloudFront caching, more objects are served from CloudFront edge locations, which are closer to your users. This reduces the load on your origin server and reduces latency. However, the use case mentions that in-memory caching is needed for enhancing the performance of the application. So, this option is incorrect.
Migrate to Amazon DynamoDB to utilize the automatically integrated DynamoDB Accelerator (DAX) along with query capability features - Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. Database migration is a more elaborate effort compared to implementing and optimizing the caching layer.
References:
https://aws.amazon.com/blogs/database/work-with-cluster-mode-on-amazon-elasticache-for-redis/
https://aws.amazon.com/blogs/database/amazon-elasticache-utilizing-redis-geospatial-capabilities/
Question 22: Skipped
An order management system uses a cron job to poll for any new orders. Every time a new order is created, the cron job sends this order data as a message to the message queues to facilitate downstream order processing in a reliable way. To reduce costs and improve performance, the company wants to move this functionality to AWS cloud.
Which of the following is the most optimal solution to meet this requirement?
Use Amazon Simple Notification Service (SNS) to push notifications to Kinesis Data Firehose delivery streams for processing the data for downstream applications
Configure different Amazon Simple Queue Service (SQS) queues to poll for new orders
Use Amazon Simple Notification Service (SNS) to push notifications when an order is created. Configure different Amazon Simple Queue Service (SQS) queues to receive these messages for downstream processing
(Correct)
Use Amazon Simple Notification Service (SNS) to push notifications and use AWS Lambda functions to process the information received from SNS
Explanation
Correct option:
Use Amazon Simple Notification Service (SNS) to push notifications when an order is created. Configure different Amazon Simple Queue Service (SQS) queues to receive these messages for downstream processing
Amazon SNS works closely with Amazon Simple Queue Service (Amazon SQS). These services provide different benefits for developers. Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates. Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components—without requiring each component to be concurrently available.
Using Amazon SNS and Amazon SQS together, messages can be delivered to applications that require immediate notification of an event, and also stored in an Amazon SQS queue for other applications to process at a later time.
When you subscribe an Amazon SQS queue to an Amazon SNS topic, you can publish a message to the topic and Amazon SNS sends an Amazon SQS message to the subscribed queue. The Amazon SQS message contains the subject and message that were published to the topic along with metadata about the message in a JSON document.
SNS-SQS fanout is the right solution for this use case.
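A hedged boto3 sketch of the fanout wiring follows; the topic and queue names are hypothetical, and note that in practice the queue's access policy must also allow sns.amazonaws.com to send messages to it:

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="new-orders")["TopicArn"]             # hypothetical topic
queue_url = sqs.create_queue(QueueName="order-processing")["QueueUrl"]  # hypothetical queue
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue to the topic to complete the fanout
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publishing once delivers the message to every subscribed queue
sns.publish(TopicArn=topic_arn, Message='{"orderId": "12345"}')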
Incorrect options:
Configure different Amazon Simple Queue Service (SQS) queues to poll for new orders - Amazon SQS cannot poll the order system for new orders; messages need to be pushed to the queue by a producer and are then handled by the queue consumers.
Use Amazon Simple Notification Service (SNS) to push notifications and use AWS Lambda functions to process the information received from SNS - Amazon SNS and AWS Lambda are integrated so you can invoke Lambda functions with Amazon SNS notifications. When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda function is invoked with the payload of the published message. For the given scenario, we need a service that can store the message data pushed by SNS for further processing. AWS Lambda does not have the capacity to store the message data; if a Lambda function is unable to process a specific message, it will be left unprocessed. Hence this option is not correct.
Use Amazon Simple Notification Service (SNS) to push notifications to Kinesis Data Firehose delivery streams for processing the data for downstream applications - You can subscribe Amazon Kinesis Data Firehose delivery streams to SNS topics, which allows you to send notifications to additional storage and analytics endpoints. However, Kinesis is built for real-time processing of big data. Whereas, SQS is meant for decoupling dependent systems with easy methods to transmit data/messages. SQS also is a cheaper option when compared to Firehose. Therefore this option is not the right fit for the given use case.
References:
https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
https://docs.aws.amazon.com/sns/latest/dg/sns-lambda-as-subscriber.html
https://docs.aws.amazon.com/sns/latest/dg/sns-firehose-as-subscriber.html
Question 24: Skipped
You are working on a project that has over 100 dependencies. Every time AWS CodeBuild runs a build step, it has to resolve Java dependencies from external Ivy repositories, which takes a long time. Your manager wants to speed this process up in AWS CodeBuild.
Which of the following will help you do this with minimal effort?
Reduce the number of dependencies
Cache dependencies on S3
(Correct)
Ship all the dependencies as part of the source code
Use Instance Store type of EC2 instances to facilitate internal dependency cache
Explanation
Correct option:
Cache dependencies on S3
AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your build servers.
Downloading dependencies is a critical phase in the build process. These dependent files can range in size from a few KBs to multiple MBs. Because most of the dependent files do not change frequently between builds, you can noticeably reduce your build time by caching dependencies in S3.
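As a sketch, S3 caching can be enabled on an existing build project with the AWS SDK; the project and bucket names below are hypothetical:

import boto3

codebuild = boto3.client('codebuild')

codebuild.update_project(
    name='java-ivy-project',        # hypothetical project name
    cache={
        'type': 'S3',
        # S3 bucket (and optional prefix) where CodeBuild stores the cache
        'location': 'my-build-cache-bucket/ivy',
    },
)

The project's buildspec must also declare which directories to cache under cache/paths (for Ivy, typically the '/root/.ivy2/cache/**/*' directory) so they are saved to and restored from S3 between builds.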
Incorrect options:
Reduce the number of dependencies - This might help in theory, but you often have no control over it since your application genuinely needs those dependencies, and it is certainly not a minimal-effort change. So this option is ruled out.
Ship all the dependencies as part of the source code - This is not a good practice, as bundling dependencies bloats the source artifact and increases your build time. If your dependencies are not changing, it's best to cache them.
Use Instance Store type of EC2 instances to facilitate internal dependency cache - An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
Instance store cannot be used to facilitate an internal dependency cache for CodeBuild, since builds run in fresh, fully managed build environments rather than on EC2 instances you control.
Reference:
https://aws.amazon.com/blogs/devops/how-to-enable-caching-for-aws-codebuild/
Question 27: Skipped
An organization uses Alexa as its intelligent assistant to improve productivity throughout the company. A group of developers maintains custom Alexa Skills written in Node.js to control conference-room equipment settings and start meetings using voice activation. The manager has asked the developers to monitor all function code for error rates, with the possibility of creating alarms on top of them.
Which of the following options should be chosen? (select two)
SSM
CloudTrail
X-Ray
CloudWatch Metrics
(Correct)
CloudWatch Alarms
(Correct)
Explanation
Correct options:
CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, and visualizes it using automated dashboards so you can get a unified view of your AWS resources, applications, and services that run in AWS and on-premises. You can correlate your metrics and logs to better understand the health and performance of your resources. You can also create alarms based on metric value thresholds you specify, or that can watch for anomalous metric behavior based on machine learning algorithms.
CloudWatch Metrics
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. Metric data is kept for 15 months, enabling you to view both up-to-the-minute data and historical data.
CloudWatch retains metric data as follows:
Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics.
Data points with a period of 60 seconds (1 minute) are available for 15 days.
Data points with a period of 300 seconds (5 minutes) are available for 63 days.
Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).
CloudWatch Alarms
You can use an alarm to automatically initiate actions on your behalf. An alarm watches a single metric over a specified time period, and performs one or more specified actions, based on the value of the metric relative to a threshold over time. The action can be a notification sent to an Amazon SNS topic, an Auto Scaling action, or an EC2 action. You can also add alarms to dashboards.
CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. Alarms work together with CloudWatch metrics; a short example follows the list of alarm states below.
A metric alarm has the following possible states:
OK – The metric or expression is within the defined threshold.
ALARM – The metric or expression is outside of the defined threshold.
INSUFFICIENT_DATA – The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.
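As an illustrative sketch, an alarm on a Lambda function's error count could be created with boto3; the function name, threshold, and SNS topic ARN below are hypothetical:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm on the error count of a (hypothetical) Alexa skill function
cloudwatch.put_metric_alarm(
    AlarmName='alexa-skill-error-rate',
    Namespace='AWS/Lambda',
    MetricName='Errors',
    Dimensions=[{'Name': 'FunctionName', 'Value': 'conference-room-skill'}],
    Statistic='Sum',
    Period=300,                      # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator='GreaterThanThreshold',
    # Hypothetical SNS topic that notifies the developers
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:dev-alerts'],
)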
Incorrect options:
X-Ray - AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.
X-Ray cannot be used to capture metrics and set up alarms as per the given use-case, so this option is incorrect.
CloudTrail - CloudWatch is a monitoring service, whereas CloudTrail is an audit service that records which API calls were made on your services and by whom. It does not capture function error-rate metrics or support alarms.
Systems Manager - Using AWS Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. Systems Manager cannot be used to capture metrics and set up alarms as per the given use-case, so this option is incorrect.
References:
https://aws.amazon.com/cloudwatch/
https://aws.amazon.com/cloudtrail/
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html
Question 30: Skipped
A firm uses AWS DynamoDB to store information about people's favorite sports teams and makes the information searchable from its home page. Each night at 2:00 AM, all 10 million records in the table must be deleted and then re-loaded.
Which option is an efficient way to delete the data with minimal cost?
Call PurgeTable
Scan and call BatchDeleteItem
Delete then re-create the table
(Correct)
Scan and call DeleteItem
Explanation
Correct option:
Delete then re-create the table
The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the specified table is in the DELETING state until DynamoDB completes the deletion. Deleting and re-creating the table is far cheaper than deleting items one by one, since per-item deletes would consume write capacity for all 10 million records.
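A minimal boto3 sketch of the nightly delete-and-recreate flow; the table name and key schema are hypothetical:

import boto3

dynamodb = boto3.client('dynamodb')
TABLE = 'favorite-teams'  # hypothetical table name

dynamodb.delete_table(TableName=TABLE)
# The table stays in the DELETING state until the deletion completes
dynamodb.get_waiter('table_not_exists').wait(TableName=TABLE)

dynamodb.create_table(
    TableName=TABLE,
    AttributeDefinitions=[{'AttributeName': 'userId', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'userId', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',
)
dynamodb.get_waiter('table_exists').wait(TableName=TABLE)
# ...then bulk-load the 10 million records, e.g. with batch_write_item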
Incorrect options:
Scan and call DeleteItem - A Scan over 10 million items is very slow, and deleting items one at a time consumes write capacity for every record, so this is not the best-fit option for the given use case.
Scan and call BatchDeleteItem - This has the same drawbacks: the Scan is still slow and every deleted item still consumes write capacity, so it is not the best-fit option either (the DynamoDB API actually exposes batched deletes via BatchWriteItem).
Call PurgeTable - This is a made-up option and has been added as a distractor.
Reference:
https://docs.aws.amazon.com/cli/latest/reference/dynamodb/delete-table.html
Question 35: Skipped
A company has several Linux-based EC2 instances that generate various log files which need to be analyzed for security and compliance purposes. The company wants to use Kinesis Data Streams (KDS) to analyze this log data.
Which of the following is the most optimal way of sending log data from the EC2 instances to KDS?
Install and configure Kinesis Agent on each of the instances
(Correct)
Run cron job on each of the instances to collect log data and send it to Kinesis Data Streams
Install AWS SDK on each of the instances and configure it to send the necessary files to Kinesis Data Streams
Use Kinesis Producer Library (KPL) to collect and ingest data from each EC2 instance
Explanation
Correct option:
Install and configure Kinesis Agent on each of the instances
Kinesis Agent is a stand-alone Java software application that offers an easy way to collect and send data to Kinesis Data Streams. The agent continuously monitors a set of files and sends new data to your stream. The agent handles file rotation, checkpointing, and retry upon failures. It delivers all of your data in a reliable, timely, and simple manner. It also emits Amazon CloudWatch metrics to help you better monitor and troubleshoot the streaming process.
You can install the agent on Linux-based server environments such as web servers, log servers, and database servers. After installing the agent, configure it by specifying the files to monitor and the stream for the data. After the agent is configured, it durably collects data from the files and reliably sends it to the stream.
The agent can also pre-process the records parsed from monitored files before sending them to your stream. You can enable this feature by adding the dataProcessingOptions configuration setting to your file flow. One or more processing options can be added and they will be performed in the specified order.
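For illustration, a sketch of the agent configuration at /etc/aws-kinesis/agent.json; the file pattern, stream name, and the optional pre-processing step are hypothetical:

{
  "flows": [
    {
      "filePattern": "/var/log/app/*.log",
      "kinesisStream": "security-log-stream",
      "dataProcessingOptions": [
        {
          "optionName": "LOGTOJSON",
          "logFormat": "COMMONAPACHELOG"
        }
      ]
    }
  ]
}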
Incorrect options:
Run cron job on each of the instances to collect log data and send it to Kinesis Data Streams - This solution is possible, though not an optimal one. It requires writing custom code to track file/log changes, retry failures, and so on. Kinesis Agent is built to handle all these requirements and integrates with Data Streams.
Install AWS SDK on each of the instances and configure it to send the necessary files to Kinesis Data Streams - The Kinesis Data Streams APIs available in the AWS SDKs help you manage many aspects of Kinesis Data Streams, including creating streams, resharding, and putting and getting records. However, you would need to write custom code to detect new data in the log files and send it over to your stream. Kinesis Agent does this out of the box, as it is designed to continuously monitor a set of files and send new data to your stream.
Use Kinesis Producer Library (KPL) to collect and ingest data from each EC2 instance - The KPL is an easy-to-use, highly configurable library that helps you write to a Kinesis data stream. It acts as an intermediary between your producer application code and the Kinesis Data Streams API actions. This is not optimal compared to Kinesis Agent which is designed to continuously monitor a set of files and send new data to your stream.
Reference:
https://docs.aws.amazon.com/streams/latest/dev/writing-with-agents.html
Question 36: Skipped
A user has an IAM policy as well as an Amazon SQS policy that apply to his account. The IAM policy grants his account permission for the ReceiveMessage action on example_queue, whereas the Amazon SQS policy gives his account permission for the SendMessage action on the same queue.
Considering the permissions above, which of the following options are correct? (Select two)
The user can send a ReceiveMessage request to example_queue; the IAM policy allows this action
(Correct)
If the user sends a SendMessage request to example_queue, the IAM policy will deny this action
If you add a policy that denies the user access to all actions for the queue, the policy will override the other two policies and the user will not have access to example_queue
(Correct)
Either IAM policies or Amazon SQS policies should be used to grant permissions; both cannot be used together
Adding only an IAM policy to deny the user all actions on the queue is not enough. The SQS policy should also explicitly deny all actions
Explanation
Correct options:
The user can send a ReceiveMessage request to example_queue; the IAM policy allows this action
The user has both an IAM policy and an Amazon SQS policy that apply to his account. The IAM policy grants his account permission for the ReceiveMessage action on example_queue, whereas the Amazon SQS policy gives his account permission for the SendMessage action on the same queue. Since neither policy denies ReceiveMessage, the IAM policy's allow is sufficient for the request to succeed.
If you add a policy that denies the user access to all actions for the queue, the policy will override the other two policies and the user will not have access to example_queue
To remove the user's full access to the queue, the easiest thing to do is to add a policy that denies him access to all actions for the queue. This policy overrides the other two because an explicit deny always overrides an allow.
You can also add an additional statement to the Amazon SQS policy that denies the user any type of access to the queue. It has the same effect as adding an IAM policy that denies the user access to the queue.
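For example, a deny statement along these lines (the queue ARN is hypothetical) would cut off all of the user's access to the queue:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "sqs:*",
    "Resource": "arn:aws:sqs:us-east-2:123456789012:example_queue"
  }]
}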
Incorrect options:
If the user sends a SendMessage request to example_queue, the IAM policy will deny this action - If the user sends a SendMessage request to example_queue, the Amazon SQS policy allows the action. The IAM policy has no explicit deny for this action, so it does not block it.
Either IAM policies or Amazon SQS policies should be used to grant permissions; both cannot be used together - There are two ways to give your users permissions to your Amazon SQS resources: using the Amazon SQS policy system and using the IAM policy system. You can use one or the other, or both. For the most part, you can achieve the same result with either one.
Adding only an IAM policy to deny the user all actions on the queue is not enough. The SQS policy should also explicitly deny all actions - The user can be denied access using either one of the policies; an explicit deny in any policy overrides all allows granted by the other.
Reference:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-using-identity-based-policies.html
Question 39: Skipped
After reviewing your monthly AWS bill you notice that the cost of using Amazon SQS has gone up substantially after creating new queues; however, you know that your queue clients do not have a lot of traffic and are receiving empty responses.
Which of the following actions should you take?
Use a FIFO queue
Use LongPolling
(Correct)
Decrease DelaySeconds
Increase the VisibilityTimeout
Explanation
Correct option:
Use LongPolling
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
Amazon SQS provides short polling and long polling to receive messages from a queue. By default, queues use short polling. With short polling, Amazon SQS sends the response right away, even if the query found no messages. With long polling, Amazon SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the request. Amazon SQS sends an empty response only if the polling wait time expires.
Long polling makes it inexpensive to retrieve messages from your Amazon SQS queue as soon as the messages are available. Long polling helps reduce the cost of using Amazon SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request) and eliminating false empty responses (when messages are available but aren't included in a response). When the wait time for the ReceiveMessage API action is greater than 0, long polling is in effect. The maximum long polling wait time is 20 seconds.
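As a sketch, long polling can be enabled per request or for the whole queue via boto3; the queue URL below is hypothetical:

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

# Per-request long polling: wait up to 20 seconds for a message to arrive
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # > 0 enables long polling; 20 s is the maximum
)
messages = response.get('Messages', [])  # empty only if the wait expired

# Or enable long polling for every consumer of the queue
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={'ReceiveMessageWaitTimeSeconds': '20'},
)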
Exam Alert:
Please review the differences between short polling and long polling: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html
Incorrect options:
Increase the VisibilityTimeout - When a consumer receives a message, the message remains in the queue until the consumer explicitly deletes it. To prevent other consumers from processing the message again in the meantime, Amazon SQS sets a visibility timeout. Increasing the visibility timeout will not help with cost reduction.
Use a FIFO queue - FIFO queues are designed to enhance messaging between applications when the order of operations and events has to be enforced. FIFO queues will not help with cost reduction. In fact, they are costlier than standard queues.
Decrease DelaySeconds - This is similar to the visibility timeout. The difference is that a message is hidden for DelaySeconds when it is first added to a queue, whereas with visibility timeouts a message is hidden only after it is consumed from the queue. Decreasing DelaySeconds will not help with cost reduction.
Reference:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html
Question 42: Skipped
An IT company uses a blue/green deployment policy to provision new Amazon EC2 instances in an Auto Scaling group behind a new Application Load Balancer for each new application version. The current setup requires users to log in again after every new deployment.
As a Developer Associate, what advice would you give to the company for resolving this issue?
Enable sticky sessions in the Application Load Balancer
Use ElastiCache to maintain user sessions
(Correct)
Use multicast to replicate session information
Use rolling updates instead of a blue/green deployment
Explanation
Correct option:
Use ElastiCache to maintain user sessions
Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput, low-latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing.
To address scalability and to provide a shared data store for sessions that can be accessed from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached via ElastiCache, as sketched below.
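A minimal sketch of session storage backed by ElastiCache for Redis, using the redis-py client; the endpoint, key naming, and TTL are hypothetical choices:

import json
import redis  # redis-py client

# Hypothetical ElastiCache for Redis endpoint
r = redis.Redis(host='my-sessions.abc123.0001.use1.cache.amazonaws.com',
                port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # Sessions expire automatically after 30 minutes of inactivity
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    # Any web server behind the load balancer can read the same session
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None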
Incorrect options:
Use rolling updates instead of a blue/green deployment - With rolling deployments, Elastic Beanstalk splits the environment's Amazon EC2 instances into batches and deploys the new version of the application to one batch at a time. It leaves the rest of the instances in the environment running the old version of the application. When processing a batch, Elastic Beanstalk detaches all instances in the batch from the load balancer, deploys the new application version, and then reattaches the instances.
This means that some of the users can experience session disruptions when the instances maintaining the sessions were detached as part of the given batch. So this option is incorrect.
Enable sticky sessions in the Application Load Balancer - As the Application Load Balancer itself is replaced on each new deployment, so maintaining sticky sessions via the Application Load Balancer will not work.
Use multicast to replicate session information - This option has been added as a distractor.
Reference:
https://aws.amazon.com/caching/session-management/
Question 48: Skipped
A voting system hosted on-premises was recently migrated to AWS to lower costs, gain scalability, and better serve thousands of concurrent users. When one of the AWS resources changes state, it generates an event that needs to trigger AWS Lambda. However, the resource whose state changes has no direct integration with AWS Lambda.
Which of the following methods can be used to trigger AWS Lambda?
AWS Lambda Custom Sources
Cron jobs to trigger AWS Lambda to check the state of your service
Open a support ticket with AWS
CloudWatch Events Rules with AWS Lambda
(Correct)
Explanation
Correct option:
CloudWatch Events Rules with AWS Lambda
You can create a Lambda function and direct CloudWatch Events to execute it on a regular schedule. You can specify a fixed rate (for example, execute a Lambda function every hour or 15 minutes), or you can specify a Cron expression. You can also create a rule with an event pattern that matches state-change events emitted by an AWS resource and routes them to your Lambda function, which covers the given use case.
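For illustration, a boto3 sketch that wires a state-change event pattern to a Lambda function; the rule name, function name, and account details are hypothetical, and EC2 instance state changes are used as an example resource:

import json
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Trigger a Lambda function whenever an EC2 instance changes state
rule_arn = events.put_rule(
    Name='ec2-state-change',
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
    }),
)['RuleArn']

# Allow CloudWatch Events to invoke the (hypothetical) function
lambda_client.add_permission(
    FunctionName='handle-state-change',
    StatementId='cloudwatch-events-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)

events.put_targets(
    Rule='ec2-state-change',
    Targets=[{
        'Id': 'lambda-target',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:handle-state-change',
    }],
)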
Incorrect options:
AWS Lambda Custom Sources - This is a made-up option and has been added as a distractor.
Open a support ticket with AWS - The AWS support team will not add a custom configuration for you; at best, they would walk you through creating an event rule with Lambda yourself.
Cron jobs to trigger AWS Lambda to check the state of your service - You would need an additional server to run your cron jobs; instead, you should consider using a CloudWatch Events rule.
References:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
Question 62: Skipped
A developer has deployed a Lambda function that inserts data into an RDS MySQL database, using the following Python code:
import MySQLdb  # import name of the "mysqlclient" package

def handler(event, context):
    # A new database connection is opened (and closed) on every invocation
    mysql = MySQLdb.connect()  # connection parameters omitted for brevity
    data = event['data']
    cursor = mysql.cursor()
    # Parameterized query instead of string interpolation
    cursor.execute("INSERT INTO foo (bar) VALUES (%s);", (data,))
    mysql.commit()
    mysql.close()
    return
On the first execution, the Lambda function takes 2 seconds to execute. On the second execution and all the subsequent ones, the Lambda function takes 1.9 seconds to execute.
What can be done to improve the execution time of the Lambda function?
Upgrade the MySQL instance type
Increase the Lambda function RAM
Change the runtime to Node.js
Move the database connection out of the handler
(Correct)
Explanation
Correct option:
Move the database connection out of the handler
Here, at every Lambda function execution, the database connection is created and then closed. These connection steps are expensive in terms of time and should therefore be moved out of the handler function, so that the connection lives in the function's execution context and is re-used across invocations. This is what the function should look like in the end:
import MySQLdb  # import name of the "mysqlclient" package

# The connection is created once, during the cold start, and re-used
# by all subsequent invocations of the function
mysql = MySQLdb.connect()

def handler(event, context):
    data = event['data']
    cursor = mysql.cursor()
    cursor.execute("INSERT INTO foo (bar) VALUES (%s);", (data,))
    mysql.commit()
    return
Incorrect options:
Upgrade the MySQL instance type - The bottleneck here is the MySQL connection object, not the MySQL instance itself.
Change the runtime to Node.js - Re-writing the function in another runtime won't improve the performance.
Increase the Lambda function RAM - While this may help speed up the Lambda function, since increasing the RAM also increases the CPU allocated to your function, it only makes sense if RAM or CPU is a critical factor in the Lambda function's performance. Here, the connection handling is at fault.
Reference:
https://docs.aws.amazon.com/lambda/latest/dg/running-lambda-code.html
Question 63: Skipped
A website serves static content from an Amazon Simple Storage Service (Amazon S3) bucket and dynamic content from an application load balancer. The user base is spread across the world and latency should be minimized for a better user experience.
Which technology/service can help access the static and dynamic content while keeping the data latency low?
Use CloudFront's Lambda@Edge feature to serve data from S3 buckets and the load balancer programmatically on the fly
Use Global Accelerator to transparently switch between S3 bucket and load balancer for different data needs
Configure CloudFront with multiple origins to serve both static and dynamic content at low latency to global users
(Correct)
Use CloudFront's Origin Groups to group both static and dynamic requests into one request for further processing
Explanation
Correct option:
Configure CloudFront with multiple origins to serve both static and dynamic content at low latency to global users
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
You can configure a single CloudFront web distribution to serve different types of requests from multiple origins.
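As a trimmed sketch, the routing-relevant parts of a DistributionConfig might look like this; the origin IDs, domain names, and path pattern are hypothetical, and a real boto3 create_distribution call requires many more fields:

# Only the routing-relevant parts of a DistributionConfig are shown
origins = {'Quantity': 2, 'Items': [
    {'Id': 's3-static',                      # static content origin
     'DomainName': 'my-site-assets.s3.amazonaws.com',
     'S3OriginConfig': {'OriginAccessIdentity': ''}},
    {'Id': 'alb-dynamic',                    # dynamic content origin
     'DomainName': 'my-alb-1234567890.us-east-1.elb.amazonaws.com',
     'CustomOriginConfig': {'HTTPPort': 80, 'HTTPSPort': 443,
                            'OriginProtocolPolicy': 'https-only'}},
]}

# Requests matching /static/* go to the S3 origin; everything else
# falls through to the default behavior, which targets the ALB
cache_behaviors = {'Quantity': 1, 'Items': [
    {'PathPattern': '/static/*', 'TargetOriginId': 's3-static'},  # trimmed
]}
default_cache_behavior = {'TargetOriginId': 'alb-dynamic'}        # trimmed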
Incorrect options:
Use CloudFront's Lambda@Edge feature to serve data from S3 buckets and the load balancer programmatically on the fly - AWS Lambda@Edge is a general-purpose serverless compute feature that supports a wide range of computing needs and customizations. Lambda@Edge is best suited for computationally intensive operations. This is not relevant for the given use case.
Use Global Accelerator to transparently switch between S3 bucket and load balancer for different data needs - AWS Global Accelerator is a networking service that improves the performance of your users’ traffic by up to 60% using Amazon Web Services’ global network infrastructure.
With Global Accelerator, you are provided two global static public IPs that act as a fixed entry point to your application, improving availability. On the back end, add or remove your AWS application endpoints, such as Application Load Balancers, Network Load Balancers, EC2 Instances, and Elastic IPs without making user-facing changes. Global Accelerator automatically re-routes your traffic to your nearest healthy available endpoint to mitigate endpoint failure.
CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.
Global Accelerator is not relevant for the given use-case.
Use CloudFront's Origin Groups to group both static and dynamic requests into one request for further processing - You can set up CloudFront with origin failover for scenarios that require high availability. To get started, you create an Origin Group with two origins: a primary and a secondary. If the primary origin is unavailable or returns specific HTTP response status codes that indicate a failure, CloudFront automatically switches to the secondary origin. Origin Groups are for origin failure scenarios and not for request routing.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-overview.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-distribution-serve-content/
Question 65: Skipped
A development team uses the AWS SDK for Java to maintain an application that stores data in AWS DynamoDB. The application makes use of Scan operations to return several items from a 25 GB table. There is no possibility of creating indexes to retrieve these items predictably. Developers are trying to get these specific rows from DynamoDB as fast as possible.
Which of the following options can be used to improve the performance of the Scan operation?
Use a FilterExpression
Use parallel scans
(Correct)
Use a Query
Use a ProjectionExpression
Explanation
Correct option:
Use parallel scans
By default, the Scan operation processes data sequentially. Amazon DynamoDB returns data to the application in 1 MB increments, and an application performs additional Scan operations to retrieve the next 1 MB of data. The larger the table or index being scanned, the more time the Scan takes to complete. To address these issues, the Scan operation can logically divide a table or secondary index into multiple segments, with multiple application workers scanning the segments in parallel.
To make use of the parallel Scan feature, you will need to run multiple worker threads or processes in parallel. Each worker scans a separate segment of the table concurrently with the other workers, as sketched below.
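A minimal boto3 sketch of a parallel Scan with one worker thread per segment; the table name and segment count are hypothetical:

import boto3
from concurrent.futures import ThreadPoolExecutor

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('big-table')  # hypothetical table name

TOTAL_SEGMENTS = 4  # one worker per segment

def scan_segment(segment):
    # Each worker scans only its own logical segment of the table
    items = []
    kwargs = {'Segment': segment, 'TotalSegments': TOTAL_SEGMENTS}
    while True:
        page = table.scan(**kwargs)
        items.extend(page['Items'])
        if 'LastEvaluatedKey' not in page:
            return items
        # Continue paginating within this segment
        kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    results = pool.map(scan_segment, range(TOTAL_SEGMENTS))

all_items = [item for segment_items in results for item in segment_items]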
Incorrect options:
Use a ProjectionExpression - A projection expression is a string that identifies the attributes you want. To retrieve a single attribute, specify its name; for multiple attributes, the names must be comma-separated. It only limits which attributes are returned and does not speed up the Scan itself.
Use a FilterExpression - If you need to further refine the Scan results, you can optionally provide a filter expression. A filter expression determines which items within the Scan results should be returned to you. All of the other results are discarded.
A filter expression is applied after a Scan finishes, but before the results are returned. Therefore, a Scan consumes the same amount of read capacity, regardless of whether a filter expression is present.
Use a Query - This could work if we were able to create an index, but the question says: "There is no possibility of creating indexes to retrieve these items predictably". As such, we cannot use a Query.
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.ParallelScan