Deployment with AWS Services
Question 3: Incorrect
A business has built their test environment on Amazon EC2, configured with General Purpose SSD (gp2) volumes.
At which gp2 volume size will their test environment hit the max IOPS?
2.7 TiB
10.6 TiB
5.3 TiB
(Correct)
16 TiB
(Incorrect)
Explanation
Correct option:
The performance of gp2 volumes is tied to volume size, which determines the baseline performance level of the volume and how quickly it accumulates I/O credits; larger volumes have higher baseline performance levels and accumulate I/O credits faster.
5.3 TiB - General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size.
Maximum IOPS vs Volume Size for General Purpose SSD (gp2) volumes
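As a rough illustration of how the 3 IOPS-per-GiB baseline reaches the 16,000 IOPS cap, here is a small, hedged Python sketch; the constants simply mirror the numbers quoted above:

```python
def gp2_baseline_iops(volume_size_gib: float) -> int:
    """Approximate gp2 baseline IOPS: 3 IOPS per GiB,
    floored at 100 IOPS and capped at 16,000 IOPS."""
    return int(min(max(3 * volume_size_gib, 100), 16_000))

# The 16,000 IOPS ceiling is reached at 16,000 / 3 ≈ 5,334 GiB (about 5.3 TiB).
for size_tib in (2.7, 5.3, 10.6, 16):
    size_gib = size_tib * 1024
    print(f"{size_tib} TiB -> {gp2_baseline_iops(size_gib)} baseline IOPS")
```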
Incorrect options:
10.6 TiB - As explained above, this is an incorrect option.
16 TiB - As explained above, this is an incorrect option.
2.7 TiB - As explained above, this is an incorrect option.
Question 6: Skipped
The development team at a HealthCare company has deployed EC2 instances in AWS Account A. These instances need to access patient data with Personally Identifiable Information (PII) on multiple S3 buckets in another AWS Account B.
As a Developer Associate, which of the following solutions would you recommend for the given use-case?
Create an IAM role (instance profile) in Account A and set Account B as a trusted entity. Attach this role to the EC2 instances in Account A and add an inline policy to this role to access S3 data from Account B
Add a bucket policy to all the Amazon S3 buckets in Account B to allow access from EC2 instances in Account A
Create an IAM role with S3 access in Account B and set Account A as a trusted entity. Create another role (instance profile) in Account A and attach it to the EC2 instances in Account A and add an inline policy to this role to assume the role from Account B
(Correct)
Copy the underlying AMI for the EC2 instances from Account A into Account B. Launch EC2 instances in Account B using this AMI and then access the PII data on Amazon S3 in Account B
Explanation
Correct option:
Create an IAM role with S3 access in Account B and set Account A as a trusted entity. Create another role (instance profile) in Account A and attach it to the EC2 instances in Account A and add an inline policy to this role to assume the role from Account B
You can give EC2 instances in one account ("Account A") permissions to assume a role from another account ("Account B") to access resources such as S3 buckets. You need to create an IAM role in Account B, set Account A as a trusted entity, and then attach a policy to this IAM role that delegates access to Amazon S3.
Then you can create another role (instance profile) in Account A, attach it to the EC2 instances in Account A, and add an inline policy to this role to assume the role from Account B.
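A minimal sketch of the two policy documents involved, expressed as Python dicts; the account IDs, role name, and bucket name are placeholders rather than values from the question:

```python
import json

account_a_id = "111111111111"  # placeholder account ID

# In Account B: trust policy that lets Account A assume the role ...
trust_policy_account_b = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_a_id}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# ... plus a permissions policy on that role delegating S3 access.
s3_access_policy_account_b = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::patient-data-bucket",        # placeholder bucket
                     "arn:aws:s3:::patient-data-bucket/*"],
    }],
}

# In Account A: inline policy on the instance-profile role that allows
# assuming the role created in Account B.
assume_role_policy_account_a = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::222222222222:role/S3AccessRoleInAccountB",  # placeholder
    }],
}

print(json.dumps(s3_access_policy_account_b, indent=2))
```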
Incorrect options:
Create an IAM role (instance profile) in Account A and set Account B as a trusted entity. Attach this role to the EC2 instances in Account A and add an inline policy to this role to access S3 data from Account B - This option reverses the direction of the cross-account trust described above, hence this option is incorrect.
Copy the underlying AMI for the EC2 instances from Account A into Account B. Launch EC2 instances in Account B using this AMI and then access the PII data on Amazon S3 in Account B - Copying the AMI is a distractor as this does not solve the use-case outlined in the problem statement.
Add a bucket policy to all the Amazon S3 buckets in Account B to allow access from EC2 instances in Account A - Just adding a bucket policy in Account B is not enough, as you also need to create an IAM policy in Account A to access S3 objects in Account B.
Question 11: Skipped
While defining a business workflow as a state machine on AWS Step Functions, a developer has configured several states.
Which of the following would you identify as the state that represents a single unit of work performed by a state machine?
A Task state ("Type": "Task")
(Correct)
Explanation
Correct option:
A Task state ("Type": "Task") represents a single unit of work performed by a state machine.
All work in your state machine is done by tasks. A task performs work by using an activity or an AWS Lambda function, or by passing parameters to the API actions of other services.
AWS Step Functions can invoke Lambda functions directly from a task state. A Lambda function is a cloud-native task that runs on AWS Lambda. You can write Lambda functions in a variety of programming languages, using the AWS Management Console or by uploading code to Lambda.
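For illustration, a minimal state machine definition containing a single Task state that invokes a Lambda function; the function ARN is a placeholder:

```python
import json

# Amazon States Language definition with one Task state.
state_machine_definition = {
    "Comment": "A single unit of work performed by the state machine",
    "StartAt": "ProcessRecord",
    "States": {
        "ProcessRecord": {
            "Type": "Task",  # Task = single unit of work
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-record",
            "End": True,
        }
    },
}
print(json.dumps(state_machine_definition, indent=2))
```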
Incorrect options:
A Wait state ("Type": "Wait") delays the state machine from continuing for a specified time.
A Pass state ("Type": "Pass") passes its input to its output without performing work. The Resource field is a required parameter for a Task state; a state definition without it is of type Pass, not Task.
A Fail state ("Type": "Fail") stops the execution of the state machine and marks it as a failure unless it is caught by a Catch block. Because Fail states always exit the state machine, they have no Next field and don't require an End field.
Question 14: Skipped
A developer working with an EC2 Windows instance has installed Kinesis Agent for Windows to stream JSON-formatted log files to Amazon Simple Storage Service (S3) via Amazon Kinesis Data Firehose. The developer wants to understand the sink type capabilities of Kinesis Firehose.
Which of the following sink types is NOT supported by Kinesis Firehose?
Amazon Elasticsearch Service (Amazon ES) with optionally backing up data to Amazon S3
Amazon Redshift with Amazon S3
Amazon ElastiCache with Amazon S3 as backup
(Correct)
Amazon Simple Storage Service (Amazon S3) as a direct Firehose destination
Explanation
Correct option:
Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and Splunk. With Kinesis Data Firehose, you don't need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified.
Amazon ElastiCache with Amazon S3 as backup - Amazon ElastiCache is a fully managed in-memory data store, compatible with Redis or Memcached. ElastiCache is NOT a supported destination for Amazon Kinesis Data Firehose.
Incorrect options:
Amazon Elasticsearch Service (Amazon ES) with optionally backing up data to Amazon S3 - Amazon ES is a supported destination type for Kinesis Firehose. Streaming data is delivered to your Amazon ES cluster, and can optionally be backed up to your S3 bucket concurrently.
Amazon Simple Storage Service (Amazon S3) as a direct Firehose destination - For Amazon S3 destinations, streaming data is delivered to your S3 bucket. If data transformation is enabled, you can optionally back up source data to another Amazon S3 bucket.
Amazon Redshift with Amazon S3 - For Amazon Redshift destinations, streaming data is delivered to your S3 bucket first. Kinesis Data Firehose then issues an Amazon Redshift COPY command to load data from your S3 bucket to your Amazon Redshift cluster. If data transformation is enabled, you can optionally back up source data to another Amazon S3 bucket.
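For context, a hedged boto3 sketch of creating a delivery stream with a supported S3 destination; the stream name, role ARN, and bucket ARN are placeholders:

```python
import boto3

firehose = boto3.client("firehose")

# Direct-put delivery stream that sinks to Amazon S3 (a supported destination).
firehose.create_delivery_stream(
    DeliveryStreamName="log-delivery-stream",   # placeholder name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",  # placeholder
        "BucketARN": "arn:aws:s3:::my-log-bucket",                           # placeholder
    },
)
```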
Question 16: Skipped
A startup has been experimenting with DynamoDB in its new test environment. The development team has discovered that some of the write operations have been overwriting existing items that have the specified primary key. This has messed up their data, leading to data discrepancies.
Which DynamoDB write option should be selected to prevent this kind of overwriting?
Batch writes
Atomic Counters
Conditional writes
(Correct)
Use Scan operation
Explanation
Correct option:
Conditional writes - DynamoDB optionally supports conditional writes for write operations (PutItem, UpdateItem, DeleteItem). A conditional write succeeds only if the item attributes meet one or more expected conditions. Otherwise, it returns an error.
For example, you might want a PutItem operation to succeed only if there is not already an item with the same primary key. Or you could prevent an UpdateItem operation from modifying an item if one of its attributes has a certain value. Conditional writes are helpful in cases where multiple users attempt to modify the same item. This is the right choice for the current scenario.
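A hedged boto3 sketch of a conditional PutItem that only succeeds when no item with the same primary key exists; the table, key, and attribute names are placeholders:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

try:
    dynamodb.put_item(
        TableName="TestItems",                                     # placeholder table
        Item={"item_id": {"S": "item-123"}, "price": {"N": "10"}},
        # The write fails instead of overwriting an existing item.
        ConditionExpression="attribute_not_exists(item_id)",
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("Item already exists; write was rejected instead of overwriting it")
    else:
        raise
```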
Incorrect options:
Batch writes - Batch operations (read and write) help reduce the number of network round trips from your application to DynamoDB. In addition, DynamoDB performs the individual read or write operations in parallel. Applications benefit from this parallelism without having to manage concurrency or threading. However, batch writes do not prevent overwrites, so they are of no use in the current scenario.
Atomic Counters - An atomic counter is a numeric attribute that is incremented, unconditionally, without interfering with other write requests. You might use an atomic counter to track the number of visitors to a website. This functionality is not useful for the current scenario.
Use Scan operation - A Scan operation in Amazon DynamoDB reads every item in a table or a secondary index. By default, a Scan operation returns all of the data attributes for every item in the table or index. This is given as a distractor and not related to DynamoDB item updates.
Question 17: Skipped
The technology team at an investment bank uses DynamoDB to facilitate high-frequency trading where multiple trades can try and update an item at the same time.
Which of the following actions would make sure that only the last updated value of any item is used in the application?
Use ConsistentRead = true while doing PutItem operation for any item
Use ConsistentRead = false while doing PutItem operation for any item
Use ConsistentRead = true while doing UpdateItem operation for any item
Use ConsistentRead = true while doing GetItem operation for any item
(Correct)
Explanation
Correct option:
Use ConsistentRead = true while doing GetItem operation for any item
DynamoDB supports eventually consistent and strongly consistent reads.
Eventually Consistent Reads
When you read data from a DynamoDB table, the response might not reflect the results of a recently completed write operation. The response might include some stale data. If you repeat your read request after a short time, the response should return the latest data.
Strongly Consistent Reads
When you request a strongly consistent read, DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful.
DynamoDB uses eventually consistent reads by default. Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter. If you set this parameter to true, DynamoDB uses strongly consistent reads during the operation. As per the given use-case, to make sure that only the last updated value of any item is used in the application, you should use strongly consistent reads by setting ConsistentRead = true for GetItem operation.
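A minimal boto3 sketch of such a strongly consistent read; the table and key names are placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.get_item(
    TableName="Trades",                     # placeholder table name
    Key={"trade_id": {"S": "trade-001"}},
    ConsistentRead=True,  # return the most up-to-date value of the item
)
print(response.get("Item"))
```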
Incorrect options:
Use ConsistentRead = true while doing UpdateItem operation for any item
Use ConsistentRead = true while doing PutItem operation for any item
Use ConsistentRead = false while doing PutItem operation for any item
As mentioned in the explanation above, strongly consistent reads apply only while using the read operations (such as GetItem, Query, and Scan). So these three options are incorrect.
Question 18: Skipped
An Auto Scaling group has a maximum capacity of 3, a current capacity of 2, and a scaling policy that adds 3 instances.
When executing this scaling policy, what is the expected outcome?
Amazon EC2 Auto Scaling adds only 1 instance to the group
(Correct)
Amazon EC2 Auto Scaling adds 3 instances to the group
Amazon EC2 Auto Scaling does not add any instances to the group, but suggests changing the scaling policy to add one instance
Amazon EC2 Auto Scaling adds 3 instances to the group and scales down 2 of those instances eventually
Explanation
Correct option:
A scaling policy instructs Amazon EC2 Auto Scaling to track a specific CloudWatch metric, and it defines what action to take when the associated CloudWatch alarm is in ALARM.
When a scaling policy is executed, if the capacity calculation produces a number outside of the minimum and maximum size range of the group, Amazon EC2 Auto Scaling ensures that the new capacity never goes outside of the minimum and maximum size limits.
Amazon EC2 Auto Scaling adds only 1 instance to the group
For the given use-case, Amazon EC2 Auto Scaling adds only 1 instance to the group to prevent the group from exceeding its maximum size.
Incorrect options:
Amazon EC2 Auto Scaling adds 3 instances to the group - This is an incorrect statement. Auto Scaling ensures that the new capacity never goes outside of the minimum and maximum size limits.
Amazon EC2 Auto Scaling adds 3 instances to the group and scales down 2 of those instances eventually - This is an incorrect statement. Adding the instances initially and immediately downsizing them is impractical.
Amazon EC2 Auto Scaling does not add any instances to the group, but suggests changing the scaling policy to add one instance - This option has been added as a distractor.
Question 19: Skipped
The development team at an analytics company is using SQS queues for decoupling the various components of application architecture. As the consumers need additional time to process SQS messages, the development team wants to postpone the delivery of new messages to the queue for a few seconds.
As a Developer Associate, which of the following solutions would you recommend to the development team?
Use dead-letter queues to postpone the delivery of new messages to the queue for a few seconds
Use FIFO queues to postpone the delivery of new messages to the queue for a few seconds
Use delay queues to postpone the delivery of new messages to the queue for a few seconds
(Correct)
Use visibility timeout to postpone the delivery of new messages to the queue for a few seconds
Explanation
Correct option:
Use delay queues to postpone the delivery of new messages to the queue for a few seconds
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
Delay queues let you postpone the delivery of new messages to a queue for several seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes.
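A hedged boto3 sketch of creating a delay queue with a 10-second delivery delay; the queue name is a placeholder:

```python
import boto3

sqs = boto3.client("sqs")

# Every message sent to this queue stays invisible to consumers for 10 seconds.
queue = sqs.create_queue(
    QueueName="order-processing-delay-queue",   # placeholder name
    Attributes={"DelaySeconds": "10"},          # 0-900 seconds (15 minutes max)
)
print(queue["QueueUrl"])
```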
Incorrect options:
Use FIFO queues to postpone the delivery of new messages to the queue for a few seconds - SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent. You cannot use FIFO queues to postpone the delivery of new messages to the queue for a few seconds.
Use dead-letter queues to postpone the delivery of new messages to the queue for a few seconds - Dead-letter queues can be used by other queues (source queues) as a target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn't succeed. You cannot use dead-letter queues to postpone the delivery of new messages to the queue for a few seconds.
Use visibility timeout to postpone the delivery of new messages to the queue for a few seconds - Visibility timeout is a period during which Amazon SQS prevents other consumers from receiving and processing a given message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. You cannot use visibility timeout to postpone the delivery of new messages to the queue for a few seconds.
Question 20: Skipped
A company wants to automate its order fulfillment and inventory tracking workflow. Starting from order creation to updating inventory to shipment, the entire process has to be tracked, managed and updated automatically.
Which of the following would you recommend as the most optimal solution for this requirement?
Use Amazon SNS to develop event-driven applications that can share information
Use AWS Step Functions to coordinate and manage the components of order management and inventory tracking workflow
(Correct)
Configure Amazon EventBridge to track the flow of work from order management to inventory tracking systems
Use Amazon Simple Queue Service (Amazon SQS) queue to pass information from order management to inventory tracking workflow
Explanation
Correct option:
Use AWS Step Functions to coordinate and manage the components of order management and inventory tracking workflow
AWS Step Functions is a serverless function orchestrator that makes it easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications. Through its visual interface, you can create and run a series of checkpointed and event-driven workflows that maintain the application state. The output of one step acts as an input to the next. Each step in your application executes in order, as defined by your business logic.
AWS Step Functions enables you to implement a business process as a series of steps that make up a workflow. The individual steps in the workflow can invoke a Lambda function or a container that has some business logic, update a database such as DynamoDB or publish a message to a queue once that step or the entire workflow completes execution.
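As a hedged illustration of such a workflow, here is a simplified state machine definition that chains order creation, inventory update, and shipment; all function names and ARNs are placeholders:

```python
import json

# Each step's output feeds the next step's input.
order_workflow = {
    "StartAt": "CreateOrder",
    "States": {
        "CreateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:create-order",
            "Next": "UpdateInventory",
        },
        "UpdateInventory": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:update-inventory",
            "Next": "ShipOrder",
        },
        "ShipOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ship-order",
            "End": True,
        },
    },
}
print(json.dumps(order_workflow, indent=2))
```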
Benefits of Step Functions:
Build and update apps quickly: AWS Step Functions lets you build visual workflows that enable the fast translation of business requirements into technical requirements. You can build applications in a matter of minutes, and when needs change, you can swap or reorganize components without customizing any code.
Improve resiliency: AWS Step Functions manages state, checkpoints and restarts for you to make sure that your application executes in order and as expected. Built-in try/catch, retry and rollback capabilities deal with errors and exceptions automatically.
Write less code: AWS Step Functions manages the logic of your application for you and implements basic primitives such as branching, parallel execution, and timeouts. This removes extra code that may be repeated in your microservices and functions.
Incorrect options:
Use Amazon Simple Queue Service (Amazon SQS) queue to pass information from order management to inventory tracking workflow - You should consider AWS Step Functions when you need to coordinate service components in the development of highly scalable and auditable applications. You should consider using Amazon Simple Queue Service (Amazon SQS), when you need a reliable, highly scalable, hosted queue for sending, storing, and receiving messages between services. Step Functions keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues.
Configure Amazon EventBridge to track the flow of work from order management to inventory tracking systems - Both Amazon EventBridge and Amazon SNS can be used to develop event-driven applications, and your choice will depend on your specific needs. Amazon EventBridge is recommended when you want to build an application that reacts to events from SaaS applications and/or AWS services. Amazon EventBridge is the only event-based service that integrates directly with third-party SaaS partners.
Use Amazon SNS to develop event-driven applications that can share information - Amazon SNS is recommended when you want to build an application that reacts to high throughput or low latency messages published by other applications or microservices (as Amazon SNS provides nearly unlimited throughput), or for applications that need very high fan-out (thousands or millions of endpoints).
Question 23: Skipped
As an AWS certified developer associate, you are working on AWS CloudFormation templates that will create resources for a company's cloud infrastructure. The infrastructure is composed of three stacks: Stack-A, Stack-B, and Stack-C. Stack-A will provision a VPC, a security group, and subnets for public web applications that will be referenced in Stack-B and Stack-C.
After running the stacks you decide to delete them, in which order should you do it?
Stack B, then Stack C, then Stack A
(Correct)
Stack C then Stack A then Stack B
Stack A, Stack C then Stack B
Stack A, then Stack B, then Stack C
Explanation
Correct option:
AWS CloudFormation gives developers and businesses an easy way to create a collection of related AWS and third-party resources and provision them in an orderly and predictable fashion.
Stack B, then Stack C, then Stack A
All of the imports must be removed before you can delete the exporting stack or modify the output value. In this case, you must delete Stack B as well as Stack C, before you delete Stack A.
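A hedged boto3 sketch of deleting the stacks in the required order, waiting for each importing stack to finish before the exporting stack is removed; the stack names come from the question:

```python
import boto3

cfn = boto3.client("cloudformation")
waiter = cfn.get_waiter("stack_delete_complete")

# Delete the importing stacks first, then the exporting stack.
for stack_name in ["Stack-B", "Stack-C", "Stack-A"]:
    cfn.delete_stack(StackName=stack_name)
    waiter.wait(StackName=stack_name)
    print(f"{stack_name} deleted")
```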
Incorrect options:
Stack A, then Stack B, then Stack C - All of the imports must be removed before you can delete the exporting stack or modify the output value. In this case, you cannot delete Stack A first because that's being referenced in the other Stacks.
Stack A, Stack C then Stack B - All of the imports must be removed before you can delete the exporting stack or modify the output value. In this case, you cannot delete Stack A first because that's being referenced in the other Stacks.
Stack C then Stack A then Stack B - Stack C is fine but you should delete Stack B before Stack A because all of the imports must be removed before you can delete the exporting stack or modify the output value.
Question 24: Skipped
A developer with access to the AWS Management Console terminated an instance in the us-east-1a availability zone. The attached EBS volume remained and is now available for attachment to other instances. Your colleague launches a new Linux EC2 instance in the us-east-1e availability zone and is attempting to attach the EBS volume. Your colleague informs you that it is not possible and needs your help.
Which of the following explanations would you provide to them?
The EBS volume is encrypted
EBS volumes are region locked
EBS volumes are AZ locked
(Correct)
The required IAM permissions are missing
Explanation
Correct option:
EBS volumes are AZ locked
An Amazon EBS volume is a durable, block-level storage device that you can attach to your instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive. EBS volumes are flexible. For current-generation volumes attached to current-generation instance types, you can dynamically increase size, modify the provisioned IOPS capacity, and change volume type on live production volumes.
When you create an EBS volume, it is automatically replicated within its Availability Zone to prevent data loss due to the failure of any single hardware component. You can attach an EBS volume to an EC2 instance in the same Availability Zone.
Incorrect options:
EBS volumes are region locked - An EBS volume is confined to an Availability Zone, not to a Region.
The required IAM permissions are missing - Missing permissions are a possibility in general, but even with the right permissions in place the volume would still be confined to its Availability Zone.
The EBS volume is encrypted - This doesn't affect the ability to attach an EBS volume.
Question 27: Skipped
A media publishing company is using Amazon EC2 instances for running their business-critical applications. Their IT team is looking at reserving capacity apart from savings plans for the critical instances.
As a Developer Associate, which of the following reserved instance types would you select to provide capacity reservations?
Both Regional Reserved Instances and Zonal Reserved Instances
Zonal Reserved Instances
(Correct)
Neither Regional Reserved Instances nor Zonal Reserved Instances
Regional Reserved Instances
Explanation
Correct option:
When you purchase a Reserved Instance for a specific Availability Zone, it's referred to as a Zonal Reserved Instance. Zonal Reserved Instances provide capacity reservations as well as discounts.
Zonal Reserved Instances - A zonal Reserved Instance provides a capacity reservation in the specified Availability Zone. Capacity Reservations enable you to reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This gives you the ability to create and manage Capacity Reservations independently from the billing discounts offered by Savings Plans or regional Reserved Instances.
Incorrect options:
Regional Reserved Instances - When you purchase a Reserved Instance for a Region, it's referred to as a regional Reserved Instance. A regional Reserved Instance does not provide a capacity reservation.
Both Regional Reserved Instances and Zonal Reserved Instances - As discussed above, only Zonal Reserved Instances provide capacity reservation.
Neither Regional Reserved Instances nor Zonal Reserved Instances - As discussed above, Zonal Reserved Instances provide capacity reservation.
Question 28: Skipped
A social gaming application supports the transfer of gift vouchers between users. When a user hits a certain milestone on the leaderboard, they earn a gift voucher that can be redeemed or transferred to another user. The development team wants to ensure that this transfer is captured in the database such that the records for both users are either written successfully with the new gift vouchers or the status quo is maintained.
Which of the following solutions represent the best-fit options to meet the requirements for the given use-case? (Select two)
Complete both operations on RDS MySQL in a single transaction block
(Correct)
Use the DynamoDB transactional read and write APIs on the table items as a single, all-or-nothing operation
(Correct)
Use the Amazon Athena transactional read and write APIs on the table items as a single, all-or-nothing operation
Complete both operations on Amazon RedShift in a single transaction block
Perform DynamoDB read and write operations with ConsistentRead parameter set to true
Explanation
Correct option:
Use the DynamoDB transactional read and write APIs on the table items as a single, all-or-nothing operation
You can use DynamoDB transactions to make coordinated all-or-nothing changes to multiple items both within and across tables. Transactions provide atomicity, consistency, isolation, and durability (ACID) in DynamoDB, helping you to maintain data correctness in your applications.
Complete both operations on RDS MySQL in a single transaction block
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database with support for transactions in the cloud. A relational database is a collection of data items with pre-defined relationships between them. RDS supports the most demanding database applications. You can choose between two SSD-backed storage options: one optimized for high-performance Online Transaction Processing (OLTP) applications, and the other for cost-effective general-purpose use.
Incorrect options:
Perform DynamoDB read and write operations with ConsistentRead parameter set to true - DynamoDB uses eventually consistent reads unless you specify otherwise. Read operations (such as GetItem, Query, and Scan) provide a ConsistentRead parameter. If you set this parameter to true, DynamoDB uses strongly consistent reads during the operation. Read consistency does not facilitate DynamoDB transactions and this option has been added as a distractor.
Complete both operations on Amazon RedShift in a single transaction block - Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large-scale data set storage and analysis (OLAP). It is not designed to serve as the transactional (OLTP) database backing an application, so it is not the right fit for this use-case.
Use the Amazon Athena transactional read and write APIs on the table items as a single, all-or-nothing operation - Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. It cannot be used to manage database transactions.
Question 30: Skipped
What steps can a developer take to optimize the performance of a CPU-bound AWS Lambda function and ensure fast response time?
Increase the function's CPU
Increase the function's timeout
Increase the function's memory
(Correct)
Increase the function's provisioned concurrency
Explanation
Correct option:
Increase the function's memory
Memory is the principal lever available to Lambda developers for controlling the performance of a function. You can configure the amount of memory allocated to a Lambda function, between 128 MB and 10,240 MB. The Lambda console defaults new functions to the smallest setting and many developers also choose 128 MB for their functions.
The amount of memory also determines the amount of virtual CPU available to a function. Adding more memory proportionally increases the amount of CPU, increasing the overall computational power available. If a function is CPU-, network- or memory-bound, then changing the memory setting can dramatically improve its performance.
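A minimal boto3 sketch of raising a function's memory, and therefore its proportional CPU share; the function name and size are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# More memory also means proportionally more vCPU for the function.
lambda_client.update_function_configuration(
    FunctionName="cpu-bound-function",   # placeholder name
    MemorySize=2048,                     # MB, between 128 and 10,240
)
```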
Incorrect options:
Increase the function's provisioned concurrency
Increase the function's reserved concurrency
In Lambda, concurrency is the number of requests your function can handle at the same time. There are two types of concurrency controls available:
Reserved concurrency – Reserved concurrency guarantees the maximum number of concurrent instances for the function. When a function has reserved concurrency, no other function can use that concurrency. There is no charge for configuring reserved concurrency for a function.
Provisioned concurrency – Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond immediately to your function's invocations. Note that configuring provisioned concurrency incurs charges to your AWS account.
Neither reserved concurrency nor provisioned concurrency has any impact on the CPU available to a function, so increasing concurrency does not speed up a CPU-bound function.
Increase the function's timeout - The timeout only controls how long a single invocation is allowed to run before Lambda terminates it; it does not add compute resources, so it cannot improve response time.
Increase the function's CPU - This is a distractor as you cannot directly increase the CPU available to a function; CPU is allocated in proportion to the configured memory.
Question 35: Skipped
A business has purchased one m4.xlarge Reserved Instance but it has used three m4.xlarge instances concurrently for an hour.
As a Developer, how will the instances be charged?
All instances are charged at one hour of Reserved Instance usage
One instance is charged at one hour of Reserved Instance usage and the other two instances are charged at two hours of On-Demand usage
(Correct)
All instances are charged at one hour of On-Demand Instance usage
One instance is charged at one hour of On-Demand usage and the other two instances are charged at two hours of Reserved Instance usage
Explanation
Correct option:
All Reserved Instances provide you with a discount compared to On-Demand pricing.
One instance is charged at one hour of Reserved Instance usage and the other two instances are charged at two hours of On-Demand usage
A Reserved Instance billing benefit can apply to a maximum of 3600 seconds (one hour) of instance usage per clock-hour. You can run multiple instances concurrently, but can only receive the benefit of the Reserved Instance discount for a total of 3600 seconds per clock-hour; instance usage that exceeds 3600 seconds in a clock-hour is billed at the On-Demand rate.
Incorrect options:
All instances are charged at one hour of Reserved Instance usage - This is incorrect.
All instances are charged at one hour of On-Demand Instance usage - This is incorrect.
One instance is charged at one hour of On-Demand usage and the other two instances are charged at two hours of Reserved Instance usage - This is incorrect. If multiple eligible instances are running concurrently, the Reserved Instance billing benefit is applied to all the instances at the same time up to a maximum of 3600 seconds in a clock-hour; thereafter, On-Demand rates apply.
Question 39: Skipped
A development team is working on an AWS Lambda function that accesses DynamoDB. The Lambda function must do an upsert, that is, it must retrieve an item and update some of its attributes or create the item if it does not exist.
Which of the following represents the solution with MINIMUM IAM permissions that can be used for the Lambda function to achieve this functionality?
dynamodb:AddItem, dynamodb:GetItem
dynamodb:UpdateItem, dynamodb:GetItem
(Correct)
dynamodb:GetRecords, dynamodb:PutItem, dynamodb:UpdateTable
dynamodb:UpdateItem, dynamodb:GetItem, dynamodb:PutItem
Explanation
Correct option:
dynamodb:UpdateItem, dynamodb:GetItem - With Amazon DynamoDB transactions, you can group multiple actions together and submit them as a single all-or-nothing TransactWriteItems or TransactGetItems operation.
You can use AWS Identity and Access Management (IAM) to restrict the actions that transactional operations can perform in Amazon DynamoDB. Permissions for Put, Update, Delete, and Get actions are governed by the permissions used for the underlying PutItem, UpdateItem, DeleteItem, and GetItem operations. For the ConditionCheck action, you can use the dynamodb:ConditionCheck permission in IAM policies.
The UpdateItem action of the DynamoDB API edits an existing item's attributes or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
There is no need to include the dynamodb:PutItem action for the given use-case.
So, the IAM policy must include permissions to get and update the item in the DynamoDB table.
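A hedged boto3 sketch of the upsert itself, which only needs the UpdateItem (and, for reads, GetItem) permission; the table, key, and attribute names are placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# UpdateItem creates the item if it does not exist, otherwise edits its attributes.
dynamodb.update_item(
    TableName="Orders",                           # placeholder table name
    Key={"order_id": {"S": "order-42"}},
    UpdateExpression="SET order_status = :s",
    ExpressionAttributeValues={":s": {"S": "SHIPPED"}},
    ReturnValues="ALL_NEW",
)
```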
Incorrect options:
dynamodb:AddItem, dynamodb:GetItem
dynamodb:GetRecords, dynamodb:PutItem, dynamodb:UpdateTable
dynamodb:UpdateItem, dynamodb:GetItem, dynamodb:PutItem
These three options contradict the explanation provided above, so these options are incorrect.
Question 45: Skipped
As a senior architect, you are responsible for the development, support, maintenance, and implementation of all database applications written using NoSQL technology. A new project has a throughput requirement of 10 strongly consistent reads per second, each 6 KB in size.
How many read capacity units will you need when configuring your DynamoDB table?
20
(Correct)
60
30
10
Explanation
Correct option:
20
One read capacity unit represents one strongly consistent read per second for an item up to 4 KB in size. If you need to read an item that is larger than 4 KB, DynamoDB consumes additional read capacity units.
1) Divide the item size by 4 KB and round up to the next whole number: 6 KB / 4 KB = 1.5, rounded up to 2 read capacity units per read.
2) Multiply by the number of strongly consistent reads per second: 2 x 10 = 20 read capacity units.
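The same arithmetic as a small Python helper, assuming strongly consistent reads:

```python
import math

def strongly_consistent_rcus(item_size_kb: float, reads_per_second: int) -> int:
    """RCUs = ceil(item size / 4 KB) * reads per second (strongly consistent)."""
    units_per_read = math.ceil(item_size_kb / 4)
    return units_per_read * reads_per_second

print(strongly_consistent_rcus(item_size_kb=6, reads_per_second=10))  # -> 20
```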
Incorrect options:
60
30
10
These three options contradict the details provided in the explanation above, so these are incorrect.
Question 48: Skipped
As a Senior Developer, you are tasked with creating several API Gateway powered APIs along with your team of developers. The developers are working on the API in the development environment, but they find the changes made to the APIs are not reflected when the API is called.
As a Developer Associate, which of the following solutions would you recommend for this use-case?
Redeploy the API to an existing stage or to a new stage
(Correct)
Enable Lambda authorizer to access API
Developers need IAM permissions on API execution component of API Gateway
Use Stage Variables for development state of API
Explanation
Correct option:
Redeploy the API to an existing stage or to a new stage
After creating your API, you must deploy it to make it callable by your users. To deploy an API, you create an API deployment and associate it with a stage. A stage is a logical reference to a lifecycle state of your API (for example, dev, prod, beta, v2). API stages are identified by the API ID and stage name. Every time you update an API, you must redeploy the API to an existing stage or to a new stage. Updating an API includes modifying routes, methods, integrations, authorizers, and anything else other than stage settings.
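A hedged boto3 sketch of redeploying a REST API to a stage so that the latest changes become callable; the API ID and stage name are placeholders:

```python
import boto3

apigateway = boto3.client("apigateway")

# Redeploy the current API definition to the 'dev' stage.
apigateway.create_deployment(
    restApiId="a1b2c3d4e5",   # placeholder REST API ID
    stageName="dev",
    description="Redeploy latest changes so they are reflected on the dev stage",
)
```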
Incorrect options:
Developers need IAM permissions on API execution component of API Gateway - Access control for Amazon API Gateway APIs is done with IAM permissions. To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform the required IAM actions supported by the API execution component of API Gateway. In the current scenario, developers do not need permissions on "execution components" but on "management components" of API Gateway that help them to create, deploy, and manage an API. Hence, this statement is an incorrect option.
Enable Lambda authorizer to access API - A Lambda authorizer (formerly known as a custom authorizer) is an API Gateway feature that uses a Lambda function to control access to your API. So, this feature too helps in access control, but in the current scenario it's the developers and not the users who are facing the issue. So, this statement is an incorrect option.
Use Stage Variables for development state of API - Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of a REST API. They act like environment variables and can be used in your API setup and mapping templates. Stage variables are not connected to the scenario described in the current use case.
Question 50: Skipped
A development team is building a game where players can buy items with virtual coins. For every virtual coin bought by a user, both the players table as well as the items table in DynamoDB need to be updated simultaneously using an all-or-nothing operation.
As a developer associate, how will you implement this functionality?
Use TransactWriteItems API of DynamoDB Transactions
(Correct)
Use BatchWriteItem API to update multiple tables simultaneously
Capture the transactions in the players table using DynamoDB streams and then sync with the items table
Capture the transactions in the items table using DynamoDB streams and then sync with the players table
Explanation
Correct option: Use TransactWriteItems API of DynamoDB Transactions
With Amazon DynamoDB transactions, you can group multiple actions together and submit them as a single all-or-nothing TransactWriteItems or TransactGetItems operation.
TransactWriteItems is a synchronous and idempotent write operation that groups up to 25 write actions in a single all-or-nothing operation. These actions can target up to 25 distinct items in one or more DynamoDB tables within the same AWS account and in the same Region. The aggregate size of the items in the transaction cannot exceed 4 MB. The actions are completed atomically so that either all of them succeed or none of them succeeds.
You can optionally include a client token when you make a TransactWriteItems call to ensure that the request is idempotent. Making your transactions idempotent helps prevent application errors if the same operation is submitted multiple times due to a connection time-out or other connectivity issue.
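A hedged boto3 sketch of updating the players and items tables in a single all-or-nothing transaction; the table, key, and attribute names are placeholders:

```python
import uuid
import boto3

dynamodb = boto3.client("dynamodb")

# Either both updates succeed or neither does.
dynamodb.transact_write_items(
    ClientRequestToken=str(uuid.uuid4()),   # makes the call idempotent
    TransactItems=[
        {"Update": {
            "TableName": "players",                                # placeholder
            "Key": {"player_id": {"S": "player-1"}},
            "UpdateExpression": "ADD coins :spent",
            "ExpressionAttributeValues": {":spent": {"N": "-10"}},
        }},
        {"Update": {
            "TableName": "items",                                  # placeholder
            "Key": {"item_id": {"S": "sword-7"}},
            "UpdateExpression": "SET owner_id = :p",
            "ExpressionAttributeValues": {":p": {"S": "player-1"}},
        }},
    ],
)
```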
Incorrect options:
Use BatchWriteItem API to update multiple tables simultaneously - A TransactWriteItems operation differs from a BatchWriteItem operation in that all the actions it contains must be completed successfully, or no changes are made at all. With a BatchWriteItem operation, it is possible that only some of the actions in the batch succeed while the others do not.
Capture the transactions in the players table using DynamoDB streams and then sync with the items table
Capture the transactions in the items table using DynamoDB streams and then sync with the players table
Many applications benefit from capturing changes to items stored in a DynamoDB table, at the point in time when such changes occur. DynamoDB supports streaming of item-level change data capture records in near-real-time. You can build applications that consume these streams and take action based on the contents. DynamoDB streams cannot be used to capture transactions in DynamoDB, therefore both these options are incorrect.
Question 51: Skipped
As a Developer, you are given a document written in YAML that represents the architecture of a serverless application. The first line of the document contains Transform: 'AWS::Serverless-2016-10-31'.
What does the Transform section in the document represent?
It represents a Lambda function definition
Presence of Transform section indicates it is a Serverless Application Model (SAM) template
(Correct)
Presence of Transform section indicates it is a CloudFormation Parameter
It represents an intrinsic function
Explanation
Correct option:
AWS CloudFormation template is a JSON- or YAML-formatted text file that describes your AWS infrastructure. Templates include several major sections. The "Resources" section is the only required section. The optional "Transform" section specifies one or more macros that AWS CloudFormation uses to process your template.
Presence of Transform section indicates it is a Serverless Application Model (SAM) template - The AWS::Serverless transform, which is a macro hosted by AWS CloudFormation, takes an entire template written in the AWS Serverless Application Model (AWS SAM) syntax and transforms and expands it into a compliant AWS CloudFormation template. So, the presence of the Transform section indicates the document is a SAM template.
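A minimal, hedged sketch of a SAM template expressed here as JSON (a Python dict); the Transform line is what marks it as a SAM template, and the handler, runtime, and code location values are placeholders:

```python
import json

sam_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",   # marks this as a SAM template
    "Resources": {
        "HelloFunction": {
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": "app.handler",        # placeholder handler
                "Runtime": "python3.12",
                "CodeUri": "./src",              # placeholder code location
            },
        }
    },
}
print(json.dumps(sam_template, indent=2))
```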
Incorrect options:
It represents a Lambda function definition - A Lambda function is created using the "AWS::Lambda::Function" resource and has no connection to the Transform section.
It represents an intrinsic function - Intrinsic functions in templates are used to assign values to properties that are not available until runtime. They usually start with Fn:: or !. Example: !Sub or Fn::Sub.
Presence of 'Transform' section indicates it is a CloudFormation Parameter - CloudFormation parameters are defined in the Parameters block of the template, not in the Transform section.
Question 52: Skipped
A pharmaceutical company runs their database workloads on Provisioned IOPS SSD (io1) volumes.
As a Developer Associate, which of the following options would you identify as an INVALID configuration for io1 EBS volume types?
200 GiB size volume with 2000 IOPS
200 GiB size volume with 5000 IOPS
200 GiB size volume with 10000 IOPS
200 GiB size volume with 15000 IOPS
(Correct)
Explanation
Correct option:
200 GiB size volume with 15000 IOPS - This is an invalid configuration. The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1. So, for a 200 GiB volume size, max IOPS possible is 200*50 = 10000 IOPS.
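The 50:1 rule as a quick Python check over the four answer options:

```python
MAX_IOPS_PER_GIB = 50  # io1: provisioned IOPS may be at most 50x the volume size in GiB

def is_valid_io1_config(size_gib: int, iops: int) -> bool:
    return iops <= size_gib * MAX_IOPS_PER_GIB

for iops in (2000, 5000, 10000, 15000):
    print(iops, "valid" if is_valid_io1_config(200, iops) else "invalid")
```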
Incorrect options:
Provisioned IOPS SSD (io1) volumes allow you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time. An io1 volume can range in size from 4 GiB to 16 TiB. The maximum ratio of provisioned IOPS to the requested volume size (in GiB) is 50:1. For example, a 100 GiB volume can be provisioned with up to 5,000 IOPS.
200 GiB size volume with 2000 IOPS - As explained above, up to 10000 IOPS is a valid configuration for the given use-case.
200 GiB size volume with 10000 IOPS - As explained above, up to 10000 IOPS is a valid configuration for the given use-case.
200 GiB size volume with 5000 IOPS - As explained above, up to 10000 IOPS is a valid configuration for the given use-case.
Question 58: Skipped
The development team at a multi-national retail company wants to support trusted third-party authenticated users from the supplier organizations to create and update records in specific DynamoDB tables in the company's AWS account.
As a Developer Associate, which of the following solutions would you suggest for the given use-case?
Use Cognito Identity pools to enable trusted third-party authenticated users to access DynamoDB
(Correct)
Create a new IAM group in the company's AWS account for each of the third-party authenticated users from the supplier organizations. The users can then use the IAM group credentials to access DynamoDB
Create a new IAM user in the company's AWS account for each of the third-party authenticated users from the supplier organizations. The users can then use the IAM user credentials to access DynamoDB
Use Cognito User pools to enable trusted third-party authenticated users to access DynamoDB
Explanation
Correct option:
Use Cognito Identity pools to enable trusted third-party authenticated users to access DynamoDB
Amazon Cognito identity pools (federated identities) enable you to create unique identities for your users and federate them with identity providers. With an identity pool, you can obtain temporary, limited-privilege AWS credentials to access other AWS services. Amazon Cognito identity pools support the following identity providers:
Public providers: Login with Amazon (Identity Pools), Facebook (Identity Pools), Google (Identity Pools), Sign in with Apple (Identity Pools).
Amazon Cognito User Pools
Open ID Connect Providers (Identity Pools)
SAML Identity Providers (Identity Pools)
Developer Authenticated Identities (Identity Pools)
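A hedged boto3 sketch of exchanging a third-party identity token for temporary AWS credentials through an identity pool; the pool ID, identity provider name, and token are placeholders:

```python
import boto3

cognito_identity = boto3.client("cognito-identity")

idp_logins = {
    # placeholder provider name and token from the third-party IdP
    "arn:aws:iam::123456789012:saml-provider/SupplierIdP": "<idp-token>",
}

# Exchange the third-party token for a Cognito identity ...
identity = cognito_identity.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # placeholder
    Logins=idp_logins,
)

# ... and then for temporary, limited-privilege AWS credentials usable with DynamoDB.
creds = cognito_identity.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=idp_logins,
)
print(creds["Credentials"]["AccessKeyId"])
```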
Incorrect options:
Use Cognito User pools to enable trusted third-party authenticated users to access DynamoDB - A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito, or federate through a third-party identity provider (IdP). Cognito User Pools cannot be used to obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB.
Create a new IAM user in the company's AWS account for each of the third-party authenticated users from the supplier organizations. The users can then use the IAM user credentials to access DynamoDB
Create a new IAM group in the company's AWS account for each of the third-party authenticated users from the supplier organizations. The users can then use the IAM group credentials to access DynamoDB
Both these options involve setting up IAM resources such as IAM users or IAM groups just to provide access to DynamoDB tables. As the users are already trusted third-party authenticated users, Cognito Identity Pool can address this use-case in an elegant way.
Question 60: Skipped
As a Team Lead, you are expected to generate a weekly report of the code builds to share internally and with the client. This report consists of the number of code builds performed in a week, the percentage of successful and failed builds, and the overall time spent on these builds by the team members. You also need to retrieve the CodeBuild logs for failed builds and analyze them in Athena.
Which of the following options will help achieve this?
Use AWS Lambda integration
Use AWS CloudTrail and deliver logs to S3
Enable S3 and CloudWatch Logs integration
(Correct)
Use CloudWatch Events
Explanation
Correct option:
Enable S3 and CloudWatch Logs integration - AWS CodeBuild monitors builds on your behalf and reports metrics through Amazon CloudWatch. These metrics include the number of total builds, failed builds, successful builds, and the duration of builds. You can monitor your builds at two levels: project level and AWS account level. You can export log data from your log groups to an Amazon S3 bucket and use this data in custom processing and analysis, or to load onto other systems.
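A hedged boto3 sketch of exporting a CodeBuild log group to S3 so the data can be queried in Athena; the log group, bucket, and time range are placeholders:

```python
import time
import boto3

logs = boto3.client("logs")

now_ms = int(time.time() * 1000)
one_week_ms = 7 * 24 * 60 * 60 * 1000

# Export last week's CodeBuild logs to S3 for downstream analysis in Athena.
logs.create_export_task(
    taskName="codebuild-weekly-export",
    logGroupName="/aws/codebuild/my-project",   # placeholder log group
    fromTime=now_ms - one_week_ms,
    to=now_ms,
    destination="my-build-logs-bucket",         # placeholder S3 bucket
    destinationPrefix="codebuild/weekly",
)
```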
Incorrect options:
Use CloudWatch Events - You can integrate CloudWatch Events with CodeBuild. However, we are looking at storing and running queries on logs, so CloudWatch Logs with S3 integration makes sense for this context.
Use AWS Lambda integration - Lambda is a good choice if you want to use the boto3 library to read logs programmatically. However, the CloudWatch Logs and S3 integration is already built in and is a more optimized way of managing the given use-case.
Use AWS CloudTrail and deliver logs to S3 - AWS CodeBuild is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in CodeBuild. CloudTrail captures all API calls for CodeBuild as events, including calls from the CodeBuild console and from code calls to the CodeBuild APIs. If you create a trail, you can enable continuous delivery of CloudTrail events to an S3 bucket, including events for CodeBuild. This is an important feature for monitoring a service but isn't a good fit for the current scenario.
Question 65: Skipped
A diagnostic lab stores its data on DynamoDB. The lab wants to back up a particular DynamoDB table's data to Amazon S3, so it can download the S3 backup locally for some operational use.
Which of the following options is NOT feasible?
Use the DynamoDB on-demand backup capability to write to Amazon S3 and download locally
(Correct)
Use AWS Data Pipeline to export your table to an S3 bucket in the account of your choice and download locally
Use AWS Glue to copy your table to Amazon S3 and download locally
Use Hive with Amazon EMR to export your data to an S3 bucket and download locally
Explanation
Correct option:
Use the DynamoDB on-demand backup capability to write to Amazon S3 and download locally - This option is not feasible for the given use-case. DynamoDB has two built-in backup methods (On-demand, Point-in-time recovery) that write to Amazon S3, but you will not have access to the S3 buckets that are used for these backups.
Incorrect options:
Use AWS Data Pipeline to export your table to an S3 bucket in the account of your choice and download locally - This is the easiest method. This method is used when you want to make a one-time backup using the lowest amount of AWS resources possible. Data Pipeline uses Amazon EMR to create the backup, and the scripting is done for you. You don't have to learn Apache Hive or Apache Spark to accomplish this task.
Use Hive with Amazon EMR to export your data to an S3 bucket and download locally - Use Hive to export data to an S3 bucket. Or, use the open-source emr-dynamodb-connector to manage your own custom backup method in Spark or Hive. These methods are the best practice to use if you're an active Amazon EMR user and are comfortable with Hive or Spark. These methods offer more control than the Data Pipeline method.
Use AWS Glue to copy your table to Amazon S3 and download locally - Use AWS Glue to copy your table to Amazon S3. This is the best practice to use if you want automated, continuous backups that you can also use in another service, such as Amazon Athena.
Question 2: Skipped
A company’s e-commerce website is expecting hundreds of thousands of visitors on Black Friday. The marketing department is concerned that high volumes of orders might stress SQS leading to message failures. The company has approached you for the steps to be taken as a precautionary measure against the high volumes.
What step will you suggest as a Developer Associate?
Amazon SQS is highly scalable and does not need any intervention to handle the expected high volumes
(Correct)
Pre-configure the SQS queue to increase the capacity when messages hit a certain threshold
Enable auto-scaling in the SQS queue
Convert the queue into a FIFO ordered queue, since messages to the downstream system will be processed faster once they are ordered
Explanation
Correct option:
Amazon SQS is highly scalable and does not need any intervention to handle the expected high volumes
Amazon SQS leverages the AWS cloud to dynamically scale, based on demand. SQS scales elastically with your application so you don't have to worry about capacity planning and pre-provisioning. For most standard queues (depending on queue traffic and message backlog), there can be a maximum of approximately 120,000 inflight messages (received from a queue by a consumer, but not yet deleted from the queue).
Incorrect options:
Pre-configure the SQS queue to increase the capacity when messages hit a certain threshold - This is an incorrect statement. Amazon SQS scales dynamically, automatically provisioning the needed capacity.
Enable auto-scaling in the SQS queue - SQS queues are, by definition, auto-scalable and do not need any configuration changes for auto-scaling.
Convert the queue into a FIFO ordered queue, since messages to the downstream system will be processed faster once they are ordered - This is a wrong statement. You cannot convert an existing standard queue into a FIFO queue. To make the move, you must either create a new FIFO queue for your application or delete your existing standard queue and recreate it as a FIFO queue.
Question 4: Skipped
Your company has a three-year contract with a healthcare provider. The contract states that monthly database backups must be retained for the duration of the contract for compliance purposes. Currently, the backup retention limit for automated backups on Amazon Relational Database Service (RDS) does not meet your requirements.
Which of the following solutions can help you meet your requirements?
Enable RDS Multi-AZ
Enable RDS automatic backups
Enable RDS Read replicas
Create a cron event in CloudWatch, which triggers an AWS Lambda function that triggers the database snapshot
(Correct)
Explanation
Correct option:
Create a cron event in CloudWatch, which triggers an AWS Lambda function that triggers the database snapshot - There are multiple ways to run periodic jobs in AWS. CloudWatch Events with Lambda is the simplest of all solutions. To do this, create a CloudWatch Rule and select “Schedule” as the Event Source. You can either use a cron expression or provide a fixed rate (such as every 5 minutes). Next, select “Lambda Function” as the Target. Your Lambda will have the necessary code for snapshot functionality.
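A hedged sketch of the Lambda handler that the scheduled CloudWatch rule would invoke; the DB instance identifier is a placeholder:

```python
import datetime
import boto3

rds = boto3.client("rds")

def handler(event, context):
    """Triggered on a monthly schedule; takes a manual DB snapshot that is
    retained until explicitly deleted (unlike automated backups)."""
    timestamp = datetime.datetime.utcnow().strftime("%Y-%m-%d")
    rds.create_db_snapshot(
        DBInstanceIdentifier="healthcare-db",                      # placeholder
        DBSnapshotIdentifier=f"healthcare-db-monthly-{timestamp}",
    )
```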
Incorrect options:
Enable RDS automatic backups - You can enable automatic backups but as of 2020, the retention period is 0 to 35 days.
Enable RDS Read replicas - Amazon RDS uses the database engine's built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. Read replicas are useful for heavy read-only data workloads. They are not suitable for the given use-case.
Enable RDS Multi-AZ - Multi-AZ allows you to create a highly available application with RDS. It does not directly help in database backups or retention periods.
Question 6: Skipped
The development team at a social media company is considering using Amazon ElastiCache to boost the performance of their existing databases.
As a Developer Associate, which of the following use-cases would you recommend as the BEST fit for ElastiCache? (Select two)
Use ElastiCache to improve performance of compute-intensive workloads
(Correct)
Use ElastiCache to run highly complex JOIN queries
Use ElastiCache to improve latency and throughput for write-heavy application workloads
Use ElastiCache to improve latency and throughput for read-heavy application workloads
(Correct)
Use ElastiCache to improve performance of Extract-Transform-Load (ETL) workloads
Explanation
Correct option:
Use ElastiCache to improve latency and throughput for read-heavy application workloads
Use ElastiCache to improve performance of compute-intensive workloads
Amazon ElastiCache allows you to run in-memory data stores in the AWS cloud. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing.
Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine) by allowing you to store the objects that are often read in the cache.
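To make the read-heavy use case concrete, here is a minimal cache-aside sketch using the redis-py client; the endpoint, key names, and database loader are placeholders, and error handling is omitted for brevity.

```python
import json

import redis  # redis-py; compatible with ElastiCache for Redis endpoints

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def load_profile_from_database(user_id):
    """Placeholder for the expensive read against RDS/DynamoDB."""
    return {"user_id": user_id, "name": "example"}

def get_user_profile(user_id, ttl_seconds=300):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"user-profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no database round trip
    profile = load_profile_from_database(user_id)
    cache.setex(key, ttl_seconds, json.dumps(profile))  # populate the cache with a TTL
    return profile
```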
Incorrect options:
Use ElastiCache to improve latency and throughput for write-heavy application workloads - As mentioned earlier in the explanation, Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads. Caching is not a good fit for write-heavy applications as the cache goes stale at a very fast rate.
Use ElastiCache to improve performance of Extract-Transform-Load (ETL) workloads - ETL workloads involve reading and transforming high volume data which is not a good fit for caching. You should use AWS Glue or Amazon EMR to facilitate ETL workloads.
Use ElastiCache to run highly complex JOIN queries - Complex JOIN queries are best run on relational databases such as RDS or Aurora. ElastiCache is not a good fit for this use-case.
References:
Question 11: Skipped
A team is checking the viability of using AWS Step Functions for creating a banking workflow for loan approvals. The web application will also have human approval as one of the steps in the workflow.
As a developer associate, which of the following would you identify as the key characteristics for AWS Step Function? (Select two)
Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that do not support any human approval steps
Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that can also support any human approval steps
(Correct)
Both Standard and Express Workflows support all service integrations, activities, and design patterns
You should use Express Workflows for workloads with high event rates and short duration
(Correct)
Express Workflows have a maximum duration of five minutes and Standard workflows have a maximum duration of 180 days or 6 months
Explanation
Correct options:
Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that can also support any human approval steps - Standard Workflows on AWS Step Functions are more suitable for long-running, durable, and auditable workflows where repeating workflow steps is expensive (e.g., restarting a long-running media transcode) or harmful (e.g., charging a credit card twice). Example workloads include training and deploying machine learning models, report generation, billing, credit card processing, and ordering and fulfillment processes. Step functions also support any human approval steps.
You should use Express Workflows for workloads with high event rates and short duration - You should use Express Workflows for workloads with high event rates and short durations. Express Workflows support event rates of more than 100,000 per second.
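To make the choice concrete, the workflow type is selected when the state machine is created; a minimal boto3 sketch with a placeholder role ARN and a trivial definition might look like the following.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Trivial placeholder definition; a real loan-approval workflow would include
# a callback (waitForTaskToken) step for the human approval, which requires
# a Standard workflow since Express Workflows do not support callbacks.
definition = {
    "StartAt": "Done",
    "States": {"Done": {"Type": "Pass", "End": True}},
}

sfn.create_state_machine(
    name="high-volume-short-lived",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # placeholder
    type="EXPRESS",  # use "STANDARD" for long-running, auditable workflows
)
```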
Incorrect options:
Standard Workflows on AWS Step Functions are suitable for long-running, durable, and auditable workflows that do not support any human approval steps - Since Step Functions supports human approval steps, this option is incorrect.
Express Workflows have a maximum duration of five minutes and Standard workflows have a maximum duration of 180 days or 6 months - Express Workflows have a maximum duration of five minutes and Standard workflows have a maximum duration of one year.
Both Standard and Express Workflows support all service integrations, activities, and design patterns - Standard Workflows support all service integrations, activities, and design patterns. Express Workflows do not support activities, job-run (.sync), and Callback patterns.
Reference:
Question 12: Skipped
An intern at an IT company is getting started with AWS Cloud and wants to understand the following Amazon S3 bucket access policy:
As a Developer Associate, can you help him identify what the policy is for?
Allows full S3 access, but explicitly denies access to the Production bucket if the user has not signed in using MFA within the last thirty minutes
(Correct)
Allows IAM users to access their own home directory in Amazon S3, programmatically and in the console
Allows full S3 access to an Amazon Cognito user, but explicitly denies access to the Production bucket if the Cognito user is not authenticated
Allows a user to manage a single Amazon S3 bucket and denies every other AWS action and resource if the user is not signed in using MFA within the last thirty minutes
Explanation
Correct option:
Allows full S3 access, but explicitly denies access to the Production bucket if the user has not signed in using MFA within the last thirty minutes - This example shows how you might create a policy that allows an Amazon S3 user to access any bucket, including updating, adding, and deleting objects. However, it explicitly denies access to the Production bucket if the user has not signed in using multi-factor authentication (MFA) within the last thirty minutes. This policy grants the permissions necessary to perform this action in the console or programmatically using the AWS CLI or AWS API.
This policy never allows programmatic access to the Production bucket using long-term user access keys. This is accomplished using the aws:MultiFactorAuthAge condition key with the NumericGreaterThanIfExists condition operator. This policy condition returns true if MFA is not present or if the age of the MFA is greater than 30 minutes. In those situations, access is denied. To access the Production bucket programmatically, the S3 user must use temporary credentials that were generated in the last 30 minutes using the GetSessionToken API operation.
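Although the policy referenced in the question is not reproduced here, an illustrative policy with the behavior described above (expressed as a Python dict for consistency with the other examples; the bucket name is taken from the explanation, everything else is an assumption) could look like this.

```python
# Illustrative only: full S3 access, but deny the "Production" bucket unless
# MFA was used within the last 30 minutes (1800 seconds).
mfa_protected_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3EverywhereElse",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
        },
        {
            "Sid": "DenyProductionWithoutRecentMFA",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::Production",
                "arn:aws:s3:::Production/*",
            ],
            # True when MFA is absent OR older than 30 minutes, so access is denied.
            "Condition": {
                "NumericGreaterThanIfExists": {"aws:MultiFactorAuthAge": "1800"}
            },
        },
    ],
}
```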
Incorrect options:
Allows a user to manage a single Amazon S3 bucket and denies every other AWS action and resource if the user is not signed in using MFA within the last thirty minutes
Allows full S3 access to an Amazon Cognito user, but explicitly denies access to the Production bucket if the Cognito user is not authenticated
Allows IAM users to access their own home directory in Amazon S3, programmatically and in the console
These three options contradict the explanation provided above, so these options are incorrect.
Reference:
Question 13: Skipped
Your web application reads and writes data to your DynamoDB table. The table is provisioned with 400 Write Capacity Units (WCU’s) shared across 4 partitions. One of the partitions receives 250 WCU/second while others receive much less. You receive the error 'ProvisionedThroughputExceededException'.
What is the likely cause of this error?
You have a hot partition
(Correct)
Write Capacity Units (WCU’s) are applied across all your DynamoDB tables and this needs reconfiguration
Configured IAM policy is wrong
CloudWatch monitoring is lagging
Explanation
Correct option:
You have a hot partition
It's not always possible to distribute read and write activity evenly. When data access is imbalanced, a "hot" partition can receive a higher volume of read and write traffic compared to other partitions. To better accommodate uneven access patterns, DynamoDB adaptive capacity enables your application to continue reading and writing to hot partitions without being throttled, provided that traffic does not exceed your table’s total provisioned capacity or the partition maximum capacity.
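While adaptive capacity helps, applications should still retry throttled requests; a simple exponential-backoff sketch with boto3 is shown below (the table name and item are placeholders). Choosing a higher-cardinality partition key remains the more fundamental fix for a hot partition.

```python
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def put_with_backoff(item, table_name="orders", max_attempts=5):
    """Retry throttled writes with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return dynamodb.put_item(TableName=table_name, Item=item)
        except ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise  # not a throttling error, surface it
            time.sleep(2 ** attempt * 0.1)  # back off: 0.1s, 0.2s, 0.4s, ...
    raise RuntimeError("Write kept getting throttled; check for a hot partition")
```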
Incorrect options:
CloudWatch monitoring is lagging - The error is specific to DynamoDB itself and not to any connected service. CloudWatch is a fully managed service from AWS and does not result in throttling.
Configured IAM policy is wrong - The error is not related to authorization but to exceeding a pre-configured throughput value, so it is clearly not a permissions issue.
Write Capacity Units (WCU’s) are applied across all your DynamoDB tables and this needs reconfiguration - This statement is incorrect. Read Capacity Units and Write Capacity Units are specific to one table.
Reference:
Question 14: Skipped
You have been asked by your Team Lead to enable detailed monitoring of the Amazon EC2 instances your team uses. As a Developer working on the AWS CLI, which of the below commands would you run?
aws ec2 run-instances --image-id ami-09092360 --monitoring Enabled=true
aws ec2 monitor-instances --instance-ids i-1234567890abcdef0
(Correct)
aws ec2 run-instances --image-id ami-09092360 --monitoring State=enabled
aws ec2 monitor-instances --instance-id i-1234567890abcdef0
Explanation
Correct option:
aws ec2 monitor-instances --instance-ids i-1234567890abcdef0 - This enables detailed monitoring for a running instance.
Incorrect options:
aws ec2 run-instances --image-id ami-09092360 --monitoring Enabled=true - This syntax is used to enable detailed monitoring when launching an instance from the AWS CLI, not for an already running instance.
aws ec2 run-instances --image-id ami-09092360 --monitoring State=enabled - This is invalid syntax; the --monitoring parameter expects Enabled=true|false.
aws ec2 monitor-instances --instance-id i-1234567890abcdef0 - This is invalid syntax; the parameter is --instance-ids, not --instance-id.
References:
Question 20: Skipped
You are creating a mobile application that needs access to the AWS API Gateway. Users will need to register first before they can access your API and you would like the user management to be fully managed.
Which authentication option should you use for your API Gateway layer?
Use Cognito User Pools
(Correct)
Use IAM permissions with sigv4
Use API Gateway User Pools
Use Lambda Authorizer
Explanation
Correct option:
Use Cognito User Pools - As an alternative to using IAM roles and policies or Lambda authorizers, you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway. To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user into the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set to the request's Authorization header. The API call succeeds only if the required token is supplied and the supplied token is valid, otherwise, the client isn't authorized to make the call because the client did not have credentials that could be authorized.
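As an illustration, attaching a Cognito user pool authorizer to a REST API with boto3 might look like the sketch below; the API ID and user pool ARN are placeholders.

```python
import boto3

apigateway = boto3.client("apigateway")

# Placeholders: an existing REST API and an existing Cognito user pool.
rest_api_id = "a1b2c3d4e5"
user_pool_arn = "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"

# Create a COGNITO_USER_POOLS authorizer that reads the token from the
# Authorization header of incoming requests.
authorizer = apigateway.create_authorizer(
    restApiId=rest_api_id,
    name="app-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[user_pool_arn],
    identitySource="method.request.header.Authorization",
)

# The returned authorizer ID is then referenced when configuring each API
# method (e.g. via put_method with authorizationType="COGNITO_USER_POOLS").
print(authorizer["id"])
```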
Incorrect options:
Use Lambda Authorizer - A Lambda authorizer (formerly known as a custom authorizer) is an API Gateway feature that uses a Lambda function to control access to your API. A Lambda authorizer is useful if you want to implement a custom authorization scheme that uses a bearer token authentication strategy such as OAuth or SAML, or that uses request parameters to determine the caller's identity. This won't be a fully managed user management solution but it would allow you to check for access at the AWS API Gateway level.
Use IAM permissions with sigv4 - Signature Version 4 is the process to add authentication information to AWS requests sent by HTTP. For security, most requests to AWS must be signed with an access key, which consists of an access key ID and secret access key. These two keys are commonly referred to as your security credentials. But we cannot possibly create an IAM user for every user of the application, so this is not a fully managed user management solution either.
Use API Gateway User Pools - This is a made-up option.
References:
Question 23: Skipped
A cybersecurity company is running a serverless backend with several compute-heavy workflows running on Lambda functions. The development team has noticed a performance lag after analyzing the performance metrics for the Lambda functions.
As a Developer Associate, which of the following options would you suggest as the BEST solution to address the compute-heavy workloads?
Invoke the Lambda functions asynchronously to process the compute-heavy workflows
Increase the amount of memory available to the Lambda functions
(Correct)
Use reserved concurrency to account for the compute-heavy workflows
Use provisioned concurrency to account for the compute-heavy workflows
Explanation
Correct option:
Increase the amount of memory available to the Lambda functions
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
In the AWS Lambda resource model, you choose the amount of memory you want for your function which allocates proportional CPU power and other resources. This means you will have access to more compute power when you choose one of the new larger settings. You can set your memory in 64MB increments from 128MB to 3008MB. You access these settings when you create a function or update its configuration. The settings are available using the AWS Management Console, AWS CLI, or SDKs.
Therefore, by increasing the amount of memory available to the Lambda functions, you can run the compute-heavy workflows.
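For example, memory (and with it, proportional CPU) can be raised via the console, the AWS CLI, or an SDK call like the boto3 sketch below; the function name and memory value are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Raising MemorySize also raises the CPU allocated to the function,
# which is what compute-heavy workflows need.
lambda_client.update_function_configuration(
    FunctionName="compute-heavy-workflow",  # placeholder function name
    MemorySize=3008,                        # value in MB
)
```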
Incorrect options:
Invoke the Lambda functions asynchronously to process the compute-heavy workflows - When you invoke a function asynchronously, you don't wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors and can send invocation records to a downstream resource to chain together components of your application. The method of invocation has no bearing on the Lambda function's ability to process the compute-heavy workflows.
Use reserved concurrency to account for the compute-heavy workflows
Use provisioned concurrency to account for the compute-heavy workflows
Concurrency is the number of requests that your function is serving at any given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency. The type of concurrency has no bearing on the Lambda function's ability to process the compute-heavy workflows. So both these options are incorrect.
Reference:
Question 29: Skipped
A development team uses shared Amazon S3 buckets to upload files. Due to this shared access, objects in S3 buckets have different owners making it difficult to manage the objects.
As a developer associate, which of the following would you suggest to automatically make the S3 bucket owner, also the owner of all objects in the bucket, irrespective of the AWS account used for uploading the objects?
Use S3 Access Analyzer to identify the owners of all objects and change the ownership to the bucket owner
Use S3 CORS to make the S3 bucket owner, the owner of all objects in the bucket
Use Bucket Access Control Lists (ACLs) to control access on S3 bucket and then define its owner
Use S3 Object Ownership to default bucket owner to be the owner of all objects in the bucket
(Correct)
Explanation
Correct option:
Use S3 Object Ownership to default bucket owner to be the owner of all objects in the bucket
S3 Object Ownership is an Amazon S3 bucket setting that you can use to control ownership of new objects that are uploaded to your buckets. By default, when other AWS accounts upload objects to your bucket, the objects remain owned by the uploading account. With S3 Object Ownership, any new objects that are written by other accounts with the bucket-owner-full-control canned access control list (ACL) automatically become owned by the bucket owner, who then has full control of the objects.
S3 Object Ownership has two settings: 1. Object writer – The uploading account will own the object. 2. Bucket owner preferred – The bucket owner will own the object if the object is uploaded with the bucket-owner-full-control canned ACL. Without this setting and canned ACL, the object is uploaded and remains owned by the uploading account.
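The setting can be applied from the S3 console, the CLI, or an API call; a minimal boto3 sketch (the bucket name is a placeholder) follows.

```python
import boto3

s3 = boto3.client("s3")

# "BucketOwnerPreferred": objects uploaded with the bucket-owner-full-control
# canned ACL become owned by the bucket owner.
s3.put_bucket_ownership_controls(
    Bucket="shared-upload-bucket",  # placeholder bucket name
    OwnershipControls={
        "Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]
    },
)
```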
Incorrect options:
Use S3 CORS to make the S3 bucket owner, the owner of all objects in the bucket - Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
Use S3 Access Analyzer to identify the owners of all objects and change the ownership to the bucket owner - Access Analyzer for S3 helps review all buckets that have bucket access control lists (ACLs), bucket policies, or access point policies that grant public or shared access. Access Analyzer for S3 alerts you to buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization.
Use Bucket Access Control Lists (ACLs) to control access on S3 bucket and then define its owner - Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. Bucket ACLs allow you to control access at the bucket level.
None of the above features are useful for the current scenario and hence are incorrect options.
References:
Question 32: Skipped
You are a developer working with the AWS CLI to create Lambda functions that contain environment variables. Your functions will require over 50 environment variables consisting of sensitive information of database table names.
What is the limit on the total size and the number of environment variables you can create for AWS Lambda?
The total size of all environment variables shouldn't exceed 4 KB. There is no limit on the number of variables
(Correct)
The total size of all environment variables shouldn't exceed 8 KB. There is no limit on the number of variables
The total size of all environment variables shouldn't exceed 8 KB. The maximum number of variables that can be created is 50
The total size of all environment variables shouldn't exceed 4 KB. The maximum number of variables that can be created is 35
Explanation
Correct option:
The total size of all environment variables shouldn't exceed 4 KB. There is no limit on the number of variables
An environment variable is a pair of strings that are stored in a function's version-specific configuration. The Lambda runtime makes environment variables available to your code and sets additional environment variables that contain information about the function and invocation request. The total size of all environment variables can't exceed 4 KB. There is no limit defined on the number of variables that can be used.
Incorrect options:
The total size of all environment variables shouldn't exceed 8 KB. The maximum number of variables that can be created is 50 - Incorrect option. The total size of environment variables cannot exceed 4 KB with no restriction on the number of variables.
The total size of all environment variables shouldn't exceed 8 KB. There is no limit on the number of variables - Incorrect option. The total size of environment variables cannot exceed 4 KB with no restriction on the number of variables.
The total size of all environment variables shouldn't exceed 4 KB. The maximum number of variables that can be created is 35 - Incorrect option. The total size of environment variables cannot exceed 4 KB with no restriction on the number of variables.
Reference:
Question 36: Skipped
A company uses Amazon RDS as its database. For improved user experience, it has been decided that a highly reliable fully-managed caching layer has to be configured in front of RDS.
Which of the following is the right choice, keeping in mind that cache content regeneration is a costly activity?
Install Redis on an Amazon EC2 instance
Migrate the database to Amazon Redshift
Implement Amazon ElastiCache Redis in Cluster Mode
(Correct)
Implement Amazon ElastiCache Memcached
Explanation
Correct option:
Implement Amazon ElastiCache Redis in Cluster-Mode - One can leverage ElastiCache for Redis with cluster mode enabled to enhance reliability and availability with little change to your existing workload. Cluster mode comes with the primary benefit of horizontal scaling of your Redis cluster, with almost zero impact on the performance of the cluster.
When building production workloads, you should consider using a configuration with replication, unless you can easily recreate your data. Enabling Cluster-Mode provides a number of additional benefits in scaling your cluster. In short, it allows you to scale in or out the number of shards (horizontal scaling) versus scaling up or down the node type (vertical scaling). This means that Cluster-Mode can scale to very large amounts of storage (potentially 100s of terabytes) across up to 90 shards, whereas a single node can only store as much data in memory as the instance type has capacity for.
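As a rough sketch of provisioning such a cluster with boto3 (identifiers, node type, and shard counts are assumptions), cluster mode is enabled by choosing a cluster-mode parameter group and specifying shards and replicas.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="app-cache",                          # placeholder ID
    ReplicationGroupDescription="Redis cluster-mode cache in front of RDS",
    Engine="redis",
    CacheNodeType="cache.r6g.large",                         # assumed node type
    CacheParameterGroupName="default.redis6.x.cluster.on",   # cluster mode enabled
    NumNodeGroups=3,           # shards (horizontal scaling)
    ReplicasPerNodeGroup=2,    # replicas per shard for reliability
    AutomaticFailoverEnabled=True,
)
```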
Incorrect options:
Install Redis on an Amazon EC2 instance - It is possible to install Redis directly onto Amazon EC2 instance. But, unlike ElastiCache for Redis, which is a managed service, you will need to maintain and manage your Redis installation.
Implement Amazon ElastiCache Memcached - Redis and Memcached are popular, open-source, in-memory data stores. Although they are both easy to use and offer high performance, there are important differences to consider when choosing an engine. Memcached is designed for simplicity while Redis offers a rich set of features that make it effective for a wide range of use cases. Redis offers snapshots, replication, and transactions, which Memcached does not, and hence ElastiCache for Redis is the right choice for our use case.
Migrate the database to Amazon Redshift - Amazon Redshift is a fully managed data warehousing service designed for analytics workloads, while Redis is an in-memory data store. Redshift is not a caching layer and does not address the requirement of a fully-managed cache in front of RDS.
References:
Question 39: Skipped
A new member of your team is working on creating Dead Letter Queue (DLQ) for AWS Lambda functions.
As a Developer Associate, can you help him identify the use cases, wherein AWS Lambda will add a message into a DLQ after being processed? (Select two)
The Lambda function invocation is synchronous
The event fails all processing attempts
(Correct)
The event has been processed successfully
The Lambda function invocation failed only once but succeeded thereafter
The Lambda function invocation is asynchronous
(Correct)
Explanation
Correct option:
The Lambda function invocation is asynchronous - When an asynchronous invocation event exceeds the maximum age or fails all retry attempts, Lambda discards it, or sends it to a dead-letter queue if you have configured one.
The event fails all processing attempts - A dead-letter queue acts the same as an on-failure destination in that it is used when an event fails all processing attempts or expires without being processed.
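Configuring a DLQ is part of the function's asynchronous invocation settings; a minimal boto3 sketch with placeholder names and ARNs is shown below.

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach an SQS queue as the dead-letter queue for failed asynchronous
# invocations of the function (ARN and function name are placeholders).
lambda_client.update_function_configuration(
    FunctionName="order-processor",
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:order-processor-dlq"
    },
)
```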
Incorrect options:
The Lambda function invocation is synchronous - When you invoke a function synchronously, Lambda runs the function and waits for a response. Queues are generally used with asynchronous invocations since queues implement the decoupling feature of various connected systems. It does not make sense to use queues when the calling code will wait on it for a response.
The event has been processed successfully - A successfully processed event is not sent to the dead-letter queue.
The Lambda function invocation failed only once but succeeded thereafter - A successfully processed event is not sent to the dead-letter queue.
Reference:
Question 40: Skipped
As part of their on-boarding, the employees at an IT company need to upload their profile photos in a private S3 bucket. The company wants to build an in-house web application hosted on an EC2 instance that should display the profile photos in a secure way when the employees mark their attendance.
As a Developer Associate, which of the following solutions would you suggest to address this use-case?
Make the S3 bucket public so that the application can reference the image URL for display
Keep each user's profile image encoded in base64 format in a DynamoDB table and reference it from the application for display
Keep each user's profile image encoded in base64 format in an RDS table and reference it from the application for display
Save the S3 key for each user's profile photo in a DynamoDB table and use a lambda function to dynamically generate a pre-signed URL. Reference this URL for display via the web application
(Correct)
Explanation
Correct option:
"Save the S3 key for each user's profile photo in a DynamoDB table and use a lambda function to dynamically generate a pre-signed URL. Reference this URL for display via the web application"
On Amazon S3, all objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects.
You can also use an IAM instance profile to create a pre-signed URL. When you create a pre-signed URL for your object, you must provide your security credentials and specify a bucket name, an object key, the HTTP method (GET to download the object), and an expiration date and time. The pre-signed URLs are valid only for the specified duration. So for the given use-case, the object key can be retrieved from the DynamoDB table, and then the application can generate the pre-signed URL using the IAM instance profile.
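A hedged sketch of the Lambda side of this design is shown below: the function looks up the S3 key (the DynamoDB retrieval is stubbed out) and returns a time-limited pre-signed URL; bucket name and event shape are assumptions.

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "employee-profile-photos"  # placeholder private bucket

def lambda_handler(event, context):
    # In the real application the key would be fetched from DynamoDB using
    # the employee ID in the request; taken directly from the event here.
    object_key = event["s3_key"]

    # Time-limited (15 minutes) URL that the web application can embed
    # directly in an <img> tag without making the bucket public.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": object_key},
        ExpiresIn=900,
    )
    return {"photo_url": url}
```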
Incorrect options:
"Make the S3 bucket public so that the application can reference the image URL for display" - Making the S3 bucket public would violate the security and privacy requirements for the use-case, so this option is incorrect.
"Keep each user's profile image encoded in base64 format in a DynamoDB table and reference it from the application for display"
"Keep each user's profile image encoded in base64 format in an RDS table and reference it from the application for display"
It's a bad practice to keep the raw image data in the database itself. Also, it would not be possible to create a secure access URL for the image without a significant development effort. Hence both these options are incorrect.
Reference:
Question 42: Skipped
Your company is planning to move away from reserving EC2 instances and would like to adopt a more agile form of serverless architecture.
Which of the following is the simplest and the least effort way of deploying the Docker containers on this serverless architecture?
Amazon Elastic Container Service (Amazon ECS) on Fargate
(Correct)
AWS Elastic Beanstalk
Amazon Elastic Kubernetes Service (Amazon EKS) on Fargate
Amazon Elastic Container Service (Amazon ECS) on EC2
Explanation
Correct option:
Amazon Elastic Container Service (Amazon ECS) on Fargate - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. You can host your cluster on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks using the Fargate launch type.
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
Incorrect options:
Amazon Elastic Container Service (Amazon ECS) on EC2 - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. With the EC2 launch type, you provision and manage the EC2 instances that form the cluster, so this cannot be called a serverless solution.
Amazon Elastic Kubernetes Service (Amazon EKS) on Fargate - Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. You can choose to run your EKS clusters using AWS Fargate, which is a serverless compute for containers. Since the use-case talks about the simplest and the least effort way to deploy Docker containers, EKS is not the best fit as you can use ECS Fargate to build a much easier solution. EKS is better suited to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane or worker nodes.
AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. Beanstalk uses EC2 instances for its deployment, hence cannot be called a serverless architecture.
References:
Question 46: Skipped
A company has more than 100 million members worldwide enjoying 125 million hours of TV shows and movies each day. The company uses AWS for nearly all its computing and storage needs, which use more than 10,000 server instances on AWS. This results in an extremely complex and dynamic networking environment where applications are constantly communicating inside AWS and across the Internet. Monitoring and optimizing its network is critical for the company.
The company needs a solution for ingesting and analyzing the multiple terabytes of real-time data its network generates daily in the form of flow logs. Which technology/service should the company use to ingest this data economically and has the flexibility to direct this data to other downstream systems?
AWS Glue
Amazon Kinesis Data Streams
(Correct)
Amazon Simple Queue Service (SQS)
Amazon Kinesis Firehose
Explanation
Correct option:
Amazon Kinesis Data Streams
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.
Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering).
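Producers can push flow-log records into the stream with a call like the boto3 sketch below (the stream name and record fields are assumptions); consumers such as KCL applications or a Firehose delivery stream can then read the same data downstream.

```python
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_flow_log(record: dict, stream_name: str = "network-flow-logs"):
    """Push one flow-log record into a Kinesis data stream."""
    kinesis.put_record(
        StreamName=stream_name,                     # placeholder stream name
        Data=json.dumps(record).encode("utf-8"),
        # Records with the same partition key land on the same shard,
        # preserving per-key ordering.
        PartitionKey=record.get("interface_id", "unknown"),
    )
```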
Incorrect options:
Amazon Simple Queue Service (SQS) - Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. AWS recommends using Amazon SQS for cases where individual message success/failure is important, message delays are needed, and there is only one consumer for the messages received (if more than one consumer needs to consume the message, AWS suggests configuring additional queues).
Amazon Kinesis Firehose - Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
Kinesis Data Streams is highly customizable and best suited for developers building custom applications or streaming data for specialized needs. Data Streams also provides greater flexibility than Firehose in integrating downstream applications, and can be a more cost-effective option. Therefore, KDS is the right solution.
AWS Glue - AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all of the capabilities needed for data integration so that you can start analyzing your data and putting it to use in minutes instead of months. Glue is not best suited to handle real-time data.
References:
Question 47: Skipped
A media company uses Amazon Simple Queue Service (SQS) queue to manage their transactions. With changing business needs, the payload size of the messages is increasing. The Team Lead of the project is worried about the 256 KB message size limit that SQS has.
What can be done to make the queue accept messages of a larger size?
Use the MultiPart API
Get a service limit increase from AWS
Use the SQS Extended Client
(Correct)
Use gzip compression
Explanation
Correct option:
Use the SQS Extended Client - To manage large Amazon Simple Queue Service (Amazon SQS) messages, you can use Amazon Simple Storage Service (Amazon S3) and the Amazon SQS Extended Client Library for Java. This is especially useful for storing and consuming messages up to 2 GB. Unless your application requires repeatedly creating queues and leaving them inactive or storing large amounts of data in your queues, consider using Amazon S3 for storing your data.
Incorrect options:
Use the MultiPart API - This is an incorrect statement. There is no multi-part API for Amazon Simple Queue Service.
Get a service limit increase from AWS - While it is possible to get service limits extended for certain AWS services, AWS already offers Extended Client to deal with queues that have larger messages.
Use gzip compression - You can compress messages before sending them to the queue, but the compressed payload must then be encoded to meet SQS message requirements, which offsets part of the gains, and compression cannot guarantee that every payload fits within the 256 KB limit. So this is not an optimal solution for the current scenario.
Reference:
Question 48: Skipped
A leading financial services company offers data aggregation services for Wall Street trading firms. The company bills its clients based on per unit of clickstream data provided to the clients. As the company operates in a regulated industry, it needs to have the same ordered clickstream data available for auditing within a window of 7 days.
As a Developer Associate, which of the following AWS services do you think provides the ability to run the billing process and auditing process on the given clickstream data in the same order?
AWS Kinesis Data Firehose
Amazon SQS
AWS Kinesis Data Streams
(Correct)
AWS Kinesis Data Analytics
Explanation
Correct option:
AWS Kinesis Data Streams
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.
Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering). Amazon Kinesis Data Streams is recommended when you need the ability to consume records in the same order a few hours later.
For example, you have a billing application and an audit application that runs a few hours behind the billing application. By default, records of a stream are accessible for up to 24 hours from the time they are added to the stream. You can raise this limit to a maximum of 365 days. For the given use-case, Amazon Kinesis Data Streams can be configured to store data for up to 7 days and you can run the audit application up to 7 days behind the billing application.
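The retention window is a stream-level setting; extending it to 7 days (168 hours) could look like the boto3 sketch below, with a placeholder stream name.

```python
import boto3

kinesis = boto3.client("kinesis")

# Keep clickstream records for 7 days so the audit application can replay
# them in the same order, well after the billing application has run.
kinesis.increase_stream_retention_period(
    StreamName="clickstream",     # placeholder stream name
    RetentionPeriodHours=168,     # 7 days (the default is 24 hours)
)
```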
Incorrect options:
AWS Kinesis Data Firehose - Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. As Kinesis Data Firehose is used to load streaming data into data stores, therefore this option is incorrect.
AWS Kinesis Data Analytics - Amazon Kinesis Data Analytics is the easiest way to analyze streaming data in real-time. You can quickly build SQL queries and sophisticated Java applications using built-in templates and operators for common processing functions to organize, transform, aggregate, and analyze data at any scale. Kinesis Data Analytics enables you to easily and quickly build queries and sophisticated streaming applications in three simple steps: setup your streaming data sources, write your queries or streaming applications and set up your destination for processed data. As Kinesis Data Analytics is used to build SQL queries and sophisticated Java applications, therefore this option is incorrect.
Amazon SQS - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent. For SQS, you cannot have the same message being consumed by multiple consumers in the same order a few hours later, therefore this option is incorrect.
References:
Question 53: Skipped
Recently, you started an online learning platform using AWS Lambda and Amazon API Gateway. Your first version was successful, and you began developing new features for the second version. You would like to gradually introduce the second version by routing only 10% of the incoming traffic to the new Lambda version.
Which solution should you opt for?
Use Tags to distinguish the different versions
Use environment variables
Use AWS Lambda aliases
(Correct)
Deploy your Lambda in a VPC
Explanation
Correct option:
Use AWS Lambda aliases - A Lambda alias is like a pointer to a specific Lambda function version. You can create one or more aliases for your AWS Lambda function. Users can access the function version using the alias ARN. An alias can only point to a function version, not to another alias. You can update an alias to point to a new version of the function. Event sources such as Amazon S3 invoke your Lambda function. These event sources maintain a mapping that identifies the function to invoke when events occur. If you specify a Lambda function alias in the mapping configuration, you don't need to update the mapping when the function version changes. This is the right choice for the current requirement.
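A hedged boto3 sketch of such a canary setup, assuming the alias is named "live" and versions 1 and 2 of the function already exist, is shown below.

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep the "live" alias pointing at version 1, but send 10% of the
# invocations to version 2 (the function name, alias, and versions are assumptions).
lambda_client.update_alias(
    FunctionName="course-catalog-api",
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)
```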
Incorrect options:
Use Tags to distinguish the different versions - You can tag Lambda functions to organize them by owner, project or department. Tags are freeform key-value pairs that are supported across AWS services for use in filtering resources and adding detail to billing reports. This does not address the given use-case.
Use environment variables - You can use environment variables to store secrets securely and adjust your function's behavior without updating code. An environment variable is a pair of strings that are stored in a function's version-specific configuration. The Lambda runtime makes environment variables available to your code and sets additional environment variables that contain information about the function and invocation request. For example, you can use environment variables to point to test, development or production databases by passing it as an environment variable during runtime. This option does not address the given use-case.
Deploy your Lambda in a VPC - Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you've defined. This adds another layer of security for your entire architecture. Not the right choice for the given scenario.
References:
Question 56: Skipped
Your company uses an Application Load Balancer to route incoming end-user traffic to applications hosted on Amazon EC2 instances. The applications capture incoming request information and store it in the Amazon Relational Database Service (RDS) running on Microsoft SQL Server DB engines.
As part of new compliance rules, you need to capture the client's IP address. How will you achieve this?
You can get the Client IP addresses from Elastic Load Balancing logs
Use the header X-Forwarded-From
Use the header X-Forwarded-For
(Correct)
You can get the Client IP addresses from server access logs
Explanation
Correct option:
Use the header X-Forwarded-For - The X-Forwarded-For request header helps you identify the IP address of a client when you use an HTTP or HTTPS load balancer. Because load balancers intercept traffic between clients and servers, your server access logs contain only the IP address of the load balancer. To see the IP address of the client, use the X-Forwarded-For request header. Elastic Load Balancing stores the IP address of the client in the X-Forwarded-For request header and passes the header to your server.
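On the application side, the client IP is the left-most entry in the header; a minimal, framework-agnostic parsing sketch (the header dict and fallback key are assumptions) follows.

```python
def client_ip_from_headers(headers: dict) -> str:
    """Return the original client IP from an ALB-forwarded request.

    X-Forwarded-For is a comma-separated list; the left-most entry is the
    client, later entries are intermediate proxies/load balancers.
    """
    forwarded_for = headers.get("X-Forwarded-For", "")
    if forwarded_for:
        return forwarded_for.split(",")[0].strip()
    return headers.get("Remote-Addr", "unknown")  # fallback, assumption

# Example:
# client_ip_from_headers({"X-Forwarded-For": "203.0.113.7, 10.0.0.21"})
# returns "203.0.113.7"
```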
Incorrect options:
You can get the Client IP addresses from server access logs - As discussed above, Load Balancers intercept traffic between clients and servers, so server access logs will contain only the IP address of the load balancer.
Use the header X-Forwarded-From - This is a made-up option and given as a distractor.
You can get the Client IP addresses from Elastic Load Balancing logs - Elastic Load Balancing logs requests sent to the load balancer, including requests that never made it to the targets. For example, if a client sends a malformed request, or there are no healthy targets to respond to the request, the request is still logged. So, this is not the right option if we wish to collect the IP addresses of the clients that have access to the instances.
References:
Question 58: Skipped
You have a three-tier web application consisting of a web layer using AngularJS, an application layer using an AWS API Gateway and a data layer in an Amazon Relational Database Service (RDS) database. Your web application allows visitors to look up popular movies from the past. The company is looking at reducing the number of calls made to the endpoint and improving the latency of requests to the API.
What can you do to improve performance?
Use Mapping Templates
Enable API Gateway Caching
(Correct)
Use Amazon Kinesis Data Streams to stream incoming data and reduce the burden on Gateway APIs
Use Stage Variables
Explanation
Correct option:
Enable API Gateway Caching - You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint. The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.
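Caching is enabled per stage; a hedged boto3 sketch using patch operations on an existing stage (the API ID, stage name, and cache size are assumptions) is shown below.

```python
import boto3

apigateway = boto3.client("apigateway")

# Enable a 0.5 GB cache cluster on the "prod" stage of an existing REST API
# (IDs and names are placeholders).
apigateway.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
    ],
)
```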
Incorrect options:
Use Mapping Templates - A mapping template is a script expressed in Velocity Template Language (VTL) and applied to the payload using JSONPath expressions. Mapping templates help format/structure the data in a way that is easily readable, unlike a raw server response that might not always be easy to read. Mapping Templates do not help with latency issues of the APIs.
Use Stage Variables - Stage variables act like environment variables and can be used to change the behavior of your API Gateway methods for each deployment stage; for example, making it possible to reach a different back end depending on which stage the API is running on. Stage variables do not help in latency issues.
Use Amazon Kinesis Data Streams to stream incoming data and reduce the burden on Gateway APIs - Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
Reference:
Question 59: Skipped
A company wants to automate and orchestrate a multi-source high-volume flow of data in a scalable data management solution built using AWS services. The solution must ensure that the business rules and transformations run in sequence, handle reprocessing of data in case of errors, and require minimal maintenance.
Which AWS service should the company use to manage and automate the orchestration of the data flows?
AWS Step Functions
(Correct)
Amazon Kinesis Data Streams
AWS Batch
AWS Glue
Explanation
Correct option:
AWS Step Functions
AWS Step Functions is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.
Incorrect options:
Amazon Kinesis Data Streams - Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale.
AWS Glue - AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.
AWS Batch - AWS Batch is a set of batch management capabilities that enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized compute resources) based on the volume and specific resource requirements of the batch jobs submitted.
References:
Question 60: Skipped
You have migrated an on-premise SQL Server database to an Amazon Relational Database Service (RDS) database attached to a VPC inside a private subnet. Also, the related Java application, hosted on-premise, has been moved to an AWS Lambda function.
Which of the following should you implement to connect AWS Lambda function to its RDS instance?
Use Environment variables to pass in the RDS connection string
Use Lambda layers to connect to the internet and RDS separately
Configure Lambda to connect to VPC with private subnet and Security Group needed to access RDS
(Correct)
Configure lambda to connect to the public subnet that will give internet access and use Security Group to access RDS inside the private subnet
Explanation
Correct option:
Configure Lambda to connect to VPC with private subnet and Security Group needed to access RDS - You can configure a Lambda function to connect to private subnets in a virtual private cloud (VPC) in your account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a private network for resources such as databases, cache instances, or internal services. Connect your lambda function to the VPC to access private resources during execution. When you connect a function to a VPC, Lambda creates an elastic network interface for each combination of the security group and subnet in your function's VPC configuration. This is the right way of giving RDS access to Lambda.
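A minimal boto3 sketch of attaching a function to private subnets and a security group that the RDS security group trusts (all IDs and names are placeholders) could look like this.

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to the VPC's private subnets and to a security group
# that the RDS security group allows inbound access from on the database port.
lambda_client.update_function_configuration(
    FunctionName="orders-api",                                # placeholder
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],  # placeholders
        "SecurityGroupIds": ["sg-0ccc3333"],                  # placeholder
    },
)
```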
Incorrect options:
Use Lambda layers to connect to the internet and RDS separately - You can configure your Lambda function to pull in additional code and content in the form of layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. Layers will not help in configuring access to RDS instance and hence is an incorrect choice.
Configure lambda to connect to the public subnet that will give internet access and use the Security Group to access RDS inside the private subnet - This is an incorrect statement. Connecting a Lambda function to a public subnet does not give it internet access or a public IP address. To grant internet access to your function, its associated VPC must have a NAT gateway (or NAT instance) in a public subnet.
Use Environment variables to pass in the RDS connection string - You can use environment variables to store secrets securely and adjust your function's behavior without updating code. You can use environment variables to exchange data with RDS, but you will still need access to RDS, which is not possible with just environment variables.
References:
Question 61: Skipped
The development team at a company wants to encrypt a 111 GB object using AWS KMS.
Which of the following represents the best solution?
Make a GenerateDataKey API call that returns a plaintext key and an encrypted copy of a data key. Use a plaintext key to encrypt the data
(Correct)
Make a GenerateDataKeyWithPlaintext API call that returns an encrypted copy of a data key. Use a plaintext key to encrypt the data
Make an Encrypt API call to encrypt the plaintext data as ciphertext using a customer master key (CMK) with imported key material
Make a GenerateDataKeyWithoutPlaintext API call that returns an encrypted copy of a data key. Use an encrypted key to encrypt the data
Explanation
Correct option:
Make a GenerateDataKey API call that returns a plaintext key and an encrypted copy of a data key. Use a plaintext key to encrypt the data - The GenerateDataKey API generates a unique symmetric data key for client-side encryption. This operation returns a plaintext copy of the data key and a copy that is encrypted under a customer master key (CMK) that you specify. You can use the plaintext key to encrypt your data outside of AWS KMS and store the encrypted data key with the encrypted data.
GenerateDataKey returns a unique data key for each request. The bytes in the plaintext key are not related to the caller or the CMK.
To encrypt data outside of AWS KMS:
Use the GenerateDataKey operation to get a data key.
Use the plaintext data key (in the Plaintext field of the response) to encrypt your data outside of AWS KMS. Then erase the plaintext data key from memory.
Store the encrypted data key (in the CiphertextBlob field of the response) with the encrypted data.
To decrypt data outside of AWS KMS:
Use the Decrypt operation to decrypt the encrypted data key. The operation returns a plaintext copy of the data key.
Use the plaintext data key to decrypt data outside of AWS KMS, then erase the plaintext data key from memory.
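A hedged end-to-end sketch of this flow, using boto3 for the KMS calls and the third-party cryptography package for the client-side AES-GCM step (the key alias is an assumption), is shown below.

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

kms = boto3.client("kms")
KEY_ID = "alias/app-data-key"  # placeholder CMK alias

def encrypt_large_object(plaintext: bytes) -> dict:
    # 1. Get a data key: a plaintext copy plus a copy encrypted under the CMK.
    data_key = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")

    # 2. Encrypt the data locally with the plaintext key, then let it go out of scope.
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, plaintext, None)

    # 3. Store the encrypted data key alongside the encrypted data.
    return {"ciphertext": ciphertext, "nonce": nonce,
            "encrypted_key": data_key["CiphertextBlob"]}

def decrypt_large_object(blob: dict) -> bytes:
    # 1. Ask KMS to decrypt the stored data key.
    plaintext_key = kms.decrypt(CiphertextBlob=blob["encrypted_key"])["Plaintext"]
    # 2. Decrypt the data locally with the recovered data key.
    return AESGCM(plaintext_key).decrypt(blob["nonce"], blob["ciphertext"], None)
```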
Incorrect options:
Make a GenerateDataKeyWithPlaintext API call that returns an encrypted copy of a data key. Use a plaintext key to encrypt the data - This is a made-up option, given only as a distractor.
Make an Encrypt API call to encrypt the plaintext data as ciphertext using a customer master key (CMK) with imported key material - The Encrypt API is used to encrypt plaintext into ciphertext by using a customer master key (CMK). The Encrypt operation has two primary use cases:
To encrypt small amounts of arbitrary data, such as a personal identifier or database password, or other sensitive information.
To move encrypted data from one AWS Region to another.
Neither of the two is useful for the given scenario.
Make a GenerateDataKeyWithoutPlaintext API call that returns an encrypted copy of a data key. Use an encrypted key to encrypt the data - The GenerateDataKeyWithoutPlaintext API generates a unique symmetric data key. This operation returns a data key that is encrypted under a customer master key (CMK) that you specify.
GenerateDataKeyWithoutPlaintext is identical to the GenerateDataKey operation except that it returns only the encrypted copy of the data key. This operation is useful for systems that need to encrypt data at some point, but not immediately. When you need to encrypt the data, you call the Decrypt operation on the encrypted copy of the key.
References:
Question 63: Skipped
A development team has created a new IAM user that has s3:PutObject permission to write to an S3 bucket. This S3 bucket uses server-side encryption with AWS KMS managed keys (SSE-KMS) as the default encryption. Using the access key ID and the secret access key of the IAM user, the application received an access denied error when calling the PutObject API.
As a Developer Associate, how would you resolve this issue?
Correct the bucket policy of the S3 bucket to allow the IAM user to upload encrypted objects
Correct the policy of the IAM user to allow the s3:Encrypt action
Correct the policy of the IAM user to allow the kms:GenerateDataKey action
(Correct)
Correct the ACL of the S3 bucket to allow the IAM user to upload encrypted objects
Explanation
Correct option:
Correct the policy of the IAM user to allow the kms:GenerateDataKey action - You can protect data at rest in Amazon S3 by using three different modes of server-side encryption: SSE-S3, SSE-C, or SSE-KMS. SSE-KMS requires that AWS manage the data key but you manage the customer master key (CMK) in AWS KMS. You can choose a customer managed CMK or the AWS managed CMK for Amazon S3 in your account. If you choose to encrypt your data using the standard features, AWS KMS and Amazon S3 perform the following actions:
Amazon S3 requests a plaintext data key and a copy of the key encrypted under the specified CMK.
AWS KMS generates a data key, encrypts it under the CMK, and sends both the plaintext data key and the encrypted data key to Amazon S3.
Amazon S3 encrypts the data using the data key and removes the plaintext key from memory as soon as possible after use.
Amazon S3 stores the encrypted data key as metadata with the encrypted data.
The error message indicates that your IAM user or role needs permission for the kms:GenerateDataKey action. This permission is required for buckets that use default encryption with a custom AWS KMS key.
In the JSON policy documents, look for policies related to AWS KMS access. Review statements with "Effect": "Allow" to check if the user or role has permissions for the kms:GenerateDataKey action on the bucket's AWS KMS key. If this permission is missing, then add the permission to the appropriate policy.
In the JSON policy documents, look for statements with "Effect": "Deny". Then, confirm that those statements don't deny the s3:PutObject action on the bucket. The statements must also not deny the IAM user or role access to the kms:GenerateDataKey action on the key used to encrypt the bucket. Additionally, make sure the necessary KMS and S3 permissions are not restricted using a VPC endpoint policy, service control policy, permissions boundary, or session policy.
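An illustrative statement granting the missing permission is shown below, written as a Python dict for consistency with the other examples; the key ARN is a placeholder, and kms:Decrypt is included because it is commonly also required, for example for multipart uploads.

```python
# Illustrative only: permissions the IAM user needs on the bucket's KMS key
# (the key ARN is a placeholder).
kms_access_statement = {
    "Effect": "Allow",
    "Action": [
        "kms:GenerateDataKey",  # required to upload to an SSE-KMS bucket
        "kms:Decrypt",          # commonly also needed, e.g. for multipart uploads
    ],
    "Resource": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
}
```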
Incorrect options:
Correct the policy of the IAM user to allow the s3:Encrypt action - This is an invalid action given only as a distractor.
Correct the bucket policy of the S3 bucket to allow the IAM user to upload encrypted objects - The user already has access to the bucket. What the user lacks is permission for the kms:GenerateDataKey action on the KMS key, which is mandatory when a bucket uses default encryption with SSE-KMS.
Correct the ACL of the S3 bucket to allow the IAM user to upload encrypted objects - Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. ACL is another way of giving access to S3 bucket objects. Permissions to use KMS keys will still be needed.
References:
Question 64: Skipped
To meet compliance guidelines, a company needs to ensure replication of any data stored in its S3 buckets.
Which of the following characteristics are correct while configuring an S3 bucket for replication? (Select two)
Once replication is enabled on a bucket, all old and new objects will be replicated
S3 lifecycle actions are not replicated with S3 replication
(Correct)
Same-Region Replication (SRR) and Cross-Region Replication (CRR) can be configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags
(Correct)
Replicated objects do not retain metadata
Object tags cannot be replicated across AWS Regions using Cross-Region Replication
Explanation
Correct options:
Same-Region Replication (SRR) and Cross-Region Replication (CRR) can be configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags - Amazon S3 Replication (CRR and SRR) is configured at the S3 bucket level, a shared prefix level, or an object level using S3 object tags. You add a replication configuration on your source bucket by specifying a destination bucket in the same or different AWS region for replication.
S3 lifecycle actions are not replicated with S3 replication - With S3 Replication (CRR and SRR), you can establish replication rules to make copies of your objects into another storage class, in the same or a different region. Lifecycle actions are not replicated, and if you want the same lifecycle configuration applied to both source and destination buckets, enable the same lifecycle configuration on both.
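As an illustration, here is a minimal boto3 sketch of adding a replication rule scoped to a shared prefix. The bucket names, prefix, and IAM role ARN are placeholders, and both buckets are assumed to already have versioning enabled (a prerequisite for replication).

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket-name",  # placeholder source bucket
    ReplicationConfiguration={
        # IAM role that S3 assumes to replicate objects on your behalf
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-reports-prefix",
                "Status": "Enabled",
                "Priority": 1,
                # Scope the rule to a shared prefix (an object tag filter also works)
                "Filter": {"Prefix": "reports/"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                # Destination can be in the same Region (SRR) or another Region (CRR)
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket-name"},
            }
        ],
    },
)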
Incorrect options:
Object tags cannot be replicated across AWS Regions using Cross-Region Replication - Object tags can be replicated across AWS Regions using Cross-Region Replication. For customers with Cross-Region Replication already enabled, new permissions are required for tags to replicate.
Once replication is enabled on a bucket, all old and new objects will be replicated - Replication only replicates the objects added to the bucket after replication is enabled on the bucket. Any objects present in the bucket before enabling replication are not replicated.
Replicated objects do not retain metadata - You can use replication to make copies of your objects that retain all metadata, such as the original object creation time and version IDs. This capability is important if you need to ensure that your replica is identical to the source object.
Reference:
Question 2: Skipped
Your mobile application needs to perform API calls to DynamoDB. You do not want to store AWS secret and access keys onto the mobile devices and need all the calls to DynamoDB made with a different identity per mobile device.
Which of the following services allows you to achieve this?
Cognito Identity Pools
(Correct)
Cognito User Pools
Cognito Sync
IAM
Explanation
Correct option:
"Cognito Identity Pools"
Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. Identity pools provide AWS credentials to grant your users access to other AWS services.
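A rough sketch of how a client could exchange an identity pool identity for temporary AWS credentials using boto3; the identity pool ID is a placeholder and the unauthenticated (guest) flow is assumed for brevity.

import boto3

# Placeholder identity pool ID, for illustration only
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Each device gets its own identity ID (guest/unauthenticated flow assumed here)
identity = cognito.get_id(IdentityPoolId=IDENTITY_POOL_ID)

# Exchange the identity for temporary, scoped AWS credentials
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])
c = creds["Credentials"]

# Call DynamoDB with the temporary credentials instead of stored long-lived keys
dynamodb = boto3.client(
    "dynamodb",
    region_name="us-east-1",
    aws_access_key_id=c["AccessKeyId"],
    aws_secret_access_key=c["SecretKey"],
    aws_session_token=c["SessionToken"],
)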
Incorrect options:
"Cognito User Pools" - AWS Cognito User Pools is there to authenticate users for your applications which looks similar to Cognito Identity Pools. The difference is that Identity Pools allows a way to authorize your users to use the various AWS services and User Pools is not about authorizing to AWS services but to provide add sign-up and sign-in functionality to web and mobile applications.
"Cognito Sync" - You can use it to synchronize user profile data across mobile devices and the web without requiring your own backend. The client libraries cache data locally so your app can read and write data regardless of device connectivity status.
"IAM" - This is not a good solution because it would require you to have an IAM user for each mobile device which is not a good practice or manageable way of handling deployment.
Exam Alert:
Reference:
Question 3: Skipped
Your company has been hired to build a resilient mobile voting app for an upcoming music award show that expects to have 5 to 20 million viewers. The mobile voting app will be marketed heavily months in advance so you are expected to handle millions of messages in the system. You are configuring Amazon Simple Queue Service (SQS) queues for your architecture that should receive messages from 20 KB to 200 KB.
Is it possible to send these messages to SQS?
Yes, the max message size is 512KB
Yes, the max message size is 256KB
(Correct)
No, the max message size is 128KB
No, the max message size is 64KB
Explanation
Correct option:
Yes, the max message size is 256KB
The minimum message size is 1 byte (1 character). The maximum is 262,144 bytes (256 KB).
Incorrect options:
Yes, the max message size is 512KB - The max size is 256KB
No, the max message size is 128KB - The max size is 256KB
No, the max message size is 64KB - The max size is 256KB
Reference:
Question 6: Skipped
Your company manages MySQL databases on EC2 instances to have full control. Applications on other EC2 instances managed by an ASG make requests to these databases to get information that displays data on dashboards viewed on mobile phones, tablets, and web browsers.
Your manager would like to scale your Auto Scaling group based on the number of requests per minute. How can you achieve this?
Attach additional Elastic File Storage
You enable detailed monitoring and use that to scale your ASG
You create a CloudWatch custom metric and build an alarm to scale your ASG
(Correct)
Attach an Elastic Load Balancer
Explanation
Correct option:
You create a CloudWatch custom metric and build an alarm to scale your ASG
Here we need to scale on the metric "number of requests per minute", which is a custom metric we need to create, as it's not readily available in CloudWatch.
Metrics produced by AWS services are standard resolution by default. When you publish a custom metric, you can define it as either standard resolution or high resolution. When you publish a high-resolution metric, CloudWatch stores it with a resolution of 1 second, and you can read and retrieve it with a period of 1 second, 5 seconds, 10 seconds, 30 seconds, or any multiple of 60 seconds.
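For illustration, a minimal sketch of the two halves of this setup with boto3: the application publishes a hypothetical RequestsPerMinute custom metric, and an alarm on that metric triggers an existing scaling policy. The namespace, metric name, threshold, and policy ARN are all placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

# 1) The application publishes the custom metric (e.g., once per minute)
cloudwatch.put_metric_data(
    Namespace="MyApp",                      # hypothetical namespace
    MetricData=[{
        "MetricName": "RequestsPerMinute",  # hypothetical custom metric
        "Value": 4200,
        "Unit": "Count",
    }],
)

# 2) An alarm on the custom metric invokes a scaling policy attached to the ASG
cloudwatch.put_metric_alarm(
    AlarmName="scale-out-on-requests",
    Namespace="MyApp",
    MetricName="RequestsPerMinute",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=5000,
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder ARN of a scaling policy on the Auto Scaling group
    AlarmActions=["arn:aws:autoscaling:us-east-1:111122223333:scalingPolicy:EXAMPLE"],
)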
Incorrect options:
Attach an Elastic Load Balancer - This is not what you need for auto scaling. An Elastic Load Balancer distributes workloads across multiple compute resources and checks the health of your instances, but it does not automatically increase or decrease the number of instances based on application demand.
Attach additional Elastic File Storage - This is a file storage service designed for performance. Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. This cannot be used to facilitate auto-scaling.
You enable detailed monitoring and use that to scale your ASG - The detailed monitoring metrics won't provide information about database /application-level requests per minute, so this option is not correct.
Reference:
Question 11: Skipped
The development team at a company wants to insert vendor records into an Amazon DynamoDB table as soon as the vendor uploads a new file into an Amazon S3 bucket.
As a Developer Associate, which set of steps would you recommend to achieve this?
Create an S3 event to invoke a Lambda function that inserts records into DynamoDB
(Correct)
Develop a Lambda function that will poll the S3 bucket and then insert the records into DynamoDB
Set up an event with Amazon CloudWatch Events that will monitor the S3 bucket and then insert the records into DynamoDB
Write a cron job that will execute a Lambda function at a scheduled time and insert the records into DynamoDB
Explanation
Correct option:
Create an S3 event to invoke a Lambda function that inserts records into DynamoDB
The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration that identifies the events you want Amazon S3 to publish and the destinations where you want Amazon S3 to send the notifications. You store this configuration in the notification subresource that is associated with a bucket.
Amazon S3 APIs such as PUT, POST, and COPY can create an object. Using these event types, you can enable notification when an object is created using a specific API, or you can use the s3:ObjectCreated:* event type to request notification regardless of the API that was used to create an object.
For the given use-case, you would create an S3 event notification that triggers a Lambda function whenever there is a PUT object operation in the S3 bucket. The Lambda function in turn would execute custom code to insert records into DynamoDB.
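A simplified sketch of what such a Lambda handler might look like; the table name and item attributes are assumptions for illustration, and real code would likely download and parse the uploaded file before writing items.

import boto3

# Hypothetical table name, for illustration only
table = boto3.resource("dynamodb").Table("VendorRecords")

def lambda_handler(event, context):
    # An S3 event notification can contain one or more records
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Insert a record pointing at the uploaded file
        table.put_item(Item={"vendor_file": key, "bucket": bucket})

    return {"processed": len(event["Records"])}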
Incorrect options:
Write a cron job that will execute a Lambda function at a scheduled time and insert the records into DynamoDB - This is not efficient because there may not be any unprocessed file in the S3 bucket when the cron triggers the Lambda on schedule. So this is not the correct option.
Set up an event with Amazon CloudWatch Events that will monitor the S3 bucket and then insert the records into DynamoDB - The CloudWatch event cannot directly insert records into DynamoDB as it's not a supported target type. The CloudWatch event needs to use something like a Lambda function to insert the records into DynamoDB.
Develop a Lambda function that will poll the S3 bucket and then insert the records into DynamoDB - This is not efficient because there may not be any unprocessed file in the S3 bucket when the Lambda function polls the S3 bucket at a given time interval. So this is not the correct option.
Reference:
Question 13: Skipped
A financial services company with over 10,000 employees has hired you as the new Senior Developer. Initially caching was enabled to reduce the number of calls made to all API endpoints and improve the latency of requests to the company’s API Gateway.
For testing purposes, you would like to invalidate caching for the API clients to get the most recent responses. Which of the following should you do?
Using the Header Cache-Control: max-age=0
(Correct)
Use the Request parameter: ?bypass_cache=1
Using the request parameter ?cache-control-max-age=0
Using the Header Bypass-Cache=1
Explanation
Correct option:
Using the Header Cache-Control: max-age=0
A client of your API can invalidate an existing cache entry and reload it from the integration endpoint for individual requests. The client must send a request that contains the Cache-Control: max-age=0 header. The client receives the response directly from the integration endpoint instead of the cache, provided that the client is authorized to do so. This replaces the existing cache entry with the new response, which is fetched from the integration endpoint.
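For example, a client could bypass the cache for a single request roughly like this, using only Python's standard library; the invoke URL is a placeholder.

import urllib.request

# Placeholder API Gateway invoke URL
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/orders"

req = urllib.request.Request(url)
# Ask API Gateway to invalidate the cache entry and fetch a fresh response
req.add_header("Cache-Control", "max-age=0")

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:200])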
Incorrect options:
Use the Request parameter: ?bypass_cache=1 - API Gateway method parameters can include query strings, but this is not a supported way to invalidate the cache.
Using the Header Bypass-Cache=1 - This is a made-up option.
Using the request parameter ?cache-control-max-age=0 - Invalidating the cache requires the Cache-Control header, not a request parameter.
Reference:
Question 16: Skipped
You work as a developer doing contract work for the government on AWS GovCloud. Your applications use Amazon Simple Queue Service (SQS) as their message queue service. Due to recent hacking attempts, security measures have become stricter and require you to store data in encrypted queues.
Which of the following steps can you take to meet your requirements without making changes to the existing code?
Enable SQS KMS encryption
(Correct)
Use Secrets Manager
Use the SSL endpoint
Use Client side encryption
Explanation
Correct option:
Enable SQS KMS encryption
Server-side encryption (SSE) lets you transmit sensitive data in encrypted queues. SSE protects the contents of messages in queues using keys managed in AWS Key Management Service (AWS KMS).
AWS KMS combines secure, highly available hardware and software to provide a key management system scaled for the cloud. When you use Amazon SQS with AWS KMS, the data keys that encrypt your message data are also encrypted and stored with the data they protect.
You can choose to have SQS encrypt messages stored in both Standard and FIFO queues using an encryption key provided by AWS Key Management Service (KMS).
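A minimal sketch of turning on SSE-KMS for an existing queue without touching producer or consumer code; the queue URL is a placeholder and the AWS managed key alias for SQS is assumed.

import boto3

sqs = boto3.client("sqs")

queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders-queue"  # placeholder

sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={
        # Use the AWS managed CMK for SQS (a customer managed key ARN also works)
        "KmsMasterKeyId": "alias/aws/sqs",
        # How long SQS may reuse a data key before calling KMS again (seconds)
        "KmsDataKeyReusePeriodSeconds": "300",
    },
)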
Incorrect options:
Use the SSL endpoint - The given use-case needs encryption at rest. When using SSL, the data is encrypted during transit, but the data needs to be encrypted at rest as well, so this option is incorrect.
Use Client-side encryption - For additional security, you can build your application to encrypt messages before they are placed in a message queue, but this would require a code change, so this option is incorrect.
Use Secrets Manager - AWS Secrets Manager enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. Secrets Manager offers secret rotation with built-in integration for Amazon RDS, Amazon Redshift, and Amazon DocumentDB. Secrets Manager cannot be used for encrypting data at rest.
Reference:
Question 17: Skipped
You have configured a Network ACL and a Security Group for the load balancer and Amazon EC2 instances to allow inbound traffic on port 80. However, users are still unable to connect to your website after launch.
Which additional configuration is required to make the website accessible to all users over the internet?
Add a rule to the Network ACLs to allow outbound traffic on ports 1025 - 5000
Add a rule to the Network ACLs to allow outbound traffic on ports 1024 - 65535
(Correct)
Add a rule to the Network ACLs to allow outbound traffic on ports 32768 - 61000
Add a rule to the Security Group allowing outbound traffic on port 80
Explanation
Correct option:
Add a rule to the Network ACLs to allow outbound traffic on ports 1024 - 65535
A Network Access Control List (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
When you create a custom Network ACL and associate it with a subnet, by default, this custom Network ACL denies all inbound and outbound traffic until you add rules. A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic. Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
The client that initiates the request chooses the ephemeral port range. The range varies depending on the client's operating system. Requests originating from Elastic Load Balancing use ports 1024-65535. List of ephemeral port ranges:
Many Linux kernels (including the Amazon Linux kernel) use ports 32768-61000.
Requests originating from Elastic Load Balancing use ports 1024-65535.
Windows operating systems through Windows Server 2003 use ports 1025-5000.
Windows Server 2008 and later versions use ports 49152-65535.
A NAT gateway uses ports 1024-65535.
AWS Lambda functions use ports 1024-65535.
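A sketch of adding the missing outbound rule with boto3; the network ACL ID and rule number are placeholders.

import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
    RuleNumber=200,                        # placeholder rule number
    Egress=True,                           # outbound rule
    Protocol="6",                          # TCP
    RuleAction="allow",
    CidrBlock="0.0.0.0/0",
    # Ephemeral port range that covers requests from Elastic Load Balancing
    PortRange={"From": 1024, "To": 65535},
)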
Incorrect options:
Add a rule to the Network ACLs to allow outbound traffic on ports 1025 - 5000 - As discussed above, Windows operating systems through Windows Server 2003 use ports 1025-5000. ELB uses the port range 1024-65535.
Add a rule to the Network ACLs to allow outbound traffic on ports 32768 - 61000 - As discussed above, many Linux kernels (including the Amazon Linux kernel) use ports 32768-61000, but requests originating from Elastic Load Balancing use the port range 1024-65535.
Add a rule to the Security Group allowing outbound traffic on port 80 - A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not the subnet level. Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
References:
Question 19: Skipped
A .NET developer team works with many ASP.NET web applications that are hosted on IIS on EC2 instances. The deployment process needs to be configured so that multiple versions of the application can run in AWS Elastic Beanstalk: one version would be used for development and testing, and another version for load testing.
Which of the following methods do you recommend?
You cannot have multiple development environments in Elastic Beanstalk, just one development and one production environment
Create an Application Load Balancer to route based on hostname so you can pass on parameters to the development Elastic Beanstalk environment. Create a file in .ebextensions/ to know how to handle the traffic coming from the ALB
Use only one Beanstalk environment and perform configuration changes using an Ansible script
Define a dev environment with a single instance and a 'load test' environment that has settings close to production environment
(Correct)
Explanation
Correct option:
Define a dev environment with a single instance and a 'load test' environment that has settings close to production environment
AWS Elastic Beanstalk makes it easy to create new environments for your application. You can create and manage separate environments for development, testing, and production use, and you can deploy any version of your application to any environment. Environments can be long-running or temporary. When you terminate an environment, you can save its configuration to recreate it later.
It is common practice to have many environments for the same application. You can deploy multiple environments when you need to run multiple versions of an application. So for the given use-case, you can set up 'dev' and 'load test' environment.
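As a rough sketch, both environments could be created from the same application version with boto3; the application name, version label, and instance types are placeholders, and the exact IIS solution stack name is looked up rather than hard-coded because those names change over time.

import boto3

eb = boto3.client("elasticbeanstalk")

# Pick an IIS solution stack; exact names change, so look one up at runtime
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
iis_stack = next(s for s in stacks if "IIS" in s)

for env_name, instance_type in [("myapp-dev", "t3.micro"), ("myapp-loadtest", "m5.large")]:
    eb.create_environment(
        ApplicationName="myapp",   # placeholder application name
        EnvironmentName=env_name,
        SolutionStackName=iis_stack,
        VersionLabel="v1",         # placeholder application version
        OptionSettings=[{
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": instance_type,
        }],
    )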
Incorrect options:
You cannot have multiple development environments in Elastic Beanstalk, just one development and one production environment - Incorrect. You can create as many environments as you need; the Create New Environment wizard in the AWS Management Console for Elastic Beanstalk walks you through this.
Use only one Beanstalk environment and perform configuration changes using an Ansible script - Ansible is an open-source deployment tool that integrates with AWS. It allows us to deploy the infrastructure. Elastic Beanstalk provisions the servers that you need for hosting the application and it also handles multiple environments, so Beanstalk is a better option.
Create an Application Load Balancer to route based on hostname so you can pass on parameters to the development Elastic Beanstalk environment. Create a file in .ebextensions/ to know how to handle the traffic coming from the ALB - This is not a good design if you need to load test because you will have two versions on the same instances and may not be able to access resources in the system due to the load testing.
Reference:
Question 21: Skipped
You are assigned as the new project lead for a web application that processes orders for customers. You want to integrate event-driven processing anytime data is modified or deleted and use a serverless approach using AWS Lambda for processing stream events.
Which of the following databases should you choose from?
Kinesis
ElastiCache
DynamoDB
(Correct)
RDS
Explanation
Correct option:
DynamoDB
A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table, and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near real-time.
Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified.
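A short sketch of enabling a stream on an existing table and wiring it to a Lambda function with boto3; the table and function names are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

# 1) Enable the stream on the table (capture new and old images of changed items)
resp = dynamodb.update_table(
    TableName="Orders",  # placeholder table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
stream_arn = resp["TableDescription"]["LatestStreamArn"]

# 2) Point a Lambda function at the stream
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="process-order-events",  # placeholder function name
    StartingPosition="LATEST",
    BatchSize=100,
)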
Incorrect options:
RDS - By itself, RDS cannot be used to stream events like DynamoDB, so this option is ruled out. However, you can use Amazon Kinesis for streaming data from RDS.
ElastiCache - ElastiCache works as an in-memory data store and cache, it cannot be used to stream data like DynamoDB.
Kinesis - Kinesis is not a database, so this option is ruled out.
Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources.
Reference:
Question 23: Skipped
A senior cloud engineer designs and deploys online fraud detection solutions for credit card companies processing millions of transactions daily. The Elastic Beanstalk application sends files to Amazon S3 and then sends a message to an Amazon SQS queue containing the path of the uploaded file in S3. The engineer wants to postpone the delivery of any new messages to the queue for at least 10 seconds.
Which SQS feature should the engineer leverage?
Use visibility timeout parameter
Use DelaySeconds parameter
(Correct)
Enable LongPolling
Implement application-side delay
Explanation
Correct option:
Use DelaySeconds parameter
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
Delay queues let you postpone the delivery of new messages to a queue for several seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes.
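A minimal sketch of the two ways DelaySeconds can be applied on a standard queue; the queue name and message body are placeholders.

import boto3

sqs = boto3.client("sqs")

# Option A: make every message on the queue a delayed message (delay queue)
queue = sqs.create_queue(
    QueueName="uploads-queue",            # placeholder queue name
    Attributes={"DelaySeconds": "10"},    # postpone delivery of new messages by 10 seconds
)

# Option B: delay an individual message on an existing standard queue
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody="s3://bucket/path/to/uploaded/file",  # placeholder payload
    DelaySeconds=10,
)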
Incorrect options:
Implement application-side delay - You can customize your application to delay sending messages but it is not a robust solution. You can run into a scenario where your application crashes before sending a message, then that message would be lost.
Use visibility timeout parameter - Visibility timeout is a period during which Amazon SQS prevents other consumers from receiving and processing a given message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. You cannot use visibility timeout to postpone the delivery of new messages to the queue for a few seconds.
Enable LongPolling - Long polling makes it inexpensive to retrieve messages from your Amazon SQS queue as soon as the messages are available. Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren't included in a response). When the wait time for the ReceiveMessage API action is greater than 0, long polling is in effect. The maximum long polling wait time is 20 seconds. You cannot use LongPolling to postpone the delivery of new messages to the queue for a few seconds.
Reference:
Question 37: Skipped
A firm maintains a highly available application that receives HTTPS traffic from mobile devices and web browsers. The main Developer would like to set up the Load Balancer routing to route traffic from web servers to smart.com/api and from mobile devices to smart.com/mobile. A developer advises that the previous recommendation is not needed and that requests should be sent to api.smart.com and mobile.smart.com instead.
Which of the following routing options were discussed in the given use-case? (select two)
Path based
(Correct)
Web browser version
Client IP
Host based
(Correct)
Cookie value
Explanation
Correct options:
Path based
You can create a listener with rules to forward requests based on the URL path. This is known as path-based routing. If you are running microservices, you can route traffic to multiple back-end services using path-based routing. For example, you can route general requests to one target group and request to render images to another target group.
This path-based routing allows you to route requests to, for example, /api to one set of servers (also known as target groups) and /mobile to another set. Segmenting your traffic in this way gives you the ability to control the processing environment for each category of requests. Perhaps /api requests are best processed on Compute Optimized instances, while /mobile requests are best handled by Memory Optimized instances.
Host based
You can create Application Load Balancer rules that route incoming traffic based on the domain name specified in the Host header. Requests to api.example.com can be sent to one target group, requests to mobile.example.com to another, and all others (by way of a default rule) can be sent to a third. You can also create rules that combine host-based routing and path-based routing. This would allow you to route requests to api.example.com/production and api.example.com/sandbox to distinct target groups.
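A sketch of both rule types on an existing ALB listener using boto3; the listener and target group ARNs are placeholders.

import boto3

elbv2 = boto3.client("elbv2")

listener_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def"  # placeholder
api_tg = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api/123"       # placeholder
mobile_tg = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/mobile/456"  # placeholder

# Host-based rule: api.smart.com -> API target group
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "host-header", "Values": ["api.smart.com"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": api_tg}],
)

# Path-based rule: smart.com/mobile* -> mobile target group
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/mobile*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": mobile_tg}],
)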
Incorrect options:
Client IP - This option has been added as a distractor. Routing is not based on the client's IP address.
Web browser version - Routing has nothing to do with the client's web browser, if it was then there is something sneaky going on.
Cookie value - Application Load Balancers support load balancer-generated cookies only and you cannot modify them. When sticky sessions are used to route requests to the same target, the client's browser must support cookies.
Reference:
Question 43: Skipped
You have an Amazon Kinesis Data Stream with 10 shards, and from the metrics, you are well below the throughput utilization of 10 MB per second to send data. You send 3 MB per second of data and yet you are receiving ProvisionedThroughputExceededException errors frequently.
What is the likely cause of this?
You have too many shards
The data retention period is too long
Metrics are slow to update
The partition key that you have selected isn't distributed enough
(Correct)
Explanation
Correct option:
The partition key that you have selected isn't distributed enough
Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs.
A Kinesis data stream is a set of shards. A shard is a uniquely identified sequence of data records in a stream. A stream is composed of one or more shards, each of which provides a fixed unit of capacity.
The partition key is used by Kinesis Data Streams to distribute data across shards. Kinesis Data Streams segregates the data records that belong to a stream into multiple shards, using the partition key associated with each data record to determine the shard to which a given data record belongs.
For the given use-case, as the partition key is not distributed enough, all the data is getting skewed at a few specific shards and not leveraging the entire cluster of shards.
You can also use metrics to determine which are your "hot" or "cold" shards, that is, shards that are receiving much more data, or much less data, than expected. You could then selectively split the hot shards to increase capacity for the hash keys that target those shards. Similarly, you could merge cold shards to make better use of their unused capacity.
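For instance, switching from a low-cardinality key to a well-distributed one can be sketched like this; the stream name and record shape are placeholders.

import json
import uuid
import boto3

kinesis = boto3.client("kinesis")

def put_transaction(txn):
    # A high-cardinality partition key (e.g., a random UUID or a unique transaction ID)
    # spreads records evenly across all 10 shards instead of hitting a few hot ones
    kinesis.put_record(
        StreamName="fraud-events",          # placeholder stream name
        Data=json.dumps(txn).encode(),
        PartitionKey=str(uuid.uuid4()),
    )

put_transaction({"card": "****1234", "amount": 42.50})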
Incorrect options:
Metrics are slow to update - Metrics are a CloudWatch concept. This option has been added as a distractor.
You have too many shards - Too many shards is not the issue as you would see a LimitExceededException in that case.
The data retention period is too long - Your streaming data is retained for up to 365 days. The data retention period is not an issue causing this error.
References:
Question 46: Skipped
A developer is configuring an Application Load Balancer (ALB) to direct traffic to the application's EC2 instances and Lambda functions.
Which of the following characteristics of the ALB can be identified as correct? (Select two)
An ALB has three possible target types: Hostname, IP and Lambda
An ALB has three possible target types: Instance, IP and Lambda
(Correct)
You can not specify publicly routable IP addresses to an ALB
(Correct)
If you specify targets using IP addresses, traffic is routed to instances using the primary private IP address
If you specify targets using an instance ID, traffic is routed to instances using any private IP address from one or more network interfaces
Explanation
Correct options:
An ALB has three possible target types: Instance, IP and Lambda
When you create a target group, you specify its target type, which determines the type of target you specify when registering targets with this target group. After you create a target group, you cannot change its target type. The following are the possible target types:
Instance - The targets are specified by instance ID
IP - The targets are IP addresses
Lambda - The target is a Lambda function
You can not specify publicly routable IP addresses to an ALB
When the target type is IP, you can specify IP addresses from specific CIDR blocks only. You can't specify publicly routable IP addresses.
Incorrect options:
If you specify targets using an instance ID, traffic is routed to instances using any private IP address from one or more network interfaces - If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance.
If you specify targets using IP addresses, traffic is routed to instances using the primary private IP address - If you specify targets using IP addresses, you can route traffic to an instance using any private IP address from one or more network interfaces. This enables multiple applications on an instance to use the same port.
An ALB has three possible target types: Hostname, IP and Lambda - This is incorrect, as described in the correct explanation above.
Reference:
Question 47: Skipped
DevOps engineers are developing an order processing system where notifications are sent to a department whenever an order is placed for a product. The system also pushes identical notifications of the new order to a processing module that would allow EC2 instances to handle the fulfillment of the order. In the case of processing errors, the messages should be allowed to be re-processed at a later stage. The order processing system should be able to scale transparently without the need for any manual or programmatic provisioning of resources.
Which of the following solutions can be used to address this use-case in the most cost-efficient way?
SQS + SES
SNS + Lambda
SNS + Kinesis
SNS + SQS
(Correct)
Explanation
Correct option:
SNS + SQS
Amazon SNS enables message filtering and fanout to a large number of subscribers, including serverless functions, queues, and distributed systems. Additionally, Amazon SNS fans out notifications to end users via mobile push messages, SMS, and email.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
Because each buffered request can be processed independently, Amazon SQS can scale transparently to handle the load without any provisioning instructions from you.
SNS and SQS can be used to create a fanout messaging scenario in which messages are "pushed" to multiple subscribers, which eliminates the need to periodically check or poll for updates and enables parallel asynchronous processing of the message by the subscribers. SQS can allow for later re-processing and dead letter queues. This is called the fan-out pattern.
Incorrect options:
SNS + Kinesis - You can use Amazon Kinesis Data Streams to collect and process large streams of data records in real-time. Kinesis Data Streams stores records from 24 hours (by default) up to 8760 hours (365 days). However, in provisioned mode you need to manually provision shards (or set up CloudWatch alarms and auto scaling for them) as the load increases, and Kinesis only scales transparently in on-demand mode, which is not cost-efficient for the given use case. So this option is not the right fit.
SNS + Lambda - Amazon SNS and AWS Lambda are integrated so you can invoke Lambda functions with Amazon SNS notifications. The Lambda function receives the message payload as an input parameter and can manipulate the information in the message, publish the message to other SNS topics, or send the message to other AWS services. However, your EC2 instances cannot "poll" from Lambda functions and as such, this would not work.
SQS + SES - This will not work as each message needs to be processed twice (once for sending the notification and later for order fulfillment), but a message in an SQS queue is delivered to only one consumer; a single queue cannot fan out identical copies to two independent processing paths.
References:
Question 49: Skipped
You have uploaded a zip file to AWS Lambda that contains code files written in Node.Js. When your function is executed you receive the following output, 'Error: Memory Size: 10,240 MB Max Memory Used'.
Which of the following explains the problem?
Your zip file is corrupt
Your Lambda function ran out of RAM
(Correct)
You have uploaded a zip file larger than 50 MB to AWS Lambda
The uncompressed zip file exceeds AWS Lambda limits
Explanation
Correct option:
Your Lambda function ran out of RAM
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
The maximum amount of memory available to the Lambda function at runtime is 10,240 MB. Your Lambda function was deployed with 10,240 MB of RAM, but it seems your code requested or used more than that, so the Lambda function failed.
Incorrect options:
Your zip file is corrupt - A memory size error indicates that Lambda was able to extract and run your code, so the file is not corrupt.
The uncompressed zip file exceeds AWS Lambda limits - This is not correct, as your function was able to execute.
You have uploaded a zip file larger than 50 MB to AWS Lambda - This is not correct, as your Lambda function was able to execute.
Reference:
Question 50: Skipped
You are a system administrator whose company recently moved its production application to AWS and migrated data from MySQL to AWS DynamoDB. You are adding new tables to AWS DynamoDB and need to allow your application to query your data by the primary key and an alternate key. This option must be added when first creating tables otherwise changes cannot be made afterward.
Which of the following actions should you take?
Migrate away from DynamoDB
Create a GSI
Create a LSI
(Correct)
Call Scan
Explanation
Correct option:
Create an LSI
LSI stands for Local Secondary Index. Some applications only need to query data using the base table's primary key; however, there may be situations where an alternate sort key would be helpful. To give your application a choice of sort keys, you can create one or more local secondary indexes on a table and issue Query or Scan requests against these indexes.
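A sketch of defining an LSI at table-creation time, which is the only time it can be added; the table, key, and index names are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",  # placeholder table name
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "order_id", "KeyType": "RANGE"},     # base table sort key
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "ByOrderDate",
        "KeySchema": [
            {"AttributeName": "customer_id", "KeyType": "HASH"},  # same partition key as the table
            {"AttributeName": "order_date", "KeyType": "RANGE"},  # alternate sort key
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",
)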
Incorrect options:
Call Scan - Scan is a read operation on the data, not a way to define an alternate key. Once you create local secondary indexes on a table, you can issue Query or Scan requests against those indexes.
Create a GSI - A GSI (Global Secondary Index) is an index with a partition key and a sort key that can be different from those on the base table. Unlike an LSI, a GSI can be created or deleted at any time after the table exists, so it does not match the requirement that the option must be configured when the table is first created.
Migrate away from DynamoDB - Migrating to another database that is not NoSQL may cause you to make changes that require substantial code changes.
Reference:
Question 52: Skipped
You are designing a high-performance application that requires millions of connections. You have several EC2 instances running Apache2 web servers and the application will require capturing the user’s source IP address and source port without the use of X-Forwarded-For.
Which of the following options will meet your needs?
Classic Load Balancer
Network Load Balancer
(Correct)
Application Load Balancer
Elastic Load Balancer
Explanation
Correct option:
Network Load Balancer
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. Incoming connections remain unmodified, so application software need not support X-Forwarded-For.
Incorrect options:
Application Load Balancer - An Application Load Balancer functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. After the load balancer receives a request, it evaluates the listener rules in priority order to determine which rule to apply and then selects a target from the target group for the rule action.
One of many benefits of the Application Load Balancer is its support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL. For needs relating to network traffic go with Network Load Balancer.
Elastic Load Balancer - Elastic Load Balancing is the service itself that offers different types of load balancers.
Classic Load Balancer - It is a basic load balancer that distributes traffic. If your account was created before 2013-12-04, it supports EC2-Classic instances, where this type of load balancer is useful. The Classic Load Balancer can be used regardless of when your account was created and whether your instances run in EC2-Classic or in a VPC, but remember that it is the most basic load balancer AWS offers and not as advanced as the others.
Reference:
Question 55: Skipped
For an application that stores personal health information (PHI) in an encrypted Amazon RDS for MySQL DB instance, a developer wants to improve its performance by caching frequently accessed data and adding the ability to sort or rank the cached datasets.
What is the best approach to meet these requirements subject to the constraint that the PHI stays encrypted at all times?
Migrate the frequently accessed data to DynamoDB Accelerator (DAX) that has encryption enabled for data in transit and at rest
Migrate the frequently accessed data to an EC2 Instance Store that has encryption enabled for data in transit and at rest
Store the frequently accessed data in an Amazon ElastiCache for Memcached instance with encryption enabled for data in transit and at rest
Store the frequently accessed data in an Amazon ElastiCache for Redis instance with encryption enabled for data in transit and at rest
(Correct)
Explanation
Correct option:
Store the frequently accessed data in an Amazon ElastiCache for Redis instance with encryption enabled for data in transit and at rest
Amazon ElastiCache for Redis is a Redis-compatible in-memory data structure service that can be used as a data store or cache. It delivers the ease of use and power of Redis along with the availability, reliability, scalability, security, and performance suitable for the most demanding applications.
In addition to strings, Redis supports lists, sets, sorted sets, hashes, bit arrays, and HyperLogLogs. Applications can use these more advanced data structures to support a variety of use cases. For example, you can use Redis Sorted Sets to easily implement a game leaderboard that keeps a list of players sorted by their rank.
Incorrect options:
Store the frequently accessed data in an Amazon ElastiCache for Memcached instance with encryption enabled for data in transit and at rest - Memcached is designed for simplicity and it does not offer support for advanced data structures and operations such as sort or rank.
Migrate the frequently accessed data to DynamoDB Accelerator (DAX) that has encryption enabled for data in transit and at rest - DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX cannot be used with RDS MySQL as a caching service, so this option is incorrect.
Migrate the frequently accessed data to an EC2 Instance Store that has encryption enabled for data in transit and at rest - This option is incorrect. EC2 instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content. It can also be used to store temporary data that you replicate across a fleet of instances, such as a load-balanced pool of web servers.
References:
Question 57: Skipped
As a Full-stack Web Developer, you are involved with every aspect of a company’s platform from development with PHP and JavaScript to the configuration of NoSQL databases with Amazon DynamoDB. You are not concerned about your response receiving stale data from your database and need to perform 16 eventually consistent reads per second of 12 KB in size each.
How many read capacity units (RCUs) do you need?
48
192
12
24
(Correct)
Explanation
Correct option:
Before proceeding with the calculations, please review the following:
24
One read capacity unit represents two eventually consistent reads per second, for an item up to 4 KB in size. So that means that for an item of 12KB in size, we need 3 RCU (12 KB / 4 KB) for two eventually consistent reads per second. As we need 16 eventually consistent reads per second, we need 3 * (16 / 2) = 24 RCU.
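The same arithmetic, spelled out as a quick calculation:

import math

item_size_kb = 12
reads_per_second = 16

# One RCU = 2 eventually consistent reads/s of an item up to 4 KB
rcu_per_read_pair = math.ceil(item_size_kb / 4)    # 3 RCU covers two eventually consistent reads/s
rcus = rcu_per_read_pair * (reads_per_second // 2) # 3 * 8 = 24

print(rcus)  # 24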
Incorrect options:
12
192
48
These three options contradict the details provided in the explanation above, so these are incorrect.
Reference:
Question 59: Skipped
A developer is migrating an on-premises application to AWS Cloud. The application currently processes user uploads and uploads them to a local directory on the server. All such file uploads must be saved and then made available to all instances in an Auto Scaling group.
As a Developer Associate, which of the following options would you recommend for this use-case?
Use Amazon EBS and configure the application AMI to use a snapshot of the same EBS instance while launching new instances
Use Amazon S3 and make code changes in the application so all uploads are put on S3
(Correct)
Use Amazon EBS as the storage volume and share the files via file synchronization software
Use Instance Store type of EC2 instances and share the files via file synchronization software
Explanation
Correct option:
Use Amazon S3 and make code changes in the application so all uploads are put on S3
Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.
Amazon S3 provides a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. Using this web service, you can easily build applications that make use of Internet storage.
You can use S3 PutObject API from the application to upload the objects in a single bucket, which is then accessible from all instances.
Incorrect options:
Use Amazon EBS and configure the application AMI to use a snapshot of the same EBS instance while launching new instances - Using EBS to share data between instances is not possible because EBS volume is tied to an instance by definition. Creating a snapshot would only manage to move the stale data into the new instances.
Use Instance Store type of EC2 instances and share the files via file synchronization software
Use Amazon EBS as the storage volume and share the files via file synchronization software
Technically you could use file synchronization software on EC2 instances with EBS or Instance Store type, but that involves a lot of development effort and still would not be as production-ready as just using S3. So both these options are incorrect.
Reference:
Question 61: Skipped
You are getting ready for an event to show off your Alexa skill written in JavaScript. As you are testing your voice activation commands you find that some intents are not invoking as they should and you are struggling to figure out what is happening. You included the following code console.log(JSON.stringify(this.event)) in hopes of getting more details about the request to your Alexa skill.
You would like the logs stored in an Amazon Simple Storage Service (S3) bucket named MyAlexaLog. How do you achieve this?
Use CloudWatch integration feature with Kinesis
Use CloudWatch integration feature with Lambda
Use CloudWatch integration feature with Glue
Use CloudWatch integration feature with S3
(Correct)
Explanation
Correct option:
Use CloudWatch integration feature with S3
You can export log data from your CloudWatch log groups to an Amazon S3 bucket and use this data in custom processing and analysis, or to load onto other systems.
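A sketch of kicking off such an export with boto3; the log group, time range, and bucket name are placeholders (S3 bucket names must be lowercase, and the bucket policy must allow CloudWatch Logs to write to it).

import time
import boto3

logs = boto3.client("logs")

now_ms = int(time.time() * 1000)

logs.create_export_task(
    taskName="alexa-skill-logs-export",         # placeholder task name
    logGroupName="/aws/lambda/my-alexa-skill",  # placeholder log group
    fromTime=now_ms - 24 * 60 * 60 * 1000,      # last 24 hours
    to=now_ms,
    destination="myalexalog",                   # placeholder bucket (lowercased)
    destinationPrefix="exported-logs",
)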
Incorrect options:
Use CloudWatch integration feature with Kinesis - You can use both to do custom processing or analysis but with S3 you don't have to process anything. Instead, you configure the CloudWatch settings to send logs to S3.
Use CloudWatch integration feature with Lambda - You can use both to do custom processing or analysis but with S3 you don't have to process anything. Instead, you configure the CloudWatch settings to send logs to S3.
Use CloudWatch integration feature with Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. Glue is not the right fit for the given use-case.
Reference:
FTP the compressed file into an EC2 instance that runs in the same region as the S3 bucket. Then transfer the file from the EC2 instance into the S3 bucket
Upload the compressed file in a single operation
Correct answer
Upload the compressed file using multipart upload with S3 transfer acceleration
Upload the compressed file using multipart upload
Overall explanation
Correct option:
Upload the compressed file using multipart upload with S3 transfer acceleration
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. If you're uploading large objects over a stable high-bandwidth network, use multipart uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance. If you're uploading over a spotty network, use multipart uploading to increase resiliency to network errors by avoiding upload restarts.
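A sketch combining the two with boto3's managed transfer; the bucket is assumed to already have Transfer Acceleration enabled, and the file and bucket names are placeholders.

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Route requests through the S3 Transfer Acceleration endpoint
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Multipart settings: split into 100 MB parts, upload up to 10 parts in parallel
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file(
    Filename="backup.tar.gz",        # placeholder local file
    Bucket="my-accelerated-bucket",  # placeholder bucket with acceleration enabled
    Key="backups/backup.tar.gz",
    Config=transfer_config,
)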
Incorrect options:
Upload the compressed file in a single operation - In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. Multipart upload provides improved throughput because you can upload parts in parallel. Therefore, this option is not correct.
Upload the compressed file using multipart upload - Although using multipart upload would certainly speed up the process, combining with S3 transfer acceleration would further improve the transfer speed. Therefore just using multipart upload is not the correct option.
FTP the compressed file into an EC2 instance that runs in the same region as the S3 bucket. Then transfer the file from the EC2 instance into the S3 bucket - This is a roundabout process of getting the file into S3 and added as a distractor. Although it is technically feasible to follow this process, it would involve a lot of scripting and certainly would not be the fastest way to get the file into S3.
References:
Domain: Development with AWS Services
Correct answer
Develop the SAM template locally => upload the template to S3 => deploy your application to the cloud
Develop the SAM template locally => deploy the template to S3 => use your application in the cloud
Develop the SAM template locally => upload the template to CodeCommit => deploy your application to CodeDeploy
Develop the SAM template locally => upload the template to Lambda => deploy your application to the cloud
Overall explanation
Correct option:
Develop the SAM template locally => upload the template to S3 => deploy your application to the cloud
The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML.
You can develop and test your serverless application locally, and then you can deploy your application by using the sam deploy command. The sam deploy command zips your application artifacts, uploads them to Amazon Simple Storage Service (Amazon S3), and deploys your application to the AWS Cloud. AWS SAM uses AWS CloudFormation as the underlying deployment mechanism.
Incorrect options:
Develop the SAM template locally => upload the template to Lambda => deploy your application to the cloud
Develop the SAM template locally => upload the template to CodeCommit => deploy your application to CodeDeploy
Develop the SAM template locally => deploy the template to S3 => use your application in the cloud
These three options contradict the details provided in the explanation above, so these are incorrect.
Reference:
Domain: Development with AWS Services
Correct answer
DynamoDB Transactions
DynamoDB Indexes
DynamoDB TTL
DynamoDB Streams
Overall explanation
Correct option:
DynamoDB Transactions
You can use DynamoDB transactions to make coordinated all-or-nothing changes to multiple items both within and across tables. Transactions provide atomicity, consistency, isolation, and durability (ACID) in DynamoDB, helping you to maintain data correctness in your applications.
DynamoDB Transactions Overview:
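A minimal sketch of an all-or-nothing write across two tables with boto3; the table names, keys, and attributes are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

# Either both operations succeed or neither is applied
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Put": {
                "TableName": "Orders",  # placeholder table
                "Item": {"order_id": {"S": "o-123"}, "status": {"S": "PLACED"}},
                # Fail the whole transaction if the order already exists
                "ConditionExpression": "attribute_not_exists(order_id)",
            }
        },
        {
            "Update": {
                "TableName": "Inventory",  # placeholder table
                "Key": {"sku": {"S": "widget-1"}},
                "UpdateExpression": "SET stock = stock - :one",
                "ConditionExpression": "stock >= :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ]
)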
Incorrect options:
DynamoDB TTL - DynamoDB TTL allows you to expire data based on a timestamp, so this option is not correct.
DynamoDB Streams - DynamoDB Streams gives a changelog of changes that happened to your tables and then may even relay these to a Lambda function for further processing.
DynamoDB Indexes - GSI and LSI are used to allow you to query your tables using different partition/sort keys.
Reference:
Domain: Development with AWS Services
NetworkStack, which will export the subnetId that can be used when creating EC2 instances in another stack. To use the exported value in another stack, which of the following functions must be used?
!GetAtt
!Sub
!Ref
Correct answer
!ImportValue
Overall explanation
Correct option:
!ImportValue
The intrinsic function Fn::ImportValue returns the value of an output exported by another stack. You typically use this function to create cross-stack references.
Incorrect options:
!Ref - Returns the value of the specified parameter or resource.
!GetAtt - Returns the value of an attribute from a resource in the template.
!Sub - Substitutes variables in an input string with values that you specify.
Reference:
Domain: Development with AWS Services
1
20
6
Correct answer
10
Overall explanation
Correct option:
10
Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs.
A Kinesis data stream is a set of shards. A shard is a uniquely identified sequence of data records in a stream. A stream is composed of one or more shards, each of which provides a fixed unit of capacity.
Kinesis Data Streams Overview:
Each KCL consumer application instance uses "workers" to process data in Kinesis shards. At any given time, each shard of data records is bound to a particular worker via a lease. For the given use-case, an EC2 instance acts as the worker for the KCL application. Since each shard is processed by at most one worker of a given application, adding more instances than shards leaves the extra instances idle. As we have 10 shards, the maximum number of EC2 instances that can usefully consume the stream is 10.
Incorrect options:
1
6
20
These three options contradict the explanation provided earlier. So these are incorrect.
Reference:
Domain: Development with AWS Services
user_id basis. As a developer, which message parameter should you set to the value of user_id to guarantee the ordering?
MessageDeduplicationId
MessageHash
MessageOrderId
Correct answer
MessageGroupId
Overall explanation
Correct option:
MessageGroupId
Amazon SQS FIFO queues are designed to enhance messaging between applications when the order of operations and events has to be enforced.
The message group ID is the tag that specifies that a message belongs to a specific message group. Messages that belong to the same message group are always processed one by one, in a strict order relative to the message group (however, messages that belong to different message groups might be processed out of order).
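A sketch of sending to a FIFO queue so that each user's messages stay ordered; the queue URL is a placeholder, and content-based deduplication is assumed to be disabled, so a deduplication ID is supplied explicitly.

import uuid
import boto3

sqs = boto3.client("sqs")

queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/events.fifo"  # placeholder

def send_event(user_id, payload):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=payload,
        # Messages with the same group ID are delivered strictly in order
        MessageGroupId=user_id,
        # Required unless content-based deduplication is enabled on the queue
        MessageDeduplicationId=str(uuid.uuid4()),
    )

send_event("user-42", '{"action": "click"}')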
Incorrect options:
MessageDeduplicationId - The message deduplication ID is the token used for the deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren't delivered during the 5-minute deduplication interval.
MessageOrderId - This is a made-up option and has been added as a distractor.
MessageHash - This is a made-up option and has been added as a distractor.
References:
Domain: Development with AWS Services
Create a new cache policy for the CloudFront distribution and set the cache behavior to None to improve caching performance. Update the CloudFront distribution to use the new cache policy
Create a new cache policy for the CloudFront distribution and set the cache behavior to Cache based on selected request headers. Use Whitelist Headers as the caching criteria
Choose the Customize option for the Object Caching setting and reduce the Default TTL value so that CloudFront forwards requests to your origin more frequently
Correct answer
Create a new cache policy for the CloudFront distribution and set the cache behavior to Query string forwarding and caching. In the Query string whitelist field, include the language string. Update the CloudFront distribution to use the new cache policy
Overall explanation
Correct option:
Create a new cache policy for the CloudFront distribution and set the cache behavior to Query string forwarding and caching. In the Query string whitelist field include the language string. Update the CloudFront distribution to use the new cache policy
CloudFront can cache different versions of your content based on the values of query string parameters. Forward all, cache based on whitelist option should be chosen if your origin server returns different versions of your objects based on one or more query string parameters. Then specify the parameters that you want CloudFront to use as a basis for caching in the Query string whitelist field.
Incorrect options:
Create a new cache policy for the CloudFront distribution and set the cache behavior to None to improve caching performance. Update the CloudFront distribution to use the new cache policy - The None setting should only be chosen if your origin returns the same version of an object regardless of the values of query string parameters. It does increase the likelihood that CloudFront can serve a request from the cache, which improves performance and reduces the load on your origin, but it cannot serve per-language variants, so it does not fit this use case.
Create a new cache policy for the CloudFront distribution and set the cache behavior to Cache based on selected request headers. Use Whitelist Headers as the caching criteria - Cache based on selected request headers is not a valid option since the use case mentions using query string parameters.
Choose the Customize option for the Object Caching setting and reduce the Default TTL value so that CloudFront forwards requests to your origin more frequently - Default TTL specifies the default amount of time, in seconds, that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. This option is irrelevant for the current use case since the response defaulting to the same language is not a TTL issue.
References:
Domain: Development with AWS Services
Correct answer
Configure provisioned concurrency for the Lambda function to respond immediately to the function's invocations
Configure reserved concurrency to guarantee the maximum number of concurrent instances of the Lambda function
Enable API caching in Amazon API Gateway to cache AWS Lambda function response
Configure an interface VPC endpoint powered by AWS PrivateLink to access the Amazon API Gateway REST API with milliseconds latency
Overall explanation
Correct option:
Configure provisioned concurrency for the Lambda function to respond immediately to the function's invocations
When Lambda allocates an instance of your function, the runtime loads your function's code and runs the initialization code that you define outside of the handler. If your code and dependencies are large, or you create SDK clients during initialization, this process can take some time. When your function has not been used for some time, needs to scale up, or when you update a function, Lambda creates new execution environments. This causes the portion of requests that are served by new instances to have higher latency than the rest, otherwise known as a cold start.
By allocating provisioned concurrency before an increase in invocations, you can ensure that all requests are served by initialized instances with low latency. Lambda functions configured with provisioned concurrency run with consistent start-up latency, making them ideal for building interactive mobile or web backends, latency-sensitive microservices, and synchronously invoked APIs.
Functions with Provisioned Concurrency differ from on-demand functions in some important ways:
Initialization code does not need to be optimized. Since this happens long before the invocation, lengthy initialization does not impact the latency of invocations. If you are using runtimes that typically take longer to initialize, like Java, the performance of these can benefit from using Provisioned Concurrency.
Initialization code is run more frequently than the total number of invocations. Since Lambda is highly available, for every one unit of Provisioned Concurrency, there are a minimum of two execution environments prepared in separate Availability Zones. This is to ensure that your code is available in the event of a service disruption. As environments are reaped and load balancing occurs, Lambda over-provisions environments to ensure availability. You are not charged for this activity. If your code initializer implements logging, you will see additional log files anytime that this code is run, even though the main handler is not invoked.
Provisioned Concurrency cannot be used with the $LATEST version. This feature can only be used with published versions and aliases of a function. If you see cold starts for functions configured to use Provisioned Concurrency, you may be invoking the $LATEST version, instead of the version or alias with Provisioned Concurrency configured.
Reducing cold starts with Provisioned Concurrency:
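For reference, a minimal boto3 sketch (the function name, alias, and concurrency value are placeholders) that allocates provisioned concurrency on a published alias:

```python
import boto3

lambda_client = boto3.client("lambda")

# Provisioned concurrency can only target a published version or an alias,
# never $LATEST.
response = lambda_client.put_provisioned_concurrency_config(
    FunctionName="checkout-api",          # hypothetical function name
    Qualifier="live",                     # alias pointing at a published version
    ProvisionedConcurrentExecutions=50,   # pre-initialized execution environments
)
print(response["Status"])  # e.g. "IN_PROGRESS" while environments warm up
```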
Incorrect options:
Configure reserved concurrency to guarantee the maximum number of concurrent instances of the Lambda function - Reserved concurrency guarantees the maximum number of concurrent instances for the function. When a function has reserved concurrency, no other function can use that concurrency. There is no charge for configuring reserved concurrency for a function. Whereas, provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond immediately to your function's invocations.
Configure an interface VPC endpoint powered by AWS PrivateLink to access the Amazon API Gateway REST API with milliseconds latency - An interface VPC endpoint can be used to connect your VPC resources to the AWS Lambda function without crossing the public internet. VPC endpoint is irrelevant to the current discussion.
Enable API caching in Amazon API Gateway to cache AWS Lambda function response - With caching, you can reduce the number of calls made to your AWS Lambda function and also improve the latency of requests to your API. Caching is best-effort and applications making frequent API calls to retrieve static data can benefit from a caching layer. Caching does not reduce the initialization time Lambda takes and hence is not an optimal solution for this use case.
References:
Domain: Development with AWS Services
Upload all the code as a folder to S3 and refer the folder in the AWS::Lambda::Function block
Correct selection
Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block as long as there are no third-party dependencies
Correct selection
Upload all the code as a zip to S3 and refer the object in the AWS::Lambda::Function block
Upload all the code to CodeCommit and refer to the CodeCommit repository in the AWS::Lambda::Function block
Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block and reference the dependencies as a zip file stored in S3
Overall explanation
Correct options:
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
How Lambda function works:
Upload all the code as a zip to S3 and refer the object in the AWS::Lambda::Function block
You can upload all the code as a zip to S3 and reference the object in the AWS::Lambda::Function block. The AWS::Lambda::Function resource creates a Lambda function. To create a function, you need a deployment package and an execution role. The deployment package contains your function code.
Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block as long as there are no third-party dependencies
The other option is to write the code inline (supported for Node.js and Python) as long as your code has no dependencies beyond those already provided by AWS in the Lambda runtime, such as the AWS SDK (for example, boto3 for Python) and cfn-response, which are preloaded in Lambda execution environments.
YAML template for creating a Lambda function:
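Since the original template isn't reproduced here, the following is only a sketch of what such a template could look like, deployed with boto3; the stack name, function name, and execution role ARN are placeholders. It shows the inline ZipFile form, with a comment indicating where an S3 object reference would go for a zipped package:

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical template: an AWS::Lambda::Function with inline code (ZipFile).
# Inline code is only allowed when there are no third-party dependencies.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "HelloFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Runtime": "python3.12",
                "Handler": "index.handler",
                # Assumed pre-existing execution role
                "Role": "arn:aws:iam::123456789012:role/lambda-basic-execution",
                "Code": {
                    # For packaged dependencies, replace ZipFile with
                    # {"S3Bucket": "...", "S3Key": "my-function.zip"}
                    "ZipFile": "def handler(event, context):\n    return 'hello'\n"
                },
            },
        }
    },
}

cfn.create_stack(
    StackName="inline-lambda-demo",  # hypothetical stack name
    TemplateBody=json.dumps(template),
)
```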
Incorrect options:
Upload all the code to CodeCommit and refer to the CodeCommit repository in the AWS::Lambda::Function block
Upload all the code as a folder to S3 and refer the folder in the AWS::Lambda::Function block
Write the AWS Lambda code inline in CloudFormation in the AWS::Lambda::Function block and reference the dependencies as a zip file stored in S3
These three options contradict the explanation provided earlier. So these are incorrect.
Reference:
Domain: Development with AWS Services
CloudWatch Events
SQS
Amazon S3
Kinesis
Correct option:
CloudWatch Events
You can create a Lambda function and direct CloudWatch Events to execute it on a regular schedule. You can specify a fixed rate (for example, execute a Lambda function every hour or 15 minutes), or you can specify a Cron expression.
CloudWatch Events Key Concepts:
Schedule Expressions for CloudWatch Events Rules:
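As an illustration, a boto3 sketch (the rule name, function name, and function ARN are placeholders) that schedules a Lambda function every 15 minutes; a cron expression such as "cron(0 2 * * ? *)" could be used in place of the rate expression:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hypothetical function ARN
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:nightly-report"

# Rule that fires on a fixed rate schedule
rule = events.put_rule(
    Name="run-nightly-report",
    ScheduleExpression="rate(15 minutes)",
    State="ENABLED",
)

# Allow CloudWatch Events to invoke the function, then attach it as a target
lambda_client.add_permission(
    FunctionName="nightly-report",
    StatementId="allow-cloudwatch-events",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
events.put_targets(
    Rule="run-nightly-report",
    Targets=[{"Id": "1", "Arn": function_arn}],
)
```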
Incorrect options:
Amazon S3
SQS
Kinesis
These three AWS services don't have cron capabilities, so these options are incorrect.
References:
Domain: Development with AWS Services
S3 Inventory
S3 Access Logs
S3 Analytics
Correct answer
S3 Select
Overall explanation
Correct option:
S3 Select
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by your application, you can achieve drastic performance increases; in many cases you can get as much as a 400% improvement.
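For illustration, a minimal boto3 sketch (the bucket, key, column names, and the CSV format are assumptions) that retrieves only the matching columns and rows from an object:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key holding a CSV file with a header row
resp = s3.select_object_content(
    Bucket="sales-archive",
    Key="2023/orders.csv",
    ExpressionType="SQL",
    Expression="SELECT s.order_id, s.total FROM S3Object s WHERE s.country = 'DE'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"JSON": {}},
)

# The response is an event stream; 'Records' events carry the matching rows
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```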
Incorrect options:
S3 Inventory - Amazon S3 inventory is one of the tools Amazon S3 provides to help manage your storage. You can use it to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs.
S3 Analytics - By using Amazon S3 analytics storage class analysis you can analyze storage access patterns to help you decide when to transition the right data to the right storage class. This new Amazon S3 analytics feature observes data access patterns to help you determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for infrequent access) storage class.
S3 Access Logs - Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill.
Reference:
Domain: Development with AWS Services
Setup a Web Server environment and a .ebextensions file
Correct answer
Setup a Worker environment and a cron.yaml file
Setup a Worker environment and a .ebextensions file
Setup a Web Server environment and a cron.yaml file
Overall explanation
Correct option:
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
Elastic BeanStalk Key Concepts:
Setup a Worker environment and a cron.yaml file
An environment is a collection of AWS resources running an application version. An environment that pulls tasks from an Amazon Simple Queue Service (Amazon SQS) queue runs in a worker environment tier.
If your AWS Elastic Beanstalk application performs operations or workflows that take a long time to complete, you can offload those tasks to a dedicated worker environment. Decoupling your web application front end from a process that performs blocking operations is a common way to ensure that your application stays responsive under load.
For a worker environment, you need a cron.yaml file to define the cron jobs that perform the repetitive tasks.
Incorrect options:
Setup a Web Server environment and a cron.yaml file
Setup a Worker environment and a .ebextensions file
Setup a Web Server environment and a .ebextensions file
.ebextensions/ won't work to define cron jobs, and Web Server environments cannot be set up to perform repetitive and scheduled tasks. So these three options are incorrect.
References:
Domain: Development with AWS Services
Leverage horizontal scaling for the application's persistence layer by adding Oracle RAC on AWS
Leverage vertical scaling for the application instance by provisioning a larger Amazon EC2 instance size
Correct answer
Leverage horizontal scaling for the web and application tiers by using Auto Scaling groups and Application Load Balancer
Leverage SQS with asynchronous AWS Lambda calls to decouple the application and data tiers
Overall explanation
Correct option:
Leverage horizontal scaling for the web and application tiers by using Auto Scaling groups and Application Load Balancer - A horizontally scalable system is one that can increase capacity by adding more computers to the system. This is in contrast to a vertically scalable system, which is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage.
Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.
Elastic Load Balancing is used to automatically distribute your incoming application traffic across all the EC2 instances that you are running. You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed.
To use Elastic Load Balancing with your Auto Scaling group, you attach the load balancer to your Auto Scaling group to register the group with the load balancer. Your load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group.
When you use Elastic Load Balancing with your Auto Scaling group, it's not necessary to register individual EC2 instances with the load balancer. Instances that are launched by your Auto Scaling group are automatically registered with the load balancer. Likewise, instances that are terminated by your Auto Scaling group are automatically deregistered from the load balancer.
This option will require fewer design changes, it's mostly configuration changes and the ability for the web/application tier to be able to communicate across instances. Hence, this is the right solution for the current use case.
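As a minimal sketch of the configuration change this option implies (the Auto Scaling group name and target group ARN are placeholders), attaching an Application Load Balancer target group to an existing Auto Scaling group might look like this:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group and ALB target group
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-tier-asg",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web-tier/73e2d6bc24d8a067"
    ],
)

# Instances launched by the group are now registered with the load balancer
# automatically, and deregistered when the group terminates them.
```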
Incorrect options:
Leverage SQS with asynchronous AWS Lambda calls to decouple the application and data tiers - This is incorrect as it uses asynchronous AWS Lambda calls and the application uses synchronous transactions. The question says there should be no change in the application architecture.
Leverage horizontal scaling for the application's persistence layer by adding Oracle RAC on AWS - The issue is not with the persistence layer at all. This option has only been used as a distractor.
You can deploy scalable Oracle Real Application Clusters (RAC) on Amazon EC2 using Amazon Machine Images (AMI) on AWS Marketplace. Oracle RAC is a shared-everything database cluster technology from Oracle that allows a single database (a set of data files) to be concurrently accessed and served by one or many database server instances.
Leverage vertical scaling for the application instance by provisioning a larger Amazon EC2 instance size - Vertical scaling is just a band-aid solution and will not work long term.
References:
Domain: Development with AWS Services
Correct answer
Use the local directory /tmp
Use an S3 bucket
Use the local directory /opt
Create a tmp/ directory in the source zip file and use it
Overall explanation
Correct option:
Use the local directory /tmp
The /tmp directory gives you 512 MB of ephemeral scratch space (the default allocation) that your Lambda function can use during invocations.
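A tiny handler sketch (the file name and payload are arbitrary) showing scratch writes going to /tmp:

```python
import os

def handler(event, context):
    # /tmp is the only writable path in the Lambda execution environment;
    # files written here may persist across warm invocations of the same
    # execution environment, but it is not durable storage.
    scratch_path = "/tmp/working-file.json"
    with open(scratch_path, "w") as f:
        f.write('{"status": "processing"}')

    return {"bytes_written": os.path.getsize(scratch_path)}
```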
Incorrect options:
Create a tmp/ directory in the source zip file and use it - This option has been added as a distractor; the deployment package is extracted to a read-only location, so you can't write into a directory shipped inside the zip.
Use the local directory /opt - This is where Lambda layers are mounted; it is read-only at runtime, so it can't be used as scratch space.
Use an S3 bucket - An S3 bucket provides durable rather than temporary storage, and the data would outlive the Lambda function, so this option is incorrect.
Reference:
Domain: Development with AWS Services
Deleting a file is a recoverable operation
Correct answer
Versioning can be enabled only for a specific folder
Any file that was unversioned before enabling versioning will have the 'null' version
Overwriting a file increases its versions
Overall explanation
Correct option:
Versioning can be enabled only for a specific folder
This statement is false, which is why it is the answer here: versioning is a bucket-level setting, so the versioning state applies to all (never some) of the objects in that bucket. The first time you enable a bucket for versioning, objects in it are thereafter always versioned and given a unique version ID.
Versioning Overview:
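To underline the bucket-level nature of the setting, a short boto3 sketch (the bucket name and prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Versioning is a bucket-level setting; it cannot be enabled for a single
# prefix ("folder").
s3.put_bucket_versioning(
    Bucket="my-versioned-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# After enabling, every overwrite creates a new object version, and a delete
# inserts a delete marker; previous versions remain listable.
versions = s3.list_object_versions(Bucket="my-versioned-bucket", Prefix="reports/")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
```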
Incorrect options:
Overwriting a file increases its versions - If you overwrite an object (file), it results in a new object version in the bucket. You can always restore the previous version.
Deleting a file is a recoverable operation - Correct, when you delete an object (file), Amazon S3 inserts a delete marker, which becomes the current object version and you can restore the previous version.
Any file that was unversioned before enabling versioning will have the 'null' version - Objects stored in your bucket before you set the versioning state have a version ID of null. Those existing objects in your bucket do not change.
Reference:
Domain: Development with AWS Services
Enable DAX for DynamoDB and ElastiCache Memcached for S3
Correct answer
Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3
Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3
Enable ElastiCache Redis for DynamoDB and CloudFront for S3
Overall explanation
Correct option:
Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3
DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second.
DAX is tightly integrated with DynamoDB—you simply provision a DAX cluster, use the DAX client SDK to point your existing DynamoDB API calls at the DAX cluster, and let DAX handle the rest. Because DAX is API-compatible with DynamoDB, you don't have to make any functional application code changes. DAX is used to natively cache DynamoDB reads.
CloudFront is a content delivery network (CDN) service that delivers static and dynamic web content, video streams, and APIs around the world, securely and at scale. By design, delivering data out of CloudFront can be more cost-effective than delivering it from S3 directly to your users.
When a user requests content that you serve with CloudFront, their request is routed to a nearby Edge Location. If CloudFront has a cached copy of the requested file, CloudFront delivers it to the user, providing a fast (low-latency) response. If the file they’ve requested isn’t yet cached, CloudFront retrieves it from your origin – for example, the S3 bucket where you’ve stored your content.
So, you can use CloudFront to improve application performance to serve static content from S3.
Incorrect options:
Enable ElastiCache Redis for DynamoDB and CloudFront for S3
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store.
ElastiCache for Redis Overview:
Although you can integrate Redis with DynamoDB, it's much more involved from a development perspective. For the given use case, DAX is a much better fit.
Enable DAX for DynamoDB and ElastiCache Memcached for S3
Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3
Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that can be used as a cache or a data store. Amazon ElastiCache for Memcached is a great choice for implementing an in-memory cache to decrease access latency, increase throughput, and ease the load off your relational or NoSQL database.
ElastiCache cannot be used as a cache to serve static content from S3, so both these options are incorrect.
References:
Domain: Development with AWS Services
--page-size
Correct answer
--projection-expression
--max-items
--filter-expression
Overall explanation
Correct option:
--projection-expression
A projection expression is a string that identifies the attributes you want. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated.
To read data from a table, you use operations such as GetItem, Query, or Scan. DynamoDB returns all of the item attributes by default. To get just some, rather than all of the attributes, use a projection expression.
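The boto3 equivalent of the --projection-expression CLI option looks like the sketch below; the table, key, and attribute names are assumptions:

```python
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.get_item(
    TableName="Products",                      # hypothetical table
    Key={"product_id": {"S": "SKU-1001"}},
    # Return only these attributes instead of the whole item;
    # #n is an alias because "name" collides with a DynamoDB reserved word.
    ProjectionExpression="#n, price",
    ExpressionAttributeNames={"#n": "name"},
)
print(response.get("Item"))
```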
Incorrect options:
--filter-expression - If you need to further refine the Query results, you can optionally provide a filter expression. A filter expression determines which items within the Query results should be returned to you. All of the other results are discarded. A filter expression is applied after Query finishes, but before the results are returned. Therefore, a Query will consume the same amount of read capacity, regardless of whether a filter expression is present.
--page-size - You can use the --page-size option to specify that the AWS CLI requests a smaller number of items from each call to the AWS service. The CLI still retrieves the full list but performs a larger number of service API calls in the background and retrieves a smaller number of items with each call.
--max-items - To include fewer items at a time in the AWS CLI output, use the --max-items option. The AWS CLI still handles pagination with the service as described above, but prints out only the number of items at a time that you specify.
Reference:
Domain: Development with AWS Services
Correct answer
Use a caching strategy to write to the backend first and then invalidate the cache
Use a caching strategy to write to the backend first and wait for the cache to expire via TTL
Use a caching strategy to update the cache and the backend at the same time
Use a caching strategy to write to the cache directly and sync the backend at a later time
Overall explanation
Correct option:
Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-Source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing.
Broadly, you can set up two types of caching strategies:
Lazy Loading
Write-Through
Use a caching strategy to write to the backend first and then invalidate the cache
This option is similar to the write-through strategy in that the application writes to the backend first and then invalidates the corresponding cache entry. Because the entry is invalidated, the caching engine fetches the latest value from the backend on the next read, thereby making sure that the product prices and product descriptions stay consistent with the backend.
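A minimal sketch of this strategy using the redis-py client; the endpoint is a placeholder and the in-memory dict stands in for the real backend database purely to keep the example self-contained:

```python
import json
import redis  # assumes the redis-py client and a reachable ElastiCache Redis endpoint

# Hypothetical cache endpoint
cache = redis.Redis(host="my-redis.example.cache.amazonaws.com", port=6379)

# Stand-in for the real backend (e.g., an RDS table) to keep the sketch runnable
backend_db = {"SKU-1001": {"price": 19.99, "description": "Travel mug"}}

def update_product_price(product_id: str, new_price: float) -> None:
    # 1) Write to the system of record first.
    backend_db[product_id]["price"] = new_price
    # 2) Invalidate the cached copy; the next read repopulates it from the
    #    backend, so the cache never serves a stale price.
    cache.delete(f"product:{product_id}")

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    product = backend_db[product_id]              # lazy-load on cache miss
    cache.set(key, json.dumps(product), ex=300)   # optional TTL as a safety net
    return product
```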
Incorrect options:
Use a caching strategy to update the cache and the backend at the same time - The cache and the backend cannot be updated at the same time via a single atomic operation as these are two separate systems. Therefore this option is incorrect.
Use a caching strategy to write to the backend first and wait for the cache to expire via TTL - This strategy could work if the TTL is really short. However, for the duration of this TTL, the cache would be out of sync with the backend, hence this option is not correct for the given use-case.
Use a caching strategy to write to the cache directly and sync the backend at a later time - This option is given as a distractor as this strategy is not viable to address the given use-case. The product prices and description on the cache must always stay consistent with the backend. You cannot sync the backend at a later time.
Reference:
Domain: Development with AWS Services
Correct answer
Create an AWS Lambda function that uses Amazon Simple Email Service to send an email notification to the concerned security team. Configure this function as Amazon Cognito post-authentication Lambda trigger
Configure Amazon Cognito user pools authenticated API operations and MFA API operations to send all login data to Amazon Kinesis Data Streams. Configure an AWS Lambda function to analyze these streams and trigger an SNS notification to the security team based on user access
Configure an AWS Lambda function as a trigger to Amazon Cognito identity pools authenticated API operations. Create the Lambda function to utilize the Amazon Simple Email Service to send an email notification to the concerned security team
Create an AWS Lambda function that uses Amazon Simple Email Service to send an email notification to the concerned security team. Configure this function as Amazon Cognito pre-authentication Lambda trigger
Overall explanation
Correct option:
Create an AWS Lambda function that uses Amazon Simple Email Service to send an email notification to the concerned security team. Configure this function as Amazon Cognito post-authentication Lambda trigger
Amazon Cognito invokes the post-authentication Lambda trigger after signing in a user, so you can add custom logic after Amazon Cognito authenticates the user.
Post-authentication Lambda flows:
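A sketch of what such a trigger function could look like (the sender and recipient addresses are placeholders, and the event fields shown follow the documented post-authentication trigger shape):

```python
import boto3

ses = boto3.client("ses")

# Post-authentication Lambda trigger: Cognito invokes this after a successful
# sign-in.
def handler(event, context):
    user = event.get("userName", "unknown user")
    email = event.get("request", {}).get("userAttributes", {}).get("email", "n/a")

    ses.send_email(
        Source="security-alerts@example.com",
        Destination={"ToAddresses": ["security-team@example.com"]},
        Message={
            "Subject": {"Data": f"Cognito sign-in: {user}"},
            "Body": {"Text": {"Data": f"User {user} ({email}) just signed in."}},
        },
    )

    # The trigger must return the event object back to Cognito unchanged.
    return event
```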
Incorrect options:
Create an AWS Lambda function that uses Amazon Simple Email Service to send an email notification to the concerned security team. Configure this function as Amazon Cognito pre-authentication Lambda trigger - Pre-authentication Lambda trigger: Amazon Cognito invokes this trigger when a user attempts to sign in so that you can create custom validation that accepts or denies the authentication request. This is not useful for the current use case since we want to track user login activity which happens post-authentication.
Configure an AWS Lambda function as a trigger to Amazon Cognito identity pools authenticated API operations. Create the Lambda function to utilize the Amazon Simple Email Service to send an email notification to the concerned security team - This statement is incorrect. Amazon Cognito identity pools (federated identities) enable you to create unique identities for your users and federate them with identity providers. Cognito identity pools are for authorization and not for authentication.
Configure Amazon Cognito user pools authenticated API operations and MFA API operations to send all login data to Amazon Kinesis Data Streams. Configure an AWS Lambda function to analyze these streams and trigger an SNS notification to the security team based on user access - This is a made-up option given only as a distractor.
References:
Domain: Development with AWS Services
Adding both an LSI and a GSI to a table is not recommended by AWS best practices as this is a known cause for creating throttles
The LSI is throttling so you need to provision more RCU and WCU to the LSI
Correct answer
The GSI is throttling so you need to provision more RCU and WCU to the GSI
Metrics are lagging in your CloudWatch dashboard and you should see the RCU and WCU peaking for the main table in a few minutes
Overall explanation
Correct option:
The GSI is throttling so you need to provision more RCU and WCU to the GSI
DynamoDB supports two types of secondary indexes:
Global secondary index — An index with a partition key and a sort key that can be different from those on the base table. A global secondary index is considered "global" because queries on the index can span all of the data in the base table, across all partitions. A global secondary index is stored in its own partition space away from the base table and scales separately from the base table.
Local secondary index — An index that has the same partition key as the base table, but a different sort key. A local secondary index is "local" in the sense that every partition of a local secondary index is scoped to a base table partition that has the same partition key value.
Differences between GSI and LSI:
If you perform heavy write activity on the table, but a global secondary index on that table has insufficient write capacity, then the write activity on the table will be throttled. To avoid potential throttling, the provisioned write capacity for a global secondary index should be equal or greater than the write capacity of the base table since new updates will write to both the base table and global secondary index.
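For illustration, raising a GSI's provisioned capacity independently of the base table via boto3 (the table name, index name, and capacity values are placeholders):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Raise the GSI's provisioned throughput independently of the base table.
dynamodb.update_table(
    TableName="Orders",
    GlobalSecondaryIndexUpdates=[
        {
            "Update": {
                "IndexName": "customer-index",
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 500,
                    # Keep GSI WCU >= base table WCU to avoid throttling writes
                    "WriteCapacityUnits": 1000,
                },
            }
        }
    ],
)
```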
Incorrect options:
The LSI is throttling so you need to provision more RCU and WCU to the LSI - An LSI uses the RCU and WCU of the base table, so you can't provision separate RCU and WCU for the LSI.
Adding both an LSI and a GSI to a table is not recommended by AWS best practices as this is a known cause for creating throttles - This option has been added as a distractor. It is fine to have LSI and GSI together.
Metrics are lagging in your CloudWatch dashboard and you should see the RCU and WCU peaking for the main table in a few minutes - This could be a reason, but in this case, the GSI is at fault as the application has been running fine for months.
References:
Domain: Development with AWS Services
Upload the code through the AWS console and upload the dependencies as a zip
Zip the function as-is with a package.json file so that AWS Lambda can resolve the dependencies for you
Correct answer
Put the function and the dependencies in one folder and zip them together
Zip the function and the dependencies separately and upload them in AWS Lambda as two parts
Overall explanation
Correct option:
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
How Lambda function works:
Put the function and the dependencies in one folder and zip them together
A deployment package is a ZIP archive that contains your function code and dependencies. You need to create a deployment package if you use the Lambda API to manage functions, or if you need to include libraries and dependencies other than the AWS SDK. You can upload the package directly to Lambda, or you can use an Amazon S3 bucket, and then upload it to Lambda. If the deployment package is larger than 50 MB, you must use Amazon S3. This is the standard way of packaging Lambda functions.
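As a hedged sketch of that packaging flow (the file paths and function name are placeholders), building the zip in memory and uploading it with boto3 could look like this:

```python
import io
import zipfile
import boto3

lambda_client = boto3.client("lambda")

# Build a single ZIP archive containing the handler and its dependencies.
# In practice you would add your vendored libraries next to the handler file.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("lambda_function.py")        # handler code
    # zf.write("requests/__init__.py")    # ...plus any bundled dependencies
buffer.seek(0)

lambda_client.update_function_code(
    FunctionName="report-generator",      # hypothetical existing function
    ZipFile=buffer.read(),                # packages > 50 MB must go via S3 instead
)
```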
Incorrect options:
Zip the function as-is with a package.json file so that AWS Lambda can resolve the dependencies for you
Upload the code through the AWS console and upload the dependencies as a zip
Zip the function and the dependencies separately and upload them in AWS Lambda as two parts
These three options are incorrect: a zip deployment package must bundle the function code together with every dependency it needs. Lambda does not resolve dependencies for you from a package.json, accept split uploads, or take the dependencies separately from the code.
Reference:
Domain: Development with AWS Services
Correct answer
DynamoDB DAX
DynamoDB Streams
More partitions
ElastiCache
Overall explanation
Correct option:
DynamoDB DAX
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement: from milliseconds to microseconds: even at millions of requests per second.
Incorrect options:
DynamoDB Streams - A stream record contains information about a data modification to a single item in a DynamoDB table. This is not the correct option for the given use-case.
ElastiCache - ElastiCache can cache the results from almost anything, but you would need to modify your code to check the cache before querying the main data store. Since the given use case mandates minimal effort, this option is not correct.
More partitions - This option has been added as a distractor as DynamoDB handles that for you automatically.
Reference:
Domain: Development with AWS Services
Correct answer
Create a config file in the .ebextensions folder to configure the Load Balancer
Configure Health Checks
Open up the port 80 for the security group
Use a separate CloudFormation template to load the SSL certificate onto the Load Balancer
Overall explanation
Correct option:
The simplest way to use HTTPS with an Elastic Beanstalk environment is to assign a server certificate to your environment's load balancer. When you configure your load balancer to terminate HTTPS, the connection between the client and the load balancer is secure. Backend connections between the load balancer and EC2 instances use HTTP, so no additional configuration of the instances is required.
Create a config file in the .ebextensions folder to configure the Load Balancer
To update your AWS Elastic Beanstalk environment to use HTTPS, you need to configure an HTTPS listener for the load balancer in your environment. Two types of load balancers support an HTTPS listener: Classic Load Balancer and Application Load Balancer.
Example .ebextensions/securelistener-alb.config
Use this example when your environment has an Application Load Balancer. The example uses options in the aws:elbv2:listener namespace to configure an HTTPS listener on port 443 with the specified certificate. The listener routes traffic to the default process.
Incorrect options:
Use a separate CloudFormation template to load the SSL certificate onto the Load Balancer - A separate CloudFormation template won't be able to mutate the state of a Load Balancer managed by Elastic Beanstalk, so this option is incorrect.
Open up the port 80 for the security group - Port 80 is for HTTP traffic, so this option is incorrect.
Configure Health Checks - Health Checks are not related to SSL certificates, so this option is ruled out.
References:
Domain: Development with AWS Services
When a read is done on a bucket, there's a grace period of 5 minutes to do the same read again
Correct answer
The S3 bucket policy authorizes reads
The EC2 instance is using cached temporary IAM credentials
Removing an instance role from an EC2 instance can take a few minutes before being active
Overall explanation
Correct option:
The S3 bucket policy authorizes reads
When an EC2 instance makes requests to S3, authorization is evaluated across the union of the identity-based policies attached to the instance's IAM role and the S3 bucket policy: the request is allowed if either policy allows it and no policy explicitly denies it.
For the given use case, since the IAM role has been removed, only the S3 bucket policy comes into effect, and it authorizes reads.
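For illustration only, a read-only bucket policy of the kind the explanation describes, applied with boto3; the bucket name and principal ARN are placeholders, and in practice the principal would have to match however the requests are actually being authenticated:

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "shared-reports-bucket"  # hypothetical bucket

# Allows reads (GetObject) but grants no write actions, which would explain
# why reads keep working while writes fail.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:root"},  # placeholder
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```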
Incorrect options:
The EC2 instance is using cached temporary IAM credentials - Since the IAM instance role has been removed, that would not be the case.
Removing an instance role from an EC2 instance can take a few minutes before being active - It is immediately active and even if it wasn't, it wouldn't make sense as we can still do reads but not writes.
When a read is done on a bucket, there's a grace period of 5 minutes to do the same read again - This is not true. Every single request is evaluated against IAM in the AWS model.
Reference:
Domain: Development with AWS Services
HTTP 502: Bad gateway
HTTP 500: Internal server error
HTTP 504: Gateway timeout
Correct answer
HTTP 503: Service unavailable
Overall explanation
Correct option:
HTTP 503: Service unavailable
The Load Balancer generates the HTTP 503: Service unavailable error when the target groups for the load balancer have no registered targets.
Incorrect options:
HTTP 500: Internal server error
HTTP 502: Bad gateway
HTTP 504: Gateway timeout
Here is a summary of the possible causes for these error types:
Reference: