Stéphane Maarek Course Study Notes
You have attached an Internet Gateway to your VPC, but your EC2 instances still don't have access to the internet. What is NOT a possible issue?
Route Tables are missing entries
The EC2 instances don't have public IPs
The Security Group does not allow traffic in
The NACL does not allow network traffic out
When using VPC Endpoints, what are the only two AWS services that have a Gateway Endpoint available?
Amazon S3 & Amazon SQS
Amazon SQS & DynamoDB
Amazon S3 & DynamoDB
You need to set up a dedicated connection between your on-premises corporate datacenter and AWS Cloud. This connection must be private, consistent, and traffic must not travel through the Internet. Which AWS service should you use?
Site-to-Site VPN
AWS PrivateLink
Amazon EventBridge
AWS Direct Connect
Cross-Origin Resource Sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. To learn more about CORS, go here: https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
Which S3 encryption method mandates that you use HTTPS while uploading/downloading objects?
SSE-C (Correct)
SSE-S3
SSE-KMS
Client-Side Encryption
You would like all your files in an S3 bucket to be encrypted by default. What is the optimal way of achieving this?
Use a bucket policy that forces HTTPS connections
Do nothing, Amazon S3 automatically encrypts new objects using Server-Side Encryption with S3-Managed Keys (SSE-S3) (Correct)
Enable Versioning
You are looking to provide temporary URLs to a growing list of federated users to allow them to perform a file upload on your S3 bucket to a specific location. What should you use?
S3 CORS
S3 Pre-Signed URL
S3 Bucket Policies
IAM Users
S3 Pre-Signed URLs are temporary URLs that you generate to grant time-limited access to some actions in your S3 bucket.
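For illustration only (the bucket name and object key below are made-up placeholders), a pre-signed upload URL can be generated with the AWS SDK, for example boto3; this is a minimal sketch, not the only way to do it:

import boto3

s3 = boto3.client("s3")

# Generate a URL that lets the holder PUT an object to this exact key for one hour
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-example-bucket", "Key": "uploads/user-123/report.csv"},
    ExpiresIn=3600,  # seconds; the URL stops working after this
)
print(upload_url)  # hand this URL to the federated user; it expires automatically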
You have paid content stored in an S3 bucket. You want to distribute that content globally, so you have set up a CloudFront Distribution and configured the S3 bucket to only exchange data with your CloudFront Distribution. Which CloudFront feature allows you to securely distribute this paid content?
Origin Access Control
S3 Pre-Signed URL
CloudFront Signed URL
CloudFront Invalidations
CloudFront Signed URLs are commonly used to distribute paid content through dynamically generated signed URLs.
You have a CloudFront Distribution that serves your website hosted on a fleet of EC2 instances behind an Application Load Balancer. All your clients are from the United States, but you found that some malicious requests are coming from other countries. What should you do to only allow users from the US and block other countries?
Use Origin Access Control
Use CloudFront Geo Restriction (Correct)
Set up a Security Group and attach it to your CloudFront Distribution
Use a Route 53 Latency record and attach it to CloudFront
You have a static website hosted on an S3 bucket. You have created a CloudFront Distribution that points to your S3 bucket to better serve your requests and improve performance. After a while, you noticed that users can still access your website directly from the S3 bucket. You want to enforce users to access the website only through CloudFront. How would you achieve that?
Configure your CloudFront Distribution and create an Origin Access Control, then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution (Correct)
Send an email to your clients and tell them to not use the S3 endpoint
Use S3 Access Points to redirect clients to CloudFront
A website is hosted on a set of EC2 instances fronted by an Application Load Balancer. You have created a CloudFront Distribution and set up its origin to point to your ALB. What should you use to provide access to hundreds of private files served by your CloudFront distribution?
CloudFront Signed URLs
CloudFront Origin Access Control
CloudFront HTTPS Encryption
CloudFront Signed Cookies (Correct)
You are hosting highly dynamic content in an S3 bucket in the us-east-1 region. You want to make this data available with low latency in Singapore's ap-southeast-1 region. What do you recommend?
Amazon CloudFront
S3 Cross-Region Replication (Correct)
S3 Pre-Signed URLs
When you're configuring a CloudFront distribution to use Signed URLs/Cookies, it is recommended to use ............................ signer instead of ................................ signer.
Trusted Key Group, CloudFront Key Pair (Correct)
CloudFront Key Pair, Trusted Key Group
What does this S3 bucket policy do?
Forces GetObject requests to be encrypted if coming from CloudFront
Only allows the S3 bucket content to be accessed from your CloudFront Distribution (Correct)
Only allows GetObject type of requests on the S3 bucket from anybody
This is good practice as SQS will hold the failed messages for several days, so we have time to consume and analyze them.
A Lambda function that's invoked asynchronously fails to process events from time to time after 3 retries. To troubleshoot the issue, you would like to collect and analyze these events later on. What is the best practice you can do?
Add logging statements for all events in your Lambda function, then filter CloudWatch Logs
Invoke your function synchronously
Add a Dead Letter Queue (DLQ) to send messages to SQS (Correct)
Add a Dead Letter Queue (DLQ) to send messages to SNS
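A minimal boto3 sketch of attaching an SQS Dead Letter Queue to a Lambda function (the function name and queue ARN are placeholders, and the function's execution role must be allowed to send to the queue):

import boto3

lambda_client = boto3.client("lambda")
# Events that still fail after the async retries are sent to the DLQ target
lambda_client.update_function_configuration(
    FunctionName="my-example-function",
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-example-dlq"},
)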
While updating a Lambda function, you get the following exception: An error occurred: InvalidParameterValueException when calling the UpdateFunctionCode operation: Unzipped size must be smaller than 262144000 bytes. What is the cause of this issue?
You have uploaded a deployment package .zip larger than 50 MB to AWS Lambda
The uncompressed deployment package .zip exceeds AWS Lambda limits (Correct)
The deployment package .zip file is corrupted
Unfortunately, there's a 4 KB limit for Lambda environment variables.
Question 7:
A Lambda function makes requests to a 3rd party API. To successfully make the requests, the 3rd party API requires you to send a token which is a long string of 8 KB. Where should you place this token?
Place it in the Lambda function's environment variables (Wrong)
Place it in the deployment package .zip file (Correct)
Which of the following AWS services does NOT require a Lambda Event Source Mapping?
DynamoDB Streams
Kinesis Data Streams
Simple Queue Service (SQS)
Simple Notification Service (SNS) (Correct)
Because SNS invokes Lambda asynchronously (push-based), no Event Source Mapping is required.
Which of the following is the recommended way to send the result of an asynchronous Lambda function to an SQS queue?
Write it in the Lambda function code
Use Lambda Destinations (Correct)
Use Lambda Layers
Use a Dead Letter Queue (DLQ)
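As a rough sketch (the function name and queue ARNs are placeholders), a Lambda Destination for asynchronous invocations can be configured with boto3 like this:

import boto3

lambda_client = boto3.client("lambda")
# Route the result of async invocations: successes and failures each get their own destination
lambda_client.put_function_event_invoke_config(
    FunctionName="my-example-function",
    DestinationConfig={
        "OnSuccess": {"Destination": "arn:aws:sqs:us-east-1:123456789012:results-queue"},
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:failures-queue"},
    },
)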
Question 23:
You want to give another AWS account access to invoke a Lambda function in your AWS account. Which of the following can NOT be used to do so?
Lambda Execution Role (Correct)
Lambda Resource-based Policy
Cross-account IAM Role
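A short sketch of the resource-based policy approach (the function name, statement ID, and account ID are placeholders): a statement is added to the function's policy allowing the other account to invoke it.

import boto3

lambda_client = boto3.client("lambda")
# Allow AWS account 222233334444 to invoke this function
lambda_client.add_permission(
    FunctionName="my-example-function",
    StatementId="AllowCrossAccountInvoke",
    Action="lambda:InvokeFunction",
    Principal="222233334444",
)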
You have a CloudFormation template that declares a Lambda function. The Lambda function's code is stored in an S3 bucket with Versioning enabled. You use S3Bucket, S3Key, and S3ObjectVersion in the CloudFormation template to reference the code. You have updated the Lambda function's code then uploaded it to S3, but the function hasn't been updated. What should you do?
Update S3Bucket to reference the updated code
Update S3Key to reference the updated code
Update S3ObjectVersion to reference the updated code (Correct)
A statement in an IAM Policy consists of Sid, Effect, Principal, Action, Resource, and Condition. Version is part of the IAM Policy itself, not the statement.
An IAM policy consists of one or more statements. A statement in an IAM Policy consists of the following, EXCEPT:
Effect
Principal
Version (Correct)
Action
Resource
To enable random host ports, set the host port to 0 (or leave it empty), which allows multiple containers of the same type to launch on the same EC2 container instance.
Question 10:
You have a containerized application stored as Docker images in an ECR repository, that you want to run on an ECS cluster. You're trying to launch two copies of the same Docker container on the same EC2 container instance. The first container successfully starts, but the second container doesn't. You have checked that there's enough CPU and RAM on the EC2 container instance. What is the problem here?
The EC2 container instance doesn't have the required IAM permissions to fetch Docker images from the ECR repository
The host port defined in the task definition (Correct)
The container port defined in the task definition
EC2 container instances can only run one container instance for each Docker image
Security Groups do not matter when an EC2 instance registers with the ECS service. By default, Security Groups allow all outbound traffic.
Question 11:
A newly launched EC2 container instance can't be registered with your ECS cluster. What is NOT a reason for this issue?
The ECS agent is not running
The AMI used isn't the Amazon ECS-optimized AMI
The EC2 container instance is missing IAM permissions
The security group on the EC2 instance does not allow inbound traffic (Correct)
Question 12:
You want to pull Docker images from a private ECR repository. Which AWS CLI command can you use?
docker login -u $AWS_ACCESS_KEY_ID -p $AWS_SECRET_ACCESS_KEY $ECR_URL
docker pull $ECR_IMAGE_URL
docker login -u $AWS_USERNAME -p $AWS_PASSWORD $ECR_URL
docker pull $ECR_IMAGE_URL
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_URL
docker pull $ECR_IMAGE_URL (Correct)
docker build -t $ECR_URL .
docker pull $ECR_IMAGE_URL
Which ECS Task Placement strategy is the MOST cost-efficient?
spread
binpack (Correct)
random
Which ECS Task Placement constraint allows you to place each ECS Task on a different EC2 container instance?
distinctInstance (Correct)
memberOf
CloudFormation references a template from Amazon S3, no matter what. If you upload the template from the AWS console, it gets uploaded to Amazon S3 behind the scenes, and CloudFormation references that template from there.
Question 6:
Before a CloudFormation template can be used by CloudFormation, it must be uploaded .............................
directly in CloudFormation
to Amazon S3 (Correct)
to AWS CodeCommit
When you write a CloudFormation template, you must specify the order in which CloudFormation should create your resources.
True
False
What is wrong with the following CloudFormation template?
The parameter AWS::Region is missing so !Ref 'AWS::Region' won't work
The !FindInMap function is using the wrong syntax. It should have only 2 arguments
The parameter EnvironmentName is missing
You can not define two mappings in a template
You have an e-commerce website and you are preparing for Black Friday, which is the biggest sale of the year. You expect your traffic to increase by 100x. Your website is already using an SQS Standard Queue. What should you do to prepare your SQS Queue?
Contact AWS Support to pre-warm your SQS Standard Queue
Enable Auto Scaling in your SQS queue
Increase the capacity of the SQS queue
Do nothing, SQS scales automatically (Correct)
You have an SQS Queue where each consumer polls 10 messages at a time and finishes processing them in 1 minute. After a while, you noticed that the same SQS messages are received by different consumers resulting in your messages being processed more than once. What should you do to resolve this issue?
Enable Long Polling
Add DelaySeconds parameter to the messages when being produced
Increase the Visibility Timeout (Correct)
Decrease the Visibility Timeout
When SQS Long Polling is enabled, Amazon SQS reduces the number of empty responses when there are no messages available to return and eliminates false empty responses (when SQS messages are available but aren't included in a response).
SQS Visibility Timeout is a period of time during which Amazon SQS prevents other consumers from receiving and processing the same message again. A message is hidden only after it is consumed from the queue. Increasing the Visibility Timeout gives the consumer more time to process the message and prevents duplicate reads of the message. (default: 30 sec., min.: 0 sec., max.: 12 hours)
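A rough boto3 sketch (the queue URL is a placeholder): the visibility timeout can be raised on the whole queue, or extended for a single in-flight message while the consumer is still working on it.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-example-queue"

# Raise the default visibility timeout of the queue to 2 minutes
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"VisibilityTimeout": "120"})

# Or extend it for a message the consumer is still processing
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=120,
    )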
You have a fleet of EC2 instances (consumers) managed by an Auto Scaling Group that are used to process messages in an SQS Standard Queue. Lately, you have found a lot of messages processed twice and, after further investigation, you found that these messages can not be processed successfully. How would you troubleshoot (debug) why these messages fail?
SQS Standard Queue
SQS Dead Letter Queue (Correct)
SQS Delay Queue
SQS FIFO Queue
SQS Dead Letter Queue is where other SQS queues (source queues) can send messages that can't be processed (consumed) successfully. It's useful for debugging as it allows you to isolate problematic messages so you can debug why their processing doesn't succeed.
SQS FIFO (First-In-First-Out) Queues have all the capabilities of the SQS Standard Queue, plus the following two features. First, the order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it. Second, duplicated messages are not introduced into the queue.
SNS + SQS Fan-out is a common pattern where only one message is sent to the SNS topic and then "fan-out" to multiple SQS queues. This approach has the following features: it's fully decoupled, no data loss, and you have the ability to add more SQS queues (more applications) over time.
You have a Kinesis data stream with 6 shards provisioned. This data stream usually receives 5 MB/s of data and sends out 8 MB/s. Occasionally, your traffic spikes up to 2x and you get a ProvisionedThroughputExceededException exception. What should you do to resolve the issue?
Enable Kinesis Replication
Add more Shards
Use SQS as a buffer to Kinesis
You have a website where you want to analyze clickstream data such as the sequence of clicks a user makes, the amount of time a user spends, where the navigation begins, and how it ends. You decided to use Amazon Kinesis, so you have configured the website to send these clickstream data to a Kinesis data stream. While checking the data sent to your Kinesis data stream, you found that the users' data is not ordered and the data for one individual user is spread across many shards. How would you fix this problem?
There are too many shards, you should only use 1 shard
You shouldn't use multiple consumers, only one and it should re-order data
For each record sent to Kinesis add a partition key that represents the identity of the user (Correct)
Kinesis Data Stream uses the partition key associated with each data record to determine which shard a given data record belongs to. When you use the identity of each user as the partition key, this ensures the data for each user is ordered, since it is sent to the same shard.
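A minimal producer sketch (the stream name is a placeholder) showing how using the user's identity as the partition key keeps each user's records on the same shard, and therefore in order:

import json
import boto3

kinesis = boto3.client("kinesis")

def send_click_event(user_id: str, event: dict) -> None:
    # Records with the same PartitionKey hash to the same shard,
    # so one user's clickstream stays ordered
    kinesis.put_record(
        StreamName="clickstream-example",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=user_id,
    )

send_click_event("user-42", {"page": "/checkout", "action": "click"})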
Use Kinesis Data Analytics with Kinesis Data Streams as the underlying source of data.
Question 9:
Which AWS service is most appropriate when you want to perform real-time analytics on streams of data?
Amazon SQS
Amazon SNS
Amazon Kinesis Data Analytics (Correct)
Amazon Kinesis Data Firehose
This is a perfect combination of technologies for loading near real-time data into S3 and Redshift. Kinesis Data Firehose supports custom data transformations using AWS Lambda.
Question 10:
You are running an application that produces a large amount of real-time data that you want to load into S3 and Redshift. Also, these data need to be transformed before being delivered to their destination. Which architecture would you choose?
SQS + AWS Lambda
SNS + HTTP Endpoint
Kinesis Data Streams + Kinesis Data Firehose (Correct)
Note: Kinesis Data Firehose is now a supported SNS subscriber, but Kinesis Data Streams is not.
Question 11:
Which of the following is NOT a supported subscriber for AWS SNS?
Amazon Kinesis Data Streams (Correct)
Amazon SQS
AWS Lambda
HTTP(S) Endpoint
SNS Message Filtering allows you to filter messages sent to an SNS topic's subscriptions.
Question 13:
You have an SNS topic with 1000s of subscribers and you want to send some messages to certain subscribers and not others. What SNS feature allows you to do so?
SNS Message Filtering (Correct)
SNS Access Policy
SNS FIFO Topic
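A hedged boto3 sketch (topic and queue ARNs are placeholders): a filter policy is attached to a subscription, so that subscriber only receives messages whose attributes match.

import json
import boto3

sns = boto3.client("sns")
# Only deliver messages whose "order_type" attribute equals "refund" to this SQS subscriber
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders-topic",
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:refunds-queue",
    Attributes={"FilterPolicy": json.dumps({"order_type": ["refund"]})},
)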
Your SQS costs are extremely high. Upon closer look, you notice that your consumers are polling your SQS queues too often and getting empty data as a result. What should you do?
Decrease the number of consumers
Enable Long Polling (Correct)
Increase the Visibility Timeout
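A short consumer sketch (the queue URL is a placeholder) enabling long polling per request with WaitTimeSeconds, which removes most empty responses; the same effect can be achieved queue-wide via the ReceiveMessageWaitTimeSeconds queue attribute.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-example-queue"

# Wait up to 20 seconds for messages instead of returning immediately (long polling)
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in resp.get("Messages", []):
    print(msg["Body"])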
SQS message size limit is 256 KB, but you want to send messages of 1 MB. You should ........................
Contact AWS Support to increase SQS Message Size Limit
Modify your SQS queue configuration and set the maximum message size to 2 MB
Use the SQS Extended Client Library (Correct)
Which SQS FIFO message attribute allows multiple messages belonging to the same group to be processed in order?
MessageDeduplicationId
MessageGroupId (Correct)
MessageHash
MessageOrderId
Which SQS FIFO message attribute prevents messages with the same deduplication ID from being delivered during a 5-minute period?
MessageDeduplicationId (Correct)
MessageGroupId
MessageHash
MessageOrderId
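A producer sketch for a FIFO queue (the queue URL and values are placeholders) showing both attributes: MessageGroupId for ordering within a group, MessageDeduplicationId for the 5-minute deduplication window.

import boto3

sqs = boto3.client("sqs")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
    MessageBody='{"order_id": "1001", "status": "PAID"}',
    MessageGroupId="customer-42",               # messages in this group are processed in order
    MessageDeduplicationId="order-1001-paid",   # duplicates with this ID are dropped for 5 minutes
)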
When using the Kinesis Client Library, each shard is read by only one KCL instance. So, if you have 10 shards, the maximum number of KCL instances you can have is 10.
Question 20:
A Kinesis Data Stream that you manage is experiencing an increase in traffic due to a sale day. Your Kinesis Data Stream currently has 6 shards, and you have been asked to split the shards so it has 10 shards. You have been using a consuming application based on the Kinesis Client Library (KCL) running on a set of EC2 instances. What's the maximum number of EC2 instances that can be deployed to process the messages in the shards?
1
6
10 (Correct)
20
You can have as many consumers as there are MessageGroupIDs for your SQS FIFO queues.
You are running an SQS FIFO queue with 10 message groups (defined by MessageGroupID). What's the maximum number of consumers that can consume simultaneously?
1
10 (Correct)
20
Infinite
You can configure a Kinesis Data Stream to keep records for a maximum of .................. days.
30 days
90 days
180 days
365 days (Correct)
You have a Kinesis Data Stream where you intermittently get a ProvisionedThroughputExceededException exception in your producers' applications. The following can be used to resolve the error, EXCEPT .......................
Use a highly distributed partition key
Retry with exponential backoff
Contact AWS Support to increase your Kinesis Data Stream's capacity (Correct)
Add more shards
What should you use to increase the read throughput for Kinesis Data Streams consumers up to 2 MB/s per consumer per shard?
Classic Fan-out Consumer
Enhanced Fan-out Consumer (Correct)
Kinesis Producer Library (KPL)
AWS SDK
You can configure an SQS queue to keep messages for a maximum of .................. days.
4 days
7 days
14 days (Correct)
30 days
This is a paid offering and is disabled by default. When enabled, the EC2 instance's metrics are available in 1-minute periods.
Question 1:
You have a couple of EC2 instances in which you would like their Standard CloudWatch Metrics to be collected every 1 minute. What should you do?
Enable CloudWatch Custom Metrics
Enable High Resolution
Enable Basic Monitoring
Enable Detailed Monitoring (Correct)
High-Resolution Custom Metrics can have a minimum resolution of ........................
10 seconds
1 second (Correct)
30 seconds
1 minute
You have an RDS DB instance that's configured to push its database logs to CloudWatch. You want to create a CloudWatch alarm if there's an Error found in the logs. How would you do that?
Create a CloudWatch Logs Metric Filter that filters the logs for the keyword Error, then create a CloudWatch Alarm based on that Metric Filter
Create a scheduled CloudWatch Event that triggers an AWS Lambda function every 1 hour, scans the logs, and notifies you through an SNS topic
Create an AWS Config Rule that monitors Error in your database logs and notifies you through an SNS topic
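A rough boto3 sketch of the Metric Filter plus Alarm approach (the log group name, metric names, and SNS topic ARN are placeholders):

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count every log event containing the word "Error" as a custom metric
logs.put_metric_filter(
    logGroupName="/aws/rds/instance/mydb/error",
    filterName="rds-error-count",
    filterPattern="Error",
    metricTransformations=[{
        "metricName": "RdsErrorCount",
        "metricNamespace": "MyApp/RDS",
        "metricValue": "1",
    }],
)

# Alarm whenever at least one error shows up in a 5-minute period
cloudwatch.put_metric_alarm(
    AlarmName="rds-errors",
    Namespace="MyApp/RDS",
    MetricName="RdsErrorCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:alerts"],
)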
How would you monitor your EC2 instance memory usage in CloudWatch?
Enable EC2 Detailed Monitoring
By default, the EC2 instance pushes memory usage to CloudWatch
Use Unified CloudWatch Agent to push memory usage as a custom metric to CloudWatch
Question 6:
A CloudWatch Alarm set on a High-Resolution Custom Metric can be triggered as often as ......................
1 second
10 seconds
30 seconds
1 minute
If you set an alarm on a high-resolution metric, you can specify a high-resolution alarm with a period of 10 seconds or 30 seconds, or you can set a regular alarm with a period of any multiple of 60 seconds.
You can use the CloudTrail Console to view the last 90 days of recorded API activity. For events older than 90 days, use Athena to analyze CloudTrail logs stored in the S3 bucket.
Question 10:
One of your teammates terminated an EC2 instance 4 months ago which had critical data. You don't know who did this, so you are going to review all API calls within this period using CloudTrail. You already have CloudTrail set up and configured to send logs to an S3 bucket. What should you do to find out who did this?
Use CloudTrail Event History in CloudTrail Console
Analyze CloudTrail logs in S3 bucket using Amazon Athena
You would like to test out a complex CloudWatch Alarm that responds to globally increased traffic on your application. You are in a test environment. How can you test out this alarm in a cost-effective and efficient manner?
Setup a global EC2 fleet and increase the request rate to your application until you reach the Alarm state
Change the alarm thresholds temporarily
Use the set-alarm-state CLI command
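The same thing can be done from the SDK; a minimal sketch (the alarm name is a placeholder):

import boto3

cloudwatch = boto3.client("cloudwatch")
# Temporarily force the alarm into the ALARM state to test the actions it triggers
cloudwatch.set_alarm_state(
    AlarmName="global-traffic-spike",
    StateValue="ALARM",
    StateReason="Testing alarm actions",
)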
Question 13:
You want to continuously monitor RAM usage for an application hosted on an EC2 instance. By default, CloudWatch doesn't push RAM usage, so you will use a CloudWatch custom metric. Which API call allows you to push custom metric data to CloudWatch?
SendMetricData
PutCustomMetric
PutMetricData
SendCustomMetric
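A minimal sketch (the namespace, dimension value, and metric value are placeholders) of pushing RAM usage as a custom metric with PutMetricData:

import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="MyApp/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": 73.5,
        "Unit": "Percent",
    }],
)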
By default, they never expire.
Question 14:
By default, all logs stored in CloudWatch Logs automatically expire after 7 days.
False
True
In CloudWatch Logs, the Log Retention Policy is defined at the ........................... level.
Log Streams
Log Groups
You have configured your application to use X-Ray to trace and troubleshoot your application requests. Your application traces appear in X-Ray when you run the application on your laptop, but they don't appear when you deploy the application to Elastic Beanstalk. What is a possible cause for this issue?
A config file .ebextensions/xray-daemon.config is missing in your code
You need to give your application the required IAM permissions from the X-Ray Console
You haven't configured your code correctly
You have configured your application to use X-Ray to trace and troubleshoot your application requests. Your application traces appear in X-Ray when you run the application on your laptop, but they don't appear when you deploy the application to an EC2 instance using AWS CodeDeploy. What is a possible cause for this issue?
The CodeDeploy appspec.yml file breaks the X-Ray integration. It is a well-known bug
The X-Ray daemon is not running on the EC2 instance
X-Ray integration needs to be enabled from the CodeDeploy Console
What should you do to configure X-Ray Daemon to send traces from multiple AWS accounts to a central AWS account?
Create an IAM user in the central account, then generate Access Keys and load them onto the X-Ray agent in the other accounts
Create an IAM role in the central account, then create IAM roles in the other accounts to assume this IAM role
You would like to add additional information to your X-Ray traces with the ability to search and filter through this information efficiently. What should you use?
Segments
Sampling
Annotations
Metadata
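As a rough sketch, assuming the aws-xray-sdk package for Python (segment name, annotation key, and metadata key are made up): annotations are indexed and searchable in filter expressions, while metadata is stored with the trace but not searchable.

from aws_xray_sdk.core import xray_recorder

with xray_recorder.in_segment("checkout") as segment:
    # Annotations are indexed, so you can filter traces on them
    segment.put_annotation("customer_tier", "premium")
    # Metadata is stored with the trace but is not searchable
    segment.put_metadata("cart_items", ["sku-1", "sku-2"])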
The following APIs can be used to write to X-Ray, EXCEPT .........................
BatchGetTraces
PutTraceSegments
PutTelemetryRecords
By default, the X-Ray SDK records the first request .................., and .................. of any additional requests.
each minute, 5%
every 2 seconds, 10%
each second, 10%
each second, 5%
DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to 10x performance improvement. It caches the most frequently used data, thus offloading the heavy reads on hot keys of your DynamoDB table, hence preventing the "ProvisionedThroughputExceededException" exception. DynamoDB Streams allows you to capture a time-ordered sequence of item-level modifications in a DynamoDB table. It's integrated with AWS Lambda so that you can create triggers that automatically respond to events in real-time.
Question 4:
You have developed a mobile application that uses DynamoDB as its datastore. You want to automate sending welcome emails to new users after they sign up. What is the most efficient way to achieve this?
Schedule a Lambda function to run every minute using CloudWatch Events, scan the entire table looking for new users
Enable SNS and DynamoDB integration
Enable DynamoDB Streams and configure it to invoke a Lambda function to send emails
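A minimal sketch (the stream ARN and function name are placeholders): once Streams is enabled on the table, an Event Source Mapping wires the stream to the email-sending Lambda function.

import boto3

lambda_client = boto3.client("lambda")
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Users/stream/2024-01-01T00:00:00.000",
    FunctionName="send-welcome-email",
    StartingPosition="LATEST",  # only process new item-level changes
    BatchSize=100,
)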
You are running an application in production that is leveraging DynamoDB as its datastore and is experiencing smooth sustained usage. There is a need to make the application run in development mode as well, where it will experience an unpredictable volume of requests. What is the most cost-effective solution that you recommend?
Use Provisioned Capacity Mode with Auto Scaling enabled for both development and production
Use Provisioned Capacity Mode with Auto Scaling enabled for production and use On-Demand Capacity Mode for development
Use Provisioned Capacity Mode with Auto Scaling enabled for development and use On-Demand Capacity Mode for production
Use On-Demand Capacity Mode for both development and production
The maximum size of an item in a DynamoDB table is ...................
400 KB
500 KB
400 MB
1 MB
Remember RCUs and WCUs are spread across all the table's partitions.
Question 13:
An application begins to receive a lot of ProvisionedThroughputExceededException exceptions from a DynamoDB table that you manage. After checking the CloudWatch metrics, you found that you haven't exceeded the total provisioned RCU. What is a possible cause for this?
Everything is good, just the CloudWatch metrics are slow to update
You have a Hot Partition / Hot Key
There's a bug in the application's code
You are using the GetItem API call to retrieve items from a DynamoDB table. Which of the following allows you to select only certain attributes from the item?
FilterExpression
ConditionalWrite
ProjectionExpression
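A small sketch (the table name, key, and attribute names are placeholders) of GetItem returning only selected attributes via a ProjectionExpression:

import boto3

table = boto3.resource("dynamodb").Table("Users")
resp = table.get_item(
    Key={"user_id": "42"},
    ProjectionExpression="user_id, first_name, last_login",  # only these attributes are returned
)
print(resp.get("Item"))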
What is the best way to delete all the data in a DynamoDB table?
DeleteTable and then CreateTable
Use a Scan and run BatchWriteItem
Use a Scan and run DeleteItem
You want to perform a Scan operation on a DynamoDB table to retrieve all the items. What should you do to increase the performance of your scan operation?
Increase the Limit parameter
Use Parallel Scans
Use Sequential Scans
Increase the RCU
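A rough sketch of a Parallel Scan (the table name and segment count are placeholders): each worker scans its own segment, so the table is read concurrently.

import boto3

table = boto3.resource("dynamodb").Table("Orders")
TOTAL_SEGMENTS = 4  # e.g. one segment per worker thread or process

def scan_segment(segment: int) -> list:
    items, start_key = [], None
    while True:
        kwargs = {"Segment": segment, "TotalSegments": TOTAL_SEGMENTS}
        if start_key:
            kwargs["ExclusiveStartKey"] = start_key
        resp = table.scan(**kwargs)
        items.extend(resp["Items"])
        start_key = resp.get("LastEvaluatedKey")
        if not start_key:
            return items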
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
Question 17:
You would like to make a Query to a DynamoDB table using an attribute that's NOT part of your table's Primary Key. What should you do to make this query efficient?
Nothing, Query supports non-key attributes
Create a Local Secondary Index
Create a Global Secondary Index
You are working on designing a new DynamoDB table where you want to make a Query using an attribute that's NOT part of your table's Primary Key. You need to use the >= predicate while keeping the same Partition Key. What should you do to make this query efficient?
Nothing, Use Query as is
Create a Local Secondary Index
Create a Global Secondary Index
Which Concurrency Model can be implemented using DynamoDB?
Optimistic Locking
Pessimistic Locking
No Locking
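A hedged sketch of optimistic locking with a conditional write (table, key, and attribute names are made up): the update only succeeds if the version number hasn't changed since the item was read.

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Accounts")
item = table.get_item(Key={"account_id": "42"})["Item"]
try:
    table.update_item(
        Key={"account_id": "42"},
        UpdateExpression="SET #bal = :b, #ver = :new_v",
        ConditionExpression="#ver = :old_v",  # fails if someone else updated the item first
        ExpressionAttributeNames={"#bal": "balance", "#ver": "version"},
        ExpressionAttributeValues={
            ":b": item["balance"] + 100,
            ":new_v": item["version"] + 1,
            ":old_v": item["version"],
        },
    )
except ClientError as e:
    if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("Item was modified concurrently - re-read and retry")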
A Global Secondary Index (GSI) uses an independent amount of RCU and WCU, and if it is throttled due to insufficient capacity, the main table will also be throttled.
Question 21:
You have an application running for over a year now using a DynamoDB table, with Provisioned RCUs and WCUs, without any throttling issues. There's a requirement for your table to support a second type of query, so you have decided to use the existing Local Secondary Index (LSI) and create a new Global Secondary Index (GSI) to support these queries. One month later, the table begins to experience throttling issues. After checking the table's CloudWatch metrics, you found that you haven't exceeded the table's Provisioned RCU and WCU. What should you do?
The LSI is throttling so you need to provision more RCU and WCU to the LSI
Adding both an LSI and a GSI to a table is not recommended by AWS best practices as this is a known cause for creating throttles
The GSI is throttling so you need to provision more RCU and WCU to the GSI
CloudWatch metrics takes time to propagate, you should see the RCU and WCU peaking for the main table in a few minutes
Which of the following AWS CLI options allows you to retrieve a subset of the item's attributes coming from a DynamoDB Scan operation?
--filter-expression
--page-size
--max-items
--projection-expression
You would like to paginate the results of a DynamoDB Scan operation in order to minimize the number of items returned in the CLI's output. Which of the following AWS CLI options should you use?
--page-size & --max-items
--page-size & --starting-token
--max-items & --starting-token
--filter-expression
You are developing a banking application that uses DynamoDB to store its data. You want to update both the Exchanges and the AccountBalance tables at the same time or not at all. Which DynamoDB feature allows you to do so?
DynamoDB Indexes
DynamoDB Transactions
DynamoDB Streams
DynamoDB TTL
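A minimal sketch (table names, keys, and values are placeholders) of TransactWriteItems updating both tables atomically: either both writes succeed or neither is applied.

import boto3

dynamodb = boto3.client("dynamodb")
dynamodb.transact_write_items(
    TransactItems=[
        {"Put": {
            "TableName": "Exchanges",
            "Item": {"exchange_id": {"S": "ex-1001"}, "amount": {"N": "250"}},
        }},
        {"Update": {
            "TableName": "AccountBalance",
            "Key": {"account_id": {"S": "42"}},
            "UpdateExpression": "SET #bal = #bal - :a",
            "ExpressionAttributeNames": {"#bal": "balance"},
            "ExpressionAttributeValues": {":a": {"N": "250"}},
        }},
    ]
)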
DynamoDB Streams records can be sent to the following, EXCEPT ............................
Simple Queue Service (SQS)
Kinesis Data Streams
AWS Lambda
You have configured a DynamoDB table with a TTL attribute to delete expired user sessions, but you noticed that some expired items still exist in queries you make. What should you do to resolve this problem?
Do nothing, DynamoDB automatically deletes expired items within 48 hours of expiration
This is a known bug in DynamoDB. Use a Lambda function to periodically delete the expired items
Contact AWS Support to periodically delete expired items
To redirect your API Gateway stage to the correct AWS Lambda Alias, you should use .........................
Canary Deployment
Stage Variables
X-Ray Integration
HTTP Endpoint
You have an API Gateway backed by a set of Lambda functions and you want to mask some fields in the output data returned by one of the Lambda functions. What should you use?
Mapping Templates
Custom Authorizer
Cognito Mask
Which specification allows you to import/export your API as code?
gRPC
SOAP
Swagger / OpenAPI
You are running an API that's backed by a Lambda function. Your API receives a large number of GET requests, which results in your Lambda function becoming overloaded and your bill starting to substantially increase. The response returns the same payload and changes only once each day. What should you do to handle all these requests and reduce your bill?
Enable mock responses
Integrate API Gateway with ElastiCache
Enable Caching for your Stage
API Gateway Caching is defined per ..................... with the default TTL .....................
API, 300 seconds
Stage, 300 seconds
API, 600 seconds
Stage, 600 seconds
How can clients invalidate the cache of an API from the client-side?
Using Stage Variables
Pass a URL parameter InvalidateCache
Pass the HTTP header Cache-Control: max-age=0
You would like to authenticate your users against Facebook before they are able to make requests to your API hosted by API Gateway. What should you use to make a seamless authentication integration?
Cognito Sync
DynamoDB Table with Lambda Authorizer
Cognito User Pools
Cognito Federated Identity Pools
Which of the following HTTP error code API Gateway returns where there are too many requests?
400
403
429
502
Which of the following CloudWatch metrics helps you analyze the timeout issues between your API Gateway and a Lambda function?
CacheHitCount
IntegrationLatency
CacheMissCount
Count
AWS CodePipeline is a fully managed continuous delivery (CD) service that helps you automate your release pipeline for fast and reliable application and infrastructure updates. It automates the build, test, and deploy phases of your release process every time there is a code change. It has direct integration with Elastic Beanstalk.
You would like to orchestrate your CICD pipeline to deliver all the way to Elastic Beanstalk. Which AWS service do you recommend?
AWS CodeBuild
AWS CodePipeline
AWS CloudFormation
AWS Simple Workflow Service
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of computing services such as EC2, Fargate, Lambda, and your on-premises servers. You can define the strategy you want to execute such as in-place or blue/green deployments.
Which AWS service helps you deploy your code to a fleet of EC2 instances with a specific strategy (e.g., Blue/Green deployment)?
AWS CodeBuild
AWS CodePipeline
AWS CodeDeploy
AWS CodeCommit
AWS CodeBuild is a fully managed continuous integration (CI) service that compiles source code, runs tests, and produces software packages that are ready to deploy. It is an alternative to Jenkins.
Question 5:
You have a Jenkins Continuous Integration (CI) build server hosted on-premises and you would like to stop it and replace it with an AWS managed service. Which AWS service should you choose?
AWS Jenkins
AWS CodeBuild
AWS CloudFormation
Amazon ECS
You plan to create a CICD for your application in AWS. You want to run some automated tests on your application before the deployment process. Which AWS service allows you to do so?
AWS CodeCommit
AWS CodePipeline
AWS CodeDeploy
AWS CodeBuild
You're using CodeCommit as your code repository, and developers push code changes to it. To prevent developers from committing any secret credentials, you want to automatically trigger code analysis after each commit. How can you achieve this?
Setup AWS CloudWatch Events in CodeCommit
Setup AWS SNS / Lambda integration in CodeCommit
Setup SES in CodeCommit
Your colleague has an IAM user in another AWS Account who wants to access your CodeCommit repository. What should you do?
Share your SSH Keys with him
Generate Git Credentials and share with him
Create an IAM role in your AWS account with the required permissions, then tell him to use STS cross-account access to assume this IAM role
A CodePipeline pipeline that you manage just failed to deploy your code to Elastic Beanstalk even though the code has been pushed successfully to your CodeCommit repository. The pipeline was working fine 10 minutes ago. What is most likely the reason for this?
IAM Permissions are wrong
CodePipeline is waiting for manual approval
Your CodeBuild stage probably failed some tests
Someone has deleted the Elastic Beanstalk environment
Your manager wants to receive emails when your CodePipeline pipeline fails so he can identify and troubleshoot the problems. How can you achieve this?
Setup an AWS CloudWatch Event Rule
Setup an SNS notification
Setup an SES email
The buildspec.yml file must be placed .......................... so CodeBuild can work properly.
in the codebuild/ directory in your code
in the codecommit/ directory in your code
at the root of your code
CodeBuild containers are deleted at the end of their execution (success or failure). You can't SSH into them, even while they're running.
When your CodeBuild build project fails, you can do the following to troubleshoot the issues, EXCEPT ..........................
Review logs in CloudWatch Logs
Review logs in Amazon S3
SSH into the CodeBuild container to debug from there
Run CodeBuild locally to reproduce the build
You're using CodeBuild to build your application as part of the CICD process. The build process takes a long time, so you investigated this and found that 15 minutes at each build is spent on pulling dependencies from remote repositories. What should you do to drastically speed up the build process?
Embed dependencies into the code
Modify the buildspec.yml to enable Dependencies Caching in Amazon S3
Remove all the dependencies
CodeBuild can run any commands, so you can use it to run commands including building a static website and copying your static web files to an S3 bucket.
(Hard Question - think outside the box!!)
You would like to automatically deploy a Single Page Application (SPA) to the S3 bucket, after generating the static files from a set of markdown files. Which AWS services should you use for this?
CodeCommit + CodePipeline
CodePipeline + CodeBuild
CodePipeline + CodeDeploy
CodeDeploy
What's the proper order of Lifecycle Events in CodeDeploy for an EC2/on-premises deployment?
ApplicationStop, BeforeInstall, Install, AfterInstall, DownloadBundle, ApplicationStart, ValidateService
ValidateService, ApplicationStop, ApplicationStart, DownloadBundle, BeforeInstall, Install, AfterInstall
ApplicationStop, DownloadBundle, ApplicationStart, BeforeInstall, Install, AfterInstall, ValidateService
ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, ValidateService
Which Lifecycle Event hook should be used in the appspec.yml file to ensure the application is properly running after being deployed?
AfterInstall
ValidateService
ApplicationStart
AllowTraffic
You manage a fleet of EC2 and on-premises instances where you're trying to run your first deployment using AWS CodeDeploy, but it doesn't work. Which of the following could be a possible cause?
You've probably forgotten to install and start the CodeDeploy agent on the instances
CodeDeploy doesn't work with on-premises instances
You've forgotten to allow inbound traffic on the CodeDeploy service's security group
You would like to have a one-stop dashboard for all the CICD needs for all of your projects. You don't need heavy control of the individual configuration for all the components in your CICD, but need to be able to get a holistic view of your projects. Which AWS service do you recommend?
AWS CodeBuild
AWS CodePipeline
AWS CodeStar
AWS CodeDeploy
You can configure CodeBuild to run its build containers in a VPC, so they can access private resources in a VPC such as databases, internal load balancers, ...
CodeBuild containers are run outside of a VPC and it's impossible to connect to resources in your VPC.
True
False
CodeDeploy supports the following predefined deployment configurations for EC2/on-premises instances, EXCEPT .......................
OneAtATime
HalfAtATime
Immutable
AllAtOnce
Which AWS service allows you to share your Serverless applications packages using SAM with other AWS accounts?
AWS Serverless Application Model (AWS SAM)
Amazon S3
AWS Serverless Application Repository (AWS SAR)
AWS Resource Access Manager (AWS RAM)
........................ allows you to debug your Lambda functions locally, inspect variables, and execute code line-by-line.
SAM CLI + AWS Toolkits
SAM CLI only
AWS Toolkits only
You have an application that uses Cognito User Pools to authenticate its users. There's a requirement to log every successful Cognito login to an RDS DB instance, to keep track of website and user activity for further analysis. What should you do?
Enable the Cognito & RDS integration
Write a Pre-Authentication hook with Lambda
Write a Post-Authentication hook with Lambda
You want to orchestrate multiple Lambda functions and wait for the result of all of them before making a final decision. What do you recommend?
Step Functions Parallel States and then one final Task State
Use AppSync Real-Time API
Build the flow with EventBridge
You have a Step Functions workflow which consists of a series of Task States invoking a set of Lambda functions. During execution, one of the states has an error with the following error code: States.Permissions. How do you solve this problem?
There's an execution failure in the Lambda function invoked by this Task State
You need to give the Step Functions workflow the required IAM permissions to execute the Lambda function invoked by this Task State
The Task State runs longer than the TimeoutSeconds value and it fails
Which of the following AWS services does not support the WebSocket protocol?
Application Load Balancer
API Gateway
AppSync
DynamoDB
AWS STS (Security Token Service) allows you to get cross-account access through the creation of an IAM Role in your AWS account that is authorized to assume an IAM Role in another AWS account. See more here: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
Question 1:
A company hired you to get some tasks done in their AWS Account. What is the best way to get access to their AWS Account?
Ask them to create an IAM user for you
Ask them to send you an AWS Access Key and Secret Access Key
Use the STS service to assume an IAM Role in their AWS Account to gain temporary credentials
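A short sketch (the role ARN and session name are placeholders) of assuming an IAM role in the customer's account with STS to obtain temporary credentials:

import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999988887777:role/ConsultantAccessRole",
    RoleSessionName="contract-work",
)["Credentials"]

# Use the temporary credentials to work in their account
their_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)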
When evaluating an IAM policy of an EC2 instance doing actions on S3, the union of both the IAM policy of the EC2 instance and the bucket policy of the S3 bucket is taken into account.
Question 2:
You have an EC2 instance that has an attached IAM role providing it with read/write access to the S3 bucket named my-bucket, and it has been successfully tested. Later, you removed the IAM role from the EC2 instance, but after testing you found that writes stopped working but reads are still working. What is the likely cause for this behavior?
The EC2 instance is using cached temporary IAM credentials
The IAM credentials are cached in the EC2 instance, removing an IAM role from an EC2 instance can take a few minutes to take effect
The S3 bucket policy authorizes reads
When a read is done on a bucket, there's a grace period of 5 minutes to do the same read again
What does this IAM policy allow you to do?
It allows you to assign any IAM role to RDS instances
It allows you to assign any IAM role to EC2 instances
It allows you to assign RDS full access to EC2 instances
It allows you to assign IAM Roles to EC2 if they start with RDS-
Your AWS account is now growing to 200 IAM users where each user has his own data in an S3 bucket named users-data. For each IAM user, you would like to provide personal space in the S3 bucket with the prefix /home/<username>, where they have read/write access. How can you do this efficiently?
Create an S3 bucket policy and change it as IAM users are added and removed
Create one customer-managed policy with dynamic variables and attach it to a group of all the IAM users
Create inline policies for each IAM user as they are on-boarded
Create one customer-managed policy per IAM user and attach them to the relevant IAM users
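A hedged sketch of the dynamic-variable approach (the policy name is made up): one managed policy using ${aws:username} scopes every user to their own prefix, so no per-user policies are needed.

import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        # ${aws:username} resolves to the calling IAM user's name at evaluation time
        "Resource": "arn:aws:s3:::users-data/home/${aws:username}/*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="users-data-personal-space",
    PolicyDocument=json.dumps(policy_document),
)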
You are running an on-premises Microsoft Active Directory to manage your users. You have been tasked to create another AD in AWS and establish a trust relationship with your on-premises AD. Which AWS Directory service should you use?
Managed Microsoft AD
AD Connector
Simple AD
You can use the AWS managed keys in KMS, therefore you don't need to create your own KMS keys.
You need to create KMS Keys in AWS KMS before you are able to use the encryption features for EBS, S3, RDS ...
True
False
KMS keys can be symmetric or asymmetric. A symmetric KMS key represents a 256-bit key used for encryption and decryption. An asymmetric KMS key represents an RSA key pair used for encryption and decryption or signing and verification, but not both, or it represents an elliptic curve (ECC) key pair used for signing and verification.
Question 6:
AWS KMS supports both symmetric and asymmetric KMS keys.
True
False
What should you use to control access to your KMS CMKs?
KMS IAM Policy
AWS GuardDuty
KMS Key Policies
KMS Access Control List (KMS ACL)
SSM Parameter Store can be used to store secrets and has built-in version tracking capability. Each time you edit the value of a parameter, SSM Parameter Store creates a new version of the parameter and retains the previous versions. You can view the details, including the values, of all versions in a parameter's history.
Question 10:
You have a secret value that you use for encryption purposes, and you want to store and track the values of this secret over time. Which AWS service should you use?
SSM Parameter Store
AWS KMS Versioning feature
Amazon S3
Use Envelope Encryption if you want to encrypt > 4 KB of data.
Question 13:
You would like to encrypt 400 KB of data. You should use ..........................
KMS Encrypt API
KMS GenerateDataKey API call and encrypt client-side
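A minimal envelope-encryption sketch (the key alias is a placeholder): KMS returns a data key, the plaintext copy encrypts the large payload client-side, and only the encrypted copy of the key is stored alongside the data.

import boto3

kms = boto3.client("kms")
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]       # use this to encrypt the 400 KB client-side, then discard it
encrypted_key = resp["CiphertextBlob"]  # store this next to the encrypted data

# Later: kms.decrypt(CiphertextBlob=encrypted_key) returns the plaintext key to decrypt the data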
Because the bucket is encrypted using SSE-KMS, you must give the EC2 instance permissions to access the KMS key and to perform decrypt operations.
Question 14:
You have critical data stored in an S3 bucket with SSE-KMS encryption enabled. You're running an application on an EC2 instance from which you want to download some files from the bucket. So, you have created an IAM role with s3:GetObject permissions and attached it to the EC2 instance, but when the application tries to download files from the S3 bucket, it gets a denied exception. How would you resolve this issue?
Add an S3 bucket policy
Add permission for KMS:Decrypt
Add permission for KMS:Encrypt
Add permission for S3:Decrypt
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html
Question 15:
How would you encrypt existing CloudWatch Log Group with a KMS key?
You can enable encryption from CloudWatch Logs Console
Encrypt them with the create-log-group API call
Encrypt them with the update-log-group API call
Encrypt them with the associate-kms-key API call
Which IAM policy condition allows you to enforce SSL requests to objects stored in your S3 bucket?
aws:SecureTransport
aws:EnforceSSL
s3:SecureTransport
s3:EnforceSSL
Which of the following AWS services allows you to send a large number of emails using AWS SDK?
Simple Queue Service (SQS)
Simple Notification Service (SNS)
Kinesis Data Streams
Simple Email Service (SES)
You are running a gaming website that is using DynamoDB as its data store. Users have been asking for a search feature to find other gamers by name, with partial matches if possible. Which AWS technology do you recommend to implement this feature?
Amazon DynamoDB
Amazon OpenSearch Service
Amazon ECS
Amazon S3