1. QUESTION
A company is storing highly classified documents on its file server. These documents contain blueprints for electronic devices and are never to be made public due to a legal agreement. To comply with the strict policy, you must explore the capabilities of AWS KMS to improve data security.
Which of the following is the MOST suitable procedure for encrypting data?
Use a KMS key for encryption and decryption.
Use a combination of symmetric and asymmetric encryption. Encrypt the data with a symmetric key and use the asymmetric private key to decrypt the data.
Generate a data key using a KMS key. Then, encrypt data with the plaintext data key.
Generate a data key using a KMS key. Then, encrypt data with the ciphertext version of the data key.
Correct
Your data is protected when you encrypt it, but you have to protect your encryption key. One strategy is to encrypt it. Envelope encryption is the practice of encrypting plaintext data with a data key and then encrypting the data key under another key.
You can even encrypt the data encryption key under another encryption key and encrypt that encryption key under another encryption key. But, eventually, one key must remain in plaintext so you can decrypt the keys and your data. This top-level plaintext key-encryption key is known as the master key.
Envelope encryption offers several benefits:
Protecting data keys – When you encrypt a data key, you don’t have to worry about storing the encrypted data key, because the data key is inherently protected by encryption. You can safely store the encrypted data key alongside the encrypted data.
Encrypting the same data under multiple master keys – Encryption operations can be time-consuming, particularly when the data being encrypted are large objects. Instead of re-encrypting raw data multiple times with different keys, you can re-encrypt only the data keys that protect the raw data.
Combining the strengths of multiple algorithms – In general, symmetric key algorithms are faster and produce smaller ciphertexts than public-key algorithms. But public-key algorithms provide inherent separation of roles and easier key management. Envelope encryption lets you combine the strengths of each strategy.
To perform envelope encryption using KMS keys, you must first generate a data key using your KMS key and use its plaintext version to encrypt data.
Hence, the correct answer is the option that says: Generate a data key using a KMS key. Then, encrypt data with the plaintext data key.
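As an illustration of that flow, here is a minimal sketch using boto3 and the third-party cryptography package (the key alias, file names, and cipher choice are placeholders, not part of the scenario):

```python
import base64

import boto3
from cryptography.fernet import Fernet  # any local symmetric cipher would work

kms = boto3.client("kms")

# 1. Ask KMS for a data key. KMS returns it in plaintext and as ciphertext
#    encrypted under the KMS key (the "envelope").
data_key = kms.generate_data_key(KeyId="alias/blueprints-key", KeySpec="AES_256")

# 2. Encrypt the document locally with the PLAINTEXT data key.
cipher = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
with open("blueprint.pdf", "rb") as f:
    encrypted_document = cipher.encrypt(f.read())

# 3. Store the encrypted document together with the ENCRYPTED data key,
#    then discard the plaintext key from memory.
with open("blueprint.pdf.enc", "wb") as f:
    f.write(encrypted_document)
with open("blueprint.pdf.key", "wb") as f:
    f.write(data_key["CiphertextBlob"])

# To decrypt later, call kms.decrypt(CiphertextBlob=...) to recover the
# plaintext data key, then decrypt the document with it locally.
```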
The option that says: Generate a data key using a KMS key. Then, encrypt data with the ciphertext version of the data key is incorrect. A ciphertext data key cannot be used for encryption because it is an encrypted version of the data key. You must use the plaintext version of the data key.
The option that says: Use a KMS key for encryption and decryption is incorrect. KMS keys can only be used to encrypt data up to 4KB in size, making it not suitable for encrypting documents.
The option that says: Use a combination of symmetric and asymmetric encryption. Encrypt the data with a symmetric key and use the asymmetric private key to decrypt the data is incorrect because the keys used by the two algorithms are separate, unrelated entities. You cannot encrypt data with a symmetric key and expect it to be decrypted by an asymmetric private key.
References:
Check out this AWS KMS Cheat Sheet:
2. QUESTION
Code running on an AWS Lambda function performs a GetItem call on a DynamoDB table. The function runs three times every week. You noticed that the application kept receiving a ProvisionedThroughputExceededException error for 10 seconds most of the time.
How should you handle this error?
Reduce the frequency of requests using error retries and exponential backoff.
Create a Local Secondary Index (LSI) to the existing DynamoDB table to increase the provisioned throughput.
Refactor the code in the Lambda function to optimize its performance.
Enable DynamoDB Accelerator (DAX) to reduce response times from milliseconds to microseconds.
Correct
When your program sends a request, DynamoDB attempts to process it. If the request is successful, DynamoDB returns an HTTP success status code (200 OK), along with the results from the requested operation. If the request is unsuccessful, DynamoDB returns an error.
An HTTP 400 status code indicates a problem with your request, such as an authentication failure, missing required parameters, or exceeding a table’s provisioned throughput. You have to fix the issue in your application before submitting the request again.
A ProvisionedThroughputExceededException means that your request rate is too high. The AWS SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful unless your retry queue is too large to finish. To handle this error, you can reduce the frequency of requests using error retries and exponential backoff.
Hence, the correct answer is: Reduce the frequency of requests using error retries and exponential backoff.
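As an illustration, a minimal boto3 sketch of retrying a GetItem call with exponential backoff and jitter (the table name, key, and retry limits are placeholders; the AWS SDKs also offer built-in retry configuration that does this for you):

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def get_item_with_backoff(key, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return dynamodb.get_item(TableName="GameScores", Key=key)
        except ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise  # not a throttling problem; don't retry
            # Exponential backoff with jitter: wait roughly 2^attempt * 100 ms.
            time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))
    raise RuntimeError("Request was throttled on every attempt")

result = get_item_with_backoff({"PlayerId": {"S": "player-123"}})
```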
The option that says: Enable DynamoDB Accelerator (DAX) to reduce response times from milliseconds to microseconds is incorrect because DAX is used to provide a fully managed, in-memory caching solution. This option is not the right way to handle errors due to high request rates.
The option that says: Refactor the code in the Lambda function to optimize its performance is incorrect because this will just improve the code’s readability and maintainability. This won’t have any impact on reducing the frequency of requests.
The option that says: Create a Local Secondary Index (LSI) to the existing DynamoDB table to increase the provisioned throughput is incorrect. An LSI is used to give flexibility to your queries against the DynamoDB table. An LSI uses an alternative sort key aside from the original sort key defined at the creation of the table. Additionally, you cannot create an LSI on an existing table; it can only be added during the creation of a DynamoDB table.
References:
Check out this Amazon DynamoDB Cheat Sheet:
Amazon DynamoDB Overview:
3. QUESTION
A company has launched a new serverless application using AWS Lambda. The app ran smoothly for a few weeks until it was featured on a popular website. As its popularity grew, so did the number of users receiving an error. Upon viewing the Lambda function’s monitoring graph, the developer discovered a lot of throttled invocation requests.
What can the developer do to troubleshoot this issue? (Select THREE.)
Use a compiled language like GoLang to improve the function’s performance
Increase Lambda function timeout
Request a service quota increase
Use exponential backoff in the application.
Deploy the Lambda function in VPC
Configure reserved concurrency
Correct
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume.
With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
Lambda throttling refers to Lambda rejecting invocation requests. When this happens, Lambda returns a throttling error that you need to handle. It occurs because your current concurrent execution count is greater than your concurrency limit.
Throttling is intended to protect your resources and downstream applications. Though Lambda automatically scales to accommodate your incoming traffic, your function can still be throttled for various reasons.
The following are the recommended solutions to handle throttling issues:
Configure reserved concurrency – by default, all functions in a Region share your account’s concurrency quota (1,000 concurrent executions by default, of which at least 100 must remain unreserved). To prevent other functions from consuming the available concurrent executions, reserve a portion of the quota for your Lambda function based on the demand of your current workload (see the sketch after this list).
Use exponential backoff in your app – a technique that uses progressively longer waits between retries for consecutive error responses. This can be used to handle throttling issues by preventing collision between simultaneous requests.
Use a dead-letter queue – If you’re using Amazon S3 and Amazon EventBridge (Amazon CloudWatch Events), configure your function with a dead letter queue to catch any events that are discarded due to constant throttles. This can protect your data if you’re seeing significant throttling.
Request a service quota increase – you can contact AWS Support to request a higher service quota for concurrent executions.
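As an illustration of the reserved concurrency option above, a minimal boto3 sketch (the function name and the reserved value of 100 are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve 100 concurrent executions for this function so other functions in the
# account cannot consume the capacity it needs.
lambda_client.put_function_concurrency(
    FunctionName="popular-app-handler",
    ReservedConcurrentExecutions=100,
)

# Check the current setting.
print(lambda_client.get_function_concurrency(FunctionName="popular-app-handler"))
```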
Hence, the correct answers are:
– Use exponential backoff in your application
– Configure reserved concurrency
– Request a service quota increase
The option that says: Deploy the Lambda function in VPC is incorrect because this has nothing to do with fixing throttling errors. The only time you should do this is if you need to have a connection between a Lambda function and a resource running in your VPC.
The option that says: Use a compiled language like GoLang to improve the function’s performance is incorrect because no matter what language you use, the Lambda function will still throw a throttling error if the current concurrent execution count is greater than your concurrency limit.
The option that says: Increase Lambda function timeout is incorrect. If the time a function runs exceeds its current timeout value, it will throw a timeout error. The scenario is a throttling issue and not a timeout.
References:
Check out this AWS Lambda Cheat Sheet:
4. QUESTION
A serverless application consists of multiple Lambda Functions and a DynamoDB table. The application must be deployed by calling the CloudFormation APIs using AWS CLI. The CloudFormation template and the files containing the code for all the Lambda functions are located on a local computer.
What should the Developer do to deploy the application?
Use the aws cloudformation package command and deploy using aws cloudformation deploy.
Use the aws cloudformation deploy command.
Use the aws cloudformation update-stack command and deploy using aws cloudformation deploy.
Use the aws cloudformation validate-template command and deploy using aws cloudformation deploy.
Correct
The aws cloudformation package command packages the local artifacts (local paths) that your AWS CloudFormation template references. The command uploads local artifacts, such as source code for an AWS Lambda function or a Swagger file for an AWS API Gateway REST API, to an S3 bucket. The command returns a copy of your template, replacing references to local artifacts with the S3 location where the command uploaded the artifacts.
Use this command to quickly upload local artifacts that might be required by your template. After you package your template’s artifacts, run the aws cloudformation deploy command to deploy the returned template.
Since we have local artifacts (source code for the AWS Lambda functions), we should use the package command.
After running the package command, we must deploy the packaged output file by running the deploy command.
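If the deployment is scripted in Python, the two commands could be chained like this (a sketch that simply shells out to the AWS CLI; the bucket, stack, and file names are placeholders):

```python
import subprocess

# 1. Package: upload local artifacts (Lambda source) to S3 and emit a rewritten template.
subprocess.run(
    [
        "aws", "cloudformation", "package",
        "--template-file", "template.yaml",
        "--s3-bucket", "my-artifact-bucket",
        "--output-template-file", "packaged.yaml",
    ],
    check=True,
)

# 2. Deploy: create or update the stack from the packaged template.
subprocess.run(
    [
        "aws", "cloudformation", "deploy",
        "--template-file", "packaged.yaml",
        "--stack-name", "serverless-app",
        "--capabilities", "CAPABILITY_IAM",
    ],
    check=True,
)
```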
Hence, the correct answer is: Use the aws cloudformation package command and deploy using aws cloudformation deploy.
The option that says: Use the aws cloudformation validate-template command and deploy using aws cloudformation deploy is incorrect because the validate-template command will just check whether the template is a valid JSON or YAML file.
The option that says: Use the aws cloudformation deploy command is incorrect because the artifacts are located on the local computer, and the deploy command expects the artifacts referenced in the CloudFormation template to already be in S3. Using the deploy command alone will not work; you must package the template first.
The option that says: Use the aws cloudformation update-stack command and deploy using aws cloudformation deploy is incorrect because this just updates an existing stack.
References:
Check out this AWS CloudFormation Cheat Sheet:
5. QUESTION
A full-stack developer has developed an application written in Node.js to host an upcoming mobile game tournament. The developer has decided to deploy the application using AWS Elastic Beanstalk because of its ease-of-use. Upon experimenting, he learned that he could configure the webserver environment with several resources.
Which of the following services can the developer configure with Elastic Beanstalk? (Select THREE.)
Amazon EC2 Instance
AWS Lambda
Amazon CloudWatch
Application Load Balancer
Amazon Athena
Amazon CloudFront
Correct
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
You can upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources.
With Elastic Beanstalk, you can:
– Select the operating system that matches your application requirements (e.g., Amazon Linux or Windows Server 2016)
– Choose from several Amazon EC2 instances, including On-Demand, Reserved Instances, and Spot Instances.
– Choose from several available database and storage options.
– Enable login access to Amazon EC2 instances for immediate and direct troubleshooting
– Quickly improve application reliability by running in more than one Availability Zone.
– Enhance application security by enabling HTTPS protocol on the load balancer
– Access built-in Amazon CloudWatch monitoring and get notifications on application health and other important events
– Adjust application server settings (e.g., JVM settings) and pass environment variables
– Run other application components, such as a memory caching service, side-by-side in Amazon EC2.
– Access log files without logging in to the application servers
Hence, the correct answers are: Amazon EC2 Instance, Amazon CloudWatch, and Application Load Balancer.
You cannot configure Amazon Athena, AWS Lambda, or Amazon CloudFront with Elastic Beanstalk.
References:
Check out this AWS Elastic Beanstalk Cheat Sheet:
6. QUESTION
A transcoding media service is being developed in AWS. Photos uploaded to Amazon S3 will trigger Step Functions to coordinate a series of processes that will perform image analysis tasks. The final output should contain the input plus the result of the final state to conform to the application’s logic flow.
What should the developer do?
Declare a Parameters field filter on the Amazon States Language specification.
Declare an OutputPath field filter on the Amazon States Language specification.
Declare an InputPath field filter on the Amazon States Language specification.
Declare a ResultPath field filter on the Amazon States Language specification.
Correct
A Step Functions execution receives a JSON text as input and passes that input to the first state in the workflow. Individual states receive JSON as input and usually pass JSON as output to the next state. Understanding how this information flows from state to state and learning how to filter and manipulate this data is key to effectively designing and implementing workflows in AWS Step Functions.
In the Amazon States Language, these fields filter and control the flow of JSON from state to state:
– InputPath
– OutputPath
– ResultPath
– Parameters
Both the InputPath and Parameters fields provide a way to manipulate JSON as it moves through your workflow. InputPath can limit the input that is passed by filtering the JSON notation by using a path. The Parameters field enables you to pass a collection of key-value pairs, where the values are either static values that you define in your state machine definition, or that are selected from the input using a path.
AWS Step Functions applies the InputPath field first, and then the Parameters field. You can first filter your raw input to a selection you want using InputPath, and then apply Parameters to manipulate that input further, or add new values.
The output of a state can be a copy of its input, the result it produces (for example, the output from a Task state’s Lambda function), or a combination of its input and result. Use ResultPath to control which combination of these is passed to the state output.
OutputPath enables you to select a portion of the state output to pass to the next state. This enables you to filter out unwanted information, and pass only the portion of JSON that you care about.
Out of these field filters, ResultPath is the only one that can combine a state’s input with its result and pass that combination to the state output. Hence, the correct answer is: Declare a ResultPath field filter on the Amazon States Language specification.
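For instance, a Task state defined like this (the state name and Lambda ARN are placeholders) keeps the original input and inserts the task result at $.analysis, so the final state’s output contains both:

```python
import json

# Amazon States Language fragment expressed as a Python dict.
analyze_image_state = {
    "AnalyzeImage": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:analyze-image",
        # Without ResultPath, the task result would REPLACE the input.
        # With ResultPath, the result is inserted into the input at $.analysis,
        # so the state output = original input + result of this final state.
        "ResultPath": "$.analysis",
        "End": True,
    }
}

print(json.dumps(analyze_image_state, indent=2))
```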
The option that says: Declare an InputPath field filter on the Amazon States Language specification is incorrect because it just operates on the input level by filtering the JSON notation by using a path. It cannot control both ends of a state (input and output).
The option that says: Declare an OutputPath field filter on the Amazon States Language specification is incorrect because it just operates on the output level. It is used to filter out unwanted information and pass only the portion of JSON that you care about onto the next state.
The option that says: Declare a Parameters field filter on the Amazon States Language specification is incorrect because this is used in conjunction with the InputPath field filter, which means it can only be used on the input level of a state.
References:
Check out this AWS Step Functions Cheat Sheet:
7. QUESTION
An IAM user with programmatic access wants to get information about specific EC2 instances in the us-east-1 region. Due to a strict policy, the user was compelled to use the describe-instances operation using the AWS Command Line Interface (CLI). He wants to check whether he has the required permission to initiate the command without actually making the request.
Which of the following actions should be done to solve the problem?
Add the --generate-cli-skeleton parameter to the describe-instances command.
Add the --dry-run parameter to the describe-instances command.
Add the --filters parameter to the describe-instances command.
Add the --max-items parameter to the describe-instances command.
Correct
The describe-instances command describes the specified instances or all instances. Optionally, you can add parameters to the describe-instances command to modify its behavior.
Here is the list of the available parameters for describe-instances:
[--dry-run | --no-dry-run]
[--instance-ids <value>]
[--filters <value>]
[--cli-input-json <value>]
[--starting-token <value>]
[--page-size <value>]
[--max-items <value>]
[--generate-cli-skeleton]
The --dry-run parameter checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.
Hence, the correct answer is: Add the --dry-run parameter to the describe-instances command.
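The same check can be made from the SDK; a minimal boto3 sketch (the instance ID is a placeholder):

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

try:
    # DryRun=True is the SDK counterpart of the CLI's --dry-run flag:
    # it validates permissions without actually executing the request.
    ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=True)
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("You have permission to call describe-instances.")
    elif code == "UnauthorizedOperation":
        print("You do NOT have permission to call describe-instances.")
    else:
        raise
```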
The option that says: Add the --generate-cli-skeleton parameter to the describe-instances command is incorrect because this is a parameter that will generate and display a parameter template that you can customize and use as input on a later command. The generated template includes all of the parameters that the command supports.
The option that says: Add the --filters parameter to the describe-instances command is incorrect. This parameter narrows the results down to instances that match specific criteria; it will not help you check for the permission required to initiate the command.
The option that says: Add the --max-items parameter to the describe-instances command is incorrect because it just defines the total number of items to be returned in the command’s output. It has nothing to do with checking the permission required to initiate the command.
References:
8. QUESTION
A company wants to centrally organize login credentials for its internal application. The application prompts users to change passwords every 35 days. Expired login credentials must be removed automatically, and an email notification should be sent to the users when their passwords are about to expire. A developer must create a solution with the least amount of development effort.
Which solution meets the requirements?
Use AWS Secrets Manager to store user credentials. Create a Lambda function that runs periodically to send Amazon SNS email notifications for passwords nearing expiration
Use AWS Secrets Manager to store user credentials and turn on automatic rotation.
Store the credentials as Advanced Parameters in AWS Systems Manager (SSM) Parameter Store and configure Expiration and ExpirationNotification policies. Create an Amazon EventBridge rule that sends Amazon SNS email notifications.
Store the credentials as Standard Parameters in AWS Systems Manager (SSM) Parameter Store and configure Expiration and ExpirationNotification policies. Create an Amazon EventBridge rule that sends Amazon SNS email notifications.
Correct
Parameter Store, a capability of AWS Systems Manager, includes standard parameters and advanced parameters. You individually configure parameters to use either the standard-parameter tier (the default tier) or the advanced-parameter tier.
You can change a standard parameter to an advanced parameter at any time, but you can’t revert an advanced parameter to a standard parameter. This is because reverting an advanced parameter to a standard parameter would cause the system to truncate the size of the parameter from 8 KB to 4 KB, resulting in data loss. Reverting would also remove any policies attached to the parameter. Also, advanced parameters use a different form of encryption than standard parameters.
Parameter policies help you manage a growing set of parameters by allowing you to assign specific criteria to a parameter, such as an expiration date or time to live. Parameter policies are especially helpful in forcing you to update or delete passwords and configuration data stored in Parameter Store, a capability of AWS Systems Manager.
You can assign multiple policies to a parameter. For example, you can assign Expiration and ExpirationNotification policies so that the system initiates an EventBridge event to notify you about the impending deletion of a parameter. The Expiration policy lets you delete a parameter at a specified time, while the ExpirationNotification policy is used to notify when a parameter is about to expire. These features are only available for Advanced Parameters in the AWS Systems Manager Parameter Store.
Hence, the correct answer is: Store the credentials as Advanced Parameters in AWS Systems Manager (SSM) Parameter Store and configure Expiration and ExpirationNotification policies. Create an Amazon EventBridge rule that sends Amazon SNS email notifications.
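A minimal boto3 sketch of storing one credential this way (the parameter name, value, timestamp, and notification window are placeholders):

```python
import json

import boto3

ssm = boto3.client("ssm")

policies = [
    # Delete the parameter automatically at the given time (roughly 35 days out).
    {"Type": "Expiration", "Version": "1.0",
     "Attributes": {"Timestamp": "2025-12-31T00:00:00.000Z"}},
    # Emit an EventBridge event 5 days before expiration; an EventBridge rule can
    # route that event to an SNS topic that emails the user.
    {"Type": "ExpirationNotification", "Version": "1.0",
     "Attributes": {"Before": "5", "Unit": "Days"}},
]

ssm.put_parameter(
    Name="/internal-app/users/jdoe/password",
    Value="correct-horse-battery-staple",
    Type="SecureString",
    Tier="Advanced",          # parameter policies require the advanced tier
    Policies=json.dumps(policies),
    Overwrite=True,
)
```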
The option that says: Use AWS Secrets Manager to store user credentials. Create a Lambda function that runs periodically to send Amazon SNS email notifications for passwords nearing expiration is incorrect. Although it’s possible to store login credentials in Secrets Manager, creating a cron-based Lambda function for checking password expiration and sending notifications via SNS takes more development overhead than simply using the ExpirationNotification policy in SSM Parameter Store.
The option that says: Use AWS Secrets Manager to store user credentials and turn on automatic rotation is incorrect. Automatic rotation in AWS Secrets Manager is typically used when secrets can be changed programmatically without user intervention (e.g., rotating database credentials). In the scenario’s case, users must manually change their passwords.
The option that says: Store the credentials as Standard Parameters in AWS Systems Manager (SSM) Parameter Store and configure Expiration and ExpirationNotification policies. Create an Amazon EventBridge rule that sends Amazon SNS email notifications is incorrect because Standard Parameters do not support Expiration and ExpirationNotification policies.
References:
Check out this AWS Secrets Manager vs Systems Manager Parameter Store Cheat Sheet:
9. QUESTION
A software development team uses AWS CodePipeline to facilitate continuous integration and delivery (CI/CD) for a Node.js application. The team requires a centralized way of distributing internal npm packages used by the application. It’s crucial that the pipeline automatically starts to build whenever a new version of the package is released.
Which solution aligns with these requirements?
Store the npm packages in an Amazon S3 bucket. Use Amazon SNS to trigger a CodePipeline pipeline build when a new package version is uploaded to the bucket.
Set up an Amazon ECR private repository to host the npm package. Utilize an AWS Lambda function to initiate a CodePipeline pipeline build whenever a new package version is published to the repository.
Create an Amazon ECR private repository and store the npm packages in it. Use Amazon SNS notifications to initiate CodePipeline pipeline builds.
Establish an AWS CodeArtifact repository to store the npm packages. Configure an Amazon EventBridge rule to detect changes in the repository and trigger CodePipeline pipeline builds.
Correct
AWS CodeArtifact is a secure and scalable artifact management service offering a centralized repository for storing, managing, and distributing software packages and their dependencies.
CodeArtifact seamlessly integrates with Amazon EventBridge, a service that automates actions responding to specific events, including any activity within a CodeArtifact repository. This integration allows you to establish rules that dictate the actions to be taken when a particular event occurs.
Whenever an action is taken in CodeArtifact, such as creating, updating, or deleting a package version, it triggers an event. The event mechanism in CodeArtifact makes it easier to efficiently monitor and automate tasks based on the activities within your CodeArtifact repositories. This helps to streamline workflows and improve overall efficiency.
AWS CodePipeline is a comprehensive service that automates the steps in releasing software, known as continuous integration and continuous delivery (CI/CD). It manages the entire workflow required to take code from version control to deployment, streamlining the process of getting new features and updates to users.
Setting up an Amazon EventBridge rule to monitor changes within the CodeArtifact repository allows seamless integration with AWS CodePipeline. This configuration is beneficial as it automates processes within the pipeline. More precisely, when a new version of the npm package is published to the repository, EventBridge detects the update and triggers the relevant CodePipeline build process, streamlining the workflow without requiring manual intervention.
Hence, the correct answer is: Establish an AWS CodeArtifact repository to store the npm packages. Configure an Amazon EventBridge rule to detect changes in the repository and trigger CodePipeline pipeline builds.
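A rough boto3 sketch of that wiring (the domain, repository, package, pipeline ARN, and role ARN are placeholders; the detail-type shown is the one CodeArtifact emits for package version changes):

```python
import json

import boto3

events = boto3.client("events")

# Match events emitted when a version of the internal npm package changes state.
event_pattern = {
    "source": ["aws.codeartifact"],
    "detail-type": ["CodeArtifact Package Version State Change"],
    "detail": {
        "domainName": ["my-domain"],
        "repositoryName": ["internal-npm"],
        "packageFormat": ["npm"],
        "packageName": ["my-shared-lib"],
    },
}

events.put_rule(Name="npm-package-published", EventPattern=json.dumps(event_pattern))

# Start the pipeline when the rule matches; the role must allow
# codepipeline:StartPipelineExecution.
events.put_targets(
    Rule="npm-package-published",
    Targets=[{
        "Id": "start-pipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:123456789012:node-app-pipeline",
        "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeStartPipelineRole",
    }],
)
```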
The option that says: Store the npm packages in an Amazon S3 bucket. Use Amazon SNS to trigger a CodePipeline pipeline build when a new package version is uploaded to the bucket is incorrect. While Amazon S3 can store files, it isn’t tailored for managing software package dependencies and lacks the capabilities needed for version control and dependency management.
The option that says: Set up an Amazon ECR private repository to host the npm package. Utilize an AWS Lambda function to initiate a CodePipeline pipeline build whenever a new package version is published to the repository is incorrect. Amazon ECR is primarily used for Docker container images, not for the management of software packages.
The option that says: Create an Amazon ECR private repository and store the npm packages in it. Use Amazon SNS notifications to initiate CodePipeline pipeline builds is incorrect. Similar to the other incorrect option, using Amazon ECR as a repository for software packages is technically inappropriate. ECR’s features are specifically designed for Docker containers. Moreover, Amazon SNS is typically used for alerting rather than for directly triggering pipeline builds.
References:
Check out these Amazon EventBridge and AWS CodePipeline Cheat Sheets:
10. QUESTION
A developer is writing a custom script that will run in an Amazon EC2 instance. The script needs to access the local IP address from the instance to manage a connection to an application outside the AWS Cloud. The developer found out that the details about an instance can be viewed by visiting a certain Uniform Resource Identifier (URI).
Which of the following is the correct URI?
http://169.254.169.254/latest/meta-data/
http://169.254.169.254/latest/user-data/
http://254.169.254.169/latest/user-data/
http://254.169.254.169/latest/meta-data/
Correct
Instance metadata is the data about your instance that you can use to configure or manage the running instance. Instance metadata is divided into categories, for example, hostname, events, and security groups.
To view all categories of instance metadata from within a running instance, use the http://169.254.169.254/latest/meta-data/ URI.
Note that the IP address 169.254.169.254 is a link-local address and is valid only from the instance.
Hence, the correct answer is http://169.254.169.254/latest/meta-data/.
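For example, the script could read the address with the IMDSv2 token flow using only the Python standard library (a sketch; the 300-second token TTL is arbitrary):

```python
import urllib.request

METADATA = "http://169.254.169.254/latest"

# IMDSv2: request a short-lived session token first.
token_req = urllib.request.Request(
    f"{METADATA}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req).read().decode()

# Then use the token to read the local (private) IPv4 address of the instance.
ip_req = urllib.request.Request(
    f"{METADATA}/meta-data/local-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
)
local_ip = urllib.request.urlopen(ip_req).read().decode()
print(local_ip)
```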
The option that says: http://169.254.169.254/latest/user-data/ is incorrect because this URI is used to retrieve user data from within a running instance. The correct path should be “/latest/meta-data/”.
The option that says: http://254.169.254.169/latest/user-data/ is incorrect because that is not the right IP address. The IP address should be: 169.254.169.254.
The option that says: http://254.169.254.169/latest/meta-data/ is incorrect. Although the path is right, the IP address used is invalid.
References:
Check out this Amazon EC2 Cheat Sheet:
11. QUESTION
A developer manages a web application hosted on a fleet of Amazon EC2 instances behind a public Application Load Balancer (ALB). Each instance is equipped with an HTTP server that logs incoming requests. Upon reviewing the logs, the developer realizes that they are only capturing the IP address of the ALB instead of the client’s public IP address.
What modification should the developer make to ensure the log files include the client’s public IP address?
Configure the HTTP server to include the Host header in its logging configuration.
Deploy the Amazon CloudWatch Logs agent on each instance, customizing it to capture and log the client’s IP address.
Implement the AWS X-Ray daemon on all instances, adjusting its settings to record the client’s IP address in the logs.
Update the HTTP server’s logging configuration to log the X-Forwarded-For header information.
Correct
The X-Forwarded-For header helps keep track of a user’s IP address as their internet request moves through things like proxy servers or load balancers on its way to the final server. This way, the final server can still figure out where the request originated, even with these helpers in the middle that might hide user information. This detail is critical for correctly identifying users, safeguarding the network, and analyzing website traffic when the user’s request doesn’t come directly to the server.
When a request hits an Application Load Balancer, the balancer automatically includes or updates the X-Forwarded-For header with the client’s IP address, ensuring the server knows the request’s source. If this header isn’t part of the request, the balancer adds it with the client’s IP as its value. Should the header already exist, the balancer appends the client’s IP to it. This header can list several IP addresses, separated by commas, indicating the path the request took through various proxies, with the original client’s IP being the first in the list.
Hence, the correct answer is: Update the HTTP server’s logging configuration to log the X-Forwarded-For header information.
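While the fix itself is a change to the HTTP server’s log format, the header is easy to reason about; here is a small, hypothetical Python sketch showing how the first (left-most) entry is the original client:

```python
def client_ip(headers: dict, default: str = "unknown") -> str:
    """Return the originating client IP from X-Forwarded-For, if present."""
    forwarded = headers.get("X-Forwarded-For", "")
    # Format: "client, proxy1, proxy2" - the left-most entry is the original client.
    return forwarded.split(",")[0].strip() or default

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.1.25"}))  # -> 203.0.113.7
```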
The option that says: Configure the HTTP server to include the Host header in its logging configuration is incorrect because the Host header is primarily used to specify the server’s domain name and the TCP port number on which the server is listening. While it helps identify the target server in requests, it does not provide information about the client’s IP address, thus not meeting the requirement to log the client’s public IP address.
The option that says: Deploy the Amazon CloudWatch Logs agent on each instance, customizing it to capture and log the client’s IP address is incorrect. While deploying the CloudWatch Logs agent can help in logging details to CloudWatch, the agent itself cannot capture the client’s IP address unless it’s already being logged by the HTTP server.
The option that says: Implement the AWS X-Ray daemon on all instances, adjusting its settings to record the client’s IP address in the logs is incorrect. AWS X-Ray is primarily used to analyze and debug distributed applications by tracking and providing traces of requests. Although it is helpful for performance analysis and troubleshooting, X-Ray does not change HTTP server logs to include the client’s IP address.
References:
Check out these AWS X-Ray and AWS Elastic Load Balancing Cheat Sheets:
12. QUESTION
A developer is looking for a way to decrease the latency in retrieving data from an Amazon RDS MySQL database. He wants to implement a caching solution that supports Multi-AZ replication with sub-millisecond response times.
What must the developer do that requires the LEAST amount of effort?
Set up AWS Global Accelerator and integrate it with your application to improve overall performance.
Set up an ElastiCache for Memcached cluster between the application and database. Configure it to run with replication to achieve high availability.
Set up an ElastiCache for Redis cluster between the application and database. Configure it to run with replication to achieve high availability.
Convert the database schema using the AWS Schema Conversion Tool and move the data to DynamoDB. Enable Amazon DynamoDB Accelerator (DAX).
Correct
Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing.
Amazon ElastiCache offers fully managed Redis and Memcached for the most demanding applications that require sub-millisecond response times. However, Redis is the only ElastiCache engine that supports replication.
Hence, the correct answer is: Set up an ElastiCache for Redis cluster between the application and database. Configure it to run with replication to achieve high availability.
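For reference, a minimal boto3 sketch of creating such a cluster (the replication group ID, description, and node type are placeholders):

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="app-cache",
    ReplicationGroupDescription="Redis cache in front of the RDS MySQL database",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,             # one primary node + one read replica
    AutomaticFailoverEnabled=True,  # promote the replica if the primary fails
    MultiAZEnabled=True,            # place primary and replica in different AZs
)
```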
The option that says: Convert the database schema using the AWS Schema Conversion Tool and move the data to DynamoDB. Enable Amazon DynamoDB Accelerator (DAX) is incorrect. DAX is a fully managed in-memory cache for DynamoDB. You don’t have to change the schema of the MySQL database just to achieve a high-performing, highly available caching solution; you can readily do that with ElastiCache. Additionally, changing a schema takes a lot of work, which fails to meet the requirement for the least amount of effort.
The option that says: Set up an ElastiCache for Memcached cluster between the application and database. Configure it to run with replication to achieve high availability is incorrect. While Memcached provides sub-millisecond latency, it does not support replication.
The option that says: Set up AWS Global Accelerator and integrate it with your application to improve overall performance is incorrect because Global Accelerator is not a caching solution. It is a service used to improve the performance of your network traffic by utilizing the AWS global infrastructure instead of the public Internet.
References:
Check out this Amazon ElastiCache Cheat Sheet:
13. QUESTION
A development team has a serverless architecture composed of multiple Lambda functions that invoke one another. As the number of Lambda functions increases, the team finds it increasingly difficult to manage the coordination and dependencies between them, leading to errors, duplication of code, and difficulty debugging and troubleshooting issues.
Which refactorization should the team implement?
Create an AWS AppSync GraphQL API endpoint and configure each Lambda function as a resolver.
Create an AWS Step Functions state machine and convert each Lambda function into individual Task states.
Use AWS CodePipeline to define the source, build, and deployment stages for each Lambda function.
Use AWS AppConfig’s feature flag to gradually release new code changes to each Lambda function.
Correct
A state machine is a model of a system whose output depends on the entire history of its inputs, not just on the most recent input. In this case, the Lambda functions invoke one another, forming a large, implicit state machine.
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services, such as AWS Lambda, AWS Fargate, and Amazon SageMaker, into feature-rich applications.
Step Functions automatically triggers and tracks each step, and retries when there are errors so your application executes in order and as expected. With Step Functions, you can craft long-running workflows such as machine learning model training, report generation, and IT automation.
You can manage the coordination of a state machine in Step Functions using the Amazon States Language. The Amazon States Language is a JSON-based, structured language used to define your state machine: a collection of states that can do work (Task states), determine which states to transition to next (Choice states), stop execution with an error (Fail states), and so on.
Hence, the correct answer is Create an AWS Step Functions state machine and convert each Lambda function into individual Task states.
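As a rough sketch of what that refactoring could look like (function ARNs, state names, and the role ARN below are placeholders):

```python
import json

import boto3

definition = {
    "Comment": "Coordinate the existing Lambda functions as Task states",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Next": "ChargeCustomer",
        },
        "ChargeCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-customer",
            # Step Functions handles retries declaratively instead of in code.
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}],
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
```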
The option that says: Use AWS CodePipeline to define the source, build, and deployment stages for each Lambda function is incorrect. While this approach can help with the deployment process, it does not directly address the coordination and dependency issues between the Lambda functions. AWS CodePipeline is primarily used for automating the build, test, and deploy phases of your release process every time there is a code change.
The option that says: Create an AWS AppSync GraphQL API endpoint and configure each Lambda function as a resolver is incorrect. AWS AppSync is a service that enables you to query multiple sources from a single GraphQL API endpoint. While it provides features such as schema generation, resolvers, and real-time subscriptions, it is not specifically designed as a management tool for coordinating and managing the dependencies between multiple Lambda functions.
The option that says: Use AWS AppConfig’s feature flag to gradually release new code changes to each Lambda function is incorrect. While this approach can help with rolling out new code changes, it does not directly address the coordination and dependency issues between the Lambda functions.
References:
Check out this AWS Step Functions Cheat Sheet: