Security
Question 2: Incorrect
A developer is looking at establishing access control for an API that connects to a Lambda function downstream.
Which of the following represents a mechanism that CANNOT be used for authenticating with the API Gateway?
Standard AWS IAM roles and policies
(Incorrect)
Lambda Authorizer
AWS Security Token Service (STS)
(Correct)
Cognito User Pools
Explanation
Correct option:
Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud.
AWS Security Token Service (STS) - AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). However, it is not supported by API Gateway.
API Gateway supports the following mechanisms for authentication and authorization: resource policies, standard IAM roles and policies, IAM tags, endpoint policies for interface VPC endpoints, Lambda authorizers, and Amazon Cognito user pools.
Incorrect options:
Standard AWS IAM roles and policies - Standard AWS IAM roles and policies offer flexible and robust access controls that can be applied to an entire API or individual methods. IAM roles and policies can be used for controlling who can create and manage your APIs, as well as who can invoke them.
Lambda Authorizer - Lambda authorizers are Lambda functions that control access to REST API methods using bearer token authentication, as well as request parameter information such as headers, paths, query strings, stage variables, or context variables. Lambda authorizers are used to control who can invoke REST API methods.
Cognito User Pools - Amazon Cognito user pools let you create customizable authentication and authorization solutions for your REST APIs. Amazon Cognito user pools are used to control who can invoke REST API methods.
Question 4: Incorrect
A company runs its flagship application on a fleet of Amazon EC2 instances. After misplacing a couple of private keys from the SSH key pairs, they have decided to re-use their SSH key pairs for the different instances across AWS Regions.
As a Developer Associate, which of the following would you recommend to address this use-case?
Generate a public SSH key from a private SSH key. Then, import the key into each of your AWS Regions
(Correct)
Store the public and private SSH key pair in AWS Trusted Advisor and access it across AWS Regions
Encrypt the private SSH key and store it in the S3 bucket to be accessed from any AWS Region
It is not possible to reuse SSH key pairs across AWS Regions
(Incorrect)
Explanation
Correct option:
Generate a public SSH key from a private SSH key. Then, import the key into each of your AWS Regions
Here is the correct way of reusing SSH keys in your AWS Regions:
Generate a public SSH key (.pub) file from the private SSH key (.pem) file.
Set the AWS Region you wish to import to.
Import the public SSH key into the new Region.
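As a rough illustration of these steps, here is a minimal sketch using the standard ssh-keygen tool and boto3; the key file names, key pair name, and target Region are placeholders:

```python
import subprocess

import boto3

# 1. Derive the public key (.pub) from the existing private key (.pem); placeholder file names.
with open("my-key.pub", "w") as pub_file:
    subprocess.run(["ssh-keygen", "-y", "-f", "my-key.pem"], stdout=pub_file, check=True)

# 2. & 3. Import the public key into the target Region (placeholder Region and key pair name).
ec2 = boto3.client("ec2", region_name="eu-west-1")
with open("my-key.pub", "rb") as pub_file:
    ec2.import_key_pair(KeyName="my-key", PublicKeyMaterial=pub_file.read())
```

The same private .pem file can then be used to connect to instances launched with the imported key pair in that Region.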
Incorrect options:
It is not possible to reuse SSH key pairs across AWS Regions - As explained above, it is possible to reuse an SSH key pair across Regions by manually importing the public key into each Region.
Store the public and private SSH key pair in AWS Trusted Advisor and access it across AWS Regions - AWS Trusted Advisor is an application that draws upon best practices learned from AWS' aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps. It does not store key pair credentials.
Encrypt the private SSH key and store it in the S3 bucket to be accessed from any AWS Region - Storing the private SSH key in Amazon S3 is possible, but it does not by itself make the key pair usable in other AWS Regions; the public key still has to be imported into each Region, which is the need in the current use case.
Question 5: Correct
Consider an application that enables users to store their mobile phone images in the cloud and supports tens of thousands of users. The application should utilize an Amazon API Gateway REST API that leverages AWS Lambda functions for photo processing while storing photo details in Amazon DynamoDB. The application should allow users to create an account, upload images, and retrieve previously uploaded images, with images ranging in size from 500 KB to 5 MB.
How will you design the application with the least operational overhead?
Leverage Cognito user pools to manage user accounts and set up an Amazon Cognito user pool authorizer in API Gateway to control access to the API. Set up a Lambda function to store the images in Amazon S3 and save the image object's S3 key as part of the photo details in a DynamoDB table. Have the Lambda function retrieve previously uploaded images by querying DynamoDB for the S3 key
(Correct)
Use Cognito identity pools to create an IAM user for each user of the application during the sign-up process. Leverage IAM authentication in API Gateway to control access to the API. Set up a Lambda function to store the images in Amazon S3 and save the image object's S3 key as part of the photo details in a DynamoDB table. Have the Lambda function retrieve previously uploaded images by querying DynamoDB for the S3 key
Leverage Cognito user pools to manage user accounts and set up an Amazon Cognito user pool authorizer in API Gateway to control access to the API. Set up a Lambda function to store the images as well as the image metadata in a DynamoDB table. Have the Lambda function retrieve previously uploaded images from DynamoDB
Use Cognito identity pools to manage user accounts and set up an Amazon Cognito identity pool authorizer in API Gateway to control access to the API. Set up a Lambda function to store the images in Amazon S3 and save the image object's S3 key as part of the photo details in a DynamoDB table. Have the Lambda function retrieve previously uploaded images by querying DynamoDB for the S3 key
Explanation
Correct option:
Leverage Cognito user pools to manage user accounts and set up an Amazon Cognito user pool authorizer in API Gateway to control access to the API. Set up a Lambda function to store the images in Amazon S3 and save the image object's S3 key as part of the photo details in a DynamoDB table. Have the Lambda function retrieve previously uploaded images by querying DynamoDB for the S3 key
A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Your users can also sign in through social identity providers like Google, Facebook, Amazon, or Apple, and SAML identity providers. Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through a Software Development Kit (SDK).
User pools provide:
Sign-up and sign-in services.
A built-in, customizable web UI to sign in users.
Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple, as well as sign-in with SAML identity providers from your user pool.
User directory management and user profiles.
Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.
Customized workflows and user migration through AWS Lambda triggers.
To use an Amazon Cognito user pool with your Amazon API Gateway API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user into the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set to the request's Authorization header.
For the given use case, you can use a Cognito user pool to manage user accounts and configure an Amazon Cognito user pool authorizer in API Gateway to control access to the API. You should use a Lambda function to store the actual images on S3 and the image metadata on DynamoDB. Finally, you can get the images using the Lambda function that leverages the metadata stored in DynamoDB.
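As a minimal boto3 sketch of wiring this together (the REST API ID, resource ID, and user pool ARN below are placeholders, not values from the original scenario):

```python
import boto3

apigw = boto3.client("apigateway")

# Create an authorizer of type COGNITO_USER_POOLS on an existing REST API.
authorizer = apigw.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="photos-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# Configure a method to use the authorizer; clients must then send a user pool
# identity or access token in the Authorization header when calling the method.
apigw.update_method(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "COGNITO_USER_POOLS"},
        {"op": "replace", "path": "/authorizerId", "value": authorizer["id"]},
    ],
)
```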
Incorrect options:
Use Cognito identity pools to manage user accounts and set up an Amazon Cognito identity pool authorizer in API Gateway to control access to the API. Set up a Lambda function to store the images in Amazon S3 and save the image object's S3 key as part of the photo details in a DynamoDB table. Have the Lambda function retrieve previously uploaded images by querying DynamoDB for the S3 key
Use Cognito identity pools to create an IAM user for each user of the application during the sign-up process. Leverage IAM authentication in API Gateway to control access to the API. Set up a Lambda function to store the images in Amazon S3 and save the image object's S3 key as part of the photo details in a DynamoDB table. Have the Lambda function retrieve previously uploaded images by querying DynamoDB for the S3 key
Amazon Cognito identity pools (federated identities) enable you to create unique identities for your users and federate them with identity providers. With an identity pool, you can obtain temporary, limited-privilege AWS credentials to access other AWS services. You cannot use identity pools to manage users or to create IAM users. So both of these options are incorrect.
Leverage Cognito user pools to manage user accounts and set up an Amazon Cognito user pool authorizer in API Gateway to control access to the API. Set up a Lambda function to store the images as well as the image metadata in a DynamoDB table. Have the Lambda function retrieve previously uploaded images from DynamoDB - You cannot use DynamoDB to store images as the maximum allowed item size is 400KB and the images range in size from 500KB to 5MB. You should also note that storing images on DynamoDB is an anti-pattern. So this option is incorrect.
Question 7: Skipped
A university has created a student portal that is accessible through a smartphone app and web application. The smartphone app is available on both Android and iOS and the web application works on most major browsers. Students will be able to do group study online and create forum questions. All changes made via smartphone devices should be available even when offline and should synchronize with other devices.
Which of the following AWS services will meet these requirements?
Beanstalk
Cognito Identity Pools
Cognito User Pools
Cognito Sync
(Correct)
Explanation
Correct option:
Cognito Sync
Amazon Cognito Sync is an AWS service and client library that enables cross-device syncing of application-related user data. You can use it to synchronize user profile data across mobile devices and the web without requiring your own backend. The client libraries cache data locally so your app can read and write data regardless of device connectivity status. When the device is online, you can synchronize data, and if you set up push sync, notify other devices immediately that an update is available.
Incorrect options:
Cognito Identity Pools - You can use Identity pools to grant your users access to other AWS services. With an identity pool, your users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB. Identity pools support anonymous guest users, as well as the specific identity providers that you can use to authenticate users for identity pools.
Cognito User Pools - A Cognito user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito, or federate through a third-party identity provider (IdP). Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK.
Beanstalk - With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
Question 29: Skipped
A developer wants to securely store and retrieve various types of variables, such as remote API authentication information, API URL, and related credentials across different environments of an application deployed on Amazon Elastic Container Service (Amazon ECS).
What would be the best approach that needs minimal modifications in the application code?
Configure the application to fetch the variables from an encrypted file that is stored with the application by storing the API URL and credentials in unique files for each environment
Configure the application to fetch the variables and credentials from AWS Systems Manager Parameter Store by leveraging hierarchical unique paths in Parameter Store for each variable in each environment
(Correct)
Configure the application to fetch the variables from each of the deployed environments by defining the authentication information and API URL in the ECS task definition as unique names during the deployment process
Configure the application to fetch the variables from AWS KMS by storing the API URL and credentials as unique keys in KMS for each environment
Explanation
Correct option:
Configure the application to fetch the variables and credentials from AWS Systems Manager Parameter Store by leveraging hierarchical unique paths in Parameter Store for each variable in each environment
Parameter Store is a capability of AWS Systems Manager that provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data. You can reference Systems Manager parameters in your scripts, commands, SSM documents, and configuration and automation workflows by using the unique name that you specified when you created the parameter.
Managing dozens or hundreds of parameters as a flat list is time-consuming and prone to errors. It can also be difficult to identify the correct parameter for a task. This means you might accidentally use the wrong parameter, or you might create multiple parameters that use the same configuration data.
You can use parameter hierarchies to help you organize and manage parameters. A hierarchy is a parameter name that includes a path that you define by using forward slashes (/).
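A minimal boto3 sketch of this pattern, assuming a hypothetical /myapp/&lt;environment&gt;/&lt;variable&gt; naming convention:

```python
import boto3

ssm = boto3.client("ssm")

# Store the per-environment variables under a hierarchical path (placeholder names/values).
ssm.put_parameter(Name="/myapp/prod/api-url", Value="https://partner.example.com/api", Type="String", Overwrite=True)
ssm.put_parameter(Name="/myapp/prod/api-key", Value="s3cr3t-value", Type="SecureString", Overwrite=True)

# The application fetches everything under its environment's path in one call;
# WithDecryption=True transparently decrypts SecureString values.
response = ssm.get_parameters_by_path(Path="/myapp/prod", Recursive=True, WithDecryption=True)
config = {p["Name"].rsplit("/", 1)[-1]: p["Value"] for p in response["Parameters"]}
```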
Incorrect options:
Configure the application to fetch the variables from AWS KMS by storing the API URL and credentials as unique keys in KMS for each environment - AWS KMS lets you create, manage, and control cryptographic keys across your applications and AWS services. KMS is a key management service, not a key-value store for configuration data, so it cannot be used for the given use case.
Configure the application to fetch the variables from an encrypted file that is stored with the application by storing the API URL and credentials in unique files for each environment - It is not considered a security best practice to store sensitive data and credentials in an encrypted file with the application. So this option is incorrect.
Configure the application to fetch the variables from each of the deployed environments by defining the authentication information and API URL in the ECS task definition as unique names during the deployment process - ECS task definition can be thought of as a blueprint for your application. Task definitions specify various parameters for your application. Examples of task definition parameters are which containers to use, which launch type to use, which ports should be opened for your application, and what data volumes should be used with the containers in the task. The specific parameters available for the task definition depend on which launch type you are using. The task definition is a text file, in JSON format, that describes one or more containers, up to a maximum of ten, that form your application. A task is the instantiation of a task definition within a cluster. After you create a task definition for your application within Amazon ECS, you can specify the number of tasks to run on your cluster.
AWS recommends storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters. Environment variables specified in the task definition are readable by all users and roles that are allowed the DescribeTaskDefinition action for the task definition. So this option is incorrect.
Question 47: Skipped
A company wants to share information with a third party via an HTTP API endpoint managed by the third party. The company has the necessary API key to access the endpoint and the integration of the API key with the company's application code must not impact the application's performance.
What is the most secure approach?
Keep the API credentials in an encrypted table in MySQL RDS and use the credentials to make the API call by fetching the API credentials from RDS at runtime by using the AWS SDK
Keep the API credentials in AWS Secrets Manager and use the credentials to make the API call by fetching the API credentials at runtime by using the AWS SDK
(Correct)
Keep the API credentials in a local code variable and use the local code variable at runtime to make the API call
Keep the API credentials in an encrypted file in S3 and use the credentials to make the API call by fetching the API credentials from S3 at runtime by using the AWS SDK
Explanation
Correct option:
Keep the API credentials in AWS Secrets Manager and use the credentials to make the API call by fetching the API credentials at runtime by using the AWS SDK
Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure the secret can't be compromised by someone examining your code, because the secret no longer exists in the code. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a specified schedule. This enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise.
In the past, when you created a custom application to retrieve information from a database, you typically embedded the credentials, the secret, for accessing the database directly in the application. When the time came to rotate the credentials, you had to do more than just create new credentials. You had to invest time to update the application to use the new credentials. Then you distributed the updated application. If you had multiple applications with shared credentials and you missed updating one of them, the application failed. Because of this risk, many customers choose not to regularly rotate credentials, which effectively substitutes one risk for another. You can also use caching with Secrets Manager to significantly improve the availability and latency of applications.
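A minimal sketch of the runtime lookup with the AWS SDK for Python (the secret name and JSON field are placeholders):

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

# Retrieve the third-party API key at runtime; it never appears in code or config files.
secret = secrets.get_secret_value(SecretId="prod/partner-api")
api_key = json.loads(secret["SecretString"])["api_key"]

# api_key can now be used to call the third party's HTTP API endpoint.
```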
Incorrect options:
Keep the API credentials in an encrypted table in MySQL RDS and use the credentials to make the API call by fetching the API credentials from RDS at runtime by using the AWS SDK
Keep the API credentials in an encrypted file in S3 and use the credentials to make the API call by fetching the API credentials from S3 at runtime by using the AWS SDK
Keep the API credentials in a local code variable and use the local code variable at runtime to make the API call
It is considered a security bad practice to keep sensitive access credentials in code, database, or a flat file on a file system or object storage. Therefore, all three options are incorrect.
Question 56: Skipped
You have launched several AWS Lambda functions written in Java. A new requirement was given that over 1MB of data should be passed to the functions and should be encrypted and decrypted at runtime.
Which of the following methods is suitable to address the given use-case?
Use Envelope Encryption and reference the data as a file within the code
(Correct)
Use KMS direct encryption and store as file
Use KMS Encryption and store as environment variable
Use Envelope Encryption and store as environment variable
Explanation
Correct option:
Use Envelope Encryption and reference the data as a file within the code
While AWS KMS does support sending data up to 4 KB to be encrypted directly, envelope encryption can offer significant performance benefits. When you encrypt data directly with AWS KMS it must be transferred over the network. Envelope encryption reduces the network load since only the request and delivery of the much smaller data key go over the network. The data key is used locally in your application or encrypting AWS service, avoiding the need to send the entire block of data to AWS KMS and suffer network latency.
AWS Lambda environment variables can have a maximum size of 4 KB. Additionally, the direct 'Encrypt' API of KMS also has an upper limit of 4 KB for the data payload. To encrypt 1 MB of data, you need to use envelope encryption (for example, with the AWS Encryption SDK) and package the encrypted file with the Lambda function.
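To make the envelope-encryption idea concrete, here is a rough sketch that uses the KMS GenerateDataKey API together with the third-party cryptography package for the local encryption step; the KMS key alias and file names are placeholders, and the AWS Encryption SDK wraps this same pattern for you:

```python
import base64

import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# 1. Request a data key from KMS; only this small request crosses the network.
data_key = kms.generate_data_key(KeyId="alias/my-lambda-key", KeySpec="AES_256")

# 2. Encrypt the large (> 1 MB) payload locally with the plaintext data key.
fernet = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
with open("payload.json", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

# 3. Package the ciphertext and the *encrypted* data key with the function; at runtime the
#    function calls kms.decrypt() on the encrypted key and then decrypts the payload locally.
with open("payload.enc", "wb") as f:
    f.write(ciphertext)
with open("datakey.enc", "wb") as f:
    f.write(data_key["CiphertextBlob"])
```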
Incorrect options:
Use KMS direct encryption and store as file - You can only encrypt up to 4 kilobytes (4096 bytes) of arbitrary data such as an RSA key, a database password, or other sensitive information, so this option is not correct for the given use-case.
Use Envelope Encryption and store as an environment variable - Environment variables must not exceed 4 KB, so this option is not correct for the given use-case.
Use KMS Encryption and store as an environment variable - You can encrypt up to 4 kilobytes (4096 bytes) of arbitrary data such as an RSA key, a database password, or other sensitive information. Lambda Environment variables must not exceed 4 KB. So this option is not correct for the given use-case.
Question 59: Skipped
A pharmaceutical company uses Amazon EC2 instances for application hosting and Amazon CloudFront for content delivery. A new research paper with critical findings has to be shared with a research team that is spread across the world.
Which of the following represents the most optimal solution to address this requirement without compromising the security of the content?
Configure AWS Web Application Firewall (WAF) to monitor and control the HTTP and HTTPS requests that are forwarded to CloudFront
Using CloudFront's Field-Level Encryption to help protect sensitive data
Use CloudFront signed URL feature to control access to the file
(Correct)
Use CloudFront signed cookies feature to control access to the file
Explanation
Correct option:
Use CloudFront signed URL feature to control access to the file
A signed URL includes additional information, for example, expiration date and time, that gives you more control over access to your content.
Here's an overview of how you configure CloudFront for signed URLs and how CloudFront responds when a user uses a signed URL to request a file:
In your CloudFront distribution, specify one or more trusted key groups, which contain the public keys that CloudFront can use to verify the URL signature. You use the corresponding private keys to sign the URLs.
Develop your application to determine whether a user should have access to your content and to create signed URLs for the files or parts of your application that you want to restrict access to.
A user requests a file for which you want to require signed URLs. Your application verifies that the user is entitled to access the file: they've signed in, they've paid for access to the content, or they've met some other requirement for access.
Your application creates and returns a signed URL to the user. The signed URL allows the user to download or stream the content.
This step is automatic; the user usually doesn't have to do anything additional to access the content. For example, if a user is accessing your content in a web browser, your application returns the signed URL to the browser. The browser immediately uses the signed URL to access the file in the CloudFront edge cache without any intervention from the user.
CloudFront uses the public key to validate the signature and confirm that the URL hasn't been tampered with. If the signature is invalid, the request is rejected. If the request meets the requirements in the policy statement, CloudFront does the standard operations: determines whether the file is already in the edge cache, forwards the request to the origin if necessary, and returns the file to the user.
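A minimal sketch of generating a signed URL from application code using botocore's CloudFrontSigner and the third-party rsa package; the key pair ID, private key file, distribution domain, and expiry below are placeholders:

```python
from datetime import datetime, timedelta

import rsa
from botocore.signers import CloudFrontSigner


def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key that matches the public key in the distribution's trusted key group.
    with open("private_key.pem", "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")


# "K2JCJMDEHXQW5F" is a placeholder public key ID.
signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)

signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/research-paper.pdf",
    date_less_than=datetime.utcnow() + timedelta(hours=24),
)
```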
Incorrect options:
Use CloudFront signed cookies feature to control access to the file - CloudFront signed cookies allow you to control who can access your content when you don't want to change your current URLs or when you want to provide access to multiple restricted files, for example, all of the files in the subscribers' area of a website. The given use case involves only one file that needs to be shared, hence a signed URL is the optimal solution.
Signed URLs take precedence over signed cookies. If you use both signed URLs and signed cookies to control access to the same files and a viewer uses a signed URL to request a file, CloudFront determines whether to return the file to the viewer based only on the signed URL.
Configure AWS Web Application Firewall (WAF) to monitor and control the HTTP and HTTPS requests that are forwarded to CloudFront - AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to CloudFront, and lets you control access to your content. Based on conditions that you specify, such as the values of query strings or the IP addresses that requests originate from, CloudFront responds to requests either with the requested content or with an HTTP status code 403 (Forbidden). A web application firewall addresses broader use cases than restricting access to a single file, so it is not the optimal solution here.
Using CloudFront's Field-Level Encryption to help protect sensitive data - CloudFront's field-level encryption further encrypts sensitive data in an HTTPS form using field-specific encryption keys (which you supply) before a POST request is forwarded to your origin. This ensures that sensitive data can only be decrypted and viewed by certain components or services in your application stack. This feature is not useful for the given use case.
Question 63: Skipped
A developer is defining the signers that can create signed URLs for their Amazon CloudFront distributions.
Which of the following statements should the developer consider while defining the signers? (Select two)
Both the signers (trusted key groups and CloudFront key pairs) can be managed using the CloudFront APIs
When you use the root user to manage CloudFront key pairs, you can only have up to two active CloudFront key pairs per AWS account
(Correct)
You can also use AWS Identity and Access Management (IAM) permissions policies to restrict what the root user can do with CloudFront key pairs
When you create a signer, the public key is with CloudFront and the private key is used to sign a portion of the URL
(Correct)
CloudFront key pairs can be created with any account that has administrative permissions and full access to CloudFront resources
Explanation
Correct options:
When you create a signer, the public key is with CloudFront and the private key is used to sign a portion of the URL - Each signer that you use to create CloudFront signed URLs or signed cookies must have a public–private key pair. The signer uses its private key to sign the URL or cookies, and CloudFront uses the public key to verify the signature.
When you create signed URLs or signed cookies, you use the private key from the signer’s key pair to sign a portion of the URL or the cookie. When someone requests a restricted file, CloudFront compares the signature in the URL or cookie with the unsigned URL or cookie, to verify that it hasn’t been tampered with. CloudFront also verifies that the URL or cookie is valid, meaning, for example, that the expiration date and time haven’t passed.
When you use the root user to manage CloudFront key pairs, you can only have up to two active CloudFront key pairs per AWS account.
Whereas, with CloudFront key groups, you can associate a higher number of public keys with your CloudFront distribution, giving you more flexibility in how you use and manage the public keys. By default, you can associate up to four key groups with a single distribution, and you can have up to five public keys in a key group.
Incorrect options:
You can also use AWS Identity and Access Management (IAM) permissions policies to restrict what the root user can do with CloudFront key pairs - When you use the AWS account root user to manage CloudFront key pairs, you can’t restrict what the root user can do or the conditions in which it can do them. You can’t apply IAM permissions policies to the root user, which is one reason why AWS best practices recommend against using the root user.
CloudFront key pairs can be created with any account that has administrative permissions and full access to CloudFront resources - CloudFront key pairs can only be created using the root user account, hence it is not a best practice to use CloudFront key pairs as signers.
Both the signers (trusted key groups and CloudFront key pairs) can be managed using the CloudFront APIs - With CloudFront key groups, you can manage public keys, key groups, and trusted signers using the CloudFront API. You can use the API to automate key creation and key rotation. When you use the AWS root user, you have to use the AWS Management Console to manage CloudFront key pairs, so you can’t automate the process.
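For example, key creation with the CloudFront API can be scripted roughly like this; the key names, file name, and comments are placeholders:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Register a public key with CloudFront.
public_key = cloudfront.create_public_key(
    PublicKeyConfig={
        "CallerReference": "signing-key-2024-01",
        "Name": "portal-signing-key",
        "EncodedKey": open("public_key.pem").read(),
        "Comment": "Public key used to verify signed URLs",
    }
)

# Place the public key in a trusted key group that the distribution references.
key_group = cloudfront.create_key_group(
    KeyGroupConfig={
        "Name": "portal-key-group",
        "Items": [public_key["PublicKey"]["Id"]],
        "Comment": "Trusted key group for signed URLs",
    }
)
```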
Question 7: Skipped
Your application is deployed automatically using AWS Elastic Beanstalk. Your YAML configuration files are stored in the folder .ebextensions and new files are added or updated often. The DevOps team does not want to re-deploy the application every time there are configuration changes, instead, they would rather manage configuration externally, securely, and have it load dynamically into the application at runtime.
What option allows you to do this?
Use SSM Parameter Store
(Correct)
Use Stage Variables
Use Environment variables
Use S3
Explanation
Correct option:
Use SSM Parameter Store
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. For the given use-case, as the DevOps team does not want to re-deploy the application every time there are configuration changes, so they can use the SSM Parameter Store to store the configuration externally.
Incorrect options:
Use Environment variables - Environment variables provide another way to specify configuration options and credentials, and can be useful for scripting or temporarily setting a named profile as the default; however, your application is not running the AWS CLI. Since the use-case requires the configuration to be stored securely, environment variables are also ruled out: they are not encrypted at rest and are visible in clear text in the AWS Console as well as in the response of some actions of the Elastic Beanstalk API.
Use Stage Variables - You can use stage variables for managing multiple release stages for API Gateway, this is not what you are looking for here.
Use S3 - S3 offers the same benefit as the SSM Parameter Store where there are no servers to manage. With S3 you have to set encryption and choose other security options and there are more chances of misconfiguring security if you share your S3 bucket with other objects. You would have to create a custom setup to come close to the parameter store. Use Parameter Store and let AWS handle the rest.
Question 18: Skipped
Two policies are attached to an IAM user. The first policy states that the user has explicitly been denied all access to EC2 instances. The second policy states that the user has been allowed permission for EC2:Describe action.
When the user tries to use 'Describe' action on an EC2 instance using the CLI, what will be the output?
The user will get access because it has an explicit allow
The user will be denied access because one of the policies has an explicit deny on it
(Correct)
The IAM user stands in an invalid state, because of conflicting policies
The order of the policy matters. If policy 1 is before 2, then the user is denied access. If policy 2 is before 1, then the user is allowed access
Explanation
Correct option:
The user will be denied access because one of the policies has an explicit deny on it - The user will be denied access because an explicit deny overrides any allow.
Incorrect options:
The IAM user stands in an invalid state, because of conflicting policies - This is an incorrect statement. Policies can contain both allow and deny statements, and they are evaluated together according to the policy evaluation rules. A user account does not become invalid because of conflicting policies.
The user will get access because it has an explicit allow - As discussed above, explicit deny overrides all other permissions and hence the user will be denied access.
The order of the policy matters. If policy 1 is before 2, then the user is denied access. If policy 2 is before 1, then the user is allowed access - If policies that apply to a request include an Allow statement and a Deny statement, the Deny statement trumps the Allow statement. The request is explicitly denied.
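To illustrate the evaluation logic, consider two hypothetical policies resembling the ones in the question (written here as Python dictionaries); the explicit Deny in the first always wins over the Allow in the second, so the EC2 Describe calls are denied:

```python
# Policy 1: explicitly denies all EC2 actions.
deny_all_ec2 = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "ec2:*", "Resource": "*"}],
}

# Policy 2: allows the EC2 Describe actions.
allow_describe = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"}],
}
```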
Question 21: Skipped
A financial services company wants to ensure that the customer data is always kept encrypted on Amazon S3 but wants an AWS managed solution that allows full control to create, rotate and remove the encryption keys.
As a Developer Associate, which of the following would you recommend to address the given use-case?
Server-Side Encryption with Secrets Manager
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
Server-Side Encryption with Customer-Provided Keys (SSE-C)
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
(Correct)
Explanation
Correct option:
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
You have the following options for protecting data at rest in Amazon S3:
Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.
Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
When you use server-side encryption with AWS KMS (SSE-KMS), you can use the default AWS managed CMK, or you can specify a customer-managed CMK that you have already created.
Creating your own customer-managed CMK gives you more flexibility and control over the CMK. For example, you can create, rotate, and disable customer-managed CMKs. You can also define access controls and audit the customer-managed CMKs that you use to protect your data.
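A minimal boto3 sketch of uploading an object protected with SSE-KMS under a customer-managed key; the bucket name, object key, and key alias are placeholders:

```python
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="customer-data-bucket",
    Key="records/customer-123.json",
    Body=b'{"account": "example"}',
    ServerSideEncryption="aws:kms",          # use SSE-KMS rather than SSE-S3
    SSEKMSKeyId="alias/customer-data-cmk",   # customer-managed CMK you can create, rotate, and disable
)
```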
Incorrect options:
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) - When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key. As an additional safeguard, AWS encrypts the key itself with a master key that it regularly rotates. So this option is incorrect for the given use-case.
Server-Side Encryption with Customer-Provided Keys (SSE-C) - With Server-Side Encryption with Customer-Provided Keys (SSE-C), you will need to create the encryption keys as well as manage the corresponding process to rotate and remove the encryption keys. Amazon S3 manages the data encryption, as it writes to disks, as well as the data decryption when you access your objects. So this option is incorrect for the given use-case.
Server-Side Encryption with Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You cannot combine Server-Side Encryption with Secrets Manager to create, rotate, or disable the encryption keys.
Question 22: Skipped
An application runs on an EC2 instance and processes orders on a nightly basis. This EC2 instance needs to access the orders that are stored in S3.
How would you recommend the EC2 instance access the orders securely?
Create an IAM programmatic user and store the access key and secret access key on the EC2 ~/.aws/credentials file
Use EC2 User Data
Create an S3 bucket policy that authorizes public access
Use an IAM role
(Correct)
Explanation
Correct option:
Use an IAM role
IAM roles have been incorporated so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
Amazon EC2 uses an instance profile as a container for an IAM role. When you create an IAM role using the IAM console, the console creates an instance profile automatically and gives it the same name as the role to which it corresponds.
This is the most secure option as the role assigned to EC2 can be used to access S3 without storing any credentials onto the EC2 instance.
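As a quick sketch of what this looks like in application code running on the instance (the bucket and object key are placeholders), note that no credentials are configured anywhere:

```python
import boto3

# The SDK automatically obtains temporary credentials for the instance profile role
# from the instance metadata service; no access keys are stored on the instance.
s3 = boto3.client("s3")

orders = s3.get_object(Bucket="nightly-orders-bucket", Key="orders/2024-01-15.csv")
print(orders["Body"].read().decode())
```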
Incorrect options:
Create an IAM programmatic user and store the access key and secret access key on the EC2 ~/.aws/credentials file - While this would work, this is highly insecure as anyone gaining access to the EC2 instance would be able to steal the credentials stored in that file.
Use EC2 User Data - EC2 User Data is used to run bootstrap scripts at an instance's first launch. This option is not the right fit for the given use-case.
Create an S3 bucket policy that authorizes public access - While this would work, it would allow anyone to access your S3 bucket files. So this option is ruled out.
Question 25: Skipped
A developer wants to integrate user-specific file upload and download features in an application that uses both Amazon Cognito user pools and Cognito identity pools for secure access with Amazon S3. The developer also wants to ensure that only authorized users can access their own files and that the files are securely saved and retrieved. The files are 5 KB to 500 MB in size.
What do you recommend as the most efficient solution?
Use CloudFront Lambda@Edge to validate that the given file is uploaded to S3 and downloaded from S3 only by the authorized user
Use S3 Event Notifications to trigger a Lambda function that validates that the given file is uploaded and downloaded only by the authorized user
Integrate Amazon API Gateway with a Lambda function that validates that the given file is uploaded to S3 and downloaded from S3 only by the authorized user
Leverage an IAM policy with the Amazon Cognito identity prefix to restrict users to use their own folders in Amazon S3
(Correct)
Explanation
Correct option:
Leverage an IAM policy with the Amazon Cognito identity prefix to restrict users to use their own folders in Amazon S3
Amazon Cognito identity pools (federated identities) enable you to create unique identities for your users and federate them with identity providers. With an identity pool, you can obtain temporary, limited-privilege AWS credentials to access other AWS services. Amazon Cognito identity pools support the following identity providers:
Public providers: Login with Amazon (identity pools), Facebook (identity pools), Google (identity pools), Sign in with Apple (identity pools).
Amazon Cognito user pools
OpenID Connect providers (identity pools)
SAML identity providers (identity pools)
Developer authenticated identities (identity pools)
You can create an identity-based policy that allows Amazon Cognito users to access objects in a specific S3 bucket. This policy allows access only to objects with a name that includes Cognito, the name of the application, and the federated user's ID, represented by the ${cognito-identity.amazonaws.com:sub} variable.
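A sketch of such a policy, expressed here as a Python dictionary with a placeholder bucket name and application prefix:

```python
# Attached to the identity pool's authenticated role; each federated user can only
# read and write objects under their own folder, identified by their Cognito identity ID.
user_folder_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::photo-app-bucket/cognito/photo-app/${cognito-identity.amazonaws.com:sub}/*"
            ],
        }
    ],
}
```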
Incorrect options:
Use S3 Event Notifications to trigger a Lambda function that validates that the given file is uploaded and downloaded only by the authorized user - While it is certainly possible to build this solution, it is not the most optimal one as it does not prevent an invalid upload of a file into another user's designated folder. So this option is incorrect.
Integrate Amazon API Gateway with a Lambda function that validates that the given file is uploaded to S3 and downloaded from S3 only by the authorized user - Again, it is certainly possible to build this solution, but it is not the most optimal one as it does not prevent an invalid upload of a file into another user's designated folder. So this option is incorrect.
Use CloudFront Lambda@Edge to validate that the given file is uploaded to S3 and downloaded from S3 only by the authorized user - This option assumes that the solution comprises a CloudFront distribution. This introduces inefficiency in the solution, as one needs to pay for CloudFront/Lambda@Edge and adds unnecessary hops in the data flow for both uploads and downloads.
Question 26: Skipped
A HealthCare mobile app uses proprietary Machine Learning algorithms to provide early diagnosis using patient health metrics. To protect this sensitive data, the development team wants to transition to a scalable user management system with log-in/sign-up functionality that also supports Multi-Factor Authentication (MFA)
Which of the following options can be used to implement a solution with the LEAST amount of development effort? (Select two)
Use Amazon SNS to send Multi-Factor Authentication (MFA) code via SMS to mobile app users
Use Lambda functions and DynamoDB to create a custom solution for user management
Use Amazon Cognito for user-management and facilitating the log-in/sign-up process
(Correct)
Use Amazon Cognito to enable Multi-Factor Authentication (MFA) when users log-in
(Correct)
Use Lambda functions and RDS to create a custom solution for user management
Explanation
Correct options:
Use Amazon Cognito for user-management and facilitating the log-in/sign-up process
Use Amazon Cognito to enable Multi-Factor Authentication (MFA) when users log-in
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
A Cognito user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito, or federate through a third-party identity provider (IdP). Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK.
Cognito user pools provide support for sign-up and sign-in services as well as security features such as multi-factor authentication (MFA).
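For instance, MFA can be required for an existing user pool with a single API call; the user pool ID is a placeholder, and this sketch enables software token (TOTP) MFA:

```python
import boto3

cognito_idp = boto3.client("cognito-idp")

cognito_idp.set_user_pool_mfa_config(
    UserPoolId="us-east-1_EXAMPLE",
    SoftwareTokenMfaConfiguration={"Enabled": True},
    MfaConfiguration="ON",  # require MFA for all users at sign-in
)
```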
Incorrect options:
Use Lambda functions and DynamoDB to create a custom solution for user management
Use Lambda functions and RDS to create a custom solution for user management
As the problem statement mentions that the solution should require the least amount of development effort, so you cannot use Lambda functions with DynamoDB or RDS to create a custom solution. So both these options are incorrect.
Use Amazon SNS to send Multi-Factor Authentication (MFA) code via SMS to mobile app users - Amazon SNS cannot be used to send MFA codes via SMS to the user's mobile devices as this functionality is only meant to be used for IAM users. An SMS (short message service) MFA device can be any mobile device with a phone number that can receive standard SMS text messages. AWS will soon end support for SMS multi-factor authentication (MFA).
Question 27: Skipped
A telecom service provider stores its critical customer data on Amazon Simple Storage Service (Amazon S3).
Which of the following options can be used to control access to data stored on Amazon S3? (Select two)
Query String Authentication, Permissions boundaries
Query String Authentication, Access Control Lists (ACLs)
(Correct)
IAM database authentication, Bucket policies
Permissions boundaries, Identity and Access Management (IAM) policies
Bucket policies, Identity and Access Management (IAM) policies
(Correct)
Explanation
Correct options:
Bucket policies, Identity and Access Management (IAM) policies
Query String Authentication, Access Control Lists (ACLs)
Customers may use four mechanisms for controlling access to Amazon S3 resources: Identity and Access Management (IAM) policies, bucket policies, Access Control Lists (ACLs), and Query String Authentication.
IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account. With IAM policies, customers can grant IAM users fine-grained control to their Amazon S3 bucket or objects while also retaining full control over everything the users do.
With bucket policies, customers can define rules which apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on an aspect of the request, such as HTTP referrer and IP address.
With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object.
With Query String Authentication, customers can create a URL to an Amazon S3 object which is only valid for a limited time. Using query parameters to authenticate requests is useful when you want to express a request entirely in a URL. This method is also referred to as presigning a URL.
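A minimal boto3 sketch of query string authentication, i.e. generating a presigned URL; the bucket, key, and expiry are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# URL is valid for 15 minutes; the signature and expiry travel as query parameters.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "customer-data-bucket", "Key": "reports/summary.pdf"},
    ExpiresIn=900,
)
print(url)
```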
Incorrect options:
Permissions boundaries, Identity and Access Management (IAM) policies
Query String Authentication, Permissions boundaries
IAM database authentication, Bucket policies
Permissions boundary - A Permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity's permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries. When you use a policy to set the permissions boundary for a user, it limits the user's permissions but does not provide permissions on its own.
IAM database authentication - IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token. It is a database authentication technique and cannot be used to authenticate for S3.
Therefore, all three options are incorrect.
Question 30: Skipped
A company has a workload that requires 14,000 consistent IOPS for data that must be durable and secure. The compliance standards of the company state that the data should be secure at every stage of its lifecycle on all of the EBS volumes they use.
Which of the following statements are true regarding data security on EBS?
EBS volumes do not support in-flight encryption but do support encryption at rest using KMS
EBS volumes support in-flight encryption but do not support encryption at rest
EBS volumes support both in-flight encryption and encryption at rest using KMS
(Correct)
EBS volumes don't support any encryption
Explanation
Correct option:
Amazon EBS works with AWS KMS to encrypt and decrypt your EBS volume. You can encrypt both the boot and data volumes of an EC2 instance. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
Data at rest inside the volume
All data moving between the volume and the instance
All snapshots created from the volume
All volumes created from those snapshots
EBS volumes support both in-flight encryption and encryption at rest using KMS - This is a correct statement. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.
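As a rough boto3 sketch, an encrypted volume that satisfies the stated IOPS requirement could be created like this; the Availability Zone, size, and key alias are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,                       # GiB
    VolumeType="gp3",
    Iops=14000,                     # consistent provisioned IOPS
    Encrypted=True,                 # data at rest and data moving to/from the instance are encrypted
    KmsKeyId="alias/ebs-data-key",  # customer-managed KMS key
)
```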
Incorrect options:
EBS volumes support in-flight encryption but do not support encryption at rest - This is an incorrect statement. As discussed above, all data moving between the volume and the instance is encrypted.
EBS volumes do not support in-flight encryption but do support encryption at rest using KMS - This is an incorrect statement. As discussed above, data at rest is also encrypted.
EBS volumes don't support any encryption - This is an incorrect statement. Amazon EBS encryption offers a straight-forward encryption solution for your EBS resources associated with your EC2 instances. With Amazon EBS encryption, you aren't required to build, maintain, and secure your own key management infrastructure.
Question 33: Skipped
The Development team at a media company is working on securing their databases.
Which of the following AWS database engines can be configured with IAM Database Authentication? (Select two)
RDS PostgreSQL
(Correct)
RDS SQL Server
RDS Db2
RDS Oracle
RDS MySQL
(Correct)
Explanation
Correct options:
You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token. An authentication token is a unique string of characters that Amazon RDS generates on request. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM.
RDS MySQL - IAM database authentication works with the MySQL and PostgreSQL engines for Aurora, as well as the MySQL, MariaDB, and PostgreSQL engines for RDS.
RDS PostgreSQL - IAM database authentication works with the MySQL and PostgreSQL engines for Aurora, as well as the MySQL, MariaDB, and PostgreSQL engines for RDS.
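As a short sketch of how an application obtains the authentication token with boto3 (the hostname, port, and database user are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Generates a token valid for 15 minutes; it is passed as the password when the
# MySQL or PostgreSQL connection is opened over SSL/TLS.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abcdefg1234.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",
)
```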
Incorrect options:
RDS Oracle
RDS SQL Server
These two options contradict the details in the explanation above, so these are incorrect.
RDS Db2 - This option has been added as a distractor. Db2 is a family of data management products, including database servers, developed by IBM. IAM database authentication is not supported for the Db2 engine, so this option is incorrect.
Question 35: Skipped
A digital marketing company has its website hosted on an Amazon S3 bucket A. The development team notices that the web fonts that are hosted on another S3 bucket B are not loading on the website.
Which of the following solutions can be used to address this issue?
Configure CORS on the bucket A that is hosting the website to allow any origin to respond to requests
Configure CORS on the bucket B that is hosting the web fonts to allow Bucket A origin to make the requests
(Correct)
Enable versioning on both the buckets to facilitate correct functioning of the website
Update bucket policies on both bucket A and bucket B to allow successful loading of the web fonts on the website
Explanation
Correct option:
Configure CORS on the bucket B that is hosting the web fonts to allow Bucket A origin to make the requests
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
To configure your bucket to allow cross-origin requests, you create a CORS configuration, which is an XML document with rules that identify the origins that you will allow to access your bucket, the operations (HTTP methods) that will support for each origin, and other operation-specific information.
For the given use-case, you would create a <CORSRule> in <CORSConfiguration> for bucket B to allow access from the S3 website origin hosted on bucket A.
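A minimal boto3 sketch of such a CORS configuration on bucket B; the bucket names and website endpoint are placeholders:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="bucket-b-webfonts",
    CORSConfiguration={
        "CORSRules": [
            {
                # Allow only the website origin served from bucket A.
                "AllowedOrigins": ["http://bucket-a-website.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```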
Incorrect options:
Enable versioning on both the buckets to facilitate the correct functioning of the website - This option is a distractor and versioning will not help to address the web fonts loading issue on the website.
Update bucket policies on both bucket A and bucket B to allow successful loading of the web fonts on the website - It's not the bucket policies but the CORS configuration on bucket B that needs to be updated to allow web fonts to be loaded on the website. Updating bucket policies will not help to address the web fonts loading issue on the website.
Configure CORS on the bucket A that is hosting the website to allow any origin to respond to requests - The CORS configuration needs to be updated on bucket B to allow web fonts to be loaded on the website hosted on bucket A. So this option is incorrect.
Question 38: Skipped
Your team maintains a public API Gateway that is accessed by clients from another domain. Usage has been consistent for the last few months but recently it has more than doubled. As a result, your costs have gone up and you would like to prevent other unauthorized domains from accessing your API.
Which of the following actions should you take?
Restrict access by using CORS
(Correct)
Use Account-level throttling
Assign a Security Group to your API Gateway
Use Mapping Templates
Explanation
Correct option:
Restrict access by using CORS - Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. When your API's resources receive requests from a domain other than the API's own domain and you want to restrict servicing these requests, you must disable cross-origin resource sharing (CORS) for selected methods on the resource.
Incorrect options:
Use Account-level throttling - To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API. By default, API Gateway limits the steady-state request rate to 10,000 requests per second (rps). It limits the burst (that is, the maximum bucket size) to 5,000 requests across all APIs within an AWS account. This is account-level throttling. As you can see, this is about limiting the number of requests and is not a suitable answer for the current scenario.
Use Mapping Templates - A mapping template is a script expressed in Velocity Template Language (VTL) and applied to the payload using JSONPath expressions. Mapping templates help format/structure the data in a way that makes it easily readable, unlike a raw server response that might not always be easy to read. Mapping Templates have nothing to do with access control and are not useful for the current scenario.
Assign a Security Group to your API Gateway - API Gateway does not use security groups but uses resource policies, which are JSON policy documents that you attach to an API to control whether a specified principal (typically an IAM user or role) can invoke the API. You can restrict access by IP address this way, but the downside is that an IP address can be changed by the accessing user. So, this is not an optimal solution for the current use case.
Question 49: Skipped
A developer wants to securely store an access token that allows a transaction-processing application running on Amazon EC2 instances to authenticate and send a chat message (via the chat API) to the company's support team when an invalid transaction is detected. While minimizing management overhead, the chat API access token must be encrypted both at rest and in transit, and also be accessible from other AWS accounts.
What is the most efficient solution to address this scenario?
Leverage AWS Secrets Manager with an AWS KMS customer-managed key to store the access token as a secret and configure a resource-based policy for the secret to allow access from other accounts. Modify the IAM role of the EC2 instances with permissions to access Secrets Manager. Fetch the token from Secrets Manager and then use the decrypted access token to send the message to the chat
(Correct)
Store AWS KMS encrypted access token in a DynamoDB table and configure a resource-based policy for the DynamoDB table to allow access from other accounts. Modify the IAM role of the EC2 instances with permissions to access the DynamoDB table. Fetch the token from the Dynamodb table and then use the decrypted access token to send the message to the chat
Leverage SSE-KMS to store the access token as an encrypted object on S3 and configure a resource-based policy for the S3 bucket to allow access from other accounts. Modify the IAM role of the EC2 instances with permissions to access the S3 object. Fetch the token from S3 and then use the decrypted access token to send the message to the chat
Leverage AWS Systems Manager Parameter Store with an AWS KMS customer-managed key to store the access token as a SecureString parameter and configure a resource-based policy for the parameter to allow access from other accounts. Modify the IAM role of the EC2 instances with permissions to access Parameter Store. Fetch the token from Parameter Store using the with decryption flag and then use the decrypted access token to send the message to the chat
Explanation
Correct option:
Leverage AWS Secrets Manager with an AWS KMS customer-managed key to store the access token as a secret and configure a resource-based policy for the secret to allow access from other accounts. Modify the IAM role of the EC2 instances with permissions to access Secrets Manager. Fetch the token from Secrets Manager and then use the decrypted access token to send the message to the chat
AWS Secrets Manager is an AWS service that encrypts and stores your secrets, and transparently decrypts and returns them to you in plaintext. It's designed especially to store application secrets, such as login credentials, that change periodically and should not be hard-coded or stored in plaintext in the application. In place of hard-coded credentials or table lookups, your application calls Secrets Manager.
Secrets Manager also supports features that periodically rotate the secrets associated with commonly used databases. It always encrypts newly rotated secrets before they are stored.
Secrets Manager integrates with AWS Key Management Service (AWS KMS) to encrypt every version of every secret value with a unique data key that is protected by an AWS KMS key. This integration protects your secrets under encryption keys that never leave AWS KMS unencrypted. It also enables you to set custom permissions on the KMS key and audit the operations that generate, encrypt, and decrypt the data keys that protect your secrets.
To grant permission to retrieve secret values, you can attach policies to secrets or identities.
For the given use case, you can attach a resource-based policy to the secret to allow access from other accounts. Then you need to update the IAM role of the EC2 instances with permissions to access Secrets Manager, so that the application can retrieve the token from Secrets Manager and use the decrypted access token to send the message to the support team via the chat API.
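A minimal sketch of the retrieval step, assuming boto3 on the EC2 instance and a hypothetical secret name chat/api-token (the send_chat_message call is a placeholder for the chat API):

```python
import boto3

secrets_client = boto3.client("secretsmanager", region_name="us-east-1")  # region is an assumption

def get_chat_token() -> str:
    # Secrets Manager decrypts the secret with the customer-managed CMK
    # and returns the plaintext value over TLS.
    response = secrets_client.get_secret_value(SecretId="chat/api-token")  # hypothetical secret name
    return response["SecretString"]

# token = get_chat_token()
# send_chat_message(token, "Invalid transaction detected")  # placeholder chat API call
```

Note that for cross-account access, the key policy of the customer-managed CMK must also allow the other account to use the key for decryption, in addition to the resource-based policy on the secret.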
Incorrect options:
Leverage AWS Systems Manager Parameter Store with an AWS KMS customer-managed key to store the access token as a SecureString parameter and configure a resource-based policy for the parameter to allow access from other accounts. Modify the IAM role of the EC2 instances with permissions to access Parameter Store. Fetch the token from Parameter Store using the with decryption flag and then use the decrypted access token to send the message to the chat - You cannot use a resource-based policy with a parameter in the Parameter Store. Parameter Store supports parameter policies that are available for parameters that use the advanced parameters tier. Parameter policies help you manage a growing set of parameters by allowing you to assign specific criteria to a parameter such as an expiration date or time to live. Parameter policies are especially helpful in forcing you to update or delete passwords and configuration data stored in Parameter Store, a capability of AWS Systems Manager. So this option is incorrect.
Store AWS KMS encrypted access token in a DynamoDB table and configure a resource-based policy for the DynamoDB table to allow access from other accounts. Modify the IAM role of the EC2 instances with permissions to access the DynamoDB table. Fetch the token from the DynamoDB table and then use the decrypted access token to send the message to the chat - You should note that DynamoDB does not support resource-based policies. Moreover, it's a bad security practice to keep sensitive access credentials in code, a database, or a flat file on a file system or object storage. Therefore, this option is incorrect.
Leverage SSE-KMS to store the access token as an encrypted object on S3 and configure a resource-based policy for the S3 bucket to allow access from other accounts. Modify the IAM role of the EC2 instances with permissions to access the S3 object. Fetch the token from S3 and then use the decrypted access token to send the message to the chat - It is considered a bad security practice to keep sensitive access credentials in code, a database, or a flat file on a file system or object storage. Therefore, this option is incorrect.
References:
Question 50: Skipped
An e-commerce company has an order processing workflow with several tasks to be done in parallel as well as decision steps to be evaluated for successful processing of the order. All the tasks are implemented via Lambda functions.
Which of the following is the BEST solution to meet these business requirements?
Use AWS Glue to orchestrate the workflow
Use AWS Step Functions state machines to orchestrate the workflow
(Correct)
Use AWS Batch to orchestrate the workflow
Use AWS Step Functions activities to orchestrate the workflow
Explanation
Correct option:
Use AWS Step Functions state machines to orchestrate the workflow
AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly.
The following are key features of AWS Step Functions:
Step Functions are based on the concepts of tasks and state machines. You define state machines using the JSON-based Amazon States Language. A state machine is defined by the states it contains and the relationships between them. States are elements in your state machine. Individual states can make decisions based on their input, perform actions, and pass output to other states. In this way, a state machine can orchestrate workflows.
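As a sketch of how a state machine can express both decision steps and parallel tasks for the order-processing workflow, an Amazon States Language definition might look roughly like the following (state names and Lambda ARNs are placeholders):

```python
import json

# Placeholder ARNs; substitute your own Lambda functions.
state_machine_definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
            "Next": "IsOrderValid",
        },
        "IsOrderValid": {  # decision step
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.valid", "BooleanEquals": True, "Next": "ProcessInParallel"}
            ],
            "Default": "OrderFailed",
        },
        "ProcessInParallel": {  # tasks executed in parallel
            "Type": "Parallel",
            "Branches": [
                {
                    "StartAt": "ChargePayment",
                    "States": {
                        "ChargePayment": {
                            "Type": "Task",
                            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ChargePayment",
                            "End": True,
                        }
                    },
                },
                {
                    "StartAt": "ReserveInventory",
                    "States": {
                        "ReserveInventory": {
                            "Type": "Task",
                            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ReserveInventory",
                            "End": True,
                        }
                    },
                },
            ],
            "End": True,
        },
        "OrderFailed": {"Type": "Fail", "Cause": "Order validation failed"},
    },
}

print(json.dumps(state_machine_definition, indent=2))
```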
Incorrect options:
Use AWS Step Functions activities to orchestrate the workflow - In AWS Step Functions, activities are a way to associate code running somewhere (known as an activity worker) with a specific task in a state machine. When a Step Functions state machine reaches an activity task state, the workflow waits for an activity worker to poll for a task. For example, an activity worker can be an application running on an Amazon EC2 instance or an AWS Lambda function. AWS Step Functions activities on their own cannot orchestrate a workflow.
Use AWS Glue to orchestrate the workflow - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue cannot orchestrate a workflow.
Use AWS Batch to orchestrate the workflow - AWS Batch runs batch computing jobs on the AWS Cloud. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. AWS Batch cannot orchestrate a workflow.
Reference:
Question 54: Skipped
A large firm stores its static data assets on Amazon S3 buckets. Each service line of the firm has its own AWS account. For a business use case, the Finance department needs to give access to their S3 bucket's data to the Human Resources department.
Which of the below options is NOT feasible for cross-account access of S3 bucket objects?
Use Cross-account IAM roles for programmatic and console access to S3 bucket objects
Use Access Control List (ACL) and IAM policies for programmatic-only access to S3 bucket objects
Use Resource-based policies and AWS Identity and Access Management (IAM) policies for programmatic-only access to S3 bucket objects
Use IAM roles and resource-based policies to delegate access across accounts within different partitions via programmatic access only
(Correct)
Explanation
Correct option:
Use IAM roles and resource-based policies to delegate access across accounts within different partitions via programmatic access only - This statement is incorrect and hence the right choice for this question. IAM roles and resource-based policies delegate access across accounts only within a single partition. For example, assume that you have an account in US West (N. California) in the standard aws partition. You also have an account in China (Beijing) in the aws-cn partition. You can't use an Amazon S3 resource-based policy in your account in China (Beijing) to allow access for users in your standard AWS account.
Incorrect options:
Use Resource-based policies and AWS Identity and Access Management (IAM) policies for programmatic-only access to S3 bucket objects - Use bucket policies to manage cross-account control and audit the S3 object's permissions. If you apply a bucket policy at the bucket level, you can define who can access (Principal element), which objects they can access (Resource element), and how they can access (Action element). Applying a bucket policy at the bucket level allows you to define granular access to different objects inside the bucket by using multiple policies to control access. You can also review the bucket policy to see who can access objects in an S3 bucket.
Use Access Control List (ACL) and IAM policies for programmatic-only access to S3 bucket objects - Use object ACLs to manage permissions only for specific scenarios and only if ACLs meet your needs better than IAM and S3 bucket policies. Amazon S3 ACLs allow users to define only the following permissions sets: READ, WRITE, READ_ACP, WRITE_ACP, and FULL_CONTROL. You can use only an AWS account or one of the predefined Amazon S3 groups as a grantee for the Amazon S3 ACL.
Use Cross-account IAM roles for programmatic and console access to S3 bucket objects - Not all AWS services support resource-based policies. This means that you can use cross-account IAM roles to centralize permission management when providing cross-account access to multiple services. Using cross-account IAM roles simplifies provisioning cross-account access to S3 objects that are stored in multiple S3 buckets, removing the need to manage multiple policies for S3 buckets. This method allows cross-account access to objects that are owned or uploaded by another AWS account or AWS services. If you don't use cross-account IAM roles, the object ACL must be modified.
References:
Question 57: Skipped
A high-frequency stock trading firm is migrating their messaging queues from self-managed message-oriented middleware systems to Amazon SQS. The development team at the company wants to minimize the costs of using SQS.
As a Developer Associate, which of the following options would you recommend to address the given use-case?
Use SQS message timer to retrieve messages from your Amazon SQS queues
Use SQS long polling to retrieve messages from your Amazon SQS queues
(Correct)
Use SQS short polling to retrieve messages from your Amazon SQS queues
Use SQS visibility timeout to retrieve messages from your Amazon SQS queues
Explanation
Correct option:
Use SQS long polling to retrieve messages from your Amazon SQS queues
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
Amazon SQS provides short polling and long polling to receive messages from a queue. By default, queues use short polling. With short polling, Amazon SQS sends the response right away, even if the query found no messages. With long polling, Amazon SQS sends a response after it collects at least one available message, up to the maximum number of messages specified in the request. Amazon SQS sends an empty response only if the polling wait time expires.
Long polling makes it inexpensive to retrieve messages from your Amazon SQS queue as soon as the messages are available. Long polling helps reduce the cost of using Amazon SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request) and eliminating false empty responses (when messages are available but aren't included in a response). When the wait time for the ReceiveMessage API action is greater than 0, long polling is in effect. The maximum long polling wait time is 20 seconds.
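A minimal boto3 sketch of long polling (the queue URL is a placeholder); setting WaitTimeSeconds above 0 on the ReceiveMessage call enables it:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # region is an assumption
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"  # placeholder

# WaitTimeSeconds > 0 enables long polling; 20 seconds is the maximum wait time.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)

for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```

Alternatively, you can enable long polling for every consumer by setting the ReceiveMessageWaitTimeSeconds attribute on the queue itself.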
Incorrect options:
Use SQS short polling to retrieve messages from your Amazon SQS queues - With short polling, Amazon SQS sends the response right away, even if the query found no messages. You end up paying more because of the increased number of empty receives.
Use SQS visibility timeout to retrieve messages from your Amazon SQS queues - Visibility timeout is a period during which Amazon SQS prevents other consumers from receiving and processing a given message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. You cannot use visibility timeout to retrieve messages from your Amazon SQS queues. This option has been added as a distractor.
Use SQS message timer to retrieve messages from your Amazon SQS queues - You can use message timers to set an initial invisibility period for a message added to a queue. So, if you send a message with a 60-second timer, the message isn't visible to consumers for its first 60 seconds in the queue. The default (minimum) delay for a message is 0 seconds. The maximum is 15 minutes. You cannot use message timer to retrieve messages from your Amazon SQS queues. This option has been added as a distractor.
Reference:
Question 5: Skipped
You have a popular web application that accesses data stored in an Amazon Simple Storage Service (S3) bucket. Developers use the SDK to maintain the application and add new features. The security compliance team requires that all new objects uploaded to S3 be encrypted using SSE-S3 at the time of upload. Which of the following headers must the developers add to their request?
'x-amz-server-side-encryption': 'aws:kms'
'x-amz-server-side-encryption': 'AES256'
(Correct)
'x-amz-server-side-encryption': 'SSE-S3'
'x-amz-server-side-encryption': 'SSE-KMS'
Explanation
Correct option:
'x-amz-server-side-encryption': 'AES256'
Server-side encryption protects data at rest. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it rotates regularly. Amazon S3 server-side encryption uses one of the strongest block ciphers available to encrypt your data, 256-bit Advanced Encryption Standard (AES-256).
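With the SDK (shown here as a boto3 sketch; bucket and key names are placeholders), the header is set through the ServerSideEncryption parameter:

```python
import boto3

s3 = boto3.client("s3")

# boto3 sends the 'x-amz-server-side-encryption: AES256' header for you,
# so the object is encrypted with S3-managed keys (SSE-S3) at upload time.
s3.put_object(
    Bucket="my-example-bucket",          # placeholder bucket
    Key="reports/summary.txt",           # placeholder key
    Body=b"hello world",
    ServerSideEncryption="AES256",
)
```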
Incorrect options:
'x-amz-server-side-encryption': 'SSE-S3' - SSE-S3 (Amazon S3-Managed Keys) is an option available, but it's not a valid header value.
'x-amz-server-side-encryption': 'SSE-KMS' - SSE-KMS (AWS KMS-Managed Keys) is an option available but it's not a valid header value. A valid value would be 'aws:kms'
'x-amz-server-side-encryption': 'aws:kms' - Server-side encryption is the encryption of data at its destination by the application or service that receives it. AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. Amazon S3 uses AWS KMS customer master keys (CMKs) to encrypt your Amazon S3 objects. AWS KMS encrypts only the object data. Any object metadata is not encrypted.
This is a valid header value, and you can use it if you need more control over your keys, such as creating, rotating, and disabling them using AWS KMS. Otherwise, if you wish to let Amazon S3 manage your keys, just stick with SSE-S3.
References:
Question 7: Skipped
An Amazon Simple Queue Service (SQS) queue has to be configured between two AWS accounts for shared access to the queue. AWS account A has the SQS queue in its account and AWS account B has to be given access to this queue.
Which of the following options need to be combined to allow this cross-account access? (Select three)
The account B administrator creates an IAM role and attaches a trust policy to the role with account B as the principal
The account A administrator creates an IAM role and attaches a permissions policy
(Correct)
The account B administrator delegates the permission to assume the role to any users in account B
(Correct)
The account A administrator attaches a trust policy to the role that identifies account B as the principal who can assume the role
(Correct)
The account A administrator delegates the permission to assume the role to any users in account A
The account A administrator attaches a trust policy to the role that identifies account B as the AWS service principal who can assume the role
Explanation
Correct options:
The account A administrator creates an IAM role and attaches a permissions policy
The account A administrator attaches a trust policy to the role that identifies account B as the principal who can assume the role
The account B administrator delegates the permission to assume the role to any users in account B
To grant cross-account permissions, you need to attach an identity-based permissions policy to an IAM role. For example, the AWS account A administrator can create a role to grant cross-account permissions to AWS account B as follows:
The account A administrator creates an IAM role and attaches a permissions policy—that grants permissions on resources in account A—to the role.
The account A administrator attaches a trust policy to the role that identifies account B as the principal who can assume the role.
The account B administrator delegates the permission to assume the role to any users in account B. This allows users in account B to create or access queues in account A.
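A sketch of the account A side using boto3; the account IDs (111111111111 for account A, 222222222222 for account B), role name, and queue ARN are all placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: account B is the principal allowed to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},  # account B (placeholder)
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: what the role may do with the queue in account A.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sqs:SendMessage", "sqs:ReceiveMessage"],
        "Resource": "arn:aws:sqs:us-east-1:111111111111:shared-queue",  # placeholder
    }],
}

iam.create_role(
    RoleName="CrossAccountQueueAccess",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="CrossAccountQueueAccess",
    PolicyName="SharedQueueAccess",
    PolicyDocument=json.dumps(permissions_policy),
)
```

The account B administrator then grants selected users in account B an identity-based policy allowing sts:AssumeRole on this role's ARN.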
Incorrect options:
The account B administrator creates an IAM role and attaches a trust policy to the role with account B as the principal - As mentioned above, the account A administrator needs to create an IAM role and then attach a permissions policy. So, this option is incorrect.
The account A administrator delegates the permission to assume the role to any users in account A - This is irrelevant, as users in account B need to be given access.
The account A administrator attaches a trust policy to the role that identifies account B as the AWS service principal who can assume the role - AWS service principal is given as principal in the trust policy when you need to grant the permission to assume the role to an AWS service. The given use case talks about giving permission to another account. So, service principal is not an option here.
Reference:
Question 8: Skipped
A cybersecurity company is publishing critical log data to a log group in Amazon CloudWatch Logs, which was created 3 months ago. The company must encrypt the log data using an AWS KMS customer master key (CMK), so any future data can be encrypted to meet the company’s security guidelines.
How can the company address this use-case?
Enable the encrypt feature on the log group via the CloudWatch Logs console
Use the AWS CLI describe-log-groups command and specify the KMS key ARN
Use the AWS CLI create-log-group command and specify the KMS key ARN
Use the AWS CLI associate-kms-key command and specify the KMS key ARN
(Correct)
Explanation
Correct option:
Use the AWS CLI associate-kms-key command and specify the KMS key ARN
Log group data is always encrypted in CloudWatch Logs. You can optionally use AWS Key Management Service (AWS KMS) for this encryption. If you do, the encryption is done using an AWS KMS customer master key (CMK). Encryption using AWS KMS is enabled at the log group level, by associating a CMK with a log group, either when you create the log group or after it exists.
After you associate a CMK with a log group, all newly ingested data for the log group is encrypted using the CMK. This data is stored in an encrypted format throughout its retention period. CloudWatch Logs decrypts this data whenever it is requested. CloudWatch Logs must have permissions for the CMK whenever encrypted data is requested.
To associate the CMK with an existing log group, you can use the associate-kms-key command.
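The boto3 equivalent is a single call (the log group name and key ARN are placeholders):

```python
import boto3

logs = boto3.client("logs")

# Associates the CMK with the existing log group; all newly ingested
# log events are then encrypted with this key.
logs.associate_kms_key(
    logGroupName="/app/critical-logs",  # placeholder log group
    kmsKeyId="arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ARN
)
```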
Incorrect options:
Enable the encrypt feature on the log group via the CloudWatch Logs console - CloudWatch Logs console does not have an option to enable encryption for a log group.
Use the AWS CLI describe-log-groups command and specify the KMS key ARN - You can use the describe-log-groups command to find whether a log group already has a CMK associated with it.
Use the AWS CLI create-log-group command and specify the KMS key ARN - You can use the create-log-group command to associate the CMK with a log group when you create it.
Reference:
Question 9: Skipped
You are storing your video files in an S3 bucket that is separate from the S3 bucket hosting your main static website. When accessing the video URLs directly, users can view the videos in the browser, but they can't play the videos while visiting the main website.
Which of the following actions will resolve this problem?
Amend the IAM policy
Enable CORS
(Correct)
Disable Server-Side Encryption
Change the bucket policy
Explanation
Correct option:
Enable CORS
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
To configure your bucket to allow cross-origin requests, you create a CORS configuration, which is an XML document with rules that identify the origins that you will allow to access your bucket, the operations (HTTP methods) that you will support for each origin, and other operation-specific information.
For the given use-case, you would create a <CORSRule> in <CORSConfiguration> for bucket B to allow access from the S3 website origin hosted on bucket A.
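A sketch of applying such a configuration with boto3 (bucket names and the website origin are placeholders; the same rules can also be expressed in the XML document mentioned above):

```python
import boto3

s3 = boto3.client("s3")

cors_configuration = {
    "CORSRules": [{
        # Placeholder origin: the S3 static website endpoint of bucket A.
        "AllowedOrigins": ["http://my-website-bucket.s3-website-us-east-1.amazonaws.com"],
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,
    }]
}

# Apply the configuration to the bucket that holds the videos (bucket B).
s3.put_bucket_cors(Bucket="my-video-bucket", CORSConfiguration=cors_configuration)
```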
Incorrect options:
Change the bucket policy - A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that grants permissions. With this policy, you can do things such as allow one IP address to access the video file in the S3 bucket. In this scenario, we know that's not the case because it works using the direct URL but it doesn't work when you click on a link to access the video.
Amend the IAM policy - You attach IAM policies to IAM users, groups, or roles, which are then subject to the permissions you've defined. This scenario refers to public users of a website and they need not have an IAM user account.
Disable Server-Side Encryption - Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. If the video file is encrypted at rest, there is nothing you need to do, because AWS handles the encryption and decryption transparently. Encryption is not the issue here, because you can access the video directly using a URL but not from the main website.
Reference:
Question 12: Skipped
Your development team uses the AWS SDK for Java on a web application that uploads files to several Amazon Simple Storage Service (S3) buckets using the SSE-KMS encryption mechanism. Developers are reporting that they are receiving permission errors when trying to push their objects over HTTP. Which of the following headers should they include in their request?
'x-amz-server-side-encryption': 'aws:kms'
(Correct)
'x-amz-server-side-encryption': 'AES256'
'x-amz-server-side-encryption': 'SSE-S3'
'x-amz-server-side-encryption': 'SSE-KMS'
Explanation
Correct option:
'x-amz-server-side-encryption': 'aws:kms'
Server-side encryption is the encryption of data at its destination by the application or service that receives it. AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. Amazon S3 uses AWS KMS customer master keys (CMKs) to encrypt your Amazon S3 objects. AWS KMS encrypts only the object data. Any object metadata is not encrypted.
If the request does not include the x-amz-server-side-encryption header, then the request is denied.
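A boto3 sketch (the question mentions the Java SDK, but the header is the same regardless of SDK; bucket, key, and KMS key ARN are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# boto3 sends 'x-amz-server-side-encryption: aws:kms'. SSEKMSKeyId is optional;
# if omitted, the AWS managed CMK for S3 in your account is used.
s3.put_object(
    Bucket="my-example-bucket",     # placeholder bucket
    Key="uploads/report.pdf",       # placeholder key
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder
)
```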
Incorrect options:
'x-amz-server-side-encryption': 'SSE-S3' - This is an invalid header value. The correct value is 'x-amz-server-side-encryption': 'AES256'. This refers to Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3).
'x-amz-server-side-encryption': 'SSE-KMS' - Invalid header value. SSE-KMS is an encryption option.
'x-amz-server-side-encryption': 'AES256' - This is the correct header value if you are using SSE-S3 server-side encryption.
Reference:
Question 15: Skipped
Consider the following IAM policy:
Which of the following statements is correct per the given policy?
The policy denies PutObject and GetObject access to all buckets except the EXAMPLE-BUCKET/private bucket
The policy provides PutObject and GetObject access to all objects in the EXAMPLE-BUCKET bucket as well as provides access to all s3 actions on objects starting with private in the EXAMPLE-BUCKET bucket
The policy provides PutObject and GetObject access to all objects in the EXAMPLE-BUCKET bucket except the objects that start with private
(Correct)
The policy provides PutObject and GetObject access to all buckets except the EXAMPLE-BUCKET/private bucket
Explanation
Correct option:
The policy provides PutObject and GetObject access to all objects in the EXAMPLE-BUCKET bucket except the objects that start with private
The first statement denies access to any objects that start with private in the EXAMPLE-BUCKET bucket. The second statement allows PutObject and GetObject access to all objects in the EXAMPLE-BUCKET bucket. So the net effect is to allow PutObject and GetObject access to all objects in the EXAMPLE-BUCKET bucket except the objects that start with private.
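The policy document itself is not reproduced in this text, but based on the explanation above it would look roughly like the following sketch (an illustrative reconstruction only, not the exact policy from the question):

```python
import json

# Illustrative reconstruction: an explicit Deny on objects starting with "private"
# combined with an Allow for PutObject/GetObject on the rest of the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::EXAMPLE-BUCKET/private*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::EXAMPLE-BUCKET/*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Because an explicit Deny always overrides an Allow, the net effect matches the correct option.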
Incorrect options:
The policy provides PutObject and GetObject access to all buckets except the EXAMPLE-BUCKET/private bucket
The policy provides PutObject and GetObject access to all objects in the EXAMPLE-BUCKET bucket as well as provides access to all s3 actions on objects starting with private in the EXAMPLE-BUCKET bucket
The policy denies PutObject and GetObject access to all buckets except the EXAMPLE-BUCKET/private bucket
These three options contradict the explanation provided above, so these options are incorrect.
Reference:
Question 26: Skipped
You have a web application hosted on EC2 that makes GET and PUT requests for objects stored in Amazon Simple Storage Service (S3) using the SDK for PHP. As the security team completed the final review of your application for vulnerabilities, they noticed that your application uses hardcoded IAM access key and secret access key to gain access to AWS services. They recommend you leverage a more secure setup, which should use temporary credentials if possible.
Which of the following options can be used to address the given use-case?
Hardcode the credentials in the application code
Use an IAM Instance Role
(Correct)
Use the SSM parameter store
Use environment variables
Explanation
Correct option:
Use an IAM Instance Role
An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. The AWS SDK will use the EC2 metadata service to obtain temporary credentials thanks to the IAM instance role. This is the most secure and common setup when deploying any kind of applications onto an EC2 instance.
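A minimal sketch of what the application code looks like once the instance role is attached (bucket and key are placeholders); note that no access keys appear anywhere:

```python
import boto3

# boto3 finds no explicit credentials, so it falls back to the EC2 instance
# metadata service and uses the temporary credentials from the attached IAM role.
s3 = boto3.client("s3")

obj = s3.get_object(Bucket="my-app-assets", Key="config/settings.json")  # placeholders
print(obj["Body"].read())
```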
Incorrect options:
Use environment variables - This is another option if you configure the AWS CLI on the EC2 instance. When configuring the AWS CLI you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. This may not be bad practice for a single instance, but once you start running more EC2 instances it becomes a problem, because you would have to rotate the credentials on each instance, whereas an IAM role provides temporary credentials automatically.
Hardcode the credentials in the application code - It will work for sure, but it's not a good practice from a security point of view.
Use the SSM parameter store - With parameter store you can store data such as passwords. The problem is that you need the SDK to access parameter store and without credentials, you cannot use the SDK. Use parameter store for other uses such as database connection strings or other secret codes when you have already authenticated to AWS.
Reference:
Question 28: Skipped
A development team had enabled and configured CloudTrail for all the Amazon S3 buckets used in a project. The project manager owns all the S3 buckets used in the project. However, the manager noticed that he did not receive any object-level API access logs when the data was read by another AWS account.
What could be the reason for this behavior/error?
The bucket owner also needs to be object owner to get the object access logs
(Correct)
CloudTrail needs to be configured on both the AWS accounts for receiving the access logs in cross-account access
The meta-data of the bucket is in an invalid state and needs to be corrected by the bucket owner from AWS console to fix the issue
CloudTrail always delivers object-level API access logs to the requester and not to object owner
Explanation
Correct option:
The bucket owner also needs to be object owner to get the object access logs
If the bucket owner is also the object owner, the bucket owner gets the object access logs. Otherwise, the bucket owner must be granted permissions, through the object ACL, for the same object API in order to receive the same object-level API access logs.
Incorrect options:
CloudTrail always delivers object-level API access logs to the requester and not to object owner - CloudTrail always delivers object-level API access logs to the requester. In addition, CloudTrail also delivers the same logs to the bucket owner only if the bucket owner has permissions for the same API actions on that object.
CloudTrail needs to be configured on both the AWS accounts for receiving the access logs in cross-account access
The meta-data of the bucket is in an invalid state and needs to be corrected by the bucket owner from AWS console to fix the issue
These two options are incorrect and are given only as distractors.
Reference:
Question 29: Skipped
An organization recently began using AWS CodeCommit for its source control service. A compliance security team visiting the organization was auditing the software development process and noticed developers making many git push commands within their development machines. The compliance team requires that encryption be used for this activity.
How can the organization ensure source code is encrypted in transit and at rest?
Enable KMS encryption
Repositories are automatically encrypted at rest
(Correct)
Use a git command line hook to encrypt the code client side
Use AWS Lambda as a hook to encrypt the pushed code
Explanation
Correct option:
Repositories are automatically encrypted at rest
Data in AWS CodeCommit repositories is encrypted in transit and at rest. When data is pushed into an AWS CodeCommit repository (for example, by calling git push), AWS CodeCommit encrypts the received data as it is stored in the repository.
Incorrect options:
Enable KMS encryption - You don't have to. The first time you create an AWS CodeCommit repository in a new region in your AWS account, CodeCommit creates an AWS-managed key in that same region in AWS Key Management Service (AWS KMS) that is used only by CodeCommit.
Use AWS Lambda as a hook to encrypt the pushed code - This is not needed as CodeCommit handles it for you.
Use a git command line hook to encrypt the code client-side - This is not needed as CodeCommit handles it for you.
Reference:
For more information visit https://docs.aws.amazon.com/codecommit/latest/userguide/encryption.html
Question 40: Skipped
An IT company has a web application running on Amazon EC2 instances that needs read-only access to an Amazon DynamoDB table.
As a Developer Associate, what is the best-practice solution you would recommend to accomplish this task?
Run application code with AWS account root user credentials to ensure full access to all AWS services
Create an IAM user with Administrator access and configure AWS credentials for this user on the given EC2 instance
Create a new IAM user with access keys. Attach an inline policy to the IAM user with read-only access to DynamoDB. Place the keys in the code. For security, redeploy the code whenever the keys rotate
Create an IAM role with an AmazonDynamoDBReadOnlyAccess policy and apply it to the EC2 instance profile
(Correct)
Explanation
Correct option:
Create an IAM role with an AmazonDynamoDBReadOnlyAccess policy and apply it to the EC2 instance profile
As an AWS security best practice, you should not create an IAM user and pass the user's credentials to the application or embed the credentials in the application. Instead, create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance. When an application uses these credentials in AWS, it can perform all of the operations that are allowed by the policies attached to the role.
So for the given use-case, you should create an IAM role with an AmazonDynamoDBReadOnlyAccess policy and apply it to the EC2 instance profile.
Incorrect options:
Create a new IAM user with access keys. Attach an inline policy to the IAM user with read-only access to DynamoDB. Place the keys in the code. For security, redeploy the code whenever the keys rotate
Create an IAM user with Administrator access and configure AWS credentials for this user on the given EC2 instance
Run application code with AWS account root user credentials to ensure full access to all AWS services
As mentioned in the explanation above, it is dangerous to pass an IAM user's credentials to the application or embed the credentials in the application. The security implications are even higher when you use an IAM user with admin privileges or use the AWS account root user. So all three options are incorrect.
Reference:
Question 41: Skipped
A company would like to migrate the existing application code from a GitHub repository to AWS CodeCommit.
As an AWS Certified Developer Associate, which of the following would you recommend for migrating the cloned repository to CodeCommit over HTTPS?
Use IAM Multi-Factor authentication
Use IAM user secret access key and access key ID
Use Git credentials generated from IAM
(Correct)
Use authentication offered by GitHub secure tokens
Explanation
Correct option:
Use Git credentials generated from IAM - CodeCommit repositories are Git-based and support the basic functionalities of Git such as Git credentials. AWS recommends that you use an IAM user when working with CodeCommit. You can access CodeCommit with other identity types, but the other identity types are subject to limitations.
The simplest way to set up connections to AWS CodeCommit repositories is to configure Git credentials for CodeCommit in the IAM console, and then use those credentials for HTTPS connections. You can also use these same credentials with any third-party tool or integrated development environment (IDE) that supports HTTPS authentication using a static user name and password.
An IAM user is an identity within your Amazon Web Services account that has specific custom permissions. For example, an IAM user can have permissions to create and manage Git credentials for accessing CodeCommit repositories. This is the recommended user type for working with CodeCommit. You can use an IAM user name and password to sign in to secure AWS webpages like the AWS Management Console, AWS Discussion Forums, or the AWS Support Center.
Incorrect options:
Use IAM Multi-Factor authentication - AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS Management Console, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.
Use IAM user secret access key and access key ID - Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). As a best practice, AWS suggests using temporary security credentials (IAM roles) instead of access keys.
Use authentication offered by GitHub secure tokens - Personal access tokens (PATs) are an alternative to using passwords for authentication to GitHub when using the GitHub API or the command line. This option is specific to GitHub only and hence not useful for the given use case.
References:
Question 44: Skipped
Your company leverages Amazon CloudFront to provide content via the internet to customers with low latency. Aside from latency, security is another concern and you are looking for help in enforcing end-to-end connections using HTTPS so that content is protected.
Which of the following options is available for HTTPS in AWS CloudFront?
Neither between clients and CloudFront nor between CloudFront and backend
Between clients and CloudFront only
Between clients and CloudFront as well as between CloudFront and backend
(Correct)
Between CloudFront and backend only
Explanation
Correct option:
Between clients and CloudFront as well as between CloudFront and backend
For web distributions, you can configure CloudFront to require that viewers use HTTPS to request your objects, so connections are encrypted when CloudFront communicates with viewers.
You also can configure CloudFront to use HTTPS to get objects from your origin, so connections are encrypted when CloudFront communicates with your origin.
Incorrect options:
Between clients and CloudFront only - This is incorrect as you can choose to require HTTPS between CloudFront and your origin.
Between CloudFront and backend only - This is incorrect as you can choose to require HTTPS between viewers and CloudFront.
Neither between clients and CloudFront nor between CloudFront and backend - This is incorrect as you can choose HTTPS settings both for communication between viewers and CloudFront as well as between CloudFront and your origin.
References:
Question 45: Skipped
You are planning to build a fleet of EBS-optimized EC2 instances to handle the load of your new application. Due to security compliance, your organization wants any secret strings used in the application to be encrypted to prevent exposing values as clear text.
The solution requires that decryption events be audited and API calls to be simple. How can this be achieved? (select two)
Audit using SSM Audit Trail
Audit using CloudTrail
(Correct)
Encrypt first with KMS then store in SSM Parameter store
Store the secret as SecureString in SSM Parameter Store
(Correct)
Store the secret as PlainText in SSM Parameter Store
Explanation
Correct options:
Store the secret as SecureString in SSM Parameter Store
With AWS Systems Manager Parameter Store, you can create SecureString parameters, which are parameters that have a plaintext parameter name and an encrypted parameter value. Parameter Store uses AWS KMS to encrypt and decrypt the parameter values of SecureString parameters. Also, if you are using customer-managed CMKs, you can use IAM policies and key policies to manage encrypt and decrypt permissions. To retrieve the decrypted value, you only need to make one API call.
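A sketch of storing and reading the secret with boto3 (parameter name and CMK alias are placeholders):

```python
import boto3

ssm = boto3.client("ssm")

# Store the secret as a SecureString, encrypted with a customer-managed CMK.
ssm.put_parameter(
    Name="/app/secret-string",     # placeholder parameter name
    Value="s3cr3t-value",
    Type="SecureString",
    KeyId="alias/app-secrets",     # placeholder CMK alias
    Overwrite=True,
)

# A single API call returns the decrypted value; the call is recorded by CloudTrail.
param = ssm.get_parameter(Name="/app/secret-string", WithDecryption=True)
print(param["Parameter"]["Value"])
```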
Audit using CloudTrail
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.
CloudTrail will allow you to see all API calls made to SSM and KMS.
Incorrect options:
Encrypt first with KMS then store in SSM Parameter store - This could work but will require two API calls to get the decrypted value instead of one. So this is not the right option.
Store the secret as PlainText in SSM Parameter Store - Plaintext parameters are not secure and shouldn't be used to store secrets.
Audit using SSM Audit Trail - This is a made-up option and has been added as a distractor.
Reference:
Question 53: Skipped
As part of internal regulations, you must ensure that all communications to Amazon S3 are encrypted.
For which of the following encryption mechanisms will a request get rejected if the connection is not using HTTPS?
SSE-KMS
SSE-S3
SSE-C
(Correct)
Client Side Encryption
Explanation
Correct option:
SSE-C
Server-side encryption is about protecting data at rest. Server-side encryption encrypts only the object data, not object metadata. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to supply your own encryption keys.
When you upload an object, Amazon S3 uses the encryption key you provide to apply AES-256 encryption to your data and removes the encryption key from memory. When you retrieve an object, you must provide the same encryption key as part of your request. Amazon S3 first verifies that the encryption key you provided matches and then decrypts the object before returning the object data to you.
Amazon S3 will reject any requests made over HTTP when using SSE-C. For security considerations, AWS recommends that you consider any key you send erroneously using HTTP to be compromised.
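A boto3 sketch of SSE-C (the 256-bit key is generated locally here purely for illustration; you are responsible for storing it). Because boto3 uses HTTPS endpoints by default, these requests are accepted:

```python
import os
import boto3

s3 = boto3.client("s3")  # boto3 talks to S3 over HTTPS by default

customer_key = os.urandom(32)  # your own 256-bit key; keep it safe, S3 does not store it

s3.put_object(
    Bucket="my-example-bucket",     # placeholder
    Key="private/data.bin",
    Body=b"sensitive bytes",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,    # boto3 base64-encodes the key and adds the MD5 header for you
)

# The same key must be supplied again to read the object back.
obj = s3.get_object(
    Bucket="my-example-bucket",
    Key="private/data.bin",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```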
Incorrect options:
SSE-KMS - It is not mandatory to use HTTPS.
Client-Side Encryption - Client-side encryption is the act of encrypting data before sending it to Amazon S3. It is not mandatory to use HTTPS for this.
SSE-S3 - It is not mandatory to use HTTPS. Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers.
References:
Question 54: Skipped
A company developed an app-based service for citizens to book transportation rides in the local community. The platform is running on AWS EC2 instances and uses Amazon Relational Database Service (RDS) for storing transportation data. A new feature has been requested where receipts would be emailed to customers with PDF attachments retrieved from Amazon Simple Storage Service (S3).
Which of the following options will provide EC2 instances with the right permissions to upload files to Amazon S3 and generate S3 Signed URL?
Create an IAM Role for EC2
(Correct)
Run aws configure on the EC2 instance
CloudFormation
EC2 User Data
Explanation
Correct option:
Create an IAM Role for EC2
IAM roles for EC2 were introduced so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
Amazon EC2 uses an instance profile as a container for an IAM role. When you create an IAM role using the IAM console, the console creates an instance profile automatically and gives it the same name as the role to which it corresponds.
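With the role attached to the instance profile, a sketch of the upload and pre-signed URL steps looks like this (bucket, keys, and file names are placeholders):

```python
import boto3

# Credentials come from the instance profile; nothing is hard-coded in the application.
s3 = boto3.client("s3")

s3.upload_file("receipt.pdf", "my-receipts-bucket", "receipts/order-1234.pdf")  # placeholders

# Generate a time-limited signed URL to embed in the customer's email.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-receipts-bucket", "Key": "receipts/order-1234.pdf"},
    ExpiresIn=3600,  # one hour
)
print(url)
```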
Incorrect options:
EC2 User Data - You can specify user data when you launch an instance and you would not want to hard code the AWS credentials in the user data.
Run aws configure on the EC2 instance - You have to run this command when you first configure the CLI, but afterward you should not need it to obtain credentials for other AWS services. An IAM role will receive temporary credentials for you, so you can focus on using the CLI to access other AWS services if you have the permissions.
CloudFormation - AWS CloudFormation gives developers and businesses an easy way to create a collection of related AWS and third-party resources and provision them in an orderly and predictable fashion.
Reference:
Question 58: Skipped
You are a manager for a tech company that has just hired a team of developers to work on the company's AWS infrastructure. All the developers are reporting to you that when using the AWS CLI to execute commands it fails with the following exception: You are not authorized to perform this operation. Encoded authorization failure message: 6h34GtpmGjJJUm946eDVBfzWQJk6z5GePbbGDs9Z2T8xZj9EZtEduSnTbmrR7pMqpJrVYJCew2m8YBZQf4HRWEtrpncANrZMsnzk.
Which of the following actions will help developers decode the message?
AWS Cognito Decoder
AWS IAM decode-authorization-message
Use KMS decode-authorization-message
AWS STS decode-authorization-message
(Correct)
Explanation
Correct option:
AWS STS decode-authorization-message
Use decode-authorization-message to decode additional information about the authorization status of a request from an encoded message returned in response to an AWS request. If a user is not authorized to perform an action that was requested, the request returns a Client.UnauthorizedOperation response (an HTTP 403 response). The message is encoded because the details of the authorization status can constitute privileged information that the user who requested the operation should not see. To decode an authorization status message, a user must be granted permissions via an IAM policy to request the DecodeAuthorizationMessage (sts:DecodeAuthorizationMessage) action.
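A sketch of decoding such a message with boto3 (the caller must be allowed sts:DecodeAuthorizationMessage; the encoded string below is just the truncated example from the error above, so substitute the full message you received):

```python
import json
import boto3

sts = boto3.client("sts")

encoded = "6h34GtpmGjJJUm946eDVBfzWQJk6z5GePbbGDs9Z2T8xZj9EZtEduSnTbmrR7pMqpJrVYJCew2m8YBZQf4HRWEtrpncANrZMsnzk"

response = sts.decode_authorization_message(EncodedMessage=encoded)
# DecodedMessage is a JSON string describing the denied action, the resource,
# and the policy context that caused the failure.
print(json.dumps(json.loads(response["DecodedMessage"]), indent=2))
```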
Incorrect options:
AWS IAM decode-authorization-message - The IAM service does not have this command, as it's a made-up option.
Use KMS decode-authorization-message - The KMS service does not have this command, as it's a made-up option.
AWS Cognito Decoder - The Cognito service does not have this command, as it's a made-up option.
Reference:
Question 64: Skipped
A development team is storing sensitive customer data in S3 that will require encryption at rest. The encryption keys must be rotated at least annually.
What is the easiest way to implement a solution for this requirement?
Encrypt the data before sending it to Amazon S3
Use SSE-C with automatic key rotation on an annual basis
Import a custom key into AWS KMS and automate the key rotation on an annual basis by using a Lambda function
Use AWS KMS with automatic key rotation
(Correct)
Explanation
Correct option:
Use AWS KMS with automatic key rotation - Server-side encryption is the encryption of data at its destination by the application or service that receives it. Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. You have three mutually exclusive options, depending on how you choose to manage the encryption keys: Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS), Server-Side Encryption with Customer-Provided Keys (SSE-C).
When you use server-side encryption with AWS KMS (SSE-KMS), you can use the default AWS managed CMK, or you can specify a customer managed CMK that you have already created. If you don't specify a customer managed CMK, Amazon S3 automatically creates an AWS managed CMK in your AWS account the first time that you add an object encrypted with SSE-KMS to a bucket. By default, Amazon S3 uses this CMK for SSE-KMS.
You can choose to have AWS KMS automatically rotate CMKs every year, provided that those keys were generated within AWS KMS HSMs.
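A sketch of enabling and verifying annual rotation for a customer managed CMK with boto3 (the key ID is a placeholder):

```python
import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder customer managed CMK

# AWS KMS will rotate the key material automatically every year from now on.
kms.enable_key_rotation(KeyId=key_id)

status = kms.get_key_rotation_status(KeyId=key_id)
print(status["KeyRotationEnabled"])  # True once rotation is enabled
```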
Incorrect options:
Encrypt the data before sending it to Amazon S3 - The act of encrypting data before sending it to Amazon S3 is called Client-Side encryption. You will have to handle the key generation, maintenance and rotation process. Hence, this is not the right choice here.
Import a custom key into AWS KMS and automate the key rotation on an annual basis by using a Lambda function - When you import a custom key, you are responsible for maintaining a copy of your imported keys in your key management infrastructure so that you can re-import them at any time. Also, automatic key rotation is not supported for imported keys. Using Lambda functions to rotate keys is a possible solution, but not an optimal one for the current use case.
Use SSE-C with automatic key rotation on an annual basis - With Server-Side Encryption with Customer-Provided Keys (SSE-C), you manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption, when you access your objects. The keys are not stored anywhere in Amazon S3. There is no automatic key rotation facility for this option.
Reference: