Deployment
Question 10: Skipped
A developer in your company was just promoted to Team Lead and will be in charge of code deployment on EC2 instances via AWS CodeCommit and AWS CodeDeploy. Per the new requirements, the deployment process should be able to change permissions for deployed files as well as verify the deployment success.
Which of the following actions should the new Developer take?
Define a buildspec.yml file in the codebuild/ directory
Define an appspec.yml file in the root directory
(Correct)
Define a buildspec.yml file in the root directory
Define an appspec.yml file in the codebuild/ directory
Explanation
Correct option:
Define an appspec.yml file in the root directory - An AppSpec file must be a YAML-formatted file named appspec.yml and it must be placed in the root of the directory structure of an application's source code.
The AppSpec file is used to:
Map the source files in your application revision to their destinations on the instance.
Specify custom permissions for deployed files.
Specify scripts to be run on each instance at various stages of the deployment process.
During deployment, the CodeDeploy agent looks up the name of the current event in the hooks section of the AppSpec file. If the event is not found, the CodeDeploy agent moves on to the next step. If the event is found, the CodeDeploy agent retrieves the list of scripts to execute. The scripts are run sequentially, in the order in which they appear in the file. The status of each script is logged in the CodeDeploy agent log file on the instance.
If a script runs successfully, it returns an exit code of 0 (zero). If the operating system of the instance on which the CodeDeploy agent is installed doesn't match the os value declared in the AppSpec file, the deployment fails.
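For illustration, a minimal appspec.yml sketch that sets permissions on deployed files and runs a verification script; the destination path, owner, and script name below are hypothetical:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/app
permissions:
  - object: /var/www/app
    owner: webapp
    mode: 644
    type:
      - file
hooks:
  ValidateService:
    - location: scripts/verify_deployment.sh
      timeout: 300
      runas: root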
Incorrect options:
Define a buildspec.yml file in the root directory - This is a file used by AWS CodeBuild to run a build. This is not relevant to the given use case.
Define a buildspec.yml file in the codebuild/ directory - This is a file used by AWS CodeBuild to run a build. This is not relevant to the given use case.
Define an appspec.yml file in the codebuild/ directory - This file is for AWS CodeDeploy and must be placed in the root of the directory structure of an application's source code.
Question 25: Skipped
Other than the Resources section, which of the following sections in a Serverless Application Model (SAM) Template is mandatory?
Parameters
Mappings
Globals
Transform
(Correct)
Explanation
Correct option:
Transform
The AWS Serverless Application Model (AWS SAM) is an open-source framework that you can use to build serverless applications on AWS.
A serverless application is a combination of Lambda functions, event sources, and other resources that work together to perform tasks. Note that a serverless application is more than just a Lambda function—it can include additional resources such as APIs, databases, and event source mappings.
Serverless Application Model (SAM) Templates include several major sections. Transform and Resources are the only required sections.
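As a quick illustration, a bare-bones SAM template containing only the two required sections; the function name, handler, runtime, and code path are placeholders:

Transform: AWS::Serverless-2016-10-31
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.12
      CodeUri: src/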
Incorrect options:
Parameters
Mappings
Globals
These three options contradict the details provided in the explanation above, so these options are not correct.
Question 26: Skipped
Version Control System -> CodeCommit
A company needs a version control system for their fast development lifecycle with incremental changes, version control, and support to existing Git tools.
Which AWS service will meet these requirements?
AWS CodePipeline
Amazon Versioned S3 Bucket
AWS CodeBuild
AWS CodeCommit
(Correct)
Explanation
Correct option:
AWS CodeCommit - AWS CodeCommit is a fully-managed Source Control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. AWS CodeCommit helps you collaborate on code with teammates via pull requests, branching and merging. AWS CodeCommit keeps your repositories close to your build, staging, and production environments in the AWS cloud. You can transfer incremental changes instead of the entire application. AWS CodeCommit supports all Git commands and works with your existing Git tools. You can keep using your preferred development environment plugins, continuous integration/continuous delivery systems, and graphical clients with CodeCommit.
Incorrect options:
Amazon Versioned S3 Bucket - AWS CodeCommit is designed for collaborative software development. It manages batches of changes across multiple files, offers parallel branching, and includes version differencing ("diffing"). In comparison, Amazon S3 versioning supports recovering past versions of individual files but doesn’t support tracking batched changes that span multiple files or other features needed for collaborative software development.
AWS CodePipeline - AWS CodePipeline is a fully managed "continuous delivery" service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.
AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.
Question 31:
A developer needs to automate software package deployment to both Amazon EC2 instances and virtual servers running on-premises, as part of continuous integration and delivery that the business has adopted.
Which AWS service should he use to accomplish this task?
AWS CodeBuild
AWS CodePipeline
AWS Elastic Beanstalk
AWS CodeDeploy
(Correct)
Explanation
Correct option:
Continuous integration is a DevOps software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run.
Continuous delivery is a software development practice where code changes are automatically prepared for a release to production. A pillar of modern application development, continuous delivery expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage.
AWS CodeDeploy - AWS CodeDeploy is a fully managed "deployment" service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. This is the right choice for the current use case.
Incorrect options:
AWS CodePipeline - AWS CodePipeline is a fully managed "continuous delivery" service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. Whereas CodeDeploy is a deployment service, CodePipeline is a continuous delivery service. For our current scenario, CodeDeploy is the correct choice.
AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.
AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
Question 33: Skipped
An e-commerce company manages a microservices application that receives orders from various partners through a customized API for each partner exposed via Amazon API Gateway. The orders are processed by a shared Lambda function.
How can the company notify each partner regarding the status of their respective orders in the most efficient manner, without affecting other partners' orders? Also, the solution should be scalable to accommodate new partners with minimal code changes required.
Set up a separate SNS topic for each partner and subscribe each partner to the respective SNS topic. Modify the Lambda function to publish messages with specific attributes to the partner's SNS topic and apply the appropriate filter policy to the topic subscriptions
Set up a separate Lambda function for each partner. Set up an SNS topic and subscribe each partner to the SNS topic. Modify each partner's Lambda function to publish messages with specific attributes to the SNS topic and apply the appropriate filter policy to the topic subscriptions
Set up a separate SNS topic for each partner. Modify the Lambda function to publish messages for each partner to the partner's SNS topic
Set up an SNS topic and subscribe each partner to the SNS topic. Modify the Lambda function to publish messages with specific attributes to the SNS topic and apply the appropriate filter policy to the topic subscriptions
(Correct)
Explanation
Correct option:
Set up an SNS topic and subscribe each partner to the SNS topic. Modify the Lambda function to publish messages with specific attributes to the SNS topic and apply the appropriate filter policy to the topic subscriptions
An Amazon SNS topic is a logical access point that acts as a communication channel. A topic lets you group multiple endpoints (such as AWS Lambda, Amazon SQS, HTTP/S, or an email address). For example, to broadcast the messages of a message-producer system (such as, an e-commerce website) working with multiple other services that require its messages (for example, checkout and fulfillment systems), you can create a topic for your producer system.
By default, an Amazon SNS topic subscriber receives every message that's published to the topic. To receive only a subset of the messages, a subscriber must assign a filter policy to the topic subscription. A filter policy is a JSON object containing properties that define which messages the subscriber receives. Amazon SNS supports policies that act on the message attributes or the message body, according to the filter policy scope that you set for the subscription. Filter policies for the message body assume that the message payload is a well-formed JSON object.
For the given use case, you can change the Lambda function to publish messages with specific attributes to the single SNS topic and apply the appropriate filter policy to the topic subscriptions for each of the partners. This is also easily scalable for new partners since only the filter policy needs to be set up for the new partner.
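For example, the subscription filter policy for one partner might look like the following sketch; the attribute name partner_id and its value are illustrative:

{
  "partner_id": ["partner-a"]
}

The Lambda function would then publish each order-status message with a partner_id message attribute, and only the subscription whose filter policy matches that value receives the notification.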
Incorrect options:
Set up a separate SNS topic for each partner. Modify the Lambda function to publish messages for each partner to the partner's SNS topic
Set up a separate SNS topic for each partner and subscribe each partner to the respective SNS topic. Modify the Lambda function to publish messages with specific attributes to the partner's SNS topic and apply the appropriate filter policy to the topic subscriptions
Both of these options represent an inefficient solution as there is no need to segregate each partner's updates into a separate SNS topic. A single SNS topic with distinct filter policies is sufficient.
Set up a separate Lambda function for each partner. Set up an SNS topic and subscribe each partner to the SNS topic. Modify each partner's Lambda function to publish messages with specific attributes to the SNS topic and apply the appropriate filter policy to the topic subscriptions - This is again an inefficient solution as there is no need to create a separate Lambda function for each partner as just a shared Lambda function is sufficient to process the orders and send an update to the single SNS topic with distinct filter policies.
Question 42: Skipped
A developer wants to package the code and dependencies for the application-specific Lambda functions as container images to be hosted on Amazon Elastic Container Registry (ECR).
Which of the following options are correct for the given requirement? (Select two)
Lambda supports both Windows and Linux-based container images
To deploy a container image to Lambda, the container image must implement the Lambda Runtime API
(Correct)
AWS Lambda service does not support Lambda functions that use multi-architecture container images
(Correct)
You can test the containers locally using the Lambda Runtime API
You can deploy Lambda function as a container image, with a maximum size of 15 GB
Explanation
Correct options:
To deploy a container image to Lambda, the container image must implement the Lambda Runtime API - To deploy a container image to Lambda, the container image must implement the Lambda Runtime API. The AWS open-source runtime interface clients implement the API. You can add a runtime interface client to your preferred base image to make it compatible with Lambda.
AWS Lambda service does not support Lambda functions that use multi-architecture container images - Lambda provides multi-architecture base images. However, the image you build for your function must target only one of the architectures. Lambda does not support functions that use multi-architecture container images.
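As a sketch, a Dockerfile based on an AWS-provided base image (which already includes the runtime interface client) could look like this, assuming a hypothetical app.py containing a handler function:

FROM public.ecr.aws/lambda/python:3.12
COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.handler"]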
Incorrect options:
Lambda supports both Windows and Linux-based container images - Lambda currently supports only Linux-based container images.
You can test the containers locally using the Lambda Runtime API - You can test the containers locally using the Lambda Runtime Interface Emulator.
You can deploy Lambda function as a container image, with a maximum size of 15 GB - You can deploy a Lambda function as a container image with a maximum size of 10 GB.
Question 49: Skipped
Immutable -> another ASG, new instances, rollback fast and automated based on health checks
The development team at an e-commerce company completed the last deployment for their application at a reduced capacity because of the deployment policy. The application took a performance hit because of the traffic spike due to an on-going sale.
Which of the following represents the BEST deployment option for the upcoming application version such that it maintains at least the FULL capacity of the application and MINIMAL impact of failed deployment?
Deploy the new application version using 'Rolling with additional batch' deployment policy
Deploy the new application version using 'Rolling' deployment policy
Deploy the new application version using 'All at once' deployment policy
Deploy the new application version using 'Immutable' deployment policy
(Correct)
Explanation
Correct option:
Deploy the new application version using 'Immutable' deployment policy
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
The 'Immutable' deployment policy ensures that your new application version is always deployed to new instances, instead of updating existing instances. It also has the additional advantage of a quick and safe rollback in case the deployment fails. In an immutable update, a second Auto Scaling group is launched in your environment and the new version serves traffic alongside the old version until the new instances pass health checks. In case of deployment failure, the new instances are terminated, so the impact is minimal.
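If the environment is managed through configuration files, the deployment policy can be set in an .ebextensions config file, roughly as in this sketch (the file name is illustrative):

# .ebextensions/deploy-policy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable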
Incorrect options:
Deploy the new application version using 'All at once' deployment policy - Although 'All at once' is the quickest deployment method, the application may become unavailable to users (or have reduced availability) for a short time. Also, in case of deployment failure, the application experiences downtime, so this option is not correct.
Deploy the new application version using 'Rolling' deployment policy - This policy avoids downtime and minimizes reduced availability, at the cost of a longer deployment time. However, in case of deployment failure, the rollback requires a manual redeploy, so it is not as quick as an Immutable deployment.
Deploy the new application version using 'Rolling with additional batch' deployment policy - This policy avoids any reduced availability, at the cost of an even longer deployment time compared to the Rolling method. It is suitable if you must maintain full capacity throughout the deployment. However, in case of deployment failure, the rollback requires a manual redeploy, so it is not as quick as an Immutable deployment.
Question 54: Skipped
Your company has embraced cloud-native microservices architectures. New applications must be dockerized and stored in a registry service offered by AWS. The architecture should support dynamic port mapping and support multiple tasks from a single service on the same container instance. All services should run on the same EC2 instance.
Which of the following options offers the best-fit solution for the given use-case?
Application Load Balancer + ECS
(Correct)
Application Load Balancer + Beanstalk
Classic Load Balancer + Beanstalk
Classic Load Balancer + ECS
Explanation
Correct option:
Application Load Balancer + ECS
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. You can host your cluster on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks using the Fargate launch type. For more control over your infrastructure, you can host your tasks on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances that you manage by using the EC2 launch type.
An Application load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. A listener checks for connection requests from clients, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to its registered targets. Each rule consists of a priority, one or more actions, and one or more conditions.
When you deploy your services using Amazon Elastic Container Service (Amazon ECS), you can use dynamic port mapping to support multiple tasks from a single service on the same container instance. Amazon ECS manages updates to your services by automatically registering and deregistering containers with your target group using the instance ID and port for each container.
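With the EC2 launch type and bridge networking, dynamic port mapping is enabled by setting the host port to 0 (or omitting it) in the task definition, roughly as in this sketch; the family, container name, image URI, and memory value are placeholders:

{
  "family": "web-app",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "<account-id>.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
      "memory": 256,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}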
Incorrect options:
Classic Load Balancer + Beanstalk - The Classic Load Balancer doesn't allow you to run multiple copies of a task on the same instance. Instead, with the Classic Load Balancer, you must statically map port numbers on a container instance. So this option is ruled out.
Application Load Balancer + Beanstalk - You can create docker environments that support multiple containers per Amazon EC2 instance with a multi-container Docker platform for Elastic Beanstalk. However, ECS gives you finer control.
Classic Load Balancer + ECS - The Classic Load Balancer doesn't allow you to run multiple copies of a task in the same instance. Instead, with the Classic Load Balancer, you must statically map port numbers on a container instance. So this option is ruled out.
Question 55: Skipped
You are a developer working on a web application written in Java and would like to use AWS Elastic Beanstalk for deployment because it would handle deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring. In the past, you connected to your provisioned instances through SSH to issue configuration commands. Now, you would like a configuration mechanism that automatically applies settings for you.
Which of the following options would help do this?
Use an AWS Lambda hook
Deploy a CloudFormation wrapper
Include config files in .ebextensions/ at the root of your source code
(Correct)
Use SSM parameter store as an input to your Elastic Beanstalk Configurations
Explanation
Correct option:
Include config files in .ebextensions/ at the root of your source code
The option_settings section of a configuration file defines values for configuration options. Configuration options let you configure your Elastic Beanstalk environment, the AWS resources in it, and the software that runs your application. Configuration files are only one of several ways to set configuration options.
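A sketch of such a configuration file, placed in .ebextensions/ at the root of the application source bundle; the file name, environment variable, and package are illustrative:

# .ebextensions/setup.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ENV: production
commands:
  01_install_jq:
    command: yum install -y jq

Elastic Beanstalk applies these settings and commands automatically on each instance during deployment, replacing the manual SSH-based configuration.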
Incorrect options:
Deploy a CloudFormation wrapper - This is a made-up option. This has been added as a distractor.
Use SSM parameter store as an input to your Elastic Beanstalk Configurations - SSM Parameter Store is not supported as a direct input for Elastic Beanstalk configuration, so this option is incorrect.
Use an AWS Lambda hook - Lambda functions are not the best-fit to trigger these configuration changes as it would involve significant development effort.
Test 3 Question 10: Skipped
A company follows collaborative development practices. The engineering manager wants to isolate the development effort by setting up simulations of API components owned by various development teams.
Which API integration type is best suited for this requirement?
HTTP
MOCK
(Correct)
AWS_PROXY
HTTP_PROXY
Explanation
Correct option:
MOCK
This type of integration lets API Gateway return a response without sending the request further to the backend. This is useful for API testing because it can be used to test the integration setup without incurring charges for using the backend and to enable collaborative development of an API.
In collaborative development, a team can isolate their development effort by setting up simulations of API components owned by other teams by using the MOCK integrations. It is also used to return CORS-related headers to ensure that the API method permits CORS access. In fact, the API Gateway console integrates the OPTIONS method to support CORS with a mock integration.
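In practice, a MOCK integration is usually configured with an integration request mapping template that supplies only a status code, which API Gateway then uses to select an integration response; a minimal sketch of such a template:

{
  "statusCode": 200
}

The integration response's mapping template can then return whatever static payload the team needs for testing, without any backend being invoked.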
Incorrect options:
AWS_PROXY - This type of integration lets an API method be integrated with the Lambda function invocation action with a flexible, versatile, and streamlined integration setup. This integration relies on direct interactions between the client and the integrated Lambda function.
HTTP_PROXY - The HTTP proxy integration allows a client to access the backend HTTP endpoints with a streamlined integration setup on a single API method. You do not set the integration request or the integration response. API Gateway passes the incoming request from the client to the HTTP endpoint and passes the outgoing response from the HTTP endpoint to the client.
HTTP - This type of integration lets an API expose HTTP endpoints in the backend. With the HTTP integration, you must configure both the integration request and integration response. You must set up necessary data mappings from the method request to the integration request, and from the integration response to the method response.
Question 15: Skipped
You have a workflow process that pulls code from AWS CodeCommit and deploys to EC2 instances associated with tag group ProdBuilders. You would like to configure the instances to archive no more than two application revisions to conserve disk space.
Which of the following will allow you to implement this?
AWS CloudWatch Log Agent
Integrate with AWS CodePipeline
Have a load balancer in front of your instances
CodeDeploy Agent
(Correct)
Explanation
Correct option:
"CodeDeploy Agent"
The CodeDeploy agent is a software package that, when installed and configured on an instance, makes it possible for that instance to be used in CodeDeploy deployments. The CodeDeploy agent archives revisions and log files on instances. The CodeDeploy agent cleans up these artifacts to conserve disk space. You can use the :max_revisions: option in the agent configuration file to specify the number of application revisions to archive by entering any positive integer. CodeDeploy also archives the log files for those revisions. All others are deleted, except for the log file of the last successful deployment.
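For example, keeping only the two most recent revisions could be configured along these lines; the path shown is the typical location on Amazon Linux:

# /etc/codedeploy-agent/conf/codedeployagent.yml
:max_revisions: 2

The agent typically needs to be restarted for the change to take effect.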
Incorrect options:
AWS CloudWatch Log Agent - The CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances. This is an incorrect choice for the current use case.
Integrate with AWS CodePipeline - AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodeCommit and CodePipeline are already integrated services. CodePipeline cannot help in version control and management of archives on an EC2 instance.
Have a load balancer in front of your instances - Load Balancer helps balance incoming traffic across different EC2 instances. It is an incorrect choice for the current use case.
Question 17: Skipped
Your web application architecture consists of multiple Amazon EC2 instances running behind an Elastic Load Balancer with an Auto Scaling group having the desired capacity of 5 EC2 instances. You would like to integrate AWS CodeDeploy for automating application deployment. The deployment should re-route traffic from your application's original environment to the new environment.
Which of the following options will meet your deployment criteria?
Opt for Blue/Green deployment
(Correct)
Opt for Immutable deployment
Opt for Rolling deployment
Opt for In-place deployment
Explanation
Correct option:
Opt for Blue/Green deployment - A Blue/Green deployment is used to update your applications while minimizing interruptions caused by the changes of a new application version. CodeDeploy provisions your new application version alongside the old version before rerouting your production traffic. The behavior of your deployment depends on which compute platform you use:
AWS Lambda: Traffic is shifted from one version of a Lambda function to a new version of the same Lambda function.
Amazon ECS: Traffic is shifted from a task set in your Amazon ECS service to an updated, replacement task set in the same Amazon ECS service.
EC2/On-Premises: Traffic is shifted from one set of instances in the original environment to a replacement set of instances.
Incorrect options:
Opt for Rolling deployment - This deployment type is present for AWS Elastic Beanstalk and not for EC2 instances directly.
Opt for Immutable deployment - This deployment type is present for AWS Elastic Beanstalk and not for EC2 instances directly.
Opt for In-place deployment - Under this deployment type, the application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. You can use a load balancer so that each instance is deregistered during its deployment and then restored to service after the deployment is complete.
Question 37: Skipped
AWS CloudFormation helps model and provision all the cloud infrastructure resources needed for your business.
Which of the following services rely on CloudFormation to provision resources (Select two)?
AWS Elastic Beanstalk
(Correct)
AWS Serverless Application Model (AWS SAM)
(Correct)
AWS Autoscaling
AWS Lambda
AWS CodeBuild
Explanation
Correct option:
AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk uses AWS CloudFormation to launch the resources in your environment and propagate configuration changes.
AWS Serverless Application Model (AWS SAM) - You use the AWS SAM specification to define your serverless application. AWS SAM templates are an extension of AWS CloudFormation templates, with some additional components that make them easier to work with. AWS SAM needs CloudFormation templates as a basis for its configuration.
Incorrect options:
AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Hence, Lambda does not need CloudFormation to run its services.
AWS Autoscaling - AWS Auto Scaling monitors your applications and automatically adjusts the capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. Auto Scaling can be provisioned through CloudFormation, but CloudFormation is not a mandatory requirement for it.
AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild does not rely on CloudFormation to provision resources; AWS CodePipeline can use AWS CloudFormation as a deployment action, but CloudFormation is not a mandatory dependency here.
Question 43: Skipped
A developer is designing an AWS CloudFormation template for deploying Amazon EC2 instances in numerous AWS accounts. The developer needs to select EC2 instances from a list of pre-approved instance types.
What measures could the developer take to integrate the list of authorized instance types into the CloudFormation template?
Configure a parameter with the list of EC2 instance types as AllowedValues in the CloudFormation template
(Correct)
Configure separate parameters for each EC2 instance type in the CloudFormation template
Configure a mapping having a list of EC2 instance types as parameters in the CloudFormation template
Configure a pseudo parameter with the list of EC2 instance types as AllowedValues in the CloudFormation template
Explanation
Correct option:
Configure a parameter with the list of EC2 instance types as AllowedValues in the CloudFormation template
You can use the Parameters section to customize your templates. Parameters enable you to input custom values to your template each time you create or update a stack.
AllowedValues refers to an array containing the list of values allowed for the parameter. When applied to a parameter of type String, the parameter value must be one of the allowed values. When applied to a parameter of type CommaDelimitedList, each value in the list must be one of the specified allowed values.
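A minimal sketch of how this could look in the template; the instance types and AMI ID are placeholders:

Parameters:
  InstanceType:
    Type: String
    Default: t3.micro
    AllowedValues:
      - t3.micro
      - t3.small
      - m5.large
    Description: Pre-approved EC2 instance types only
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-0123456789abcdef0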
Incorrect options:
Configure separate parameters for each EC2 instance type in the CloudFormation template - Creating separate parameters for each instance type is semantically incorrect as the underlying value will point to the same resource but have multiple inputs.
Configure a mapping having a list of EC2 instance types as parameters in the CloudFormation template - The Mappings section matches a key to a corresponding set of named values. For example, if you want to set values based on a region, you can create a mapping that uses the region name as a key and contains the values you want to specify for each specific region. You use the Fn::FindInMap intrinsic function to retrieve values in a map. A mapping is not a list, rather, it consists of key value pairs. You can't include parameters, pseudo parameters, or intrinsic functions in the Mappings section. So, this option is incorrect.
Configure a pseudo parameter with the list of EC2 instance types as AllowedValues in the CloudFormation template - Pseudo parameters are parameters that are predefined by AWS CloudFormation. You don't declare them in your template. So, this option is incorrect.
Question 45: Skipped
As part of employee skills upgrade, the developers of your team have been delegated few responsibilities of DevOps engineers. Developers now have full control over modeling the entire software delivery process, from coding to deployment. As the team lead, you are now responsible for any manual approvals needed in the process.
Which of the following approaches supports the given workflow?
Create one CodePipeline for your entire flow and add a manual approval step
(Correct)
Use CodePipeline with Amazon Virtual Private Cloud
Create multiple CodePipelines for each environment and link them using AWS Lambda
Create deeply integrated AWS CodePipelines for each environment
Explanation
Correct option:
Create one CodePipeline for your entire flow and add a manual approval step - You can add an approval action to a stage in a CodePipeline pipeline at the point where you want the pipeline to stop so someone can manually approve or reject the action. Approval actions can't be added to Source stages. Source stages can contain only source actions.
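In the pipeline definition, a manual approval step is simply an action whose actionTypeId uses the Approval category, roughly as in this fragment; the stage and action names are illustrative:

{
  "name": "ApproveRelease",
  "actions": [
    {
      "name": "ManualApproval",
      "actionTypeId": {
        "category": "Approval",
        "owner": "AWS",
        "provider": "Manual",
        "version": "1"
      },
      "runOrder": 1
    }
  ]
}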
Incorrect options:
Create multiple CodePipelines for each environment and link them using AWS Lambda - You can create Lambda functions and add them as actions in your pipelines but the approval step is confined to a workflow process and cannot be outsourced to any other AWS service.
Create deeply integrated AWS CodePipelines for each environment - You can use an AWS CloudFormation template in conjunction with AWS CodePipeline and AWS CodeCommit to create a test environment that deploys to your production environment when the changes to your application are approved, helping you automate a continuous delivery workflow. This is a possible answer but not an optimized way of achieving what the client needs.
Use CodePipeline with Amazon Virtual Private Cloud - AWS CodePipeline supports Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink. This means you can connect directly to CodePipeline through a private endpoint in your VPC, keeping all traffic inside your VPC and the AWS network. This is a robust security feature but is of no value for our current use-case.
Question 51: Skipped
A development team has deployed a REST API in Amazon API Gateway to two different stages - a test stage and a prod stage. The test stage is used as a test build and the prod stage as a stable build. After the updates have passed the test, the team wishes to promote the test stage to the prod stage.
Which of the following represents the optimal solution for this use-case?
Delete the existing prod stage. Create a new stage with the same name (prod) and deploy the tested version on this stage
API performance is optimized in a different way for prod environments. Hence, promoting test to prod is not correct. The promotion should be done by redeploying the API to the prod stage
Update stage variable value from the stage name of test to that of prod
(Correct)
Deploy the API without choosing a stage. This way, the working deployment will be updated in all stages
Explanation
Correct option:
Update stage variable value from the stage name of test to that of prod
After creating your API, you must deploy it to make it callable by your users. To deploy an API, you create an API deployment and associate it with a stage. A stage is a logical reference to a lifecycle state of your API (for example, dev, prod, beta, v2). API stages are identified by the API ID and stage name. They're included in the URL that you use to invoke the API. Each stage is a named reference to a deployment of the API and is made available for client applications to call.
Stages enable robust version control of your API. In our current use-case, after the updates pass the test, you can promote the test stage to the prod stage. The promotion can be done by redeploying the API to the prod stage or updating a stage variable value from the stage name of test to that of prod.
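For example, if the team chooses the redeploy approach, the tested API can be deployed to the prod stage with a single CLI call; the API ID below is a placeholder:

aws apigateway create-deployment --rest-api-id a1b2c3d4e5 --stage-name prod --description "Promote tested build to prod"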
Incorrect options:
Deploy the API without choosing a stage. This way, the working deployment will be updated in all stages - An API can only be deployed to a stage. Hence, it is not possible to deploy an API without choosing a stage.
Delete the existing prod stage. Create a new stage with the same name (prod) and deploy the tested version on this stage - This is possible, but not an optimal way of deploying a change. Also, as prod refers to the real production system, this option will result in downtime.
API performance is optimized in a different way for prod environments. Hence, promoting test to prod is not correct. The promotion should be done by redeploying the API to the prod stage - For each stage, you can optimize API performance by adjusting the default account-level request throttling limits and enabling API caching. And these settings can be changed/updated at any time.
Question 65: Skipped
An organization is moving its on-premises resources to the cloud. Source code will be moved to AWS CodeCommit, and AWS CodeBuild will be used for compiling the source code using Apache Maven as a build tool. The organization wants the build environment to allow for scaling and running builds in parallel.
Which of the following options should the organization choose for their requirement?
Run CodeBuild in an Auto Scaling group
Choose a high-performance instance type for your CodeBuild instances
CodeBuild scales automatically, the organization does not have to do anything for scaling or for parallel builds
(Correct)
Enable CodeBuild Auto Scaling
Explanation
Correct option:
CodeBuild scales automatically, the organization does not have to do anything for scaling or for parallel builds - AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests.
Incorrect options:
Choose a high-performance instance type for your CodeBuild instances - For the current requirement, this will not make any difference.
Run CodeBuild in an Auto Scaling group - AWS CodeBuild is a managed service that scales automatically; it does not need an Auto Scaling group to scale up.
Enable CodeBuild Auto Scaling - This has been added as a distractor. CodeBuild scales automatically to meet peak build requests.
Question 4: Skipped
Your AWS CodeDeploy deployment to T2 instances succeeds. The new application revision makes API calls to Amazon S3; however, the application is not working as expected due to authorization exceptions, and you have been assigned to troubleshoot the issue.
Which of the following should you do?
Fix the IAM permissions for the EC2 instance role
(Correct)
Make the S3 bucket public
Enable CodeDeploy Proxy
Fix the IAM permissions for the CodeDeploy service role
Explanation
Correct option:
Fix the IAM permissions for the EC2 instance role
You should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you don't have to distribute long-term credentials (such as a user name and password or access keys) to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources. In this case, make sure your role has access to the S3 bucket.
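As a sketch, the instance role could carry a policy along these lines; the bucket name and the exact set of actions depend on what the application actually does:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-app-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-app-bucket/*"
    }
  ]
}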
Incorrect options:
Fix the IAM permissions for the CodeDeploy service role - The fact that CodeDeploy deployed the application to EC2 instances tells us that there was no issue between those two. The actual issue is between the EC2 instances and S3.
Make the S3 bucket public - This is not a good practice, you should strive to provide least privilege access. You may have files in here that should not be allowed public access and you are opening the door to security breaches.
Enable CodeDeploy Proxy - This is not correct as we don't need to look into CodeDeploy settings but rather between EC2 and S3 permissions.
Question 14: Skipped
A development team is considering Amazon ElastiCache for Redis as its in-memory caching solution for its relational database.
Which of the following options are correct while configuring ElastiCache? (Select two)
While using Redis with cluster mode enabled, asynchronous replication mechanisms are used to keep the read replicas synchronized with the primary. If cluster mode is disabled, the replication mechanism is done synchronously
You can scale write capacity for Redis by adding replica nodes
If you have no replicas and a node fails, you experience no loss of data when using Redis with cluster mode enabled
All the nodes in a Redis cluster must reside in the same region
(Correct)
While using Redis with cluster mode enabled, you cannot manually promote any of the replica nodes to primary
(Correct)
Explanation
Correct options:
All the nodes in a Redis cluster must reside in the same region
All the nodes in a Redis cluster (cluster mode enabled or cluster mode disabled) must reside in the same region.
While using Redis with cluster mode enabled, you cannot manually promote any of the replica nodes to primary
While using Redis with cluster mode enabled, there are some limitations:
You cannot manually promote any of the replica nodes to primary.
Multi-AZ is required.
You can only change the structure of a cluster, the node type, and the number of nodes by restoring from a backup.
Incorrect options:
While using Redis with cluster mode enabled, asynchronous replication mechanisms are used to keep the read replicas synchronized with the primary. If cluster mode is disabled, the replication mechanism is done synchronously - When you add a read replica to a cluster, all of the data from the primary is copied to the new node. From that point on, whenever data is written to the primary, the changes are asynchronously propagated to all the read replicas, for both the Redis offerings (cluster mode enabled or cluster mode disabled).
If you have no replicas and a node fails, you experience no loss of data when using Redis with cluster mode enabled - If you have no replicas and a node fails, you experience loss of all data in that node's shard, when using Redis with cluster mode enabled. If you have no replicas and the node fails, you experience total data loss in Redis with cluster mode disabled.
You can scale write capacity for Redis by adding replica nodes - This increases only the read capacity of the Redis cluster, write capacity is not enhanced by read replicas.
Question 18: Skipped
You have been hired at a company that needs an experienced developer to help with a continuous integration/continuous delivery (CI/CD) workflow on AWS. You configure the company’s workflow to run an AWS CodePipeline pipeline whenever the application’s source code changes in a repository hosted in AWS CodeCommit, and to compile the source code with AWS CodeBuild. You are configuring ProjectArtifacts in your build stage.
Which of the following should you do?
Give AWS CodeCommit permissions to upload the build output to your Amazon S3 bucket
Give AWS CodeBuild permissions to upload the build output to your Amazon S3 bucket
(Correct)
Contact AWS Support to allow AWS CodePipeline to manage build outputs
Configure AWS CodeBuild to store output artifacts on EC2 servers
Explanation
Correct option:
Give AWS CodeBuild permissions to upload the build output to your Amazon S3 bucket
If you choose ProjectArtifacts and your value type is S3, the build project stores build output in Amazon Simple Storage Service (Amazon S3). For that, you need to give AWS CodeBuild permissions to upload the build output to your S3 bucket.
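In the build project definition, this corresponds to an artifacts block roughly like the following sketch; the bucket name is a placeholder:

"artifacts": {
  "type": "S3",
  "location": "my-build-output-bucket",
  "packaging": "ZIP"
}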
Incorrect options:
Configure AWS CodeBuild to store output artifacts on EC2 servers - EC2 servers are not a valid output location, so this option is ruled out.
Give AWS CodeCommit permissions to upload the build output to your Amazon S3 bucket - AWS CodeCommit is the repository that holds source code and has no control over compiling the source code, so this option is incorrect.
Contact AWS Support to allow AWS CodePipeline to manage build outputs - You can set AWS CodePipeline to manage its build output locations instead of AWS CodeBuild. There is no need to contact AWS Support.
Question 20: Skipped
An e-commerce company has implemented AWS CodeDeploy as part of its AWS cloud CI/CD strategy. The company has configured automatic rollbacks while deploying a new version of its flagship application to Amazon EC2.
What occurs if the deployment of the new version fails?
CodeDeploy switches the Route 53 alias records back to the known good green deployment and terminates the failed blue deployment
The last known working deployment is automatically restored using the snapshot stored in Amazon S3
A new deployment of the last known working version of the application is deployed with a new deployment ID
(Correct)
AWS CodePipeline promotes the most recent working deployment with a SUCCEEDED status to production
Explanation
Correct option:
A new deployment of the last known working version of the application is deployed with a new deployment ID
AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications.
CodeDeploy rolls back deployments by redeploying a previously deployed revision of an application as a new deployment. These rolled-back deployments are technically new deployments, with new deployment IDs, rather than restored versions of a previous deployment.
To roll back an application to a previous revision, you just need to deploy that revision. AWS CodeDeploy keeps track of the files that were copied for the current revision and removes them before starting a new deployment, so there is no difference between redeploy and rollback. However, you need to make sure that the previous revisions are available for rollback.
Incorrect options:
The last known working deployment is automatically restored using the snapshot stored in Amazon S3 - CodeDeploy deployment does not have a snapshot stored on S3, so this option is incorrect.
AWS CodePipeline promotes the most recent working deployment with a SUCCEEDED status to production - The use-case does not talk about using CodePipeline, so this option just acts as a distractor.
CodeDeploy switches the Route 53 alias records back to the known good green deployment and terminates the failed blue deployment - The use-case does not talk about the blue/green deployment, so this option has just been added as a distractor.
Question 25: Skipped
Your company is in the process of building a DevOps culture and is moving all of its on-premises resources to the cloud using serverless architectures and automated deployments. You have created a CloudFormation template in YAML that uses an AWS Lambda function to pull HTML files from GitHub and place them into an Amazon Simple Storage Service (S3) bucket that you specify.
Which of the following AWS CLI commands can you use to upload AWS Lambda functions and AWS CloudFormation templates to AWS?
cloudformation package and cloudformation upload
cloudformation zip and cloudformation deploy
cloudformation package and cloudformation deploy
(Correct)
cloudformation zip and cloudformation upload
Explanation
Correct option:
cloudformation package and cloudformation deploy
AWS CloudFormation gives developers and businesses an easy way to create a collection of related AWS and third-party resources and provision them in an orderly and predictable fashion.
The cloudformation package command packages the local artifacts (local paths) that your AWS CloudFormation template references. The command uploads local artifacts, such as the source code for your AWS Lambda function.
The cloudformation deploy command deploys the specified AWS CloudFormation template by creating and then executing a change set.
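A typical invocation looks roughly like this; the template file names, bucket, and stack name are placeholders:

aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket my-artifact-bucket \
    --output-template-file packaged.yaml

aws cloudformation deploy \
    --template-file packaged.yaml \
    --stack-name my-serverless-stack \
    --capabilities CAPABILITY_IAM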
Incorrect options:
cloudformation package and cloudformation upload - The cloudformation upload command does not exist.
cloudformation zip and cloudformation upload - Neither command exists; this is a made-up option.
cloudformation zip and cloudformation deploy - The cloudformation zip command does not exist; this is a made-up option.
Question 31: Skipped
You have a Java-based application running on EC2 instances loaded with AWS CodeDeploy agents. You are considering different options for deployment: one offers the flexibility of incrementally deploying your new application version, replacing the existing version on the EC2 instances; the other is a strategy in which an Auto Scaling group is used to perform the deployment.
Which of the following options will allow you to deploy in this manner? (Select two)
Pilot Light Deployment
Cattle Deployment
Blue/green Deployment
(Correct)
Warm Standby Deployment
In-place Deployment
(Correct)
Explanation
Correct options:
In-place Deployment
The application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. You can use a load balancer so that each instance is deregistered during its deployment and then restored to service after the deployment is complete.
Blue/green Deployment
With a blue/green deployment, you provision a new set of instances on which CodeDeploy installs the latest version of your application. CodeDeploy then re-routes load balancer traffic from an existing set of instances running the previous version of your application to the new set of instances running the latest version. After traffic is re-routed to the new instances, the existing instances can be terminated.
Incorrect options:
Cattle Deployment - This is not a valid CodeDeploy deployment option; it has been added as a distractor.
Warm Standby Deployment - This is not a valid CodeDeploy deployment option. The term "Warm Standby" is used to describe a Disaster Recovery scenario in which a scaled-down version of a fully functional environment is always running in the cloud.
Pilot Light Deployment - This is not a valid CodeDeploy deployment option. "Pilot Light" is a Disaster Recovery approach where you simply replicate part of your IT structure for a limited set of core services so that the AWS cloud environment seamlessly takes over in the event of a disaster.
Question 32: Skipped
The development team at a company is looking at building an AWS CloudFormation template that self-populates the AWS Region variable while deploying the CloudFormation template.
What is the MOST operationally efficient way to determine the Region in which the template is being deployed?
Use the AWS::Region pseudo parameter
(Correct)
Create an AWS Lambda-backed custom resource for Region and let the desired value be populated at the time of deployment by the Lambda
Create a CloudFormation parameter for Region and let the desired value be populated at the time of deployment
Set up a mapping containing the key and the named values for all AWS Regions and then have the CloudFormation template auto-select the desired value
Explanation
Correct option:
Use the AWS::Region pseudo parameter
Pseudo parameters are parameters that are predefined by AWS CloudFormation. You don't declare them in your template. Use them the same way as you would a parameter, as the argument for the Ref function.
You can access pseudo parameters in a CloudFormation template the same way you would use a regular parameter, for example (the output name below is illustrative):
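Outputs:
  DeployedRegion:
    Description: Region in which this stack is being created
    Value: !Ref "AWS::Region"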
The AWS::Region pseudo parameter returns a string representing the Region in which the encompassing resource is being created, such as us-west-2.
Incorrect options:
Create a CloudFormation parameter for Region and let the desired value be populated at the time of deployment - Although it is certainly possible to use a CloudFormation parameter to populate the desired value of the Region at the time of deployment, however, this is not operationally efficient, as you can directly use the AWS::Region pseudo parameter for this.
Set up a mapping containing the key and the named values for all AWS Regions and then have the CloudFormation template auto-select the desired value - The Mappings section matches a key to a corresponding set of named values. For example, if you want to set values based on a Region, you can create a mapping that uses the Region name as a key and contains the values you want to specify for each specific Region. You use the Fn::FindInMap intrinsic function to retrieve values in a map. This option is incorrect as the CloudFormation template cannot auto-select the desired value of the Region from a mapping.
Create an AWS Lambda-backed custom resource for Region and let the desired value be populated at the time of deployment by the Lambda - Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. When you associate a Lambda function with a custom resource, the function is invoked whenever the custom resource is created, updated, or deleted. This option is a distractor, as Region is not a custom resource that needs to be provisioned.
Question 33: Skipped
An IT company is using AWS CloudFormation to manage its IT infrastructure. It has created a template to provision a stack with a VPC and a subnet. The output value of this subnet has to be used in another stack.
As a Developer Associate, which of the following options would you suggest to provide this information to another stack?
Use Fn::ImportValue
Use 'Expose' field in the Output section of the stack's template
Use 'Export' field in the Output section of the stack's template
(Correct)
Use Fn::Transform
Explanation
Correct option:
Use 'Export' field in the Output section of the stack's template
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values.
To export a stack's output value, use the Export field in the Output section of the stack's template. To import those values, use the Fn::ImportValue function in the template for the other stacks.
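As a sketch, the exporting and importing templates could look like this; the subnet logical ID, export name, AMI ID, and instance type are illustrative:

# In the network stack's template
Outputs:
  PublicSubnetId:
    Value: !Ref PublicSubnet
    Export:
      Name: network-public-subnet-id

# In the consuming stack's template
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      SubnetId: !ImportValue network-public-subnet-id
      ImageId: ami-0123456789abcdef0
      InstanceType: t3.micro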
Incorrect options:
Use 'Expose' field in the Output section of the stack's template - 'Expose' is a made-up option, and only given as a distractor.
Use Fn::ImportValue - The Fn::ImportValue function is used in the consuming stack's template to import values that another stack has exported; on its own it does not expose the output value, so it is not the right choice for providing the information to another stack.
Use Fn::Transform - The intrinsic function Fn::Transform specifies a macro to perform custom processing on part of a stack template. Macros enable you to perform custom processing on templates, from simple actions like find-and-replace operations to extensive transformations of entire templates. This function is not useful for the current scenario.
Question 34: Skipped
A data analytics company with its IT infrastructure on the AWS Cloud wants to build and deploy its flagship application as soon as there are any changes to the source code.
As a Developer Associate, which of the following options would you suggest to trigger the deployment? (Select two)
Keep the source code in an Amazon S3 bucket and start AWS CodePipeline whenever a file in the S3 bucket is updated
(Correct)
Keep the source code in an Amazon EBS volume and start AWS CodePipeline whenever there are updates to the source code
Keep the source code in Amazon EFS and start AWS CodePipeline whenever a file is updated
Keep the source code in an Amazon S3 bucket and set up AWS CodePipeline to recur at an interval of every 15 minutes
Keep the source code in an AWS CodeCommit repository and start AWS CodePipeline whenever a change is pushed to the CodeCommit repository
(Correct)
Explanation
Correct option:
Keep the source code in an AWS CodeCommit repository and start AWS CodePipeline whenever a change is pushed to the CodeCommit repository
Keep the source code in an Amazon S3 bucket and start AWS CodePipeline whenever a file in the S3 bucket is updated
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.
Using change detection methods that you specify, you can make your pipeline start when a change is made to a repository. You can also make your pipeline start on a schedule.
When you use the console to create a pipeline that has a CodeCommit source repository or S3 source bucket, CodePipeline creates an Amazon CloudWatch Events rule that starts your pipeline when the source changes. This is the recommended change detection method.
If you use the AWS CLI to create the pipeline, the change detection method defaults to starting the pipeline by periodically checking the source (CodeCommit, Amazon S3, and GitHub source providers only). AWS recommends that you disable periodic checks and create the rule manually.
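If you do create the rule yourself, it can be expressed in CloudFormation along these lines (the pipeline name, branch, and IAM role are assumptions made for this sketch):

    StartPipelineOnCommit:
      Type: AWS::Events::Rule
      Properties:
        # Fire whenever a change is pushed to the 'main' branch of the repository
        EventPattern:
          source:
            - aws.codecommit
          detail-type:
            - CodeCommit Repository State Change
          detail:
            referenceType:
              - branch
            referenceName:
              - main
        Targets:
          - Id: start-my-pipeline
            Arn: !Sub "arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:my-pipeline"
            RoleArn: !GetAtt EventsInvokePipelineRole.Arn  # role that allows codepipeline:StartPipelineExecution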
Incorrect options:
Keep the source code in Amazon EFS and start AWS CodePipeline whenever a file is updated
Keep the source code in an Amazon EBS volume and start AWS CodePipeline whenever there are updates to the source code
Neither Amazon EFS nor Amazon EBS is supported as a source provider for CodePipeline, so the service cannot detect changes to source code stored there; hence these two options are incorrect.
Keep the source code in an Amazon S3 bucket and set up AWS CodePipeline to recur at an interval of every 15 minutes - As mentioned in the explanation above, you could start the pipeline by periodically polling the S3 bucket, but this change detection method is less efficient than the recommended event-based approach, so it is not the best choice.
Reference:
Question 38: Skipped
What is the run order of the hooks for in-place deployments using CodeDeploy?
BeforeInstall -> ApplicationStop -> ApplicationStart -> ValidateService
BeforeInstall -> ApplicationStop -> ValidateService -> ApplicationStart
ApplicationStop -> BeforeInstall -> ApplicationStart -> ValidateService
(Correct)
ApplicationStop -> BeforeInstall -> ValidateService -> ApplicationStart
Explanation
Correct option:
ApplicationStop -> BeforeInstall -> ApplicationStart -> ValidateService
In CodeDeploy, a deployment is a process of installing content on one or more instances. This content can consist of code, web and configuration files, executables, packages, scripts, and so on. CodeDeploy deploys content that is stored in a source repository, according to the configuration rules you specify.
The content in the 'hooks' section of the AppSpec file varies, depending on the compute platform for your deployment. The 'hooks' section for an EC2/On-Premises deployment contains mappings that link deployment lifecycle event hooks to one or more scripts. The 'hooks' section for a Lambda or an Amazon ECS deployment specifies Lambda validation functions to run during a deployment lifecycle event. If an event hook is not present, no operation is executed for that event. For an in-place deployment, the agent first stops the currently running application (ApplicationStop), downloads the revision, runs BeforeInstall before copying files into place, starts the new revision (ApplicationStart), and finally runs ValidateService to verify the deployment, which is the order listed in the correct option.
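A minimal appspec.yml sketch for an EC2/on-premises in-place deployment (the destination path, script names, and timeouts are illustrative) wires scripts to these hooks in that order:

    version: 0.0
    os: linux
    files:
      - source: /
        destination: /var/www/myapp          # example destination on the instance
    hooks:
      ApplicationStop:
        - location: scripts/stop_server.sh
          timeout: 60
      BeforeInstall:
        - location: scripts/install_dependencies.sh
          timeout: 300
      ApplicationStart:
        - location: scripts/start_server.sh
          timeout: 60
      ValidateService:
        - location: scripts/verify_deployment.sh
          timeout: 120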
Incorrect options:
BeforeInstall -> ApplicationStop -> ValidateService -> ApplicationStart
ApplicationStop -> BeforeInstall -> ValidateService -> ApplicationStart
BeforeInstall -> ApplicationStop -> ApplicationStart -> ValidateService
As explained above, these three options contradict the correct order of hooks, so these are incorrect.
References:
Question 51: Skipped
A developer wants a seamless ability to return to older versions of a Lambda function that is being deployed.
Which of the following solutions offers the LEAST operational overhead?
Use Lambda function layers that can point to the different versions
Use a Route 53 weighted policy that can point to the different Lambda function versions
Use CodeDeploy to configure blue/green deployments for the different Lambda function versions
Use a Lambda function alias that can point to the different versions
(Correct)
Explanation
Correct option:
Use a Lambda function alias that can point to the different versions
You can use versions to manage the deployment of your functions. For example, you can publish a new version of a function for beta testing without affecting users of the stable production version. Lambda creates a new version of your function each time that you publish the function. The new version is a copy of the unpublished version of the function.
By publishing a version of your function, you can store your code and configuration as a separate resource that cannot be changed.
A Lambda alias is like a pointer to a specific function version. Users can access the function version using the alias Amazon Resource Name (ARN). Each alias has a unique ARN. An alias can point only to a function version, not to another alias. You can update an alias to point to a different version of the Lambda function, so returning to an older version is just a single alias update.
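As a sketch of how little is involved, an alias can be declared in CloudFormation and rolled back simply by changing the version it points to (the function reference and version numbers are illustrative):

    LiveAlias:
      Type: AWS::Lambda::Alias
      Properties:
        FunctionName: !Ref MyFunction
        FunctionVersion: "7"     # change back to "6" (an earlier published version) to roll back
        Name: live

Because callers invoke the alias ARN, re-pointing the alias switches the version they reach without any client-side changes.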
Incorrect options:
Use a Route 53 weighted policy that can point to the different Lambda function versions - This option is a distractor, as Route 53 cannot be used for the given use case. Route 53 weighted policy lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
Use CodeDeploy to configure blue/green deployments for the different Lambda function versions - A deployment to the AWS Lambda compute platform is always a blue/green deployment. You do not specify a deployment type option. When you deploy to an AWS Lambda compute platform, the deployment configuration specifies the way traffic is shifted to the new Lambda function versions in your application. You can shift traffic using a canary, linear, or all-at-once deployment configuration. However, setting up and running CodeDeploy deployments involves considerably more operational overhead than simply re-pointing a Lambda alias, so this option does not offer the least overhead.
Use Lambda function layers that can point to the different versions - Lambda layers provide a convenient way to package libraries and other dependencies that you can use with your Lambda functions. Using layers reduces the size of uploaded deployment archives and makes it faster to deploy your code. Layers cannot be used to point to different versions of a Lambda function.
References:
Question 56: Skipped
You have moved your on-premises infrastructure to AWS and are in the process of configuring AWS Elastic Beanstalk deployment environments for production, development, and testing. You have configured your production environment to use a rolling deployment to prevent your application from becoming unavailable to users. For the development and testing environments, you would like to deploy quickly and are not concerned about downtime.
Which of the following deployment policies meet your needs?
All at once
(Correct)
Rolling
Rolling with additional batches
Immutable
Explanation
Correct option:
All at once
This is the quickest deployment method. Suitable if you can accept a short loss of service, and if quick deployments are important to you. With this method, Elastic Beanstalk deploys the new application version to each instance. Then, the web proxy or application server might need to restart. As a result, your application might be unavailable to users (or have low availability) for a short time.
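For the development and testing environments, the policy can be set with a configuration file in the application source bundle's .ebextensions directory, along these lines (the file name is illustrative):

    # .ebextensions/deployments.config
    option_settings:
      aws:elasticbeanstalk:command:
        DeploymentPolicy: AllAtOnce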
Incorrect options:
Rolling - With this method, your application is deployed to your environment one batch of instances at a time. Most capacity is retained throughout the deployment. This avoids downtime and minimizes reduced availability, at the cost of a longer deployment time. Suitable if you can't accept any period of completely lost service, which is not a requirement for the development and testing environments.
Rolling with additional batches - With this method, Elastic Beanstalk launches an extra batch of instances, then performs a rolling deployment. Launching the extra batch takes time, but full capacity is maintained throughout the deployment. This policy also avoids any reduced availability, at the cost of an even longer deployment time compared to the Rolling method. Suitable if you must maintain full capacity during the deployment.
Immutable - A slower deployment method that ensures your new application version is always deployed to new instances instead of updating existing instances. It also has the advantage of a quick and safe rollback in case the deployment fails. With this method, Elastic Beanstalk performs an immutable update to deploy your application: a second Auto Scaling group is launched in your environment, and the new version serves traffic alongside the old version until the new instances pass health checks.
Reference:
Question 60: Skipped
A communication platform serves millions of customers and deploys features in a production environment on AWS via CodeDeploy. You are reviewing scripts for the deployment process located in the AppSpec file.
Which of the following options lists the correct order of lifecycle events?
BeforeInstall => ValidateService => DownloadBundle => ApplicationStart
DownloadBundle => BeforeInstall => ApplicationStart => ValidateService
(Correct)
ValidateService => BeforeInstall => DownloadBundle => ApplicationStart
BeforeInstall => ApplicationStart => DownloadBundle => ValidateService
Explanation
Correct option:
DownloadBundle => BeforeInstall => ApplicationStart => ValidateService
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.
For an EC2/on-premises in-place deployment, the lifecycle events run in the order ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, ValidateService (DownloadBundle and Install are reserved for the CodeDeploy agent and cannot run scripts), so the correct option lists these events in the right relative order. You can specify one or more scripts to run in a hook; each script for a lifecycle event is specified on a separate line.
Incorrect options:
BeforeInstall => ApplicationStart => DownloadBundle => ValidateService
ValidateService => BeforeInstall => DownloadBundle => ApplicationStart
BeforeInstall => ValidateService => DownloadBundle => ApplicationStart
These three options contradict the details provided in the explanation above, so these options are not correct.
Reference: