20 Apr evening study
Question 26: Skipped
A global finance company has multiple data centers around the globe. Due to the company's ever-growing data, the solutions architect was instructed to set up a durable, cost-effective solution to archive sensitive data from the existing on-premises tape-based backup [-> Tape Gateway] infrastructure to the AWS Cloud.
Which of the following options is the recommended implementation to achieve the company requirements?
Set up a File Gateway to back up your data in Amazon S3 and archive in Amazon Glacier using your existing tape-based processes.
Set up a Tape Gateway to back up your data in Amazon S3 and archive it in Amazon Glacier using your existing tape-based processes.
(Correct)
Set up a Tape Gateway to back up your data in Amazon S3 with point-in-time backups as tapes which will be stored in the Virtual Tape Shelf.
Set up a Stored Volume Gateway to back up your data in Amazon S3 with point-in-time backups as EBS snapshots.
Explanation
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the Amazon Web Services Cloud for scalable and cost-effective storage that helps maintain data security.
AWS Storage Gateway offers file-based gateways (Amazon S3 File Gateway and Amazon FSx File Gateway), volume-based gateways (Cached and Stored), and a tape-based storage solution (Tape Gateway).
Tape Gateway offers a durable, cost-effective solution to archive your data in the AWS Cloud. With its virtual tape library (VTL) interface, you use your existing tape-based backup infrastructure to store data on virtual tape cartridges that you create on your tape gateway. Each tape gateway is preconfigured with a media changer and tape drives. These are available to your existing client backup applications as iSCSI devices. You add tape cartridges as you need to archive your data.
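As a rough illustration of this workflow, the sketch below uses boto3 to add virtual tapes to an existing tape gateway; the gateway ARN, tape size, and barcode prefix are placeholder assumptions, and the existing backup software would then write to these tapes over iSCSI and eject them to archive them to Glacier.

```python
import uuid
import boto3

# Minimal sketch: create virtual tapes on an existing Tape Gateway.
# The gateway ARN, tape size, and barcode prefix are hypothetical values.
storagegateway = boto3.client("storagegateway", region_name="us-east-1")

response = storagegateway.create_tapes(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    TapeSizeInBytes=100 * 1024**3,   # 100 GiB per virtual tape
    ClientToken=str(uuid.uuid4()),   # idempotency token
    NumTapesToCreate=5,
    TapeBarcodePrefix="FIN",         # 1-4 uppercase letters
)
print(response["TapeARNs"])
```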
Therefore, the correct answer is: Set up a Tape Gateway to back up your data in Amazon S3 and archive it in Amazon Glacier using your existing tape-based processes.
The option that says: Set up a Stored Volume Gateway to back up your data in Amazon S3 with point-in-time backups as EBS snapshots is incorrect. A Stored Volume Gateway is not recommended for tape archives; you should use a Tape Gateway instead.
The option that says: Set up a Tape Gateway to back up your data in Amazon S3 with point-in-time backups as tapes which will be stored in the Virtual Tape Shelf is incorrect. The archival should be on Amazon Glacier for cost-effectiveness.
The option that says: Set up a File Gateway to back up your data in Amazon S3 and archive in Amazon Glacier using your existing tape-based processes is incorrect. A File Gateway is not recommended for tape archives; you should use a Tape Gateway instead.
References:
Check out this AWS Storage Gateway Cheat Sheet:
Question 60: Skipped
A fashion company in France sells bags, clothes, and other luxury items in its online web store. The online store is currently hosted on the company’s on-premises data center. The company has recently decided to move all of its on-premises infrastructure to the AWS cloud. The main application is running on an NGINX web server and a database with an Oracle Real Application Clusters (RAC) One Node configuration.
Which of the following is the best way to migrate the application to AWS and set up an automated backup?
Launch an EC2 instance and run an NGINX server to host the application. Deploy an RDS instance and enable automated backups on the RDS RAC cluster.
Launch an On-Demand EC2 instance and run an NGINX server to host the application. Deploy an RDS instance with a Multi-AZ deployment configuration and enable automated backups on the RDS RAC cluster.
Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS Volumes on the EC2 instance of the database and then write a shell script that runs the manual snapshot of the volumes.
Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS volumes to the EC2 instance of the database and then use the Data Lifecycle Manager to automatically create scheduled snapshots against the EBS volumes.
(Correct)
Explanation
Oracle RAC is not supported by RDS. That is why you need to deploy the database in an EC2 instance and then either create a shell script to automate the backup or use the Data Lifecycle Manager to automate the process.
An Oracle Real Application Clusters (RAC) One Node option provides virtualized servers on a single machine, delivering 'always on' availability for single-instance databases at a fraction of the cost.
Amazon Data Lifecycle Manager (DLM) for EBS Snapshots provides a simple, automated way to back up data stored on Amazon EBS volumes. You can define backup and retention schedules for EBS snapshots by creating lifecycle policies based on tags. With this feature, you no longer have to rely on custom scripts to create and manage your backups.
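For illustration, a minimal boto3 sketch of such a lifecycle policy is shown below; the execution role ARN and the Backup tag are assumptions, and the schedule simply takes a daily snapshot of tagged volumes and retains the last seven.

```python
import boto3

# Minimal sketch: a daily DLM snapshot policy for EBS volumes tagged Backup=true.
# The execution role ARN and the tag key/value are placeholder assumptions.
dlm = boto3.client("dlm", region_name="us-east-1")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots of the database EBS volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 7},  # keep the last 7 snapshots
                "CopyTags": True,
            }
        ],
    },
)
```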
Hence, the correct answer is: Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS volumes to the EC2 instance of the database and then use the Data Lifecycle Manager to automatically create scheduled snapshots against the EBS volumes.
The option that says: Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS Volumes on the EC2 instance of the database and then write a shell script that runs the manual snapshot of the volumes is incorrect. Although this approach is valid, a more suitable option is to use the Data Lifecycle Manager (DLM) to automatically take snapshots of the EBS volumes. The DLM can also reduce storage costs by deleting outdated backups.
The following options are incorrect as these two use Amazon RDS, which doesn't natively support Oracle RAC:
- Launch an EC2 instance and run an NGINX server to host the application. Deploy an RDS instance and enable automated backups on the RDS RAC cluster.
- Launch an On-Demand EC2 instance and run an NGINX server to host the application. Deploy an RDS instance with a Multi-AZ deployment configuration and enable automated backups on the RDS RAC cluster.
References:
Check out this Amazon RDS Cheat Sheet:
Question 63: Skipped
A company offers a service that allows users to upload media files through a web portal. The web servers accept the media files, which are uploaded directly to the on-premises Network Attached Storage (NAS). For each uploaded media file, a corresponding message is sent to the message queue. A processing server picks up each message and processes the media file, which can take up to 30 minutes. The company noticed that the number of media files waiting in the processing queue is significantly higher during business hours, but the processing server quickly catches up after business hours. To save costs, the company hired a Solutions Architect to improve media processing by migrating the workload to the AWS Cloud.
Which of the following options is the most cost-effective solution?
Reconfigure the existing web servers to publish messages to a standard queue on Amazon SQS. Create an AWS Lambda function that will pull requests from the SQS queue and process the media files. Invoke the Lambda function every time a new message is sent to the queue. Send the processed media files into an Amazon S3 bucket.
Reconfigure the existing web servers to publish messages to a queue in Amazon MQ. Create an Auto Scaling group of Amazon EC2 instances that will pull requests from the queue and process the media files. Configure the Auto Scaling group to scale based on the length of the SQS queue. Send the processed media files into an Amazon EFS mount point and shut down the EC2 instances after processing is complete.
Reconfigure the existing web servers to publish messages to a queue in Amazon MQ. Create an AWS Lambda function that will pull requests from the SQS queue and process the media files. Invoke the Lambda function every time a new message is sent to the queue. Send the processed media files into an Amazon EFS volume.
Reconfigure the existing web servers to publish messages to a standard queue on Amazon SQS. Create an Auto Scaling group of Amazon EC2 instances that will pull requests from the queue and process the media files. Configure the Auto Scaling group to scale based on the length of the SQS queue. Send the processed media files into an Amazon S3 bucket.
(Correct)
Explanation
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers "standard" as the default queue type. Standard queues support at-least-once message delivery.
You can use standard message queues in many scenarios, as long as your application can process messages that arrive more than once and out of order, for example:
- Decouple live user requests from intensive background work – Let users upload media while resizing or encoding it.
- Allocate tasks to multiple worker nodes – Process a high number of credit card validation requests.
- Batch messages for future processing – Schedule multiple entries to be added to a database.
You can scale your Amazon EC2 Auto Scaling group in or out in response to changes in the system load reflected in an Amazon Simple Queue Service (Amazon SQS) queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it's configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times.
The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn't vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group. If you use a target tracking scaling policy based on a custom Amazon SQS queue metric, dynamic scaling can adjust to the demand curve of your application more effectively.
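To make the consumer side concrete, here is a minimal worker sketch that each EC2 instance in the Auto Scaling group could run; the queue name, output bucket, and process_media helper are hypothetical, and the visibility timeout is set above the 30-minute processing ceiling so a message is not redelivered while it is still being processed.

```python
import boto3

# Minimal worker sketch (hypothetical queue name, bucket, and process_media helper).
sqs = boto3.client("sqs")
s3 = boto3.client("s3")
queue_url = sqs.get_queue_url(QueueName="media-processing-queue")["QueueUrl"]

def process_media(message_body: str) -> str:
    """Placeholder for the actual media processing (can take up to 30 minutes)."""
    return "/tmp/processed-output.mp4"

while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,      # long polling
        VisibilityTimeout=2400,  # 40 minutes > worst-case 30-minute processing time
    )
    for msg in resp.get("Messages", []):
        output_path = process_media(msg["Body"])
        s3.upload_file(output_path, "processed-media-bucket", "outputs/result.mp4")
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```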
Amazon SQS is a queue service that is highly scalable, simple to use, and doesn't require you to set up message brokers. AWS recommends this service for new applications that can benefit from nearly unlimited scalability and simple APIs.
Amazon MQ is a managed message broker service that provides compatibility with many popular message brokers. AWS recommends Amazon MQ for migrating applications from existing message brokers that rely on compatibility with APIs such as JMS or protocols such as AMQP, MQTT, OpenWire, and STOMP.
Therefore, the correct answer is: Reconfigure the existing web servers to publish messages to a standard queue on Amazon SQS. Create an Auto Scaling group of Amazon EC2 instances that will pull requests from the queue and process the media files. Configure the Auto Scaling group to scale based on the length of the SQS queue. Send the processed media files into an Amazon S3 bucket.
The option that says: Reconfigure the existing web servers to publish messages to a standard queue on Amazon SQS. Create an AWS Lambda function that will pull requests from the SQS queue and process the media files. Invoke the Lambda function every time a new message is sent to the queue. Send the processed media files into an Amazon S3 bucket is incorrect. Although this answer is the most cost-effective, AWS Lambda only allows functions to run up to 15 minutes. Remember that the media files can take up to 30 minutes to process.
The option that says: Reconfigure the existing web servers to publish messages to a queue in Amazon MQ. Create an Auto Scaling group of Amazon EC2 instances that will pull requests from the queue and process the media files. Configure the Auto Scaling group to scale based on the length of the SQS queue. Send the processed media files into an Amazon EFS mount point and shut down the EC2 instances after processing is complete is incorrect. Amazon SQS is recommended here because the scenario does not require compatibility with JMS or protocols such as AMQP, MQTT, etc. Also, storing media files on Amazon EFS is more expensive than using Amazon S3.
The option that says: Reconfigure the existing web servers to publish messages to a queue in Amazon MQ. Create an AWS Lambda function that will pull requests from the SQS queue and process the media files. Invoke the Lambda function every time a new message is sent to the queue. Send the processed media files into an Amazon EFS volume is incorrect. Lambda functions cannot run for more than 15 minutes, while in the scenario, the processing time is 30 minutes. Moreover, storing media files on Amazon EFS is more expensive than using Amazon S3.
References:
Check out these Amazon SQS and Amazon MQ Cheat Sheets:
Question 73: Skipped
A company plans to migrate its on-premises workload to the AWS cloud. The solutions architect has been tasked to perform a Total Cost of Ownership (TCO) [-> App Discovery Service] analysis and prepare a cost-optimized migration plan for moving the systems hosted in the on-premises network to AWS. It is required to collect configuration, usage, and behavioral data from the on-premises servers to better understand the current workloads before the migration.
Which of the following options is the recommended solution to meet the company's requirements?
Use the AWS SAM service to move your data to AWS which will also help you perform the TCO analysis.
Use the AWS Application Discovery Service to gather data about your on-premises data center and perform the TCO analysis.
(Correct)
Use the AWS Migration Hub service to collect data from each server in your on-premises data center and perform the TCO analysis.
Use the AWS Application Migration Service (MGN) to migrate VM servers to AWS and collect the data required to complete your TCO analysis.
Explanation
AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers.
Planning data center migrations can involve thousands of workloads that are often deeply interdependent. Server utilization data and dependency mapping are important early first steps in the migration process. AWS Application Discovery Service collects and presents configuration, usage, and behavior data from your servers to help you better understand your workloads.
The collected data is retained in encrypted format in an AWS Application Discovery Service data store. You can export this data as a CSV file and use it to estimate the Total Cost of Ownership (TCO) of running on AWS and to plan your migration to AWS. In addition, this data is also available in AWS Migration Hub, where you can migrate the discovered servers and track their progress as they get migrated to AWS.
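A small boto3 sketch of that workflow might look like the following; the region is an assumption, the discovery agents are assumed to already be reporting data, and the export is polled until the CSV (usable for the TCO estimate) is ready.

```python
import time
import boto3

# Minimal sketch: list discovered servers and export the collected data as CSV.
discovery = boto3.client("discovery", region_name="us-west-2")

for agent in discovery.describe_agents().get("agentsInfo", []):
    print(agent["agentId"], agent.get("health"))

export = discovery.start_export_task(exportDataFormat=["CSV"])
export_id = export["exportId"]

while True:
    task = discovery.describe_export_tasks(exportIds=[export_id])["exportsInfo"][0]
    if task["exportStatus"] != "IN_PROGRESS":
        print(task["exportStatus"], task.get("configurationsDownloadUrl"))
        break
    time.sleep(30)
```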
Therefore, the correct answer is: Use the AWS Application Discovery Service to gather data about your on-premises data center and perform the TCO analysis.
The option that says: Use the AWS SAM service to move your data to AWS which will also help you perform the TCO analysis is incorrect. The AWS Serverless Application Model (AWS SAM) service is just an open-source framework that you can use to build serverless applications on AWS. It is not a suitable migration service to be used for your on-premises systems.
The option that says: Use the AWS Migration Hub service to collect data from each server in your on-premises data center and perform the TCO analysis is incorrect. AWS Migration Hub simply provides a single location to track the progress of application migrations across multiple AWS and partner solutions. Using Migration Hub allows you to choose the AWS and partner migration tools that best fit your needs while providing visibility into the status of migrations across your portfolio of applications. Although AWS Application Discovery Service can be integrated with AWS Migration Hub, Migration Hub alone is not enough to meet the requirement in this scenario.
The option that says: Use the AWS Application Migration Service (MGN) to migrate VM servers to AWS and collect the data required to complete your TCO analysis is incorrect. Although AWS Application Migration Service is a migration tool, it is used to replicate or mirror your on-premises VMs to the AWS cloud. It does not provide a helpful dashboard on applications running on each VM, unlike AWS Application Discovery Service.
References:
Check out this AWS Application Migration (MGN) Service Cheat Sheet:
Question 14: Skipped
A company is hosting its application and MySQL database in its on-premises data center. The database grows by about 10 GB per day and is approximately 25 TB in total size. The company wants to migrate the database workload to the AWS cloud. A 50-Mbps VPN connection is currently in place to connect the corporate network to AWS. The company plans to complete the migration to AWS within 3 weeks with the LEAST downtime possible.
Which of the following solutions should be implemented to meet the company requirements?
Create a snapshot of the on-premises database server and use VM Import/Export service to import the snapshot to AWS. Provision a new Amazon EC2 instance from the imported snapshot. Configure replication from the on-premises database server to the EC2 instance through the VPN connection. Wait until the replication is complete then update the database DNS entry to point to the EC2 instance. Stop the database replication.
Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Amazon Aurora MySQL DB instance and load the data. Using the VPN connection, configure replication from the on-premises database server to the Amazon Aurora DB instance. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication.
(Correct)
Temporarily stop the on-premises application to stop any database I/O operation. Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Amazon Aurora MySQL DB instance and load the data. Update the database DNS entry to point to the Aurora DB instance. Start the application again to resume normal operations.
Create an AWS Database Migration Service (DMS) instance and an Amazon Aurora MySQL DB instance. Define the on-premises database server details and the Amazon Aurora MySQL instance details on the AWS DMS instance. Use the VPN connection to start the replication from the on-premises database server to AWS. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication.
Explanation
AWS Snowball, a part of the AWS Snow Family, is an edge computing, data migration, and edge storage device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large scale-data transfer up to 80TB. Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments.
Snowball moves terabytes of data in about a week. You can use it to move things like databases, backups, archives, healthcare records, analytics datasets, IoT sensor data, and media content, especially when network conditions prevent realistic timelines for transferring large amounts of data both into and out of AWS.
Each import job uses a single Snowball appliance. After you create a job in the AWS Snow Family Management Console or the job management API, AWS ships a Snowball to you. When it arrives in a few days, you connect the Snowball Edge device to your network and transfer the data that you want to be imported into Amazon S3 onto the device. When you’re done transferring data, ship the Snowball back to AWS, and AWS will import your data into Amazon S3.
Amazon Aurora MySQL integrates with other AWS services so that you can extend your Aurora MySQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora MySQL DB cluster can use AWS services to load data from text or XML files stored in an Amazon Simple Storage Service (Amazon S3) bucket into your DB cluster using the LOAD DATA FROM S3 or LOAD XML FROM S3 command.
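As a sketch of the bulk-load step, assuming the exported dump was staged as CSV files in S3, the cluster's aws_default_s3_role parameter points to an IAM role with read access to the bucket, and the PyMySQL driver is installed, the load could be run as follows; all hostnames, credentials, and object keys are placeholders.

```python
import pymysql  # assumption: PyMySQL driver is available on the migration host

# Placeholder connection details for the Aurora MySQL cluster endpoint.
conn = pymysql.connect(
    host="shop-aurora.cluster-abcdefgh.us-east-1.rds.amazonaws.com",
    user="admin",
    password="REPLACE_ME",
    database="shopdb",
)

# Aurora MySQL extension: load a staged CSV object directly from S3 into a table.
load_sql = """
    LOAD DATA FROM S3 's3://db-migration-staging/orders.csv'
    INTO TABLE orders
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES;
"""

with conn.cursor() as cursor:
    cursor.execute(load_sql)
conn.commit()
conn.close()
```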
Therefore, the correct answer is: Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Amazon Aurora MySQL DB instance and load the data. Using the VPN connection, configure replication from the on-premises database server to the Amazon Aurora DB instance. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication. You can use the Snowball device to import TBs of data to AWS. Then you can load the data to the Amazon Aurora DB instance and replicate the missing data from the on-premises database server. You can cut over to the Aurora database after replication.
The option that says: Create a snapshot of the on-premises database server and use VM Import/Export service to import the snapshot to AWS. Provision a new Amazon EC2 instance from the imported snapshot. Configure replication from the on-premises database server to the EC2 instance through the VPN connection. Wait until the replication is complete then update the database DNS entry to point to the EC2 instance. Stop the database replication is incorrect. You cannot import a 25 TB snapshot using VM Import/Export. This option does not specify if it uses the company network to export the snapshot to AWS. If so, the 50-Mbps connection won't be enough to export the entire database within the 3-week window.
The option that says: Create an AWS Database Migration Service (DMS) instance and an Amazon Aurora MySQL DB instance. Define the on-premises database server details and the Amazon Aurora MySQL instance details on the AWS DMS instance. Use the VPN connection to start the replication from the on-premises database server to AWS. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication is incorrect. Replicating an entire 25 TB database over the 50-Mbps VPN connection will take several weeks for the replication to catch up, not to mention that there will be an additional 10 GB of data to replicate for each passing day.
The option that says: Temporarily stop the on-premises application to stop any database I/O operation. Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Amazon Aurora MySQL DB instance and load the data. Update the database DNS entry to point to the Aurora DB instance. Start the application again to resume normal operations is incorrect. This is possible but the downtime is too much. From the data export, loading to the Snowball device, and importing to the Amazon Aurora DB instance, the downtime will take at least a few days.
References:
Check out these AWS Snowball Edge and Amazon Aurora Cheat Sheets:
Question 16: Skipped
A leading aerospace engineering company has over 1 TB of aeronautical data stored on the corporate file server of their on-premises network. This data is used by a lot of their in-house analytical and engineering applications. The aeronautical data consists of technical files which can have a file size of a few megabytes to multiple gigabytes. The data scientists typically modify an average of 10 percent of these files every day. Recently, the management decided to adopt a hybrid cloud architecture to better serve their clients around the globe. The management requested to migrate its applications to AWS over the weekend to minimize any business impact and system downtime. The on-premises data center has a 50-Mbps Internet connection which can be used to transfer all of the 1 TB of data to AWS, but based on the calculations, it will take at least 48 hours to complete this task.
Which of the following options will allow the solutions architect to move all of the aeronautical data to AWS to meet the above requirement?
1. Set up a Gateway-Stored volume gateway using the AWS Storage Gateway service.
2. Establish an iSCSI connection between your on-premises data center and your AWS Cloud then copy the data to the Storage Gateway volume.
3. After all of your data has been successfully copied, create an EBS snapshot of the volume.
4. Restore the snapshots as EBS volumes and attach them to your EC2 instances on Sunday.
1. At the end of business hours on Friday, start copying the data to a Snowball Edge device.
2. When the Snowball Edge has completely transferred your data to the AWS Cloud, copy all of the data to multiple EBS volumes.
3. On Sunday afternoon, mount the generated EBS volume to your EC2 instances.
1. Synchronize the data from your on-premises data center to an S3 bucket using Multipart upload for large files from Saturday morning to Sunday evening.
2. Configure your application hosted in AWS to use the S3 bucket to serve the aeronautical data files.
1. Synchronize the on-premises data to an S3 bucket one week before the migration schedule using the AWS CLI's S3 sync command.
2. Perform a final synchronization task on Friday after the end of business hours.
3. Set up your application hosted in a large EC2 instance in your VPC to use the S3 bucket.
(Correct)
Explanation
The most effective choice here is to use the S3 sync feature that is available in the AWS CLI. In this way, you can comfortably synchronize the data in your on-premises server and in AWS a week before the migration. And on Friday, just do another sync to complete the task. Remember that the sync feature of S3 only uploads the "delta" or, in other words, the "difference" in the subset. Therefore, it will only take just a fraction of the time to complete the data synchronization compared to the other methods.
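A minimal sketch of the two-pass sync could simply shell out to the AWS CLI; the NAS mount path and bucket name below are assumptions.

```python
import subprocess

SOURCE = "/mnt/corp-fileserver/aeronautical-data"  # hypothetical NAS mount path
BUCKET = "s3://aero-data-migration-bucket"         # hypothetical destination bucket

# Pass 1: bulk upload a week before the migration weekend.
subprocess.run(["aws", "s3", "sync", SOURCE, BUCKET], check=True)

# Pass 2: Friday evening, only the files changed since the first pass (~10% per day)
# are uploaded, so this finishes well within the weekend window.
subprocess.run(["aws", "s3", "sync", SOURCE, BUCKET], check=True)
```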
Therefore, the correct answer is:
1. Synchronize the on-premises data to an S3 bucket one week before the migration schedule using the AWS CLI's S3 sync command.
2. Perform a final synchronization task on Friday after the end of business hours.
3. Set up your application hosted in a large EC2 instance in your VPC to use the S3 bucket.
The idea is to synchronize the data days before the migration weekend to avoid the risks and possible delays of transferring the data in such a short period of allowable time i.e. 48 hours.
The following option is incorrect:
1. At the end of business hours on Friday, start copying the data to a Snowball Edge device.
2. When the Snowball Edge has completely transferred your data to the AWS Cloud, copy all of the data to multiple EBS volumes.
3. On Sunday afternoon, mount the generated EBS volume to your EC2 instances.
Although using a Snowball appliance is a valid option, there is a risk that your data won't make it in time for Monday. Remember that you have to consider the time it takes to transfer the data to the Snowball, including the process of shipping it back to AWS, which may take a day or two. Factoring in all of these considerations, it is clear that this is not the best option as the risk is just too high that the Snowball delivery might not make it in time.
The following option is incorrect:
1. Set up a Gateway-Stored volume gateway using the AWS Storage Gateway service.
2. Establish an iSCSI connection between your on-premises data center and your AWS Cloud then copy the data to the Storage Gateway volume.
3. After all of your data has been successfully copied, create an EBS snapshot of the volume.
4. Restore the snapshots as EBS volumes and attach them to your EC2 instances on Sunday.
In this scenario, you have to create storage volumes in the AWS Cloud that your on-premises applications can access as Internet Small Computer System Interface (iSCSI) targets, which is not mentioned in the option. This is not a cost-effective solution considering that you only have to migrate 1 TB of data and the fact that you can just use the S3 sync command via the AWS CLI.
The following option is incorrect:
1. Synchronize the data from your on-premises data center to an S3 bucket using Multipart upload for large files from Saturday morning to Sunday evening.
2. Configure your application hosted in AWS to use the S3 bucket to serve the aeronautical data files.
Although using multipart upload in S3 is faster than a regular S3 upload, this method still carries a risk considering that the company only has a 50-Mbps Internet connection. It is still better to use the S3 sync command via the AWS CLI days before the actual migration to mitigate the risks.
References:
Check out this Amazon S3 Cheat Sheet:
Question 29: Skipped
A company has a suite of IBM products in its on-premises data centers, such as IBM WebSphere, IBM MQ, and IBM DB2 servers. The solutions architect has been tasked to migrate all of the current systems to the AWS Cloud in the most cost-effective way and improve the availability of the cloud infrastructure.
Which of the following options is the MOST suitable solution that the solutions architect should implement to meet the company's requirements?
Use the AWS Application Migration Service to migrate your servers to AWS. Set up Amazon EC2 instances to re-host your IBM WebSphere and IBM DB2 servers separately. Re-host and migrate the IBM MQ service to Amazon SQS Standard Queue.
Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, re-architect, and migrate the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Re-host and migrate the IBM MQ service to Amazon SQS FIFO Queue.
Use the AWS Application Migration Service to migrate your servers to AWS. Upload the IBM licenses to AWS License Manager and use the licenses when configuring Amazon EC2 instances to re-host your IBM WebSphere and IBM DB2 servers separately. Re-host and migrate the IBM MQ service to Amazon MQ.
Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, migrate, and re-architect the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Migrate and re-platform IBM MQ to Amazon MQ in a phased approach.
(Correct)
Explanation
On Amazon EC2, you can run many of the proven IBM technologies with which you're already familiar. You may be eligible to bring many of your own IBM software and licenses (BYOSL) to run on Amazon EC2 instances.
AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) can allow you to convert and migrate IBM Db2 databases on Linux, UNIX, and Windows (Db2 LUW) to any DMS-supported target. This can accelerate your move to the cloud by allowing you to migrate more of your legacy databases. The new Db2 LUW source adds to the existing list of the relational database, NoSQL, and object store sources supported by DMS. If the database migration target is Amazon Aurora, Amazon Redshift, or Amazon DynamoDB, you can use DMS free for six months.
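A trimmed-down boto3 sketch of the DMS side (after SCT has converted the schema) is shown below; every identifier, hostname, credential, and the replication instance ARN is a placeholder assumption.

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Placeholder source (on-premises Db2 LUW) and target (Aurora MySQL) endpoints.
source = dms.create_endpoint(
    EndpointIdentifier="onprem-db2-source",
    EndpointType="source",
    EngineName="db2",
    ServerName="db2.corp.example.com",
    Port=50000,
    DatabaseName="SAMPLE",
    Username="migrator",
    Password="REPLACE_ME",
)
target = dms.create_endpoint(
    EndpointIdentifier="aurora-mysql-target",
    EndpointType="target",
    EngineName="aurora",
    ServerName="app-aurora.cluster-abcdefgh.us-east-1.rds.amazonaws.com",
    Port=3306,
    Username="admin",
    Password="REPLACE_ME",
)

# Full load plus ongoing change data capture until cutover.
dms.create_replication_task(
    ReplicationTaskIdentifier="db2-to-aurora",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"all",'
                  '"object-locator":{"schema-name":"%","table-name":"%"},"rule-action":"include"}]}',
)
```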
Amazon MQ is a managed message broker service from AWS that makes it easy to set up and operate message brokers in the cloud. To migrate and re-platform your on-premises IBM MQ to Amazon MQ, you can opt for a phased approach for the migration process. You can move the producers (senders) and consumers (receivers) in phases from your on-premises to the cloud. This process uses Amazon MQ as the message broker and decommissions IBM MQ once all producers/consumers have been successfully migrated.
Hence, the correct option is: Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, migrate, and re-architect the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Migrate and re-platform IBM MQ to Amazon MQ in a phased approach.
The option that says: Use the AWS Application Migration Service to migrate your servers to AWS. Set up Amazon EC2 instances to re-host your IBM WebSphere and IBM DB2 servers separately. Re-host and migrate the IBM MQ service to Amazon SQS Standard Queue is incorrect because the AWS Application Migration Service simply automates the migration of your on-premises virtual machines to the AWS Cloud. Moreover, you can't re-host and migrate your IBM MQ service to Amazon SQS. A better solution is to re-platform IBM MQ to Amazon MQ.
The option that says: Use the AWS Application Migration Service to migrate your servers to AWS. Upload the IBM licenses to AWS License Manager and use the licenses when configuring Amazon EC2 instances to re-host your IBM WebSphere and IBM DB2 servers separately. Re-host and migrate the IBM MQ service to Amazon MQ is incorrect because the AWS Application Migration Service simply automates the migration of your on-premises virtual machines to the AWS Cloud. AWS License Manager is used to create customized licensing rules that emulate the terms of their licensing agreements and then enforce these rules. It is not used for storing software licenses. This service is limited to migrating virtual machines (VMs) and in addition, you cannot directly re-host and migrate your IBM MQ to Amazon MQ since these are two completely different systems. Instead, you have to re-platform IBM MQ to Amazon MQ in a phased approach.
The option that says: Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, re-architect and migrate the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Re-host and migrate the IBM MQ service to Amazon SQS FIFO Queue is incorrect. Although the use of AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) is correct, the migration process for your IBM MQ is wrong. Amazon Simple Queue Service (SQS) is just a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. There are a lot of features in IBM MQ that are not available in Amazon SQS, whether it is a Standard or a FIFO Queue. You have to re-platform IBM MQ to Amazon MQ instead.
References:
Check out these cheat sheets on AWS Database Migration Service and AWS Application Migration Service:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 31: Skipped
A company recently launched its new e-commerce platform that is hosted on its on-premises data center. The web servers connect to a MySQL database. The e-commerce platform is quickly gaining popularity, and the management is worried that the on-premises servers won’t be able to keep up with user traffic in the coming months. They decided to migrate the entire application to AWS to take advantage of the scalability of the cloud. The following are required for this migration:
- Improve the security of the application.
- Increase the reliability and availability of the application.
- Reduce the latency between the users and the application.
- Reduce the maintenance overhead after the migration to the cloud.
Which of the following options should the Solutions Architect implement to meet the company's requirements? (Select TWO.)
Create an Auto Scaling of Amazon EC2 instances spread in two Availability Zones to host the web servers. Use an Amazon Aurora for MySQL with Multi-AZ enabled as the database.
(Correct)
Create an Amazon S3 bucket to store the static contents and enable website hosting. To reduce the latency when serving content, set this bucket as the origin for an Amazon CloudFront distribution. Create AWS WAF rules to block common web exploits.
(Correct)
Create an Auto Scaling of Amazon EC2 instances spread in two Availability Zones to host the web servers and the highly available MySQL database cluster in a master and slave configuration.
Create an Amazon S3 bucket to store the static contents and enable website hosting. To reduce the latency when serving content, enable S3 Transfer Acceleration. Create AWS WAF rules to block common web exploits.
Create an Auto Scaling of Amazon EC2 instances spread in two Availability Zones to host the web servers. To reduce the latency when serving content, create an AWS Global Accelerator endpoint. Migrate the database to an Amazon RDS MySQL instance.
Explanation
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable.
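For reference, a minimal boto3 sketch that provisions an Aurora MySQL cluster with instances in two Availability Zones is shown below; all identifiers, the password, the instance class, and the AZs are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Placeholder cluster; a second instance in a different AZ enables automatic failover.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

for idx, az in enumerate(["us-east-1a", "us-east-1b"], start=1):
    rds.create_db_instance(
        DBInstanceIdentifier=f"ecommerce-aurora-{idx}",
        DBClusterIdentifier="ecommerce-aurora",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
        AvailabilityZone=az,
    )
```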
Adding Amazon EC2 Auto Scaling to your application architecture is one way to maximize the benefits of the AWS Cloud. When you use Amazon EC2 Auto Scaling, your applications gain the following benefits:
- Better fault tolerance. Amazon EC2 Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can also configure Amazon EC2 Auto Scaling to use multiple Availability Zones. If one Availability Zone becomes unavailable, Amazon EC2 Auto Scaling can launch instances in another one to compensate.
- Better availability. Amazon EC2 Auto Scaling helps ensure that your application always has the right amount of capacity to handle the current traffic demand.
- Better cost management. Amazon EC2 Auto Scaling can dynamically increase and decrease capacity as needed. Because you pay for the EC2 instances you use, you save money by launching instances when they are needed and terminating them when they aren't.
Amazon CloudFront works seamlessly with Amazon Simple Storage Service (S3) to accelerate the delivery of your web content and reduce the load on your origin servers. The CloudFront edge locations will cache and deliver your content closer to your users to reduce latency and offload capacity from your origin. CloudFront will also restrict access to your S3 bucket to only CloudFront endpoints rendering your content and application more secure and performant.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
The option that says: Create an Auto Scaling of Amazon EC2 instances spread in two Availability Zones to host the web servers. Use an Amazon Aurora for MySQL with Multi-AZ enabled as the database is correct. The Auto Scaling group ensures there are enough web servers to answer user requests. Spreading the instances on at least two AZ and enabling Multi-AZ on the database ensure that your application is protected if one AZ in AWS goes down.
The option that says: Create an Amazon S3 bucket to store the static contents and enable website hosting. To reduce the latency when serving content, set this bucket as the origin for an Amazon CloudFront distribution. Create AWS WAF rules to block common web exploits is correct. Amazon S3 can serve static content and Amazon CloudFront can cache frequently requested content, which greatly improves latency. AWS WAF has default rules that you can enable to block common web exploits to improve your application security in AWS.
The option that says: Create an Auto Scaling of Amazon EC2 instances spread in two Availability Zones to host the web servers and the highly available MySQL database cluster in a master and slave configuration is incorrect. Spreading EC2 instances across multiple AZs is good for availability; however, hosting the database on the EC2 instances requires a lot of management overhead. It is recommended to use Amazon Aurora for the database.
The option that says: Create an Amazon S3 bucket to store the static contents and enable website hosting. To reduce the latency when serving content, enable S3 Transfer Acceleration. Create AWS WAF rules to block common web exploits is incorrect. Amazon S3 can serve static content, but S3 Transfer Acceleration is used for accelerating transfers going to and from the S3 bucket. Caching with CloudFront would be even faster since the user doesn't need to go to the S3 bucket for their cached requests.
The option that says: Create an Auto Scaling of Amazon EC2 instances spread in two Availability Zones to host the web servers. To reduce the latency when serving content, create an AWS Global Accelerator endpoint. Migrate the database to an Amazon RDS MySQL instance is incorrect. Without Multi-AZ enabled, the RDS instance is not protected in the event of a crash or failure. When you promote a read replica to become the master, a short downtime is required. This reduces the application's availability.
References:
Check out these Amazon Aurora, Amazon S3, and Amazon CloudFront Cheat Sheets:
Question 34: Skipped
A company has deployed a multi-tier web application on AWS that uses Compute Optimized Instances for server-side processing and Storage Optimized EC2 Instances to store various media files [-> S3]. To ensure data durability, there is a scheduled job that replicates the files to each EC2 instance. [-> EFS? S3 cheaper and EC2 is not needed in the first place] The current architecture worked for a few months but it started to fail as the number of files grew, which is why the management decided to redesign the system.
Which of the following options should the solutions architect implement in order to launch a new architecture with improved data durability and cost-efficiency?
Migrate all media files to an Amazon S3 bucket and use this as the origin for the new CloudFront web distribution. Set up an Elastic Load Balancer with an Auto Scaling of EC2 instances to host the web servers. Use a combination of Cost Explorer and AWS Trusted advisor checks to monitor the operating costs and identify potential savings.
(Correct)
Migrate all media files to Amazon EFS then attach this new drive as a mount point to a new set of Storage Optimized EC2 Instances. For the web servers, set up an Elastic Load Balancer with an Auto Scaling of EC2 instances and use this as the origin for a new Amazon CloudFront web distribution. Use a combination of Cost Explorer and AWS Trusted advisor checks to monitor the operating costs and identify potential savings.
Migrate the web application to AWS Elastic Beanstalk and move all media files to Amazon EFS for a durable and scalable storage. Set up an Amazon CloudFront distribution with EFS as the origin. Use a combination of Consolidated Billing and AWS Trusted advisor checks to monitor the operating costs and identify potential savings.
Migrate and host the entire web application to Amazon S3 for a more cost-effective web hosting. Enable cross-region replication to improve data durability. Use a combination of Consolidated Billing and AWS Trusted advisor checks to monitor the operating costs and identify potential savings.
Explanation
Cloud storage is a cloud computing model that stores data on the Internet through a cloud computing provider who manages and operates data storage as a service. It’s delivered on demand with just-in-time capacity and costs, and eliminates buying and managing your own data storage infrastructure. This gives you agility, global scale and durability, with 'anytime, anywhere' data access.
AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices. Whether you are establishing new workflows, developing applications, or making ongoing improvements, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.
AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Get started quickly by creating custom reports that analyze cost and usage data, both at a high level and for highly-specific requests. Using AWS Cost Explorer, you can dive deeper into your cost and usage data to identify trends, pinpoint cost drivers, and detect anomalies.
In this scenario, you can use a combination of Cost Explorer and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings. For data storage, you can use either S3 or EFS to store the media files. However, S3 is cheaper than EFS and is more suitable for storing static media files.
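As an example of the cost-monitoring piece, the sketch below pulls one month of per-service spend from Cost Explorer; the date range is a placeholder, and Cost Explorer is assumed to already be enabled on the account.

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")

# Placeholder month; group unblended cost by service to spot the biggest cost drivers.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-03-01", "End": "2023-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f'{group["Keys"][0]}: ${amount:,.2f}')
```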
Therefore, the correct answer is: Migrate all media files to an Amazon S3 bucket and use this as the origin for the new CloudFront web distribution. Set up an Elastic Load Balancer with an Auto Scaling of EC2 instances to host the web servers. Use a combination of Cost Explorer and AWS Trusted advisor checks to monitor the operating costs and identify potential savings.
The option that says: Migrate and host the entire web application to Amazon S3 for a more cost-effective web hosting. Enable cross-region replication to improve data durability. Use a combination of Consolidated Billing and AWS Trusted advisor checks to monitor the operating costs and identify potential savings is incorrect. Amazon S3 is not capable of handling server-side processing as it can only host static websites. Moreover, you can only use Consolidated Billing if your account is configured to use AWS Organizations.
The option that says: Migrate all media files to Amazon EFS then attach this new drive as a mount point to a new set of Storage Optimized EC2 Instances. For the web servers, set up an Elastic Load Balancer with an Auto Scaling of EC2 instances and use this as the origin for a new Amazon CloudFront web distribution. Use a combination of Cost Explorer and AWS Trusted advisor checks to monitor the operating costs and identify potential savings is incorrect. Although this new setup may work, it entails a higher cost to maintain a new set of Storage Optimized EC2 Instances along with EFS. There is also the added cost of maintaining your web tier, which is comprised of an Elastic Load Balancer with another Auto Scaling group of EC2 instances. It is also better to use S3 instead of EFS since you are only storing media files and not documents that require file locking or POSIX-compliant storage.
The option that says: Migrate the web application to AWS Elastic Beanstalk and move all media files to Amazon EFS for a durable and scalable storage. Set up an Amazon CloudFront distribution with EFS as the origin. Use a combination of Consolidated Billing and AWS Trusted advisor checks to monitor the operating costs and identify potential savings is incorrect. You cannot set EFS as the origin of your CloudFront web distribution. The scenario also doesn't mention the use of AWS Organization, which is why Consolidated Billing is not applicable.
References:
Check out this Amazon S3 Cheat Sheet:
Check out this Amazon S3, EBS, and EFS comparison:
Question 59: Skipped
An analytics company hosts its data processing application on its on-premises data center. Data scientists upload input files through a web portal, which are then stored in the company NAS. For every uploaded file, the web server sends a message to the processing server over a message queue. It can take up to 30 minutes to process each file on the NAS. During business hours, the number of files awaiting processing is significantly higher and it can take a while for the processing servers to catch up. The number of files significantly declines after business hours [-> ASG]. The company has tasked the solutions architect to migrate this workload to the AWS cloud.
Which of the following options is the recommended solution while being cost-effective?
Reconfigure the web application to publish messages to a new Amazon SQS queue. Write an AWS Lambda function to pull messages from the SQS queue and process the files. Trigger the Lambda function for every new message on the queue. Store the processed files on an Amazon S3 bucket.
Reconfigure the web application to publish messages to a new Amazon MQ queue. Create an auto-scaling group of Amazon EC2 instances to pull messages from the queue and process the files. Store the processed files on an Amazon EFS volume. Power off the EC2 when there are no messages left on the queue.
Reconfigure the web application to publish messages to a new Amazon SQS queue. Create an auto-scaling group of Amazon EC2 instances based on the SQS queue length to pull messages from the queue and process the files. Store the processed files on an Amazon S3 bucket.
(Correct)
Reconfigure the web application to publish messages to a new Amazon MQ queue. Write an AWS Lambda function to pull messages from the SQS queue and process the files. Trigger the Lambda function for every new message on the queue. Store the processed files in Amazon EFS
Explanation
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components.
There are some scenarios where you might think about scaling in response to activity in an Amazon SQS queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it's configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn't vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group.
There are three main parts to this configuration:
- An Auto Scaling group to manage EC2 instances for the purposes of processing messages from an SQS queue.
- A custom metric to send to Amazon CloudWatch that measures the number of messages in the queue per EC2 instance in the Auto Scaling group.
- A target tracking policy that configures your Auto Scaling group to scale based on the custom metric and a set target value. CloudWatch alarms invoke the scaling policy.
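A condensed sketch of the custom-metric and target-tracking parts is shown below; the queue URL, Auto Scaling group name, metric namespace, and the target of roughly 10 messages per instance are all assumptions.

```python
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/file-processing"  # placeholder
ASG_NAME = "file-processing-asg"                                                # placeholder

# Publish the "backlog per instance" custom metric.
backlog = int(
    sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
    )["Attributes"]["ApproximateNumberOfMessages"]
)
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[ASG_NAME]
)["AutoScalingGroups"][0]
in_service = max(1, sum(1 for i in group["Instances"] if i["LifecycleState"] == "InService"))
cloudwatch.put_metric_data(
    Namespace="FileProcessing",
    MetricData=[{"MetricName": "BacklogPerInstance", "Value": backlog / in_service, "Unit": "Count"}],
)

# Target tracking policy that keeps roughly 10 queued messages per instance.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="sqs-backlog-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "FileProcessing",
            "Statistic": "Average",
        },
        "TargetValue": 10.0,
    },
)
```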
Therefore, the correct answer is: Reconfigure the web application to publish messages to a new Amazon SQS queue. Create an auto-scaling group of Amazon EC2 instances based on the SQS queue length to pull messages from the queue and process the files. Store the processed files on an Amazon S3 bucket. This is a cost-effective solution as the EC2 instances scale out depending on the length of the SQS queue, and Amazon S3 is cheaper compared to Amazon EFS.
The option that says: Reconfigure the web application to publish messages to a new Amazon SQS queue. Write an AWS Lambda function to pull messages from the SQS queue and process the files. Trigger the Lambda function for every new message on the queue. Store the processed files on an Amazon S3 bucket is incorrect. The files may take up to 30 minutes to be processed. AWS Lambda has an execution limit of 15 minutes.
The option that says: Reconfigure the web application to publish messages to a new Amazon MQ queue. Create an auto-scaling group of Amazon EC2 instances to pull messages from the queue and process the files. Store the processed files on an Amazon EFS volume. Power off the EC2 when there are no messages left on the queue is incorrect. This is possible but not the most cost-effective solution. Amazon EFS is significantly more expensive than Amazon S3.
The option that says: Reconfigure the web application to publish messages to a new Amazon MQ queue. Write an AWS Lambda function to pull messages from the SQS queue and process the files. Trigger the Lambda function for every new message on the queue. Store the processed files in Amazon EFS is incorrect. The files may take up to 30 minutes to be processed. AWS Lambda has an execution limit of 15 minutes. Amazon EFS is significantly more expensive than Amazon S3.
References:
Check out these Amazon EC2 and Amazon SQS Cheat Sheets:
Question 61: Skipped
A multinational consumer goods company is currently using a VMware vCenter Server to manage its virtual machines, multiple ESXi hosts, and all dependent components from a single centralized location. To save costs and to take advantage of the benefits of cloud computing, the company decided to move its virtual machines to AWS. The Solutions Architect is required to generate new AMIs of the existing virtual machines which can then be launched as EC2 instances in the company VPC.
Which combination of steps should the Solutions Architect do to properly execute the cloud migration? (Select TWO.)
Use Serverless Application Model (SAM) to migrate the virtual machines (VMs) to AWS and automatically launch an Amazon ECS Cluster to host the VMs.
Use the AWS Application Migration Service to migrate your on-premises workloads to the AWS cloud.
(Correct)
Create an AWS CloudFormation template that mirrors the on-premises virtualized environment. Deploy the stack to the AWS cloud.
Establish a Direct Connect connection between your data center and your VPC. Use AWS Service Catalog to centrally manage all your IT services and to quickly migrate virtual machines to your virtual private cloud.
Install the AWS Replication Agent in your on-premises virtualization environment.
(Correct)
Explanation
AWS Application Migration Service (MGN) is a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of migrating applications to AWS. It enables companies to lift and shift a large number of physical, virtual, or cloud servers without compatibility issues, performance disruption, or long cutover windows.
MGN replicates source servers into your AWS account. When you’re ready, it automatically converts and launches your servers on AWS so you can quickly benefit from the cost savings, productivity, resilience, and agility of the Cloud.
Once your applications are running on AWS, you can leverage AWS services and capabilities to quickly and easily re-platform or refactor those applications – which makes lift-and-shift a fast route to modernization.
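As a small monitoring sketch, once the AWS Replication Agent is installed on the vCenter-managed VMs, the replication status can be checked with boto3 before launching test and cutover instances; the region is a placeholder, and the commented-out calls only indicate the next steps.

```python
import boto3

mgn = boto3.client("mgn", region_name="us-east-1")

# List the source servers that the AWS Replication Agent has registered.
ready = []
for server in mgn.describe_source_servers(filters={})["items"]:
    state = server["dataReplicationInfo"]["dataReplicationState"]
    print(server["sourceServerID"], state)
    if state == "CONTINUOUS":
        ready.append(server["sourceServerID"])

# Once replication is continuous, launch test instances, then cut over:
# mgn.start_test(sourceServerIDs=ready)
# mgn.start_cutover(sourceServerIDs=ready)
```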
Therefore, the correct answers are:
- Use the AWS Application Migration Service to migrate your on-premises workloads to the AWS cloud.
- Install the AWS Replication Agent in your on-premises virtualization environment.
The option that says: Use Serverless Application Model (SAM) to migrate the virtual machines (VMs) to AWS and automatically launch an Amazon ECS Cluster to host the VMs is incorrect because the Serverless Application Model (SAM) service is primarily used to build serverless applications on AWS, and not to migrate virtual machines from your on-premises data center.
The option that says: Create an AWS CloudFormation template that mirrors the on-premises virtualized environment. Deploy the stack to the AWS cloud is incorrect. This is possible; however, this will create new AMIs that are from AWS, not from the existing VMs. CloudFormation is ideal for creating and deploying new environments in the AWS Cloud.
The option that says: Establish a Direct Connect connection between your data center and your VPC. Use AWS Service Catalog to centrally manage all your IT services and to quickly migrate virtual machines to your virtual private cloud is incorrect. The AWS Service Catalog is primarily used to allow organizations to create and manage catalogs of IT services that are approved for use on AWS. It enables you to centrally manage commonly deployed IT services and helps you achieve consistent governance and meet your compliance requirements while enabling users to quickly deploy only the approved IT services they need. However, this service is not suitable for migrating your virtual machines.
References:
Check out this AWS Application Migration Service Cheat Sheet: