Accelerate Workload Migration and Modernization (20%)
Question 12: Skipped
A company provides big data services to enterprise clients around the globe. One of the clients has 60 TB of raw data in its on-premises Oracle data warehouse that must be migrated to Amazon Redshift. The database receives minor updates daily, while major updates are scheduled at the end of every month. The migration must be completed within the roughly 30 days remaining before the next major update is applied to the Redshift database. The company can only allocate 50 Mbps of Internet bandwidth for this activity to avoid impacting business operations.
Which of the following actions will satisfy the migration requirements of the company while keeping the costs low?
Create an AWS Snowball import job to request for a Snowball Edge device. Use the AWS Schema Conversion Tool (SCT) to process the on-premises data warehouse and load it to the Snowball Edge device. Install the extraction agent on a separate on-premises server and register it with AWS SCT. Once the Snowball Edge imports data to the S3 bucket, use AWS SCT to migrate the data to Amazon Redshift. Configure a local task and AWS DMS task to replicate the ongoing updates to the data warehouse. Monitor and verify that the data migration is complete.
(Correct)
Since you have a 30-day window for migration, configure VPN connectivity between AWS and the company's data center by provisioning a 1 Gbps AWS Direct Connect connection. Launch an Oracle Real Application Clusters (RAC) database on an EC2 instance and set it up to fetch and synchronize the data from the on-premises Oracle database. Once replication is complete, create an AWS DMS task on an AWS SCT project to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over.
Create an AWS Snowball Edge job using the AWS Snowball console. Export all data from the Oracle data warehouse to the Snowball Edge device. Once the Snowball device is returned to Amazon and data is imported to an S3 bucket, create an Oracle RDS instance to import the data. Create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Copy the missing daily updates from Oracle in the data center to the RDS for Oracle database over the Internet. Monitor and verify if the data migration is complete before the cut-over.
Create a new Oracle Database on Amazon RDS. Configure Site-to-Site VPN connection from the on-premises data center to the Amazon VPC. Configure replication from the on-premises database to Amazon RDS. Once replication is complete, create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over.
Explanation
You can use an AWS SCT agent to extract data from your on-premises data warehouse and migrate it to Amazon Redshift. The agent extracts your data and uploads the data to either Amazon S3 or, for large-scale migrations, an AWS Snowball Edge device. You can then use AWS SCT to copy the data to Amazon Redshift.
Large-scale data migrations can include many terabytes of information and can be slowed by network performance and by the sheer amount of data that has to be moved. AWS Snowball Edge is an AWS service you can use to transfer data to the cloud at faster-than-network speeds using an AWS-owned appliance. An AWS Snowball Edge device can hold up to 100 TB of data. It uses 256-bit encryption and an industry-standard Trusted Platform Module (TPM) to ensure both security and full chain-of-custody for your data. AWS SCT works with AWS Snowball Edge devices.
When you use AWS SCT and an AWS Snowball Edge device, you migrate your data in two stages.
First, you use the AWS SCT to process the data locally and then move that data to the AWS Snowball Edge device. You then send the device to AWS using the AWS Snowball Edge process, and then AWS automatically loads the data into an Amazon S3 bucket.
Next, when the data is available on Amazon S3, you use AWS SCT to migrate the data to Amazon Redshift. Data extraction agents can work in the background while AWS SCT is closed. You manage your extraction agents by using AWS SCT. The extraction agents act as listeners. When they receive instructions from AWS SCT, they extract data from your data warehouse.
SCT -> Snowball Edge -> S3 -> SCT -> Redshift
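For context, one way the Snowball Edge import job itself might be created programmatically is sketched below with boto3. This is only a sketch under stated assumptions: all ARNs, the address ID, and the bucket name are placeholders, and in practice the job is typically created from the AWS Snow Family console while AWS SCT and its extraction agents handle the actual data loading.

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

# All ARNs and IDs below are hypothetical placeholders.
response = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE",
    SnowballCapacityPreference="T100",      # 100 TB Snowball Edge device
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-dw-staging-bucket"}
        ]
    },
    AddressId="ADID-example",               # shipping address registered in the console
    RoleARN="arn:aws:iam::111122223333:role/SnowballImportRole",
    KmsKeyARN="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    Description="60 TB Oracle DW extract staged by AWS SCT extraction agents",
    ShippingOption="SECOND_DAY",
)
print(response["JobId"])
```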
Therefore, the correct answer is: Create an AWS Snowball import job to request for a Snowball Edge device. Use the AWS Schema Conversion Tool (SCT) to process the on-premises data warehouse and load it to the Snowball Edge device. Install the extraction agent on a separate on-premises server and register it with AWS SCT. Once the Snowball Edge imports data to the S3 bucket, use AWS SCT to migrate the data to Amazon Redshift. Configure a local task and AWS DMS task to replicate the ongoing updates to the data warehouse. Monitor and verify that the data migration is complete.
The option that says: Create a new Oracle Database on Amazon RDS. Configure Site-to-Site VPN connection from the on-premises data center to the Amazon VPC. Configure replication from the on-premises database to Amazon RDS. Once replication is complete, create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over is incorrect. Replicating 60 TB of data over the public Internet would take far longer than the 30-day migration window. It is also stated in the scenario that the company can only allocate 50 Mbps of Internet bandwidth for the migration activity, and sending the data over that link could affect business operations.
It would take roughly 111 days to migrate 60 TB of data over a 50 Mbps link:
60 TB ≈ 60,000,000 MB ≈ 480,000,000 megabits
480,000,000 megabits ÷ 50 Mbps = 9,600,000 seconds ≈ 2,667 hours ≈ 111 days
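The same estimate in a few lines of Python (decimal units, ignoring protocol overhead):

```python
# Rough transfer-time estimate for 60 TB over a 50 Mbps link.
data_bits = 60 * 10**12 * 8          # 60 TB expressed in bits
link_bps = 50 * 10**6                # 50 Mbps in bits per second

seconds = data_bits / link_bps
print(f"{seconds:,.0f} s ≈ {seconds/3600:,.0f} h ≈ {seconds/86400:.0f} days")
# 9,600,000 s ≈ 2,667 h ≈ 111 days
```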
The option that says: Create an AWS Snowball Edge job using the AWS Snowball console. Export all data from the Oracle data warehouse to the Snowball Edge device. Once the Snowball device is returned to Amazon and data is imported to an S3 bucket, create an Oracle RDS instance to import the data. Create an AWS Schema Conversion Tool (SCT) project with AWS DMS task to migrate the Oracle database to Amazon Redshift. Copy the missing daily updates from Oracle in the data center to the RDS for Oracle database over the internet. Monitor and verify if the data migration is complete before the cut-over is incorrect. You need to configure the data extraction agent first on your on-premises server. In addition, you don't need the data to be imported and exported via Amazon RDS. AWS DMS can directly migrate the data to Amazon Redshift.
The option that says: Since you have a 30-day window for migration, configure VPN connectivity between AWS and the company's data center by provisioning a 1 Gbps AWS Direct Connect connection. Launch an Oracle Real Application Clusters (RAC) database on an EC2 instance and set it up to fetch and synchronize the data from the on-premises Oracle database. Once replication is complete, create an AWS DMS task on an AWS SCT project to migrate the Oracle database to Amazon Redshift. Monitor and verify if the data migration is complete before the cut-over is incorrect. Although this is possible, the company wants to keep the cost low. Using a Direct Connect connection for a one-time migration is not a cost-effective solution.
References:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 23: Incorrect
A company runs a Flight Deals web application which is currently hosted on their on-premises data center. The website hosts high-resolution photos of top tourist destinations in the world and uses a third-party payment platform to accept payments. Recently, the company heavily invested in their global marketing campaign and there is a high probability that the incoming traffic to their Flight Deals website will increase in the coming days. Due to a tight deadline, the company does not have the time to fully migrate the website to the AWS cloud. A set of security rules that block common attack patterns, such as SQL injection and cross-site scripting should also be implemented to improve website security.
Which of the following options will maintain the website's functionality despite the massive amount of incoming traffic?
Use CloudFront to cache and distribute the high resolution images and other static assets of the website. Deploy AWS WAF on the Amazon CloudFront distribution to protect the website from common web attacks.
(Correct)
Use the AWS Server Migration Service to easily migrate the website from your on-premises data center to your VPC. Create an Auto Scaling group to automatically scale the web tier based on the incoming traffic. Deploy AWS WAF on the Amazon CloudFront distribution to protect the website from common web attacks.
Create and configure an S3 bucket as a static website hosting. Move the web domain of the website from your on-premises data center to Route 53 then route the newly created S3 bucket as the origin. Enable Amazon S3 server-side encryption with AWS Key Management Service managed keys.
(Incorrect)
Generate an AMI based on the existing Flight Deals website. Launch the AMI to a fleet of EC2 instances with Auto Scaling group enabled, for it to automatically scale up or scale down based on the incoming traffic. Place these EC2 instances behind an ALB which can balance traffic between the web servers in the on-premises data center and the web servers hosted in AWS.
Explanation
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. You can get started quickly using Managed Rules for AWS WAF, a pre-configured set of rules managed by AWS or AWS Marketplace Sellers. The Managed Rules for WAF address issues like the OWASP Top 10 security risks. These rules are regularly updated as new issues emerge. AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of security rules.
AWS WAF is easy to deploy and protect applications deployed on either Amazon CloudFront as part of your CDN solution, the Application Load Balancer that fronts all your origin servers, or Amazon API Gateway for your APIs. There is no additional software to deploy, DNS configuration, SSL/TLS certificate to manage, or need for a reverse proxy setup. With AWS Firewall Manager integration, you can centrally define and manage your rules, and reuse them across all the web applications that you need to protect.
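As a rough illustration of this deployment model, the boto3 sketch below creates a WAFv2 web ACL in the CLOUDFRONT scope with one AWS managed rule group. The ACL and metric names are made-up placeholders; the returned ARN would then be attached to the CloudFront distribution as its web ACL.

```python
import boto3

# Web ACLs for CloudFront must be created in us-east-1 with Scope="CLOUDFRONT".
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="flight-deals-web-acl",            # hypothetical name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "AWSManagedCommonRules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    # Baseline managed rule group; an SQLi-specific group
                    # (AWSManagedRulesSQLiRuleSet) can be added the same way.
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "CommonRules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "FlightDealsWebACL",
    },
)
print(acl["Summary"]["ARN"])  # associate this ARN with the CloudFront distribution
```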
Hence, the option that says: Use CloudFront to cache and distribute the high resolution images and other static assets of the website. Deploy AWS WAF on the Amazon CloudFront distribution to protect the website from common web attacks is correct because CloudFront will provide the scalability the website needs without doing major infrastructure changes. Take note that the website has a lot of high-resolution images which can easily be cached using CloudFront to alleviate the massive incoming traffic going to the on-premises web server and also provide a faster page load time for the web visitors.
The option that says: Use the AWS Server Migration Service to easily migrate the website from your on-premises data center to your VPC. Create an Auto Scaling group to automatically scale the web tier based on the incoming traffic. Deploy AWS WAF on the Amazon CloudFront distribution to protect the website from common web attacks is incorrect as migrating to AWS would be time-consuming compared with simply using CloudFront. Although this option can provide a more scalable solution, the scenario says that the company does not have ample time to do the migration.
The option that says: Create and configure an S3 bucket as a static website hosting. Move the web domain of the website from your on-premises data center to Route53 then route the newly created S3 bucket as the origin. Enable Amazon S3 server-side encryption with AWS Key Management Service managed keys is incorrect because the website is a dynamic website that accepts payments and bookings. Migrating your web domain to Route 53 may also take some time.
The option that says: Generate an AMI based on the existing Flight Deals website. Launch the AMI to a fleet of EC2 instances with Auto Scaling group enabled, for it to automatically scale up or scale down based on the incoming traffic. Place these EC2 instances behind an ALB which can balance traffic between the web servers in the on-premises data center and the web servers hosted in AWS is incorrect because it didn't mention any existing AWS Direct Connect or VPN connection. Although an Application Load Balancer can load balance the traffic between the EC2 instances in AWS Cloud and web servers located in the on-premises data center, your systems should be connected via Direct Connect or VPN connection first. In addition, the application seems to be used around the world because the company launched a global marketing campaign. Hence, CloudFront is a more suitable option for this scenario.
References:
Check out this Amazon CloudFront Cheat Sheet:
Question 25: Correct
A company is hosting its production environment on its on-premises servers. Most of the applications are packaged as Docker containers that are manually run on self-managed virtual machines. The web servers use the latest commercial Oracle Java SE suite, which costs the company thousands of dollars in licensing fees. The MySQL databases are installed on separate servers configured in a “source-replica” setup for high availability. The company wants to migrate the whole environment to AWS Cloud to take advantage of its flexibility and agility, as well as use OpenJDK to save licensing costs without major changes in its applications.
Which of the following application migration strategies meet the above requirement?
Re-platform the environment on the AWS Cloud platform by running the Docker containers on Amazon ECS. Test the new OpenJDK Docker containers and upload them on Amazon Elastic Container Registry. Migrate the MySQL database to Amazon RDS using AWS Database Migration Service.
(Correct)
Re-factor/re-architect the environment on AWS Cloud by converting the Docker containers to run on AWS Lambda Functions. Convert the MySQL database to Amazon DynamoDB using the AWS Schema Conversion Tool (AWS SCT) to save on costs.
Re-host the environment on the AWS Cloud platform by creating EC2 instances that mirror the current web servers and database servers. Host the Docker instances on Amazon EC2 and test the new OpenJDK Docker containers on these instances. Create a dump of the on-premises MySQL databases and upload it to an Amazon S3 bucket. Launch a new Amazon EC2 instance with a MySQL database and import the data from Amazon S3.
Re-platform the environment on the AWS Cloud platform by deploying the Docker containers on AWS App Runner to reduce operational overhead. Test the new OpenJDK Docker containers and upload them on Amazon Elastic Container Registry (ECR). Convert the MySQL database to Amazon DynamoDB using the AWS Schema Conversion Tool (AWS SCT) to save on costs.
Explanation
The six most common application migration strategies are:
Rehosting — Otherwise known as “lift-and-shift”. Many early cloud projects gravitate toward net new development using cloud-native capabilities, but in a large legacy migration scenario where the organization is looking to scale its migration quickly to meet a business case, applications can be rehosted. Most rehosting can be automated with tools (e.g., CloudEndure Migration, AWS VM Import/Export), although some teams prefer to do it manually as they learn how to apply their legacy systems to the new cloud platform.
Replatforming — Sometimes, this is called “lift-tinker-and-shift.” Here you might make a few cloud (or other) optimizations in order to achieve some tangible benefit, but you aren’t otherwise changing the core architecture of the application. You may be looking to reduce the amount of time you spend managing database instances by migrating to a database-as-a-service platform like Amazon Relational Database Service (Amazon RDS) or migrating your application to a fully managed platform like AWS Elastic Beanstalk.
Repurchasing — Moving to a different product. Repurchasing is a move to a SaaS platform. Moving a CRM to Salesforce.com, an HR system to Workday, a CMS to Drupal, etc.
Refactoring / Re-architecting — Re-imagining how the application is architected and developed, typically using cloud-native features. This is typically driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment. For example, migrating from a monolithic architecture to a service-oriented (or server-less) architecture to boost agility.
Retire — This strategy basically means: "Get rid of." Once you’ve discovered everything in your environment, you might ask each functional area who owns each application and see that some of the applications are no longer used. You can save costs by retiring these applications.
Retain — Usually this means “revisit” or do nothing (for now). Maybe you aren’t ready to prioritize an application that was recently upgraded or are otherwise not inclined to migrate some applications. You can retain these applications and revisit your migration strategy.
Therefore, the correct answer is: Re-platform the environment on the AWS Cloud platform by running the Docker containers on Amazon ECS. Test the new OpenJDK Docker containers and upload them on Amazon Elastic Container Registry. Migrate the MySQL database to Amazon RDS using AWS Database Migration Service.
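To make the re-platform step more concrete, here is a minimal boto3 sketch that creates an ECR repository and registers an ECS (Fargate) task definition for the OpenJDK-based image. The repository, family, and role names are illustrative only, and the actual image build and push happen outside this snippet.

```python
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")
ecs = boto3.client("ecs", region_name="us-east-1")

# Hypothetical repository name; build and push the OpenJDK-based image to this URI.
repo = ecr.create_repository(repositoryName="webapp-openjdk")["repository"]["repositoryUri"]

ecs.register_task_definition(
    family="webapp-openjdk",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "webapp",
            "image": f"{repo}:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)
```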
The option that says: Re-host the environment on the AWS Cloud platform by creating EC2 instances that mirror the current web servers and database servers. Host the Docker instances on Amazon EC2 and test the new OpenJDK Docker images on these instances. Create a dump of the on-premises MySQL databases and upload it to an Amazon S3 bucket. Launch a new Amazon EC2 instance with a MySQL database and import the data from Amazon S3 is incorrect. Although this is possible, simply re-hosting your applications by mirroring your current on-premises setup does not take advantage of the cloud’s elasticity and agility. A better approach is to use Amazon ECS to run the Docker containers and migrate the MySQL database to Amazon RDS.
The option that says: Re-factor/re-architect the environment on AWS Cloud by converting the Docker containers to run on AWS Lambda Functions. Convert the MySQL database to Amazon DynamoDB using the AWS Schema Conversion Tool (AWS SCT) to save on costs is incorrect because this solution requires major changes on the current application to execute successfully. In addition, there is nothing mentioned in the scenario that warrants the conversion of the MySQL database to Amazon DynamoDB.
The option that says: Re-platform the environment on the AWS Cloud platform by deploying the Docker containers on AWS App Runner to reduce operational overhead. Test the new OpenJDK Docker containers and upload them on Amazon Elastic Container Registry (ECR). Convert the MySQL database to Amazon DynamoDB using the AWS Schema Conversion Tool (AWS SCT) to save on costs is incorrect. It is possible to use AWS App Runner to deploy highly available containerized workloads. However, there is nothing mentioned in the scenario that warrants the conversion of the MySQL database to Amazon DynamoDB.
References:
AWS Migration Strategies Cheat Sheet:
Question 26: Correct
A company has several virtual machines on its on-premises data center hosting its three-tier web application. The company wants to migrate the application to AWS to take advantage of the benefits of cloud computing. The following are the company requirements for the migration process:
- The virtual machine images from the on-premises data center must be imported to AWS.
- The changes on the on-premises servers must be synchronized to the AWS servers until the production cutover is completed.
- Have minimal downtime during the production cutover.
- The root volumes and data volumes (containing Terabytes of data) of the VMs must be migrated to AWS.
- The migration solution must have minimal operational overhead.
Which of the following options is the recommended solution to meet the company requirements?
Create a job on AWS Application Migration Service (MGN) to migrate the virtual machines to AWS. Install the replication agent on each application tier to sync the changes from the on-premises environment to AWS. Launch Amazon EC2 instances based on the replicated VM from AWS MGN. After successful testing, perform a cutover and launch new instances based on the updated AMIs.
(Correct)
Create a job on AWS Application Migration Service (MGN) to migrate the root volumes of the virtual machines to AWS. Import the data volumes using the AWS CLI import-snapshot command. Launch Amazon EC2 instances based on the images created from AWS MGN and attach the imported data volumes. After successful testing, perform a final replication before the cutover. Launch new instances based on the updated AMIs and attach the corresponding data volumes.
Leverage both AWS Application Discovery Service and AWS Migration Hub to group the on-premises VMs as an application. Write an AWS CLI script that uses VM Import/Export to import the VMs as AMIs. Schedule the script to run at regular intervals to synchronize the changes from the on-premises environment to AWS. Launch Amazon EC2 instances based on the images created from VM Import/Export. After successful testing, perform a final virtual machine import before the cutover. Launch new instances based on the updated AMIs.
Write an AWS CLI script that uses VM Import/Export to migrate the virtual machines. Schedule the script to run at regular intervals to synchronize the changes from the on-premises environment to AWS. Launch Amazon EC2 instances based on the images created from VM Import/Export. After successful testing, re-run the script to perform a final replication before the cutover. Launch new instances based on the updated AMIs.
Explanation
AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automatically converting your source servers to run natively on AWS. It also simplifies application modernization with built-in, post-launch optimization options.
AWS Application Migration Service (MGN) is a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of migrating applications to AWS. It enables companies to lift and shift a large number of physical, virtual, or cloud servers without compatibility issues, performance disruption, or long cutover windows.
MGN replicates source servers into your AWS account. When you’re ready, it automatically converts and launches your servers on AWS so you can quickly benefit from the cost savings, productivity, resilience, and agility of the Cloud. Once your applications are running on AWS, you can leverage AWS services and capabilities to quickly and easily re-platform or refactor those applications – which makes lift-and-shift a fast route to modernization.
The first setup step for Application Migration Service is creating the Replication Settings template. Add source servers to Application Migration Service by installing the AWS Replication Agent (also referred to as "the Agent") on them. The Agent can be installed on both Linux and Windows servers. After you have added all of your source servers and configured their launch settings, you are ready to launch a Test instance. Once you have finalized the testing of all of your source servers, you are ready for cutover. The cutover will migrate your source servers to the Cutover instances on AWS.
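A minimal boto3 sketch of the test-then-cutover flow is shown below, assuming the AWS Replication Agent has already been installed on every source server and the initial sync has completed; in practice these actions are usually driven from the MGN console.

```python
import boto3

mgn = boto3.client("mgn", region_name="us-east-1")

# List source servers that have the AWS Replication Agent installed.
servers = mgn.describe_source_servers(filters={})["items"]
ids = [s["sourceServerID"] for s in servers]

# Launch test instances first; after validation, launch the cutover instances.
mgn.start_test(sourceServerIDs=ids)
# ...validate the test instances, then...
mgn.start_cutover(sourceServerIDs=ids)
```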
Therefore, the correct answer is: Create a job on AWS Application Migration Service (MGN) to migrate the virtual machines to AWS. Install the replication agent on each application tier to sync the changes from the on-premises environment to AWS. Launch Amazon EC2 instances based on the replicated VM from AWS MGN. After successful testing, perform a cutover and launch new instances based on the updated AMIs.
The option that says: Write an AWS CLI script that uses VM Import/Export to migrate the virtual machines. Schedule the script to run at regular intervals to synchronize the changes from the on-premises environment to AWS. Launch Amazon EC2 instances based on the images created from VM Import/Export. After successful testing, re-run the script to perform a final replication before the cutover. Launch new instances based on the updated AMIs is incorrect. AWS VM Import/Export does not support synching incremental changes from the on-premises environment to AWS. You will need to import the VM again as a whole after you make changes to the on-premises environment. This requires a lot of time and adds more operational overhead.
The option that says: Create a job on AWS Application Migration Service (MGN) to migrate the root volumes of the virtual machines to AWS. Import the data volumes using the AWS CLI import-snapshot command. Launch Amazon EC2 instances based on the images created from AWS MGN and attach the imported data volumes. After successful testing, perform a final replication before the cutover. Launch new instances based on the updated AMIs and attach the corresponding data volumes is incorrect. This may be possible but creating manual snapshots of the data volumes requires more operational overhead. AWS MGN supports syncing of attached volumes as well, so you don't have to migrate the data volumes manually.
The option that says: Leverage both AWS Application Discovery Service and AWS Migration Hub to group the on-premises VMs as an application. Write an AWS CLI script that uses VM Import/Export to import the VMs as AMIs. Schedule the script to run at regular intervals to synchronize the changes from the on-premises environment to AWS. Launch Amazon EC2 instances based on the images created from VM Import/Export. After successful testing, perform a final virtual machine import before the cutover. Launch new instances based on the updated AMIs is incorrect. The AWS Application Discovery Service plans migration projects by gathering information about the on-premises data center, and all discovered data are stored in your AWS Migration Hub. This is similar to the other option for VM Import/Export, as you will need to import the VM again as a whole after you make changes on the on-premises environment.
References:
Check out this AWS Server Migration Service Cheat Sheet:
AWS Migration Services Overview:
Question 67: Correct
A company uses Lightweight Directory Access Protocol (LDAP) for its employee authentication and authorization. The company plans to release a mobile app that can be installed on employees’ smartphones. The mobile application will allow users to have federated access to AWS resources. Due to strict security and compliance requirements, the mobile application must use a custom-built solution for user authentication. It must also use IAM roles for granting user permissions to AWS resources. The Solutions Architect was tasked with creating a solution that meets these requirements.
Which of the following options should the Solutions Architect implement to enable authentication and authorization for the application? (Select TWO.)
Build a custom SAML-compatible solution to handle authentication and authorization. Configure the solution to use LDAP for user authentication and use SAML assertion to perform authorization to the IAM identity provider.
(Correct)
Build a custom SAML-compatible solution for user authentication. Leverage AWS IAM Identity Center for authorizing access to AWS resources.
Build a custom LDAP connector using Amazon API Gateway with AWS Lambda function for user authentication. Use Amazon DynamoDB to store user authorization tokens. Write another Lambda function that will validate user authorization requests based on the token stored on DynamoDB.
Build a custom OpenID Connect-compatible solution in combination with AWS IAM Identity Center to create authentication and authorization functionality for the application.
Build a custom OpenID Connect-compatible solution for the user authentication functionality. Use Amazon Cognito Identity Pools for authorizing access to AWS resources.
(Correct)
Explanation
AWS supports identity federation with SAML 2.0 (Security Assertion Markup Language 2.0), an open standard that many identity providers (IdPs) use. This feature enables federated single sign-on (SSO), so users can log in to the AWS Management Console or call the AWS API operations without you having to create an IAM user for everyone in your organization. By using SAML, you can simplify the process of configuring federation with AWS because you can use the IdP's service instead of writing custom identity proxy code.
You can use a role to configure your SAML 2.0-compliant identity provider (IdP) and AWS to permit your federated users to access the AWS Management Console. The role grants the user permissions to carry out tasks in the console. The following diagram illustrates the flow for SAML-enabled single sign-on.
The diagram illustrates the following steps:
The user browses your organization's portal and selects the option to go to the AWS Management Console. In your organization, the portal is typically a function of your IdP that handles the exchange of trust between your organization and AWS.
The portal verifies the user's identity in your organization.
The portal generates a SAML authentication response that includes assertions that identify the user and include attributes about the user. The portal sends this response to the client's browser.
The client browser is redirected to the AWS single sign-on endpoint and posts the SAML assertion.
The endpoint requests temporary security credentials on behalf of the user and creates a console sign-in URL that uses those credentials.
AWS sends the sign-in URL back to the client as a redirect.
The client browser is redirected to the AWS Management Console. If the SAML authentication response includes attributes that map to multiple IAM roles, the user is first prompted to select the role for accessing the console.
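Step 5 above can be illustrated with STS AssumeRoleWithSAML. In the boto3 sketch below, the role and SAML provider ARNs are hypothetical, and saml_assertion stands in for the base64-encoded response produced by the custom IdP in step 3.

```python
import boto3

sts = boto3.client("sts")

# Placeholder for the base64-encoded SAML response from the custom IdP.
saml_assertion = "<base64-encoded SAML response>"

credentials = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/DataScientistFederated",   # hypothetical
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorpLDAP",   # hypothetical
    SAMLAssertion=saml_assertion,
    DurationSeconds=3600,
)["Credentials"]

# Temporary keys scoped to the IAM role mapped in the SAML assertion.
session = boto3.Session(
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
```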
Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a username and password or through a third party such as Facebook, Amazon, Google, or Apple. The two main components of Amazon Cognito are user pools and identity pools. User pools are user directories that provide sign-up and sign-in options for your app users. Identity pools enable you to grant your users access to other AWS services. You can use identity pools and user pools separately or together.
Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and have received a token.
OpenID Connect is an open standard for authentication that is supported by a number of login providers. Amazon Cognito supports the linking of identities with OpenID Connect providers that are configured through AWS Identity and Access Management. Once you've created an OpenID Connect provider in the IAM Console, you can associate it with an identity pool.
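As a rough sketch of this pattern, the boto3 example below creates an identity pool linked to an OIDC provider that is assumed to be already registered in IAM, then exchanges an OIDC token for temporary AWS credentials. The provider name, ARNs, and token are placeholders.

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# The OIDC provider must first be created in IAM; its ARN here is a placeholder.
pool = cognito.create_identity_pool(
    IdentityPoolName="EmployeeMobileAppPool",
    AllowUnauthenticatedIdentities=False,
    OpenIdConnectProviderARNs=[
        "arn:aws:iam::111122223333:oidc-provider/auth.example.corp"
    ],
)

# At runtime, the mobile app trades its OIDC ID token for temporary AWS credentials.
oidc_token = "<OIDC id token from the custom IdP>"
identity = cognito.get_id(
    IdentityPoolId=pool["IdentityPoolId"],
    Logins={"auth.example.corp": oidc_token},
)
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"auth.example.corp": oidc_token},
)["Credentials"]
```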
The option that says: Build a custom SAML-compatible solution to handle authentication and authorization. Configure the solution to use LDAP for user authentication and use SAML assertion to perform authorization to the IAM identity provider is correct. The requirement is to use a custom-built solution for user authentication, and this can use the company LDAP system for authentication. The SAML assertion is also needed to get authorization tokens from the IAM identity provider that will grant IAM roles to users that wish to access AWS resources.
The option that says: Build a custom OpenID Connect-compatible solution for the user authentication functionality. Use Amazon Cognito Identity Pools for authorizing access to AWS resources is correct. The custom OpenID Connect-compatible solution will allow users to log in from their mobile application much like a single sign-on functionality. Amazon Cognito Identity Pool will provide temporary tokens to federated users for accessing AWS resources.
The option that says: Build a custom SAML-compatible solution for user authentication. Leverage AWS IAM Identity Center for authorizing access to AWS resources is incorrect. The requirement is to grant federated access from the mobile application. AWS IAM Identity Center supports single sign-on to business applications through web browsers only.
The option that says: Build a custom LDAP connector using Amazon API Gateway with AWS Lambda function for user authentication. Use Amazon DynamoDB to store user authorization tokens. Write another Lambda function that will validate user authorization requests based on the token stored on DynamoDB is incorrect. It is not recommended to store authorization tokens permanently in DynamoDB tables. These tokens should be generated upon user authentication and kept in DynamoDB only temporarily, for a fixed session length.
The option that says: Build a custom OpenID Connect-compatible solution in combination with AWS IAM Identity Center to create authentication and authorization functionality for the application is incorrect. AWS IAM Identity Center supports only SAML 2.0–based applications so an OpenID Connect-compatible solution will not work for this scenario.
References:
AWS Identity Services Overview:
Check out these Amazon Cognito Cheat Sheets:
Question 70: Correct
A company has recently adopted a hybrid cloud architecture which requires them to migrate their databases from their on-premises data center to AWS. One of their applications requires a heterogeneous database migration in which they need to transform their on-premises Oracle database to PostgreSQL. A schema and code transformation should be done first in order to successfully migrate the data.
Which of the following options is the most suitable approach to migrate the database in AWS?
Migrate the database from your on-premises data center using the AWS Server Migration Service (SMS). Afterward, use the AWS Database Migration Service to convert and migrate your data to Amazon RDS for PostgreSQL database.
Use the AWS Schema Conversion Tool (SCT) to convert the source schema to match that of the target database. Migrate the data using the AWS Database Migration Service (DMS) from the source database to an Amazon RDS for PostgreSQL database.
(Correct)
Use a combination of AWS Data Pipeline service and CodeCommit to convert the source schema and code to match that of the target PostgreSQL database in RDS. Use AWS Batch with Spot EC2 instances to cost-effectively migrate the data from the source database to the target database in a batch process.
Use the AWS Serverless Application Model (SAM) service to transform your database to PostgreSQL using AWS Lambda functions. Migrate the database to RDS using the AWS Database Migration Service (DMS).
Explanation
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.
AWS Database Migration Service can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database. It can also move data between SQL, NoSQL, and text-based targets.
In heterogeneous database migrations, the source and target database engines are different, as in the case of Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft SQL Server to MySQL migrations. In this case, the schema structure, data types, and database code of the source and target databases can be quite different, requiring a schema and code transformation before the data migration starts. That makes heterogeneous migrations a two-step process.
First, use the AWS Schema Conversion Tool to convert the source schema and code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database. All the required data type conversions will automatically be done by the AWS Database Migration Service during the migration. The source database can be located in your own premises outside of AWS, running on an Amazon EC2 instance, or it can be an Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS.
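A simplified boto3 sketch of the second (DMS) step follows. Endpoint hosts, credentials, and the replication instance ARN are placeholders, and the schema conversion produced by AWS SCT is assumed to have already been applied to the target PostgreSQL database.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Source: on-premises Oracle; target: RDS for PostgreSQL (all values are placeholders).
source = dms.create_endpoint(
    EndpointIdentifier="onprem-oracle-src",
    EndpointType="source",
    EngineName="oracle",
    ServerName="oracle.onprem.example.corp",
    Port=1521,
    Username="dms_user",
    Password="example-password",
    DatabaseName="ORCL",
)
target = dms.create_endpoint(
    EndpointIdentifier="rds-postgres-tgt",
    EndpointType="target",
    EngineName="postgres",
    ServerName="example-db.us-east-1.rds.amazonaws.com",
    Port=5432,
    Username="dms_user",
    Password="example-password",
    DatabaseName="appdb",
)

# Full load plus ongoing replication (CDC) until cut-over.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgres",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:EXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```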
The option that says: Migrate the database from your on-premises data center using the AWS Server Migration Service (SMS). Afterwards, use the AWS Database Migration Service to convert and migrate your data to Amazon RDS for PostgreSQL database is incorrect because the AWS Server Migration Service (SMS) is primarily used to migrate virtual machines such as VMware vSphere and Windows Hyper-V. Although it is correct to use AWS Database Migration Service (DMS) to migrate the database, this option is still wrong because you should use the AWS Schema Conversion Tool to convert the source schema.
The option that says: Use a combination of AWS Data Pipeline service and CodeCommit to convert the source schema and code to match that of the target PostgreSQL database in RDS. Use AWS Batch with Spot EC2 instances to cost-effectively migrate the data from the source database to the target database in a batch process is incorrect. AWS Data Pipeline is primarily used to quickly and easily provision pipelines that remove the development and maintenance effort required to manage your daily data operations which lets you focus on generating insights from that data. Although you can use this to connect your data on your on-premises data center, it is not the most suitable service to use, compared with AWS DMS.
The option that says: Use the AWS Serverless Application Model (SAM) service to transform your database to PostgreSQL using AWS Lambda functions. Migrate the database to RDS using the AWS Database Migration Service (DMS) is incorrect. The Serverless Application Model (SAM) is an open-source framework that is primarily used to build serverless applications on AWS, and not for database migration.
References:
Check out these AWS Migration Cheat Sheets:
Question 71: Incorrect
A company is migrating an interactive car registration web system hosted on its on-premises network to AWS Cloud. The current architecture of the system consists of a single NGINX web server and a MySQL database running on a Fedora server, which both reside in their on-premises data center. For the new cloud architecture, a load balancer must be used to evenly distribute the incoming traffic to the application servers. Route 53 must be used for both domain registration and domain management.
In this scenario, what would be the most efficient way to transfer the web application to AWS?
1. Use the AWS Application Migration Service (MGN) to create an EC2 AMI of the NGINX web server.
2. Configure auto-scaling to launch in two Availability Zones.
3. Launch a multi-AZ MySQL Amazon RDS instance in one availability zone only.
4. Import the data into Amazon RDS from the latest MySQL backup.
5. Create an ELB to front your web servers
6. Use Amazon Route 53 and create an A record pointing to the elastic load balancer.
(Incorrect)
1. Launch two NGINX EC2 instances in two Availability Zones.
2. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository.
3. Migrate the database using the AWS Database Migration Service.
4. Create an ELB to front your web servers.
5. Use Route 53 and create an alias A record pointing to the ELB.
(Correct)
1. Use the AWS Application Discovery Service to migrate the NGINX web server.
2. Configure Auto Scaling to launch two web servers in two Availability Zones.
3. Launch a Multi-AZ MySQL Amazon Relational Database Service (RDS) instance in one Availability Zone only.
4. Import the data into Amazon RDS from the latest MySQL backup.
5. Use Amazon Route 53 to create a private hosted zone and point a non-alias A record to the ELB.
1. Export web files to an Amazon S3 bucket in one Availability Zone using AWS Migration Hub.
2. Run the website directly out of Amazon S3.
3. Migrate the database using the AWS Database Migration Service and AWS Schema Conversion Tool (AWS SCT).
4. Use Route 53 and create an alias record pointing to the ELB.
Explanation
This is a trick question that contains a lot of information to confuse you, especially if you don’t know the fundamental concepts of AWS. All options seem to be correct except for their last steps in setting up Route 53. The options that have a step to launch a multi-AZ MySQL Amazon RDS instance in one availability zone only are wrong since a Multi-AZ deployment configuration uses multiple Availability Zones.
To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias record that points to your load balancer. An alias record is a Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root domain, such as example.com, and for subdomains, such as www.example.com. (You can create CNAME records only for subdomains.) For EC2 instances, always use a Type A record without an alias. For ELB, CloudFront, and S3, always use a Type A record with an alias, and finally, for RDS, always use a CNAME record with no alias.
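For illustration, creating such an alias A record with boto3 could look like the sketch below. The hosted zone ID, record name, and load balancer values are placeholders; note that the alias target's HostedZoneId is the ELB's canonical zone ID for its region, not your own hosted zone's ID.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",              # your public hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    # The ELB's canonical hosted zone ID for its region (example value).
                    "HostedZoneId": "Z35SXDOTRQ7X7K",
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```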
Hence, the following option is the correct answer:
1. Launch two NGINX EC2 instances in two Availability Zones.
2. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository.
3. Migrate the database using the AWS Database Migration Service.
4. Create an ELB to front your web servers.
5. Use Route 53 and create an alias A record pointing to the ELB.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.
The following sets of options are incorrect because they are just using an A record without an Alias:
1. Use the AWS Application Migration Service (MGN) to create an EC2 AMI of the NGINX web server.
2. Configure auto-scaling to launch in two Availability Zones.
3. Launch a multi-AZ MySQL Amazon RDS instance in one availability zone only.
4. Import the data into Amazon RDS from the latest MySQL backup.
5. Create an ELB to front your web servers.
6. Use Amazon Route 53 and create an A record pointing to the elastic load balancer.
--
1. Use the AWS Application Discovery Service to migrate the NGINX web server.
2. Configure Auto Scaling to launch two web servers in two Availability Zones.
3. Launch a Multi-AZ MySQL Amazon Relational Database Service (RDS) instance in one Availability Zone only.
4. Import the data into Amazon RDS from the latest MySQL backup.
5. Use Amazon Route 53 to create a private hosted zone and point a non-alias A record to the ELB.
Take note as well that AWS Application Migration Service (MGN) is primarily a lift-and-shift (rehost) service for migrating physical, virtual, or cloud servers to AWS. In addition, the AWS Application Discovery Service simply helps you to plan migration projects by gathering information about your on-premises data centers, but it is not a migration service itself.
The following option is also incorrect because the web system that is being migrated is a non-static (dynamic) website, which cannot be hosted in S3:
1. Export web files to an Amazon S3 bucket in one Availability Zone using AWS Migration Hub.
2. Run the website directly out of Amazon S3.
3. Migrate the database using the AWS Database Migration Service and AWS Schema Conversion Tool (AWS SCT).
4. Use Route 53 and create an alias record pointing to the ELB.
References:
Check out this Amazon Route 53 Cheat Sheet:
Check out this AWS Database Migration Service Cheat Sheet:
AWS Migration Services Overview:
Question 73: Correct
A logistics company plans to host its web application on AWS to allow customers to track their shipments worldwide. The web application will have a multi-tier setup – Amazon EC2 instances for running the web and application layer, an Amazon S3 bucket for hosting the static content, and a NoSQL database. The company plans to provision the resources in the us-east-1 region. The company also wants to have a second site hosted in the us-west-1 region for disaster recovery. The second site must have the same copy of data as the primary site, and the failover should be as quick as possible when the primary region becomes unavailable. Failing back to the primary region should be done automatically once it becomes available again.
Which of the following solutions should the Solutions Architect implement to meet the company requirements?
Provision the same Auto Scaling group of EC2 instances for web and application tiers in both regions using AWS Service Catalog. Enable Amazon S3 cross-Region Replication on the S3 bucket to asynchronously replicate the contents to the secondary region. Ensure that Amazon Route 53 health check is enabled on the primary region and update the public DNS zone entry with the secondary region in case of an outage. For the database tier, create an Amazon RDS for MySQL and enable cross-region replication to create a read-replica on the secondary region.
Create the same resources of Auto Scaling group of EC2 instances for web and application tiers on both regions using AWS CloudFormation StackSets. Enable Amazon S3 cross-Region Replication on the S3 bucket to asynchronously replicate the contents to the secondary region. Create Amazon Route 53 DNS zone entries with a failover routing policy and set the us-west-1 region as the secondary site. For the database tier, create a DynamoDB global table spanning both regions.
(Correct)
Create the same resources of Auto Scaling group of EC2 instances for web and application tiers on both regions using AWS CloudFormation StackSets. Enable Amazon S3 cross-Region Replication on the S3 bucket to asynchronously replicate the contents to the secondary region. Create an Amazon CloudFront distribution. Set the S3 bucket as the origin for static files and multi-origins for the web and application tiers. For the database tier, create an Amazon DynamoDB table in each region and regularly backup to an Amazon S3 bucket.
Create the same resources of Auto Scaling group of EC2 instances for web and application tiers on both regions using AWS CloudFormation StackSets. Enable Amazon S3 cross-Region Replication on the S3 bucket to asynchronously replicate the contents to the secondary region. Create Amazon Route 53 DNS zone entries with a failover routing policy and set the us-west-1 region as the secondary site. For the database tier, create an Amazon Aurora global database spanning the two regions.
Explanation
AWS CloudFormation helps AWS customers implement an Infrastructure as Code model. Instead of setting up their environments and applications by hand, they build a template and use it to create all of the necessary resources, collectively known as a CloudFormation stack. This model removes opportunities for manual error, increases efficiency, and ensures consistent configurations over time.
With AWS CloudFormation StackSets you can define an AWS resource configuration in a CloudFormation template and then roll it out across multiple AWS accounts and/or Regions with a couple of clicks. You can use this to set up a baseline level of AWS functionality that addresses the cross-account and cross-region scenarios. Once you have set this up, you can easily expand coverage to additional accounts and regions.
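A minimal boto3 sketch of that roll-out, assuming the web/application tier template already exists as a file, might look like this (the account ID, stack set name, and file name are illustrative):

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Hypothetical template describing the web/app tier (Auto Scaling group, ALB, etc.).
with open("web-app-tier.yaml") as f:
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="shipment-tracker-web-tier",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Deploy the same stack to both the primary and DR regions in one call.
cfn.create_stack_instances(
    StackSetName="shipment-tracker-web-tier",
    Accounts=["111122223333"],
    Regions=["us-east-1", "us-west-1"],
)
```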
Amazon S3 Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. The objects may be replicated to a single destination bucket or multiple destination buckets. Destination buckets can be in different AWS Regions or within the same Region as the source bucket. Amazon S3 Cross-Region Replication (CRR) is used to copy objects across Amazon S3 buckets in different AWS Regions. CRR is helpful if you want to meet compliance requirements such as the need to have a copy of your data on another location.
Amazon DynamoDB global tables provide you with a fully managed, multi-region and multi-active database that delivers fast, local, read and write performance for massively scaled, global applications. Global tables replicate your DynamoDB tables automatically across your choice of AWS Regions. Global tables eliminate the difficult work of replicating data between Regions and resolving update conflicts, enabling you to focus on your application's business logic. In addition, global tables enable your applications to stay highly available even in the unlikely event of isolation or degradation of an entire Region.
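To illustrate, the boto3 sketch below creates a table in us-east-1 and then adds a us-west-1 replica using global tables version 2019.11.21; the table name and key schema are hypothetical.

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

ddb.create_table(
    TableName="Shipments",                         # hypothetical table
    AttributeDefinitions=[{"AttributeName": "ShipmentId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "ShipmentId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    # Global tables rely on DynamoDB Streams with new and old images.
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
ddb.get_waiter("table_exists").wait(TableName="Shipments")

# Add a replica in the DR region (global tables version 2019.11.21).
ddb.update_table(
    TableName="Shipments",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-1"}}],
)
```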
Therefore, the correct answer is: Create the same resources of Auto Scaling group of EC2 instances for web and application tiers on both regions using AWS CloudFormation StackSets. Enable Amazon S3 cross-Region Replication on the S3 bucket to asynchronously replicate the contents to the secondary region. Create Amazon Route 53 DNS zone entries with a failover routing policy and set the us-west-1 region as the secondary site. For the database tier, create a DynamoDB global table spanning both regions.
The option that says: Create the same resources of Auto Scaling group of EC2 instances for web and application tiers on both regions using AWS CloudFormation StackSets. Enable Amazon S3 cross-Region Replication on the S3 bucket to asynchronously replicate the contents to the secondary region. Create Amazon Route 53 DNS zone entries with a failover routing policy and set the us-west-1 region as the secondary site. For the database tier, create an Amazon Aurora global database spanning the two regions is incorrect. The application is designed for a NoSQL database so a DynamoDB global table is recommended for this, not an Amazon Aurora global database.
The option that says: Provision the same Auto Scaling group of EC2 instances for web and application tiers in both regions using AWS Service Catalog. Enable Amazon S3 cross-Region Replication on the S3 bucket to asynchronously replicate the contents to the secondary region. Ensure that Amazon Route 53 health check is enabled on the primary region and update the public DNS zone entry with the secondary region in case of an outage. For the database tier, create an Amazon RDS for MySQL and enable cross-region replication to create a read-replica on the secondary region is incorrect. You don’t have to manually update the public DNS zone entry with the secondary region; you just have to configure the failover routing policy in Route 53 to automatically fail over to the secondary site and back. MySQL is not recommended as the application is designed for a NoSQL database.
The option that says: Create the same resources of Auto Scaling group of EC2 instances for web and application tiers on both regions using AWS CloudFormation StackSets. Enable Amazon S3 cross-Region Replication on the S3 bucket to asynchronously replicate the contents to the secondary region. Create an Amazon CloudFront distribution. Set the S3 bucket as the origin for static files and multi-origins for the web and application tiers. For the database tier, create an Amazon DynamoDB table in each region and regularly backup to an Amazon S3 bucket is incorrect. You can’t reliably keep DynamoDB tables in two regions in sync using only backups stored in S3, because there will always be a delay between the backup and the restore. You should use DynamoDB global tables instead.
References:
Check out these Amazon S3, Amazon DynamoDB, and CloudFormation StackSet Cheat Sheets:
Question 1: Skipped
An analytics company plans to create a self-service solution that will provide a safe and cost-effective way for data scientists to access Amazon SageMaker on the company’s AWS accounts. The data scientists have limited knowledge of the AWS cloud, so the complex setup requirements for their ML models should not be exposed to them. The company wants the data scientists to be able to launch a Jupyter notebook instance as they need it. The data at rest on the storage volume of the notebook instance must be encrypted with a preconfigured AWS KMS key.
Which of the following solutions will meet the company requirements with the LEAST amount of operational overhead?
Write an AWS CloudFormation template that contains the AWS::SageMaker::NotebookInstance resource type to launch a Jupyter notebook instance with a preconfigured AWS KMS key. Create Mappings in the CloudFormation template to map simpler parameter names for instance sizes such as Small, Medium, and Large. Reference the URL of the notebook instance in the Outputs section of the template. Create a portfolio in AWS Service Catalog and upload the template to be shared with the IAM role of the data scientists.
(Correct)
Write an AWS CloudFormation template that contains the AWS::SageMaker::NotebookInstance resource type to launch a Jupyter notebook instance with a preconfigured AWS KMS key. In the Outputs section of the CloudFormation template, reference the URL of the notebook instance. Rename this template to be more user-friendly and upload it to a shared Amazon S3 bucket for distribution to the data scientists.
Create an Amazon S3 bucket with website hosting enabled. Create a simple form as a front-end website hosted on the S3 bucket that allows the data scientist to input their request for Jupyter notebook creation. Send the request to an Amazon API Gateway that will invoke an AWS Lambda function with an IAM role permission to create the Jupyter notebook instance with a preconfigured AWS KMS key. Have the Lambda function reply the URL of the notebook instance for display on the front-end website.
Create a self-service portal using AWS Proton and upload standardized service templates to Amazon S3. Add IAM permissions to the data scientist IAM group to use AWS Proton. Write a custom AWS CLI script that will take input parameters from the data scientist for the requested Jupyter notebook instance with the pre-configured AWS KMS key. Have the data scientists execute the script locally on their computers.
Explanation
AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage deployed IT services and your applications, resources, and metadata.
With AWS Service Catalog, you define your own catalog of AWS services and AWS Marketplace software and make them available for your organization. Then, end users can quickly discover and deploy IT services using a self-service portal.
Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so you don't have to manage servers. It also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own algorithms and frameworks, SageMaker offers flexible distributed training options that adjust to your specific workflows.
You can easily create a self-service, secured data science environment using Amazon SageMaker, AWS Service Catalog, and AWS Key Management Service (KMS). Using AWS Service Catalog, you can use a pre-configured AWS KMS key to encrypt data at rest on the machine learning (ML) storage volume that is attached to your notebook instance without ever exposing the complex, unnecessary details to data scientists. ML storage volume encryption is enforced by an AWS Service Catalog product that is pre-configured by centralized security and/or infrastructure teams.
Therefore, the correct answer is: Write an AWS CloudFormation template that contains the AWS::SageMaker::NotebookInstance resource type to launch a Jupyter notebook instance with a preconfigured AWS KMS key. Create Mappings on the CloudFormation to map simpler parameter names for instance sizes such as Small, Medium, Large. Reference the URL of the notebook instance on the Outputs section of the template. Create a portfolio in AWS Service Catalog and upload the template to be shared with the IAM role of the data scientists. This solution has less operational overhead because you just need to maintain a single template. Additionally, AWS Service Catalog allows end-users to quickly discover and deploy IT services using a self-service portal.
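To make the encryption requirement concrete, the following is a minimal boto3 sketch of the API call that the AWS::SageMaker::NotebookInstance resource ultimately maps to when the Service Catalog product is launched. The notebook name, IAM role ARN, and KMS key ARN are placeholders and not values taken from the scenario.

import boto3

# Minimal sketch: provision a SageMaker notebook instance whose ML storage
# volume is encrypted with a preconfigured KMS key. The role ARN and KMS
# key ARN below are placeholders.
sagemaker = boto3.client("sagemaker")

response = sagemaker.create_notebook_instance(
    NotebookInstanceName="data-scientist-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerNotebookRole",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
    VolumeSizeInGB=20,
    Tags=[{"Key": "Team", "Value": "DataScience"}],
)
print(response["NotebookInstanceArn"])

In the exam scenario, the data scientists never call this API themselves; the Service Catalog product wraps the equivalent CloudFormation resource so only the simplified parameters (Small, Medium, Large) are exposed.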
The option that says: Create an Amazon S3 bucket with website hosting enabled. Create a simple form as a front-end website hosted on the S3 bucket that allows the data scientist to input their request for Jupyter notebook creation. Send the request to an Amazon API Gateway that will invoke an AWS Lambda function with an IAM role permission to create the Jupyter notebook instance with a preconfigured AWS KMS key. Have the Lambda function reply the URL of the notebook instance for display on the front-end website is incorrect. Although this is possible, this requires a lot of operational overhead as you need to write a custom website and write a proper Lambda function that has the appropriate code to create the needed resources. Additionally, as the company has several accounts, you will need to create the Lambda function for each AWS account.
The option that says: Write an AWS CloudFormation template that contains the AWS::SageMaker::NotebookInstance resource type to launch a Jupyter notebook instance with a preconfigured AWS KMS key. On the Outputs section of the CloudFormation template, reference the URL of the notebook instance. Rename this template to be more user-friendly and upload it to a shared Amazon S3 bucket for distribution to the data scientists is incorrect. This is possible; however, it is not very user-friendly because the users need to download the appropriate CloudFormation template from Amazon S3 and then upload it to CloudFormation. They will then need to input the needed parameters for the creation of their Jupyter notebook instance.
The option that says: Create a self-service portal using AWS Proton and upload standardized service templates to Amazon S3. Add IAM permissions to the data scientist IAM group to use AWS Proton. Write a custom AWS CLI script that will take input parameters from the data scientist for the requested Jupyter notebook instance with the pre-configured AWS KMS key. Have the data scientists execute the script locally on their computers is incorrect. Although AWS Proton can provide a self-service experience with pre-approved infrastructure templates, it is designed for standardizing container and serverless application environments. Writing a custom AWS CLI script that the data scientists must run on their local machines adds unnecessary operational overhead compared to sharing an AWS Service Catalog product.
References:
Check out these Amazon SageMaker and AWS Service Catalog Cheat Sheets:
Question 28: Skipped
A media company in South Korea offers high-quality wildlife photos to its clients. Its photographers upload a large number of photographs to the company’s Amazon S3 bucket. Currently, the company is using a dedicated group of on-premises servers to process the photos and uses an open-source messaging system to deliver job information to the servers. After processing, the data would go to a tape library and be stored for long-term archival. The company decided to shift everything to AWS Cloud, and the solutions architect was tasked to implement the same existing infrastructure design and leverage AWS tools such as storage and messaging services to minimize cost.
Which of the following options is the recommended solution that will meet the requirement?
SQS will handle the job messages, while CloudWatch alarms will terminate any idle EC2 worker instances. After the data has been processed, change the storage class of your S3 objects to S3 Standard-IA.
Create an Auto-scaling group of spot instance workers that scale according to the queue depth in SQS to process job messages. After the data has been processed, transfer your S3 objects to Amazon Glacier.(Correct)
Initially change the storage class of the S3 objects to S3 Standard-IA. Then create an Auto-scaling group of spot instance workers that scale according to the queue depth in SQS to process job messages. After the data has been processed, transfer your S3 objects to Amazon S3 Standard-IA.
SNS will handle the passing of job messages, while CloudWatch alarms will terminate any idle spot worker instances. After the data has been processed, transfer your S3 objects to Amazon Glacier.
Explanation
There are some scenarios where you might think about scaling in response to activity in an Amazon SQS queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it's configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn't vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group.
In this scenario, the best option is the combination of SQS for job messaging and Amazon Glacier for archival storage.
Therefore, the correct answer is: Create an Auto-scaling group of spot instance workers that scale according to the queue depth in SQS to process job messages. After the data has been processed, transfer your S3 objects to Amazon Glacier. It uses SQS to process the messages, and it uses Glacier as the archival storage solution which is the cheapest storage option.
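As an illustration of queue-depth-based scaling, the sketch below publishes a "backlog per instance" custom metric derived from the SQS queue depth, which a target tracking scaling policy on the worker group could then use. The queue URL, Auto Scaling group name, and metric namespace are assumed placeholders, not values from the scenario.

import boto3

# Sketch: compute backlog-per-instance from the SQS queue depth and publish
# it as a custom CloudWatch metric for a target tracking scaling policy.
sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/photo-jobs"  # placeholder
ASG_NAME = "photo-workers"  # placeholder

attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

groups = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
running = max(len(groups["AutoScalingGroups"][0]["Instances"]), 1)

cloudwatch.put_metric_data(
    Namespace="PhotoProcessing",
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
        "Value": backlog / running,
        "Unit": "Count",
    }],
)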
The option that says: SQS will handle the job messages, while CloudWatch alarms will terminate any idle EC2 worker instances. After the data has been processed, change the storage class of your S3 objects to S3 Standard-IA is incorrect. S3 Standard-IA is not an archival storage class, and without an Auto Scaling group, terminating idle instances with CloudWatch alarms does not provide the scalable worker capacity needed to process the queue.
The option that says: Initially change the storage class of the S3 objects to S3 Standard-IA. Then create an Auto-scaling group of spot instance workers that scale according to the queue depth in SQS to process job messages. After the data has been processed, transfer your S3 objects to Amazon S3 Standard-IA is incorrect. S3 Standard-IA is not an archival storage class.
The option that says: SNS will handle the passing of job messages, while CloudWatch alarms will terminate any idle spot worker instances. After the data has been processed, transfer your S3 objects to Amazon Glacier is incorrect. Amazon Simple Notification Service is a push-based messaging service; it does not queue and retain messages for worker instances to poll and process.
References:
Check out these Amazon SQS and Amazon S3 Glacier Cheat Sheets:
Question 36: Skipped
A company has several NFS shares in its on-premises data center that contain millions of small log files totaling around 50 TB in size. The files in these NFS shares need to be migrated to an Amazon S3 bucket. To start the migration process, the solutions architect requested an AWS Snowball Edge device that will be used to transfer the files to Amazon S3. A file interface was configured on the Snowball Edge device and is connected to the corporate network. The Solutions Architect initiated the snowball cp command to start the copying process; however, the copying of data is significantly slower than expected.
Which of the following options are the likely cause of the slow transfer speed and the recommended solution?
Ingesting millions of files has saturated the processing power of the Snowball Edge. Request for another Snowball Edge device and cluster them together to increase the ingest throughput.
The file interface of the Snowball Edge has reached its throughput limit. Change the interface to an S3 Adapter instead for a significantly faster transfer speed.
The file interface of the Snowball Edge is limited by the network interface speed. Connect the device directly using a high-speed USB 3.0 interface instead to maximize the copying throughput.
This is due to encryption overhead when copying files to the Snowball Edge device. Open multiple sessions to the Snowball Edge device and initiate parallel copy jobs to improve the overall copying throughput.
(Correct)
Explanation
One of the major ways that you can improve the performance of an AWS Snowball Edge device is to speed up the transfer of data going to and from a device. In general, you can improve the transfer speed from your data source to the device in the following ways. The following list is ordered from largest to smallest positive impact on performance:
Perform multiple write operations at one time – To do this, run each command from multiple terminal windows on a computer with a network connection to a single AWS Snowball Edge device.
Transfer small files in batches – Each copy operation has some overhead because of encryption. To speed up the process, batch files together in a single archive. When you batch files together, they can be auto-extracted when they are imported into Amazon S3.
Write from multiple computers – A single AWS Snowball Edge device can be connected to many computers on a network. Each computer can connect to any of the three network interfaces at once.
Don't perform other operations on files during transfer – Renaming files during transfer, changing their metadata, or writing data to the files during a copy operation has a negative impact on transfer performance. AWS recommends that your files remain in a static state while you transfer them.
Reduce local network use – Your AWS Snowball Edge device communicates across your local network. So you can improve data transfer speeds by reducing other local network traffic between the AWS Snowball Edge device, the switch it's connected to, and the computer that hosts your data source.
Eliminate unnecessary hops – AWS recommends that you set up your AWS Snowball Edge device, your data source, and the computer running the terminal connection between them so that they're the only machines communicating across a single switch. Doing so can improve data transfer speeds.
For transferring small files, AWS also recommends transferring in batches. Each copy operation has some overhead because of encryption. To speed up the process of transferring small files to your AWS Snowball Edge device, you can batch them together in a single archive. When you batch files together, they can be auto-extracted when they are imported into Amazon S3, if they were batched in one of the supported archive formats.
Typically, files that are 1 MB or smaller should be included in batches. There's no hard limit on the number of files you can have in a batch, though AWS recommends that you limit your batches to about 10,000 files. Having more than 100,000 files in a batch can affect how quickly those files import into Amazon S3 after you return the device. AWS recommends that the total size of each batch be no larger than 100 GB. Batching files is a manual process, which you have to manage.
Therefore, the correct answer is: This is due to encryption overhead when copying files to the Snowball Edge device. Open multiple sessions to the Snowball Edge device and initiate parallel copy jobs to improve the overall copying throughput. Performing multiple copy operations in parallel to the Snowball Edge device has the biggest impact on improving your transfer speed.
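The sketch below shows one way to run several copy sessions in parallel instead of a single terminal session, which is what offsets the per-file encryption overhead. It assumes, purely for illustration, that the Snowball Edge file interface is mounted over NFS at a placeholder path and that the source directories shown exist.

import concurrent.futures
import pathlib
import subprocess

# Placeholder source directories and a placeholder NFS mount point for the
# Snowball Edge file interface.
SOURCE_DIRS = ["/nfs/logs/2023-01", "/nfs/logs/2023-02", "/nfs/logs/2023-03"]
SNOWBALL_MOUNT = "/mnt/snowball-bucket"

def copy_dir(src: str) -> int:
    # Each call is an independent copy session; any recursive copy tool works.
    dest = pathlib.Path(SNOWBALL_MOUNT) / pathlib.Path(src).name
    return subprocess.call(["cp", "-r", src, str(dest)])

# One worker per source directory simulates opening multiple terminal sessions.
with concurrent.futures.ThreadPoolExecutor(max_workers=len(SOURCE_DIRS)) as pool:
    results = list(pool.map(copy_dir, SOURCE_DIRS))

print("exit codes:", results)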
The option that says: Ingesting millions of files has saturated the processing power of the Snowball Edge. Request for another Snowball Edge device and cluster them together to increase the ingest throughput is incorrect. A Snowball Edge cluster has the benefits of increased durability and storage capacity. It does not improve the copy transfer speed.
The option that says: The file interface of the Snowball Edge has reached its throughput limit. Change the interface to an S3 Adapter instead for a significantly faster transfer speed is incorrect. An S3 Adapter is used to transfer data programmatically to and from the AWS Snowball Edge device using Amazon S3 REST API actions. The S3 adapter can be faster in some cases, but the slowness in this scenario is caused by the per-file encryption overhead of copying millions of small files, not by the file interface reaching its throughput limit.
The option that says: The file interface of the Snowball Edge is limited by the network interface speed. Connect the device directly using a high-speed USB 3.0 interface instead to maximize the copying throughput is incorrect. Although some revisions of USB 3.0 or USB 3.1 can support speeds of 5 Gbps to 10 Gbps, the network interfaces on the Snowball Edge support up to 100 Gbps, and the device is accessed over the network rather than through a USB data transfer interface. You can maximize throughput by issuing multiple copy commands to the Snowball device in parallel.
References:
Check out this AWS Snowball Edge Cheat Sheet:
Question 52: Skipped
A company is running 150 virtual machines (VMs) using 40 TB of storage on its on-premises data center. The company wants to migrate its whole environment to AWS within the next three months. The VMs are mainly used during business hours only, so they can be taken offline, but some are mission-critical, which means that the downtime needs to be minimized. Since upgrading the Internet connection is quite costly for the company, the on-premises network administrator provisioned only a 12 Mbps Internet bandwidth for the migration. The Solutions Architect must design a cost-effective plan to complete the migration within the target time frame.
Which of the following options should the Solutions Architect implement to fulfill the requirements?
Create an export of your virtual machines during out of office hours. Use the AWS Transfer service to securely upload the VMs to Amazon S3 using the SFTP protocol. Import the VMs into Amazon EC2 instances using the VM Import/Export service.
Use AWS Application Migration Service to migrate the mission-critical virtual machines to AWS. Request an AWS Snowball device and transfer the exported VMs to it. Once the VMs are on Amazon S3, import the VMs into Amazon EC2 instances using the VM Import/Export service.
(Correct)
Deploy the AWS Agentless Discovery connector on the company VMware vCenter to assess each application. With the information gathered, refactor each application to run on AWS services or using AWS Marketplace solutions.
Request for a 1 Gbps AWS Direct Connect connection from the on-premises data center to AWS. Create a private virtual interface on Direct Connect. Migrate the virtual machines to AWS using the AWS Application Migration Service (MGN).
Explanation
AWS Application Migration Service (MGN) is a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of migrating applications to AWS. It enables companies to lift and shift a large number of physical, virtual, or cloud servers without compatibility issues, performance disruption, or long cutover windows. MGN replicates source servers into your AWS account. When you’re ready, it automatically converts and launches your servers on AWS so you can quickly benefit from the cost savings, productivity, resilience, and agility of the Cloud. Once your applications are running on AWS, you can leverage AWS services and capabilities to quickly and easily re-platform or refactor those applications – which makes lift-and-shift a fast route to modernization.
AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns.
AWS Snowball moves terabytes of data in about a week. You can use it to move things like databases, backups, archives, healthcare records, analytics datasets, IoT sensor data, and media content, especially when network conditions prevent realistic timelines for transferring large amounts of data both into and out of AWS. Data from the Snowball device will be imported to your selected Amazon S3 bucket.
AWS VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. VM Import/Export is available at no additional charge beyond standard usage charges for Amazon EC2 and Amazon S3. As part of the import process, VM Import will convert your VM into an Amazon EC2 AMI, which you can use to run Amazon EC2 instances.
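For illustration, here is a minimal boto3 sketch of the VM Import step once an exported disk image has been delivered to Amazon S3 (for example, via the Snowball device). The bucket name, object key, and disk format are assumed placeholders.

import boto3

# Sketch: register an exported VMDK that already sits in S3 as an AMI using
# VM Import/Export. Bucket and key names are placeholders.
ec2 = boto3.client("ec2")

response = ec2.import_image(
    Description="On-premises web server",
    DiskContainers=[{
        "Description": "Exported VMDK from vCenter",
        "Format": "vmdk",
        "UserBucket": {
            "S3Bucket": "vm-export-staging",
            "S3Key": "exports/webserver-disk1.vmdk",
        },
    }],
)
print("Import task:", response["ImportTaskId"])
# Progress can be tracked with ec2.describe_import_image_tasks(ImportTaskIds=[...]).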
Therefore, the correct answer is: Use AWS Application Migration Service to migrate the mission-critical virtual machines to AWS. Request an AWS Snowball device and transfer the exported VMs to it. Once the VMs are on Amazon S3, import the VMs into Amazon EC2 instances using the VM Import/Export service.
The option that says: Request for a 1 Gbps AWS Direct Connect connection from the on-premises data center to AWS. Create a private virtual interface on Direct Connect. Migrate the virtual machines to AWS using the AWS Application Migration Service (MGN) is incorrect. Although this is a fast solution that can finish the migration within the required time frame, it is not the most cost-effective solution. As stated in the question, upgrading the Internet connection (and, by extension, provisioning a new dedicated connection) is costly for the company. Moreover, the requirement is primarily to migrate the virtual machines, not to establish a fast, dedicated network connection from the on-premises network to AWS.
The option that says: Deploy the AWS Agentless Discovery connector on the company VMware vCenter to assess each application. With the information gathered, refactor each application to run on AWS services or using AWS Marketplace solutions is incorrect. This requires a lot of planning effort since you will have to map each application to an appropriate AWS service. It will also require changes to the applications and can cause interruptions during the migration.
The option that says: Create an export of your virtual machines during out of office hours. Use the AWS Transfer service to securely upload the VMs to Amazon S3 using the SFTP protocol. Import the VMs into Amazon EC2 instances using the VM Import/Export service is incorrect. Although this is possible, uploading 40 TB of VM storage over the 12 Mbps Internet connection would take roughly 300 days, far exceeding the three-month migration window.
References:
Check out this AWS Server Migration Service Cheat Sheet:
Question 73: Skipped
A company is planning to migrate its workload to the AWS cloud. The solutions architect is looking to reduce the amount of time spent managing database instances from the on-premises data center by migrating to a managed relational database service in AWS such as Amazon Relational Database Service (RDS). In addition, the solutions architect plans to move the application hosted in the on-premises data center to a fully managed platform such as AWS Elastic Beanstalk.
Which of the following is the most cost-effective migration strategy that should be implemented to meet the above requirement?
Rehost
Repurchase
Replatform
(Correct)
Refactor / Re-architect
Explanation
Organizations usually begin to think about how they will migrate an application during Phase 2 (Portfolio Discovery and Planning) of the migration process. This is when you determine what is in your environment and the migration strategy for each application. The six approaches detailed below are common migration strategies employed and build upon “The 5 R’s” that Gartner Inc, a global research and advisory firm, outlined in 2011.
You should gain a thorough understanding of which migration strategy will be best suited for certain portions of your portfolio. It is also important to consider that while one of the six strategies may be best for migrating certain applications in a given portfolio, another strategy might work better for moving different applications in the same portfolio.
1. Rehost (“lift and shift”) - In a large legacy migration scenario where the organization is looking to quickly implement its migration and scale to meet a business case, we find that the majority of applications are rehosted.
2. Replatform (“lift, tinker and shift”) - This entails making a few cloud optimizations in order to achieve some tangible benefit without changing the core architecture of the application.
3. Repurchase (“drop and shop”) - This is a decision to move to a different product and likely means your organization is willing to change the existing licensing model you have been using. For workloads that can easily be upgraded to newer versions, this strategy might allow a feature set upgrade and smoother implementation.
4. Refactor / Re-architect - Typically, this is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment.
5. Retire - Identifying IT assets that are no longer useful and can be turned off will help boost your business case and direct your attention towards maintaining the resources that are widely used.
6. Retain - You may want to retain portions of your IT portfolio because there are some applications that you are not ready to migrate and feel more comfortable keeping them on-premises, or you are not ready to prioritize an application that was recently upgraded and then make changes to it again.
Therefore, the correct answer is: Replatform. This strategy is done by making a few cloud optimizations on your existing systems before migrating them to AWS, which is what will happen if you move your existing database and web applications to AWS. This strategy is more suitable when you want to reduce the amount of time you spend managing database instances by migrating to a managed relational database service such as Amazon Relational Database Service (RDS), or migrating your application to a fully managed platform like AWS Elastic Beanstalk.
Rehost is incorrect. The Rehost ("lift and shift") strategy is more suitable for quickly migrating systems to AWS to meet a certain business case with no additional configuration involved. Take note that if you migrate your systems to either Elastic Beanstalk or RDS, you will still need to set up, configure, and test your systems, which takes additional time and effort.
Repurchase is incorrect. This strategy entails a decision to move to a different product and likely means your organization is willing to change the existing licensing model you have been using. Hence, this is not a suitable migration strategy for this scenario.
Refactor / Re-architect is incorrect. This strategy is suitable if there is a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment. This type of migration strategy also entails additional cost, compared with the Replatform strategy, since you are allocating time, effort, and budget to optimize your systems.
References:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 74: Skipped
A company is using AWS Managed Microsoft AD to host the company's Active Directory in the AWS Cloud with the custom AD domain name private.tutorialsdojo.com. A pair of domain controllers are launched with the default configuration inside the VPC. A VPC interface endpoint was also created for Amazon Kinesis using AWS PrivateLink to allow instances to connect to Kinesis service endpoints from inside the VPC. The solutions architect launched several EC2 instances in the VPC; however, the instances were not able to resolve the company's custom AD domain name.
Which of the following steps should the Solutions Architect implement to allow the instances to resolve both AWS VPC endpoints and the AWS Managed Microsoft AD domain’s FQDN? (Select TWO.)
Reconfigure the DNS service on every client on the VPC to split DNS queries. Use the Active Directory servers for the custom AD domain and the VPC resolver for all other DNS queries.
Create a conditional forwarder inside the endpoint to forward any queries for private.tutorialsdojo.com to the IP addresses of the two domain controllers.
Create an inbound endpoint on the Amazon Route 53 console. Set the AmazonProvidedDNS as the DNS resolver for the VPC.
Create a forwarding rule inside the endpoint to forward any queries for private.tutorialsdojo.com to the IP addresses of the two domain controllers.
(Correct)
Create an outbound endpoint on the Amazon Route 53 console. Set the AmazonProvidedDNS as the DNS resolver for the VPC.
(Correct)
Explanation
When you create a VPC using Amazon VPC, Route 53 Resolver automatically answers DNS queries for local VPC domain names for EC2 instances (ec2-192-0-2-44.compute-1.amazonaws.com) and records in private hosted zones (private.tutorialsdojo.com). For all other domain names, Resolver performs recursive lookups against public name servers.
You also can integrate DNS resolution between Resolver and DNS resolvers on your network by configuring forwarding rules. Your network can include any network that is reachable from your VPC, such as the following:
The VPC itself
Another peered VPC
An on-premises network that is connected to AWS with AWS Direct Connect, a VPN, or a network address translation (NAT) gateway.
A Route 53 Resolver Endpoint is a customer-managed resolver consisting of one or more Elastic Network Interfaces (ENIs) deployed on your VPC. Resolver Endpoints are classified into two types:
Inbound Endpoint - provides DNS resolution of AWS resources, such as EC2 instances, for your corporate network.
Outbound Endpoint - provides resolution of specific DNS names that you configure using forwarding rules to your VPC.
Outbound Resolver Endpoints host Forwarding Rules that forward queries for specified domain names to specific IP addresses. You create forwarding rules when you want to forward DNS queries for specified domain names to DNS resolvers on your network.
To forward selected queries, you create Resolver rules that specify the domain names for the DNS queries that you want to forward (such as example.com), and the IP addresses of the DNS resolvers on your network that you want to forward the queries to. If a query matches multiple rules (tutorialsdojo.com, portal.tutorialsdojo.com), the Resolver chooses the rule with the most specific match (portal.tutorialsdojo.com) and forwards the query to the IP addresses that you specified in that rule.
When Outbound Endpoint and Forwarding Rule are created, any resource in the VPC that queries the AmazonProvidedDNS as its DNS resolver is able to seamlessly resolve for AWS Managed Microsoft AD domain’s FQDN, as well as any AWS resources on the VPC such as (interface) VPC Endpoints.
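The sketch below shows the outbound endpoint, forwarding rule, and VPC association as boto3 calls. The subnet IDs, security group ID, VPC ID, and the domain controller IP addresses are assumed placeholders rather than values from the scenario.

import boto3

# Sketch: create an outbound Resolver endpoint and a forwarding rule that
# sends queries for private.tutorialsdojo.com to the AD domain controllers.
resolver = boto3.client("route53resolver")

endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="ad-forwarding-endpoint-001",
    Name="ad-outbound-endpoint",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    Direction="OUTBOUND",
    IpAddresses=[{"SubnetId": "subnet-0aaa1111"}, {"SubnetId": "subnet-0bbb2222"}],
)

rule = resolver.create_resolver_rule(
    CreatorRequestId="ad-forwarding-rule-001",
    Name="forward-private-tutorialsdojo",
    RuleType="FORWARD",
    DomainName="private.tutorialsdojo.com",
    TargetIps=[{"Ip": "10.0.1.10", "Port": 53}, {"Ip": "10.0.2.10", "Port": 53}],
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

# Associate the rule with the VPC so instances using AmazonProvidedDNS benefit.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    Name="vpc-association",
    VPCId="vpc-0123456789abcdef0",
)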
The option that says: Create an outbound endpoint on the Amazon Route 53 console. Set the AmazonProvidedDNS as the DNS resolver for the VPC is correct. You need an outbound endpoint to forward and resolve custom domain names inside your VPC.
The option that says: Create a forwarding rule inside the endpoint to forward any queries for private.tutorialsdojo.com to the IP addresses of the two domain controllers is correct. The forwarding rules will handle queries for a given DNS domain and forward them to the AD server to resolve them.
The option that says: Create an inbound endpoint on the Amazon Route 53 console. Set the AmazonProvidedDNS as the DNS resolver for the VPC is incorrect. An inbound endpoint is used for DNS resolution of AWS services. You need an outbound endpoint for this scenario.
The option that says: Reconfigure the DNS service on every client on the VPC to split DNS queries. Use the Active Directory servers for the custom AD domain and the VPC resolver for all other DNS queries is incorrect. It is not recommended to manually configure every client to split DNS queries because this entails a lot of management overhead. Instead, keep AmazonProvidedDNS as the VPC resolver and let a Route 53 Resolver outbound endpoint with a forwarding rule forward queries for the AD domain to the domain controllers.
The option that says: Create a conditional forwarder inside the endpoint to forward any queries for private.tutorialsdojo.com to the IP addresses of the two domain controllers is incorrect. A conditional forwarder is configured inside the AD servers, not on the Route 53 resolver endpoint.
References:
Check out these Amazon VPC and Amazon Route 53 Cheat Sheets:
Question 2: Skipped
A company runs its legacy web application in its on-premises data center. The solutions architect has been tasked to move the legacy web application in a virtual machine running inside the data center to the Amazon VPC. However, this application requires a private and dedicated connection to a number of servers hosted on the on-premises network in order for it to work.
Which combination of options provides the most suitable way to configure the web application running inside the VPC to reach back and access its internal dependencies on the company’s on-premises network? (Select TWO.)
A network device in your data center that supports Border Gateway Protocol (BGP) and BGP MD5 authentication.(Correct)
Set up a Transit VPC between your on-premises data center and your VPC.
An Internet Gateway to allow a VPN connection.
An Elastic IP address on the VPC instance.
An AWS Direct Connect link between the VPC and the network housing the internal services.(Correct)
Explanation
AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router.
With this connection, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing Internet service providers in your network path. An AWS Direct Connect location provides access to AWS in the Region with which it is associated. You can use a single connection in a public Region or AWS GovCloud (US) to access public AWS services in all other public Regions.
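As a rough illustration of where the BGP details fit in, the following boto3 sketch creates a private virtual interface on an existing Direct Connect connection, including the BGP ASN and MD5 authentication key that the on-premises router must match. The connection ID, VLAN, ASN, key, and virtual private gateway ID are placeholders.

import boto3

# Sketch: create a private virtual interface on an existing DX connection.
dx = boto3.client("directconnect")

vif = dx.create_private_virtual_interface(
    connectionId="dxcon-abcd1234",  # placeholder connection ID
    newPrivateVirtualInterface={
        "virtualInterfaceName": "onprem-to-vpc",
        "vlan": 101,
        "asn": 65000,                 # BGP ASN of the on-premises router
        "authKey": "bgp-md5-secret",  # BGP MD5 authentication key
        "virtualGatewayId": "vgw-0123456789abcdef0",
        "addressFamily": "ipv4",
    },
)
print(vif["virtualInterfaceId"])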
The option that says: A network device in your data center that supports Border Gateway Protocol (BGP) and BGP MD5 authentication is correct as a network device that supports Border Gateway Protocol (BGP) and BGP MD5 authentication is needed to establish a Direct Connect link from your data center to your VPC.
The option that says: An AWS Direct Connect link between the VPC and the network housing the internal services is correct because Direct Connect sets up a dedicated connection between on-premises data center and Amazon VPC, and provides you with the ability to connect your on-premises servers with the instances in your VPC.
The option that says: An Internet Gateway to allow a VPN connection is incorrect. A Site-to-Site VPN connection terminates on a virtual private gateway (VGW) and a customer gateway, not on an Internet Gateway, and VPN traffic traverses the public Internet, so it does not satisfy the requirement for a private, dedicated connection.
The option that says: An Elastic IP address on the VPC instance is incorrect because EIPs are not needed as the instances in the VPC can communicate with on-premises servers via their private IP address.
The option that says: Set up a Transit VPC between your on-premises data center and your VPC is incorrect. Although a Transit VPC can connect your VPC and your on-premises data center, it is not suitable for this scenario. A Transit VPC simply simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks; it does not by itself provide the private, dedicated link that the application requires.
References:
Check out this AWS Direct Connect Cheat Sheet:
Question 26: Skipped
A global finance company has multiple data centers around the globe. Due to the ever-growing amount of data that the company is storing, the solutions architect was instructed to set up a durable, cost-effective solution to archive sensitive data from the existing on-premises tape-based backup infrastructure to the AWS Cloud.
Which of the following options is the recommended implementation to achieve the company requirements?
Set up a File Gateway to back up your data in Amazon S3 and archive in Amazon Glacier using your existing tape-based processes.
Set up a Tape Gateway to back up your data in Amazon S3 and archive it in Amazon Glacier using your existing tape-based processes.
(Correct)
Set up a Tape Gateway to back up your data in Amazon S3 with point-in-time backups as tapes which will be stored in the Virtual Tape Shelf.
Set up a Stored Volume Gateway to back up your data in Amazon S3 with point-in-time backups as EBS snapshots.
Explanation
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the Amazon Web Services Cloud for scalable and cost-effective storage that helps maintain data security.
AWS Storage Gateway offers file-based file gateways (Amazon S3 File and Amazon FSx File), volume-based (Cached and Stored), and tape-based storage solutions.
Tape Gateway offers a durable, cost-effective solution to archive your data in the AWS Cloud. With its virtual tape library (VTL) interface, you use your existing tape-based backup infrastructure to store data on virtual tape cartridges that you create on your tape gateway. Each tape gateway is preconfigured with a media changer and tape drives. These are available to your existing client backup applications as iSCSI devices. You add tape cartridges as you need to archive your data.
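For illustration, here is a minimal boto3 sketch of provisioning virtual tape cartridges on an existing Tape Gateway so the current backup application can write to them over iSCSI. The gateway ARN, barcode prefix, tape size, and number of tapes are assumed placeholders.

import boto3

# Sketch: add virtual tapes to an existing Tape Gateway.
storagegateway = boto3.client("storagegateway")

response = storagegateway.create_tapes(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12A3456B",
    TapeSizeInBytes=100 * 1024**3,   # 100 GiB virtual tape cartridges
    ClientToken="tape-batch-0001",   # idempotency token
    NumTapesToCreate=5,
    TapeBarcodePrefix="TDOJO",
)
print(response["TapeARNs"])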
Therefore, the correct answer is: Set up a Tape Gateway to back up your data in Amazon S3 and archive it in Amazon Glacier using your existing tape-based processes.
The option that says: Set up a Stored Volume Gateway to back up your data in Amazon S3 with point-in-time backups as EBS snapshots is incorrect. A Stored Volume Gateway is not designed for tape archives; you should use a Tape Gateway instead.
The option that says: Set up a Tape Gateway to back up your data in Amazon S3 with point-in-time backups as tapes which will be stored in the Virtual Tape Shelf is incorrect. The archival should be on Amazon Glacier for cost-effectiveness.
The option that says: Set up a File Gateway to back up your data in Amazon S3 and archive in Amazon Glacier using your existing tape-based processes is incorrect. A File Gateway is not designed for tape archives; you should use a Tape Gateway instead.
References:
Check out this AWS Storage Gateway Cheat Sheet:
Question 60: Skipped
A fashion company in France sells bags, clothes, and other luxury items in its online web store. The online store is currently hosted on the company’s on-premises data center. The company has recently decided to move all of its on-premises infrastructure to the AWS cloud. The main application is running on an NGINX web server and a database with an Oracle Real Application Clusters (RAC) One Node configuration.
Which of the following is the best way to migrate the application to AWS and set up an automated backup?
Launch an EC2 instance and run an NGINX server to host the application. Deploy an RDS instance and enable automated backups on the RDS RAC cluster.
Launch an On-Demand EC2 instance and run an NGINX server to host the application. Deploy an RDS instance with a Multi-AZ deployment configuration and enable automated backups on the RDS RAC cluster.
Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS Volumes on the EC2 instance of the database and then write a shell script that runs the manual snapshot of the volumes.
Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS volumes to the EC2 instance of the database and then use the Data Lifecycle Manager to automatically create scheduled snapshots against the EBS volumes.
(Correct)
Explanation
Oracle RAC is not supported by RDS. That is why you need to deploy the database in an EC2 instance and then either create a shell script to automate the backup or use the Data Lifecycle Manager to automate the process.
The Oracle Real Application Clusters (RAC) One Node option provides virtualized servers on a single machine. This provides 'always on' availability for single-instance databases at a fraction of the cost.
Amazon Data Lifecycle Manager (DLM) for EBS Snapshots provides a simple, automated way to back up data stored on Amazon EBS volumes. You can define backup and retention schedules for EBS snapshots by creating lifecycle policies based on tags. With this feature, you no longer have to rely on custom scripts to create and manage your backups.
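The following is a minimal boto3 sketch of such a lifecycle policy: it snapshots EBS volumes carrying a given tag every 24 hours and keeps the last 7 snapshots. The IAM role ARN and the tag key/value are assumed placeholders.

import boto3

# Sketch: a DLM policy for daily snapshots of tagged EBS volumes.
dlm = boto3.client("dlm")

policy = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots of the Oracle RAC One Node data volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "oracle-db"}],  # placeholder tag
        "Schedules": [{
            "Name": "daily-backup",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
            "CopyTags": True,
        }],
    },
)
print(policy["PolicyId"])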
Hence, the correct answer is: Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS volumes to the EC2 instance of the database and then use the Data Lifecycle Manager to automatically create scheduled snapshots against the EBS volumes.
The option that says: Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS Volumes on the EC2 instance of the database and then write a shell script that runs the manual snapshot of the volumes is incorrect. Although this approach is valid, a more suitable option is to use the Data Lifecycle Manager (DLM) to automatically take the snapshot of the EC2 instance. The DLM can also reduce storage costs by deleting outdated backups.
The following options are incorrect as these two use Amazon RDS, which doesn't natively support Oracle RAC:
- Launch an EC2 instance and run an NGINX server to host the application. Deploy an RDS instance and enable automated backups on the RDS RAC cluster.
- Launch an On-Demand EC2 instance and run an NGINX server to host the application. Deploy an RDS instance with a Multi-AZ deployment configuration and enable automated backups on the RDS RAC cluster.
References:
Check out this Amazon RDS Cheat Sheet:
Question 63: Skipped
A company offers a service that allows users to upload media files through a web portal. The web servers accept the media files, which are uploaded directly to the on-premises Network Attached Storage (NAS). For each uploaded media file, a corresponding message is sent to the message queue. A processing server picks up each message and processes each media file, which can take up to 30 minutes. The company noticed that the number of media files waiting in the processing queue is significantly higher during business hours, but the processing server quickly catches up after business hours. To save costs, the company hired a Solutions Architect to improve the media processing by migrating the workload to the AWS Cloud.
Which of the following options is the most cost-effective solution?
Reconfigure the existing web servers to publish messages to a standard queue on Amazon SQS. Create an AWS Lambda function that will pull requests from the SQS queue and process the media files. Invoke the Lambda function every time a new message is sent to the queue. Send the processed media files into an Amazon S3 bucket.
Reconfigure the existing web servers to publish messages to a queue in Amazon MQ. Create an Auto Scaling group of Amazon EC2 instances that will pull requests from the queue and process the media files. Configure the Auto Scaling group to scale based on the length of the SQS queue. Send the processed media files into an Amazon EFS mount point and shut down the EC2 instances after processing is complete.
Reconfigure the existing web servers to publish messages to a queue in Amazon MQ. Create an AWS Lambda function that will pull requests from the SQS queue and process the media files. Invoke the Lambda function every time a new message is sent to the queue. Send the processed media files into an Amazon EFS volume.
Reconfigure the existing web servers to publish messages to a standard queue on Amazon SQS. Create an Auto Scaling group of Amazon EC2 instances that will pull requests from the queue and process the media files. Configure the Auto Scaling group to scale based on the length of the SQS queue. Send the processed media files into an Amazon S3 bucket.
(Correct)
Explanation
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers "standard" as the default queue type. Standard queues support at-least-once message delivery.
You can use standard message queues in many scenarios, as long as your application can process messages that arrive more than once and out of order, for example:
- Decouple live user requests from intensive background work – Let users upload media while resizing or encoding it.
- Allocate tasks to multiple worker nodes – Process a high number of credit card validation requests.
- Batch messages for future processing – Schedule multiple entries to be added to a database.
You can scale-in/scale-out your Amazon EC2 Auto Scaling group in response to changes in system load in an Amazon Simple Queue Service (Amazon SQS) queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it's configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times.
The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn't vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group. If you use a target tracking scaling policy based on a custom Amazon SQS queue metric, dynamic scaling can adjust to the demand curve of your application more effectively.
Amazon SQS is a queue service that is highly scalable, simple to use, and doesn't require you to set up message brokers. AWS recommends this service for new applications that can benefit from nearly unlimited scalability and simple APIs.
Amazon MQ is a managed message broker service that provides compatibility with many popular message brokers. AWS recommends Amazon MQ for migrating applications from existing message brokers that rely on compatibility with APIs such as JMS or protocols such as AMQP, MQTT, OpenWire, and STOMP.
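To illustrate the worker side of this design, here is a minimal sketch of an EC2 worker that long-polls the standard queue, processes one media file at a time, uploads the result to S3, and deletes the message only after processing succeeds. The queue URL, output bucket, and process_media() body are placeholders for the company's own logic.

import boto3

# Sketch: SQS worker loop for long-running media jobs.
sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/media-jobs"  # placeholder
OUTPUT_BUCKET = "processed-media-output"  # placeholder

def process_media(body: str) -> bytes:
    # Placeholder for the job that can take up to 30 minutes.
    return body.encode("utf-8")

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,             # long polling reduces empty receives
        VisibilityTimeout=2 * 60 * 60,  # longer than the worst-case job duration
    )
    for message in resp.get("Messages", []):
        result = process_media(message["Body"])
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=message["MessageId"], Body=result)
        # Delete only after the processed file is safely in S3.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])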
Therefore, the correct answer is: Reconfigure the existing web servers to publish messages to a standard queue on Amazon SQS. Create an Auto Scaling group of Amazon EC2 instances that will pull requests from the queue and process the media files. Configure the Auto Scaling group to scale based on the length of the SQS queue. Send the processed media files into an Amazon S3 bucket.
The option that says: Reconfigure the existing web servers to publish messages to a standard queue on Amazon SQS. Create an AWS Lambda function that will pull requests from the SQS queue and process the media files. Invoke the Lambda function every time a new message is sent to the queue. Send the processed media files into an Amazon S3 bucket is incorrect. Although this may appear to be the most cost-effective option, AWS Lambda functions can only run for up to 15 minutes. Remember that the media files can take up to 30 minutes to process.
The option that says: Reconfigure the existing web servers to publish messages to a queue in Amazon MQ. Create an Auto Scaling group of Amazon EC2 instances that will pull requests from the queue and process the media files. Configure the Auto Scaling group to scale based on the length of the SQS queue. Send the processed media files into an Amazon EFS mount point and shut down the EC2 instances after processing is complete is incorrect. Amazon SQS is recommended here because the scenario does not require compatibility with message broker APIs or protocols such as JMS, AMQP, or MQTT. Also, storing media files on Amazon EFS is more expensive than using Amazon S3.
The option that says: Reconfigure the existing web servers to publish messages to a queue in Amazon MQ. Create an AWS Lambda function that will pull requests from the SQS queue and process the media files. Invoke the Lambda function every time a new message is sent to the queue. Send the processed media files into an Amazon EFS volume is incorrect. Lambda functions cannot run for more than 15 minutes, while in the scenario, the processing time is 30 minutes. Moreover, storing media files on Amazon EFS is more expensive than using Amazon S3.
References:
Check out these Amazon SQS and Amazon MQ Cheat Sheets:
Question 73: Skipped
A company plans to migrate its on-premises workload to the AWS cloud. The solutions architect has been tasked to perform a Total Cost of Ownership (TCO) analysis and prepare a cost-optimized plan for migrating the systems hosted in the on-premises network to AWS. It is required to collect detailed configuration, usage, and behavioral data from the on-premises servers to help better understand the current workloads before doing the migration.
Which of the following options is the recommended solution that should be implemented to meet the company requirements?
Use the AWS SAM service to move your data to AWS which will also help you perform the TCO analysis.
Use the AWS Application Discovery Service to gather data about your on-premises data center and perform the TCO analysis.
(Correct)
Use the AWS Migration Hub service to collect data from each server in your on-premises data center and perform the TCO analysis.
Use the AWS Application Migration Service (MGN) to migrate VM servers to AWS and collect the data required to complete your TCO analysis.
Explanation
AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers.
Planning data center migrations can involve thousands of workloads that are often deeply interdependent. Server utilization data and dependency mapping are important early first steps in the migration process. AWS Application Discovery Service collects and presents configuration, usage, and behavior data from your servers to help you better understand your workloads.
The collected data is retained in encrypted format in an AWS Application Discovery Service data store. You can export this data as a CSV file and use it to estimate the Total Cost of Ownership (TCO) of running on AWS and to plan your migration to AWS. In addition, this data is also available in AWS Migration Hub, where you can migrate the discovered servers and track their progress as they get migrated to AWS.
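As an illustration of the export step feeding the TCO analysis, the sketch below requests a CSV export of the collected discovery data and then checks the export status. The 'discovery' client name refers to AWS Application Discovery Service in boto3; the polling here is simplified and field names should be treated as an assumption to verify against the current API.

import boto3

# Sketch: export collected discovery data as CSV for the TCO analysis.
discovery = boto3.client("discovery")

task = discovery.start_export_task(exportDataFormat=["CSV"])
export_id = task["exportId"]

# In practice this should be polled until the export succeeds; the response
# is expected to include a pre-signed S3 URL for the CSV file.
status = discovery.describe_export_tasks(exportIds=[export_id])
for export in status["exportsInfo"]:
    print(export["exportStatus"], export.get("configurationsDownloadUrl"))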
Therefore, the correct answer is: Use the AWS Application Discovery Service to gather data about your on-premises data center and perform the TCO analysis.
The option that says: Use the AWS SAM service to move your data to AWS which will also help you perform the TCO analysis is incorrect. The AWS Serverless Application Model (AWS SAM) service is just an open-source framework that you can use to build serverless applications on AWS. It is not a suitable migration service to be used for your on-premises systems.
The option that says: Use the AWS Migration Hub service to collect data from each server in your on-premises data center and perform the TCO analysis is incorrect. AWS Migration Hub simply provides a single location to track the progress of application migrations across multiple AWS and partner solutions. Using Migration Hub just allows you to choose the AWS and partner migration tools that best fit your needs while providing visibility into the status of migrations across your portfolio of applications. Although AWS Application Discovery Service can be integrated with AWS Migration Hub, this service alone is not enough to meet the requirement in this scenario.
The option that says: Use the AWS Application Migration Service (MGN) to migrate VM servers to AWS and collect the data required to complete your TCO analysis is incorrect. Although AWS Application Migration Service is a migration tool, it is used to replicate or mirror your on-premises VMs to the AWS cloud. It does not provide a helpful dashboard on applications running on each VM, unlike AWS Application Discovery Service.
References:
Check out this AWS Application Migration (MGN) Service Cheat Sheet:
Question 14: Skipped
A company is hosting its application and MySQL database in its on-premises data center. The database grows by about 10 GB per day and is approximately 25 TB in total size. The company wants to migrate the database workload to the AWS cloud. A 50 Mbps VPN connection is currently in place to connect the corporate network to AWS. The company plans to complete the migration to AWS within 3 weeks with the LEAST downtime possible.
Which of the following solutions should be implemented to meet the company requirements?
Create a snapshot of the on-premises database server and use VM Import/Export service to import the snapshot to AWS. Provision a new Amazon EC2 instance from the imported snapshot. Configure replication from the on-premises database server to the EC2 instance through the VPN connection. Wait until the replication is complete then update the database DNS entry to point to the EC2 instance. Stop the database replication.
Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Amazon Aurora MySQL DB instance and load the data. Using the VPN connection, configure replication from the on-premises database server to the Amazon Aurora DB instance. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication.
(Correct)
Temporarily stop the on-premises application to stop any database I/O operation. Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Amazon Aurora MySQL DB instance and load the data. Update the database DNS entry to point to the Aurora DB instance. Start the application again to resume normal operations.
Create an AWS Database Migration Service (DMS) instance and an Amazon Aurora MySQL DB instance. Define the on-premises database server details and the Amazon Aurora MySQL instance details on the AWS DMS instance. Use the VPN connection to start the replication from the on-premises database server to AWS. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication.
Explanation
AWS Snowball, a part of the AWS Snow Family, is an edge computing, data migration, and edge storage device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large scale-data transfer up to 80TB. Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments.
Snowball moves terabytes of data in about a week. You can use it to move things like databases, backups, archives, healthcare records, analytics datasets, IoT sensor data, and media content, especially when network conditions prevent realistic timelines for transferring large amounts of data both into and out of AWS.
Each import job uses a single Snowball appliance. After you create a job in the AWS Snow Family Management Console or the job management API, AWS ships a Snowball to you. When it arrives in a few days, you connect the Snowball Edge device to your network and transfer the data that you want to be imported into Amazon S3 onto the device. When you’re done transferring data, ship the Snowball back to AWS, and AWS will import your data into Amazon S3.
Amazon Aurora MySQL integrates with other AWS services so that you can extend your Aurora MySQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora MySQL DB cluster can use AWS services to load data from text or XML files stored in an Amazon Simple Storage Service (Amazon S3) bucket into your DB cluster using the LOAD DATA FROM S3 or LOAD XML FROM S3 command.
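For illustration, the following is a minimal sketch of that bulk-load step, assuming the pymysql driver is installed and the Aurora cluster has an IAM role granting S3 access. The host, credentials, bucket, key, and table names are placeholders, and the exact S3 URI format and statement options should be checked against the Aurora MySQL documentation.

import pymysql

# Sketch: bulk-load a CSV that Snowball delivered to S3 into an Aurora MySQL
# table using the Aurora-specific LOAD DATA FROM S3 extension. All connection
# details and object names below are placeholders.
connection = pymysql.connect(
    host="tutorialsdojo-aurora.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="example-password",
    database="warehouse",
)

load_sql = """
    LOAD DATA FROM S3 's3://db-migration-staging/exports/orders.csv'
    INTO TABLE orders
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES;
"""

with connection.cursor() as cursor:
    cursor.execute(load_sql)
connection.commit()
connection.close()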
Therefore, the correct answer is: Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Amazon Aurora MySQL DB instance and load the data. Using the VPN connection, configure replication from the on-premises database server to the Amazon Aurora DB instance. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication. You can use the Snowball device to import TBs of data to AWS. Then you can load the data to the Amazon Aurora DB instance and replicate the missing data from the on-premises database server. You can cut over to the Aurora database after replication.
The option that says: Create a snapshot of the on-premises database server and use VM Import/Export service to import the snapshot to AWS. Provision a new Amazon EC2 instance from the imported snapshot. Configure replication from the on-premises database server to the EC2 instance through the VPN connection. Wait until the replication is complete then update the database DNS entry to point to the EC2 instance. Stop the database replication is incorrect. You cannot practically import a 25 TB snapshot using VM Import/Export over the existing connectivity; transferring an export of that size over the 50 Mbps connection would take well over a month, which does not fit the 3-week migration window.
The option that says: Create an AWS Database Migration Service (DMS) instance and an Amazon Aurora MySQL DB instance. Define the on-premises database server details and the Amazon Aurora MySQL instance details on the AWS DMS instance. Use the VPN connection to start the replication from the on-premises database server to AWS. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication is incorrect. Replicating an entire 25 TB database over the 50 Mbps VPN connection would take well over a month to catch up, and an additional 10 GB of new data must be replicated for each passing day.
The option that says: Temporarily stop the on-premises application to stop any database I/O operation. Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Amazon Aurora MySQL DB instance and load the data. Update the database DNS entry to point to the Aurora DB instance. Start the application again to resume normal operations is incorrect. This is possible, but the downtime is excessive. Exporting the data, loading it onto the Snowball device, shipping the device, and importing the data into the Amazon Aurora DB instance would keep the application offline for at least several days.
References:
Check out these AWS Snowball Edge and Amazon Aurora Cheat Sheets:
Question 16: Skipped
A leading aerospace engineering company has over 1 TB of aeronautical data stored on the corporate file server of its on-premises network. This data is used by many of the company's in-house analytical and engineering applications. The aeronautical data consists of technical files ranging from a few megabytes to multiple gigabytes in size. The data scientists typically modify an average of 10 percent of these files every day. Recently, the management decided to adopt a hybrid cloud architecture to better serve its clients around the globe. The management requested to migrate the applications to AWS over the weekend to minimize any business impact and system downtime. The on-premises data center has a 50-Mbps Internet connection that can be used to transfer all of the 1 TB of data to AWS, but based on the calculations, it will take at least 48 hours to complete this task.
Which of the following options will allow the solutions architect to move all of the aeronautical data to AWS to meet the above requirement?
1. Set up a Gateway-Stored volume gateway using the AWS Storage Gateway service.
2. Establish an iSCSI connection between your on-premises data center and your AWS Cloud then copy the data to the Storage Gateway volume.
3. After all of your data has been successfully copied, create an EBS snapshot of the volume.
4. Restore the snapshots as EBS volumes and attach them to your EC2 instances on Sunday.
1. At the end of business hours on Friday, start copying the data to a Snowball Edge device.
2. When the Snowball Edge has completely transferred your data to the AWS Cloud, copy all of the data to multiple EBS volumes.
3. On Sunday afternoon, mount the generated EBS volume to your EC2 instances.
1. Synchronize the data from your on-premises data center to an S3 bucket using Multipart upload for large files from Saturday morning to Sunday evening.
2. Configure your application hosted in AWS to use the S3 bucket to serve the aeronautical data files.
1. Synchronize the on-premises data to an S3 bucket one week before the migration schedule using the AWS CLI's s3 sync command.
2. Perform a final synchronization task on Friday after the end of business hours.
3. Set up your application hosted in a large EC2 instance in your VPC to use the S3 bucket.
(Correct)
Explanation
The most effective choice here is to use the s3 sync command that is available in the AWS CLI. This way, you can comfortably synchronize the data between your on-premises server and AWS a week before the migration, and on Friday, simply run another sync to complete the task. Remember that the sync feature only uploads the "delta", or in other words, the "difference", between the source and the destination. Therefore, the final synchronization will take only a fraction of the time compared to the other methods.
Therefore, the correct answer is:
1. Synchronize the on-premises data to an S3 bucket one week before the migration schedule using the AWS CLI's s3 sync command.
2. Perform a final synchronization task on Friday after the end of business hours.
3. Set up your application hosted in a large EC2 instance in your VPC to use the S3 bucket.
The idea is to synchronize the data days before the migration weekend to avoid the risks and possible delays of transferring all of the data within the short allowable window of 48 hours.
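To make the approach concrete, here is a rough sketch of the two-pass sync, wrapping the AWS CLI's aws s3 sync command from Python. The NAS mount path and bucket name are hypothetical, and the AWS CLI is assumed to be installed and configured with credentials that can write to the bucket.

```python
# Minimal sketch of the two-pass sync approach described above, wrapping the
# AWS CLI's "aws s3 sync" command. The local path and bucket name are
# placeholders; the CLI must already be installed and configured with
# credentials that can write to the bucket.
import subprocess

SOURCE_DIR = "/mnt/corporate-fileserver/aeronautical-data"   # hypothetical NAS mount
DEST_BUCKET = "s3://example-aeronautical-data"               # hypothetical bucket

def sync_to_s3() -> None:
    # "aws s3 sync" only uploads new or changed files (the "delta"),
    # so the final Friday run transfers roughly 10% of the data set.
    subprocess.run(
        ["aws", "s3", "sync", SOURCE_DIR, DEST_BUCKET, "--only-show-errors"],
        check=True,
    )

# Run once a week before the migration, then again after business hours on Friday.
sync_to_s3()
```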
The following option is incorrect:
1. At the end of business hours on Friday, start copying the data to a Snowball Edge device.
2. When the Snowball Edge has completely transferred your data to the AWS Cloud, copy all of the data to multiple EBS volumes.
3. On Sunday afternoon, mount the generated EBS volume to your EC2 instances.
Although using a Snowball appliance is a valid option, there is a significant risk that your data won't be available in AWS by Monday. Remember that you have to consider the time it takes to copy the data onto the Snowball device as well as the time to ship it back to AWS, which may take a day or two. Factoring in all of these considerations, it is clear that this is not the best option because the risk that the Snowball delivery won't make it in time is simply too high.
The following option is incorrect:
1. Set up a Gateway-Stored volume gateway using the AWS Storage Gateway service.
2. Establish an iSCSI connection between your on-premises data center and your AWS Cloud then copy the data to the Storage Gateway volume.
3. After all of your data has been successfully copied, create an EBS snapshot of the volume.
4. Restore the snapshots as EBS volumes and attach them to your EC2 instances on Sunday.
In this scenario, you have to create storage volumes in the AWS Cloud that your on-premises applications can access as Internet Small Computer System Interface (iSCSI) targets, which is not mentioned in the option. This is not a cost-effective solution considering that you only have to migrate 1 TB of data and that you can simply use the s3 sync command via the AWS CLI.
The following option is incorrect:
1. Synchronize the data from your on-premises data center to an S3 bucket using Multipart upload for large files from Saturday morning to Sunday evening.
2. Configure your application hosted in AWS to use the S3 bucket to serve the aeronautical data files.
Although multipart upload in S3 is faster than a regular S3 upload, this method still carries a risk considering that the company only has a 50-Mbps Internet connection. It is still better to run the s3 sync command via the AWS CLI days before the actual migration to mitigate the risk.
References:
Check out this Amazon S3 Cheat Sheet:
Question 29: Skipped
A company has a suite of IBM products in its on-premises data centers, such as IBM WebSphere, IBM MQ, and IBM DB2 servers. The solutions architect has been tasked to migrate all of the current systems to the AWS Cloud in the most cost-effective way and improve the availability of the cloud infrastructure.
Which of the following options is the MOST suitable solution that the solutions architect should implement to meet the company's requirements?
Use the AWS Application Migration Service to migrate your servers to AWS. Set up Amazon EC2 instances to re-host your IBM WebSphere and IBM DB2 servers separately. Re-host and migrate the IBM MQ service to Amazon SQS Standard Queue.
Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, re-architect, and migrate the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Re-host and migrate the IBM MQ service to Amazon SQS FIFO Queue.
Use the AWS Application Migration Service to migrate your servers to AWS. Upload the IBM licenses to AWS License Manager and use the licenses when configuring Amazon EC2 instances to re-host your IBM WebSphere and IBM DB2 servers separately. Re-host and migrate the IBM MQ service to Amazon MQ.
Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, migrate, and re-architect the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Migrate and re-platform IBM MQ to Amazon MQ in a phased approach.
(Correct)
Explanation
On Amazon EC2, you can run many of the proven IBM technologies with which you're already familiar. You may be eligible to bring your own IBM software and licenses (BYOSL) to run on Amazon EC2 instances.
AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) allow you to convert and migrate IBM Db2 databases on Linux, UNIX, and Windows (Db2 LUW) to any DMS-supported target. This can accelerate your move to the cloud by allowing you to migrate more of your legacy databases. The Db2 LUW source adds to the existing list of relational database, NoSQL, and object store sources supported by DMS. If the database migration target is Amazon Aurora, Amazon Redshift, or Amazon DynamoDB, you can use DMS free for six months.
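As a rough illustration of the DMS side of this migration, the sketch below creates a full-load-plus-CDC replication task with boto3. All ARNs are placeholders, and it assumes the source and target endpoints and the replication instance were created beforehand (typically after converting the schema with AWS SCT).

```python
# Hypothetical sketch: creating a DMS task that performs the full load plus
# ongoing replication (CDC) from the Db2 LUW source to the Aurora target.
# All ARNs are placeholders and assume the endpoints and replication instance
# were created beforehand.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="db2-to-aurora-migration",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",   # full load, then replicate ongoing changes
    TableMappings=json.dumps(table_mappings),
)
```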
Amazon MQ is a managed message broker service from AWS that makes it easy to set up and operate message brokers in the cloud. To migrate and re-platform your on-premises IBM MQ to Amazon MQ, you can opt for a phased approach for the migration process. You can move the producers (senders) and consumers (receivers) in phases from your on-premises to the cloud. This process uses Amazon MQ as the message broker and decommissions IBM MQ once all producers/consumers have been successfully migrated.
Hence, the correct option is: Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, migrate, and re-architect the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Migrate and re-platform IBM MQ to Amazon MQ in a phased approach.
The option that says: Use the AWS Application Migration Service to migrate your servers to AWS. Set up Amazon EC2 instances to re-host your IBM WebSphere and IBM DB2 servers separately. Re-host and migrate the IBM MQ service to Amazon SQS Standard Queue is incorrect because the AWS Application Migration Service simply automates the migration of your on-premises virtual machines to the AWS Cloud. Moreover, you can't re-host and migrate your IBM MQ service to Amazon SQS. A better solution is to re-platform IBM MQ to Amazon MQ.
The option that says: Use the AWS Application Migration Service to migrate your servers to AWS. Upload the IBM licenses to AWS License Manager and use the licenses when configuring Amazon EC2 instances to re-host your IBM WebSphere and IBM DB2 servers separately. Re-host and migrate the IBM MQ service to Amazon MQ is incorrect because the AWS Application Migration Service simply automates the migration of your on-premises servers to the AWS Cloud. AWS License Manager is used to create customized licensing rules that emulate the terms of your licensing agreements and then enforce those rules; it is not a repository for storing software licenses. In addition, you cannot directly re-host and migrate your IBM MQ to Amazon MQ since these are two completely different systems. Instead, you have to re-platform IBM MQ to Amazon MQ in a phased approach.
The option that says: Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, re-architect and migrate the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Re-host and migrate the IBM MQ service to Amazon SQS FIFO Queue is incorrect. Although the use of AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) is correct, the migration process for your IBM MQ is wrong. Amazon Simple Queue Service (SQS) is just a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. There are a lot of features in IBM MQ that are not available in Amazon SQS, whether it is a Standard or a FIFO Queue. You have to re-platform IBM MQ to Amazon MQ instead.
References:
Check out these cheat sheets on AWS Database Migration Service and AWS Application Migration Service:
Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
Question 31: Skipped
A company recently launched its new e-commerce platform that is hosted on its on-premises data center. The web servers connect to a MySQL database. The e-commerce platform is quickly gaining popularity, and the management is worried that the on-premises servers won't be able to keep up with user traffic in the coming months. They decided to migrate the entire application to AWS to take advantage of the scalability of the cloud. The following are required for this migration:
- Improve the security of the application.
- Increase the reliability and availability of the application.
- Reduce the latency between the users and the application.
- Reduce the maintenance overhead after the migration to the cloud.
Which of the following options should the Solutions Architect implement to meet the company's requirements? (Select TWO.)
Create an Auto Scaling group of Amazon EC2 instances spread across two Availability Zones to host the web servers. Use Amazon Aurora for MySQL with Multi-AZ enabled as the database.
(Correct)
Create an Amazon S3 bucket to store the static contents and enable website hosting. To reduce the latency when serving content, set this bucket as the origin for an Amazon CloudFront distribution. Create AWS WAF rules to block common web exploits.
(Correct)
Create an Auto Scaling group of Amazon EC2 instances spread across two Availability Zones to host the web servers and a highly available MySQL database cluster in a master and slave configuration.
Create an Amazon S3 bucket to store the static contents and enable website hosting. To reduce the latency when serving content, enable S3 Transfer Acceleration. Create AWS WAF rules to block common web exploits.
Create an Auto Scaling group of Amazon EC2 instances spread across two Availability Zones to host the web servers. To reduce the latency when serving content, create an AWS Global Accelerator endpoint. Migrate the database to an Amazon RDS MySQL instance.
Explanation
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable.
Adding Amazon EC2 Auto Scaling to your application architecture is one way to maximize the benefits of the AWS Cloud. When you use Amazon EC2 Auto Scaling, your applications gain the following benefits:
- Better fault tolerance. Amazon EC2 Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can also configure Amazon EC2 Auto Scaling to use multiple Availability Zones. If one Availability Zone becomes unavailable, Amazon EC2 Auto Scaling can launch instances in another one to compensate.
- Better availability. Amazon EC2 Auto Scaling helps ensure that your application always has the right amount of capacity to handle the current traffic demand.
- Better cost management. Amazon EC2 Auto Scaling can dynamically increase and decrease capacity as needed. Because you pay for the EC2 instances you use, you save money by launching instances when they are needed and terminating them when they aren't.
Amazon CloudFront works seamlessly with Amazon Simple Storage Service (S3) to accelerate the delivery of your web content and reduce the load on your origin servers. The CloudFront edge locations will cache and deliver your content closer to your users to reduce latency and offload capacity from your origin. CloudFront will also restrict access to your S3 bucket to only CloudFront endpoints rendering your content and application more secure and performant.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.
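As a hedged sketch of the WAF piece, the snippet below creates a CloudFront-scoped web ACL that enables the AWS managed Common Rule Set. The ACL and metric names are placeholders, and CLOUDFRONT-scoped web ACLs must be created in the us-east-1 Region.

```python
# Hypothetical sketch: creating a WAF web ACL with an AWS managed rule group
# for common web exploits, to be associated with a CloudFront distribution.
# The web ACL name and metric names are placeholders.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="ecommerce-web-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},   # allow traffic unless a rule blocks it
    Rules=[{
        "Name": "common-exploits",
        "Priority": 0,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesCommonRuleSet",
            }
        },
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "common-exploits",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ecommerce-web-acl",
    },
)
# The returned web ACL ARN can then be referenced in the CloudFront
# distribution configuration so the rules are evaluated at the edge.
```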
The option that says: Create an Auto Scaling group of Amazon EC2 instances spread across two Availability Zones to host the web servers. Use Amazon Aurora for MySQL with Multi-AZ enabled as the database is correct. The Auto Scaling group ensures there are enough web servers to answer user requests. Spreading the instances across at least two AZs and enabling Multi-AZ on the database ensure that your application is protected if one AZ in AWS goes down.
The option that says: Create an Amazon S3 bucket to store the static contents and enable website hosting. To reduce the latency when serving content, set this bucket as the origin for an Amazon CloudFront distribution. Create AWS WAF rules to block common web exploits is correct. Amazon S3 can serve static content and Amazon CloudFront can cache frequently requested content, which greatly improves latency. AWS WAF has default rules that you can enable to block common web exploits to improve your application security in AWS.
The option that says: Create an Auto Scaling group of Amazon EC2 instances spread across two Availability Zones to host the web servers and a highly available MySQL database cluster in a master and slave configuration is incorrect. Spreading EC2 instances across multiple AZs is good for availability; however, hosting the database on the EC2 instances requires a lot of management overhead. It is recommended to use Amazon Aurora for the database.
The option that says: Create an Amazon S3 bucket to store the static contents and enable website hosting. To reduce the latency when serving content, enable S3 Transfer Acceleration. Create AWS WAF rules to block common web exploits is incorrect. Amazon S3 can serve static content, but S3 Transfer Acceleration is used for accelerating transfers going to and from the S3 bucket. Caching with CloudFront would be even faster since the user doesn't need to go to the S3 bucket for their cached requests.
The option that says: Create an Auto Scaling group of Amazon EC2 instances spread across two Availability Zones to host the web servers. To reduce the latency when serving content, create an AWS Global Accelerator endpoint. Migrate the database to an Amazon RDS MySQL instance is incorrect. Without Multi-AZ enabled, your RDS instance is not protected in the event of a crash or failure, and promoting a read replica to become the new primary requires a short downtime. This reduces the application's availability.
References:
Check out these Amazon Aurora, Amazon S3, and Amazon CloudFront Cheat Sheets:
Question 34: Skipped
A company has deployed a multi-tier web application on AWS that uses Compute Optimized Instances for server-side processing and Storage Optimized EC2 Instances to store various media files. To ensure data durability, there is a scheduled job that replicates the files to each EC2 instance. The current architecture worked for a few months but it started to fail as the number of files grew, which is why the management decided to redesign the system.
Which of the following options should the solutions architect implement in order to launch a new architecture with improved data durability and cost-efficiency?
Migrate all media files to an Amazon S3 bucket and use this as the origin for the new CloudFront web distribution. Set up an Elastic Load Balancer with an Auto Scaling group of EC2 instances to host the web servers. Use a combination of Cost Explorer and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings.
(Correct)
Migrate all media files to Amazon EFS then attach this new drive as a mount point to a new set of Storage Optimized EC2 Instances. For the web servers, set up an Elastic Load Balancer with an Auto Scaling group of EC2 instances and use this as the origin for a new Amazon CloudFront web distribution. Use a combination of Cost Explorer and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings.
Migrate the web application to AWS Elastic Beanstalk and move all media files to Amazon EFS for durable and scalable storage. Set up an Amazon CloudFront distribution with EFS as the origin. Use a combination of Consolidated Billing and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings.
Migrate and host the entire web application on Amazon S3 for more cost-effective web hosting. Enable cross-region replication to improve data durability. Use a combination of Consolidated Billing and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings.
Explanation
Cloud storage is a cloud computing model that stores data on the Internet through a cloud computing provider who manages and operates data storage as a service. It’s delivered on demand with just-in-time capacity and costs, and eliminates buying and managing your own data storage infrastructure. This gives you agility, global scale and durability, with 'anytime, anywhere' data access.
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.
AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Get started quickly by creating custom reports that analyze cost and usage data, both at a high level and for highly-specific requests. Using AWS Cost Explorer, you can dive deeper into your cost and usage data to identify trends, pinpoint cost drivers, and detect anomalies.
In this scenario, you can use a combination of Cost Explorer and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings. For data storage, you can use either S3 or EFS to store the media files. However, S3 is cheaper than EFS and is more suitable for storing static media files.
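For illustration, here is a small sketch of how operating costs could be pulled per service with the Cost Explorer API. The date range is a placeholder, and the caller needs the ce:GetCostAndUsage permission.

```python
# Hypothetical sketch: pulling one month of cost per service with Cost Explorer
# so the team can track the operating cost of the new architecture.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print the unblended cost of each AWS service for the period.
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: ${float(amount):.2f}")
```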
Therefore, the correct answer is: Migrate all media files to an Amazon S3 bucket and use this as the origin for the new CloudFront web distribution. Set up an Elastic Load Balancer with an Auto Scaling group of EC2 instances to host the web servers. Use a combination of Cost Explorer and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings.
The option that says: Migrate and host the entire web application on Amazon S3 for more cost-effective web hosting. Enable cross-region replication to improve data durability. Use a combination of Consolidated Billing and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings is incorrect. Amazon S3 is not capable of handling server-side processing, as it can only be used for static websites. Moreover, you can only use Consolidated Billing if your account is configured to use AWS Organizations.
The option that says: Migrate all media files to Amazon EFS then attach this new drive as a mount point to a new set of Storage Optimized EC2 Instances. For the web servers, set up an Elastic Load Balancer with an Auto Scaling group of EC2 instances and use this as the origin for a new Amazon CloudFront web distribution. Use a combination of Cost Explorer and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings is incorrect. Although this new setup may work, it entails a higher cost to maintain a new set of Storage Optimized EC2 Instances along with EFS. There is also the added cost of maintaining your web tier, which is composed of an Elastic Load Balancer with another Auto Scaling group of EC2 instances. It is also better to use S3 instead of EFS since you are only storing media files and not documents that require file locking or POSIX-compliant storage.
The option that says: Migrate the web application to AWS Elastic Beanstalk and move all media files to Amazon EFS for durable and scalable storage. Set up an Amazon CloudFront distribution with EFS as the origin. Use a combination of Consolidated Billing and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings is incorrect. You cannot set EFS as the origin of your CloudFront web distribution. The scenario also doesn't mention the use of AWS Organizations, which is why Consolidated Billing is not applicable.
References:
Check out this Amazon S3 Cheat Sheet:
Check out this Amazon S3, EBS, and EFS comparison:
Question 59: Skipped
An analytics company hosts its data processing application on its on-premises data center. Data scientists upload input files through a web portal, which are then stored in the company NAS. For every uploaded file, the web server sends a message to the processing server over a message queue. It can take up to 30 minutes to process each file on the NAS. During business hours, the number of files awaiting processing is significantly higher, and it can take a while for the processing servers to catch up. The number of files significantly declines after business hours. The company has tasked the solutions architect to migrate this workload to the AWS cloud.
Which of the following options is the recommended solution while being cost-effective?
Reconfigure the web application to publish messages to a new Amazon SQS queue. Write an AWS Lambda function to pull messages from the SQS queue and process the files. Trigger the Lambda function for every new message on the queue. Store the processed files on an Amazon S3 bucket.
Reconfigure the web application to publish messages to a new Amazon MQ queue. Create an auto-scaling group of Amazon EC2 instances to pull messages from the queue and process the files. Store the processed files on an Amazon EFS volume. Power off the EC2 instances when there are no messages left on the queue.
Reconfigure the web application to publish messages to a new Amazon SQS queue. Create an auto-scaling group of Amazon EC2 instances based on the SQS queue length to pull messages from the queue and process the files. Store the processed files on an Amazon S3 bucket.
(Correct)
Reconfigure the web application to publish messages to a new Amazon MQ queue. Write an AWS Lambda function to pull messages from the SQS queue and process the files. Trigger the Lambda function for every new message on the queue. Store the processed files in Amazon EFS.
Explanation
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components.
There are some scenarios where you might think about scaling in response to activity in an Amazon SQS queue. For example, suppose that you have a web app that lets users upload images and use them online. In this scenario, each image requires resizing and encoding before it can be published. The app runs on EC2 instances in an Auto Scaling group, and it's configured to handle your typical upload rates. Unhealthy instances are terminated and replaced to maintain current instance levels at all times. The app places the raw bitmap data of the images in an SQS queue for processing. It processes the images and then publishes the processed images where they can be viewed by users. The architecture for this scenario works well if the number of image uploads doesn't vary over time. But if the number of uploads changes over time, you might consider using dynamic scaling to scale the capacity of your Auto Scaling group.
There are three main parts to this configuration:
- An Auto Scaling group to manage EC2 instances for the purposes of processing messages from an SQS queue.
- A custom metric to send to Amazon CloudWatch that measures the number of messages in the queue per EC2 instance in the Auto Scaling group.
- A target tracking policy that configures your Auto Scaling group to scale based on the custom metric and a set target value. CloudWatch alarms invoke the scaling policy.
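The sketch below illustrates (not prescribes) these parts with boto3: a helper that publishes a backlog-per-instance custom metric, and a target tracking policy on the Auto Scaling group. The queue URL, group name, namespace, and the target of 10 messages per instance are all placeholder assumptions; a real setup would run the metric publisher on a schedule.

```python
# Hypothetical sketch of the configuration described above: publish a
# "backlog per instance" custom metric and attach a target tracking policy
# to the Auto Scaling group. Names and the target value are placeholders.
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/file-processing"
ASG_NAME = "file-processing-asg"

def publish_backlog_per_instance() -> None:
    # Messages waiting in the queue divided by running instances in the group.
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]
    instances = max(len(group["Instances"]), 1)
    cloudwatch.put_metric_data(
        Namespace="FileProcessing",
        MetricData=[{"MetricName": "BacklogPerInstance",
                     "Value": backlog / instances, "Unit": "None"}],
    )

publish_backlog_per_instance()

# Target tracking policy: keep roughly 10 queued files per instance.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="sqs-backlog-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "FileProcessing",
            "Statistic": "Average",
        },
        "TargetValue": 10.0,
    },
)
```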
Therefore, the correct answer is: Reconfigure the web application to publish messages to a new Amazon SQS queue. Create an auto-scaling group of Amazon EC2 instances based on the SQS queue length to pull messages from the queue and process the files. Store the processed files on an Amazon S3 bucket. This is a cost-effective solution as the EC2 instances scale out depending on the length of the SQS queue, and Amazon S3 is cheaper compared to Amazon EFS.
The option that says: Reconfigure the web application to publish messages to a new Amazon SQS queue. Write an AWS Lambda function to pull messages from the SQS queue and process the files. Trigger the Lambda function for every new message on the queue. Store the processed files on an Amazon S3 bucket is incorrect. The files may take up to 30 minutes to be processed. AWS Lambda has an execution limit of 15 minutes.
The option that says: Reconfigure the web application to publish messages to a new Amazon MQ queue. Create an auto-scaling group of Amazon EC2 instances to pull messages from the queue and process the files. Store the processed files on an Amazon EFS volume. Power off the EC2 when there are no messages left on the queue is incorrect. This is possible but not the most cost-effective solution. Amazon EFS is significantly more expensive than Amazon S3.
The option that says: Reconfigure the web application to publish messages to a new Amazon MQ queue. Write an AWS Lambda function to pull messages from the SQS queue and process the files. Trigger the Lambda function for every new message on the queue. Store the processed files in Amazon EFS is incorrect. The files may take up to 30 minutes to be processed. AWS Lambda has an execution limit of 15 minutes. Amazon EFS is significantly more expensive than Amazon S3.
References:
Check out these Amazon EC2 and Amazon SQS Cheat Sheets:
Question 61: Skipped
A multinational consumer goods company is currently using a VMware vCenter Server to manage its virtual machines, multiple ESXi hosts, and all dependent components from a single centralized location. To save costs and to take advantage of the benefits of cloud computing, the company decided to move its virtual machines to AWS. The Solutions Architect is required to generate new AMIs of the existing virtual machines, which can then be launched as EC2 instances in the company VPC.
Which combination of steps should the Solutions Architect do to properly execute the cloud migration? (Select TWO.)
Use Serverless Application Model (SAM) to migrate the virtual machines (VMs) to AWS and automatically launch an Amazon ECS Cluster to host the VMs.
Use the AWS Application Migration Service to migrate your on-premises workloads to the AWS cloud.
(Correct)
Create an AWS CloudFormation template that mirrors the on-premises virtualized environment. Deploy the stack to the AWS cloud.
Establish a Direct Connect connection between your data center and your VPC. Use AWS Service Catalog to centrally manage all your IT services and to quickly migrate virtual machines to your virtual private cloud.
Install the AWS Replication Agent in your on-premises virtualization environment.
(Correct)
Explanation
AWS Application Migration Service (MGN) is a highly automated lift-and-shift (rehost) solution that simplifies, expedites, and reduces the cost of migrating applications to AWS. It enables companies to lift and shift a large number of physical, virtual, or cloud servers without compatibility issues, performance disruption, or long cutover windows.
MGN replicates source servers into your AWS account. When you’re ready, it automatically converts and launches your servers on AWS so you can quickly benefit from the cost savings, productivity, resilience, and agility of the Cloud.
Once your applications are running on AWS, you can leverage AWS services and capabilities to quickly and easily re-platform or refactor those applications – which makes lift-and-shift a fast route to modernization.
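As a rough, assumption-laden sketch of what the cutover step could look like with the boto3 mgn client (the call shapes shown here are assumptions based on the MGN DescribeSourceServers and StartCutover actions; server IDs and the Region are environment-specific):

```python
# Rough sketch (assumption: the boto3 "mgn" client calls behave as shown) of
# checking replication status after the AWS Replication Agent has been
# installed on the vCenter-managed VMs, then launching the cutover instances.
import boto3

mgn = boto3.client("mgn", region_name="us-east-1")

# List the source servers that the installed agents have registered with MGN.
servers = mgn.describe_source_servers(filters={"isArchived": False})["items"]

ready = []
for server in servers:
    state = server["lifeCycle"]["state"]
    print(server["sourceServerID"], state)
    if state == "READY_FOR_CUTOVER":
        ready.append(server["sourceServerID"])

# Launch cutover instances (new EC2 instances built from the replicated data)
# for every server that has finished testing.
if ready:
    mgn.start_cutover(sourceServerIDs=ready)
```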
Therefore, the correct answers are:
- Use the AWS Application Migration Service to migrate your on-premises workloads to the AWS cloud.
- Install the AWS Replication Agent in your on-premises virtualization environment.
The option that says: Use Serverless Application Model (SAM) to migrate the virtual machines (VMs) to AWS and automatically launch an Amazon ECS Cluster to host the VMs is incorrect because the Serverless Application Model (SAM) is primarily used to build serverless applications on AWS, not to migrate virtual machines from your on-premises data center.
The option that says: Create an AWS CloudFormation template that mirrors the on-premises virtualized environment. Deploy the stack to the AWS cloud is incorrect. This is possible; however, the stack would launch resources from standard AWS AMIs rather than generating AMIs from the existing VMs. CloudFormation is ideal for creating and deploying new environments in the AWS Cloud, not for migrating existing servers.
The option that says: Establish a Direct Connect connection between your data center and your VPC. Use AWS Service Catalog to centrally manage all your IT services and to quickly migrate virtual machines to your virtual private cloud is incorrect. The AWS Service Catalog is primarily used to allow organizations to create and manage catalogs of IT services that are approved for use on AWS. It enables you to centrally manage commonly deployed IT services and helps you achieve consistent governance and meet your compliance requirements while enabling users to quickly deploy only the approved IT services they need. However, this service is not suitable for migrating your virtual machines.
References:
Check out this AWS Application Migration Service Cheat Sheet: