Mock Exam: AWS Certified Solutions Architect Associate

You would like to host a static website for digitalcloud.training on AWS. You will be using Route 53 to direct traffic to the website. Which of the below steps would help you achieve your objectives? (Choose 2)   

Options are :

  • Create an "Alias" record that points to the S3 bucket (Correct)
  • Create an S3 bucket named digitalcloud.training (Correct)
  • Create an "SRV" record that points to the S3 bucket
  • Create a "CNAME" record that points to the S3 bucket
  • Use any existing S3 bucket that has public read access enabled

Answer : Create an "Alias" record that points to the S3 bucket; Create an S3 bucket named digitalcloud.training

Explanation: S3 can be used to host static websites, and you can use a custom domain name with S3 via a Route 53 Alias record. When using a custom domain name, the bucket name must be the same as the domain name. The Alias record is a Route 53-specific record type. Alias records are used to map resource record sets in your hosted zone to Amazon Elastic Load Balancing load balancers, Amazon CloudFront distributions, AWS Elastic Beanstalk environments, or Amazon S3 buckets that are configured as websites. You cannot use just any existing bucket when you want a custom domain name; as noted above, the bucket name must match the domain name. You must use an Alias record when configuring an S3 bucket as a static website - you cannot use SRV or CNAME records.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-route-53/
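
As a rough sketch of the configuration described above (the hosted zone ID, region, and endpoint values are placeholders, not taken from the question), the Alias record could be created with boto3:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone for digitalcloud.training
    ChangeBatch={
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "digitalcloud.training",
                "Type": "A",
                "AliasTarget": {
                    # Hosted zone ID of the S3 website endpoint for the
                    # bucket's region (value shown is us-east-1's).
                    "HostedZoneId": "Z3AQBSTGFYJSTF",
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```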

Your company runs a two-tier application on the AWS cloud that is composed of a web front-end and an RDS database. The web front-end uses multiple EC2 instances in multiple Availability Zones (AZ) in an Auto Scaling group behind an Elastic Load Balancer. Your manager is concerned about a single point of failure in the RDS database layer.

What would be the most effective approach to minimizing the risk of an AZ failure causing an outage to your database layer?

Options are :

  • Enable Multi-AZ for the RDS DB instance (Correct)
  • Create a Read Replica of the RDS DB instance in another AZ
  • Take a snapshot of the database
  • Increase the DB instance size

Answer : Enable Multi-AZ for the RDS DB instance

Explanation: Multi-AZ RDS creates a replica in another AZ and synchronously replicates to it. This provides a DR solution: if the AZ in which the primary DB resides fails, Multi-AZ will automatically fail over to the replica instance with minimal downtime. Read replicas are used for read-heavy DBs and replication is asynchronous. Read replicas do not provide HA/DR as you cannot fail over to a read replica; they are used purely for offloading read requests from the primary DB. Taking a snapshot of the database is useful for being able to recover from a failure so you can restore the database. However, this does not prevent an outage from happening, as there will be significant downtime while you try to restore the snapshot to a new DB instance in another AZ. Increasing the DB instance size will not provide any high availability or fault tolerance benefits; it will only serve to improve the performance of the DB.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/
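
A minimal sketch of enabling Multi-AZ on an existing instance with boto3, assuming a hypothetical DB identifier:

```python
import boto3

rds = boto3.client("rds")

# ApplyImmediately triggers the conversion now rather than waiting for
# the next maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",  # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```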

You would like to create a highly available web application that serves static content using multiple On-Demand EC2 instances.

Which of the following AWS services will help you to achieve this? (choose 2)

Options are :

  • Elastic Load Balancer and Auto Scaling (Correct)
  • Multiple Availability Zones (Correct)
  • DynamoDB and ElastiCache
  • Direct Connect
  • Amazon S3 and CloudFront

Answer : Elastic Load Balancer and Auto Scaling; Multiple Availability Zones

Explanation: None of the answer options presents the full solution; however, you have been asked which services will help you to achieve the desired outcome. In this case we need high availability for On-Demand EC2 instances. A single Auto Scaling group enables the On-Demand instances to be launched into multiple Availability Zones, with an Elastic Load Balancer distributing incoming connections to the available EC2 instances. This provides high availability and elasticity. Amazon S3 and CloudFront could be used to serve static content from an S3 bucket; however, the question states that the web application runs on EC2 instances. DynamoDB and ElastiCache are both database services, not web application services, and cannot help deliver high availability for EC2 instances. Direct Connect is used for connecting on-premises data centers to AWS using a private network connection and does not help in this situation at all.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/

You are a Solutions Architect at Digital Cloud Training. A client of yours is using API Gateway for accepting and processing a large number of API calls to AWS Lambda. The client’s business is rapidly growing and he is therefore expecting a large increase in traffic to his API Gateway and AWS Lambda services.

The client has asked for advice on ensuring the services can scale without any reduction in performance. What advice would you give to the client? (choose 2)   

Options are :

  • AWS Lambda scales concurrently executing functions up to your default limit (Correct)
  • API Gateway scales manually through the assignment of provisioned throughput
  • API Gateway scales up to the default throttling limit, with some additional burst capacity available (Correct)
  • API Gateway can only scale up to the fixed throttling limits
  • AWS Lambda automatically scales up by using larger instance sizes for your functions

Answer : AWS Lambda scales concurrently executing functions up to your default limit; API Gateway scales up to the default throttling limit, with some additional burst capacity available

Explanation: API Gateway can scale to any level of traffic received by an API. API Gateway scales up to the default throttling limit of 10,000 requests per second, with additional burst capacity of up to 5,000 requests. Throttling is used to protect back-end instances from traffic spikes. Lambda uses continuous scaling - it scales out, not up. Lambda scales concurrently executing functions up to your default limit (1,000). API Gateway does not use provisioned throughput; that is used to provision performance in DynamoDB. API Gateway can scale past the default throttling limits (they are not fixed, you just have to apply to have them adjusted).
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/

An application you manage uses Auto Scaling and a fleet of EC2 instances. You recently noticed that Auto Scaling is scaling the number of instances up and down multiple times in the same hour. You need to implement a remediation to reduce the number of scaling events. The remediation must be cost-effective and preserve elasticity.

What design changes would you implement? (choose 2)   

Options are :

  • Modify the Auto Scaling group termination policy to terminate the oldest instance first
  • Modify the Auto Scaling group cool-down timers (Correct)
  • Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy (Correct)
  • Modify the Auto Scaling policy to use scheduled scaling actions
  • Modify the Auto Scaling group termination policy to terminate the newest instance first

Answer : Modify the Auto Scaling group cool-down timers; Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy

Explanation: The cooldown period is a configurable setting for your Auto Scaling group that helps ensure it doesn't launch or terminate additional instances before the previous scaling activity takes effect, so this would help. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities. The CloudWatch alarm evaluation period is the number of the most recent data points to evaluate when determining alarm state. This would help as you can increase the number of data points required to trigger an alarm. The order in which Auto Scaling terminates instances is not the issue here; the problem is that the workload is dynamic and Auto Scaling is constantly reacting to change, launching or terminating instances.
References: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html#alarm-evaluation https://digitalcloud.guru/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/
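
A hedged sketch of both remediations with boto3; the group name, alarm name, thresholds, and policy ARN are all hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Lengthen the default cooldown (in seconds) on a hypothetical ASG.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    DefaultCooldown=600,
)

# Require three consecutive 5-minute periods of low CPU before the
# scale-down alarm fires, smoothing out short-lived dips.
cloudwatch.put_metric_alarm(
    AlarmName="my-asg-scale-down",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=30.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:autoscaling:..."],  # scale-down policy ARN (elided)
)
```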

An EBS-backed EC2 instance has been configured with some proprietary software that uses an embedded license. You need to move the EC2 instance to another Availability Zone (AZ) within the region. How can this be accomplished? Choose the best answer.   

Options are :

  • Take a snapshot of the instance. Create a new EC2 instance and perform a restore from the snapshot
  • Create an image from the instance. Launch an instance from the AMI in the destination AZ (Correct)
  • Use the AWS Management Console to select a different AZ for the existing instance
  • Perform a copy operation to move the EC2 instance to the destination AZ

Answer : Create an image from the instance. Launch an instance from the AMI in the destination AZ

Explanation: The easiest and recommended option is to create an AMI (image) from the instance and launch an instance from the AMI in the other AZ. AMIs are backed by snapshots, which in turn are backed by S3, so the data is available from any AZ within the region. You could take a snapshot, launch an instance in the destination AZ, stop the instance, detach its root volume, create a volume from the snapshot you took, and attach it to the instance; however, this is not the best option. There's no way to move an EC2 instance from the management console, and you cannot perform a copy operation to move the instance.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://aws.amazon.com/premiumsupport/knowledge-center/move-ec2-instance/
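
A sketch of the recommended approach with boto3; the instance ID, AMI name, instance type, and target AZ are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create an AMI from the existing instance.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="licensed-app-image",
)

# Wait until the image is available, then launch it into the target AZ.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "us-east-1b"},
)
```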

Your manager has asked you to explain how Amazon ElastiCache may assist with the company’s plans to improve the performance of database queries.

Which of the below statements is a valid description of the benefits of Amazon ElastiCache? (Choose 2)

Options are :

  • ElastiCache can form clusters using a mixture of Memcached and Redis caching engines, allowing you to take advantage of the best features of each caching engine
  • ElastiCache is best suited for scenarios where the data base load type is OLTP
  • The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads (Correct)
  • ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud (Correct)
  • ElastiCache nodes can be accessed directly from the Internet and EC2 instances in other regions, which allows you to improve response times for queries over long distances

Answer : The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads; ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud

Explanation: ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads. ElastiCache is best for scenarios where the DB load is based on Online Analytics Processing (OLAP) transactions, not Online Transaction Processing (OLTP). ElastiCache nodes cannot be accessed from the Internet, nor can they be accessed by EC2 instances in other VPCs. You cannot mix Memcached and Redis in a cluster.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-route-53/

You have been tasked with building an ECS cluster using the EC2 launch type and need to ensure container instances can connect to the cluster. A colleague informed you that you must ensure the ECS container agent is installed on your EC2 instances. You have chosen to use the Amazon ECS-optimized AMI.

Which of the statements below are correct? (Choose 2)

Options are :

  • The Amazon ECS container agent is installed on the AWS managed infrastructure used for tasks using the EC2 launch type so you don’t need to do anything
  • The Amazon ECS container agent must be installed for all AMIs
  • The Amazon ECS container agent is included in the Amazon ECS-optimized AMI (Correct)
  • You can install the ECS container agent on any Amazon EC2 instance that supports the Amazon ECS specification (Correct)
  • You can only install the ECS container agent on Linux instances

Answer : The Amazon ECS container agent is included in the Amazon ECS-optimized AMI; You can install the ECS container agent on any Amazon EC2 instance that supports the Amazon ECS specification

Explanation: The ECS container agent allows container instances to connect to the cluster and runs on each infrastructure resource in an ECS cluster. The ECS container agent is included in the Amazon ECS-optimized AMI and can also be installed on any EC2 instance that supports the ECS specification (it is only supported on EC2 instances). It is available for Linux and Windows. The ECS container agent does not need to be installed for all AMIs as it is included in the Amazon ECS-optimized AMI. With the EC2 launch type the container agent is not installed on AWS managed infrastructure; however, this is true for the Fargate launch type. You can install the ECS container agent on Windows instances.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/
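
A sketch of how an instance built from the ECS-optimized AMI might be pointed at a specific cluster via the agent's /etc/ecs/ecs.config file; the AMI ID, cluster name, and instance profile are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# On the ECS-optimized AMI the pre-installed agent reads /etc/ecs/ecs.config;
# setting ECS_CLUSTER there registers the instance with a non-default cluster.
user_data = """#!/bin/bash
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical ECS-optimized AMI ID
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "ecsInstanceRole"},
    UserData=user_data,
)
```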

Your organization is deploying a multi-language website on the AWS Cloud. The website uses CloudFront as the front-end and the language is specified in the HTTP request:

·       http://d12345678aabbcc0.cloudfront.net/main.html?language=en

·       http://d12345678aabbcc0.cloudfront.net/main.html?language=sp

·       http://d12345678aabbcc0.cloudfront.net/main.html?language=fr

You need to configure CloudFront to deliver the cached content. What method can be used?

Options are :

  • Signed Cookies
  • Signed URLs
  • Query string parameters (Correct)
  • Origin Access Identity

Answer : Query string parameters

Explanation: Query string parameters cause CloudFront to forward query strings to the origin and to cache based on the language parameter. Signed URLs and Cookies provide additional control over access to content. Origin access identities are used to control access to CloudFront distributions.
References: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/
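
For illustration, a fragment of a distribution's cache behavior (as passed to boto3's create_distribution or update_distribution) that forwards and caches on the language parameter only; the origin ID is hypothetical and the rest of the distribution config is omitted:

```python
# Fragment of a CloudFront DefaultCacheBehavior dict; only the
# "language" query string becomes part of the cache key.
cache_behavior = {
    "TargetOriginId": "my-origin",  # hypothetical origin ID
    "ViewerProtocolPolicy": "redirect-to-https",
    "ForwardedValues": {
        "QueryString": True,
        "QueryStringCacheKeys": {"Quantity": 1, "Items": ["language"]},
        "Cookies": {"Forward": "none"},
    },
    "TrustedSigners": {"Enabled": False, "Quantity": 0},
    "MinTTL": 0,
}
```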

A company is launching a new application and expects it to be very popular. The company requires a database layer that can scale along with the application. The schema will change frequently and the application cannot afford any downtime for database changes.

Which AWS service allows the company to achieve these requirements?

Options are :

  • Amazon RDS MySQL
  • Amazon DynamoDB (Correct)
  • Amazon Aurora
  • Amazon RedShift

Answer : Amazon DynamoDB

Explanation: DynamoDB is a NoSQL DB, which means you can change the schema easily. It is also the only DB in the list that you can scale without any downtime. Amazon Aurora, RDS MySQL, and RedShift all require changing instance sizes in order to scale, which causes an outage. They are also all relational (SQL) databases, so changing the schema is difficult.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet that was created with default ACL settings. The web servers must be accessible only to customers on an SSL connection. The database should only be accessible to web servers in a public subnet. 

Which solution meets these requirements without impacting other running applications? (choose 2)

Options are :

  • Create a network ACL on the DB subnet, allow MySQL port 3306 inbound for web servers, and deny all outbound traffic
  • Create a network ACL on the web server's subnet, allow HTTPS port 443 inbound, and specify the source as 0.0.0.0/0
  • Create a DB server security group that allows MySQL port 3306 inbound and specify the source as a web server security group (Correct)
  • Create a web server security group that allows HTTPS port 443 inbound traffic from Anywhere (0.0.0.0/0) and apply it to the web servers (Correct)
  • Create a DB server security group that allows the HTTPS port 443 inbound and specify the source as a web server security group

Answer : Create a DB server security group that allows MySQL port 3306 inbound and specify the source as a web server security group; Create a web server security group that allows HTTPS port 443 inbound traffic from Anywhere (0.0.0.0/0) and apply it to the web servers

Explanation: A VPC automatically comes with a modifiable default network ACL that, by default, allows all inbound and outbound IPv4 traffic. Custom network ACLs deny everything inbound and outbound by default, but in this case a default network ACL is being used. Inbound connections to the web servers will be coming in on port 443 from the Internet, so creating a security group that allows this port from 0.0.0.0/0 and applying it to the web servers will allow this traffic. The MySQL DB will be listening on port 3306; therefore, the security group that is applied to the DB servers should allow 3306 inbound from the web servers' security group. The DB server is listening on 3306, so creating a rule allowing 443 inbound will not help.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/
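
A sketch of the two security group rules with boto3, assuming hypothetical group IDs:

```python
import boto3

ec2 = boto3.client("ec2")

WEB_SG = "sg-0aaa1111bbbb2222c"  # hypothetical web tier security group
DB_SG = "sg-0ddd3333eeee4444f"   # hypothetical DB tier security group

# Web tier: HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# DB tier: MySQL only from members of the web tier security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
)
```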

You are considering the security and durability of your data that is stored in Amazon EBS volumes. Which of the statements below is true?   

Options are :

  • You can define the number of AZs to replicate your data to via the API
  • EBS volumes are backed by Amazon S3 which replicates data across multiple facilities within a region
  • EBS volumes are replicated across AZs to protect you from loss of access to an individual AZ
  • EBS volumes are replicated within their Availability Zone (AZ) to protect you from component failure (Correct)

Answer : EBS volumes are replicated within their Availability Zone (AZ) to protect you from component failure

Explanation: EBS volume data is replicated across multiple servers within an AZ. EBS volumes are not replicated across AZs. EBS volumes are not automatically backed up to Amazon S3, so there is no durability there; however, snapshots of EBS volumes do reside on S3. There is no option to define the number of AZs to replicate your data to.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/

Your company is opening a new office in the Asia Pacific region. Users in the new office will need to read data from an RDS database that is hosted in the U.S. To improve performance, you are planning to implement a Read Replica of the database in the Asia Pacific region. However, your Chief Security Officer (CSO) has explained to you that the company policy dictates that all data that leaves the U.S must be encrypted at rest. The master RDS DB is not currently encrypted.

What options are available to you? (choose 2)

Options are :

  • You can enable encryption for the master DB by creating a new DB from a snapshot with encryption enabled (Correct)
  • You can create an encrypted Read Replica that is encrypted with the same key
  • You can create an encrypted Read Replica that is encrypted with a different key (Correct)
  • You can enable encryption for the master DB through the management console
  • You can use an encrypted EBS volume for the Read Replica

Answer : You can enable encryption for the master DB by creating a new DB from a snapshot with encryption enabled; You can create an encrypted Read Replica that is encrypted with a different key

Explanation: You cannot encrypt an existing DB; you need to create a snapshot, copy it, encrypt the copy, then build an encrypted DB from the snapshot. You can encrypt your Amazon RDS instances and snapshots at rest by enabling the encryption option for your Amazon RDS DB instance. Data that is encrypted at rest includes the underlying storage for a DB instance, its automated backups, Read Replicas, and snapshots. A Read Replica of an Amazon RDS encrypted instance is also encrypted, using the same key as the master instance when both are in the same region. If the master and Read Replica are in different regions, you encrypt using the encryption key for that region. You can't have an encrypted Read Replica of an unencrypted DB instance or an unencrypted Read Replica of an encrypted DB instance.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
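
A sketch of the snapshot-copy-restore sequence with boto3; all identifiers and the KMS key alias are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Snapshot the unencrypted master, copy the snapshot with encryption
# enabled, then restore a new (encrypted) instance from the copy.
rds.create_db_snapshot(
    DBInstanceIdentifier="master-db",
    DBSnapshotIdentifier="master-db-snap",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="master-db-snap")

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="master-db-snap",
    TargetDBSnapshotIdentifier="master-db-snap-encrypted",
    KmsKeyId="alias/my-rds-key",  # supplying a key enables encryption
)

rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="master-db-encrypted",
    DBSnapshotIdentifier="master-db-snap-encrypted",
)
```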

A company is planning to move their DNS records to AWS as part of a major migration to the cloud. Which statements are true about Amazon Route 53? (choose 2)

Options are :

  • You can automatically register EC2 instances with private hosted zones
  • You can transfer domains to Route 53 even if the Top-Level Domain (TLD) is unsupported
  • Route 53 can be used to route Internet traffic for domains registered with another domain registrar (Correct)
  • You cannot automatically register EC2 instances with private hosted zones (Correct)

Answer : Route 53 can be used to route Internet traffic for domains registered with another domain registrar; You cannot automatically register EC2 instances with private hosted zones

Explanation: You cannot automatically register EC2 instances with private hosted zones. Route 53 can be used to route Internet traffic for domains registered with another domain registrar (any domain). You can transfer domains to Route 53 only if the Top-Level Domain (TLD) is supported.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-route-53/

You are looking for a method to distribute onboarding videos to your company’s numerous remote workers around the world. The training videos are located in an S3 bucket that is not publicly accessible. Which of the options below would allow you to share the videos?   

Options are :

  • Use a Route 53 Alias record that points to the S3 bucket
  • Use CloudFront and set the S3 bucket as an origin (Correct)
  • Use ElastiCache and attach the S3 bucket as a cache origin
  • Use CloudFront and use a custom origin pointing to an EC2 instance

Answer : Use CloudFront and set the S3 bucket as an origin

Explanation: CloudFront uses origins, which specify the source of the files that the CDN will distribute. Origins can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53 - they can also be external (non-AWS). When using Amazon S3 as an origin you place all of your objects within the bucket. You cannot configure an origin with ElastiCache. You cannot use a Route 53 Alias record to connect to an S3 bucket that is not publicly available. You can configure a custom origin pointing to an EC2 instance, but as the training videos are located in an S3 bucket this would not be helpful.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/

Your company runs a two-tier application that uses web front-ends running on EC2 instances across multiple AZs. The back-end is an RDS multi-AZ database instance. The front-end servers host a Content Management System (CMS) application that stores files that users upload in attached EBS storage. You don’t like having the uploaded files distributed across multiple EBS volumes and are concerned that this solution is not scalable.

You would like to design a solution for storing the files that are uploaded to your EC2 instances that can achieve high levels of aggregate throughput and IOPS. The solution must scale automatically, and provide consistent low latencies. You also need to be able to mount the storage to the EC2 instances across multiple AZs within the region.

Which AWS service would meet your needs?

Options are :

  • Use ElastiCache
  • Store the files in the RDS database
  • Use the Amazon Elastic File System (Correct)
  • Create an S3 bucket and use this as the storage location for the application

Answer : Use the Amazon Elastic File System

Explanation: The Amazon Elastic File System (EFS) is a file-based (not block- or object-based) system that is accessed using the NFSv4.1 protocol. You can concurrently connect from one to thousands of EC2 instances in multiple AZs to a single EFS file system. EFS is elastic and provides high levels of aggregate throughput and IOPS. Amazon S3 is an object-based solution and cannot be mounted to EC2 instances. ElastiCache is an in-memory database used for caching data and providing high-performance access; it is not a file storage solution that can be mounted to EC2 instances. RDS is a relational database and cannot be mounted to EC2 instances and used to store files.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-efs/

You work as a Solutions Architect at Digital Cloud Training. You are working on a disaster recovery solution that allows you to bring up your applications in another AWS region. Some of your applications run on EC2 instances and have proprietary software configurations with embedded licenses. You need to create duplicate copies of your EC2 instances in the other region.

What would be the best way to do this? (choose 2)

Options are :

  • Copy the snapshots to the other region
  • Create new EC2 instances from the AMIs (Correct)
  • Create an AMI of each EC2 instance and copy the AMIs to the other region (Correct)
  • Create snapshots of the EBS volumes attached to the instances
  • Create new EC2 instances from the snapshot

Answer : Create new EC2 instances from the AMIs; Create an AMI of each EC2 instance and copy the AMIs to the other region

Explanation: In this scenario we are not looking to back up the instances but to create identical copies of them in the other region; these are often called golden images. We must assume that any data used by the instances resides in another service and will be accessible to them when they are launched in a DR situation. You launch EC2 instances using AMIs, not snapshots (though you can create AMIs from snapshots). Therefore, you should create AMIs of each instance (rather than snapshots), copy the AMIs between regions, and then create new EC2 instances from the AMIs. AMIs are regional as they are backed by Amazon S3; you can only launch an AMI from the region in which it is stored. However, you can copy AMIs to other regions using the console, command line, or the API.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/
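
A sketch of the copy-and-launch flow with boto3; the regions, AMI ID, and instance type are placeholders:

```python
import boto3

# copy_image is called in the *destination* region and pulls the AMI
# from the source region.
ec2_dr = boto3.client("ec2", region_name="us-west-2")

copy = ec2_dr.copy_image(
    Name="golden-image-dr-copy",
    SourceImageId="ami-0123456789abcdef0",  # hypothetical source AMI
    SourceRegion="us-east-1",
)

# Wait for the copy to finish, then launch from it in the DR region.
ec2_dr.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
ec2_dr.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
```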

A Solutions Architect requires a highly available database that can deliver an extremely low RPO. Which of the following configurations uses synchronous replication?

Options are :

  • RDS Read Replica across AWS regions
  • EBS volume synchronization
  • RDS DB instance using a Multi-AZ configuration (Correct)
  • DynamoDB Read Replica

Answer : RDS DB instance using a Multi-AZ configuration

Explanation: A Recovery Point Objective (RPO) relates to the amount of data loss that can be tolerated; in this case a low RPO means that you need to minimize the amount of data lost, so synchronous replication is required. Of the options presented, only Amazon RDS in a Multi-AZ configuration uses synchronous replication. RDS Read Replicas use asynchronous replication and are not used for DR. DynamoDB Read Replicas do not exist, and neither does EBS volume synchronization.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

You have taken a snapshot of an encrypted EBS volume and would like to share the snapshot with another AWS account. Which statements are true about sharing snapshots of encrypted EBS volumes? (choose 2)   

Options are :

  • You must store the CMK key in CloudHSM and delegate access to the other AWS account
  • You must share the CMK key as well as the snapshot with the other AWS account (Correct)
  • Snapshots of encrypted volumes are unencrypted
  • You must obtain an encryption key from the target AWS account for encrypting the snapshot
  • A custom CMK key must be used for encryption if you want to share the snapshot (Correct)

Answer : You must share the CMK key as well as the snapshot with the other AWS account; A custom CMK key must be used for encryption if you want to share the snapshot

Explanation: A custom CMK must be used for encryption if you want to share the snapshot, and you must share the CMK as well as the snapshot with the other AWS account. Snapshots of encrypted volumes are encrypted automatically. To share an encrypted snapshot you must encrypt it in the source account with a custom CMK and then share the key with the target account. You do not need to store the CMK in CloudHSM.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
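
A sketch of sharing both the snapshot and the CMK with boto3; the account IDs, snapshot ID, and key ARN are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")
kms = boto3.client("kms")

TARGET_ACCOUNT = "111122223333"  # hypothetical target account ID

# Share the encrypted snapshot itself...
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[TARGET_ACCOUNT],
)

# ...and grant the target account use of the custom CMK it was
# encrypted with; without this the shared snapshot is unusable.
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:999999999999:key/example-key-id",
    GranteePrincipal=f"arn:aws:iam::{TARGET_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)
```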

A developer is creating a solution for a real-time bidding application for a large retail company that allows users to bid on items of end-of-season clothing. The application is expected to be extremely popular and the back-end DynamoDB database may not perform as required.

How can the Solutions Architect enable in-memory read performance with microsecond response times for the DynamoDB database?

Options are :

  • Enable read replicas
  • Increase the provisioned throughput
  • Configure DynamoDB Auto Scaling
  • Configure Amazon DAX (Correct)

Answer : Configure Amazon DAX

Explanation: Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement - from milliseconds to microseconds - even at millions of requests per second. You can enable DAX for a DynamoDB database with a few clicks. Provisioned throughput is the maximum amount of capacity that an application can consume from a table or index; it doesn't improve the speed of the database or add in-memory capabilities. DynamoDB Auto Scaling actively manages throughput capacity for tables and global secondary indexes, so like provisioned throughput it does not provide the speed or in-memory capabilities requested. There is no such thing as read replicas with DynamoDB.
References: https://aws.amazon.com/dynamodb/dax/
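
A minimal sketch of provisioning a DAX cluster with boto3, assuming a hypothetical IAM role and subnet group; the application would then direct its DynamoDB calls at the DAX endpoint instead of the DynamoDB endpoint:

```python
import boto3

dax = boto3.client("dax")

dax.create_cluster(
    ClusterName="bidding-cache",
    NodeType="dax.r4.large",        # placeholder sizing
    ReplicationFactor=3,            # one primary plus two read replicas
    IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",
    SubnetGroupName="dax-subnet-group",
)
```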

You have launched a Spot instance on EC2 for working on an application development project. In the event of an interruption what are the possible behaviors that can be configured? (choose 2)   

Options are :

  • Stop (Correct)
  • Restart
  • Hibernate (Correct)
  • Pause
  • Save

Answer : Stop; Hibernate

Explanation: You can specify whether Amazon EC2 should hibernate, stop, or terminate Spot Instances when they are interrupted, choosing the interruption behavior that meets your needs. The default is to terminate Spot Instances when they are interrupted. You cannot configure the interruption behavior to restart, save, or pause the instance.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html
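
A sketch of requesting a Spot instance with a non-default interruption behavior (stop and hibernate require a persistent request); the AMI ID is hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Stop (rather than terminate) this Spot instance on interruption;
# "hibernate" is the other non-default choice.
ec2.request_spot_instances(
    InstanceCount=1,
    Type="persistent",
    InstanceInterruptionBehavior="stop",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # hypothetical AMI
        "InstanceType": "t3.medium",
    },
)
```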

Your company SysOps practices involve running scripts within the Linux operating systems of your applications. Which of the following AWS services allow you to access the underlying operating system? (choose 2)   

Options are :

  • Amazon EMR (Correct)
  • AWS Lambda
  • Amazon RDS
  • Amazon DynamoDB
  • Amazon EC2 (Correct)

Answer : Amazon EMR; Amazon EC2

Explanation: You can access Amazon EMR by using the AWS Management Console, Command Line Tools, SDKs, or the EMR API. With EMR and EC2 you have access to the underlying operating system, which means you can connect to it using protocols such as SSH and then manage it directly. The other services listed are managed services that do not allow access to the underlying operating systems on which they run.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/analytics/amazon-emr/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/

Another systems administrator in your company created an Auto Scaling group that is configured to ensure that four EC2 instances are available at a minimum at all times. The settings he selected on the Auto Scaling group are a minimum group size of four instances and a maximum group size of six instances.

Your colleague has asked your assistance in trying to understand if Auto Scaling will allow him to terminate instances in the Auto Scaling group and what the effect would be if it does.

What advice would you give to your colleague?

Options are :

  • Auto Scaling will not allow him to terminate an EC2 instance, because there are currently four provisioned instances and the minimum is set to four
  • This can only be done via the command line
  • He would need to reduce the minimum group size setting to be able to terminate any instances
  • This should be allowed, and Auto Scaling will launch additional instances to compensate for the ones that were terminated (Correct)

Answer : This should be allowed, and Auto Scaling will launch additional instances to compensate for the ones that were terminated

Explanation: You can terminate instances in the ASG, and Auto Scaling will then perform rebalancing when it finds that the number of instances across AZs is not balanced. Auto Scaling will not prevent an imbalance from occurring by stopping you from terminating instances, but it will react to the imbalance by attempting to rebalance, launching new instances. You do not need to reduce the minimum group size, and terminating instances does not need to be performed using the command line.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/
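
For illustration, a specific instance could be terminated without lowering the desired capacity, so Auto Scaling launches a replacement (the instance ID is hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# ShouldDecrementDesiredCapacity=False keeps the desired capacity at
# its current value, so a replacement instance is launched.
autoscaling.terminate_instance_in_auto_scaling_group(
    InstanceId="i-0123456789abcdef0",
    ShouldDecrementDesiredCapacity=False,
)
```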

A company is moving a large amount of sensitive data to the cloud. Data will be moved to Amazon S3 and the Solutions Architects are concerned about encryption and management of keys.

Which of the statements below is correct regarding the SSE-KMS option? (choose 2)

Options are :

  • KMS uses customer provided keys (CPKs)
  • KMS uses customer master keys (CMKs) (Correct)
  • Auditable master keys can be created, rotated, and disabled from the IAM console (Correct)
  • Keys are managed through Amazon S3
  • Data is encrypted by default on the client side and then transferred in an encrypted state

Answer : KMS uses customer master keys (CMKs); Auditable master keys can be created, rotated, and disabled from the IAM console

Explanation: With server-side encryption using SSE-KMS, your data is protected with an AWS KMS customer master key (CMK), not a customer-provided key. SSE-KMS requires that AWS manage the data key, but you manage the master key in AWS KMS. Auditable master keys can be created, rotated, and disabled from the IAM console. You can use the Amazon S3 encryption client in the AWS SDK from your own application to encrypt objects and upload them to Amazon S3; otherwise, data is encrypted on Amazon S3, not on the client side.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
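
A minimal sketch of an SSE-KMS upload with boto3; the bucket, key, and KMS alias are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Server-side encrypt the object with a CMK held in AWS KMS.
s3.put_object(
    Bucket="my-sensitive-data",
    Key="records/2019-01.csv",
    Body=b"example,payload",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-s3-key",
)
```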

A colleague recently deployed a two-tier web application into a subnet using a test account. The subnet has an IP address block of 10.0.5.0/27 and he launched an Auto Scaling Group (ASG) with a desired capacity of 8 web servers. Another ASG has 6 application servers and two database servers and both ASGs are behind a single ALB with multiple target groups. All instances are On-Demand instances. Your colleague attempted to test a simulated increase in capacity requirements of 50% and not all instances were able to launch successfully.

What would be the best explanations for the failure to launch the extra instances? (choose 2)

Options are :

  • There are insufficient IP addresses in the subnet range to allow for the EC2 instances, the AWS reserved addresses, and the ELB IP address requirements (Correct)
  • AWS impose a soft limit of 20 instances per region for an account, you have exceeded this number (Correct)
  • There are insufficient resources available in the Availability Zone
  • The IP address block overlaps with another subnet in the VPC
  • The ASG is waiting for the health check grace period to expire, it might have been set at a high value

Answer : There are insufficient IP addresses in the subnet range to allow for the EC2 instances, the AWS reserved addresses, and the ELB IP address requirements; AWS impose a soft limit of 20 instances per region for an account, you have exceeded this number

Explanation: The relevant facts are: there is a default soft limit of 20 On-Demand or 20 Reserved instances per region, and there are 32 possible addresses in a /27 subnet. AWS reserves the first 4 and the last 1 IP address, and the ELB requires 8 addresses within your subnet, which leaves only 19 addresses available for use. There are 16 EC2 instances, so a capacity increase of 50% would bring the total up to 24 instances, which exceeds both the available address space and the default account limit for On-Demand instances.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/
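
The arithmetic can be checked with a few lines of Python (standard library only, using just the figures quoted above):

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.5.0/27")
total = subnet.num_addresses        # 32 addresses in a /27
usable = total - 5                  # AWS reserves first 4 + last 1 -> 27
after_elb = usable - 8              # ELB needs 8 free addresses -> 19

instances = 8 + 6 + 2               # web + app + DB servers = 16
scaled = int(instances * 1.5)       # +50% -> 24

print(after_elb, scaled)            # 19 available < 24 required
# 24 also exceeds the default soft limit of 20 On-Demand instances.
```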

A Solutions Architect is reviewing the IP addressing strategy for the company's resources in the AWS Cloud. Which of the statements below are correct regarding private IP addresses? (choose 2)   

Options are :

  • For instances launched in EC2-Classic, the private IPv4 address is released when the instance is stopped or terminated (Correct)
  • Secondary private IP addresses cannot be reassigned from one instance to another
  • For instances launched in a VPC, a private IPv4 address remains associated with the network interface when the instance is stopped and restarted (Correct)
  • A private IPv4 address is an IP address that's reachable over the Internet
  • By default, an instance has a primary and secondary private IP address

Answer : For instances launched in EC2-Classic, the private IPv4 address is released when the instance is stopped or terminated; For instances launched in a VPC, a private IPv4 address remains associated with the network interface when the instance is stopped and restarted

Explanation: For instances launched in EC2-Classic, the private IPv4 address is released when the instance is stopped or terminated. For instances launched in a VPC, a private IPv4 address remains associated with the network interface when the instance is stopped and restarted. By default an instance has only a single (primary) private IP address. Secondary private IP addresses can be reassigned from one instance to another (primary addresses cannot). A private IPv4 address is not reachable over the Internet.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

The operations team in your company are looking for a method to automatically respond to failed system status check alarms that are being received from an EC2 instance. The system in question is experiencing intermittent problems with its operating system software.

Which two steps will help you to automate the resolution of the operating system software issues? (choose 2)

Options are :

  • Configure an EC2 action that recovers the instance
  • Configure an EC2 action that reboots the instance (Correct)
  • Configure an EC2 action that terminates the instance
  • Create a CloudWatch alarm that monitors the "StatusCheckFailed_System" metric
  • Create a CloudWatch alarm that monitors the "StatusCheckFailed_Instance" metric (Correct)

Answer : Configure an EC2 action that reboots the instance; Create a CloudWatch alarm that monitors the "StatusCheckFailed_Instance" metric

Explanation: EC2 status checks are performed every minute, and each returns a pass or fail status. If all checks pass, the overall status of the instance is OK; if one or more checks fail, the overall status is impaired. System status checks (StatusCheckFailed_System) detect problems with your instance that require AWS involvement to repair, whereas instance status checks (StatusCheckFailed_Instance) detect problems that require your involvement to repair. The action to recover the instance is only supported on specific instance types and can be used only with StatusCheckFailed_System. Configuring an action to terminate the instance would not help resolve system software issues, as the instance would simply be terminated.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/
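
A sketch of the alarm plus reboot action with boto3; the instance ID and the region in the action ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Reboot the instance automatically when the *instance* status check
# fails for three consecutive one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="reboot-on-instance-check-failure",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_Instance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
)
```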

You are using an Application Load Balancer (ALB) for distributing traffic for a number of application servers running on EC2 instances. The configuration consists of a single ALB with a single target group. The front-end listeners are receiving traffic for digitalcloud.guru on port 443 (SSL/TLS) and the back-end listeners are receiving traffic on port 80 (HTTP).

You will be installing a new application component on one of the application servers in the existing target group that will process data sent to digitalcloud.guru/orders. The application component will listen on HTTP port 8080 for this traffic.

What configuration changes do you need to make to implement this solution update? (choose 2)

Options are :

  • Add an additional port to the existing target group and set it to 8080
  • Add a new rule to the existing front-end listener with a Path condition. Set the path condition to /orders and add an action that forwards traffic to the new target group (Correct)
  • Add a new rule to the existing front-end listener with a Host condition. Set the host condition to /orders and add an action that forwards traffic to the new target group
  • Add an additional front-end listener that listens on port 443 and set a path condition to process traffic destined to the path /orders
  • Create a new target group and add the EC2 instance to it. Define the protocol as HTTP and the port as 8080 (Correct)

Answer : Add a new rule to the existing front-end listener with a Path condition. Set the path condition to /orders and add an action that forwards traffic to the new target group; Create a new target group and add the EC2 instance to it. Define the protocol as HTTP and the port as 8080

Explanation: The traffic is coming in on standard ports (443/HTTPS, 80/HTTP) to a single front-end listener, and you can only have a single listener running on a given port. Therefore, to direct traffic for a specific web page you need to use an ALB and path-based routing to send the traffic to a specific back-end target group. As only one protocol and one port can be defined per target group, you also need to create a new target group that uses port 8080. As discussed above, you cannot add additional ports to existing target groups because you can only have a single protocol/port per target group. Host conditions (host-based routing) route client requests based on the Host field of the HTTP header, allowing you to route to multiple domains from the same load balancer; in this case we are not directing traffic based on the host field (digitalcloud.guru), which does not change in this scenario - we are directing traffic based on the path field (/orders). You also cannot add an additional front-end listener that listens on the same port as another listener.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
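
A sketch of both changes with boto3; the VPC ID, listener ARN, and instance ID are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# New target group on HTTP:8080 for the /orders component.
tg = elbv2.create_target_group(
    Name="orders-tg",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the application server, then add a path-based rule on the
# existing HTTPS front-end listener.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0", "Port": 8080}],
)
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...",  # existing listener (elided)
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```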

A client is in the design phase of developing an application that will process orders for their online ticketing system. The application will use a number of front-end EC2 instances that pick up orders and place them in a queue for processing by another set of back-end EC2 instances. The client will have multiple options for customers to choose the level of service they want to pay for. The client has asked how he can design the application to process the orders in a prioritized way based on the level of service the customer has chosen.

Options are :

  • Create a single SQS queue, configure the front-end application to place orders on the queue in order of priority and configure the back-end instances to poll the queue and pick up messages in the order they are presented
  • Create multiple SQS queues, configure the front-end application to place orders onto a specific queue based on the level of service requested and configure the back-end instances to sequentially poll the queues in order of priority (Correct)
  • Create a combination of FIFO queues and Standard queues and configure the applications to place messages into the relevant queue based on priority
  • Create multiple SQS queues, configure exactly-once processing and set the maximum visibility timeout to 12 hours

Answer : Create multiple SQS queues, configure the front-end application to place orders onto a specific queue based on the level of service requested and configure the back-end instances to sequentially poll the queues in order of priority

Explanation: The best option is to create multiple queues and configure the application to place orders onto a specific queue based on the level of service. You then configure the back-end instances to poll these queues in order of priority so they pick up the higher-priority jobs first. Creating a combination of FIFO and standard queues is incorrect, as a mixture of queue types is not the best way to separate the messages, and nothing in that option explains how the messages would be picked up in the right order. Creating a single queue and configuring the applications to place orders on the queue in order of priority would not work, as standard queues offer best-effort ordering, so there's no guarantee the messages would be picked up in the correct order. Creating multiple SQS queues and configuring exactly-once processing (only possible with FIFO) would not ensure that the order of the messages is prioritized.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sqs/
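
A sketch of a back-end worker polling the queues in priority order; the queue URLs are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")

# Highest priority first: the worker drains higher-priority queues
# before falling through to lower ones.
QUEUES = [
    "https://sqs.us-east-1.amazonaws.com/123456789012/orders-premium",
    "https://sqs.us-east-1.amazonaws.com/123456789012/orders-standard",
]

def next_order():
    for url in QUEUES:
        resp = sqs.receive_message(
            QueueUrl=url, MaxNumberOfMessages=1, WaitTimeSeconds=0)
        for msg in resp.get("Messages", []):
            sqs.delete_message(QueueUrl=url,
                               ReceiptHandle=msg["ReceiptHandle"])
            return msg["Body"]
    return None  # no work on any queue
```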

There are two business units in your company that each have their own VPC. A company restructure has resulted in the need to work together more closely and you would like to configure VPC peering between the two VPCs. VPC A has a CIDR block of 172.16.0.0/16 and VPC B has a CIDR block of 10.0.0.0/16. You have created a VPC peering connection with the ID: pcx-11112222.

Which of the entries below should be added to the route table to allow full access to the entire CIDR block of the VPC peer? (choose 2)

Options are :

  • Destination 172.16.0.0/16 and target pcx-11112222 in VPC B (Correct)
  • Destination 10.0.0.0/16 and target pcx-11112222 in VPC A (Correct)
  • Destination 10.0.0.0/16 and target pcx-11112222 in VPC B
  • Destination 0.0.0.0/0 and target Local in VPC A and VPC B
  • Destination 172.16.0.0/16 and target pcx-11112222 in VPC A

Answer : Destination 172.16.0.0/16 and target pcx-11112222 in VPC B; Destination 10.0.0.0/16 and target pcx-11112222 in VPC A

Explanation: Please note that this is an incomplete solution; sometimes in the exam you'll be offered solutions that are incomplete or for which you have to make assumptions. You'll also sometimes be offered multiple correct responses and have to choose the best or most cost-effective option. The full list of route table entries required for this solution is:
- Destination 172.16.0.0/16 and target Local in VPC A
- Destination 10.0.0.0/16 and target pcx-11112222 in VPC A
- Destination 10.0.0.0/16 and target Local in VPC B
- Destination 172.16.0.0/16 and target pcx-11112222 in VPC B
Refer to the URL below for more details on this scenario.
References: https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-configurations-full-access.html
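
A sketch of adding the two peering routes with boto3, assuming hypothetical route table IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# One route in each VPC points the peer's CIDR at the peering connection.
ec2.create_route(RouteTableId="rtb-aaaa1111",        # route table in VPC A
                 DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId="pcx-11112222")
ec2.create_route(RouteTableId="rtb-bbbb2222",        # route table in VPC B
                 DestinationCidrBlock="172.16.0.0/16",
                 VpcPeeringConnectionId="pcx-11112222")
```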

As the Chief Security Officer (CSO) of a large banking organization you are reviewing your security policy for the usage of public cloud services. A key assessment criteria when comparing public cloud services against maintaining applications on-premise, is the split of responsibilities between AWS, as the service provider, and your company, as the customer.

According to the AWS Shared Responsibility Model, which of the following would be responsibilities of the service provider? (choose 2)

Options are :

  • Identity and Access Management
  • Operating system, network and firewall configuration
  • Availability Zones (Correct)
  • Physical networking infrastructure (Correct)
  • Customer data

Answer : Availability Zones; Physical networking infrastructure

Explanation: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. The customer is responsible for the security of the resources they provision. Customer responsibilities include operating system, network and firewall configuration, identity and access management, and customer data.
References: https://aws.amazon.com/compliance/shared-responsibility-model/

You are running a Hadoop cluster on EC2 instances in your VPC. The EC2 instances are launched by an Auto Scaling Group (ASG) and you have configured the ASG to scale out and in as demand changes. One of the instances in the group is the Hadoop Master Node and you need to ensure that it is not terminated when your ASG processes a scale in action.

What is the best way this can be achieved without interrupting services?

Options are :

  • Change the DeleteOnTermination value for the EC2 instance
  • Enable Deletion Protection for the EC2 instance
  • Use the Instance Protection feature to set scale in protection for the Hadoop Master Node (Correct)
  • Move the Hadoop Master Node to another ASG that has the minimum and maximum instance settings set to 1

Answer : Use the Instance Protection feature to set scale in protection for the Hadoop Master Node

Explanation: You can enable Instance Protection to protect a specific instance in an ASG from a scale-in action. Moving the Hadoop Master Node to another ASG would work but is impractical and would incur a service interruption. EC2 has a feature called "termination protection", not "Deletion Protection". The "DeleteOnTermination" value relates to EBS volumes, not EC2 instances.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/ https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling/
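
A minimal sketch of enabling scale-in protection with boto3; the group name and instance ID are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Protect the Hadoop Master Node from scale-in events.
autoscaling.set_instance_protection(
    AutoScalingGroupName="hadoop-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ProtectedFromScaleIn=True,
)
```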

The development team in your company has created a new application that you plan to deploy on AWS which runs multiple components in Docker containers. You would prefer to use AWS managed infrastructure for running the containers as you do not want to manage EC2 instances.

Which of the below solution options would deliver these requirements? (choose 2)

Options are :

  • Use the Elastic Container Service (ECS) with the Fargate Launch Type (Correct)
  • Use the Elastic Container Service (ECS) with the EC2 Launch Type
  • Put your container images in a private repository
  • Use CloudFront to deploy Docker on EC2
  • Put your container images in the Elastic Container Registry (ECR) (Correct)

Answer : Use the Elastic Container Service (ECS) with the Fargate Launch Type; Put your container images in the Elastic Container Registry (ECR)

Explanation: If you do not want to manage EC2 instances you must use the AWS Fargate launch type, which is a serverless infrastructure managed by AWS. Fargate only supports container images hosted on Elastic Container Registry (ECR) or Docker Hub. The EC2 launch type allows you to run containers on EC2 instances that you manage. Private repositories are only supported by the EC2 launch type. You cannot use CloudFront (a CDN) to deploy Docker on EC2.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/
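
A sketch of running a task on Fargate with boto3, assuming a task definition that references an ECR image and uses the awsvpc network mode; all names and IDs are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-app:1",      # hypothetical task definition revision
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```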

You are designing a solution for an application that will read and write large amounts of data to S3. You are expecting high throughput that may exceed 1000 requests per second and need the performance of S3 to scale. What is AWS’s current advice for designing your S3 storage strategy to ensure fast performance?   

Options are :

  • Enable an object cache on S3 to ensure performance at this scale
  • You must use CloudFront for caching objects at this scale as S3 cannot provide this level of performance
  • There is no longer a need to use random prefixes as S3 scales per prefix and the performance required is well within the S3 performance limitations (Correct)
  • Use a random prefix on objects to improve performance

Answer : There is no longer a need to use random prefixes as S3 scales per prefix and the performance required is well within the S3 performance limitations

Explanation: According to the latest information, AWS no longer requires random prefixes, as S3 has been improved so that it can scale to higher throughput per prefix. Caution is required, as the exam may not yet reflect these changes. You do not need to use CloudFront for caching objects because of performance concerns with S3; CloudFront is more suited to performance concerns where end users need to access objects over the Internet. There is no such thing as an object cache in Amazon S3.
References: https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/

You are deploying a two-tier web application within your VPC. The application consists of multiple EC2 instances and an Internet-facing Elastic Load Balancer (ELB). The application will be used by a small number of users with fixed public IP addresses and you need to control access so only these users can access the application.

What would be the BEST methods of applying these controls? (choose 2)

Options are :

  • Configure the local firewall on each EC2 instance to only allow traffic from the specific IP sources
  • Configure the ELB Security Group to allow traffic from only the specific IP sources (Correct)
  • Configure the EC2 instance’s Security Group to allow traffic from only the specific IP sources
  • Configure the ELB to send the X-Forwarded-For header and configure the EC2 instances to filter traffic based on the source IP information in the header (Correct)
  • Configure certificates on the clients and use client certificate authentication on the ELB

Answer : Configure the ELB Security Group to allow traffic from only the specific IP sources; Configure the ELB to send the X-Forwarded-For header and configure the EC2 instances to filter traffic based on the source IP information in the header

Explanation: There are two practical methods of implementing these controls, and they can be used in isolation or together (defence in depth). As the clients have fixed IPs, you can configure a security group to permit only these addresses; the ELB security group is the correct place to implement this control. You can also configure the ELB to forward the X-Forwarded-For header, which means the source IP information is carried through to the EC2 instances. You are then able to configure security controls for these addresses at the EC2 instance level, for instance by using an iptables firewall. ELB does not support client certificate authentication (API Gateway does support this). The EC2 instance security group is the wrong place to implement the allow rule.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/

A customer has a production application running on Amazon EC2. The application frequently overwrites and deletes data, and it is essential that the application receives the most up-to-date version of the data whenever it is requested.

Which service is most appropriate for these requirements?

Options are :

  • Amazon RedShift
  • AWS Storage Gateway
  • Amazon S3
  • Amazon RDS (Correct)

Answer : Amazon RDS

Explanation: This scenario asks that the chosen storage solution always returns the most up-to-date data when it is retrieved. Therefore we must use Amazon RDS, as it provides read-after-write consistency. Amazon S3 only provides eventual consistency for overwrites and deletes. Amazon RedShift is a data warehouse and is not used as a transactional database, so this is the wrong use case for it. AWS Storage Gateway is used for enabling hybrid cloud access to AWS storage services from on-premises.
References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

A Solutions Architect is developing a new web application on AWS that needs to be able to scale to support unpredictable workloads. The Architect prefers to focus on value-add activities such as software development and product roadmap development rather than provisioning and managing instances.

Which solution is most appropriate for this use case?

Options are :

  • Elastic Load Balancing with Auto Scaling groups and Amazon EC2
  • Amazon CloudFront and AWS Lambda
  • Amazon API Gateway and Amazon EC2
  • Amazon API Gateway and AWS Lambda (Correct)

Answer : Amazon API Gateway and AWS Lambda

Explanation: The Architect requires a solution that removes the need to manage instances; therefore, it must be a serverless service, which rules out EC2. The two remaining options use AWS Lambda at the back-end for processing. Though CloudFront can trigger Lambda functions, it is more suited to customizing content delivered from an origin. Therefore, API Gateway with AWS Lambda is the most workable solution presented. This solution will likely require other services such as S3 for content and a database service. Refer to the link below for an example scenario that uses API Gateway and AWS Lambda with other services to create a serverless web application.
References: https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/
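
For illustration, a minimal Lambda handler of the kind API Gateway would invoke via a proxy integration; the response shape is the standard proxy-integration format:

```python
import json

def lambda_handler(event, context):
    # Query string parameters arrive in the proxy-integration event.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```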
