Practice Exam : AWS Certified Solutions Architect Associate

A three-tier application running in your VPC uses Auto Scaling for maintaining a desired count of EC2 instances. One of the EC2 instances just reported an EC2 Status Check status of Impaired. Once this information is reported to Auto Scaling, what action will be taken?   

Options are :

  • Auto Scaling waits for the health check grace period and then terminates the instance
  • A new instance will immediately be launched, then the impaired instance will be terminated
  • The impaired instance will be terminated, then a replacement will be launched (Correct)
  • Auto Scaling must verify with the ELB status checks before taking any action

Answer : The impaired instance will be terminated, then a replacement will be launched

Explanation: By default, Auto Scaling uses EC2 status checks. Unlike AZ rebalancing, termination of unhealthy instances happens first, then Auto Scaling attempts to launch new instances to replace the terminated instances. Auto Scaling does not wait for the health check grace period or verify with the ELB before taking any action. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/

You work for a large multinational retail company. The company has a large presence in AWS in multiple regions. You have established a new office and need to implement a high-bandwidth, low-latency connection to multiple VPCs in multiple regions within the same account. The VPCs each have unique CIDR ranges.

What would be the optimum solution design using AWS technology? (choose 2)   

Options are :

  • Implement a Direct Connect connection to the closest AWS region (Correct)
  • Configure AWS VPN CloudHub
  • Implement Direct Connect connections to each AWS region
  • Create a Direct Connect gateway, and create private VIFs to each region (Correct)
  • Provision an MPLS network

Answer : Implement a Direct Connect connection to the closest AWS region; Create a Direct Connect gateway, and create private VIFs to each region

Explanation: You should implement an AWS Direct Connect connection to the closest region. You can then use a Direct Connect gateway to create private virtual interfaces (VIFs) to each AWS region. Direct Connect gateway provides a grouping of Virtual Private Gateways (VGWs) and Private Virtual Interfaces (VIFs) that belong to the same AWS account and enables you to interface with VPCs in any AWS Region (except the AWS China Region). You can share a private virtual interface to interface with more than one Virtual Private Cloud (VPC), reducing the number of BGP sessions required. You do not need to implement Direct Connect connections to each region; this would be a more expensive option as you would need to pay for an international private connection. AWS VPN CloudHub is not the best solution as you have been asked to implement high-bandwidth, low-latency connections and VPN uses the Internet, so it is not reliable. An MPLS network could be used to create a network topology that gets you closer to AWS in each region, but you would still need to use Direct Connect or VPN for the connectivity into AWS. Also, the question states that you should use AWS technology and MPLS is not offered as a service by AWS. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/aws-direct-connect/

Your company has over 2000 users and is planning to migrate data into the AWS Cloud. Some of the data is users’ home folders on an existing file share and the plan is to move this data to S3. Each user will have a folder in a shared bucket under the folder structure: bucket/home/%username%.

What steps do you need to take to ensure that each user can access their own home folder and no one else’s? (choose all that apply)

Options are :

  • Create an IAM policy that applies object-level S3 ACLs
  • Create a bucket policy that applies access permissions based on username
  • Create an IAM policy that applies folder-level permissions (Correct)
  • Create an IAM group and attach the IAM policy (Correct)
  • Attach an S3 ACL sub-resource that grants access based on the %username% variable

Answer : Create an IAM policy that applies folder-level permissions; Create an IAM group and attach the IAM policy

Explanation: The AWS blog URL below explains how to construct an IAM policy for a similar scenario. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-iam/ https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
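For illustration, a minimal boto3 sketch of the kind of policy the referenced blog post describes, using the ${aws:username} IAM policy variable so each user is restricted to their own folder. The bucket, group and policy names are hypothetical.

    import json
    import boto3

    BUCKET = "my-company-home-folders"  # hypothetical bucket name

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Allow users to list only their own folder under /home
                "Sid": "ListOwnHomeFolder",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{BUCKET}",
                "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}},
            },
            {
                # Allow object access only inside the user's own folder
                "Sid": "ReadWriteOwnHomeFolder",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{BUCKET}/home/${{aws:username}}/*",
            },
        ],
    }

    iam = boto3.client("iam")
    policy = iam.create_policy(
        PolicyName="HomeFolderAccess",
        PolicyDocument=json.dumps(policy_document),
    )
    # Attach the policy to a group so every user in the group inherits it
    iam.attach_group_policy(
        GroupName="HomeFolderUsers",
        PolicyArn=policy["Policy"]["Arn"],
    )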

You are a Solutions Architect at Digital Cloud Training. You have just completed the implementation of a 2-tier web application for a client. The application uses EC2 instances, ELB and Auto Scaling across two subnets. After deployment you notice that only one subnet has EC2 instances running in it. What might be the cause of this situation?   

Options are :

  • Cross-zone load balancing is not enabled on the ELB
  • The Auto Scaling Group has not been configured with multiple subnets (Correct)
  • The ELB is configured as an internal-only load balancer
  • The AMI is missing from the ASG’s launch configuration

Answer : The Auto Scaling Group has not been configured with multiple subnets

Explanation: You can specify which subnets Auto Scaling will launch new instances into. Auto Scaling will try to distribute EC2 instances evenly across AZs. If only one subnet has EC2 instances running in it, the first thing to check is that you have added all relevant subnets to the configuration. The type of ELB deployed is not relevant here as Auto Scaling is responsible for launching instances into subnets, whereas ELB is responsible for distributing connections to the instances. Cross-zone load balancing is an ELB feature and ELB is not the issue here as it is not responsible for launching instances into subnets. If the AMI was missing from the launch configuration, no instances would be running. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/
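A minimal boto3 sketch of adding a second subnet to the Auto Scaling Group; the ASG name and subnet IDs are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        # VPCZoneIdentifier is a comma-separated list of subnet IDs; listing a
        # subnet in each AZ lets Auto Scaling balance instances across both AZs
        VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
    )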

An application you manage runs a number of components using a micro-services architecture. Several ECS container instances in your ECS cluster are displaying as disconnected. The ECS instances were created from the Amazon ECS-Optimized AMI. What steps might you take to troubleshoot the issue? (choose 2)   

Options are :

  • Verify that the instances have the correct IAM group applied
  • Verify that the container instances are using the Fargate launch type
  • Verify that the container instances have the container agent installed
  • Verify that the container agent is running on the container instances (Correct)
  • Verify that the IAM instance profile has the necessary permissions (Correct)

Answer : Verify that the container agent is running on the container instances; Verify that the IAM instance profile has the necessary permissions

Explanation: The ECS container agent is included in the Amazon ECS-optimized AMI and can also be installed on any EC2 instance that supports the ECS specification (only supported on EC2 instances). Therefore, you don't need to verify that the agent is installed. You need to verify that the installed agent is running and that the IAM instance profile has the necessary permissions applied. You apply IAM roles (instance profiles) to EC2 instances, not groups. This example is based on the EC2 launch type, not the Fargate launch type; with Fargate the infrastructure is managed for you by AWS. Troubleshooting steps for containers include: verify that the Docker daemon is running on the container instance; verify that the container agent is running on the container instance; verify that the IAM instance profile has the necessary permissions. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/ https://aws.amazon.com/premiumsupport/knowledge-center/ecs-agent-disconnected/

A three-tier web application that you deployed in your VPC has been experiencing heavy load on the DB tier. The DB tier uses RDS MySQL in a multi-AZ configuration. Customers have been complaining about poor response times and you have been asked to find a solution. During troubleshooting you discover that the DB tier is experiencing high read contention during peak hours of the day.

What are two possible options you could use to offload some of the read traffic from the DB to resolve the performance issues? (choose 2)

Options are :

  • Add RDS read replicas in each AZ (Correct)
  • Migrate to DynamoDB
  • Use an ELB to distribute load between RDS instances
  • Deploy ElastiCache in each AZ (Correct)
  • Use a larger RDS instance size

Answer : Add RDS read replicas in each AZ; Deploy ElastiCache in each AZ

Explanation: ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy or compute-intensive application workloads. Read replicas are used for read-heavy DBs and replication is asynchronous; they are for workload sharing and offloading and are created from a snapshot of the master instance. Moving from a relational DB to a NoSQL DB (DynamoDB) is unlikely to be a viable solution. Using a larger instance size may alleviate the problems, but the question states that the solution should offload reads from the main DB, which read replicas can do. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-elasticache/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

A company runs several web applications on AWS that experience a large amount of traffic. An Architect is considering adding a caching service to one of the most popular web applications. What are two advantages of using ElastiCache? (choose 2)   

Options are :

  • Caching query results for improved performance (Correct)
  • Decoupling application components
  • Can be used for storing session state data (Correct)
  • Multi-region HA
  • Low latency network connectivity

Answer : Caching query results for improved performance; Can be used for storing session state data

Explanation: The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy or compute-intensive application workloads. ElastiCache can also be used for storing session state. You cannot enable multi-region HA with ElastiCache. ElastiCache is a caching service, not a network service, so it is not responsible for providing low-latency network connectivity. Amazon SQS is used for decoupling application components. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-elasticache/

The development team in your company have created a Python application running on ECS containers with the Fargate launch type. You have created an ALB with a Target Group that routes incoming connections to the ECS-based application. The application will be used by consumers who will authenticate using federated OIDC compliant Identity Providers such as Google and Facebook. You would like to securely authenticate the users on the front-end before they access the authenticated portions of the application.

How can this be done on the ALB?

Options are :

  • The only option is to use SAML with Amazon Cognito on the ALB
  • This can be done on the ALB by creating an authentication action on a listener rule that configures an Amazon Cognito user pool with the social IdP (Correct)
  • This cannot be done on an ALB; you’ll need to use another layer in front of the ALB
  • This cannot be done on an ALB; you’ll need to authenticate users on the back-end with AWS Single Sign-On (SSO) integration

Answer : This can be done on the ALB by creating an authentication action on a listener rule that configures an Amazon Cognito user pool with the social IdP

Explanation: ALB supports authentication from OIDC-compliant identity providers such as Google, Facebook and Amazon. It is implemented through an authentication action on a listener rule that integrates with an Amazon Cognito user pool. SAML can be used with Amazon Cognito but this is not the only option. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/ https://aws.amazon.com/blogs/aws/built-in-authentication-in-alb/
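A minimal boto3 sketch of such a listener rule: an authenticate-cognito action followed by a forward action to the ECS target group. All ARNs, the user pool client ID and the Cognito domain are hypothetical placeholders.

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/abc/def",
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/secure/*"]}],
        Actions=[
            {
                # First action: authenticate against a Cognito user pool that has
                # the social IdPs (Google, Facebook) configured
                "Type": "authenticate-cognito",
                "Order": 1,
                "AuthenticateCognitoConfig": {
                    "UserPoolArn": "arn:aws:cognito-idp:...:userpool/us-east-1_example",
                    "UserPoolClientId": "example-client-id",
                    "UserPoolDomain": "my-app-auth",
                    "OnUnauthenticatedRequest": "authenticate",
                },
            },
            {
                # Second action: forward authenticated requests to the ECS target group
                "Type": "forward",
                "Order": 2,
                "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/ecs-app/123",
            },
        ],
    )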

You are concerned that you may be getting close to some of the default service limits for several AWS services. What AWS tool can be used to display current usage and limits?   

Options are :

  • AWS Trusted Advisor (Correct)
  • AWS Dashboard
  • AWS Systems Manager
  • AWS CloudWatch

Answer : AWS Trusted Advisor

Explanation: Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. AWS Trusted Advisor offers a Service Limits check (in the Performance category) that displays your usage and limits for some aspects of some services. AWS CloudWatch is used for performance monitoring, not displaying usage limits. AWS Systems Manager gives you visibility and control of your infrastructure on AWS; it provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. There is no service known as "AWS Dashboard". References: https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html

A company is deploying new services on EC2 and needs to determine which instance types to use with what type of attached storage. Which of the statements about Instance store-backed and EBS-backed instances is true?   

Options are :

  • EBS-backed instances can be stopped and restarted (Correct)
  • Instance-store backed instances can only be terminated
  • Instance-store backed instances can be stopped and restarted
  • EBS-backed instances cannot be restarted

Answer : EBS-backed instances can be stopped and restarted

Explanation: EBS-backed means the root volume is an EBS volume and storage is persistent, whereas instance store-backed means the root volume is an instance store volume and storage is not persistent. On an EBS-backed instance, the default action is for the root EBS volume to be deleted upon termination. EBS-backed instances can be stopped; you will not lose the data on the instance if it is stopped (persistent). EBS volumes can be detached and reattached to other EC2 instances. EBS-backed root devices are launched from AMIs that are backed by EBS snapshots. Instance store volumes are sometimes called ephemeral storage (non-persistent). Instance store-backed instances cannot be stopped; if the underlying host fails, the data will be lost. Instance store root devices are created from AMI templates stored on S3. Instance store volumes cannot be detached/reattached. When rebooting instances of either type, data will not be lost. By default, both root volumes will be deleted on termination unless you configure otherwise. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/

You are trying to clean up your unused EBS volumes and snapshots to save some space and cost. How many of the most recent snapshots of an EBS volume need to be maintained to guarantee that you can recreate the full EBS volume from the snapshot?   

Options are :

  • Two snapshots, the oldest and most recent snapshots
  • Only the most recent snapshot. Snapshots are incremental, but the deletion process will ensure that no data is lost (Correct)
  • The oldest snapshot, as this references data in all other snapshots
  • You must retain all snapshots as the process is incremental and therefore data is required from each snapshot

Answer : Only the most recent snapshot. Snapshots are incremental, but the deletion process will ensure that no data is lost

Explanation: Snapshots capture a point-in-time state of a volume. If you make periodic snapshots of a volume, the snapshots are incremental, which means that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-deleting-snapshot.html

You are running an application on EC2 instances in a private subnet of your VPC. You would like to connect the application to Amazon API Gateway. For security reasons, you need to ensure that no traffic traverses the Internet and need to ensure all traffic uses private IP addresses only.

How can you achieve this?

Options are :

  • Add the API gateway to the subnet the EC2 instances are located in
  • Create a public VIF on a Direct Connect connection
  • Create a NAT gateway
  • Create a private API using an interface VPC endpoint (Correct)

Answer : Create a private API using an interface VPC endpoint

Explanation: An interface endpoint uses AWS PrivateLink and is an elastic network interface (ENI) with a private IP address that serves as an entry point for traffic destined to a supported service. Using PrivateLink you can connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services. You do not need to implement Direct Connect and create a public VIF; this would not ensure that traffic avoids the Internet. You cannot add API Gateway to the subnet the EC2 instances are in; it is a public service with a public endpoint. NAT gateways are used to provide Internet access for EC2 instances in private subnets, so they are of no use in this solution. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/
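A minimal boto3 sketch of creating the interface endpoint for API Gateway (the execute-api service); the VPC, subnet and security group IDs are hypothetical.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        # execute-api is the PrivateLink service name for Amazon API Gateway
        ServiceName="com.amazonaws.us-east-1.execute-api",
        SubnetIds=["subnet-0aaa1111"],
        SecurityGroupIds=["sg-0bbb2222"],
        # Private DNS lets the instances call the API's normal hostname, which
        # then resolves to the endpoint's private IP addresses
        PrivateDnsEnabled=True,
    )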

You need to create a file system that can be concurrently accessed by multiple EC2 instances within an AZ. The file system needs to support high throughput and the ability to burst. As the data that will be stored on the file system will be sensitive you need to ensure it is encrypted at rest and in transit.

Which storage solution would you implement for the EC2 instances?

Options are :

  • Add EBS volumes to each EC2 instance and configure data replication
  • Use the Elastic File System (EFS) and mount the file system using NFS v4.1 (Correct)
  • Use the Elastic Block Store (EBS) and mount the file system at the block level
  • Add EBS volumes to each EC2 instance and use an ELB to distribute data evenly between the volumes

Answer : Use the Elastic File System (EFS) and mount the file system using NFS v4.1

Explanation: EFS is a fully managed service that makes it easy to set up and scale file storage in the Amazon Cloud. EFS file systems are mounted using the NFSv4.1 protocol. EFS is designed to burst to allow high throughput levels for periods of time, and it offers the ability to encrypt data at rest and in transit. EBS is a block-level storage system, not a file-level storage system; you cannot connect to a single EBS volume concurrently from multiple EC2 instances. Adding EBS volumes to each instance and configuring data replication is not the best solution for this scenario and there is no native capability within AWS for performing the replication (some 3rd-party data management software does use this model, however). You cannot use an ELB to distribute data between EBS volumes. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-efs/
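A minimal boto3 sketch of creating an encrypted-at-rest EFS file system and a mount target; the creation token, subnet and security group are hypothetical, and encryption in transit is enabled on the instances at mount time.

    import boto3

    efs = boto3.client("efs")

    # Create a file system encrypted at rest (bursting throughput is the default)
    fs = efs.create_file_system(
        CreationToken="shared-app-data",
        PerformanceMode="generalPurpose",
        Encrypted=True,
    )

    # One mount target is needed in the AZ the instances run in
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId="subnet-0aaa1111",
        SecurityGroups=["sg-0bbb2222"],
    )

    # On each EC2 instance, encryption in transit is enabled at mount time with
    # the EFS mount helper, e.g.:  mount -t efs -o tls fs-12345678:/ /mnt/efs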

The development team in your organization would like to start leveraging AWS services. They have asked you what AWS service can be used to quickly deploy and manage applications in the AWS Cloud? The developers would like the ability to simply upload applications and have AWS handle the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. What AWS service would you recommend?   

Options are :

  • EC2
  • OpsWorks
  • Auto Scaling
  • Elastic Beanstalk (Correct)

Answer : Elastic Beanstalk

Explanation: Whenever you hear about developers uploading code/applications, think Elastic Beanstalk. AWS Elastic Beanstalk can be used to quickly deploy and manage applications in the AWS Cloud. Developers upload applications and Elastic Beanstalk handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. It is considered to be a Platform as a Service (PaaS) solution and supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications. If you use EC2 you must manage the deployment yourself; AWS will not handle the deployment, capacity provisioning, etc. Auto Scaling does not assist with deployment of applications. OpsWorks provides a managed Chef or Puppet infrastructure; you can define how to deploy and configure infrastructure but it does not give you the ability to upload application code and have the service deploy the application for you. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-elastic-beanstalk/

An event in CloudTrail is the record of an activity in an AWS account. What are the two types of events that can be logged in CloudTrail? (choose 2)   

Options are :

  • Management Events which are also known as control plane operations (Correct)
  • Platform Events which are also known as hardware level operations
  • System Events which are also known as instance level operations
  • Data Events which are also known as data plane operations (Correct)

Answer : Management Events which are also known as control plane operations; Data Events which are also known as data plane operations

Explanation: Trails can be configured to log data events and management events. Data events provide insight into the resource operations performed on or within a resource; these are also known as data plane operations. Management events provide insight into management operations that are performed on resources in your AWS account; these are also known as control plane operations. Management events can also include non-API events that occur in your account. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudtrail/ https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html

A client has requested a design for a fault tolerant database that can failover between AZs. You have decided to use RDS in a multi-AZ configuration. What type of replication will the primary database use to replicate to the standby instance?   

Options are :

  • Asynchronous replication
  • Continuous replication
  • Synchronous replication (Correct)
  • Scheduled replication

Answer : Synchronous replication

Explanation: Multi-AZ RDS creates a replica in another AZ and synchronously replicates to it (DR only). Multi-AZ deployments for the MySQL, MariaDB, Oracle and PostgreSQL engines utilize synchronous physical replication. Multi-AZ deployments for the SQL Server engine use synchronous logical replication (SQL Server-native Mirroring technology). Asynchronous replication is used by RDS for Read Replicas. Scheduled and continuous replication are not replication types that are supported by RDS. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

You have an existing Auto Scaling Group running with 8 EC2 instances. You have decided to attach an ELB to the ASG by connecting a Target Group. The ELB is in the same region and already has 10 EC2 instances running in the Target Group. When attempting to attach the ELB the request immediately fails, what is the MOST likely cause?

Options are :

  • You cannot attach running EC2 instances to an ASG
  • ASGs cannot be edited once defined, you would need to recreate it
  • One or more of the instances are unhealthy
  • Adding the 10 EC2 instances to the ASG would exceed the maximum capacity configured (Correct)

Answer : Adding the 10 EC2 instances to the ASG would exceed the maximum capacity configured

Explanation: You can attach one or more Target Groups to your ASG to include instances behind an ALB, and the ELBs must be in the same region. Once you do this, any EC2 instance existing or added by the ASG will be automatically registered with the ASG-defined ELBs. If adding an instance to an ASG would result in exceeding the maximum capacity of the ASG, the request will fail. Auto Scaling Groups can be edited once created (however, launch configurations cannot be edited). You can attach running EC2 instances to an ASG. After the load balancer enters the InService state, Amazon EC2 Auto Scaling terminates and replaces any instances that are reported as unhealthy; however, in this case the request immediately failed, so having one or more unhealthy instances is not the issue. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/ https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html
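A minimal boto3 sketch of the workflow, assuming the scenario above: check the ASG's configured maximum against the instances that would be registered, raise MaxSize if required, then attach the target group. The ASG name, MaxSize value and target group ARN are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Inspect the current maximum capacity and in-service instances
    asg = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["web-asg"]
    )["AutoScalingGroups"][0]
    print(asg["MaxSize"], len(asg["Instances"]))

    # Raise the maximum if the additional instances would exceed it,
    # then attach the target group to the ASG
    autoscaling.update_auto_scaling_group(AutoScalingGroupName="web-asg", MaxSize=20)
    autoscaling.attach_load_balancer_target_groups(
        AutoScalingGroupName="web-asg",
        TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/web/123"],
    )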

A Solutions Architect is creating an application design with several components that will be publicly addressable. The Architect would like to use Alias records. Using Route 53 Alias records what targets can you specify? (choose 2)   

Options are :

  • On-premise web server
  • Elastic BeanStalk environment (Correct)
  • VPC endpoint
  • ElastiCache cluster
  • CloudFront distribution (Correct)

Answer : Elastic BeanStalk environment; CloudFront distribution

Explanation: Alias records are used to map resource record sets in your hosted zone to Amazon Elastic Load Balancing load balancers, Amazon CloudFront distributions, AWS Elastic Beanstalk environments, or Amazon S3 buckets that are configured as websites. You cannot point an Alias record directly at an on-premises web server (although an Alias record can point to another record in the same hosted zone). You cannot use an Alias to point at an ElastiCache cluster or VPC endpoint. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-route-53/ https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html

You are trying to decide on the best data store to use for a new project. The requirements are that the data store is schema-less, supports strongly consistent reads, and stores data in tables, indexed by a primary key.

Which AWS data store would you use?

Options are :

  • DynamoDB (Correct)
  • RDS
  • S3
  • RedShift

Answer : DynamoDB

Explanation: Amazon DynamoDB is a fully managed NoSQL (schema-less) database service that provides fast and predictable performance with seamless scalability. It provides two read models: eventually consistent reads (the default) and strongly consistent reads. DynamoDB stores structured data in tables, indexed by a primary key. Amazon S3 is an object store and stores data in buckets, not tables. Amazon RDS is a relational (has a schema) database service used for transactional purposes. Amazon RedShift is a relational (has a schema) database service used for analytics. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/

Your company has multiple AWS accounts for each environment (Prod, Dev, Test etc.). You would like to copy an EBS snapshot from DEV to PROD. The snapshot is from an EBS volume that was encrypted with a custom key.

What steps do you need to take to share the encrypted EBS snapshot with the Prod account? (choose 2)

Options are :

  • Modify the permissions on the encrypted snapshot to share it with the Prod account (Correct)
  • Create a snapshot of the unencrypted volume and share it with the Prod account
  • Use CloudHSM to distribute the encryption keys used to encrypt the volume
  • Share the custom key used to encrypt the volume (Correct)
  • Make a copy of the EBS volume and unencrypt the data in the process

Answer : Modify the permissions on the encrypted snapshot to share it with the Prod account; Share the custom key used to encrypt the volume

Explanation: When an EBS volume is encrypted with a custom key you must share the custom key with the PROD account. You also need to modify the permissions on the snapshot to share it with the PROD account. The PROD account must copy the snapshot before it can create volumes from the snapshot. You cannot share encrypted snapshots created using the default CMK, and you cannot change the CMK that is used to encrypt a volume. CloudHSM is used for key management and storage, but not distribution. You do not need to decrypt the data as there is a workable solution that keeps the data secure at all times. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/ https://aws.amazon.com/blogs/aws/new-cross-account-copying-of-encrypted-ebs-snapshots/
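A minimal boto3 sketch of the two steps from the DEV account: share the snapshot, then grant the PROD account use of the customer-managed CMK. The account IDs, snapshot ID and key ARN are hypothetical placeholders.

    import boto3

    PROD_ACCOUNT_ID = "222222222222"  # hypothetical target account

    ec2 = boto3.client("ec2")
    kms = boto3.client("kms")

    # 1. Share the encrypted snapshot with the Prod account
    ec2.modify_snapshot_attribute(
        SnapshotId="snap-0123456789abcdef0",
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=[PROD_ACCOUNT_ID],
    )

    # 2. Grant the Prod account use of the custom CMK that encrypted the volume,
    #    so it can decrypt and copy the shared snapshot
    kms.create_grant(
        KeyId="arn:aws:kms:us-east-1:111111111111:key/example-key-id",
        GranteePrincipal=f"arn:aws:iam::{PROD_ACCOUNT_ID}:root",
        Operations=["Decrypt", "DescribeKey", "CreateGrant"],
    )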

The development team at Digital Cloud Training have created a new web-based application that will soon be launched. The application will utilize 20 EC2 instances for the web front-end. Due to concerns over latency, you will not be using an ELB but still want to load balance incoming connections across multiple EC2 instances. You will be using Route 53 for the DNS service and want to implement health checks to ensure instances are available.

What two Route 53 configuration options are available that could be individually used to ensure connections reach multiple web servers in this configuration? (choose 2)

Options are :

  • Use Route 53 simple load balancing which will return records in a round robin fashion
  • Use Route 53 multivalue answers to return up to 8 records with each DNS query (Correct)
  • Use Route 53 weighted records and give equal weighting to all 20 EC2 instances (Correct)
  • Use Route 53 failover routing in an active-active configuration
  • Use Route 53 Alias records to resolve using the zone apex

Answer : Use Route 53 multivalue answers to return up to 8 records with each DNS query; Use Route 53 weighted records and give equal weighting to all 20 EC2 instances

Explanation: The key requirement here is that you can load balance incoming connections to a series of EC2 instances using Route 53 AND the solution must support health checks. With multivalue answers, Route 53 responds to each DNS query with up to eight healthy records selected at random. The weighted record type is similar to simple but you can specify a weight per IP address; you create records that have the same name and type and assign each record a relative weight. In this case you could assign multiple records the same weight and Route 53 will essentially round robin between the records. We cannot use the simple record type as it does not support health checks. Alias records let you route traffic to selected AWS resources, such as CloudFront distributions and Amazon S3 buckets; they do not provide equal distribution to multiple endpoints or multivalue answers. Failover routing is used for active/passive configurations only. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-route-53/
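A minimal boto3 sketch of the weighted-record approach: one record per web server, same name and equal weight, each associated with a health check. The hosted zone ID, health check IDs and IP addresses are hypothetical.

    import boto3

    route53 = boto3.client("route53")

    changes = []
    for i, (ip, health_check_id) in enumerate(
        [("203.0.113.10", "hc-1"), ("203.0.113.11", "hc-2")]
    ):
        changes.append({
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": f"web-{i}",
                "Weight": 10,               # equal weighting across all records
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
                # Unhealthy instances are not returned in responses
                "HealthCheckId": health_check_id,
            },
        })

    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": changes},
    )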

You are creating a design for a two-tier application with a MySQL RDS back-end. The performance requirements of the database tier are hard to quantify until the application is running and you are concerned about right-sizing the database.

What methods of scaling are possible after the MySQL RDS database is deployed? (choose 2)

Options are :

  • Horizontal scaling for read and write by enabling Multi-Master RDS DB
  • Horizontal scaling for read capacity by creating a read-replica (Correct)
  • Vertical scaling for read and write by choosing a larger instance size (Correct)
  • Horizontal scaling for write capacity by enabling Multi-AZ
  • Vertical scaling for read and write by using Transfer Acceleration

Answer : Horizontal scaling for read capacity by creating a read-replica; Vertical scaling for read and write by choosing a larger instance size

Explanation: Relational databases can scale vertically (e.g. by upgrading to a larger RDS DB instance). For read-heavy use cases, you can scale horizontally using read replicas. There is no such thing as a Multi-Master MySQL RDS DB (there is for Aurora). You cannot scale write capacity by enabling Multi-AZ as only one DB is active and can be written to. Transfer Acceleration is a feature of S3 for fast uploads of objects. References: https://aws.amazon.com/architecture/well-architected/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/
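A minimal boto3 sketch of both scaling methods after deployment; the instance identifiers and instance class are hypothetical.

    import boto3

    rds = boto3.client("rds")

    # Horizontal scaling for reads: add a read replica of the primary instance
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica-1",
        SourceDBInstanceIdentifier="app-db",
    )

    # Vertical scaling for reads and writes: move to a larger instance class
    rds.modify_db_instance(
        DBInstanceIdentifier="app-db",
        DBInstanceClass="db.r5.xlarge",
        ApplyImmediately=False,   # apply during the next maintenance window
    )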

A new financial platform has been re-architected to use Docker containers in a micro-services architecture. The new architecture will be implemented on AWS and you have been asked to recommend the solution configuration. For operational reasons, it will be necessary to access the operating system of the instances on which the containers run.

Which solution delivery option will you select?

Options are :

  • ECS with the Fargate launch type
  • ECS with a default cluster
  • EKS with Kubernetes managed infrastructure
  • ECS with the EC2 launch type (Correct)

Answer : ECS with the EC2 launch type

Explanation: Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. The EC2 launch type allows you to run containers on EC2 instances that you manage, so you will be able to access the operating system of the instances. The Fargate launch type is a serverless infrastructure managed by AWS, so you do not have access to the operating system of the EC2 instances that the container platform runs on. The EKS service is a managed Kubernetes service that provides a fully managed control plane, so you would not have access to the EC2 instances that the platform runs on. ECS with a default cluster is an incorrect answer; you need to choose the launch type to ensure you get the access required, not the cluster configuration. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/

A client has made some updates to their web application. The application uses an Auto Scaling Group to maintain a group of several EC2 instances. The application has been modified and a new AMI must be used for launching any new instances.

What do you need to do to add the new AMI?

Options are :

  • Suspend Auto Scaling and replace the existing AMI
  • Modify the existing launch configuration to add the new AMI
  • Create a new launch configuration that uses the AMI and update the ASG to use the new launch configuration (Correct)
  • Create a new target group that uses a new launch configuration with the new AMI

Answer : Create a new launch configuration that uses the AMI and update the ASG to use the new launch configuration

Explanation: A launch configuration is the template used to create new EC2 instances and includes parameters such as instance family, instance type, AMI, key pair and security groups. You cannot edit a launch configuration once defined. In this case you can create a new launch configuration that uses the new AMI, and any new instances that are launched by the ASG will use the new AMI. Suspending scaling processes can be useful when you want to investigate a configuration problem or other issue with your web application and then make changes to your application without invoking the scaling processes; it is not useful in this situation. A target group is a concept associated with an ELB, not Auto Scaling. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
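A minimal boto3 sketch of creating a replacement launch configuration and pointing the ASG at it; the AMI ID, launch configuration name, key pair, security group and ASG name are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Launch configurations are immutable, so create a new one with the new AMI
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-app-lc-v2",
        ImageId="ami-0new1234567890abc",   # hypothetical new AMI ID
        InstanceType="t3.medium",
        SecurityGroups=["sg-0bbb2222"],
        KeyName="web-app-key",
    )

    # Point the ASG at the new launch configuration; only instances launched
    # from now on use the new AMI (existing instances are not replaced automatically)
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-app-lc-v2",
    )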

You have just initiated the creation of a snapshot of an EBS volume and the snapshot process is currently in operation. Which of the statements below is true regarding the operations that are possible while the snapshot process is running?   

Options are :

  • The volume can be used in read-only mode while the snapshot is in progress
  • The volume cannot be used until the snapshot completes
  • The volume can be used in write-only mode while the snapshot is in progress
  • The volume can be used as normal while the snapshot is in progress (Correct)

Answer : The volume can be used as normal while the snapshot is in progress

Explanation: You can take a snapshot of an EBS volume while the instance is running, and it does not cause any outage of the volume, so it can continue to be used as normal. However, the advice is that to take consistent snapshots, writes to the volume should be stopped. For non-root EBS volumes this can entail taking the volume offline (detaching the volume with the instance still running), and for root EBS volumes it entails shutting down the instance. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html

A Solutions Architect is creating a solution for an application that must be deployed on Amazon EC2 hosts that are dedicated to the client. Instance placement must be automatic and billing should be per instance.

Which type of EC2 deployment model should be used?

Options are :

  • Dedicated Instance (Correct)
  • Reserved Instance
  • Dedicated Host
  • Cluster Placement Group

Answer : Dedicated Instance

Explanation: Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated Instances allow automatic instance placement and billing is per instance. An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses. With Dedicated Hosts, billing is on a per-host basis (not per instance). Reserved Instances are a method of reducing cost by committing to a fixed contract term of 1 or 3 years. A Cluster Placement Group determines how instances are placed on underlying hardware to enable low-latency connectivity. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://aws.amazon.com/ec2/dedicated-hosts/

A Solutions Architect is designing the compute layer of a serverless application. The compute layer will manage requests from external systems, orchestrate serverless workflows, and execute the business logic.

The Architect needs to select the most appropriate AWS services for these functions. Which services should be used for the compute layer? (choose 2)

Options are :

  • Use AWS Step Functions for orchestrating serverless workflows (Correct)
  • Use AWS Elastic Beanstalk for executing the business logic
  • Use Amazon API Gateway with AWS Lambda for executing the business logic (Correct)
  • Use Amazon ECS for executing the business logic
  • Use AWS CloudFormation for orchestrating serverless workflows

Answer : Use AWS Step Functions for orchestrating serverless workflows; Use Amazon API Gateway with AWS Lambda for executing the business logic

Explanation: With Amazon API Gateway, you can run a fully managed REST API that integrates with Lambda to execute your business logic and includes traffic management, authorization and access control, monitoring, and API versioning. AWS Step Functions orchestrates serverless workflows including coordination, state, and function chaining, as well as combining long-running executions not supported within Lambda execution limits by breaking them into multiple steps or by calling workers running on Amazon Elastic Compute Cloud (Amazon EC2) instances or on-premises. The Amazon Elastic Container Service (ECS) is not a serverless application stack; containers run on EC2 instances. AWS CloudFormation and Elastic Beanstalk are orchestrators that are used for describing and provisioning resources, not for performing workflow functions within the application. References: https://aws.amazon.com/step-functions/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/

You are a Solutions Architect for Digital Cloud Training. A client has asked for some assistance in selecting the best database for a specific requirement. The database will be used for a data warehouse solution and the data will be stored in a structured format. The client wants to run complex analytics queries using business intelligence tools.

Which AWS database service will you recommend?

Options are :

  • RedShift (Correct)
  • Aurora
  • RDS
  • DynamoDB

Answer : RedShift

Explanation: Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence (BI) tools. RedShift is a SQL-based data warehouse used for analytics applications; it is an Online Analytics Processing (OLAP) type of DB. RedShift is used for running complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Amazon RDS does store data in a structured format but it is not a data warehouse; the primary use case for RDS is as a transactional database (not an analytics database). Amazon DynamoDB is not a structured database (schema-less/NoSQL) and is not a data warehouse solution. Amazon Aurora is a type of RDS database so is also not suitable for a data warehouse use case. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-redshift/

You would like to provide some elasticity for your RDS DB. You are considering read replicas and are evaluating the features. Which of the following statements are applicable when using RDS read replicas? (choose 2)   

Options are :

  • You cannot specify the AZ the read replica is deployed in
  • You cannot have more than four instances involved in a replication chain (Correct)
  • It is possible to have read-replicas of read-replicas (Correct)
  • Replication is synchronous
  • During failover RDS automatically updates configuration (including DNS endpoint) to use the second node

Answer : You cannot have more than four instances involved in a replication chain; It is possible to have read-replicas of read-replicas

Explanation: Multi-AZ utilizes failover and DNS endpoint updates, not read replicas. Read replicas are used for read-heavy DBs and replication is asynchronous. You can have read replicas of read replicas for MySQL and MariaDB but not for PostgreSQL. You cannot have more than four instances involved in a replication chain. You can specify the AZ the read replica is deployed in. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

An application running in your on-premise data center writes data to a MySQL database. You are re-architecting the application and plan to move the database layer into the AWS cloud on RDS. You plan to keep the application running in your on-premise data center.

What do you need to do to connect the application to the RDS database via the Internet? (choose 2)

Options are :

  • Configure an NAT Gateway and attach the RDS database
  • Create a security group allowing access from your public IP to the RDS instance and assign to the RDS instance (Correct)
  • Create a DB subnet group that is publicly accessible
  • Select a public IP within the DB subnet group to assign to the RDS instance
  • Choose to make the RDS instance publicly accessible and place it in a public subnet (Correct)

Answer : Create a security group allowing access from your public IP to the RDS instance and assign to the RDS instance; Choose to make the RDS instance publicly accessible and place it in a public subnet

Explanation: When you create the RDS instance, you need to select the option to make it publicly accessible. A security group will need to be created and assigned to the RDS instance to allow access from the public IP address of your application (or firewall). NAT gateways are used for enabling Internet connectivity for EC2 instances in private subnets. A DB subnet group is a collection of subnets (typically private) that you create in a VPC and that you then designate for your DB instance. The DB subnet group cannot be made publicly accessible, even if the subnets are public subnets; it is the RDS DB instance that must be configured to be publicly accessible. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html#USER_VPC.Scenario4
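A minimal boto3 sketch of both steps: an ingress rule from the on-premises public IP and a publicly accessible RDS instance in a public DB subnet group. The security group ID, CIDR, DB identifiers and credentials are hypothetical (a real deployment would source the password from Secrets Manager).

    import boto3

    ec2 = boto3.client("ec2")
    rds = boto3.client("rds")

    # Security group rule allowing MySQL from the on-premises public IP
    ec2.authorize_security_group_ingress(
        GroupId="sg-0bbb2222",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "IpRanges": [{"CidrIp": "198.51.100.25/32", "Description": "On-prem app"}],
        }],
    )

    # The DB instance itself must be publicly accessible and placed in a DB
    # subnet group whose subnets are public
    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="mysql",
        DBInstanceClass="db.t3.medium",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        AllocatedStorage=100,
        PubliclyAccessible=True,
        DBSubnetGroupName="public-db-subnet-group",
        VpcSecurityGroupIds=["sg-0bbb2222"],
    )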

A Solutions Architect is designing an application stack that will be highly elastic. Which AWS services can be used that don’t require you to make any capacity decisions upfront? (choose 2)

Options are :

  • Amazon EC2
  • DynamoDB
  • AWS Lambda (Correct)
  • Amazon RDS
  • Amazon Kinesis Firehose (Correct)

Answer : AWS Lambda; Amazon Kinesis Firehose

Explanation: With Kinesis Data Firehose, you only pay for the amount of data you transmit through the service and, if applicable, for data format conversion; there is no minimum fee or setup cost. AWS Lambda lets you run code without provisioning or managing servers; you pay only for the compute time you consume and there is no charge when your code is not running. With Amazon EC2 you need to select your instance sizes and number of instances. With RDS you need to select the instance size for the DB. With DynamoDB you need to specify the read/write capacity of the DB. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/analytics/amazon-kinesis/

An application you manage regularly uploads files from an EC2 instance to S3. The files can be a couple of GB in size and sometimes the uploads are slower than you would like resulting in poor upload times. What method can be used to increase throughput and speed things up?   

Options are :

  • Turn off versioning on the destination bucket
  • Upload the files using the S3 Copy SDK or REST API
  • Randomize the object names when uploading
  • Use Amazon S3 multipart upload (Correct)

Answer : Use Amazon S3 multipart upload

Explanation: Multipart upload can be used to speed up uploads to S3. Multipart upload uploads objects in parts independently, in parallel and in any order. It is performed using the S3 multipart upload API and is recommended for objects of 100 MB or larger; it can be used for objects from 5 MB up to 5 TB and must be used for objects larger than 5 GB. Randomizing object names provides no value in this context; random prefixes are used for intensive read requests. Copy is used for copying, moving and renaming objects within S3, not for uploading to S3. Turning off versioning will not speed up the upload. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
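A minimal boto3 sketch: the transfer manager switches to multipart upload above a threshold and uploads parts in parallel. The file path, bucket name and tuning values are illustrative.

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Multipart upload with parallel parts; thresholds/part size are illustrative
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MB
        multipart_chunksize=64 * 1024 * 1024,   # 64 MB parts uploaded in parallel
        max_concurrency=10,
    )

    # upload_file transparently uses the multipart upload API when the file
    # size exceeds the threshold
    s3.upload_file(
        Filename="/data/export/large-file.bin",
        Bucket="my-upload-bucket",              # hypothetical bucket
        Key="exports/large-file.bin",
        Config=config,
    )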

A new department will begin using AWS services in your account and you need to create an authentication and authorization strategy. Which of the following statements regarding IAM groups are correct? (choose 2)   

Options are :

  • IAM groups can be used to group EC2 instances
  • IAM groups can be nested up to 4 levels
  • An IAM group is not an identity and cannot be identified as a principal in an IAM policy (Correct)
  • IAM groups can be used to assign permissions to users (Correct)
  • IAM groups can temporarily assume a role to take on permissions for a specific task

Answer : An IAM group is not an identity and cannot be identified as a principal in an IAM policy; IAM groups can be used to assign permissions to users

Explanation: Groups are collections of users and have policies attached to them. A group is not an identity and cannot be identified as a principal in an IAM policy. Use groups to assign permissions to users. IAM groups cannot be used to group EC2 instances. Only users and services can assume a role to take on permissions (not groups). References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-iam/

You are a Solutions Architect at Digital Cloud Training. In your VPC you have a mixture of EC2 instances in production and non-production environments. You need to devise a way to segregate access permissions to different sets of users for instances in different environments.

How can this be achieved? (choose 2)

Options are :

  • Add a specific tag to the instances you want to grant the users or groups access to (Correct)
  • Create an IAM policy that grants access to any instances with the specific tag (Correct)
  • Add an environment variable to the instances using user data
  • Attach an Identity Provider (IdP) and delegate access to the instances to the relevant groups
  • Create an IAM policy with a conditional statement that matches the environment variables

Answer : Add a specific tag to the instances you want to grant the users or groups access to; Create an IAM policy that grants access to any instances with the specific tag

Explanation: You can use condition checking in IAM policies to look for a specific tag; IAM checks that the tag attached to the resource being accessed matches the specified key name and value. You cannot achieve this outcome using environment variables stored in user data and conditional statements in a policy; you must use an IAM policy that grants access to instances based on the tag. You cannot use an IdP for this solution. References: https://aws.amazon.com/premiumsupport/knowledge-center/iam-ec2-resource-tags/ https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html
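A minimal boto3 sketch of such a policy, using the ec2:ResourceTag condition key; the tag key/value, policy name and group name are hypothetical.

    import json
    import boto3

    # Allow instance actions only on instances tagged Environment=NonProduction
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Environment": "NonProduction"}
            },
        }],
    }

    iam = boto3.client("iam")
    policy = iam.create_policy(
        PolicyName="NonProdInstanceOperators",
        PolicyDocument=json.dumps(policy_document),
    )
    # Attach the policy to the group of users who should only manage non-prod
    iam.attach_group_policy(
        GroupName="NonProdOperators",
        PolicyArn=policy["Policy"]["Arn"],
    )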

The application development team in your company have created a new application written in .NET. You are looking for a way to easily deploy the application whilst maintaining full control of the underlying resources.

Which PaaS service provided by AWS would suit this requirement?

Options are :

  • CloudFormation
  • Elastic Beanstalk (Correct)
  • EC2 Placement Groups
  • CloudFront

Answer : Elastic Beanstalk

Explanation: AWS Elastic Beanstalk can be used to quickly deploy and manage applications in the AWS Cloud. Developers upload applications and Elastic Beanstalk handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. It is considered to be a Platform as a Service (PaaS) solution and allows full control of the underlying resources. CloudFront is a content delivery network for caching content to improve performance. CloudFormation uses templates to provision infrastructure. EC2 Placement Groups are used to control how instances are launched to enable low-latency connectivity or to be spread across distinct hardware. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-elastic-beanstalk/

There is a new requirement for a database that will store a large number of records for an online store. You are evaluating the use of DynamoDB. Which of the following are AWS best practices for DynamoDB? (choose 2)   

Options are :

  • Use for BLOB data use cases
  • Store more frequently and less frequently accessed data in separate tables (Correct)
  • Use large files
  • Use separate local secondary indexes for each item
  • Store objects larger than 400KB in S3 and use pointers in DynamoDB (Correct)

Answer : Store more frequently and less frequently accessed data in separate tables; Store objects larger than 400KB in S3 and use pointers in DynamoDB

Explanation: DynamoDB best practices include: keep item sizes small; if you are storing serial data in DynamoDB that will require actions based on date/time, use separate tables for days, weeks, and months; store more frequently and less frequently accessed data in separate tables; if possible, compress larger attribute values; store objects larger than 400 KB in S3 and use pointers (S3 object IDs) in DynamoDB. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/
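A minimal boto3 sketch of the pointer pattern: the large object goes to S3 and the DynamoDB item stores only its key. The table name, bucket name and attribute names are hypothetical.

    import boto3

    s3 = boto3.client("s3")
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("ProductCatalog")   # hypothetical table

    def save_product(product_id, description, image_bytes):
        # Objects larger than DynamoDB's 400 KB item limit go to S3...
        key = f"product-images/{product_id}.jpg"
        s3.put_object(Bucket="product-assets-bucket", Key=key, Body=image_bytes)

        # ...and the DynamoDB item stores only a small pointer to the S3 object
        table.put_item(Item={
            "ProductId": product_id,
            "Description": description,
            "ImageS3Key": key,
        })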

A new application that you rolled out recently runs on Amazon EC2 instances and uses API Gateway and Lambda. Your company is planning on running an advertising campaign that will likely result in significant hits to the application after each ad is run.

You’re concerned about the impact this may have on your application and would like to put in place some controls to limit the number of requests per second that hit the application.

What controls will you implement in this situation?

Options are :

  • Enable Lambda continuous scaling
  • Enable caching on the API Gateway and specify a size in gigabytes
  • Implement throttling rules on the API Gateway (Correct)
  • API Gateway and Lambda scale automatically to handle any load so there’s no need to implement controls

Answer : Implement throttling rules on the API Gateway

Explanation: The key requirement is that you need to limit the number of requests per second that hit the application. This can only be done by implementing throttling rules on the API Gateway. Throttling enables you to throttle the number of requests to your API, which in turn means less traffic will be forwarded to your application server. Caching can improve performance but does not limit the amount of requests coming in. API Gateway and Lambda both scale up to their default limits; however, the bottleneck is the application server running on EC2, which may not be able to scale to keep up with demand. Lambda continuous scaling does not resolve the scalability concerns with the EC2 application server. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/ https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
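A minimal boto3 sketch of one way to apply throttling, via a usage plan attached to an API stage; the API ID, stage name and limits are hypothetical.

    import boto3

    apigateway = boto3.client("apigateway")

    apigateway.create_usage_plan(
        name="campaign-throttling",
        apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
        throttle={
            "rateLimit": 500.0,   # steady-state requests per second
            "burstLimit": 1000,   # maximum request burst
        },
    )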
