Q&A : AWS Certified Solutions Architect Associate

An Amazon CloudWatch alarm recently notified you that the load on a DynamoDB table you are running is getting close to the provisioned capacity for writes. The DynamoDB table is part of a two-tier customer-facing application. You are concerned about what will happen if the limit is reached but need to wait for approval to increase the WriteCapacityUnits value assigned to the table.

What will happen if the limit for the provisioned capacity for writes is reached?

Options are :

  • The requests will succeed, and an HTTP 200 status code will be returned
  • The requests will be throttled, and fail with an HTTP 503 code (Service Unavailable)
  • DynamoDB scales automatically so there’s no need to worry
  • The requests will be throttled, and fail with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException (Correct)

Answer : The requests will be throttled, and fail with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException

Explanation DynamoDB can throttle requests that exceed the provisioned throughput for a table. When a request is throttled it fails with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException (not a 503 or 200 status code). When using the provisioned capacity pricing model DynamoDB does not scale automatically. DynamoDB can scale automatically when using on-demand capacity mode, or when DynamoDB Auto Scaling is enabled on a provisioned-capacity table; however, neither is configured for this table. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/
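
For context, a minimal sketch of how an application might catch this throttling error with boto3 (the table name and item are hypothetical; real applications would retry with exponential backoff, which the AWS SDKs also do automatically to a degree):

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def write_item(item):
    # "Orders" is a hypothetical table name used for illustration only
    try:
        dynamodb.put_item(TableName="Orders", Item=item)
    except ClientError as err:
        # Throttled writes fail with HTTP 400 and this error code
        if err.response["Error"]["Code"] == "ProvisionedThroughputExceededException":
            print("Write throttled - retry with exponential backoff")
        else:
            raise
```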

To increase the resiliency of your RDS DB instance, you decided to enable Multi-AZ. Where will the new standby RDS instance be created?   

Options are :

  • In the same AWS Region but in a different AZ for high availability (Correct)
  • In another subnet within the same AZ
  • In a different AWS Region to protect against Region failures
  • You must specify the location when configuring Multi-AZ

Answer : In the same AWS Region but in a different AZ for high availability

Explanation Multi-AZ RDS creates a replica in another AZ within the same region and synchronously replicates to it (DR only). You cannot choose which AZ in the region will be chosen to create the standby DB instance. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

A Solutions Architect is considering the best approach to enabling Internet access for EC2 instances in a private subnet. What advantages do NAT Gateways have over NAT Instances? (choose 2)

Options are :

  • Highly available within each AZ (Correct)
  • Can be used as a bastion host
  • Can be assigned to security groups
  • Managed for you by AWS (Correct)
  • Can be scaled up manually

Answer : Highly available within each AZ; Managed for you by AWS

Explanation NAT gateways are managed for you by AWS. NAT gateways are highly available in each AZ into which they are deployed. They are not associated with any security groups and can scale automatically up to 45 Gbps. NAT instances are managed by you. They must be scaled manually and do not provide HA. NAT instances can be used as bastion hosts and can be assigned to security groups. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

Your manager has asked you to explain the benefits of using IAM groups. Which of the below statements are valid benefits? (choose 2)   

Options are :

  • Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users (Correct)
  • Provide the ability to create custom permission policies
  • Provide the ability to nest groups to create an organizational hierarchy
  • Enables you to attach IAM permission policies to more than one user at a time (Correct)
  • You can restrict access to the subnets in your VPC

Answer : Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users; Enables you to attach IAM permission policies to more than one user at a time

Explanation Groups are collections of users and have policies attached to them. A group is not an identity and cannot be identified as a principal in an IAM policy. Use groups to assign permissions to users, and apply the principle of least privilege when assigning permissions. You cannot nest groups (groups within groups). You cannot use groups to restrict access to subnets in your VPC. Custom permission policies are created using IAM policies; these are then attached to users, groups or roles. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-iam/
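
As an illustration of why groups simplify permission management, the sketch below (hypothetical group name and user names, using an AWS managed policy) attaches a single policy to a group and then adds users to it, so every member inherits the same permissions:

```python
import boto3

iam = boto3.client("iam")

# Hypothetical group name; the policy ARN is the AWS managed S3 read-only policy
iam.create_group(GroupName="Developers")
iam.attach_group_policy(
    GroupName="Developers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Every user added to the group inherits the group's attached policies
for user in ["alice", "bob"]:
    iam.add_user_to_group(GroupName="Developers", UserName=user)
```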

You would like to implement a method of automating the creation, retention, and deletion of backups for the EBS volumes in your VPC. What is the easiest way to automate these tasks using AWS tools?

Options are :

  • Use the EBS Data Lifecycle Manager (DLM) to manage snapshots of the volumes (Correct)
  • Create a scheduled job and run the AWS CLI command “create-backup”
  • Create a scheduled job and run the AWS CLI command “create-snapshot”
  • Configure EBS volume replication to create a backup on S3

Answer : Use the EBS Data Lifecycle Manager (DLM) to manage snapshots of the volumes

Explanation You back up EBS volumes by taking snapshots. This can be automated via the AWS CLI command "create-snapshot". However, the question is asking for a way to automate not just the creation of the snapshot but the retention and deletion too. The EBS Data Lifecycle Manager (DLM) can automate all of these actions for you, and this can be performed centrally from within the management console. Snapshots capture a point-in-time state of an instance and are stored on Amazon S3. They do not provide granular backup (they are not a replacement for backup software). You cannot configure volume replication for EBS volumes using AWS tools. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html https://docs.aws.amazon.com/cli/latest/reference/ec2/create-snapshot.html
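
A rough sketch of what such a DLM policy could look like with boto3; the role ARN, target tag, and schedule values are placeholders and the exact policy structure should be checked against the DLM documentation:

```python
import boto3

dlm = boto3.client("dlm")

# Snapshot all EBS volumes tagged Backup=true daily and retain the last 7 snapshots
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",  # placeholder
    Description="Daily EBS snapshots with 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
            "CopyTags": True,
        }],
    },
)
```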

An application has been deployed in a private subnet within your VPC and an ELB will be used to accept incoming connections. You need to setup the configuration for the listeners on the ELB.

When using a Classic Load Balancer, which of the following combinations of listeners support the proxy protocol? (choose 2)     

Options are :

  • Front-End – SSL & Back-End – SSL
  • Front-End – HTTP & Back-End SSL
  • Front-End – TCP & Back-End SSL
  • Front-End – SSL & Back-End - TCP (Correct)
  • Front-End – TCP & Back-End – TCP (Correct)

Answer : Front-End – SSL & Back-End – TCP; Front-End – TCP & Back-End – TCP

Explanation The proxy protocol only applies to L4 and the back-end listener must be TCP for the proxy protocol. When using the proxy protocol the front-end listener can be either TCP or SSL. The X-Forwarded-For header only applies to L7. Proxy Protocol for TCP/SSL carries the source (client) IP/port information. The Proxy Protocol header helps you identify the IP address of a client when you have a load balancer that uses TCP for back-end connections. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/ https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-elb-listenerconfig-quickref.html

You have created a VPC with private and public subnets and will be deploying a new MySQL database server running on an EC2 instance. Which subnet should you deploy the database server into?

Options are :

  • The private subnet (Correct)
  • It doesn’t matter
  • The public subnet
  • The subnet that is mapped to the primary AZ in the region

Answer : The private subnet

Explanation AWS best practice is to deploy databases into private subnets wherever possible. You can then deploy your web front-ends into public subnets and configure these, or an additional application tier, to write data to the database. Public subnets are typically used for web front-ends as they are directly accessible from the Internet. It is preferable to launch your database in a private subnet. There is no such thing as a "primary" Availability Zone (AZ). All AZs are essentially created equal and your subnets map 1:1 to a single AZ. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

You just attempted to restart a stopped EC2 instance and it immediately changed from a pending state to a terminated state. What are the most likely explanations? (choose 2)   

Options are :

  • You have reached the limit on the number of instances that you can launch in a region
  • AWS does not currently have enough available On-Demand capacity to service your request
  • The AMI is unsupported
  • You've reached your EBS volume limit (Correct)
  • An EBS snapshot is corrupt (Correct)

Answer : You've reached your EBS volume limit; An EBS snapshot is corrupt

Explanation The following are a few reasons why an instance might immediately terminate: - You've reached your EBS volume limit - An EBS snapshot is corrupt - The root EBS volume is encrypted and you do not have permissions to access the KMS key for decryption - The instance store-backed AMI that you used to launch the instance is missing a required part (an image.part.xx file). It is possible that an instance type is not supported by an AMI and this can cause an "UnsupportedOperation" client error. However, in this case the instance was previously running (it is in a stopped state) so it is unlikely that this is the issue. If AWS does not have capacity available, an InsufficientInstanceCapacity error will be generated when you try to launch a new instance or restart a stopped instance. If you've reached the limit on the number of instances you can launch in a region, you get an InstanceLimitExceeded error when you try to launch a new instance or restart a stopped instance. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-launch.html

You have a requirement to perform a large-scale testing operation that will assess the ability of your application to scale. You are planning on deploying a large number of c3.2xlarge instances with several PIOPS EBS volumes attached to each. You need to ensure you don’t run into any problems with service limits. What are the service limits you need to be aware of in this situation?   

Options are :

  • 20 On-Demand EC2 instances and 100,000 aggregate PIOPS per account
  • 20 On-Demand EC2 instances and 300 TiB of aggregate PIOPS volume storage per region (Correct)
  • 20 On-Demand EC2 instances and 300 TiB of aggregate PIOPS volume storage per account
  • 20 On-Demand EC2 instances and 100,000 aggregate PIOPS per region

Answer : 20 On-Demand EC2 instances and 300 TiB of aggregate PIOPS volume storage per region

Explanation You are limited to running up to a total of 20 On-Demand instances across the instance family, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region (by default). You are also limited to an aggregate of 300 TiB of Provisioned IOPS (PIOPS) volume storage per region and 300,000 aggregate PIOPS. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/

A Solutions Architect needs to attach an Elastic Network Interface (ENI) to an EC2 instance. This can be performed when the instance is in different states. What state does “warm attach” refer to?

Options are :

  • Attaching an ENI to an instance when it is stopped (Correct)
  • Attaching an ENI to an instance during the launch process
  • Attaching an ENI to an instance when it is running
  • Attaching an ENI to an instance when it is idle

Answer : Attaching an ENI to an instance when it is stopped

Explanation ENIs can be “hot attached” to running instances. ENIs can be “warm attached” when the instance is stopped. ENIs can be “cold attached” when the instance is launched. References: https://digitalcloud.guru/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/

A client has requested a recommendation for a high-level hosting architecture for a distributed application that will utilize decoupled components.

The application will make use of servers running on EC2 instances and in the client’s own data centers. Which AWS application integration services could you use to support interaction between the servers?

Which of the following options are valid? (choose 2)

Options are :

  • Amazon SQS (Correct)
  • Amazon S3
  • Amazon SWF (Correct)
  • Amazon VPC
  • Amazon DynamoDB

Answer : Amazon SQS Amazon SWF

Explanation Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components. SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks. Amazon Simple Queue Service (Amazon SQS) is a web service that gives you access to message queues that store messages waiting to be processed. SQS offers a reliable, highly-scalable, hosted queue for storing messages in transit between computers. SQS is used for distributed/decoupled applications. A VPC is a logical network construct. Amazon S3 is an object store and is not designed for application integration between servers. Amazon DynamoDB is a non-relational database. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-swf/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sqs/

A Solutions Architect has setup a VPC with a public subnet and a VPN-only subnet. The public subnet is associated with a custom route table that has a route to an Internet Gateway. The VPN-only subnet is associated with the main route table and has a route to a virtual private gateway.

 The Architect has created a new subnet in the VPC and launched an EC2 instance in it. However, the instance cannot connect to the Internet. What is the MOST likely reason?

Options are :

  • The new subnet has not been associated with a route table
  • The Internet Gateway is experiencing connectivity problems
  • There is no NAT Gateway available in the new subnet so Internet connectivity is not possible
  • The subnet has been automatically associated with the main route table which does not have a route to the Internet (Correct)

Answer : The subnet has been automatically associated with the main route table which does not have a route to the Internet

Explanation When you create a new subnet, it is automatically associated with the main route table. Therefore, the EC2 instance will not have a route to the Internet. The Architect should associate the new subnet with the custom route table. NAT Gateways are used for connecting EC2 instances in private subnets to the Internet. This is a valid reason for a private subnet to not have connectivity, however in this case the Architect is attempting to use an Internet Gateway. Subnets are always associated with a route table when created. Internet Gateways are highly available so it's unlikely that IGW connectivity is the issue. References: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html

A membership website your company manages has become quite popular and is gaining members quickly. The website currently runs on EC2 instances with one web server instance and one DB instance running MySQL. You are concerned about the lack of high-availability in the current architecture. What can you do to easily enable HA without making major changes to the architecture?   

Options are :

  • Install MySQL on an EC2 instance in the same AZ and enable replication
  • Create a Read Replica in another AZ
  • Install MySQL on an EC2 instance in another AZ and enable replication (Correct)
  • Enable Multi-AZ for the MySQL instance

Answer : Install MySQL on an EC2 instance in another AZ and enable replication

Explanation If you are installing MySQL on an EC2 instance you cannot enable read replicas or Multi-AZ; instead you would need to use Amazon RDS with a MySQL DB engine to use these features. Migrating to RDS would entail a major change to the architecture so it is not really feasible. In this example it will therefore be easier to use the native HA features of MySQL rather than to migrate to RDS. You would want to place the second MySQL DB instance in another AZ to enable high availability and fault tolerance. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

One of the applications you manage on RDS uses the MySQL DB and has been suffering from performance issues. You would like to setup a reporting process that will perform queries on the database but you’re concerned that the extra load will further impact the performance of the DB and may lead to poor customer experience.

What would be the best course of action to take so you can implement the reporting process?

Options are :

  • Deploy a Read Replica to setup a secondary read and write database instance
  • Configure Multi-AZ to setup a secondary database instance in another region
  • Deploy a Read Replica to setup a secondary read-only database instance (Correct)
  • Configure Multi-AZ to setup a secondary database instance in another Availability Zone

Answer : Deploy a Read Replica to setup a secondary read-only database instance

Explanation The reporting process will perform queries on the database but not writes. Therefore you can use a read replica, which will provide a secondary read-only database, and configure the reporting process to use the read replica. Read replicas are for workload offloading only and do not provide the ability to write to the database. Multi-AZ is used for implementing fault tolerance. With Multi-AZ you can fail over to a DB in another AZ within the region in the event of a failure of the primary DB. However, you can only read and write to the primary DB, so you still need a read replica to offload the reporting job. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/
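
A minimal sketch of creating the read replica with boto3 (the instance identifiers and class are hypothetical); the reporting process would then point its connection string at the replica's endpoint:

```python
import boto3

rds = boto3.client("rds")

# Create a read-only replica of the production MySQL instance for reporting queries
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-reporting-replica",   # hypothetical replica name
    SourceDBInstanceIdentifier="mydb-production",    # hypothetical source instance
    DBInstanceClass="db.m5.large",
)
```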

You have an Amazon RDS Multi-AZ deployment across two availability zones. An outage of the availability zone in which the primary RDS DB instance is running occurs. What actions will take place in this circumstance? (choose 2)   

Options are :

  • A failover will take place once the connection draining timer has expired
  • A manual failover of the DB instance will need to be initiated using Reboot with failover
  • Due to the loss of network connectivity the process to switch to the standby replica cannot take place
  • The failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance (Correct)
  • The primary DB instance will switch over automatically to the standby replica (Correct)

Answer : The failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance; The primary DB instance will switch over automatically to the standby replica

Explanation Multi-AZ RDS creates a replica in another AZ and synchronously replicates to it (DR only). A failover may be triggered in the following circumstances: loss of the primary AZ or primary DB instance failure; loss of network connectivity on the primary; compute (EC2) unit failure on the primary; storage (EBS) unit failure on the primary; the primary DB instance is changed; patching of the OS on the primary DB instance; manual failover (Reboot with failover selected on the primary). During failover RDS automatically updates configuration (including the DNS endpoint) to use the second node. The failover process is not reliant on network connectivity as it is designed for fault tolerance. Connection draining timers are applicable to ELBs, not RDS. You do not need to manually fail over the DB instance; Multi-AZ has an automatic process as outlined above. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

A new application you are designing will store data in an Amazon Aurora MySQL DB. You are looking for a way to enable regional disaster recovery capabilities with fast replication and fast failover. Which of the following options is the BEST solution?

Options are :

  • Create a cross-region Aurora Read Replica
  • Use Amazon Aurora Global Database (Correct)
  • Enable Multi-AZ for the Aurora DB
  • Create an EBS backup of the Aurora volumes and use cross-region replication to copy the snapshot

Answer : Use Amazon Aurora Global Database

Explanation Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages. Aurora Global Database uses storage-based replication with typical latency of less than 1 second, using dedicated infrastructure that leaves your database fully available to serve application workloads. In the unlikely event of a regional degradation or outage, one of the secondary regions can be promoted to full read/write capabilities in less than 1 minute. You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into an AWS Region that is closer to your users, and make it easier to migrate from one AWS Region to another. However, this solution would not provide the fast storage replication and fast failover capabilities of the Aurora Global Database and is therefore not the best option. Enabling Multi-AZ for the Aurora DB would provide AZ-level resiliency within the region, not across regions. Though you can take a DB snapshot and replicate it across regions, it does not provide an automated solution and it would not enable fast failover. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Replication.html
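
A rough sketch, assuming an existing Aurora MySQL cluster, of how a Global Database might be set up with boto3 (all identifiers, ARNs and regions are placeholders, and the exact parameters should be checked against the RDS API docs):

```python
import boto3

# Promote an existing Aurora MySQL cluster to be the primary of a Global Database
rds = boto3.client("rds", region_name="us-east-1")
rds.create_global_cluster(
    GlobalClusterIdentifier="my-global-db",  # hypothetical
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
)

# Add a read-only secondary cluster in another region for disaster recovery
rds_dr = boto3.client("rds", region_name="eu-west-1")
rds_dr.create_db_cluster(
    DBClusterIdentifier="my-aurora-cluster-dr",   # hypothetical
    Engine="aurora-mysql",
    GlobalClusterIdentifier="my-global-db",
)
```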

One of your clients is a banking regulator and they run an application that provides auditing information to the general public using AWS Lambda and API Gateway. A Royal Commission has exposed some suspect lending practices and this has been picked up by the media and raised concern amongst the general public. With some major upcoming announcements expected you’re concerned about traffic spikes hitting the client’s application.

How can you protect the backend systems from traffic spikes?

Options are :

  • Use a CloudFront Edge Cache
  • Enable throttling limits and result caching in API Gateway (Correct)
  • Put the APIs in an S3 bucket and publish as a static website using CloudFront
  • Use ElastiCache as the front-end to cache frequent queries

Answer : Enable throttling limits and result caching in API Gateway

Explanation You can throttle and monitor requests to protect your backend. Resiliency through throttling rules is based on the number of requests per second for each HTTP method (GET, PUT). Throttling can be configured at multiple levels including Global and Service Call. API Gateway is the front-end component of this application, therefore that is where you need to implement the controls. You cannot use CloudFront or ElastiCache to cache APIs. You also cannot put APIs in a bucket and publish as a static website. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/

You need to record connection information from clients using an ELB. When enabling the Proxy Protocol with an ELB to carry connection information from the source requesting the connection, what prerequisites apply? (choose 2)   

Options are :

  • Confirm that your back-end listeners are configured for TCP and front-end listeners are configured for TCP (Correct)
  • Confirm that your load balancer is configured to include the X-Forwarded-For request header
  • Confirm that your instances are on-demand instances
  • Confirm that your load balancer is not behind a proxy server with Proxy Protocol enabled (Correct)
  • Confirm that your load balancer is using HTTPS listeners

Answer : Confirm that your back-end listeners are configured for TCP and front-end listeners are configured for TCP; Confirm that your load balancer is not behind a proxy server with Proxy Protocol enabled

Explanation Proxy Protocol for TCP/SSL carries the source (client) IP/port information. The Proxy Protocol header helps you identify the IP address of a client when you have a load balancer that uses TCP for back-end connections. You need to ensure the client doesn’t go through a proxy or there will be multiple proxy headers. You also need to ensure the EC2 instance’s TCP stack can process the extra information. The back-end and front-end listeners must be configured for TCP. HTTPS listeners do not carry Proxy Protocol information (use the X-Forwarded-For header instead). It doesn't matter what type of pricing model you're using for EC2 (e.g. on-demand, reserved etc.). X-Forwarded-For is an HTTP header that operates at layer 7, whereas Proxy Protocol operates at layer 4. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/ https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-elb-listenerconfig-quickref.html
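
For reference, Proxy Protocol on a Classic Load Balancer is enabled by creating a ProxyProtocolPolicyType policy and applying it to the back-end instance port; a sketch with a hypothetical load balancer name and port:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Create a Proxy Protocol policy on the CLB (load balancer name is hypothetical)
elb.create_load_balancer_policy(
    LoadBalancerName="my-classic-lb",
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[{"AttributeName": "ProxyProtocol", "AttributeValue": "true"}],
)

# Apply the policy to the back-end TCP port the instances listen on
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName="my-classic-lb",
    InstancePort=80,
    PolicyNames=["EnableProxyProtocol"],
)
```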

You need to run a production batch process quickly that will use several EC2 instances. The process cannot be interrupted and must be completed within a short time period.

What is likely to be the MOST cost-effective choice of EC2 instance type to use for this requirement?

Options are :

  • Spot instances
  • Flexible instances
  • On-demand instances (Correct)
  • Reserved instances

Answer : On-demand instances

Explanation The key requirements here are that you need to deploy several EC2 instances quickly to run the batch process and you must ensure that the job completes. The on-demand pricing model is the best fit for this ad-hoc requirement: although Spot pricing may be cheaper, you cannot afford to risk the instances being terminated by AWS when the market price increases. Spot instances provide a very low hourly compute cost and are a good choice when you have flexible start and end times. They are often used for use cases such as grid computing and high-performance computing (HPC). Reserved instances are used for longer, more stable requirements where you can get a discount for a fixed 1 or 3 year term. This pricing model is not good for temporary requirements. There is no such thing as a "flexible instance". References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/

You are creating a CloudFormation template that will provision a new EC2 instance and new EBS volume. What do you need to specify to associate the block store with the instance?   

Options are :

  • The EC2 logical ID
  • Both the EC2 physical ID and the EBS physical ID
  • The EC2 physical ID
  • Both the EC2 logical ID and the EBS logical ID (Correct)

Answer : Both the EC2 logical ID and the EBS logical ID

Explanation Logical IDs are used to reference resources within the template. Physical IDs identify resources outside of AWS CloudFormation templates, but only after the resources have been created. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/amazon-cloudwatch/
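
To illustrate, a minimal template sketch (expressed here as a Python dict and deployed with boto3) where an AWS::EC2::VolumeAttachment resource references both resources by their logical IDs; the AMI ID, stack name, and logical ID names are hypothetical:

```python
import json
import boto3

template = {
    "Resources": {
        "MyInstance": {   # logical ID of the EC2 instance
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-12345678", "InstanceType": "t3.micro"},  # placeholder AMI
        },
        "MyVolume": {     # logical ID of the EBS volume
            "Type": "AWS::EC2::Volume",
            "Properties": {
                "Size": 10,
                "AvailabilityZone": {"Fn::GetAtt": ["MyInstance", "AvailabilityZone"]},
            },
        },
        "MyAttachment": {
            "Type": "AWS::EC2::VolumeAttachment",
            "Properties": {
                # Both resources are referenced by logical ID, not physical ID
                "InstanceId": {"Ref": "MyInstance"},
                "VolumeId": {"Ref": "MyVolume"},
                "Device": "/dev/sdf",
            },
        },
    }
}

boto3.client("cloudformation").create_stack(
    StackName="ec2-with-ebs",              # hypothetical stack name
    TemplateBody=json.dumps(template),
)
```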

A health club is developing a mobile fitness app that allows customers to upload statistics and view their progress. Amazon Cognito is being used for authentication, authorization and user management and users will sign-in with Facebook IDs.

 In order to securely store data in DynamoDB, the design should use temporary AWS credentials. What feature of Amazon Cognito is used to obtain temporary credentials to access AWS services?

Options are :

  • Key Pairs
  • SAML Identity Providers
  • User Pools
  • Identity Pools (Correct)

Answer : Identity Pools

Explanation With an identity pool, users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB. A user pool is a user directory in Amazon Cognito. With a user pool, users can sign in to web or mobile apps through Amazon Cognito, or federate through a third-party identity provider (IdP). SAML Identity Providers are supported IdPs for identity pools but cannot be used for gaining temporary credentials for AWS services. Key pairs are used in Amazon EC2 for access to instances. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
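
A rough sketch of the identity-pool flow with boto3, assuming the user has already signed in with Facebook and obtained a Facebook access token (the identity pool ID and token are placeholders):

```python
import boto3

cognito = boto3.client("cognito-identity")

# Exchange the Facebook token for a Cognito identity ID
identity = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # placeholder pool ID
    Logins={"graph.facebook.com": "FACEBOOK_ACCESS_TOKEN"},            # placeholder token
)

# Obtain temporary AWS credentials scoped by the identity pool's IAM roles
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"graph.facebook.com": "FACEBOOK_ACCESS_TOKEN"},
)["Credentials"]

# Use the temporary credentials to access DynamoDB
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
```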

You are discussing EC2 with a colleague and need to describe the differences between EBS-backed instances and Instance store-backed instances. Which of the statements below would be valid descriptions? (choose 2)   

Options are :

  • EBS volumes can be detached and reattached to other EC2 instances (Correct)
  • By default, root volumes for both types will be retained on termination unless you configured otherwise
  • Instance store volumes can be detached and reattached to other EC2 instances
  • For both types of volume rebooting the instances will result in data loss
  • On an EBS-backed instance, the default action is for the root EBS volume to be deleted upon termination (Correct)

Answer : EBS volumes can be detached and reattached to other EC2 instances; On an EBS-backed instance, the default action is for the root EBS volume to be deleted upon termination

Explanation On an EBS-backed instance, the default action is for the root EBS volume to be deleted upon termination. EBS volumes can be detached and reattached to other EC2 instances. Instance store volumes cannot be detached and reattached to other EC2 instances. When rebooting instances of either type, data will not be lost. By default, root volumes for both types will be deleted on termination unless you configured otherwise. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/

An Auto Scaling group is configured with the default termination policy. The group spans multiple Availability Zones and each AZ has the same number of instances running.

 A scale-in event needs to take place; what is the first step in evaluating which instances to terminate?

Options are :

  • Select instances that use the oldest launch configuration (Correct)
  • Select instances that are closest to the next billing hour
  • Select the newest instance in the group
  • Select instances randomly

Answer : Select instances that use the oldest launch configuration

Explanation Using the default termination policy, when there is an equal number of instances across multiple AZs, Auto Scaling will first select the instances with the oldest launch configuration. If multiple instances share the oldest launch configuration, Auto Scaling then selects the instances that are closest to the next billing hour. References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html

A new application you are deploying uses Docker containers. You are creating a design for an ECS cluster to host the application. Which statements about ECS clusters are correct? (choose 2)   

Options are :

  • ECS Clusters are a logical grouping of container instances that you can place tasks on (Correct)
  • Clusters can contain tasks using the Fargate and EC2 launch type (Correct)
  • Each container instance may be part of multiple clusters at a time
  • Clusters are AZ specific
  • Clusters can contain a single container instance type

Answer : ECS Clusters are a logical grouping of container instances that you can place tasks on; Clusters can contain tasks using the Fargate and EC2 launch type

Explanation ECS Clusters are a logical grouping of container instances that you can place tasks on. Clusters can contain tasks using BOTH the Fargate and EC2 launch types. Each container instance may only be part of one cluster at a time. Clusters are region specific. For clusters with the EC2 launch type, clusters can contain different container instance types. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/

An Auto Scaling Group in which you have four EC2 instances running is becoming heavily loaded. The instances are using the m4.large instance type and the CPUs are hitting 80%. Due to licensing constraints you don’t want to add additional instances to the ASG so you are planning to upgrade to the m4.xlarge instance type instead. You need to make the change immediately but don’t want to terminate the existing instances.

How can you perform the change without causing the ASG to launch new instances? (choose 2)

Options are :

  • Create a new launch configuration with the new instance type specified
  • Change the instance type and then restart the instance
  • Stop each instance and change its instance type. Start the instance again (Correct)
  • On the ASG suspend the Auto Scaling process until you have completed the change (Correct)
  • Edit the existing launch configuration and specify the new instance type

Answer : Stop each instance and change its instance type. Start the instance again; On the ASG suspend the Auto Scaling process until you have completed the change

Explanation When you resize an instance, you must select an instance type that is compatible with the configuration of the instance. You must stop your Amazon EBS–backed instance before you can change its instance type. You can suspend and then resume one or more of the scaling processes for your Auto Scaling group. Suspending scaling processes can be useful when you want to investigate a configuration problem or other issue with your web application and then make changes to your application, without invoking the scaling processes. You do not need to create a new launch configuration and you cannot edit an existing launch configuration. You cannot change an instance type without first stopping the instance. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html
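
A sketch of the sequence with boto3 (the ASG name and instance IDs are hypothetical); in practice you would verify each state change before moving on:

```python
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Suspend scaling so the ASG does not replace the stopped instances
autoscaling.suspend_processes(AutoScalingGroupName="my-asg")

for instance_id in ["i-0abc1234", "i-0def5678"]:          # hypothetical instance IDs
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    # Change the instance type while the instance is stopped
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "m4.xlarge"},
    )
    ec2.start_instances(InstanceIds=[instance_id])

# Resume the scaling processes once all instances are resized
autoscaling.resume_processes(AutoScalingGroupName="my-asg")
```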

A Solutions Architect is creating the business process workflows associated with an order fulfilment system. What AWS service can assist with coordinating tasks across distributed application components?   

Options are :

  • Amazon SNS
  • Amazon SWF (Correct)
  • Amazon SQS
  • Amazon STS

Answer : Amazon SWF

Explanation Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components. SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks. Amazon Security Token Service (STS) is used for requesting temporary credentials. Amazon Simple Queue Service (SQS) is a message queue used for decoupling application components. Amazon Simple Notification Service (SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. SNS supports notifications over multiple transports including HTTP/HTTPS, Email/Email-JSON, SQS and SMS. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-swf/

You are building a new Elastic Container Service (ECS) cluster. The ECS instances are running the EC2 launch type and you would like to enable load balancing to distribute connections to the tasks running on the cluster. You would like the mapping of ports to be performed dynamically and will need to route to different groups of servers based on the path in the requested URL. Which AWS service would you choose to fulfil these requirements?

Options are :

  • Application Load Balancer (Correct)
  • ECS Services
  • Network Load Balancer
  • Classic Load Balancer

Answer : Application Load Balancer

Explanation An ALB allows containers to use dynamic host port mapping so that multiple tasks from the same service are allowed on the same container host – the CLB and NLB do not offer this. An ALB can also route requests based on the content of the request, such as the host field (host-based routing) or the URL path (path-based routing). References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/ https://aws.amazon.com/premiumsupport/knowledge-center/dynamic-port-mapping-ecs/ https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html

In your AWS VPC, you need to add a new subnet that will allow you to host a total of 20 EC2 instances.

Which of the following IPv4 CIDR blocks can you use for this scenario?

Options are :

  • 172.0.0.0/28
  • 172.0.0.0/30
  • 172.0.0.0/29
  • 172.0.0.0/27 (Correct)

Answer : 172.0.0.0/27

Explanation When you create a VPC, you must specify an IPv4 CIDR block for the VPC. The allowed block size is between a /16 netmask (65,536 IP addresses) and a /28 netmask (16 IP addresses). The CIDR block must not overlap with any existing CIDR block that's associated with the VPC. A /27 subnet mask provides 32 addresses. The first four IP addresses and the last IP address in each subnet CIDR block are not available for you to use and cannot be assigned to an instance, leaving 27 usable addresses – enough for 20 EC2 instances. The following list shows total addresses for different subnet masks: /32 = 1; /31 = 2; /30 = 4; /29 = 8; /28 = 16; /27 = 32. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/
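
The arithmetic can be checked with Python's ipaddress module: a /27 yields 32 addresses, and subtracting the 5 AWS-reserved addresses leaves 27 usable, which is enough for 20 instances:

```python
import ipaddress

subnet = ipaddress.ip_network("172.0.0.0/27")
total = subnet.num_addresses   # 32 addresses in a /27
usable = total - 5             # AWS reserves the first 4 and the last address in each subnet
print(total, usable)           # 32 27
```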

You created a second ENI (eth1) interface when launching an EC2 instance. You would like to terminate the instance and have not made any changes.

What will happen to the attached ENIs?

Options are :

  • eth1 will persist but eth0 will be terminated (Correct)
  • eth1 will be terminated, but eth0 will persist
  • Both eth0 and eth1 will persist
  • Both eth0 and eth1 will be terminated with the instance

Answer : eth1 will persist but eth0 will be terminated

Explanation By default eth0 is the only Elastic Network Interface (ENI) created with an EC2 instance when launched. You can add additional interfaces to EC2 instances (the number depends on the instance family/type). Default interfaces are terminated with instance termination. Manually added interfaces are not terminated by default. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/
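
This behaviour is governed by the DeleteOnTermination flag on the ENI attachment; a sketch of checking or changing it with boto3 (the interface and attachment IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Inspect whether an ENI will be deleted when its instance terminates
eni = ec2.describe_network_interfaces(NetworkInterfaceIds=["eni-0abc1234"])
print(eni["NetworkInterfaces"][0]["Attachment"]["DeleteOnTermination"])

# Keep the interface after termination by clearing the flag on the attachment
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0abc1234",                                    # hypothetical ENI ID
    Attachment={"AttachmentId": "eni-attach-0abc1234", "DeleteOnTermination": False},
)
```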

You are designing solutions that will utilize CloudFormation templates and your manager has asked how much extra it will cost to use CloudFormation to deploy resources?

Options are :

  • CloudFormation is charged per hour of usage
  • The cost is based on the size of the template
  • There is no additional charge for AWS CloudFormation, you only pay for the AWS resources that are created (Correct)
  • Amazon charge a flat fee for each time you use CloudFormation

Answer : There is no additional charge for AWS CloudFormation, you only pay for the AWS resources that are created

Explanation There is no additional charge for AWS CloudFormation. You pay for AWS resources (such as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.) created using AWS CloudFormation in the same manner as if you created them manually. You only pay for what you use, as you use it; there are no minimum fees and no required upfront commitments. There is no flat fee, per-hour usage cost, or charge based on the size of the template. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudformation/

You would like to store a backup of an Amazon EBS volume on Amazon S3. What is the easiest way of achieving this?   

Options are :

  • Use SWF to automatically create a backup of your EBS volumes and then upload them to an S3 bucket
  • Write a custom script to automatically copy your data to an S3 bucket
  • Create a snapshot of the volume (Correct)
  • You don’t need to do anything, EBS volumes are automatically backed up by default

Answer : Create a snapshot of the volume

Explanation Snapshots capture a point-in-time state of an instance. Snapshots of Amazon EBS volumes are stored on S3 by design, so you only need to take a snapshot and it will automatically be stored on Amazon S3. EBS volumes are not automatically backed up using snapshots. You need to manually take a snapshot, or you can use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of snapshots. This is not a good use case for Amazon SWF. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
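
A one-call sketch with boto3 (the volume ID is hypothetical); the resulting snapshot is persisted to Amazon S3 automatically:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot an EBS volume; the snapshot data is stored on Amazon S3 behind the scenes
snapshot = ec2.create_snapshot(
    VolumeId="vol-0abc1234",                       # hypothetical volume ID
    Description="Ad-hoc backup before maintenance",
)
print(snapshot["SnapshotId"], snapshot["State"])
```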

When using throttling controls with API Gateway what happens when request submissions exceed the steady-state request rate and burst limits?   

Options are :

  • API Gateway fails the limit-exceeding requests and returns “429 Too Many Requests” error responses to the client (Correct)
  • API Gateway drops the requests and does not return a response to the client
  • The requests will be buffered in a cache until the load reduces
  • API Gateway fails the limit-exceeding requests and returns “500 Internal Server Error” error responses to the client

Answer : API Gateway fails the limit-exceeding requests and returns “429 Too Many Requests” error responses to the client

Explanation You can throttle and monitor requests to protect your backend. Resiliency through throttling rules is based on the number of requests per second for each HTTP method (GET, PUT). Throttling can be configured at multiple levels including Global and Service Call. When request submissions exceed the steady-state request rate and burst limits, API Gateway fails the limit-exceeding requests and returns 429 Too Many Requests error responses to the client. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/ https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
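
As an illustration, stage-level throttling limits can be applied with boto3; the API ID, stage name, and limit values below are hypothetical, and the patch paths should be verified against the API Gateway documentation:

```python
import boto3

apigw = boto3.client("apigateway")

# Apply throttling to every method on the stage ("/*/*" targets all resources and methods)
apigw.update_stage(
    restApiId="a1b2c3d4e5",          # hypothetical REST API ID
    stageName="prod",                # hypothetical stage name
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "500"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "1000"},
    ],
)
```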

Your manager is interested in reducing operational overhead and cost and heard about “serverless” computing at a conference he recently attended. He has asked you if AWS provides any services that the company can leverage. Which services from the list below would you tell him about? (choose 2)

Options are :

  • EC2
  • ECS
  • API Gateway (Correct)
  • Lambda (Correct)
  • EMR

Answer : API Gateway; Lambda

Explanation AWS serverless services include (but are not limited to): API Gateway, Lambda, S3, DynamoDB, SNS, SQS, and Kinesis. EMR, EC2 and ECS all use compute instances running on Amazon EC2 so are not serverless. References: https://aws.amazon.com/serverless/

You are designing the disk configuration for an EC2 instance. The instance will be running an application that requires heavy read/write IOPS. You need to provision a single volume that is 500 GiB in size and needs to support 20,000 IOPS.

What EBS volume type will you select?

Options are :

  • EBS Provisioned IOPS SSD (Correct)
  • EBS Throughput Optimized HDD
  • EBS General Purpose SSD in a RAID 1 configuration
  • EBS General Purpose SSD

Answer : EBS Provisioned IOPS SSD

Explanation This is simply about understanding the performance characteristics of the different EBS volume types. The only EBS volume type listed that supports 20,000 IOPS on a single volume is Provisioned IOPS SSD. SSD, General Purpose (gp2) – baseline of 3 IOPS per GiB with a minimum of 100 IOPS, and the ability to burst up to 3,000 IOPS on smaller volumes. SSD, Provisioned IOPS (io1) – more than 10,000 IOPS, up to 32,000 IOPS per volume and up to 50 IOPS per GiB. HDD, Throughput Optimized (st1) – throughput measured in MB/s, with the ability to burst up to 250 MB/s per TB, a baseline throughput of 40 MB/s per TB, and a maximum throughput of 500 MB/s per volume. HDD, Cold (sc1) – lowest cost storage, cannot be a boot volume; these volumes can burst up to 80 MB/s per TB, with a baseline throughput of 12 MB/s per TB and a maximum throughput of 250 MB/s per volume. HDD, Magnetic (standard) – cheap, infrequently accessed storage; the lowest cost storage that can be a boot volume. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
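
A sketch of provisioning such a volume with boto3 (the Availability Zone is a placeholder); io1 volumes let you specify IOPS independently of size, within the 50 IOPS per GiB limit:

```python
import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS SSD volume: 500 GiB with 20,000 IOPS
ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    Size=500,
    VolumeType="io1",
    Iops=20000,
)
```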

You need to connect from your office to a Linux instance that is running in a public subnet in your VPC using the Internet. Which of the following items are required to enable this access? (choose 2)   

Options are :

  • A Public or Elastic IP address on the EC2 instance (Correct)
  • A bastion host
  • A NAT Gateway
  • An Internet Gateway attached to the VPC and a route table attached to the public subnet pointing to it (Correct)
  • An IPSec VPN

Answer : A Public or Elastic IP address on the EC2 instance; An Internet Gateway attached to the VPC and a route table attached to the public subnet pointing to it

Explanation A public subnet is a subnet that has an Internet Gateway attached and "Enable auto-assign public IPv4 address" enabled. Instances require a public IP or Elastic IP address. It is also necessary to have the subnet route table updated to point to the Internet Gateway, and security groups and network ACLs must be configured to allow the SSH traffic on port 22. A bastion host can be used to access instances in private subnets but is not required for instances in public subnets. A NAT Gateway allows instances in private subnets to access the Internet; it is not used for remote access. An IPSec VPN is not required to connect to an instance in a public subnet. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

You recently noticed that your Network Load Balancer (NLB) in one of your VPCs is not distributing traffic evenly between EC2 instances in your AZs. There are an odd number of EC2 instances spread across two AZs. The NLB is configured with a TCP listener on port 80 and is using active health checks.

What is the most likely problem?

Options are :

  • There is no HTTP listener
  • Cross-zone load balancing is disabled (Correct)
  • Health checks are failing in one AZ due to latency
  • NLB can only load balance within a single AZ

Answer : Cross-zone load balancing is disabled

Explanation Without cross-zone load balancing enabled, the NLB will distribute traffic 50/50 between AZs. As there is an odd number of instances across the two AZs, some instances will not receive any traffic. Therefore enabling cross-zone load balancing will ensure traffic is distributed evenly between available instances in all AZs. If health checks fail this will cause the NLB to stop sending traffic to these instances. However, the health check packets are very small and it is unlikely that latency would be the issue within a region. Listeners are used to receive incoming connections. An NLB listens on TCP, not HTTP, therefore having no HTTP listener is not the issue here. An NLB can load balance across multiple AZs just like the other ELB types. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
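
Cross-zone load balancing is a load balancer attribute on an NLB and can be turned on with boto3; the load balancer ARN below is a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable cross-zone load balancing so traffic is spread across instances in all AZs
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123",  # placeholder
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```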

An EC2 instance on which you are running a video on demand web application has been experiencing high CPU utilization. You would like to take steps to reduce the impact on the EC2 instance and improve performance for consumers. Which of the steps below would help?   

Options are :

  • Create an ELB and place it in front of the EC2 instance
  • Use ElastiCache as the web front-end and forward connections to EC2 for cache misses
  • Create a CloudFront RTMP distribution and point it at the EC2 instance
  • Create a CloudFront distribution and configure a custom origin pointing at the EC2 instance (Correct)

Answer : Create a CloudFront distribution and configure a custom origin pointing at the EC2 instance

Explanation This is a good use case for CloudFront, which is a content delivery network (CDN) that caches content to improve performance for users who are consuming the content. This will take the load off the EC2 instance as CloudFront holds a cached copy of the video files. An origin is the source of the files that the CDN will distribute. Origins can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53 – they can also be external (non-AWS). ElastiCache cannot be used as an Internet-facing web front-end. For RTMP CloudFront distributions, files must be stored in an S3 bucket. Placing an ELB in front of a single EC2 instance does not help to reduce load. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/

You work as a Solutions Architect for a global travel agency. The company has numerous offices around the world and users regularly upload large data sets to a centralized data center in the U.S. The company is moving into AWS and you have been tasked with re-architecting the application stack on AWS.

For the data storage, you would like to use the S3 object store and enable fast and secure transfer of the files over long distances using the public Internet. Many objects will be larger than 100MB.

Considering cost, which of the following solutions would you recommend? (choose 2)

Options are :

  • AWS Direct Connect
  • Use S3 bucket replication
  • Use Route 53 latency based routing
  • Use multipart upload (Correct)
  • Enable S3 transfer acceleration (Correct)

Answer : Use multipart upload; Enable S3 transfer acceleration

Explanation Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. S3 Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. It is used to accelerate object uploads to S3 over long distances (latency). You can also use multipart uploads with transfer acceleration. For objects larger than 100 megabytes use the Multipart Upload capability. You can use cross-region replication to replicate S3 buckets and so it is possible you could replicate them to a region that is closer to the end-users, which would reduce latency. However, this entails having duplicate copies of the data which will incur storage costs. The question has also requested fast and secure transfer, which is the purpose of S3 Transfer Acceleration. AWS Direct Connect creates a private network connection into the AWS data center which will provide predictable bandwidth and latency. However, this is the most expensive option and overkill for this solution. Using Route 53 latency based routing would only work if you had multiple endpoints and could therefore upload to the endpoint with the lowest latency. As you are uploading to a specific S3 bucket, and buckets are region-specific, this would not work. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
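
A sketch of combining both with boto3: the client is pointed at the accelerate endpoint and TransferConfig forces multipart uploads for objects over 100 MB (the bucket and file names are placeholders, and transfer acceleration must already be enabled on the bucket):

```python
import boto3
from botocore.client import Config
from boto3.s3.transfer import TransferConfig

# Use the S3 Transfer Acceleration endpoint for this client
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Split uploads into parts once the object exceeds 100 MB
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
)

s3.upload_file(
    "large-dataset.zip",            # local file (placeholder)
    "my-central-bucket",            # bucket name (placeholder)
    "uploads/large-dataset.zip",
    Config=transfer_config,
)
```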
