Questions : AWS Certified Solutions Architect Associate

You are a Solutions Architect for a systems integrator. Your client is growing their presence in the AWS cloud and has applications and services running in a VPC across multiple availability zones within a region. The client has a requirement to build an operational dashboard within their on-premise data center within the next few months. The dashboard will show near real-time statistics and must therefore be connected over a low-latency, high-performance network.

What would be the best solution for this requirement?

Options are :

  • You cannot connect to multiple AZs from a single location
  • Use redundant VPN connections to two VGW routers in the region, this should give you access to the infrastructure in all AZs
  • Order multiple AWS Direct Connect connections that will be connected to multiple AZs
  • Order a single AWS Direct Connect connection to connect to the client’s VPC. This will provide access to all AZs within the region (Correct)

Answer : Order a single AWS Direct Connect connection to connect to the client’s VPC. This will provide access to all AZs within the region

Explanation With AWS Direct Connect you can provision a low latency, high performance private connection between the client's data center and AWS. A Direct Connect connection connects you to a region and to all AZs within that region. In this case the client has a single VPC, so we know their resources are contained within a single region, and a single Direct Connect connection therefore satisfies the requirements. Because Direct Connect connections give you access to all AZs within a region, you do not need to order multiple connections (though you might want to for redundancy). VPN connections use the public Internet and are therefore not suitable when you need a low latency, high performance and consistent network experience. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/aws-direct-connect/

Your client is looking for a way to use standard templates for describing and provisioning their infrastructure resources on AWS. Which AWS service can be used in this scenario?   

Options are :

  • Auto Scaling
  • Simple Workflow Service (SWF)
  • Elastic Beanstalk
  • CloudFormation (Correct)

Answer : CloudFormation

Explanation AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion. AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. AWS Auto Scaling is used for providing elasticity to EC2 instances by launching or terminating instances based on load. Elastic Beanstalk is a PaaS service for running managed web applications; it is not used for infrastructure deployment. Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components; it does not use templates for deploying infrastructure. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudformation/
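To make the "common language" idea concrete, here is a minimal sketch of a CloudFormation template expressed as JSON (CloudFormation accepts both JSON and YAML). The resource name, AMI ID and instance type are placeholders for illustration, not values from the question.

```python
import json

# A minimal CloudFormation template as a Python dict, serialized to JSON.
# The AMI ID below is a placeholder, not a real image.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Provision a single EC2 instance",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-00000000000000000",  # placeholder AMI
                "InstanceType": "t3.micro",
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

The same template document can be deployed repeatedly to provision identical environments, which is the "standard templates" behaviour the question asks about.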

An application that you will be deploying in your VPC requires 14 EC2 instances that must be placed on distinct underlying hardware to reduce the impact of the failure of a hardware node. The instances will use varying instance types. What configuration will cater to these requirements, taking cost-effectiveness into account?

Options are :

  • You cannot control which nodes your instances are placed on
  • Use dedicated hosts and deploy each instance on a dedicated host
  • Use a Spread Placement Group across two AZs (Correct)
  • Use a Cluster Placement Group within a single AZ

Answer : Use a Spread Placement Group across two AZs

Explanation A spread placement group is a group of instances that are each placed on distinct underlying hardware. Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other. Launching instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. A spread placement group supports a maximum of seven running instances per Availability Zone, which is why the 14 instances must be spread across two AZs. A cluster placement group is a logical grouping of instances within a single Availability Zone. Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both, and where the majority of the network traffic is between the instances in the group. Using a single instance on each dedicated host would be extremely expensive. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
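The seven-instances-per-AZ limit for spread placement groups drives the AZ count. A quick arithmetic check:

```python
import math

# EC2 limit: a spread placement group supports at most 7 running
# instances per Availability Zone.
MAX_SPREAD_INSTANCES_PER_AZ = 7

def min_azs_for_spread(instance_count: int) -> int:
    """Minimum number of AZs needed to hold the instances in one spread group."""
    return math.ceil(instance_count / MAX_SPREAD_INSTANCES_PER_AZ)

print(min_azs_for_spread(14))  # 14 instances -> 2 AZs
```

So a spread placement group across two AZs is the smallest configuration that satisfies the 14-instance requirement.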

A new mobile application that your company is deploying will be hosted on AWS. The users of the application will use mobile devices to upload small amounts of data on a frequent basis. It is expected that the number of users connecting each day could be over 1 million. The data that is uploaded must be stored in a durable and persistent data store. The data store must also be highly available and easily scalable.

Which AWS services would you use?

Options are :

  • DynamoDB (Correct)
  • RDS
  • Kinesis
  • RedShift

Answer : DynamoDB

Explanation Amazon DynamoDB is a fully managed NoSQL database service that provides a durable and persistent data store. You can scale DynamoDB using push-button scaling, which means you can scale the DB at any time without incurring downtime. Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability. RedShift is a data warehousing solution used for analytics on data; it is not used for transactional databases. RDS is not highly available unless you use Multi-AZ, which is not specified in the answer; it is also harder to scale RDS as you must change the instance size and incur downtime. Kinesis is used for collecting, processing and analyzing streaming data; it is not used as a data store. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/

You are creating an operational dashboard in CloudWatch for a number of EC2 instances running in your VPC. Which one of the following metrics will not be available by default?   

Options are :

  • Disk read operations
  • Network in and out
  • CPU utilization
  • Memory usage (Correct)

Answer : Memory usage

Explanation There is no standard CloudWatch metric for memory usage on EC2 instances; collecting it requires a custom metric (for example, published by an agent running on the instance). Use the AWS documentation link below for a comprehensive list of the metrics that are collected by default. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudformation/ https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ec2-metricscollected.html

An application you manage uses an Elastic Load Balancer (ELB) and you need to enable session affinity. You are using the Application Load Balancer type and need to understand how the sticky sessions feature works. Which of the statements below are correct in relation to sticky sessions? (choose 2)

Options are :

  • Cookies can be inserted by the application or by the load balancer when configured
  • Sticky sessions are enabled at the target group level (Correct)
  • ALB supports load balancer-generated cookies only (Correct)
  • The name of the cookie is AWSSTICKY
  • With application-inserted cookies if the back-end instance becomes unhealthy, new requests will be routed by the load balancer normally and the session will be sticky

Answer : Sticky sessions are enabled at the target group level; ALB supports load balancer-generated cookies only

Explanation The Application Load Balancer supports load balancer-generated cookies only (not application-generated) and the cookie name is always AWSALB. Sticky sessions are enabled at the target group level. Session stickiness uses cookies and ensures a client is bound to an individual back-end instance for the duration of the cookie lifetime. With ELB-inserted cookies, if the back-end instance becomes unhealthy, new requests will be routed by the load balancer normally BUT the session will no longer be sticky. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
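As a sketch of how "enabled at the target group level" looks in practice, these are the target group attributes an ELBv2 `modify_target_group_attributes` call would set (the ARN and duration are placeholders; the call itself is shown only as a comment, not executed):

```python
# Target group attributes for load-balancer-generated stickiness on an ALB.
# The cookie duration value is an illustrative choice.
stickiness_attributes = [
    {"Key": "stickiness.enabled", "Value": "true"},
    {"Key": "stickiness.type", "Value": "lb_cookie"},  # ALB: LB-generated cookie only
    {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
]

# With boto3 this would be applied roughly as (placeholder ARN, not executed):
# elbv2.modify_target_group_attributes(
#     TargetGroupArn="arn:aws:elasticloadbalancing:...",
#     Attributes=stickiness_attributes,
# )

attrs = {a["Key"]: a["Value"] for a in stickiness_attributes}
print(attrs["stickiness.enabled"])
```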

The security team in your company is defining new policies for enabling security analysis, resource change tracking, and compliance auditing. They would like to gain visibility into user activity by recording API calls made within the company’s AWS account. The information that is logged must be encrypted. This requirement applies to all AWS regions in which your company has services running.

How will you implement this request? (choose 2)

Options are :

  • Enable encryption with a single KMS key (Correct)
  • Use CloudWatch to monitor API calls
  • Create a CloudTrail trail and apply it to all regions (Correct)
  • Create a CloudTrail trail in each region in which you have services
  • Enable encryption with multiple KMS keys

Answer : Enable encryption with a single KMS key; Create a CloudTrail trail and apply it to all regions

Explanation CloudTrail is used for recording API calls (auditing) whereas CloudWatch is used for recording metrics (performance monitoring). The solution can be deployed with a single trail that is applied to all regions, and a single KMS key can be used to encrypt log files for trails applied to all regions. CloudTrail log files are encrypted using S3 Server-Side Encryption (SSE) by default, and you can also enable SSE-KMS for additional security. You do not need to create a separate trail in each region or use multiple KMS keys. CloudWatch is not used for monitoring API calls. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudtrail/
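A minimal sketch of the parameters such a trail would need; the trail name, bucket name and KMS key alias are placeholders. With boto3, this dict would be passed to `cloudtrail.create_trail(**trail_params)` (not executed here, since it requires AWS credentials):

```python
# Parameters for a single all-region CloudTrail trail encrypted with one
# KMS key. All names below are placeholders for illustration.
trail_params = {
    "Name": "org-audit-trail",             # placeholder trail name
    "S3BucketName": "my-cloudtrail-logs",  # placeholder log bucket
    "IsMultiRegionTrail": True,            # one trail applied to all regions
    "KmsKeyId": "alias/cloudtrail-key",    # a single KMS key for all log files
}

print(trail_params["IsMultiRegionTrail"])
```

The key point is that one trail with `IsMultiRegionTrail` set, plus one KMS key, covers every region — no per-region trails or keys are needed.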

You are a Solutions Architect at Digital Cloud Training and you’re reviewing a customer’s design for a two-tier application with a stateless web front-end running on EC2 and a database back-end running on DynamoDB. The current design consists of a single EC2 web server that connects to the DynamoDB table to store session state data.

The customer has requested that the data is stored across multiple physically separate locations for high availability and durability and the web front-end should be fault tolerant and able to scale automatically in times of high load.

What changes will you recommend to the client? (choose 2)

Options are :

  • Launch an Elastic Load Balancer and attach it to the Auto Scaling Group (Correct)
  • Setup an Auto Scaling Group across multiple Availability Zones configured to run multiple EC2 instances across zones and use simple scaling to increase the group size during periods of high utilization (Correct)
  • Add another compute in another Availability Zone and use Route 53 to distribute traffic using Round Robin
  • Use RDS database in a Multi-AZ configuration to add high availability
  • Use Elasticache Memcached for the datastore to gain high availability across AZs

Answer : Launch an Elastic Load Balancer and attach it to the Auto Scaling Group; Setup an Auto Scaling Group across multiple Availability Zones configured to run multiple EC2 instances across zones and use simple scaling to increase the group size during periods of high utilization

Explanation Availability Zones are physically separate and isolated from each other, and you can use Auto Scaling to launch instances into multiple AZs within a region. This, along with an ELB to distribute incoming connections between the instances in each AZ, provides the required fault tolerance. Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability, so the session state data is already highly available and durable. Adding another compute node in another AZ and using Route 53 round robin to distribute incoming connections might work, but would not provide the required ability to scale automatically in times of high load; this is where Auto Scaling and ELB can assist. RDS is not used for storing session state data. Elasticache Memcached cannot be used as a persistent datastore and does not support replication across AZs. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/

You work as an Enterprise Architect for Digital Cloud Training which employs 1500 people. The company is growing at around 5% per annum. The company strategy is to increasingly adopt AWS cloud services. There is an existing Microsoft Active Directory (AD) service that is used as the on-premise identity and access management system. You want to avoid synchronizing your directory into the AWS cloud or adding permissions to resources in another AD domain. Which solution will best meet these requirements?

Options are :

  • Install a Microsoft Active Directory Domain Controller on AWS and add it into your existing on-premise domain
  • Launch a large AWS Directory Service AD Connector to proxy all authentication back to your on-premise AD service for authentication (Correct)
  • Launch an AWS Active Directory Service for Microsoft Active Directory and setup trust relationships with your on-premise domain
  • Use a large AWS Simple AD in AWS

Answer : Launch a large AWS Directory Service AD Connector to proxy all authentication back to your on-premise AD service for authentication

Explanation The important points here are that you need to utilize the on-premise AD for authentication with AWS services whilst not synchronizing the AD database into the cloud or setting up trust relationships (adding permissions to resources in another AD domain). AD Connector is a directory gateway for redirecting directory requests to your on-premise Active Directory and eliminates the need for directory synchronization. AD Connector is considered the best choice when you want to use an existing AD with AWS services. The small AD Connector is for up to 500 users and the large version caters for up to 5,000, so in this case we need to use the large AD Connector. AWS Directory Service for Microsoft Active Directory is the best choice if you have more than 5,000 users and is a standalone AD service in the cloud; you can also set up trust relationships with existing on-premise AD instances (though you can't replicate/synchronize). In this case we want to leverage the on-premise AD and avoid trust relationships. AWS Simple AD is an Active Directory-compatible directory service in the cloud; it cannot be used to proxy authentication requests to the on-premise AD. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-directory-service/

As a Solutions Architect at Digital Cloud Training you are helping a client to design a multi-tier web application architecture. The client has requested that the architecture provide low-latency connectivity between all servers and be resilient across multiple locations. They would also like to use their existing Microsoft SQL licenses for the database tier. The client needs to maintain the ability to access the operating systems of all servers for the installation of monitoring software.

How would you recommend the database tier is deployed?

Options are :

  • Amazon EC2 instances with Microsoft SQL Server and data replication within an AZ
  • Amazon EC2 instances with Microsoft SQL Server and data replication between two different AZs (Correct)
  • Amazon RDS with Microsoft SQL Server in a Multi-AZ configuration
  • Amazon RDS with Microsoft SQL Server

Answer : Amazon EC2 instances with Microsoft SQL Server and data replication between two different AZs

Explanation As the client needs to access the operating system of the database servers, we need to use EC2 instances, not RDS (which does not allow operating system access). We can implement EC2 instances with Microsoft SQL Server in two different AZs, which provides the requested location redundancy; AZs are connected by low-latency, high-throughput and redundant networking. Implementing the solution in a single AZ would not provide the resiliency requested. RDS is a fully managed service and you do not have access to the underlying EC2 instance (no root access). References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

A mobile client requires data from several application-layer services to populate its user interface. What can the application team use to decouple the client interface from the underlying services behind them?     

Options are :

  • AWS Device Farm
  • Amazon Cognito
  • Amazon API Gateway (Correct)
  • Application Load Balancer

Answer : Amazon API Gateway

Explanation Amazon API Gateway decouples the client application from the back-end application-layer services by providing a single endpoint for API requests. An Application Load Balancer distributes incoming connection requests to back-end EC2 instances; it is not used for decoupling application-layer services from mobile clients. Amazon Cognito is used for adding sign-up, sign-in and access control to mobile apps. AWS Device Farm is an app testing service for Android, iOS and web apps. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/

A member of the security team in your organization has brought an issue to your attention. External monitoring tools have noticed some suspicious traffic coming from a small number of identified public IP addresses. The traffic is destined for multiple resources in your VPC. What would be the easiest way to temporarily block traffic from the IP addresses to any resources in your VPC?   

Options are :

  • Add a rule in each Security Group that is associated with the affected resources that denies traffic from the identified IP addresses
  • Add a rule to the Network ACL to deny traffic from the identified IP addresses. Ensure all subnets are associated with the Network ACL (Correct)
  • Add a rule in the VPC route table that denies access to the VPC from the identified IP addresses
  • Configure the NAT Gateway to deny traffic from the identified IP addresses

Answer : Add a rule to the Network ACL to deny traffic from the identified IP addresses. Ensure all subnets are associated with the Network ACL

Explanation The best way to handle this situation is to create a deny rule in a network ACL using the identified IP addresses as the source, and apply the network ACL to the subnet(s) that are seeing suspicious traffic. You cannot create a deny rule with a security group. You cannot use the route table to create security rules. NAT Gateways are used for allowing instances in private subnets to access the Internet; they do not provide any inbound services. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

A customer runs an API on their website that receives around 1,000 requests each day and has an average response time of 50 ms. It is currently hosted on a single c4.large EC2 instance.

How can high availability be added to the architecture at the LOWEST cost?

Options are :

  • Create an Auto Scaling group with a minimum of one instance and a maximum of two instances, then use an Application Load Balancer to balance the traffic
  • Recreate the API using API Gateway and integrate the API with the existing back-end
  • Create an Auto Scaling group with a maximum of two instances, then use an Application Load Balancer to balance the traffic
  • Recreate the API using API Gateway and use AWS Lambda as the service back-end (Correct)

Answer : Recreate the API using API Gateway and use AWS Lambda as the service back-end

Explanation The API does not receive a high volume of traffic or require extremely low latency, so it would not be cost efficient to use multiple EC2 instances and Elastic Load Balancers. Instead, the best course of action is to recreate the API using API Gateway, which allows the customer to pay only for what they use. AWS Lambda can likewise be used for the back-end processing, reducing cost through a pay-per-use serverless service. If the Architect recreates the API using API Gateway but integrates it with the existing back-end, the solution is not highly available and is not the lowest-cost option. Using Application Load Balancers with multiple EC2 instances would not be cost effective. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/
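A back-of-the-envelope comparison makes the cost argument concrete. The prices below are approximate public on-demand figures used purely for illustration; actual pricing varies by region and changes over time.

```python
# Illustrative monthly cost comparison at ~1,000 requests/day.
requests_per_month = 1_000 * 30

# Highly available EC2 option: two c4.large instances running 24x7
# (ignoring the load balancer, which only adds to the cost).
c4_large_hourly = 0.10  # approximate on-demand USD/hour (illustrative)
ec2_monthly = 2 * c4_large_hourly * 24 * 30

# Serverless option: API Gateway request charges.
apigw_per_million = 3.50  # approximate USD per million requests (illustrative)
apigw_monthly = requests_per_month / 1_000_000 * apigw_per_million
# At ~30k short invocations/month, Lambda compute cost is negligible,
# so it is treated as ~0 here.

print(ec2_monthly)                  # ~144 USD/month
print(apigw_monthly)                # ~0.11 USD/month
print(ec2_monthly > apigw_monthly)  # True
```

Even with generous assumptions in EC2's favour, the serverless option is orders of magnitude cheaper at this traffic level, and it is highly available by design.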

You would like to grant additional permissions to an individual ECS application container on an ECS cluster that you have deployed. You would like to do this without granting additional permissions to the other containers that are running on the cluster.

How can you achieve this?

Options are :

  • Use EC2 instances instead as you can assign different IAM roles on each instance
  • In the same Task Definition, specify a separate Task Role for the application container
  • Create a separate Task Definition for the application container that uses a different Task Role (Correct)
  • You cannot implement granular permissions with ECS containers

Answer : Create a separate Task Definition for the application container that uses a different Task Role

Explanation You can only apply one IAM role to a Task Definition, so you must create a separate Task Definition. A Task Definition is required to run Docker containers in Amazon ECS, and you can specify the IAM role (Task Role) that the task should use for permissions. It is incorrect to say that you cannot implement granular permissions with ECS containers, as IAM roles are granular and are applied through Task Definitions/Task Roles. You can apply different IAM roles to different EC2 instances, but to grant permissions to ECS application containers you must use Task Definitions and Task Roles. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/

A Solutions Architect is designing a front-end that accepts incoming requests for back-end business logic applications. The Architect is planning to use Amazon API Gateway. Which statements are correct in relation to the service? (choose 2)

Options are :

  • API Gateway uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns
  • API Gateway is a network service that provides an alternative to using the Internet to connect customers’ on-premise sites to AWS
  • Throttling can be configured at multiple levels including Global and Service Call (Correct)
  • API Gateway is a collection of resources and methods that are integrated with back-end HTTP endpoints, Lambda functions or other AWS services (Correct)
  • API Gateway is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds

Answer : Throttling can be configured at multiple levels including Global and Service Call; API Gateway is a collection of resources and methods that are integrated with back-end HTTP endpoints, Lambda functions or other AWS services

Explanation Amazon API Gateway is a collection of resources and methods that are integrated with back-end HTTP endpoints, Lambda functions or other AWS services. API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, and throttling can be configured at multiple levels including Global and Service Call. CloudFront is the web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. Direct Connect is the network service that provides an alternative to using the Internet to connect customers' on-premise sites to AWS. DynamoDB is the service that uses AWS Application Auto Scaling to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/

The Perfect Forward Secrecy (PFS) security feature uses a derived session key to provide additional safeguards against the eavesdropping of encrypted data. Which two AWS services support PFS? (choose 2)   

Options are :

  • Elastic Load Balancing (Correct)
  • Auto Scaling
  • EC2
  • CloudFront (Correct)
  • EBS

Answer : Elastic Load Balancing; CloudFront

Explanation CloudFront and ELB support Perfect Forward Secrecy, which creates a new private key for each SSL session. Perfect Forward Secrecy (PFS) provides additional safeguards against the eavesdropping of encrypted data through the use of a unique random session key. The other services listed do not support PFS. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/

You have been asked to review the security posture of your EC2 instances in AWS. When reviewing security groups, which rule types do you need to inspect? (choose 2)   

Options are :

  • Stateful
  • Inbound (Correct)
  • Stateless
  • Deny
  • Outbound (Correct)

Answer : Inbound; Outbound

Explanation Security Groups can be configured with Inbound (ingress) and Outbound (egress) rules. You can only create allow rules in a security group; you cannot create deny rules, and traffic that does not match any rule is implicitly denied. Security groups are stateful (whereas Network ACLs are stateless), and statefulness is a property of the group, not something configured in a rule. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/
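The security-group evaluation model can be sketched in a few lines: allow rules only, no rule ordering, and an implicit deny when nothing matches. The rule shape and the simplified source matching below are illustrative, not a real API.

```python
def sg_allows(rules, port, source_matches):
    """Security-group style evaluation (simplified sketch).

    `rules` is a list of allow rules with 'port' and 'source'.
    `source_matches` stands in for real CIDR matching. Traffic is allowed
    if ANY rule matches; there is no rule ordering and no deny rule.
    """
    for rule in rules:
        if rule["port"] == port and source_matches(rule["source"]):
            return True
    return False  # implicit deny: security groups cannot express deny rules

inbound = [
    {"port": 443, "source": "0.0.0.0/0"},   # HTTPS from anywhere
    {"port": 22, "source": "10.0.0.0/16"},  # SSH from the VPC only
]
print(sg_allows(inbound, 443, lambda src: src == "0.0.0.0/0"))  # True
print(sg_allows(inbound, 3306, lambda src: True))               # False
```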

An application you manage exports data from a relational database into an S3 bucket. The data analytics team wants to import this data into a RedShift cluster in a VPC in the same account. Due to the data being sensitive the security team has instructed you to ensure that the data traverses the VPC without being routed via the public Internet.

Which combination of actions would meet this requirement? (choose 2)

Options are :

  • Set up a NAT gateway in a private subnet to allow the Amazon RedShift cluster to access Amazon S3
  • Create and configure an Amazon S3 VPC endpoint (Correct)
  • Create a NAT gateway in a public subnet to allow the Amazon RedShift cluster to access Amazon S3
  • Enable Amazon RedShift Enhanced VPC routing (Correct)
  • Create a cluster Security Group to allow the Amazon RedShift cluster to access Amazon S3

Answer : Create and configure an Amazon S3 VPC endpoint; Enable Amazon RedShift Enhanced VPC routing

Explanation Amazon RedShift Enhanced VPC routing forces all COPY and UNLOAD traffic between clusters and data repositories through your VPC. Implementing an S3 VPC endpoint allows S3 to be accessed from other AWS services without traversing the public network; Amazon S3 uses the Gateway Endpoint type of VPC endpoint, with which a target for a specified route is entered into the VPC route table and used for traffic destined to a supported AWS service. Cluster Security Groups are used with RedShift on EC2-Classic; regular security groups are used in EC2-VPC. A NAT Gateway is used to allow instances in a private subnet to access the Internet and is of no use in this situation. References: https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

A Solutions Architect is creating a new VPC and is creating a security group and network ACL design. Which of the statements below are true regarding network ACLs? (choose 2) 

Options are :

  • Network ACLs contain a numbered list of rules that are evaluated in order from the lowest number until the explicit deny (Correct)
  • Network ACLs only apply to traffic that is ingress or egress to the subnet not to traffic within the subnet (Correct)
  • With Network ACLs all rules are evaluated until a permit is encountered or continues until the implicit deny
  • Network ACLs operate at the instance level
  • With Network ACLs you can only create allow rules

Answer : Network ACLs contain a numbered list of rules that are evaluated in order from the lowest number until the explicit deny; Network ACLs only apply to traffic that is ingress or egress to the subnet not to traffic within the subnet

Explanation Network ACLs contain a numbered list of rules that are evaluated in order from the lowest number until the explicit deny. Network ACLs only apply to traffic that is ingress or egress to the subnet, not to traffic within the subnet. Network ACLs function at the subnet level, not the instance level. With NACLs you can have both permit and deny rules. Rules are not all evaluated before making a decision (security groups do this); they are evaluated in order until a permit or deny is encountered. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/
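The ordered, first-match-wins evaluation of a network ACL can be sketched as follows (the rule shape is a simplified illustration, matching only on port):

```python
def nacl_evaluate(rules, packet_port):
    """Network-ACL style evaluation (simplified sketch): numbered rules are
    checked in ascending order and the FIRST matching rule wins, whether it
    allows or denies. If no rule matches, the implicit deny applies."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        if rule["port"] == packet_port:
            return rule["action"]
    return "deny"  # the implicit deny ('*' rule) at the end of every NACL

rules = [
    {"number": 100, "port": 443, "action": "allow"},
    {"number": 200, "port": 443, "action": "deny"},  # never reached: 100 matches first
    {"number": 300, "port": 22,  "action": "deny"},
]
print(nacl_evaluate(rules, 443))  # allow (rule 100 wins over rule 200)
print(nacl_evaluate(rules, 80))   # deny (implicit deny, no rule matched)
```

This is the key contrast with security groups, which have no rule ordering and evaluate all rules to look for any allow.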

You created a new Auto Scaling Group (ASG) with two subnets across AZ1 and AZ2 in your VPC. You set the minimum size to 6 instances. After creating the ASG you noticed that all EC2 instances were launched in AZ1 due to limited capacity of the required instance family within AZ2. You’re concerned about the imbalance of resources. What would be the expected behavior of Auto Scaling once the capacity constraints are resolved in AZ2?   

Options are :

  • The ASG will launch three additional EC2 instances in AZ2 and keep the six in AZ1
  • The ASG will not do anything until the next scaling event
  • The ASG will launch six additional EC2 instances in AZ2
  • The ASG will try to rebalance by first creating three new instances in AZ2 and then terminating three instances in AZ1 (Correct)

Answer : The ASG will try to rebalance by first creating three new instances in AZ2 and then terminating three instances in AZ1

Explanation Auto Scaling can perform rebalancing when it finds that the number of instances across AZs is not balanced. Auto Scaling rebalances by launching new EC2 instances in the AZs that have fewer instances first; only then will it start terminating instances in AZs that had more instances. After launching 3 new instances in AZ2, Auto Scaling will not keep all 6 in AZ1; it will terminate 3 of them. The ASG will not launch 6 new instances in AZ2, as you only need 6 in total, spread (ideally) between both AZs. The ASG does not wait for any scaling events; it performs rebalancing automatically. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/
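The launch-first, terminate-second rebalancing behaviour can be sketched for the scenario in the question (this is an illustrative model of the behaviour, not an AWS API):

```python
def rebalance_steps(az_counts, desired_total):
    """Sketch of AZ rebalancing: compute launches for under-provisioned AZs
    (performed first) and terminations for over-provisioned AZs (performed
    only after the launches succeed)."""
    per_az = desired_total // len(az_counts)  # even spread target
    launches = {az: max(0, per_az - n) for az, n in az_counts.items()}
    terminations = {az: max(0, n - per_az) for az, n in az_counts.items()}
    return launches, terminations

# All six instances initially landed in AZ1 due to capacity constraints:
launches, terminations = rebalance_steps({"AZ1": 6, "AZ2": 0}, desired_total=6)
print(launches)      # {'AZ1': 0, 'AZ2': 3} -- launch three in AZ2 first
print(terminations)  # {'AZ1': 3, 'AZ2': 0} -- then terminate three in AZ1
```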

An EC2 status check on an EBS volume is showing as insufficient-data. What is the most likely explanation?   

Options are :

  • The checks have failed on the volume
  • The volume does not have enough data on it to check properly
  • The checks may still be in progress on the volume (Correct)
  • The checks require more information to be manually entered

Answer : The checks may still be in progress on the volume

Explanation The possible values are ok, impaired, warning, or insufficient-data. If all checks pass, the overall status of the volume is ok. If a check fails, the overall status is impaired. If the status is insufficient-data, the checks may still be in progress on your volume. The checks do not require manual input, and they have not failed or the status would be impaired. The volume does not need to hold a certain amount of data to be checked properly. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-ebs/ https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVolumeStatus.html
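The mapping from individual check results to the overall volume status can be sketched as a small function. This is an illustrative model of the status values described above, not AWS's actual implementation; `None` is used here as a stand-in for a check that has not finished yet.

```python
# Sketch: derive an overall EBS volume status from individual check results.
# Possible overall values: ok, impaired, insufficient-data.

def overall_volume_status(check_results):
    """check_results: list of 'passed' / 'failed' / None (None = still running)."""
    if any(result is None for result in check_results):
        return "insufficient-data"  # checks may still be in progress
    if all(result == "passed" for result in check_results):
        return "ok"
    return "impaired"

print(overall_volume_status(["passed", None]))      # insufficient-data
print(overall_volume_status(["passed", "passed"]))  # ok
print(overall_volume_status(["passed", "failed"]))  # impaired
```

In practice you would read these values from the `describe-volume-status` API rather than compute them yourself.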

You have a three-tier web application running on AWS that utilizes Route 53, ELB, Auto Scaling and RDS. One of the EC2 instances that is registered against the ELB fails a health check. What actions will the ELB take in this circumstance?   

Options are :

  • The ELB will update Route 53 by removing any references to the instance
  • The ELB will stop sending traffic to the instance that failed the health check (Correct)
  • The ELB will terminate the instance that failed the health check
  • The ELB will instruct Auto Scaling to terminate the instance and launch a replacement

Answer : The ELB will stop sending traffic to the instance that failed the health check

Explanation The ELB will simply stop sending traffic to the instance once it has determined it to be unhealthy. ELBs are not responsible for terminating EC2 instances. The ELB does not send instructions to the ASG; the ASG has its own health checks and can also use ELB health checks to determine the status of instances. The ELB does not update Route 53 records. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
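The key distinction above is that the load balancer routes around unhealthy targets without terminating them. A minimal sketch, with made-up instance IDs:

```python
# Sketch: an ELB keeps a health state per registered target and simply
# removes unhealthy ones from rotation -- it never terminates them.

def in_rotation(targets):
    """targets: dict of instance-id -> 'healthy' / 'unhealthy'.
    Returns the instances that still receive traffic."""
    return sorted(i for i, state in targets.items() if state == "healthy")

targets = {"i-aaa": "healthy", "i-bbb": "unhealthy", "i-ccc": "healthy"}
print(in_rotation(targets))  # ['i-aaa', 'i-ccc']
print(len(targets))          # 3 -- the unhealthy instance still exists
```

Replacing the failed instance is the Auto Scaling group's job, driven by its own (or ELB-based) health checks.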

You run a two-tier application with a web tier that is behind an Internet-facing Elastic Load Balancer (ELB). You need to restrict access to the web tier to a specific list of public IP addresses.

What are two possible ways you can implement this requirement? (Choose 2)

Options are :

  • Configure your ELB to send the X-Forwarded-For headers and the web servers to filter traffic based on the ELB’s “X-Forwarded-For” header (Correct)
  • Configure a VPC NACL to allow web traffic from the list of IPs and deny all outbound traffic
  • Configure the ELB security group to allow traffic only from the specific list of IPs (Correct)
  • Configure the proxy protocol on the web servers and filter traffic based on IP address
  • Configure the VPC internet gateway to allow incoming traffic from these IP addresses

Answer : Configure your ELB to send the X-Forwarded-For headers and the web servers to filter traffic based on the ELB’s “X-Forwarded-For” header Configure the ELB security group to allow traffic only from the specific list of IPs

Explanation There are two methods you can use to restrict access to a known list of IP addresses. You can either use the ELB security group rules or you can configure the ELB to send the X-Forwarded-For header to the web servers, which can then filter traffic using a local firewall such as iptables. The X-Forwarded-For header for HTTP/HTTPS carries the source IP/port information and only applies at layer 7. The ELB security group controls the ports and protocols that can reach the front-end listener. Proxy protocol applies at layer 4 and is not configured on the web servers. A NACL is applied at the subnet level and, as NACLs are stateless, denying all outbound traffic would block return traffic. You cannot configure an Internet gateway to allow this traffic; Internet gateways are used for Internet access from public subnets. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
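The X-Forwarded-For approach can be sketched as follows. The left-most entry in the header is the original client IP (the ELB appends its own hop to the right); the allow-list IPs below are documentation-range examples.

```python
# Sketch: filter web requests on the back-end servers using the
# X-Forwarded-For header forwarded by the load balancer.

def client_ip_from_xff(xff_header):
    """The left-most entry in X-Forwarded-For is the original client IP."""
    return xff_header.split(",")[0].strip()

def allow_request(xff_header, allowed_ips):
    return client_ip_from_xff(xff_header) in allowed_ips

allowed = {"203.0.113.10", "203.0.113.11"}  # the specific list of public IPs
print(allow_request("203.0.113.10, 10.0.0.5", allowed))  # True
print(allow_request("198.51.100.7, 10.0.0.5", allowed))  # False
```

In production this filtering would typically be done by iptables or the web server's own access rules rather than application code, but the header parsing is the same. Be aware that X-Forwarded-For can be spoofed by clients unless the first hop strips untrusted values.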

You have been asked to deploy a new High-Performance Computing (HPC) cluster. You need to create a design for the EC2 instances that ensures close proximity, low latency and high network throughput.

Which AWS features will help you to achieve this requirement whilst considering cost? (choose 2)

Options are :

  • Use Placement groups (Correct)
  • Use dedicated hosts
  • Use Provisioned IOPS EBS volumes
  • Use EC2 instances with Enhanced Networking (Correct)
  • Launch I/O Optimized EC2 instances in one private subnet in an AZ

Answer : Use Placement groups Use EC2 instances with Enhanced Networking

Explanation Placement groups are a logical grouping of instances in one of the following configurations: Cluster – clusters instances into a low-latency group in a single AZ; Spread – spreads instances across underlying hardware (can span AZs). Placement groups are recommended for applications that benefit from low latency and high bandwidth, and it is recommended to use an instance type that supports enhanced networking. Instances within a placement group can communicate with each other using private or public IP addresses. I/O optimized instances and Provisioned IOPS EBS volumes are geared towards storage performance rather than network performance. Dedicated hosts might ensure close proximity of instances but would not be cost efficient. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/

A Solutions Architect is planning to run some Docker containers on Amazon ECS. The Architect needs to define some parameters for the containers. What application parameters can be defined in an ECS task definition? (choose 2)   

Options are :

  • The application configuration
  • The container images to use and the repositories in which they are located (Correct)
  • The ELB node to be used to scale the task containers
  • The security group rules to apply
  • The ports that should be opened on the container instance for your application (Correct)

Answer : The container images to use and the repositories in which they are located The ports that should be opened on the container instance for your application

Explanation Some of the parameters you can specify in a task definition include: which Docker images to use with the containers in your task; how much CPU and memory to use with each container; whether containers are linked together in a task; the Docker networking mode to use for the containers in your task; what (if any) ports from the container are mapped to the host container instance; whether the task should continue if the container finishes or fails; the commands the container should run when it is started; environment variables that should be passed to the container when it starts; data volumes that should be used with the containers in the task; and the IAM role the task should use for permissions. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/
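Several of the parameters listed above appear in a task definition roughly like this. The image URI, ports, and environment values below are made-up examples; a real task definition is registered with ECS as JSON.

```python
import json

# Hypothetical minimal ECS task definition illustrating the parameters
# discussed above: the container image and its repository, the port
# mappings, CPU/memory, and environment variables.
task_definition = {
    "family": "web-app",
    "containerDefinitions": [{
        "name": "web",
        # Image to use and the repository it lives in (example ECR URI)
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
        "cpu": 256,
        "memory": 512,
        # Container port mapped to the host container instance
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
        # Environment variables passed to the container when it starts
        "environment": [{"name": "STAGE", "value": "prod"}],
        "essential": True,
    }],
}
print(json.dumps(task_definition, indent=2))
```

Note that ELB nodes and security group rules (the incorrect options) are configured on the ECS service and the container instances, not in the task definition.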

You need to create a file system that can be concurrently accessed by multiple EC2 instances within an AZ. The file system needs to support high throughput and the ability to burst. As the data that will be stored on the file system will be sensitive you need to ensure it is encrypted at rest and in transit.

What storage solution would you implement for the EC2 instances?

Options are :

  • Add EBS volumes to each EC2 instance and use an ELB to distribute data evenly between the volumes
  • Add EBS volumes to each EC2 instance and configure data replication
  • Use the Elastic Block Store (EBS) and mount the file system at the block level
  • Use the Elastic File System (EFS) and mount the file system using NFS v4.1 (Correct)

Answer : Use the Elastic File System (EFS) and mount the file system using NFS v4.1

Explanation EFS is a fully-managed service that makes it easy to set up and scale file storage in the Amazon Cloud. EFS uses the NFSv4.1 protocol. Amazon EFS is designed to burst to high throughput levels for periods of time, and it offers the ability to encrypt data at rest and in transit. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-efs/

Which service uses a simple text file to model and provision infrastructure resources, in an automated and secure manner?   

Options are :

  • OpsWorks
  • Simple Workflow Service
  • Elastic Beanstalk
  • CloudFormation (Correct)

Answer : CloudFormation

Explanation AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion. CloudFormation can be used to provision a broad range of AWS resources; think of CloudFormation as deploying infrastructure as code. Elastic Beanstalk is a PaaS solution for deploying and managing applications. SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudformation/
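The "simple text file" in the question is a CloudFormation template. A minimal, hypothetical example modelling a single S3 bucket (the bucket name is made up) looks like this when built as JSON:

```python
import json

# Hypothetical CloudFormation template: a text file that models the
# infrastructure (here, one S3 bucket) so it can be provisioned
# automatically and repeatably.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Provision one S3 bucket as infrastructure as code",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-claims-bucket"},
        }
    },
}
print(json.dumps(template, indent=2))
```

You would hand this file to CloudFormation (via the console, CLI, or API) and it creates, updates, or deletes the described resources as a stack; templates can also be written in YAML.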

An Architect is designing a serverless application that will accept images uploaded by users from around the world. The application will make API calls to back-end services and save the session state data of the user to a database.

Which combination of services would provide a solution that is cost-effective while delivering the least latency?

Options are :

  • Amazon S3, API Gateway, AWS Lambda, Amazon RDS
  • Amazon CloudFront, API Gateway, Amazon S3, AWS Lambda, DynamoDB (Correct)
  • API Gateway, Amazon S3, AWS Lambda, DynamoDB
  • Amazon CloudFront, API Gateway, Amazon S3, AWS Lambda, Amazon RDS

Answer : Amazon CloudFront, API Gateway, Amazon S3, AWS Lambda, DynamoDB

Explanation Amazon CloudFront caches content closer to users at Edge locations around the world, making it the lowest-latency option for uploading content. API Gateway and AWS Lambda are present in all options. DynamoDB can be used for storing session state data. The option that presents API Gateway first does not offer a front-end for users to upload content. Amazon RDS is not a serverless service, so those options can be ruled out. Amazon S3 alone will not provide the least latency for users around the world unless you have many buckets in different regions and a way of directing users to the closest bucket (such as Route 53 latency-based routing); however, you would then need to manage replicating the data. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/ https://aws.amazon.com/blogs/aws/amazon-cloudfront-content-uploads-post-put-other-methods/

You are a Solutions Architect for an insurance company. An application you manage is used to store photos and video files that relate to insurance claims. The application writes data using the iSCSI protocol to a storage array. The array currently holds 10TB of data and is approaching capacity.

Your manager has instructed you that he will not approve further capital expenditure for on-premises infrastructure. Therefore, you are planning to migrate data into the cloud. How can you move data into the cloud whilst retaining low-latency access to frequently accessed data on-premise using the iSCSI protocol?

Options are :

  • Use an AWS Storage Gateway Virtual Tape Library
  • Use an AWS Storage Gateway Volume Gateway in stored volume mode
  • Use an AWS Storage Gateway Volume Gateway in cached volume mode (Correct)
  • Use an AWS Storage Gateway File Gateway in cached volume mode

Answer : Use an AWS Storage Gateway Volume Gateway in cached volume mode

Explanation The AWS Storage Gateway service enables hybrid storage between on-premises environments and the AWS Cloud. It provides low-latency performance by caching frequently accessed data on premises, while storing data securely and durably in Amazon cloud storage services. AWS Storage Gateway supports three storage interfaces: file, volume, and tape. File: the file gateway provides a virtual on-premises file server, which enables you to store and retrieve files as objects in Amazon S3, and offers SMB or NFS-based access with local caching; the question asks for an iSCSI (block) storage solution, so a file gateway is not the right choice. Volume: the volume gateway represents the family of gateways that support block-based volumes, previously referred to as gateway-cached and gateway-stored modes; it provides iSCSI-based block storage, so the volume gateway in cached volume mode is the correct choice as it is compatible with the existing configuration while keeping frequently accessed data cached on premises. Tape: used for backup with popular backup software; each gateway is preconfigured with a media changer and tape drives, and is supported by NetBackup, Backup Exec, Veeam etc. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/aws-storage-gateway/

You need a service that can provide you with control over which traffic to allow or block to your web applications by defining customizable web security rules. You need to block common attack patterns, such as SQL injection and cross-site scripting, as well as creating custom rules for your own applications.

Which AWS service fits these requirements?

Options are :

  • Security Groups
  • AWS WAF (Correct)
  • Route 53
  • CloudFront

Answer : AWS WAF

Explanation AWS WAF is a web application firewall that helps detect and block malicious web requests targeted at your web applications. AWS WAF allows you to create rules that can help protect against common web exploits like SQL injection and cross-site scripting. With AWS WAF you first identify the resource (either an Amazon CloudFront distribution or an Application Load Balancer) that you need to protect, and you then deploy the rules and filters that will best protect your applications. The other services listed do not enable you to create custom web security rules that can block known malicious attacks. References: https://aws.amazon.com/waf/details/
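The kind of pattern-based rule WAF lets you define can be sketched as follows. This is a toy simulation for illustration only: real AWS WAF rules are configured in the service (as managed or custom rule groups), not written as application code, and production-grade SQLi/XSS detection is far more sophisticated than these naive regexes.

```python
import re

# Toy sketch of customizable web security rules: block requests whose
# query string matches a known-bad pattern, otherwise allow them.
RULES = [
    ("sql-injection", re.compile(r"('|--|;|\bunion\b|\bor\b\s+1=1)", re.I)),
    ("xss", re.compile(r"<script\b", re.I)),
]

def inspect(query_string):
    for name, pattern in RULES:
        if pattern.search(query_string):
            return ("block", name)
    return ("allow", None)

print(inspect("id=1 OR 1=1"))         # ('block', 'sql-injection')
print(inspect("q=<script>alert(1)"))  # ('block', 'xss')
print(inspect("q=hello"))             # ('allow', None)
```

The point of the sketch is the rule model: an ordered list of named conditions, each able to block common attack patterns or your own custom ones.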

Your company stores important production data on S3 and you have been asked by your manager to ensure that data is protected from accidental deletion. Which of the choices represent the most cost-effective solutions to protect against accidental object deletion for data in an Amazon S3 bucket? (choose 2)

Options are :

  • Copy your objects to an EBS volume
  • Enable versioning on the bucket (Correct)
  • Use lifecycle actions to backup the data into Glacier (Correct)
  • You do not need to do anything, by default versioning is enabled
  • Use Cross Region Replication to replicate the data to an S3 bucket in another Region

Answer : Enable versioning on the bucket Use lifecycle actions to backup the data into Glacier

Explanation You must consider multiple factors including cost and the practicality of maintaining a solution. This question has more than two possible solutions, so you need to choose the best options from the list. The question asks for the most cost-effective solutions; on that basis, Glacier and versioning are the best choices. Glacier can be used to copy or archive files, and it integrates with versioning to allow you to choose policies for transitioning current and previous versions to a Glacier archive. Versioning stores all versions of an object (including all writes, even if an object is deleted). With versioning you have to pay for the extra consumed space, but there are no data egress costs. Versioning protects against accidental object/data deletion or overwrites. CRR is an Amazon S3 feature that automatically replicates data across AWS Regions; however, there are data egress costs to consider when copying data across regions, and you have to pay for two copies of the data (vs. a lower-cost copy in Glacier). Copying data onto an EBS volume would not be cost-effective as it is a higher cost than the other solutions. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
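The two correct settings can be expressed as the configuration bodies you would pass to the S3 API. This is a sketch only; the rule ID and the 30-day threshold are example values, and in practice you would send these via `put_bucket_versioning` and `put_bucket_lifecycle_configuration`.

```python
# Sketch of the two settings discussed above, expressed as S3 API
# request bodies (example values throughout).

# 1. Enable versioning so deletes/overwrites keep previous versions.
versioning_config = {"Status": "Enabled"}

# 2. Lifecycle rule transitioning non-current (previous) object
#    versions to Glacier after 30 days to keep the extra copies cheap.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-old-versions",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to the whole bucket
        "NoncurrentVersionTransitions": [
            {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
        ],
    }],
}
print(versioning_config["Status"], lifecycle_config["Rules"][0]["ID"])
```

Together these protect against accidental deletion (versioning) while keeping the cost of retaining old versions low (Glacier transition).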

You are designing a solution on AWS that requires a file storage layer that can be shared between multiple EC2 instances. The storage should be highly-available and should scale easily.

Which AWS service can be used for this design?

Options are :

  • Amazon S3
  • Amazon EC2 instance store
  • Amazon EFS (Correct)
  • Amazon EBS

Answer : Amazon EFS

Explanation Amazon Elastic File System (EFS) allows concurrent access from many EC2 instances and is mounted over NFS, which is a file-level protocol. An Amazon Elastic Block Store (EBS) volume can only be attached to a single instance and cannot be shared. Amazon S3 is an object storage system that is accessed via a REST API, not file-level protocols, and cannot be attached to EC2 instances. An EC2 instance store is an ephemeral storage volume that is local to the server on which the instance runs and is not persistent; it is accessed via block protocols and also cannot be shared between instances. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-efs/

A company runs a service on AWS to provide offsite backups for images on laptops and phones. The solution must support millions of customers, with thousands of images per customer. Images will be retrieved infrequently, but must be available for retrieval immediately.

Which is the MOST cost-effective storage option that meets these requirements?

Options are :

  • Amazon Glacier with expedited retrievals
  • Amazon S3 Standard-Infrequent Access (Correct)
  • Amazon S3 Standard
  • Amazon EFS

Answer : Amazon S3 Standard-Infrequent Access

Explanation Amazon S3 Standard-Infrequent Access is the most cost-effective choice. Amazon Glacier with expedited retrievals is fast (1-5 minutes) but not immediate. Amazon EFS is a high-performance file system, is not ideally suited to this scenario, and is not the most cost-effective option. Amazon S3 Standard provides immediate retrieval but is less cost-effective than Standard-Infrequent Access for data that is rarely retrieved. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
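The cost trade-off behind this answer is simple arithmetic: Standard-IA charges less per GB stored but adds a per-GB retrieval fee, so it wins when retrieval is rare. The prices below are illustrative placeholders, not current rates; check the S3 pricing page for real numbers.

```python
# Back-of-the-envelope comparison of S3 Standard vs Standard-IA for an
# infrequent-access workload. All prices are illustrative examples.
STANDARD_GB_MONTH = 0.023   # example storage price, $/GB-month
IA_GB_MONTH = 0.0125        # example Standard-IA storage price
IA_RETRIEVAL_GB = 0.01      # example Standard-IA retrieval fee, $/GB

def monthly_cost(storage_gb, retrieved_gb, storage_rate, retrieval_rate=0.0):
    return storage_gb * storage_rate + retrieved_gb * retrieval_rate

stored, retrieved = 1000, 50  # 1 TB stored, only 50 GB retrieved per month
std = monthly_cost(stored, retrieved, STANDARD_GB_MONTH)
ia = monthly_cost(stored, retrieved, IA_GB_MONTH, IA_RETRIEVAL_GB)
print(f"Standard: ${std:.2f}  Standard-IA: ${ia:.2f}")
```

With these example figures Standard-IA is roughly half the cost, while still offering immediate (millisecond) retrieval, which Glacier does not.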

Your Business Intelligence team use SQL tools to analyze data. What would be the best solution for performing queries on structured data that is being received at a high velocity?   

Options are :

  • EMR running Apache Spark
  • Kinesis Firehose with RDS
  • Kinesis Firehose with RedShift (Correct)
  • EMR using Hive

Answer : Kinesis Firehose with RedShift

Explanation Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. Firehose destinations include Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk. Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence (BI) tools. EMR is a hosted Hadoop framework and doesn't natively support SQL. RDS is a transactional database and is not a supported Kinesis Firehose destination. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/analytics/amazon-kinesis/

You are implementing an Elastic Load Balancer (ELB) for an application that will use encrypted communications. Which two types of security policies are supported by the Elastic Load Balancer for SSL negotiations between the ELB and clients? (Choose 2)   

Options are :

  • ELB predefined Security policies (Correct)
  • AES 256
  • Custom security policies (Correct)
  • None of the answers are correct
  • Security groups

Answer : ELB predefined Security policies Custom security policies

Explanation AWS recommends that you always use the default predefined security policy. When choosing a custom security policy you can select the ciphers and protocols (only for a CLB). Security groups and network ACLs are security controls that apply to instances and subnets. AES 256 is an encryption standard, not a policy. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/

A Solutions Architect is designing a web page for event registrations, and needs a managed service to send a text message to users every time users sign up for an event.

Which AWS service should the Architect use to achieve this?

Options are :

  • Amazon SNS (Correct)
  • AWS Lambda
  • Amazon SQS
  • Amazon STS

Answer : Amazon SNS

Explanation Amazon Simple Notification Service (SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud, with support for multiple transports including HTTP/HTTPS, Email/Email-JSON, SQS and SMS. AWS Security Token Service (STS) is used for requesting temporary credentials. Amazon Simple Queue Service (SQS) is a message queue used for decoupling application components. Lambda is a serverless service that runs code in response to events/triggers. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sns/

A company is moving to a hybrid cloud model and will be setting up private links between all cloud data centers. An Architect needs to determine the connectivity options available when using AWS Direct Connect with public and private VIFs.

Which options are available to the Architect? (choose 2)

Options are :

  • You can substitute your internet connection at your DC with AWS’s public Internet through the use of a NAT gateway in your VPC
  • You can connect to your private VPC subnets over the public VIF
  • You can connect to your private VPC subnets over the private VIF, and to Public AWS services over the public VIF (Correct)
  • You can connect to AWS services over the private VIF
  • Once connected to your VPC through Direct connect you can connect to all AZs within the region (Correct)

Answer : You can connect to your private VPC subnets over the private VIF, and to Public AWS services over the public VIF Once connected to your VPC through Direct connect you can connect to all AZs within the region

Explanation Each AWS Direct Connect connection can be configured with one or more virtual interfaces (VIFs). Public VIFs allow access to public services such as S3, EC2, and DynamoDB, while private VIFs allow access to your VPC. You must use public IP addresses on public VIFs, and private IP addresses on private VIFs. Once you have connected to an AWS region using AWS Direct Connect you can connect to all AZs within that region, and you can also establish IPSec connections over public VIFs to remote regions. You cannot substitute the Internet connection at the DC with a NAT gateway; NAT gateways are used to allow EC2 instances in private subnets to access the Internet. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/aws-direct-connect/
