Practice Test : AWS Certified Solutions Architect Associate

A company is generating large datasets with millions of rows that must be summarized by column. Existing business intelligence tools will be used to build daily reports.

 Which storage service meets the requirements?

Options are :

  • Amazon DynamoDB
  • Amazon RDS
  • Amazon ElastiCache
  • Amazon RedShift (Correct)

Answer : Amazon RedShift

Explanation: Amazon RedShift uses columnar storage and is used for analyzing data using business intelligence tools (SQL). Amazon RDS is more suited to OLTP workloads rather than analytics workloads. Amazon ElastiCache is an in-memory caching service. Amazon DynamoDB is a fully managed NoSQL database service; it is not a columnar database. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-redshift/

An application architect has requested some assistance with selecting a database for a new data warehouse requirement. The database must provide high performance and scalability. The data will be structured and persistent and the DB must support complex queries using SQL and BI tools.

Which AWS service will you recommend?

Options are :

  • RedShift (Correct)
  • RDS
  • DynamoDB
  • ElastiCache

Answer : RedShift

Explanation: Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence (BI) tools. RedShift is a SQL-based data warehouse used for analytics applications and is up to 10x faster than a traditional SQL DB. DynamoDB is a NoSQL database and so is not used for SQL. ElastiCache is not a data warehouse; it is an in-memory database. RDS is a relational database (SQL) but is used for transactional database implementations, not data warehouses. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-redshift/

You are deploying an application on Amazon EC2 that must call AWS APIs. Which method of securely passing credentials to the application should you use?

Options are :

  • Store the API credentials on the instance using instance metadata
  • Store API credentials as an object in Amazon S3
  • Embed the API credentials into your application files
  • Assign IAM roles to the EC2 instances (Correct)

Answer : Assign IAM roles to the EC2 instances

Explanation: Always use IAM roles when you can. It is an AWS best practice not to store API credentials within applications, on file systems or on instances (such as in instance metadata). References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-iam/
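
To make the benefit concrete, here is a minimal sketch (assuming the boto3 SDK and an IAM role already attached to the instance as an instance profile): no access keys appear anywhere in the code or on disk, because the SDK automatically retrieves and rotates temporary credentials from the instance role.

    import boto3

    # No access keys are configured in code, in files, or in instance metadata.
    # When this runs on an EC2 instance with an IAM role (instance profile)
    # attached, boto3 automatically obtains temporary credentials for that
    # role and refreshes them before they expire.
    s3 = boto3.client("s3")

    # Example API call made with the role's permissions (assumes the role
    # grants s3:ListAllMyBuckets).
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])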

You work as a System Administrator at Digital Cloud Training and your manager has asked you to investigate an EC2 web server hosting videos that is constantly running at over 80% CPU utilization. Which of the approaches below would you recommend to fix the issue?   

Options are :

  • Create a Launch Configuration from the instance using the CreateLaunchConfiguration action
  • Create an Elastic Load Balancer and register the EC2 instance to it
  • Create a CloudFront distribution and configure the Amazon EC2 instance as the origin (Correct)
  • Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action

Answer : Create a CloudFront distribution and configure the Amazon EC2 instance as the origin

Explanation: Using the CloudFront content delivery network (CDN) would offload processing from the EC2 instance, as the videos would be cached and served without hitting the EC2 instance. CloudFront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds. CloudFront is a good choice for distribution of frequently accessed static content that benefits from edge delivery, like popular website images, videos, media files or software downloads. An origin is the source of the files that the CDN will distribute; origins can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53, and can also be external (non-AWS). Using CloudFront is preferable to using an Auto Scaling group to launch more instances as it is designed for caching content and would provide the best user experience. Creating an ELB will not help unless there are more instances to distribute the load to. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/

You have been asked to describe the benefits of using AWS Lambda compared to EC2 instances. Which of the below statements is incorrect?

Options are :

  • With AWS Lambda you only pay for what you use
  • AWS Lambda scales automatically
  • With AWS Lambda, the client is responsible for launching and administering the underlying AWS compute infrastructure (Correct)
  • With AWS Lambda the customer does not have any responsibility for deploying and managing the compute infrastructure

Answer : With AWS Lambda, the client is responsible for launching and administering the underlying AWS compute infrastructure

Explanation: AWS Lambda lets you run code as functions without provisioning or managing servers. With serverless computing, your application still runs on servers, but all the server management is done by AWS. The other statements are correct. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/

You are putting together a design for a three-tier web application. The application tier requires a minimum of 6 EC2 instances to be running at all times. You need to provide fault tolerance to ensure that the failure of a single Availability Zone (AZ) will not affect application performance.

Which of the options below is the optimum solution to fulfil these requirements?

Options are :

  • Create an ASG with 12 instances spread across 4 AZs behind an ELB
  • Create an ASG with 18 instances spread across 3 AZs behind an ELB
  • Create an ASG with 9 instances spread across 3 AZs behind an ELB (Correct)
  • Create an ASG with 6 instances spread across 3 AZs behind an ELB

Answer : Create an ASG with 9 instances spread across 3 AZs behind an ELB

Explanation: This is simply about numbers. You need 6 EC2 instances to be running even in the case of an AZ failure. The question asks for the "optimum" solution, so you don't want to over-provision. Remember that it takes time for EC2 instances to boot and applications to initialize, so it may not be acceptable to run with a reduced fleet of instances during this time; therefore you need enough instances that the minimum number are still running, without interruption, in the event of an AZ outage. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
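
The arithmetic can be sanity-checked with a few lines of Python; the figures below simply restate the options from the question and assume instances are spread evenly across the AZs.

    # For each option: (total instances, number of AZs they are spread across).
    options = {
        "12 across 4 AZs": (12, 4),
        "18 across 3 AZs": (18, 3),
        "9 across 3 AZs": (9, 3),
        "6 across 3 AZs": (6, 3),
    }

    required = 6  # instances that must still be running after losing any single AZ

    for name, (total, azs) in options.items():
        per_az = total / azs
        surviving = total - per_az  # capacity left after one AZ fails
        verdict = "meets the requirement" if surviving >= required else "not enough"
        print(f"{name}: {surviving:.0f} instances survive an AZ failure -> {verdict}")

Running this shows that 9 instances across 3 AZs is the smallest option that still leaves 6 running, which is why it is the optimum (not over-provisioned) choice.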

You need to provide AWS Management Console access to a team of new application developers. The team members who perform the same role are assigned to a Microsoft Active Directory group and you have been asked to use Identity Federation and RBAC.

Which AWS services would you use to configure this access? (choose 2)

Options are :

  • AWS IAM Groups
  • AWS Directory Service AD Connector (Correct)
  • AWS Directory Service Simple AD
  • AWS IAM Users
  • AWS IAM Roles (Correct)

Answer : AWS Directory Service AD Connector AWS IAM Roles

Explanation: AD Connector is a directory gateway for redirecting directory requests to your on-premises Active Directory. AD Connector eliminates the need for directory synchronization and the cost and complexity of hosting a federation infrastructure, and connects your existing on-premises AD to AWS. It is the best choice when you want to use an existing Active Directory with AWS services. IAM Roles are created and then "assumed" by trusted entities and define a set of permissions for making AWS service requests. With IAM Roles you can delegate permissions to resources for users and services without using permanent credentials (e.g. user name and password). AWS Directory Service Simple AD is an inexpensive Active Directory-compatible service with common directory features; it is a fully cloud-based solution and does not integrate with an on-premises Active Directory service. You map the groups in AD to IAM Roles, not IAM users or groups. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-directory-service/

Your organization is considering using DynamoDB for a new application that requires elasticity and high-availability. Which of the statements below is true about DynamoDB? (choose 2)   

Options are :

  • When reading data from Amazon DynamoDB, users can specify whether they want the read to be eventually consistent or strongly consistent (Correct)
  • Supports cross-region replication which allows you to replicate across regions (Correct)
  • Data is synchronously replicated across 3 regions
  • There is no default limit of the throughput you can provision
  • To scale DynamoDB you must increase the instance size

Answer : When reading data from Amazon DynamoDB, users can specify whether they want the read to be eventually consistent or strongly consistent Supports cross-region replication which allows you to replicate across regions

Explanation: DynamoDB uses push-button scaling in which you specify the read and write capacity units you need; it does not rely on instance sizes. There are default limits on the throughput you can provision (region specific):

US East (N. Virginia) Region:
  • Per table – 40,000 read capacity units and 40,000 write capacity units
  • Per account – 80,000 read capacity units and 80,000 write capacity units

All Other Regions:
  • Per table – 10,000 read capacity units and 10,000 write capacity units
  • Per account – 20,000 read capacity units and 20,000 write capacity units

References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/
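
As an illustration of the consistency choice, here is a minimal boto3 sketch; the table name and key are hypothetical. The consistency model is selected per read request with the ConsistentRead parameter.

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("SensorReadings")  # hypothetical table name

    # Default read: eventually consistent (cheaper, may return slightly stale data).
    eventual = table.get_item(Key={"SensorId": "sensor-001"})

    # Strongly consistent read: requested explicitly on this call only.
    strong = table.get_item(Key={"SensorId": "sensor-001"}, ConsistentRead=True)

    print(eventual.get("Item"), strong.get("Item"))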

For which of the following workloads should a Solutions Architect consider using Elastic Beanstalk? (choose 2)

Options are :

  • A management task run occasionally
  • A long running worker process (Correct)
  • A data lake
  • A web application using Amazon RDS (Correct)
  • Caching content for Internet-based delivery

Answer : A long running worker process A web application using Amazon RDS

Explanation: A web application using RDS is a good fit as it includes multiple services and Elastic Beanstalk is an orchestration engine. A long-running worker process is also a good Elastic Beanstalk use case, where it manages an SQS queue - again an example of multiple services being orchestrated. A data lake would not be a good fit for Elastic Beanstalk. Content caching would be a good use case for CloudFront. A management task run occasionally might be a good fit for AWS Systems Manager Automation. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-elastic-beanstalk/ https://aws.amazon.com/elasticbeanstalk/faqs/

You are working on a database migration plan from an on-premises data center that includes a variety of databases that are being used for diverse purposes. You are trying to map each database to the correct service in AWS.

Which of the below use cases are a good fit for DynamoDB? (choose 2)

Options are :

  • Backup for on-premises Oracle DB
  • Complex queries and joins
  • Migration from a Microsoft SQL relational database
  • Large amounts of dynamic data that require very low latency (Correct)
  • Rapid ingestion of clickstream data (Correct)

Answer : Large amounts of dynamic data that require very low latency Rapid ingestion of clickstream data

Explanation: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability, along with low read and write latency. Because of its performance profile and the fact that it is a NoSQL database, DynamoDB is good for rapidly ingesting clickstream data. You should use a relational database such as RDS when you need to do complex queries and joins. Microsoft SQL Server and Oracle DB are both relational databases, so DynamoDB is not a good backup target or migration destination for these types of DB. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/

You are a Solutions Architect at Digital Cloud Training. A client from the agricultural sector has approached you for some advice around the collection of a large volume of data from sensors they have deployed around the country. An application will collect data from over 100,000 sensors and each sensor will send around 1KB of data every minute. The data needs to be stored in a durable, low latency data store. The client also needs historical data that is over 1 year old to be moved into a data warehouse where they can perform analytics using standard SQL queries.

What combination of AWS services would you recommend to the client? (choose 2)

Options are :

  • DynamoDB for data ingestion (Correct)
  • Kinesis Data Streams for data ingestion
  • Elasticache for analytics
  • RedShift for the analytics (Correct)
  • EMR for analytics

Answer : DynamoDB for data ingestion RedShift for the analytics

Explanation: The key requirements are that data is recorded in a low latency, durable data store and then moved into a data warehouse once it is over 1 year old for historical analytics. This is a good use case for DynamoDB as the data store and RedShift as the data warehouse. Kinesis is used for real-time data, not historical data, so it is not a good fit. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB provides low read and write latency and is ideal for data ingestion use cases such as this one. Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence (BI) tools. RedShift is a SQL-based data warehouse used for analytics applications. Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information; in this scenario the data being analyzed is not real-time, it is historical. Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. We're looking for a data warehouse in this solution, so running up EC2 instances may not be cost-effective. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-redshift/

You are planning to deploy a number of EC2 instances in your VPC. The EC2 instances will be deployed across several subnets and multiple AZs. What AWS feature can act as an instance-level firewall to control traffic between your EC2 instances?   

Options are :

  • Route table
  • Network ACL
  • AWS WAF
  • Security Group (Correct)

Answer : Security Group

Explanation: Security groups act like a firewall at the instance level; specifically, security groups operate at the network interface level. Network ACLs function at the subnet level. Route tables are not firewalls. AWS WAF is a web application firewall and does not work at the instance level. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

You have been asked to take a snapshot of a non-root EBS volume that contains sensitive corporate data. You need to ensure you can capture all data that has been written to your Amazon EBS volume at the time the snapshot command is issued and are unable to pause any file writes to the volume long enough to take a snapshot.

What is the best way to take a consistent snapshot whilst minimizing application downtime?

Options are :

  • Stop the instance and take the snapshot
  • Take the snapshot while the EBS volume is attached and the instance is running
  • Un-mount the EBS volume, take the snapshot, then re-mount it again (Correct)
  • You can’t take a snapshot for a non-root EBS volume

Answer : Un-mount the EBS volume, take the snapshot, then re-mount it again

Explanation: The key facts here are that you need to take a consistent snapshot whilst minimizing application downtime, and you are unable to pause writes long enough to do so. Therefore the best option is to unmount the EBS volume and take the snapshot; this will be much faster than shutting down the instance, taking the snapshot, and then starting it back up again. Snapshots capture a point-in-time state of an instance and are stored on S3. To take a consistent snapshot, writes must be stopped (paused) until the snapshot is complete; if that is not possible the volume needs to be detached, or if it's an EBS root volume the instance must be stopped. If you take the snapshot with the EBS volume attached you may not get a fully consistent snapshot. Though stopping the instance and taking a snapshot will ensure the snapshot is fully consistent, the requirement is to minimize application downtime. You can take snapshots of any EBS volume. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
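
The unmount/snapshot/remount sequence can be scripted; the sketch below is illustrative only and assumes a Linux instance, a hypothetical volume ID and mount point defined in /etc/fstab, and permission to call ec2:CreateSnapshot.

    import subprocess
    import boto3

    VOLUME_ID = "vol-0123456789abcdef0"   # hypothetical non-root data volume
    MOUNT_POINT = "/data"                 # hypothetical mount point

    ec2 = boto3.client("ec2")

    # 1. Unmount the filesystem so no further writes reach the volume.
    subprocess.run(["umount", MOUNT_POINT], check=True)

    try:
        # 2. Initiate the snapshot while the volume is quiesced.
        snapshot = ec2.create_snapshot(
            VolumeId=VOLUME_ID,
            Description="Consistent snapshot taken while the volume was unmounted",
        )
        print("Started snapshot:", snapshot["SnapshotId"])
    finally:
        # 3. Re-mount immediately; the snapshot completes in the background
        #    from the point-in-time state captured when it was initiated.
        subprocess.run(["mount", MOUNT_POINT], check=True)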

A company needs to deploy virtual desktops for its customers in an AWS VPC and would like to leverage its existing on-premises security principals. AWS Workspaces will be used as the virtual desktop solution.

Which set of AWS services and features will meet the company’s requirements?

Options are :

  • A VPN connection, VPC NACLs and Security Groups
  • A VPN connection, and AWS Directory Services (Correct)
  • AWS Directory Service and AWS IAM
  • Amazon EC2, and AWS IAM

Answer : A VPN connection, and AWS Directory Services

Explanation: A security principal is an individual identity, such as a user account within a directory. The AWS Directory Service includes: AWS Directory Service for Microsoft Active Directory, Simple AD, and AD Connector. One of these services may be ideal depending on detailed requirements. The Directory Service for Microsoft AD and AD Connector both require a VPN or Direct Connect connection. A VPN with NACLs and security groups will not deliver the required solution. AWS Directory Service with IAM, or EC2 with IAM, is also not sufficient for leveraging on-premises security principals. You must have a VPN. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-directory-service/

You have an application running in ap-southeast-2 that requires six EC2 instances running at all times.

With three Availability Zones available in that region (ap-southeast-2a, ap-southeast-2b, and ap-southeast-2c), which of the following deployments provides fault tolerance if any single Availability Zone in ap-southeast-2 becomes unavailable? (choose 2)

Options are :

  • 3 EC2 instances in ap-southeast-2a, 3 EC2 instances in ap-southeast-2b, no EC2 instances in ap-southeast-2c
  • 2 EC2 instances in ap-southeast-2a, 2 EC2 instances in ap-southeast-2b, 2 EC2 instances in ap-southeast-2c
  • 4 EC2 instances in ap-southeast-2a, 2 EC2 instances in ap-southeast-2b, 2 EC2 instances in ap-southeast-2c
  • 6 EC2 instances in ap-southeast-2a, 6 EC2 instances in ap-southeast-2b, no EC2 instances in ap-southeast-2c (Correct)
  • 3 EC2 instances in ap-southeast-2a, 3 EC2 instances in ap-southeast-2b, 3 EC2 instances in ap-southeast-2c (Correct)

Answer : 6 EC2 instances in ap-southeast-2a, 6 EC2 instances in ap-southeast-2b, no EC2 instances in ap-southeast-2c 3 EC2 instances in ap-southeast-2a, 3 EC2 instances in ap-southeast-2b, 3 EC2 instances in ap-southeast-2c

Explanation: This is a simple mathematical problem. Take note that the question asks that 6 instances must be available in the event that ANY SINGLE AZ becomes unavailable. There are only 2 options that fulfil these criteria. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

A major upcoming sales event is likely to result in heavy read traffic to a web application your company manages. As the Solutions Architect you have been asked for advice on how best to protect the database tier from the heavy load and ensure the user experience is not impacted.

The web application owner has also requested that the design be fault tolerant. The current configuration consists of a web application behind an ELB that uses Auto Scaling and an RDS MySQL database running in a multi-AZ configuration. As the database load is highly changeable the solution should allow elasticity by adding and removing nodes as required and should also be multi-threaded.

What recommendations would you make?

Options are :

  • Deploy an ElastiCache Redis cluster with cluster mode disabled and multi-AZ with automatic failover
  • Deploy an ElastiCache Memcached cluster in multi-AZ mode in the same AZs as RDS
  • Deploy an ElastiCache Memcached cluster in both AZs in which the RDS database is deployed (Correct)
  • Deploy an ElastiCache Redis cluster with cluster mode enabled and multi-AZ with automatic failover

Answer : Deploy an ElastiCache Memcached cluster in both AZs in which the RDS database is deployed

Explanation: ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy or compute-intensive application workloads.

Memcached:
  • Not persistent
  • Cannot be used as a data store
  • Supports large nodes with multiple cores or threads
  • Scales out and in, by adding and removing nodes

Redis:
  • Data is persistent
  • Can be used as a datastore
  • Not multi-threaded
  • Scales by adding shards, not nodes

References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-elasticache/ https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/SelectEngine.html

An EC2 instance in an Auto Scaling group that has been reported as unhealthy has been marked for replacement. What is the process Auto Scaling uses to replace the instance? (choose 2)   

Options are :

  • Auto Scaling has to perform rebalancing first, and then terminate the instance
  • Auto Scaling has to launch a replacement first before it can terminate the unhealthy instance
  • If connection draining is enabled, Auto Scaling will wait for in-flight connections to complete or timeout (Correct)
  • Auto Scaling will send a notification to the administrator
  • Auto Scaling will terminate the existing instance before launching a replacement instance (Correct)

Answer : If connection draining is enabled, Auto Scaling will wait for in-flight connections to complete or timeout Auto Scaling will terminate the existing instance before launching a replacement instance

Explanation: If connection draining is enabled, Auto Scaling waits for in-flight requests to complete or time out before terminating instances. Auto Scaling will terminate the existing instance before launching a replacement instance. Auto Scaling does not send a notification to the administrator. Unlike AZ rebalancing, termination of unhealthy instances happens first, then Auto Scaling attempts to launch new instances to replace terminated instances. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/

A Solutions Architect is determining the best method for provisioning Internet connectivity for a data-processing application that will pull large amounts of data from an object storage system via the Internet. The solution must be redundant and have no constraints on bandwidth.

Which option satisfies these requirements?

Options are :

  • Attach an Internet Gateway (Correct)
  • Create a VPC endpoint
  • Deploy NAT Instances in a public subnet
  • Use a NAT Gateway

Answer : Attach an Internet Gateway

Explanation: Both a NAT gateway and an Internet Gateway offer redundancy; however, the NAT gateway is limited to 45 Gbps whereas the IGW does not impose any limits. A VPC endpoint is used to access public services from a VPC without traversing the Internet. NAT instances are EC2 instances that are used, in a similar way to NAT gateways, by instances in private subnets to access the Internet; however, they are not redundant and are limited in bandwidth. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/

For security reasons, you need to ensure that an On-Demand EC2 instance can only be accessed from a specific public IP address (100.156.52.12) using the SSH protocol. You are configuring the Security Group of the EC2 instance, and need to configure an Inbound rule.

Which of the rules below will achieve the requirement?

Options are :

  • Protocol - UDP, Port Range - 22, Source 100.156.52.12/0
  • Protocol - UDP, Port Range - 22, Source 100.156.52.12/32
  • Protocol - TCP, Port Range - 22, Source 100.156.52.12/0
  • Protocol - TCP, Port Range - 22, Source 100.156.52.12/32 (Correct)

Answer : Protocol - TCP, Port Range - 22, Source 100.156.52.12/32

Explanation: The SSH protocol uses TCP port 22, and to specify an individual IP address in a security group rule you use the format X.X.X.X/32. Therefore the rule should allow TCP port 22 from 100.156.52.12/32. Security groups act like a firewall at the instance level; specifically, they operate at the network interface level. You can only assign permit rules in a security group, you cannot assign a deny rule. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/
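
The same rule can be created programmatically; a minimal boto3 sketch (the security group ID is hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow SSH (TCP port 22) only from the single public IP 100.156.52.12.
    # The /32 suffix restricts the rule to exactly one address.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",   # hypothetical security group ID
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 22,
                "ToPort": 22,
                "IpRanges": [
                    {"CidrIp": "100.156.52.12/32", "Description": "Admin SSH access"}
                ],
            }
        ],
    )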

A company is migrating an on-premises 10 TB MySQL database to AWS. The company expects the database to quadruple in size and the business requirement is that replication lag must be kept under 100 milliseconds.

Which Amazon RDS engine meets these requirements?

Options are :

  • Microsoft SQL Server
  • Amazon Aurora (Correct)
  • Oracle
  • MySQL

Answer : Amazon Aurora

Explanation: Aurora databases can scale up to 64 TB and Aurora Replicas feature millisecond latency. All other RDS engines have a limit of 16 TiB maximum DB size and their asynchronous replication typically takes seconds. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/ https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_Limits.html

A Solutions Architect is designing a solution for a financial application that will receive trading data in large volumes. What is the best solution for ingesting and processing a very large number of data streams in near real time?   

Options are :

  • RedShift
  • Kinesis Data Streams (Correct)
  • Kinesis Firehose
  • EMR

Answer : Kinesis Data Streams

Explanation: Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. It enables real-time processing of streaming big data and can be used for rapidly moving data off data producers and then continuously processing the data. Kinesis Data Streams stores data for later processing by applications (a key difference from Firehose, which delivers data directly to AWS services). Kinesis Firehose can allow transformation of data and then delivers the data to supported services. RedShift is a data warehouse solution used for analyzing data. EMR is a hosted Hadoop framework that is used for analytics. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/analytics/amazon-kinesis/
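
A minimal producer sketch with boto3 (the stream name and trade payload are hypothetical); each record carries a partition key that determines which shard it lands on:

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    trade = {"symbol": "ABC", "price": 101.25, "volume": 500}  # hypothetical trade event

    kinesis.put_record(
        StreamName="trading-data",               # hypothetical stream name
        Data=json.dumps(trade).encode("utf-8"),  # payload bytes
        PartitionKey=trade["symbol"],            # groups related records onto the same shard
    )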

You are creating a design for an internal-only AWS service that uses EC2 instances to process information on S3 and store the results in DynamoDB. You need to allow access to several developers who will be testing code and need to apply security best practices to the architecture.

Which of the security practices below are recommended? (choose 2)

Options are :

  • Store the access keys and secret IDs within the application
  • Assign an IAM user for each EC2 instance
  • Use bastion hosts to enforce control and visibility (Correct)
  • Disable root API access keys and secret key (Correct)
  • Control user access through network ACLs

Answer : Use bastion hosts to enforce control and visibility Disable root API access keys and secret key

Explanation: Best practices for securing operating systems and applications include:
  • Disable root API access keys and secret key
  • Restrict access to instances from limited IP ranges using Security Groups
  • Password protect the .pem file on user machines
  • Delete keys from the authorized_keys file on your instances when someone leaves your organization or no longer requires access
  • Rotate credentials (DB, Access Keys)
  • Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys
  • Use bastion hosts to enforce control and visibility

References: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

You have an unhealthy EC2 instance attached to an ELB that is being taken out of service. While the EC2 instance is being de-registered from the ELB, which ELB feature will cause the ELB to stop sending any new requests to the EC2 instance whilst allowing in-flight sessions to complete?   

Options are :

  • ELB connection draining (Correct)
  • ELB session affinity (sticky session)
  • ELB proxy protocol
  • ELB Cross zone load balancing

Answer : ELB connection draining

Explanation: Connection draining is enabled by default and provides a period of time for existing connections to close cleanly. When connection draining is in action a CLB will be in the status "InService: Instance deregistration currently in progress". Cross-zone load balancing is used to enable equal distribution of connections to targets in multiple AZs. Session affinity enables the load balancer to bind a user's session to a specific instance. Proxy Protocol is an Internet protocol used to carry connection information from the source requesting the connection to the destination for which the connection was requested. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/

A Solutions Architect is designing a static website that will use the zone apex of a DNS domain (e.g. example.com). The Architect wants to use the Amazon Route 53 service. Which steps should the Architect take to implement a scalable and cost-effective solution? (choose 2)   

Options are :

  • Create a Route 53 hosted zone, and set the NS records of the domain to use Route 53 name servers (Correct)
  • Host the website on an Amazon EC2 instance, and map a Route 53 Alias record to the public IP address of the EC2 instance
  • Host the website using AWS Elastic Beanstalk, and map a Route 53 Alias record to the Beanstalk stack
  • Serve the website from an Amazon S3 bucket, and map a Route 53 Alias record to the website endpoint (Correct)
  • Host the website on an Amazon EC2 instance with ELB and Auto Scaling, and map a Route 53 Alias record to the ELB endpoint

Answer : Create a Route 53 hosted zone, and set the NS records of the domain to use Route 53 name servers Serve the website from an Amazon S3 bucket, and map a Route 53 Alias record to the website endpoint

Explanation: To use Route 53 for an existing domain the Architect needs to change the NS records to point to the Amazon Route 53 name servers. This will direct name resolution to Route 53 for the domain name. The most cost-effective solution for hosting the website is to use an Amazon S3 bucket; to do this you create a bucket using the same name as the domain name (e.g. example.com) and use a Route 53 Alias record to map to it. Using an EC2 instance instead of an S3 bucket would be more costly, which rules out the two options that explicitly mention EC2. Elastic Beanstalk provisions EC2 instances, so again this would be a more costly option. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
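
A sketch of the zone-apex Alias record with boto3: the hosted zone ID for your domain and the fixed hosted zone ID and DNS name of the S3 website endpoint vary by account and region, so the values below are placeholders rather than values to copy.

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000000000",  # placeholder: your Route 53 hosted zone for example.com
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "example.com",  # the zone apex
                        "Type": "A",
                        "AliasTarget": {
                            # Placeholders: the S3 website endpoint and its fixed hosted
                            # zone ID for the bucket's region (see the AWS documentation).
                            "HostedZoneId": "Z3AQBSTGFYJSTF",
                            "DNSName": "s3-website-us-east-1.amazonaws.com",
                            "EvaluateTargetHealth": False,
                        },
                    },
                }
            ]
        },
    )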

You have been asked to design a cloud-native application architecture using AWS services. What is a typical use case for SQS?   

Options are :

  • Sending emails to clients when a job is completed
  • Co-ordination of work items between different human and non-human workers
  • Providing fault tolerance for S3
  • Decoupling application components to ensure that there is no dependency on the availability of a single component (Correct)

Answer : Decoupling application components to ensure that there is no dependency on the availability of a single component

Explanation: Amazon Simple Queue Service (Amazon SQS) is a web service that gives you access to message queues that store messages waiting to be processed. SQS offers a reliable, highly-scalable, hosted queue for storing messages in transit between computers. SQS is used for distributed/decoupled applications and can be used with RedShift, DynamoDB, EC2, ECS, RDS, S3 and Lambda. SQS cannot be used for providing fault tolerance for S3 as messages can only be stored in the queue for a maximum amount of time. Simple Workflow Service (SWF) is used for co-ordination of work items between different human and non-human workers. Simple Notification Service (SNS) can be used for sending email notifications when certain events happen. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sqs/
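
A minimal sketch of the decoupling pattern with boto3 (the queue URL and message body are hypothetical): the producer enqueues work without depending on any consumer being available, and a worker process polls the queue independently.

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical queue

    # Producer: enqueue work without knowing whether any consumer is currently running.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"job_id": "42", "action": "resize"}')

    # Consumer (typically a separate process or instance): poll, process, then delete.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        print("processing", msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])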

A new security mandate requires that all personnel data held in the cloud is encrypted at rest. Which two methods allow you to encrypt data stored in S3 buckets at rest cost-efficiently? (choose 2)

Options are :

  • Make use of AWS S3 bucket policies to control access to the data at rest
  • Use AWS S3 server-side encryption with Key Management Service keys or Customer-provided keys (Correct)
  • Encrypt the data at the source using the client's CMK keys before transferring it to S3 (Correct)
  • Use CloudHSM
  • Use Multipart upload with SSL

Answer : Use AWS S3 server-side encryption with Key Management Service keys or Customer-provided keys Encrypt the data at the source using the client's CMK keys before transferring it to S3

Explanation: When using S3 encryption your data is always encrypted at rest and you can choose to use KMS-managed keys or customer-provided keys. If you encrypt the data at the source and transfer it in an encrypted state it will also be encrypted in transit. With client-side encryption, data is encrypted on the client side and transferred in an encrypted state; with server-side encryption, data is encrypted by S3 before it is written to disk (and decrypted when it is downloaded). Bucket policies can control whether uploaded data is encrypted, but the answer option only mentions controlling access to the data, which does not meet the security mandate that data must be encrypted. Multipart upload helps with uploading large files but does not encrypt your data. CloudHSM can be used to encrypt data, but as a dedicated service it is charged on an hourly basis and is less cost-efficient compared to S3 encryption or encrypting the data at the source. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
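
Server-side encryption is requested on a normal upload; a minimal sketch (bucket, key and KMS alias names are hypothetical) showing SSE-KMS and, alternatively, SSE-S3:

    import boto3

    s3 = boto3.client("s3")

    # Server-side encryption with an AWS KMS key (SSE-KMS). S3 encrypts
    # the object before writing it to disk.
    s3.put_object(
        Bucket="corp-personnel-data",            # hypothetical bucket
        Key="records/employee-1001.json",        # hypothetical key
        Body=b'{"name": "example"}',
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/personnel-data",      # hypothetical KMS key alias
    )

    # Alternatively, SSE-S3 uses keys fully managed by S3.
    s3.put_object(
        Bucket="corp-personnel-data",
        Key="records/employee-1002.json",
        Body=b'{"name": "example"}',
        ServerSideEncryption="AES256",
    )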

A Solutions Architect is developing an application that will store and index large (>1 MB) JSON files. The data store must be highly available and latency must be consistently low even during times of heavy usage.

Which service should the Architect use?

Options are :

  • AWS CloudFormation
  • DynamoDB
  • Amazon RedShift
  • Amazon EFS (Correct)

Answer : Amazon EFS

Explanation: EFS provides a highly-available data store with consistent low latencies and elasticity to scale as required. RedShift is a data warehouse that is used for analyzing data using SQL. DynamoDB is a low latency, highly available NoSQL DB; you can store JSON items up to 400 KB in size in a DynamoDB table, so for anything bigger you'd want to store a pointer to an object outside of the table. CloudFormation is an orchestration tool and does not help with storing documents. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-efs/

You have created an application in a VPC that uses a Network Load Balancer (NLB). The application will be offered in a service provider model for AWS principals in other accounts within the region to consume. Based on this model, what AWS service will be used to offer the service for consumption?   

Options are :

  • API Gateway
  • VPC Endpoint Services using AWS PrivateLink (Correct)
  • Route 53
  • IAM Role Based Access Control

Answer : VPC Endpoint Services using AWS PrivateLink

Explanation: An Interface endpoint uses AWS PrivateLink and is an elastic network interface (ENI) with a private IP address that serves as an entry point for traffic destined to a supported service. Using PrivateLink you can connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

A critical database runs in your VPC for which availability is a concern. Which RDS DB instance events may force the DB to be taken offline during a maintenance window?   

Options are :

  • Security patching (Correct)
  • Updating DB parameter groups
  • Promoting a Read Replica
  • Selecting the Multi-AZ feature

Answer : Security patching

Explanation: Maintenance windows are configured to allow DB instance modifications to take place, such as scaling and software patching. Some operations require the DB instance to be taken offline briefly, and this includes security patching. Enabling Multi-AZ, promoting a Read Replica and updating DB parameter groups are not events that take place during a maintenance window. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-rds/

An application currently stores all data on Amazon EBS volumes. All EBS volumes must be backed up durably across multiple Availability Zones.

 What is the MOST resilient way to back up volumes?

Options are :

  • Mirror data across two EBS volumes
  • Take regular EBS snapshots (Correct)
  • Create a script to copy data to an EC2 instance store
  • Enable EBS volume encryption

Answer : Take regular EBS snapshots

Explanation: EBS snapshots are stored in S3 and are therefore replicated across multiple locations. Enabling volume encryption would not increase resiliency. Instance stores are ephemeral (non-persistent) data stores and so would not add any resilience. Mirroring data would provide some resilience; however, both volumes would need to be mounted to the EC2 instance within the same AZ, so you are not getting the redundancy required. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/

You would like to deploy an EC2 instance with enhanced networking. What are the pre-requisites for using enhanced networking? (choose 2)   

Options are :

  • Instances must be of T2 Micro type
  • Instances must be EBS backed, not Instance-store backed
  • Instances must be launched from a PV AMI
  • Instances must be launched in a VPC (Correct)
  • Instances must be launched from a HVM AMI (Correct)

Answer : Instances must be launched in a VPC Instances must be launched from a HVM AMI

Explanation: AWS currently supports enhanced networking capabilities using SR-IOV, which provides direct access to network adapters, higher performance (packets-per-second) and lower latency. You must launch an HVM AMI with the appropriate drivers; enhanced networking is only available for certain instance types and is only supported in a VPC. You cannot use enhanced networking with instances launched from a PV AMI. There is no restriction on EBS-backed vs Instance Store-backed instances, and instances do not need to be T2 Micros. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/

A company is serving videos to their customers from us-east-1 from an Amazon S3 bucket. The company's customers are located around the world and there is high demand during peak hours. Customers in Asia complain about slow download speeds during peak hours and customers in all locations have reported experiencing HTTP 500 errors.

How can a Solutions Architect address the issues?

Options are :

  • Use an Amazon Route 53 weighted routing policy for the CloudFront domain name to distribute GET requests between CloudFront and the S3 bucket
  • Cache the web content using Amazon CloudFront and use all Edge locations for content delivery (Correct)
  • Replicate the bucket in us-east-1 and use Amazon Route 53 failover routing to determine which bucket to serve the content from
  • Place an Amazon ElastiCache cluster in front of the S3 bucket

Answer : Cache the web content using Amazon CloudFront and use all Edge locations for content delivery

Explanation: The most straightforward solution is to use CloudFront to cache the content in the Edge locations around the world that are close to users. This is easy to implement and will solve the issues reported. ElastiCache is a database caching service; it does not cache content from S3 buckets. You could replicate the data into buckets in other regions and use latency-based routing to direct clients to the closest bucket, but this option isn't presented; failover routing is used for high availability and would not assist here. Route 53 weighted policies are used to direct traffic proportionally to different sites, not based on latency or geography. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/

A systems integration company that helps customers migrate into AWS repeatedly builds large, standardized architectures using several AWS services. The Solutions Architects have documented the architectural blueprints for these solutions and are looking for a method of automating the provisioning of the resources.

Which AWS service would satisfy this requirement?

Options are :

  • Elastic Beanstalk
  • AWS CloudFormation (Correct)
  • AWS CodeDeploy
  • AWS OpsWorks

Answer : AWS CloudFormation

Explanation: CloudFormation allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. Elastic Beanstalk is a PaaS service that helps you to build and manage web applications. AWS OpsWorks is a configuration management service that helps you build and operate highly dynamic applications, and propagate changes instantly. AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudformation/

An organization is considering ways to reduce administrative overhead and automate build processes. An Architect has suggested using CloudFormation. Which of the statements below are true regarding CloudFormation? (choose 2)   

Options are :

  • Allows you to model your entire infrastructure in a text file (Correct)
  • It provides a common language for you to describe and provision all the infrastructure resources in your cloud environment (Correct)
  • It is used to collect and track metrics, collect and monitor log files, and set alarms
  • It provides visibility into user activity by recording actions taken on your account
  • You pay for CloudFormation and the AWS resources created

Answer : Allows you to model your entire infrastructure in a text file It provides a common language for you to describe and provision all the infrastructure resources in your cloud environment

Explanation: CloudFormation allows you to model your infrastructure in a text file using a common language. You can then provision those resources using CloudFormation and only pay for the resources created; you do not pay for CloudFormation itself. It provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudWatch is used to collect and track metrics, collect and monitor log files, and set alarms. CloudTrail provides visibility into user activity by recording actions taken on your account. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudformation/
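
To make the "model in a text file, pay only for the resources" point concrete, here is a sketch that launches a tiny hypothetical template with boto3; the stack name and the single S3 bucket it creates are illustrative only.

    import boto3

    # A minimal template: the text file models the infrastructure (here, one S3 bucket).
    TEMPLATE = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      ExampleBucket:
        Type: AWS::S3::Bucket
    """

    cfn = boto3.client("cloudformation")

    # CloudFormation itself is free; you pay only for the bucket it creates.
    cfn.create_stack(StackName="example-stack", TemplateBody=TEMPLATE)
    cfn.get_waiter("stack_create_complete").wait(StackName="example-stack")
    print("Stack created")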

You have implemented API Gateway and enabled a cache for a specific stage. How can you control the cache to enhance performance and reduce load on back-end services?   

Options are :

  • Using CloudFront controls
  • Configure the throttling feature
  • Enable bursting
  • Using time-to-live (TTL) settings (Correct)

Answer : Using time-to-live (TTL) settings

Explanation: Caches are provisioned for a specific stage of your APIs. Caching features include customisable keys and a time-to-live (TTL) in seconds for your API data, which enhances response times and reduces load on back-end services. You can throttle and monitor requests to protect your back-end, but it is the cache that reduces the load on the back-end. Bursting isn't an API Gateway feature. CloudFront is a bogus answer: even though it has a cache of its own, it won't help you enhance the performance of the API Gateway cache. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/

A Solutions Architect is designing a workload that requires a high performance object-based storage system that must be shared with multiple Amazon EC2 instances.

Which AWS service delivers these requirements?

Options are :

  • Amazon S3 (Correct)
  • Amazon EFS
  • Amazon EBS
  • Amazon ElastiCache

Answer : Amazon S3

Explanation: Amazon S3 is an object-based storage system. Though object storage systems aren't mounted and shared like file systems or block-based storage systems, they can be shared by multiple instances as they allow concurrent access. Amazon EFS is a file-based storage system; it is not object-based. Amazon EBS is a block-based storage system; it is not object-based. Amazon ElastiCache is a database caching service. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/

You are developing an application that uses Lambda functions. You need to store some sensitive data that includes credentials for accessing the database tier. You are planning to store this data as environment variables within Lambda. How can you ensure this sensitive information is properly secured?   

Options are :

  • There is no need to make any changes as all environment variables are encrypted by default with AWS Lambda
  • This cannot be done, only the environment variables that relate to the Lambda function itself can be encrypted
  • Use encryption helpers that leverage AWS Key Management Service to store the sensitive information as Ciphertext (Correct)
  • Store the environment variables in an encrypted DynamoDB table and configure Lambda to retrieve them as required

Answer : Use encryption helpers that leverage AWS Key Management Service to store the sensitive information as Ciphertext

Explanation: Environment variables for Lambda functions enable you to dynamically pass settings to your function code and libraries without making changes to your code. Environment variables are key-value pairs that you create and modify as part of your function configuration, using either the AWS Lambda Console, the AWS Lambda CLI or the AWS Lambda SDK. You can use environment variables to help libraries know what directory to install files in, where to store outputs, store connection and logging settings, and more. When you deploy your Lambda function, all the environment variables you've specified are encrypted by default after, but not during, the deployment process; they are then decrypted automatically by AWS Lambda when the function is invoked. If you need to store sensitive information in an environment variable, you should encrypt that information before deploying your Lambda function. The Lambda console makes that easier by providing encryption helpers that leverage AWS Key Management Service to store that sensitive information as ciphertext. The environment variables are not encrypted throughout the entire process, so there is a need to take action here. Storing the variables in an encrypted DynamoDB table is not necessary when you can use encryption helpers. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/ https://docs.aws.amazon.com/lambda/latest/dg/env_variables.html
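
The consumption side of that pattern looks roughly like the sketch below; it assumes a hypothetical DB_PASSWORD variable whose value is base64-encoded KMS ciphertext produced by the console's encryption helpers, and an execution role with kms:Decrypt permission on the key.

    import base64
    import os
    import boto3

    kms = boto3.client("kms")

    def lambda_handler(event, context):
        # DB_PASSWORD (hypothetical name) holds base64-encoded ciphertext,
        # not the plaintext credential.
        encrypted = os.environ["DB_PASSWORD"]

        # Decrypt at runtime using KMS; the plaintext never appears in the
        # function configuration.
        plaintext = kms.decrypt(
            CiphertextBlob=base64.b64decode(encrypted)
        )["Plaintext"].decode("utf-8")

        # ... use `plaintext` to connect to the database tier ...
        return {"status": "ok"}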

A Solutions Architect is designing a solution to store and archive corporate documents, and has determined that Amazon Glacier is the right solution. Data must be delivered within 10 minutes of a retrieval request.

Which features in Amazon Glacier can help meet this requirement?

Options are :

  • Standard retrieval
  • Vault Lock
  • Bulk retrieval
  • Expedited retrieval (Correct)

Answer : Expedited retrieval

Explanation: Expedited retrieval enables access to data in 1-5 minutes. Bulk retrievals allow cost-effective access to significant amounts of data in 5-12 hours. Standard retrievals typically complete in 3-5 hours. Vault Lock allows you to easily deploy and enforce compliance controls on individual Glacier vaults via a lockable policy (Vault Lock policy). References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/ https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html
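
For documents archived to Glacier via the S3 Glacier storage class, the retrieval tier is chosen when the restore is requested; a minimal sketch with boto3 (bucket and key names are hypothetical, and the native Glacier API accepts the same Tier parameter on its retrieval jobs):

    import boto3

    s3 = boto3.client("s3")

    # Request an Expedited restore of an archived object; the temporary copy
    # is typically available within 1-5 minutes and is kept for the given
    # number of days.
    s3.restore_object(
        Bucket="corporate-archive",               # hypothetical bucket
        Key="documents/contract-2017.pdf",        # hypothetical key
        RestoreRequest={
            "Days": 2,
            "GlacierJobParameters": {"Tier": "Expedited"},
        },
    )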

A Solutions Architect is migrating a small relational database into AWS. The database will run on an EC2 instance and the DB size is around 500 GB. The database is infrequently used with small amounts of requests spread across the day. The DB is a low priority and the Architect needs to lower the cost of the solution.

 What is the MOST cost-effective storage type?

Options are :

  • Amazon EBS Provisioned IOPS SSD
  • Amazon EFS
  • Amazon EBS General Purpose SSD
  • Amazon EBS Throughput Optimized HDD (Correct)

Answer : Amazon EBS Throughput Optimized HDD

Explanation: Throughput Optimized HDD is the most cost-effective storage option, and for a small DB with low traffic volumes it may be sufficient. Note that the volume must be at least 500 GB in size. Provisioned IOPS SSD provides high performance but at a higher cost. AWS recommend using General Purpose SSD rather than Throughput Optimized HDD for most use cases, but it is more expensive. Amazon Elastic File System (EFS) is not an ideal storage solution for a database. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
