Certification : AWS Certified Solutions Architect Associate Practice Exams Set 4

You are working for a large IT consultancy as a Solutions Architect. One of your clients is launching a file-sharing web application in AWS that requires a durable storage service for hosting static content such as PDFs, Word documents, and high-resolution images.

Which type of storage service should you use to meet this requirement?


Options are :

  • Amazon EBS volume
  • Amazon S3 (Correct)
  • Amazon EC2 instance store
  • Amazon RDS instance

Answer : Amazon S3

An online stock trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement that a surprise audit can happen at any time, and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and can handle up to 150 MB/s of retrieval throughput.

Which of the following should you do to meet the above requirement? (Choose 2)


Options are :

  • Retrieve the data using Amazon Glacier Select.
  • Use Expedited Retrieval to access the financial data. (Correct)
  • Use Bulk Retrieval to access the financial data.
  • Specify a range, or portion, of the financial data archive to retrieve.
  • Purchase provisioned retrieval capacity. (Correct)

Answer : Use Expedited Retrieval to access the financial data. Purchase provisioned retrieval capacity.
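As a sketch of how these two measures combine in practice (the vault name and archive ID below are hypothetical placeholders), provisioned capacity is purchased once per account, and each retrieval job is then initiated at the Expedited tier:

```shell
# Purchase one unit of provisioned retrieval capacity for the account
# ("-" tells the CLI to use the credentials' own account ID)
aws glacier purchase-provisioned-capacity --account-id -

# Initiate an expedited archive-retrieval job (vault name and
# archive ID are placeholders for illustration only)
aws glacier initiate-job \
    --account-id - \
    --vault-name financial-archive \
    --job-parameters '{"Type": "archive-retrieval", "ArchiveId": "EXAMPLE_ARCHIVE_ID", "Tier": "Expedited"}'
```

With provisioned capacity in place, expedited retrievals are guaranteed to be accepted rather than served on a best-effort basis.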

You are working as an IT Consultant for a large media company, where you are tasked with designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this S3 bucket to immediately receive over 2,000 PUT requests and 3,500 GET requests per second at peak hours.

What should you do to ensure optimal performance?


Options are :

  • Use Amazon Glacier instead.
  • Add a random prefix to the key names.
  • Do nothing. Amazon S3 will automatically manage performance at this scale. (Correct)
  • Use a predictable naming scheme in the key names such as sequential numbers or date time sequences.

Answer : Do nothing. Amazon S3 will automatically manage performance at this scale.

You are working as a Solutions Architect for a multinational financial firm. They have a global online trading platform in which users from all over the world regularly upload terabytes of transactional data to a centralized S3 bucket. Which AWS feature should you use in your present system to improve throughput and ensure consistently fast data transfer to the Amazon S3 bucket, regardless of your users' locations?


Options are :

  • FTP
  • AWS Direct Connect
  • Amazon S3 Transfer Acceleration (Correct)
  • Use CloudFront Origin Access Identity

Answer : Amazon S3 Transfer Acceleration
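A minimal sketch of enabling Transfer Acceleration with the AWS CLI (the bucket name is hypothetical); once enabled, transfers must be sent through the accelerate endpoint to benefit:

```shell
# Enable Transfer Acceleration on the bucket
aws s3api put-bucket-accelerate-configuration \
    --bucket global-trading-uploads \
    --accelerate-configuration Status=Enabled

# Route subsequent CLI transfers through the s3-accelerate endpoint
aws configure set default.s3.use_accelerate_endpoint true
aws s3 cp transactions.csv s3://global-trading-uploads/
```

Uploads then enter AWS at the nearest CloudFront edge location and travel to the bucket's region over the AWS backbone.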

You are a new Solutions Architect working for a financial company. Your manager wants the ability to automatically transfer obsolete data from their S3 bucket to a low-cost storage system in AWS.

What is the best solution you can provide to them?


Options are :

  • Use an EC2 instance and a scheduled job to transfer the obsolete data from their S3 location to Amazon Glacier.
  • Use Lifecycle Policies in S3 to move obsolete data to Glacier. (Correct)
  • Use AWS SQS.
  • Use AWS SWF.

Answer : Use Lifecycle Policies in S3 to move obsolete data to Glacier.
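A lifecycle rule of this kind might look as follows (a bucket-wide rule with a hypothetical 90-day threshold for "obsolete"), supplied to `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "ArchiveObsoleteData",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 90, "StorageClass": "GLACIER"}
      ]
    }
  ]
}
```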

You are designing a social media website for a startup company, and the founders want to know ways to mitigate distributed denial-of-service (DDoS) attacks against their website.

Which of the following are not viable mitigation techniques? (Choose 2)


Options are :

  • Use Dedicated EC2 instances to ensure that each instance has the maximum performance possible. (Correct)
  • Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth. (Correct)
  • Use an Amazon CloudFront service for distributing both static and dynamic content.
  • Use an Application Load Balancer with Auto Scaling groups for your EC2 instances then restrict direct Internet traffic to your Amazon RDS database by deploying to a private subnet.
  • Use AWS Shield and AWS WAF.

Answer : Use Dedicated EC2 instances to ensure that each instance has the maximum performance possible. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth.

You are working for a tech company that uses a lot of EBS volumes in their EC2 instances. An incident occurred that requires you to delete the EBS volumes and then re-create them afterwards.

What step should you do before you delete the EBS volumes?


Options are :

  • Create a copy of the EBS volume using the CopyEBSVolume command.
  • Store a snapshot of the volume. (Correct)
  • Download the content to an EC2 instance.
  • Back up the data into a physical disk.

Answer : Store a snapshot of the volume.

One of your clients is using Amazon S3 in the ap-southeast-1 region to store training videos for their employee onboarding process. The client stores the videos using the Standard storage class.

Where are your client's training videos replicated?


Options are :

  • A single facility in ap-southeast-1 and a single facility in eu-central-1
  • A single facility in ap-southeast-1 and a single facility in us-east-1
  • Multiple facilities in ap-southeast-1 (Correct)
  • A single facility in ap-southeast-1

Answer : Multiple facilities in ap-southeast-1

Your company has recently deployed a new web application which uses a serverless-based architecture in AWS. Your manager instructed you to implement CloudWatch metrics to monitor your systems more effectively. You know that Lambda automatically monitors functions on your behalf and reports metrics through Amazon CloudWatch.   

In this scenario, what types of data do these metrics monitor? (Choose 2)


Options are :

  • ReservedConcurrentExecutions
  • Invocations (Correct)
  • Errors (Correct)
  • IteratorSize
  • Dead Letter Queue

Answer : Invocations Errors

A company is hosting EC2 instances in a non-production environment to process non-priority batch loads, which can be interrupted at any time.

What is the best instance purchasing option for the EC2 instances in this case?


Options are :

  • Reserved Instances
  • On-Demand Instances
  • Spot Instances (Correct)
  • Scheduled Reserved Instances

Answer : Spot Instances

You are working as a Solutions Architect for a leading airline company, where you are building a decoupled application in AWS using EC2, an Auto Scaling group, S3, and SQS. You designed the architecture so that the EC2 instances consume messages from the SQS queue and automatically scale up or down based on the number of messages in the queue.

In this scenario, which of the following statements is false about SQS?


Options are :

  • Standard queues provide at-least-once delivery, which means that each message is delivered at least once.
  • Standard queues preserve the order of messages. (Correct)
  • Amazon SQS can help you build a distributed application with decoupled components.
  • FIFO queues provide exactly-once processing.

Answer : Standard queues preserve the order of messages.
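Had strict ordering been required, the queue could have been created as a FIFO queue instead. A sketch with the AWS CLI (the queue name is hypothetical; FIFO queue names must end in `.fifo`):

```shell
# Create a FIFO queue, which preserves order and provides
# exactly-once processing, unlike a standard queue
aws sqs create-queue \
    --queue-name flight-bookings.fifo \
    --attributes FifoQueue=true,ContentBasedDeduplication=true
```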

The media company that you are working for has a video transcoding application running on Amazon EC2. Each EC2 instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. This application has a large backlog of videos which need to be transcoded. Your manager would like to reduce this backlog by adding more EC2 instances, however, these instances are only needed until the backlog is reduced.

In this scenario, which type of Amazon EC2 instance is the most cost-effective type to use without sacrificing performance?


Options are :

  • Reserved instances
  • Spot instances (Correct)
  • Dedicated instances
  • On-demand instances

Answer : Spot instances

You currently have an Augmented Reality (AR) mobile game with a serverless backend. It uses a DynamoDB table, which was created using the AWS CLI, to store all the user data and information gathered from the players, and a Lambda function to pull the data from DynamoDB. The game is used by millions of users each day to read and store data.

How would you design the application to improve its overall performance and make it more scalable while keeping the costs low? (Choose 2)


Options are :

  • Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity. (Correct)
  • Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on client device using ElastiCache.
  • Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single-sign on. Manually set the provisioned read and write capacity to a higher RCU and WCU.
  • Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication. (Correct)
  • Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds.

Answer : Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity. Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication.

An online job site is using NGINX for its application servers hosted in EC2 instances and MongoDB Atlas for its database-tier. MongoDB Atlas is a fully automated third-party cloud service which is not provided by AWS, but supports VPC peering to connect to your VPC. 

Which of the following items are invalid VPC peering configurations? (Choose 2)


Options are :

  • Two VPCs peered to a specific CIDR block in one VPC
  • Transitive Peering (Correct)
  • Edge to Edge routing via a gateway (Correct)
  • One to one relationship between two Virtual Private Cloud networks
  • One VPC Peered with two VPCs using longest prefix match

Answer : Transitive Peering Edge to Edge routing via a gateway

You are a new Solutions Architect in your company. Upon checking the existing Inbound Rules of your Network ACL, you saw this configuration:



If a computer with an IP address of 110.238.109.37 sends a request to your VPC, what will happen?


Options are :

  • Initially, it will be allowed and then after a while, the connection will be denied.
  • Initially, it will be denied and then after a while, the connection will be allowed.
  • It will be allowed. (Correct)
  • It will be denied.

Answer : It will be allowed.

Your company has an e-commerce application that saves transaction logs to an S3 bucket. You are instructed by the CTO to configure the application to keep the transaction logs for one month for troubleshooting purposes and then purge them. What should you do to accomplish this requirement?


Options are :

  • Add a new bucket policy on the Amazon S3 bucket.
  • Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a month (Correct)
  • Create a new IAM policy for the Amazon S3 bucket that automatically deletes the logs after a month
  • Enable CORS on the Amazon S3 bucket which will enable the automatic monthly deletion of data

Answer : Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a month
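Such a rule might look as follows (the `logs/` prefix is a hypothetical placeholder for where the application writes its transaction logs), supplied to `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "PurgeTransactionLogs",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Expiration": {"Days": 30}
    }
  ]
}
```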

A company is using a custom shell script to automate the deployment and management of their EC2 instances. The script is using various AWS CLI commands such as revoke-security-group-ingress, revoke-security-group-egress, run-scheduled-instances and many others.   

In the shell script, what does the revoke-security-group-ingress command do?


Options are :

  • Removes one or more security groups from a rule.
  • Removes one or more security groups from an Amazon EC2 instance.
  • Removes one or more ingress rules from a security group. (Correct)
  • Removes one or more egress rules from a security group.

Answer : Removes one or more ingress rules from a security group.
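A sketch of the command in isolation (the security group ID, port, and CIDR below are placeholders): each invocation removes one matching inbound rule from the named security group.

```shell
# Remove the inbound rule that allows TCP/8080 from 203.0.113.0/24
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8080 \
    --cidr 203.0.113.0/24
```

The companion command `revoke-security-group-egress` does the same for outbound rules.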

You are unable to connect via SSH from your home computer to a new EC2 instance that you recently deployed. However, you were able to successfully access other existing instances in your VPC without any issues.

Which of the following should you check and possibly correct to restore connectivity?


Options are :

  • Use Amazon Data Lifecycle Manager.
  • Configure the Network Access Control List of your VPC to permit ingress traffic over port 22 from your IP.
  • Configure the Security Group of the EC2 instance to permit ingress traffic over port 3389 from your IP.
  • Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from your IP. (Correct)

Answer : Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from your IP.
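A sketch of the fix with the AWS CLI (the security group ID and home IP address are placeholders): add an inbound rule allowing SSH from your own IP only.

```shell
# Allow SSH (TCP/22) from a single home IP address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 198.51.100.25/32
```

A `/32` CIDR restricts the rule to exactly one source address, which is the least-privilege option for administrative access.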

A leading media company has an application hosted in an EBS-backed EC2 instance which uses Simple Workflow Service (SWF) to handle its sequential background jobs. The application works well in production and your manager asked you to also implement the same solution to other areas of their business.   

In which other scenarios can you use both Simple Workflow Service (SWF) and Amazon EC2 as a solution? (Choose 2)


Options are :

  • For a distributed session management for your mobile application.
  • Managing a multi-step and multi-decision checkout process of an e-commerce mobile app. (Correct)
  • Orchestrating the execution of distributed business processes. (Correct)
  • For applications that require a message queue.
  • For web applications that require content delivery networks.

Answer : Managing a multi-step and multi-decision checkout process of an e-commerce mobile app. Orchestrating the execution of distributed business processes.

You are a new Solutions Architect in a large insurance firm. To maintain compliance with HIPAA regulations, all data being backed up or stored on Amazon S3 needs to be encrypted at rest. In this scenario, what is the best method of encryption for your data, assuming S3 is being used for storing financial data? (Choose 2)


Options are :

  • Enable SSE on an S3 bucket to make use of AES-256 encryption (Correct)
  • Store the data in encrypted EBS snapshots
  • Encrypt the data locally using your own encryption keys, then copy the data to Amazon S3 over HTTPS endpoints (Correct)
  • Store the data on EBS volumes with encryption enabled instead of using Amazon S3
  • Use AWS Shield to protect your data at rest

Answer : Enable SSE on an S3 bucket to make use of AES-256 encryption Encrypt the data locally using your own encryption keys, then copy the data to Amazon S3 over HTTPS endpoints
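A sketch of the server-side option with the AWS CLI (bucket and key names are hypothetical): SSE-S3 can be requested per object, or set as the bucket default so every new object is encrypted at rest.

```shell
# Upload a single object with S3-managed AES-256 encryption (SSE-S3)
aws s3api put-object \
    --bucket insurance-financial-records \
    --key reports/2020-q1.pdf \
    --body ./2020-q1.pdf \
    --server-side-encryption AES256

# Or make AES-256 the bucket-wide default encryption
aws s3api put-bucket-encryption \
    --bucket insurance-financial-records \
    --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
```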

A tech company is currently using Auto Scaling for their web application. A new AMI now needs to be used for launching a fleet of EC2 instances.

Which of the following changes needs to be done?


Options are :

  • Do nothing. You can start directly launching EC2 instances in the Auto Scaling group with the same launch configuration.
  • Create a new launch configuration. (Correct)
  • Create a new target group.
  • Create a new target group and launch configuration.

Answer : Create a new launch configuration.
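Because launch configurations are immutable, the change is a two-step process. A sketch with the AWS CLI (all names and the AMI ID are hypothetical):

```shell
# Create a new launch configuration referencing the new AMI
aws autoscaling create-launch-configuration \
    --launch-configuration-name web-app-lc-v2 \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.medium

# Point the existing Auto Scaling group at the new launch configuration
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name web-app-asg \
    --launch-configuration-name web-app-lc-v2
```

Instances launched after the update use the new AMI; existing instances are replaced only as they are terminated and relaunched.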

You are working as a Cloud Engineer for a top aerospace engineering firm. One of your tasks is to set up a document storage system using S3 for all of the engineering files. In Amazon S3, which of the following statements are true? (Choose 2)


Options are :

  • The total volume of data and number of objects you can store are unlimited. (Correct)
  • The largest object that can be uploaded in a single PUT is 5 TB.
  • S3 is an object storage service that provides file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage.
  • You can only store ZIP or TAR files in S3.
  • The largest object that can be uploaded in a single PUT is 5 GB. (Correct)

Answer : The total volume of data and number of objects you can store are unlimited. The largest object that can be uploaded in a single PUT is 5 GB.

You are setting up the cloud architecture for an international money transfer service to be deployed in AWS, which will have thousands of users around the globe. The service should be available 24/7 to avoid any business disruption and should be resilient enough to handle the outage of an entire AWS region. To meet this requirement, you have deployed your AWS resources to multiple AWS Regions. You need to configure Route 53 so that all of your resources are available as much of the time as possible. When a resource becomes unavailable, Route 53 should detect that it's unhealthy and stop including it when responding to queries.

Which of the following is the most fault tolerant routing configuration that you should use in this scenario? 


Options are :

  • Configure an Active-Active Failover with Weighted routing policy. (Correct)
  • Configure an Active-Passive Failover with Weighted Records.
  • Configure an Active-Active Failover with One Primary and One Secondary Resource.
  • Configure an Active-Passive Failover with Multiple Primary and Secondary Resources.

Answer : Configure an Active-Active Failover with Weighted routing policy.
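One weighted record in such an active-active setup might look as follows (domain, IP, weight, and health check ID are hypothetical placeholders), submitted via `aws route53 change-resource-record-sets`; a sibling record with its own `SetIdentifier`, weight, and health check would exist for each other region:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "transfer.example.com",
        "Type": "A",
        "SetIdentifier": "ap-southeast-1-endpoint",
        "Weight": 50,
        "TTL": 60,
        "HealthCheckId": "<health-check-id>",
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }
  ]
}
```

Because every record carries a health check, Route 53 drops an unhealthy region from responses while the remaining weighted records keep serving traffic.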

You are working as a Solutions Architect for a global game development company. They have a web application currently running on twenty EC2 instances as part of an Auto Scaling group. All twenty instances have been running at 100% CPU utilization for the past 40 minutes; however, the Auto Scaling group has not added any additional EC2 instances to the group.

What could be the root cause of this issue? (Choose 2)


Options are :

  • You already have 20 on-demand instances running in your entire VPC. (Correct)
  • The maximum size of your Auto Scaling group is set to twenty. (Correct)
  • The scale down policy of your Auto Scaling group is too high.
  • The scale up policy of your Auto Scaling group, which is based on the average CPU Utilization metric, is not yet reached.
  • You are using burstable instances which have the ability to sustain high CPU performance of more than 40 minutes, which in effect, suspends your scale-up policy.

Answer : You already have 20 on-demand instances running in your entire VPC. The maximum size of your Auto Scaling group is set to twenty.

A new online banking platform has been re-designed to have a microservices architecture in which complex applications are decomposed into smaller, independent services. The new platform is using Docker considering that application containers are optimal for running small, decoupled services.

Which service can you use to migrate this new platform to AWS?


Options are :

  • EKS
  • EFS
  • ECS (Correct)
  • EBS

Answer : ECS

You are working for a large telecommunications company where you need to run analytics against all combined log files from your Application Load Balancer as part of the regulatory requirements.

Which AWS services can be used together to collect logs and then easily perform log analysis?


Options are :

  • Amazon DynamoDB for storing and EC2 for analyzing the logs.
  • Amazon EC2 with EBS volumes for storing and analyzing the log files.
  • Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a custom-built application.
  • Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files. (Correct)

Answer : Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.

Your company is in a hurry to deploy their new web application, written in NodeJS, to AWS. As the Solutions Architect of the company, you were assigned to do the deployment without worrying about the underlying infrastructure that runs the application. Which service will you use to easily deploy and manage the new web application in AWS?


Options are :

  • AWS Elastic Beanstalk (Correct)
  • AWS CloudFront
  • AWS CloudFormation
  • AWS CodeCommit

Answer : AWS Elastic Beanstalk

Your web application relies entirely on slower disk-based databases, causing it to perform poorly. To improve its performance, you integrated an in-memory data store into your web application using ElastiCache. How does Amazon ElastiCache improve database performance?


Options are :

  • It securely delivers data to customers globally with low latency and high transfer speeds.
  • It provides an in-memory cache that delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second.
  • By caching database query results. (Correct)
  • It reduces the load on your database by routing read queries from your applications to the Read Replica.

Answer : By caching database query results.

In your VPC, you have a Classic Load Balancer distributing traffic to 2 running EC2 instances in the ap-southeast-1a AZ and 8 EC2 instances in the ap-southeast-1b AZ. However, you noticed that half of your incoming traffic goes to ap-southeast-1a, over-utilizing its 2 instances while underutilizing the 8 instances in the other AZ.


What could be the most likely cause of this problem?


Options are :

  • The Classic Load Balancer listener is not set to port 80.
  • The security group of the EC2 instances does not allow HTTP traffic.
  • Cross-Zone Load Balancing is disabled. (Correct)
  • The Classic Load Balancer listener is not set to port 22.

Answer : Cross-Zone Load Balancing is disabled.
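A sketch of the fix with the AWS CLI (the load balancer name is a placeholder): enabling cross-zone load balancing makes the Classic Load Balancer distribute requests evenly across all registered instances, regardless of which AZ they are in.

```shell
# Enable cross-zone load balancing on the Classic Load Balancer
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-classic-lb \
    --load-balancer-attributes "CrossZoneLoadBalancing={Enabled=true}"
```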

You are trying to convince a team to use Amazon RDS Read Replicas for your multi-tier web application. What are two benefits of using read replicas? (Choose 2)


Options are :

  • It provides elasticity to your Amazon RDS database. (Correct)
  • Allows both read and write operations on the read replica to complement the primary database.
  • Improves performance of the primary database by taking workload from it. (Correct)
  • Automatic failover in the case of Availability Zone service failures.
  • It enhances the read performance of your primary database.

Answer : It provides elasticity to your Amazon RDS database. Improves performance of the primary database by taking workload from it.