Certification : AWS Certified Solutions Architect Associate Practice Exams Set 2

You have a web application hosted in EC2 that consumes messages from an SQS queue and is integrated with SNS to send out an email to you once the process is complete. You received 5 orders but after a few hours, you saw more than 20 email notifications in your inbox.

Which of the following could be the possible culprit for this issue?


Options are :

  • The web application is set for long polling so the messages are being sent twice.
  • The web application is not deleting the messages in the SQS queue after it has processed them. (Correct)
  • The web application is set to short polling so some messages are not being picked up.
  • The web application does not have permission to consume messages in the SQS queue.

Answer : The web application is not deleting the messages in the SQS queue after it has processed them.
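
A message that is consumed but never deleted becomes visible again once its visibility timeout expires, so it gets processed, and emailed, over and over. As a rough boto3 sketch of the consume-then-delete pattern (the queue URL and the process_order helper are hypothetical):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical

    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process_order(msg["Body"])  # hypothetical processing step
        # Without this call, the message reappears after the visibility timeout
        # and is processed (and emailed) again.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])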

A company is using Redshift for its online analytical processing (OLAP) application, which processes complex queries against large datasets. There is a requirement to define the number of query queues that are available and how queries are routed to those queues for processing.

Which of the following will you use to meet this requirement?


Options are :

  • This is not possible with Redshift because it is not intended for OLAP applications but rather for OLTP. Use an RDS database instead.
  • Create a Lambda function that can accept the number of query queues and use this value to control Redshift.
  • Use the workload management (WLM) in the parameter group configuration. (Correct)
  • This is not possible with Redshift because it is not intended for OLAP applications but rather for OLTP. Use a NoSQL DynamoDB database instead.

Answer : Use the workload management (WLM) in the parameter group configuration.
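
WLM is defined through the wlm_json_configuration parameter of the cluster's parameter group. A minimal boto3 sketch, assuming a hypothetical parameter group named analytics-params (note that some WLM changes apply dynamically while others require a cluster reboot):

    import boto3, json

    redshift = boto3.client("redshift")
    wlm = [
        {"query_group": ["reports"], "query_concurrency": 5},   # queue for report queries
        {"user_group": ["analysts"], "query_concurrency": 10},  # queue for analyst users
        {"query_concurrency": 5},                               # default queue
    ]
    redshift.modify_cluster_parameter_group(
        ParameterGroupName="analytics-params",  # hypothetical
        Parameters=[{
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm),
        }],
    )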

A Solutions Architect is working for a company which has multiple VPCs in various AWS regions. The Architect is assigned to set up a logging system which will track all of the changes made to their AWS resources in all regions, including the configurations made in IAM, CloudFront, AWS WAF, and Route 53. In order to pass the compliance requirements, the solution must ensure the security, integrity, and durability of the log data. It should also provide an event history of all API calls made in AWS Management Console and AWS CLI.

Which of the following solutions is the best fit for this scenario?


Options are :

  • Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies. (Correct)
  • Set up a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
  • Set up a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also pass the --is-multi-region-trail parameter then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
  • Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass the --is-multi-region-trail parameter then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.

Answer : Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
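
The CLI flags in the correct option map one-to-one onto the CreateTrail API. A hedged boto3 equivalent (trail, bucket, and key names are placeholders):

    import boto3

    ct = boto3.client("cloudtrail")
    ct.create_trail(
        Name="org-audit-trail",            # placeholder
        S3BucketName="org-audit-logs",     # placeholder; bucket policy must allow CloudTrail writes
        IsMultiRegionTrail=True,           # --is-multi-region-trail
        IncludeGlobalServiceEvents=True,   # --include-global-service-events (IAM, CloudFront, Route 53)
        EnableLogFileValidation=True,      # integrity digests for the compliance requirement
        KmsKeyId="alias/audit-logs-key",   # placeholder KMS key for log encryption
    )
    ct.start_logging(Name="org-audit-trail")  # trails created via the API/CLI start out not logging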

You are working as a Solutions Architect for a major telecommunications company where you are assigned to improve the security of your database tier by tightly managing the data flow of your Amazon Redshift cluster. One of the requirements is to use VPC flow logs to monitor all the COPY and UNLOAD traffic of your Redshift cluster that moves in and out of your VPC. 

Which of the following is the most suitable solution to implement in this scenario?


Options are :

  • Enable Audit Logging in your Amazon Redshift cluster.
  • Enable Enhanced VPC routing on your Amazon Redshift cluster. (Correct)
  • Use the Amazon Redshift Spectrum feature.
  • Create a new flow log that tracks the traffic of your Amazon Redshift cluster.

Answer : Enable Enhanced VPC routing on your Amazon Redshift cluster.
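
Enhanced VPC routing is what forces the COPY and UNLOAD traffic through the VPC, making it visible to VPC Flow Logs. It can be toggled on an existing cluster, for example (the cluster identifier is a placeholder):

    import boto3

    redshift = boto3.client("redshift")
    redshift.modify_cluster(
        ClusterIdentifier="prod-redshift",  # placeholder
        EnhancedVpcRouting=True,  # COPY/UNLOAD traffic now flows through the VPC
    )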

A content management system (CMS) is hosted on a fleet of auto-scaled, On-Demand EC2 instances that use Amazon Aurora as the database. Currently, the system stores the documents that users upload on one of the attached EBS volumes. Your manager noticed that the system performance is quite slow, and he has instructed you to improve the architecture of the system.

In this scenario, what will you do to implement a scalable, high-throughput POSIX-compliant file system?


Options are :

  • Create an S3 bucket and use this as the storage for the CMS
  • Use EFS (Correct)
  • Upgrade your existing EBS volumes to Provisioned IOPS SSD Volumes
  • Use ElastiCache

Answer : Use EFS

You are an AWS Solutions Architect designing an online analytics application that uses a Redshift cluster for its data warehouse. Which service will allow you to monitor all API calls to your Redshift instance and can also provide secured log data for auditing and compliance purposes?


Options are :

  • CloudTrail for security logs (Correct)
  • CloudWatch
  • AWS X-Ray
  • Redshift Spectrum

Answer : CloudTrail for security logs

A popular social network is hosted in AWS and is using a DynamoDB table as its database. There is a requirement to implement a 'follow' feature where users can subscribe to certain updates made by a particular user and be notified via email. Which of the following is the most suitable solution that you should implement to meet the requirement?


Options are :

  • Using the Kinesis Client Library (KCL), write an application that leverages the DynamoDB Streams Kinesis Adapter to fetch data from the DynamoDB Streams endpoint. When there are updates made by a particular user, notify the subscribers via email using SNS.
  • Create a Lambda function that uses the DynamoDB Streams Kinesis Adapter to fetch data from the DynamoDB Streams endpoint. Set up an SNS Topic that will notify the subscribers via email when there is an update made by a particular user.
  • Set up a DAX cluster to access the source DynamoDB table. Create a new DynamoDB trigger and a Lambda function. For every update made in the user data, the trigger will send data to the Lambda function which will then notify the subscribers via email using SNS.
  • Enable DynamoDB Streams and create an AWS Lambda trigger, as well as the IAM role that contains all of the permissions the Lambda function will need at runtime. The data from the stream record will be processed by the Lambda function, which will then publish a message to an SNS Topic that will notify the subscribers via email. (Correct)

Answer : Enable DynamoDB Streams and create an AWS Lambda trigger, as well as the IAM role that contains all of the permissions the Lambda function will need at runtime. The data from the stream record will be processed by the Lambda function, which will then publish a message to an SNS Topic that will notify the subscribers via email.
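
A minimal sketch of the Lambda handler side of the correct option, assuming a pre-created SNS topic whose ARN is a placeholder and a stream view type that includes new images:

    import json
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:user-updates"  # placeholder

    def handler(event, context):
        # Each record is one change captured by the DynamoDB stream.
        for record in event["Records"]:
            if record["eventName"] in ("INSERT", "MODIFY"):
                new_image = record["dynamodb"].get("NewImage", {})
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="User update",
                    Message=json.dumps(new_image),  # email subscribers receive this body
                )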

A media company has an Amazon ECS cluster, which uses the Fargate launch type, to host its news website. To comply with strict security requirements, the database credentials must be supplied using environment variables. As the Solutions Architect, you have to ensure that the credentials are secure and that they cannot be viewed in plaintext on the cluster itself.

Which of the following is the most suitable solution in this scenario that you can implement with minimal effort?


Options are :

  • In the ECS task definition file of the ECS Cluster, store the database credentials using Docker Secrets to centrally manage this sensitive data and securely transmit it only to those containers that need access to it. Secrets are encrypted during transit and at rest. A given secret is only accessible to those services which have been granted explicit access to it via IAM Role, and only while those service tasks are running.
  • Store the database credentials in the ECS task definition file of the ECS Cluster and encrypt it with KMS. Store the task definition JSON file in a private S3 bucket and ensure that HTTPS is enabled on the bucket to encrypt the data in-flight. Create an IAM role for the ECS task definition that allows access to the specific S3 bucket and then pass the --cli-input-json parameter when calling the ECS register-task-definition. Reference the task definition JSON file in the S3 bucket which contains the database credentials.
  • Use the AWS Secrets Manager to store the database credentials and then encrypt them using AWS KMS. Create a resource-based policy for your Amazon ECS task execution role and reference it with your task definition which allows access to both KMS and AWS Secrets Manager. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Secrets Manager secret which contains the sensitive data, to present to the container.
  • Use the AWS Systems Manager Parameter Store to keep the database credentials and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container. (Correct)

Answer : Use the AWS Systems Manager Parameter Store to keep the database credentials and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container.
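
In the container definition this surfaces as a secrets list whose valueFrom points at the parameter's ARN; the task execution role needs ssm:GetParameters (plus kms:Decrypt for a SecureString). A hedged registration sketch with placeholder names and ARNs:

    import boto3

    ecs = boto3.client("ecs")
    ecs.register_task_definition(
        family="news-site",  # placeholder
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
        containerDefinitions=[{
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/news-site:latest",  # placeholder
            "secrets": [{
                "name": "DB_PASSWORD",  # environment variable name inside the container
                "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db/password",  # placeholder
            }],
        }],
    )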

A cryptocurrency trading platform is using an API built in AWS Lambda and API Gateway. Due to the recent news and rumors about the upcoming price surge of Bitcoin, Ethereum and other cryptocurrencies, it is expected that the trading platform would have a significant increase in site visitors and new users in the coming days ahead.

In this scenario, how can you protect the backend systems of the platform from traffic spikes?


Options are :

  • Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling.
  • Enable throttling limits and result caching in API Gateway. (Correct)
  • Use CloudFront in front of the API Gateway to act as a cache.
  • Move the Lambda function into a VPC.

Answer : Enable throttling limits and result caching in API Gateway.
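
Both settings live on the stage. A sketch using UpdateStage patch operations (the REST API ID is a placeholder, and the patch paths are quoted from memory of the API Gateway docs, so treat them as assumptions):

    import boto3

    apigw = boto3.client("apigateway")
    apigw.update_stage(
        restApiId="a1b2c3d4e5",  # placeholder
        stageName="prod",
        patchOperations=[
            # Throttle every method: steady-state rate and burst.
            {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "1000"},
            {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "2000"},
            # Enable the stage cache cluster for result caching.
            {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
            {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        ],
    )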

You are working as a Solutions Architect in a top software development company in Silicon Valley. The company has multiple applications hosted in its VPC. While monitoring the system, you noticed multiple port scans coming in from a specific IP address block, trying to connect to several AWS resources inside your VPC. The internal security team has requested that all offending IP addresses be denied for the next 24 hours for security purposes.

Which of the following is the best method to quickly and temporarily deny access from the specified IP addresses?


Options are :

  • Create a policy in IAM to deny access from the IP Address block.
  • Modify the Network Access Control List associated with all public subnets in the VPC to deny access from the IP Address block. (Correct)
  • Add a rule in the Security Group of the EC2 instances to deny access from the IP Address block.
  • Configure the firewall in the operating system of the EC2 instances to deny access from the IP address block.

Answer : Modify the Network Access Control List associated with all public subnets in the VPC to deny access from the IP Address block.
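
Network ACLs, unlike security groups, support explicit deny rules, and rules are evaluated in ascending order of rule number. A sketch adding an inbound deny on one public subnet's NACL (repeat per NACL; the ID and CIDR are placeholders):

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",  # placeholder
        RuleNumber=50,               # low number so it is evaluated before the allow rules
        Protocol="-1",               # all protocols
        RuleAction="deny",
        Egress=False,                # inbound rule
        CidrBlock="203.0.113.0/24",  # placeholder offending IP block
    )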

Your cloud architecture is composed of Linux and Windows EC2 instances which process high volumes of financial data 24 hours a day, 7 days a week. To ensure high availability of your systems, you are required to monitor the memory and disk utilization of all of your instances.   

Which of the following is the most suitable monitoring solution to implement?


Options are :

  • Use the default CloudWatch configuration on your EC2 instances, where the memory and disk utilization metrics are already available. Install the AWS Systems Manager (SSM) Agent on all of your EC2 instances.
  • Install the CloudWatch agent on all of your EC2 instances to gather the memory and disk utilization data. View the custom metrics in the Amazon CloudWatch console. (Correct)
  • Enable the Enhanced Monitoring option in EC2 and install the CloudWatch agent on all of your EC2 instances to be able to view the memory and disk utilization in the CloudWatch dashboard.
  • Use Amazon Inspector and install the Inspector agent to all of your EC2 instances.

Answer : Install the CloudWatch agent on all of your EC2 instances to gather the memory and disk utilization data. View the custom metrics in the Amazon CloudWatch console.
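
The agent reads a JSON config declaring which host-level metrics to collect; memory and disk are not among the default hypervisor-level EC2 metrics, which is why the agent is needed. A minimal config sketch (the file path is the agent's usual Linux default, stated here as an assumption):

    import json

    config = {
        "metrics": {
            "metrics_collected": {
                "mem": {"measurement": ["mem_used_percent"]},
                "disk": {"measurement": ["used_percent"], "resources": ["*"]},
            }
        }
    }
    # Assumed default config location for the agent on Linux:
    with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
        json.dump(config, f, indent=2)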

You are working for a software company that has moved a legacy application from an on-premises data center to the cloud. The legacy application requires a static IP address hard-coded into the backend, which blocks you from using an Application Load Balancer.

Which steps would you take to apply high availability and fault tolerance to this application without ELB? (Choose 2)


Options are :

  • Write a script that checks the health of the EC2 instance. If the instance stops responding, the script will switch the Elastic IP address to a standby EC2 instance. (Correct)
  • Assign an Elastic IP address to the instance. (Correct)
  • Postpone the deployment until you have fully converted the application to work with the ELB and Auto Scaling.
  • Launch the instance using Auto Scaling which will deploy the instance again if it becomes unhealthy.
  • Use CloudFront with a custom origin pointed to your on-premises network where the web application is deployed.

Answer : Write a script that checks the health of the EC2 instance. If the instance stops responding, the script will switch the Elastic IP address to a standby EC2 instance. Assign an Elastic IP address to the instance.
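
A bare-bones version of the failover script described in the correct answers; the allocation ID, health URL, and standby instance ID are all placeholders:

    import boto3
    import requests  # assumption: a simple HTTP probe is enough for the health check

    ALLOCATION_ID = "eipalloc-0123456789abcdef0"  # placeholder Elastic IP allocation
    PRIMARY_URL = "http://203.0.113.10/health"    # placeholder health endpoint
    STANDBY_INSTANCE = "i-0fedcba9876543210"      # placeholder standby EC2 instance

    def primary_healthy() -> bool:
        try:
            return requests.get(PRIMARY_URL, timeout=5).status_code == 200
        except requests.RequestException:
            return False

    if not primary_healthy():
        # Remap the Elastic IP to the standby; clients keep the same static IP.
        boto3.client("ec2").associate_address(
            AllocationId=ALLOCATION_ID,
            InstanceId=STANDBY_INSTANCE,
            AllowReassociation=True,
        )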

A web application is using CloudFront to distribute its images, videos, and other static content stored in an S3 bucket to users around the world. The company has recently introduced member-only access to some of its high-quality media files. There is a requirement to provide access to multiple private media files only to paying subscribers, without having to change the current URLs.

Which of the following is the most suitable solution that you should implement to satisfy this requirement?


Options are :

  • Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy, which will automatically match the user request. This will allow access to the private content if the requester is a paying member and deny it otherwise.
  • Create a Signed URL with a custom policy which only allows the members to see the private files.
  • Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members.
  • Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them. (Correct)

Answer : Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them.
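
For reference, signed cookies with a custom policy come down to three Set-Cookie values: the encoded policy, its RSA-SHA1 signature, and the key pair ID. A hedged sketch using the cryptography package (resource URL, key, and TTL are placeholders):

    import base64, json, time
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def _cf_b64(data: bytes) -> str:
        # CloudFront's URL-safe base64 variant.
        return base64.b64encode(data).decode().replace("+", "-").replace("=", "_").replace("/", "~")

    def signed_cookies(resource: str, key_pair_id: str, pem: bytes, ttl: int = 3600) -> dict:
        policy = json.dumps({"Statement": [{
            "Resource": resource,  # e.g. "https://media.example.com/private/*" (placeholder)
            "Condition": {"DateLessThan": {"AWS:EpochTime": int(time.time()) + ttl}},
        }]}, separators=(",", ":"))
        key = serialization.load_pem_private_key(pem, password=None)
        sig = key.sign(policy.encode(), padding.PKCS1v15(), hashes.SHA1())
        return {  # send these as Set-Cookie headers to paying members only
            "CloudFront-Policy": _cf_b64(policy.encode()),
            "CloudFront-Signature": _cf_b64(sig),
            "CloudFront-Key-Pair-Id": key_pair_id,
        }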

You have triggered the creation of a snapshot of your EBS volume, and it is currently in progress. At this point, what can and cannot be done with the EBS volume?


Options are :

  • The volume can be used as normal while the snapshot is in progress. (Correct)
  • The volume can be used in write-only mode while the snapshot is in progress.
  • The volume can be used in read-only mode while the snapshot is in progress.
  • The volume cannot be used until the snapshot completes.

Answer : The volume can be used as normal while the snapshot is in progress.
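
Snapshots are taken asynchronously: the snapshot captures the volume's state at the instant it was initiated, and the volume stays attached and fully usable while it completes. For illustration (the volume ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")
    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",  # placeholder; the volume stays readable and writable
        Description="pre-deployment backup",
    )
    # Reads and writes continue normally while the snapshot completes in the background.
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])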

A startup based in Australia is deploying a new two-tier web application in AWS. The company wants to store its most frequently used data in an in-memory data store to improve the retrieval and response time of its web application.

Which of the following is the most suitable service to be used for this requirement?


Options are :

  • DynamoDB
  • Amazon RDS
  • Amazon ElastiCache (Correct)
  • Amazon Redshift

Answer : Amazon ElastiCache

You are designing a multi-tier web application architecture that consists of a fleet of EC2 instances and an Oracle relational database server. It is required that the database be highly available and that you have full control over its underlying operating system.

Which AWS service will you use for your database tier?


Options are :

  • Amazon RDS
  • Amazon RDS with Multi-AZ deployments
  • Amazon EC2 instances with data replication in one Availability Zone
  • Amazon EC2 instances with data replication between two different Availability Zones (Correct)

Answer : Amazon EC2 instances with data replication between two different Availability Zones

You have a new e-commerce web application written in the Angular framework, which is deployed to a fleet of EC2 instances behind an Application Load Balancer. You configured the load balancer to perform health checks on these EC2 instances.

What will happen if one of these EC2 instances fails the health checks?


Options are :

  • The EC2 instance gets terminated automatically by the Application Load Balancer.
  • The EC2 instance gets quarantined by the Application Load Balancer for root cause analysis.
  • The EC2 instance is replaced automatically by the Application Load Balancer.
  • The Application Load Balancer stops sending traffic to the instance that failed its health check. (Correct)

Answer : The Application Load Balancer stops sending traffic to the instance that failed its health check.

A suite of web applications is composed of several different Auto Scaling groups of EC2 instances, which are configured with default settings and deployed across three Availability Zones. An Application Load Balancer forwards each request to the respective target group based on the URL path. The scale-in policy has been triggered due to low incoming traffic to the application.

Which EC2 instance will be the first one to be terminated by your Auto Scaling group?


Options are :

  • The EC2 instance which has the least number of user sessions
  • The EC2 instance which has been running for the longest time
  • The EC2 instance which belongs to an Auto Scaling group with the oldest launch configuration (Correct)
  • The instance will be randomly selected by the Auto Scaling group

Answer : The EC2 instance which belongs to an Auto Scaling group with the oldest launch configuration
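
With the default termination policy, after balancing across Availability Zones, Auto Scaling picks the instance with the oldest launch configuration first. The behavior can also be pinned explicitly on the group (the group name is a placeholder):

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-suite-asg",  # placeholder
        # "Default" already prefers the oldest launch configuration after AZ balancing.
        TerminationPolicies=["OldestLaunchConfiguration"],
    )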

You are building a new data analytics application in AWS which will be deployed in an Auto Scaling group of On-Demand EC2 instances and a MongoDB database. It is expected that the database will have high-throughput workloads performing small, random I/O operations. As the Solutions Architect, you are tasked with properly setting up and launching the required resources in AWS.

Which of the following is the most suitable EBS type to use for your database?


Options are :

  • General Purpose SSD
  • Provisioned IOPS SSD (Correct)
  • Throughput Optimized HDD
  • Cold HDD

Answer : Provisioned IOPS SSD
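
Provisioned IOPS SSD volumes let you dial in the IOPS rate at creation time, which suits small, random, high-throughput database I/O. A sketch (size, IOPS, and AZ are placeholders; io1 allows up to 50 IOPS per GiB provisioned):

    import boto3

    ec2 = boto3.client("ec2")
    ec2.create_volume(
        AvailabilityZone="us-east-1a",  # placeholder
        VolumeType="io1",               # Provisioned IOPS SSD
        Size=200,                       # GiB (placeholder)
        Iops=10000,                     # within the 50 IOPS-per-GiB ceiling for io1
    )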

You have launched a travel photo sharing website that uses Amazon S3 to serve high-quality photos to visitors. After a few days, you found out that other travel websites are linking to and using your photos. This has resulted in financial losses for your business.

What is an effective method to mitigate this issue?


Options are :

  • Configure your S3 bucket to remove public read access and use pre-signed URLs with expiry dates. (Correct)
  • Use CloudFront distributions for your photos.
  • Block the IP addresses of the offending websites using NACL.
  • Store photos on an Amazon EBS volume of the web server.

Answer : Configure your S3 bucket to remove public read access and use pre-signed URLs with expiry dates.
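
A pre-signed URL embeds a signature and an expiry, so a hot-linked copy of the link stops working once it expires. A boto3 sketch (bucket and key are placeholders):

    import boto3

    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "travel-photos", "Key": "gallery/sunset.jpg"},  # placeholders
        ExpiresIn=300,  # the link stops working after 5 minutes
    )
    # Render `url` in your own pages; the objects themselves stay private.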

A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of database server failure in the future.

Which of the following is the most suitable solution to meet the requirement?


Options are :

  • Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled.
  • Launch an Oracle Real Application Clusters (RAC) in RDS.
  • Create an Oracle database in RDS with Multi-AZ deployments. (Correct)
  • Migrate your Oracle data to Amazon Aurora by converting the database schema using AWS Schema Conversion Tool and AWS Database Migration Service.

Answer : Create an Oracle database in RDS with Multi-AZ deployments.
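
Multi-AZ maintains a synchronously replicated standby in another Availability Zone and fails over automatically, and it is just a flag at creation time. A sketch (identifier, sizing, and credentials are placeholders):

    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="forex-oracle",  # placeholder
        Engine="oracle-ee",
        DBInstanceClass="db.m5.large",        # placeholder sizing
        AllocatedStorage=100,                 # GiB (placeholder)
        MasterUsername="admin",               # placeholder
        MasterUserPassword="REPLACE_ME",      # placeholder
        MultiAZ=True,  # synchronous standby in another AZ with automatic failover
    )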

A multi-tiered application hosted in your on-premises data center is scheduled to be migrated to AWS. The application has a message broker service which uses industry-standard messaging APIs and protocols that must be migrated as well, without rewriting the messaging code in your application.

Which of the following is the most suitable service that you should use to move your messaging service to AWS?


Options are :

  • Amazon MQ (Correct)
  • Amazon SQS
  • Amazon SNS
  • Amazon SWF

Answer : Amazon MQ

You are using a combination of API Gateway and Lambda for the web services of your online web portal that is being accessed by hundreds of thousands of clients each day. Your company will be announcing a new revolutionary product and it is expected that your web portal will receive a massive number of visitors all around the globe. How can you protect your backend systems and applications from traffic spikes?


Options are :

  • Use throttling limits in API Gateway (Correct)
  • API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything.
  • Manually upgrade the EC2 instances being used by API Gateway
  • Deploy Multi-AZ in API Gateway with Read Replica

Answer : Use throttling limits in API Gateway

An online shopping platform is hosted on an Auto Scaling group of Spot EC2 instances and uses Amazon Aurora PostgreSQL as its database. There is a requirement to optimize your database workloads in your cluster where you have to direct the write operations of the production traffic to your high-capacity instances and point the reporting queries sent by your internal staff to the low-capacity instances.

Which is the most suitable configuration for your application as well as your Aurora database cluster to achieve this requirement?


Options are :

  • Configure your application to use the reader endpoint for both production traffic and reporting queries, which will enable your Aurora database to automatically perform load-balancing among all the Aurora Replicas.
  • In your application, use the cluster endpoint of your Aurora database to handle the incoming production traffic and use the instance endpoint to handle reporting queries.
  • Create a new custom endpoint in Aurora which will load-balance database connections based on the specified criteria. Configure your application to use the custom endpoint for both production traffic and reporting queries. (Correct)
  • In your application, use the writer endpoint of your Aurora database to handle the production traffic. Create a new custom endpoint to handle reporting queries.

Answer : Create a new custom endpoint in Aurora which will load-balance database connections based on the specified criteria. Configure your application to use the custom endpoint for both production traffic and reporting queries.
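
A custom endpoint groups a chosen subset of cluster instances behind one DNS name, and Aurora load-balances connections across its members. A sketch that creates one for the low-capacity reporting replicas (all identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")
    rds.create_db_cluster_endpoint(
        DBClusterIdentifier="shop-aurora",        # placeholder
        DBClusterEndpointIdentifier="reporting",  # placeholder
        EndpointType="READER",
        StaticMembers=["shop-aurora-small-1", "shop-aurora-small-2"],  # placeholder replicas
    )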

You are a Solutions Architect in your company, working with three DevOps Engineers under you. One of the engineers accidentally deleted a file hosted in Amazon S3, which caused a disruption of service.

What can you do to prevent this from happening again?


Options are :

  • Use S3 Standard-Infrequent Access storage to store the data.
  • Enable S3 Versioning and Multi-Factor Authentication Delete on the bucket. (Correct)
  • Set up a signed URL for all users.
  • Create an IAM bucket policy that disables delete operation.

Answer : Enable S3 Versioning and Multi-Factor Authentication Delete on the bucket.
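
Versioning keeps overwritten and deleted objects as recoverable versions, and MFA Delete additionally requires the root account's MFA token to permanently delete a version or change the versioning state. A sketch (bucket and MFA serial/code are placeholders; MFA Delete can only be enabled with root credentials):

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_versioning(
        Bucket="release-artifacts",  # placeholder
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
        # Root MFA device serial, a space, then the current code (placeholders):
        MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    )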

There have been numerous outages in the Availability Zone of your RDS database instance, to the point that you have lost access to the database. What could you do to prevent losing access to your database in case this event happens again?


Options are :

  • Make a snapshot of the database
  • Enable Multi-AZ failover (Correct)
  • Increase the database instance size
  • Create a read replica

Answer : Enable Multi-AZ failover
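
On an existing instance, Multi-AZ is a single modify call; RDS then provisions a synchronously replicated standby in a different Availability Zone and fails over to it during an AZ outage (the identifier is a placeholder):

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-db",  # placeholder
        MultiAZ=True,
        ApplyImmediately=True,  # otherwise the change waits for the next maintenance window
    )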

You are working for a large financial company as an IT consultant. Your role is to help their development team to build a highly available web application using stateless web servers. In this scenario, which AWS services are suitable for storing session state data? (Choose 2)


Options are :

  • Redshift Spectrum
  • DynamoDB (Correct)
  • RDS
  • ElastiCache (Correct)
  • Glacier

Answer : DynamoDB, ElastiCache

A popular social media website uses a CloudFront web distribution to serve its static content to millions of users around the globe. The company has recently received a number of complaints that logging in to the website takes a long time. There are also occasions when users are getting HTTP 504 errors. You are instructed by your manager to significantly reduce users' login time to further optimize the system.

Which of the following options should you use together to set up a cost-effective solution that can improve your application's performance? (Choose 2)


Options are :

  • Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users. (Correct)
  • Use multiple, geographically dispersed VPCs across various AWS regions, then create a transit VPC to connect all of your resources. In order to handle the requests faster, set up Lambda functions in each region using the AWS Serverless Application Model (SAM) service.
  • Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase the cache hit ratio of your CloudFront distribution.
  • Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.
  • Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses. (Correct)

Answer : Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users. Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.

You have a new joiner in your organization. You have provisioned an IAM user for the new employee in AWS; however, the user is not able to perform any actions. What could be the reason for this?


Options are :

  • IAM users are created by default with partial permissions
  • IAM users are created by default with full permissions
  • IAM users are created by default with no permissions (Correct)
  • You need to wait for 24 hours for the new IAM user to have access.

Answer : IAM users are created by default with no permissions
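
Since new IAM users start with no permissions, access is granted by attaching a policy. For example, attaching an AWS managed policy (the user name is a placeholder and the policy is just an example):

    import boto3

    iam = boto3.client("iam")
    iam.attach_user_policy(
        UserName="new.joiner",  # placeholder
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # example managed policy
    )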

Your company announced that there would be a surprise IT audit of all the AWS resources being used in the production environment. During the audit activities, it was noted that you are using a combination of Standard and Scheduled Reserved EC2 instances in your applications. The auditors argued that you should have used Spot EC2 instances instead, as they are cheaper than Reserved Instances.

Which of the following are the characteristics and benefits of using these two types of Reserved EC2 instances, which you can use as justification? (Choose 2)


Options are :

  • Standard Reserved Instances can later be exchanged for other Convertible Reserved Instances
  • It can enable you to reserve capacity for your Amazon EC2 instances in multiple Availability Zones and multiple AWS Regions for any duration.
  • Reserved Instances don't get interrupted, unlike Spot Instances, in the event that there are not enough unused EC2 instances to meet the demand. (Correct)
  • It runs in a VPC on hardware that's dedicated to a single customer.
  • You can have capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term through Scheduled Reserved Instances. (Correct)

Answer : Reserved Instances don't get interrupted, unlike Spot Instances, in the event that there are not enough unused EC2 instances to meet the demand. You can have capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term through Scheduled Reserved Instances.