AWS DevOps Engineer Professional Certified Practice Exam Set 8

You are a DevOps engineer for your company. There is a requirement to host a custom application, which has custom dependencies, for a development team. This needs to be hosted on AWS.


Options are :

  • Package the application and dependencies with Docker, and deploy the Docker container with CloudFormation.
  • Package the application and dependencies within Elastic Beanstalk, and deploy with Elastic Beanstalk.
  • Package the application and dependencies in an S3 file, and deploy the Docker container with Elastic Beanstalk.
  • Package the application and dependencies with Docker, and deploy the Docker container with Elastic Beanstalk. (Correct)

Answer : Package the application and dependencies with Docker, and deploy the Docker container with Elastic Beanstalk.

AWS Solutions Architect Associate 2019 with Practice Test Set 6

You are creating a CloudFormation template in which user data is going to be passed to the underlying EC2 instance. Which of the below functions is normally used to pass data to the UserData section in the CloudFormation template?


Options are :

  • "UserData": { "Fn::FindInMap": {
  • "UserData": { "Fn::Ref": {
  • "UserData": { "Fn::GetAtt": {
  • "UserData": { "Fn::Base64": { (Correct)

Answer : "UserData": { "Fn::Base64": {
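For context, a minimal sketch of how Fn::Base64 appears in a template fragment (the resource name, AMI ID, and script are hypothetical). CloudFormation applies the base64 encoding at deploy time; the Python below just builds the fragment and mimics the encoding to show what the service produces:

```python
import base64
import json

# Hypothetical bootstrap script passed as user data.
user_data_script = "#!/bin/bash\nyum install -y httpd\n"

# Template fragment: UserData wrapped in the Fn::Base64 intrinsic function.
instance_fragment = {
    "MyInstance": {                      # hypothetical logical resource name
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": "ami-12345678",   # hypothetical AMI ID
            "UserData": {"Fn::Base64": user_data_script},
        },
    }
}

# CloudFormation performs this encoding server-side when the stack launches.
encoded = base64.b64encode(user_data_script.encode()).decode()
print(json.dumps(instance_fragment["MyInstance"]["Properties"]["UserData"]))
```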

Your application has an Auto Scaling group of three EC2 instances behind an Elastic Load Balancer. Your Auto Scaling group was updated with a new launch configuration that refers to an updated AMI. During the deployment, customers complained that they were receiving several errors even though all instances passed the ELB health checks. How can you prevent this from happening again?


Options are :

  • Create a new launch configuration with the updated AMI and associate it with the Auto Scaling group. Increase the size of the group to six and when instances become healthy revert to three. (Correct)
  • Manually terminate the instances with the older launch configuration.
  • Update the launch configuration instead of updating the Auto Scaling group.
  • Create a new ELB and attach the Auto Scaling group to the ELB.

Answer : Create a new launch configuration with the updated AMI and associate it with the Auto Scaling group. Increase the size of the group to six and when instances become healthy revert to three.

Your application is experiencing very high traffic, so you have enabled Auto Scaling across multiple Availability Zones to meet the needs of your application, but you observe that one of the Availability Zones is not receiving any traffic. What could be wrong here?


Options are :

  • The Availability Zone has not been added to the Elastic Load Balancer (Correct)
  • Instances need to be manually added to the Availability Zone
  • Auto Scaling only works for a single Availability Zone
  • Auto Scaling can be enabled for multi-AZ only in the North Virginia region

Answer : The Availability Zone has not been added to the Elastic Load Balancer

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 5

You are responsible for an application that leverages the Amazon SDK and Amazon EC2 roles for storing and retrieving data from Amazon S3, accessing multiple DynamoDB tables, and exchanging messages with Amazon SQS queues. Your VP of Compliance is concerned that you are not following security best practices for securing all of this access. He has asked you to verify that the application's AWS access keys are not older than six months and to provide control evidence that these keys will be rotated a minimum of once every six months. Which option will provide your VP with the requested information?


Options are :

  • Provide your VP with a link to the IAM AWS documentation to address the VP's key rotation concerns. (Correct)
  • Create a new set of instructions for your configuration management tool that will periodically create and rotate the application's existing access keys and provide a compliance report to your VP.
  • Update your application to log changes to its AWS access key credential file and use a periodic Amazon EMR job to create a compliance report for your VP.
  • Create a script to query the IAM list-access-keys API to get your application's access key creation date and create a batch process to periodically create a compliance report for your VP.

Answer : Provide your VP with a link to the IAM AWS documentation to address the VP's key rotation concerns.

You are using Elastic Beanstalk for your development team. You are responsible for deploying multiple versions of your application. How can you ensure, in an ideal way, that you don't exceed the application version limit in Elastic Beanstalk?


Options are :

  • Use AWS Config to delete the older versions
  • Create a script to delete the older versions.
  • Create a Lambda function to delete the older versions.
  • Use application version lifecycle policies in Elastic Beanstalk (Correct)

Answer : Use application version lifecycle policies in Elastic Beanstalk

You have an Auto Scaling group which is launching a set of t2.small instances. You now need to replace those instances with a larger instance type. How would you go about making this change in an ideal manner?


Options are :

  • Create another Auto Scaling group and attach the new instance type.
  • Create a new launch configuration with the new instance type and update your Auto Scaling group. (Correct)
  • Change the instance type of the underlying EC2 instance directly.
  • Change the instance type in the current launch configuration to the new instance type.

Answer : Create a new launch configuration with the new instance type and update your Auto Scaling group.

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 1

You work for an accounting firm and need to store important financial data for clients. Initial frequent access to the data is required, but after a period of 2 months, the data can be archived and brought back only in the case of an audit. What is the most cost-effective way to do this?


Options are :

  • Store all data in a private S3 bucket
  • Use lifecycle management to move data from S3 to Glacier (Correct)
  • Store all data in Glacier
  • Use lifecycle management to store all data in Glacier

Answer : Use lifecycle management to move data from S3 to Glacier
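A hedged sketch of the lifecycle rule behind this answer: a configuration (in the shape accepted by boto3's `put_bucket_lifecycle_configuration`) that transitions objects to Glacier after 60 days, roughly the 2 months in the question. The bucket and rule names are hypothetical.

```python
import json

def glacier_transition_rule(days=60):
    """Build a lifecycle configuration that archives objects to Glacier."""
    return {
        "Rules": [
            {
                "ID": "archive-financial-data",  # hypothetical rule name
                "Filter": {"Prefix": ""},        # empty prefix: all objects
                "Status": "Enabled",
                "Transitions": [
                    {"Days": days, "StorageClass": "GLACIER"}
                ],
            }
        ]
    }

config = glacier_transition_rule()
print(json.dumps(config, indent=2))
# With credentials configured, this would be applied with:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="client-financials", LifecycleConfiguration=config)
```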

You have an AWS OpsWorks stack running Chef version 11.10. Your company hosts its own proprietary cookbook on Amazon S3, and this is specified as a custom cookbook in the stack. You want to use an open-source cookbook located in an external Git repository. What tasks should you perform to enable the use of both custom cookbooks?


Options are :

  • In the AWS OpsWorks stack settings, enable Berkshelf. Create a new cookbook with a Berksfile that specifies the other two cookbooks. Configure the stack to use this new cookbook. (Correct)
  • Contact the open-source project's maintainers and request that they pull your cookbook into theirs. Update the stack to use their cookbook.
  • In your cookbook, create an S3 symlink object that points to the open-source project's cookbook.
  • In the OpsWorks stack settings, add the open-source project's cookbook details in addition to your cookbook's.

Answer : In the AWS OpsWorks stack settings, enable Berkshelf. Create a new cookbook with a Berksfile that specifies the other two cookbooks. Configure the stack to use this new cookbook.

You are designing an application that contains protected health information. Security and compliance requirements for your application mandate that all protected health information in the application use encryption at rest and in transit. The application uses a three-tier architecture where data flows through the load balancer, is stored on Amazon EBS volumes for processing, and the results are stored in Amazon S3 using the AWS SDK. Which of the following two options satisfy the security requirements? (Select two)


Options are :

  • Use SSL termination on the load balancer and an SSL listener on the Amazon EC2 instances, Amazon EBS encryption on EBS volumes containing PHI, and Amazon S3 with server-side encryption.
  • Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, and Amazon S3 with server-side encryption. (Correct)
  • Use SSL termination on the load balancer, Amazon EBS encryption on Amazon EC2 instances, and Amazon S3 with server-side encryption.
  • Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, OS-level disk encryption on the Amazon EBS volumes, and Amazon S3 with server-side encryption. (Correct)
  • Use SSL termination with a SAN SSL certificate on the load balancer, Amazon EC2 with all Amazon EBS volumes using Amazon EBS encryption, and Amazon S3 with server-side encryption with customer-managed keys.

Answer : Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, and Amazon S3 with server-side encryption. Use TCP load balancing on the load balancer, SSL termination on the Amazon EC2 instances, OS-level disk encryption on the Amazon EBS volumes, and Amazon S3 with server-side encryption.

AWS SAP-C01 Certified Solutions Architect Professional Exam Set 9

You are managing the development of an application that uses DynamoDB to store JSON data. You have already set the read and write capacity of the DynamoDB table. You are unsure of the amount of traffic that will be received by the application during the deployment time. How can you ensure that the DynamoDB table is not throttled and does not become a bottleneck for the application? Choose 2 answers from the options below.


Options are :

  • Monitor the SystemErrors metric using CloudWatch.
  • Monitor the ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits metrics using CloudWatch. (Correct)
  • Create a CloudWatch alarm which would then send a trigger to AWS Lambda to create a new DynamoDB table.
  • Create a CloudWatch alarm which would then send a trigger to AWS Lambda to increase the read and write capacity of the DynamoDB table. (Correct)

Answer : Monitor the ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits metrics using CloudWatch. Create a CloudWatch alarm which would then send a trigger to AWS Lambda to increase the read and write capacity of the DynamoDB table.
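A hedged sketch of the Lambda side of this pattern: given the table's current provisioned throughput, build the `update_table` arguments that raise capacity. The table name, scaling factor, and cap are assumptions for illustration only.

```python
def scale_up_request(table_name, current_read, current_write, factor=2, cap=10000):
    """Compute update_table kwargs that multiply capacity, bounded by a cap."""
    return {
        "TableName": table_name,
        "ProvisionedThroughput": {
            "ReadCapacityUnits": min(current_read * factor, cap),
            "WriteCapacityUnits": min(current_write * factor, cap),
        },
    }

# Hypothetical table with 400 RCU / 400 WCU, doubled by the alarm handler.
req = scale_up_request("orders", current_read=400, current_write=400)
print(req)
# A CloudWatch alarm on ConsumedReadCapacityUnits / ConsumedWriteCapacityUnits
# would invoke the Lambda, which then calls:
#   boto3.client("dynamodb").update_table(**req)
```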

A group of developers in your organization want to migrate their existing application into Elastic Beanstalk and want to use Elastic Load Balancing and Amazon SQS. They are currently using a custom application server. How would you deploy their system to Elastic Beanstalk?


Options are :

  • Configure an AWS OpsWorks stack that installs the third-party application server, creates a load balancer and an Amazon SQS queue, and then deploys it to Elastic Beanstalk.
  • Create a custom Elastic Beanstalk platform that contains the third-party application server and runs a script that creates a load balancer and an Amazon SQS queue.
  • Use a Docker container that has the third-party application server installed on it and that creates the load balancer and an Amazon SQS queue using the application source bundle feature. (Correct)
  • Configure an Elastic Beanstalk platform using AWS OpsWorks, deploy it to Elastic Beanstalk, and run a script that creates a load balancer and an Amazon SQS queue.

Answer : Use a Docker container that has the third-party application server installed on it and that creates the load balancer and an Amazon SQS queue using the application source bundle feature.

You are a DevOps Engineer for your company. You are planning on using CloudWatch for monitoring the resources hosted in AWS. Which of the following can you ideally do with CloudWatch Logs? Choose 3 answers from the options given below.


Options are :

  • Stream the log data into Amazon Elasticsearch for any search analysis required. (Correct)
  • Send the log data to AWS Lambda for custom processing. (Correct)
  • Send the data to SQS for further processing.
  • Stream the log data to Amazon Kinesis for further processing. (Correct)

Answer : Stream the log data into Amazon Elasticsearch for any search analysis required. Send the log data to AWS Lambda for custom processing. Stream the log data to Amazon Kinesis for further processing.
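All three of these destinations are wired up through CloudWatch Logs subscription filters. A hedged sketch of the parameters for `put_subscription_filter` (the log group, filter name, and ARNs below are hypothetical placeholders):

```python
def subscription_filter_params(log_group, destination_arn, role_arn=None):
    """Build put_subscription_filter kwargs for streaming a log group."""
    params = {
        "logGroupName": log_group,
        "filterName": "stream-all-events",  # hypothetical filter name
        "filterPattern": "",                # empty pattern matches every event
        "destinationArn": destination_arn,
    }
    if role_arn:                            # needed for Kinesis destinations
        params["roleArn"] = role_arn
    return params

params = subscription_filter_params(
    "/app/web",
    "arn:aws:kinesis:us-east-1:123456789012:stream/log-stream",
    role_arn="arn:aws:iam::123456789012:role/cwlogs-to-kinesis",
)
print(params)
# Applied with: boto3.client("logs").put_subscription_filter(**params)
```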

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 13

An application is currently writing a large number of records to a DynamoDB table in one region. There is a requirement for a secondary application to consume just the changes to the DynamoDB table every 2 hours and process the updates accordingly. Which of the following is an ideal way to ensure the secondary application can get the relevant changes from the DynamoDB table?


Options are :

  • Transfer the records which were modified in the last 2 hours to S3.
  • Create another DynamoDB table with the records modified in the last 2 hours.
  • Use DynamoDB Streams to monitor the changes in the DynamoDB table. (Correct)
  • Insert a timestamp for each record and then scan the entire table for the timestamps within the last 2 hours.

Answer : Use DynamoDB Streams to monitor the changes in the DynamoDB table.
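A hedged sketch of the consuming side: a handler that pulls the changed item keys out of a DynamoDB Streams event, in the shape Lambda receives from a stream trigger. The record fields follow the documented stream record format; the sample event itself is fabricated for illustration.

```python
def changed_keys(event):
    """Extract the primary keys of all changed items from a stream event."""
    keys = []
    for record in event.get("Records", []):
        if record.get("eventName") in ("INSERT", "MODIFY", "REMOVE"):
            keys.append(record["dynamodb"]["Keys"])
    return keys

# Fabricated sample event with two changed items.
sample_event = {
    "Records": [
        {"eventName": "MODIFY",
         "dynamodb": {"Keys": {"Id": {"S": "item-1"}}}},
        {"eventName": "INSERT",
         "dynamodb": {"Keys": {"Id": {"S": "item-2"}}}},
    ]
}
print(changed_keys(sample_event))
```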

Your team is responsible for an AWS Elastic Beanstalk application. The business requires that you move to a continuous deployment model, releasing updates to the application multiple times per day with zero downtime. What should you do to enable this and still be able to roll back almost immediately in an emergency to the previous version?


Options are :

  • Create a second Elastic Beanstalk environment with the new application version, and configure the old environment to redirect clients, using the HTTP 301 response code, to the new environment.
  • Develop the application to poll for a new application version in your code repository; download and install it to each running Elastic Beanstalk instance.
  • Enable rolling updates in the Elastic Beanstalk environment, setting an appropriate pause time for application startup.
  • Create a second Elastic Beanstalk environment running the new application version, and swap the environment CNAMEs. (Correct)

Answer : Create a second Elastic Beanstalk environment running the new application version, and swap the environment CNAMEs.
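A hedged sketch of the cutover step: build the arguments for Elastic Beanstalk's `swap_environment_cnames` call. The environment names are hypothetical; rolling back is the same call with the arguments reversed, since the swap is symmetric.

```python
def cname_swap_args(current_env, new_env):
    """Build swap_environment_cnames kwargs for a blue/green cutover."""
    return {
        "SourceEnvironmentName": current_env,
        "DestinationEnvironmentName": new_env,
    }

# Hypothetical environments: "myapp-prod" serves traffic, v2 is standing by.
args = cname_swap_args("myapp-prod", "myapp-prod-v2")
print(args)
# boto3.client("elasticbeanstalk").swap_environment_cnames(**args)
# Emergency rollback: cname_swap_args("myapp-prod-v2", "myapp-prod")
```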

By default in OpsWorks, how many application versions can you roll back to? Please select:


Options are :

  • 4 (Correct)
  • 3
  • 1
  • 2

Answer : 4

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 7

You have an OpsWorks stack defined with Linux instances. You have executed a recipe, but the execution has failed. What is one way to diagnose why the recipe did not execute correctly?


Options are :

  • Log into the instance and check if the recipe was properly configured. (Correct)
  • Deregister the instance and check the EC2 logs
  • Use AWS Config and check the OpsWorks logs to diagnose the error
  • Use AWS CloudTrail and check the OpsWorks logs to diagnose the error

Answer : Log into the instance and check if the recipe was properly configured.

You have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using Auto Scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances. You also want to use the same Amazon Machine Image (AMI) for all instances. Which of the following architectural choices should you make?


Options are :

  • Deploy 3 EC2 instances in one region and 3 in another region and use Amazon Elastic Load Balancer.
  • Deploy 6 EC2 instances in one availability zone and use Amazon Elastic Load Balancer.
  • Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer. (Correct)
  • Deploy 2 EC2 instances in three regions and use Amazon Elastic Load Balancer.

Answer : Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer.

You have an ELB on AWS which has a set of web servers behind it. There is a requirement that the SSL key used to encrypt data is always kept secure. Secondly, the logs of the ELB should only be decrypted by a subset of users. Which of these architectures meets all of the requirements?


Options are :

  • Use Elastic Load Balancing to distribute traffic to a set of web servers, configure the load balancer to perform TCP load balancing, use AWS CloudHSM to perform the SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption. (Correct)
  • Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use AWS CloudHSM to perform the SSL transactions, and write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
  • Use Elastic Load Balancing to distribute traffic to a set of web servers. Use TCP load balancing on the load balancer and configure your web servers to retrieve the private key from a private Amazon S3 bucket on boot. Write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption.
  • Use Elastic Load Balancing to distribute traffic to a set of web servers. To protect the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.

Answer : Use Elastic Load Balancing to distribute traffic to a set of web servers, configure the load balancer to perform TCP load balancing, use AWS CloudHSM to perform the SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption.

AWS DVA-C01 Certified Developer Associate Practice Exam Set 7

When you implement a lifecycle hook in Auto Scaling, by default what is the time limit for which the instance will be kept in a pending state?


Options are :

  • 5 minutes
  • 20 minutes
  • 60 seconds
  • 60 minutes (Correct)

Answer : 60 minutes
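A hedged sketch of the `put_lifecycle_hook` parameters behind this answer: if `HeartbeatTimeout` is omitted, Auto Scaling defaults it to 3600 seconds, which is the 60 minutes the question refers to. The hook and group names are hypothetical.

```python
DEFAULT_HEARTBEAT_SECONDS = 3600  # Auto Scaling's default (60 minutes)

def launch_hook_params(asg_name, heartbeat=DEFAULT_HEARTBEAT_SECONDS):
    """Build put_lifecycle_hook kwargs for an instance-launch hook."""
    return {
        "LifecycleHookName": "on-launch",  # hypothetical hook name
        "AutoScalingGroupName": asg_name,
        "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
        "HeartbeatTimeout": heartbeat,     # instance waits in Pending:Wait
        "DefaultResult": "CONTINUE",       # action taken if the hook times out
    }

params = launch_hook_params("web-asg")
print(params)
# boto3.client("autoscaling").put_lifecycle_hook(**params)
```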

Which of the below is not a lifecycle event in OpsWorks?


Options are :

  • Configure
  • Setup
  • Shutdown
  • Uninstall (Correct)

Answer : Uninstall

When deploying applications to Elastic Beanstalk, which of the following statements is false with regards to application deployment?


Options are :

  • Can include parent directories (Correct)
  • Can be deployed to the application server
  • Should not exceed 512 MB in size
  • The application can be bundled in a zip file

Answer : Can include parent directories

AWS Solutions Architect Associate 2019 with Practice Test Set 6

You have implemented a system to automate deployments of your configuration and application dynamically after an Amazon EC2 instance in an Auto Scaling group is launched. Your system uses a configuration management tool that works in a standalone configuration, where there is no master node. Due to the volatility of application load, new instances must be brought into service within three minutes of the launch of the instance operating system. The deployment stages take the following times to complete: 1) Installing configuration management agent: 2 mins 2) Configuring instance using artifacts: 4 mins 3) Installing application framework: 15 mins 4) Deploying application code: 1 min. What process should you use to automate the deployment using this type of standalone agent configuration?


Options are :

  • Build a custom Amazon Machine Image that includes the configuration management agent and application framework pre-installed. Configure your Auto Scaling launch configuration with an Amazon EC2 UserData script to pull configuration artifacts and application code from an Amazon S3 bucket, and then execute the agent to configure the system.
  • Create a web service that polls the Amazon EC2 API to check for new instances that are launched in an Auto Scaling group. When it recognizes a new instance, execute a remote script via SSH to install the agent, SCP the configuration artifacts and application code, and finally execute the agent to configure the system.
  • Build a custom Amazon Machine Image that includes all components pre-installed, including the agent, configuration artifacts, application frameworks, and code. Create a startup script that executes the agent to configure the system on startup. (Correct)
  • Configure your Auto Scaling launch configuration with an Amazon EC2 UserData script to install the agent, pull configuration artifacts and application code from an Amazon S3 bucket, and then execute the agent to configure the infrastructure and application.

Answer : Build a custom Amazon Machine Image that includes all components pre-installed, including the agent, configuration artifacts, application frameworks, and code. Create a startup script that executes the agent to configure the system on startup.

Which of the following is a container for metrics in CloudWatch?


Options are :

  • Metric Collection
  • Locale
  • Packages
  • Namespaces (Correct)

Answer : Namespaces
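A hedged sketch showing where the namespace fits: custom metrics are published with `put_metric_data` into a namespace of your choosing, which groups related metrics together. The namespace and metric names here are hypothetical.

```python
import datetime

def metric_payload(namespace, name, value, unit="Count"):
    """Build put_metric_data kwargs for a single custom metric datapoint."""
    return {
        "Namespace": namespace,  # the container that groups related metrics
        "MetricData": [
            {
                "MetricName": name,
                "Timestamp": datetime.datetime.utcnow(),
                "Value": value,
                "Unit": unit,
            }
        ],
    }

payload = metric_payload("MyApp/Orders", "OrdersProcessed", 42)
print(payload["Namespace"], payload["MetricData"][0]["MetricName"])
# boto3.client("cloudwatch").put_metric_data(**payload)
```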

One of your engineers has written a web application in the Go programming language and has asked your DevOps team to deploy it to AWS. The application code is hosted in a Git repository. What are your options? (Select two)


Options are :

  • Write a Dockerfile that installs the Go base image and fetches your application using Git. Create a new AWS Elastic Beanstalk application and use this Dockerfile to automate the deployment. (Correct)
  • Write a Dockerfile that installs the Go base image and uses Git to fetch your application. Create a new AWS OpsWorks stack that contains a Docker layer that uses the Dockerrun.aws.json file to deploy your container, and then use the Dockerfile to automate the deployment.
  • Write a Dockerfile that installs the Go base image and fetches your application using Git. Create an AWS CloudFormation template that creates and associates an AWS::EC2::Instance resource type with an AWS::EC2::Container resource type.
  • Create a new AWS Elastic Beanstalk application and configure a Go environment to host your application. Using Git, check out the latest version of the code; once the local repository for Elastic Beanstalk is configured, use the "eb create" command to create an environment and then use the "eb deploy" command to deploy the application. (Correct)

Answer : Write a Dockerfile that installs the Go base image and fetches your application using Git. Create a new AWS Elastic Beanstalk application and use this Dockerfile to automate the deployment. Create a new AWS Elastic Beanstalk application and configure a Go environment to host your application. Using Git, check out the latest version of the code; once the local repository for Elastic Beanstalk is configured, use the "eb create" command to create an environment and then use the "eb deploy" command to deploy the application.

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 7

You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 due to the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, while memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users?


Options are :

  • Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate. Run a stack update with the new template. (Correct)
  • Sign into the AWS Management Console and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate.
  • Sign into the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances.
  • Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type.

Answer : Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate. Run a stack update with the new template.

You work as a DevOps Engineer for your company. There are currently a number of environments hosted via Elastic Beanstalk. There is a requirement to ensure that the rollback time for a new application version deployment is kept to a minimum. Which Elastic Beanstalk deployment method would fulfill this requirement?


Options are :

  • Blue/Green (Correct)
  • Rolling with additional batch
  • All at Once
  • Rolling

Answer : Blue/Green

You have a web application that is currently running on three M3 instances in three AZs. You have an Auto Scaling group configured to scale from three to thirty instances. When reviewing your CloudWatch metrics, you see that sometimes your Auto Scaling group is hosting fifteen instances. The web application is reading and writing to a DynamoDB backend configured with 800 Write Capacity Units and 800 Read Capacity Units. Your DynamoDB primary key is the Company ID. You are hosting 25 TB of data in your web application. You have a single customer that is complaining of long load times when their staff arrives at the office at 9:00 AM and loads the website, which consists of content that is pulled from DynamoDB. You have other customers who routinely use the web application. Choose the answer that will ensure high availability and reduce the customer's access times.


Options are :

  • Change your Auto Scaling group configuration to use Amazon C3 instance types, because the web application layer is probably running out of compute capacity.
  • Add a caching layer in front of your web application by choosing ElastiCache Memcached instances in one of the AZs.
  • Implement an Amazon SQS queue between your DynamoDB database layer and the web application layer to minimize the large burst in traffic the customer generates when everyone arrives at the office at 9:00 AM and begins accessing the website. (Correct)
  • Double the number of Read Capacity Units in your DynamoDB instance because the instance is probably being throttled when the customer accesses the website and your web application.

Answer : Implement an Amazon SQS queue between your DynamoDB database layer and the web application layer to minimize the large burst in traffic the customer generates when everyone arrives at the office at 9:00 AM and begins accessing the website.

AWS SAP-C01 Certified Solutions Architect Professional Exam Set 8

Which of the following CloudFormation helper scripts can help install packages on EC2 resources? Please select:


Options are :

  • cfn-signal
  • cfn-hup
  • cfn-get-metadata
  • cfn-init (Correct)

Answer : cfn-init

You are a DevOps Engineer for your company. You are in charge of an application that uses EC2, ELB and Auto Scaling. You have been requested to get the ELB access logs. When you try to access the logs, you can see that nothing has been recorded in S3. Why is this the case?


Options are :

  • You do not have the necessary access to the logs generated by ELB.
  • The Auto Scaling service is not sending the required logs to the ELB
  • The EC2 instances are not sending the required logs to the ELB
  • By default ELB access logs are disabled. (Correct)

Answer : By default ELB access logs are disabled.
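A hedged sketch of enabling those logs: the attributes argument that turns on Classic ELB access logging (disabled by default) and ships logs to S3. The bucket and load balancer names are hypothetical.

```python
def access_log_attributes(bucket, prefix="elb-logs", interval_minutes=5):
    """Build the LoadBalancerAttributes dict that enables ELB access logs."""
    return {
        "AccessLog": {
            "Enabled": True,             # access logging is off by default
            "S3BucketName": bucket,
            "EmitInterval": interval_minutes,  # publish interval: 5 or 60
            "S3BucketPrefix": prefix,
        }
    }

attrs = access_log_attributes("my-elb-logs-bucket")
print(attrs)
# boto3.client("elb").modify_load_balancer_attributes(
#     LoadBalancerName="web-elb", LoadBalancerAttributes=attrs)
```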
