AWS DevOps Engineer Professional Certification Practice Exam Set 3

Which of the following cache engines does OpsWorks have built-in support for?


Options are :

  • Redis
  • Memcached (Correct)
  • Both Redis and Memcached
  • There is no built-in support as of yet for any cache engine

Answer : Memcached

Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? Choose 2 answers from the options below.


Options are :

  • Add an RDS MySQL read replica in each Availability Zone (Correct)
  • Increase the RDS MySQL instance size and implement provisioned IOPS
  • Implement sharding to distribute load to multiple RDS MySQL instances
  • Deploy an ElastiCache in-memory cache running in each Availability Zone

Answer : Add an RDS MySQL read replica in each Availability Zone
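
For illustration, a minimal boto3 sketch of adding a read replica per Availability Zone (the instance identifiers, region, and zone names are placeholders, not taken from the question):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# One read replica per Availability Zone to relieve read contention.
for az in ["us-east-1a", "us-east-1b"]:
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier=f"myapp-replica-{az}",
        SourceDBInstanceIdentifier="myapp-master",
        AvailabilityZone=az,
    )
```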

The company you work for has a huge amount of infrastructure built on AWS. However, there have been some concerns recently about the security of this infrastructure, and an external auditor has been given the task of running a thorough check of all of your company's AWS assets. The auditor will be in the USA while your company's infrastructure resides in the Asia Pacific (Sydney) region on AWS. Initially, he needs to check all of your VPC assets, specifically security groups and NACLs. You have been assigned the task of providing the auditor with a login to be able to do this. Which of the following would be the best and most secure solution to provide the auditor with so he can begin his initial investigations? Choose the correct answer from the options below


Options are :

  • Give him root access to your AWS infrastructure; because he is an auditor, he will need access to every service.
  • Create an IAM user with full VPC access, but set a condition that will not allow him to modify anything if the request comes from any IP other than his own.
  • Create an IAM user who will have read-only access to your AWS VPC infrastructure and provide the auditor with those credentials. (Correct)
  • Create an IAM user tied to an administrator role. Also provide an additional level of security with MFA.

Answer : Create an IAM user who will have read-only access to your AWS VPC infrastructure and provide the auditor with those credentials.
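
A minimal sketch of creating such a read-only user with boto3, assuming the AWS-managed AmazonVPCReadOnlyAccess policy is sufficient for the initial VPC review (the user name and password are placeholders):

```python
import boto3

iam = boto3.client("iam")

# Create the auditor's user and attach read-only VPC permissions.
iam.create_user(UserName="external-auditor")
iam.attach_user_policy(
    UserName="external-auditor",
    PolicyArn="arn:aws:iam::aws:policy/AmazonVPCReadOnlyAccess",
)

# Console password so the auditor can sign in and review
# security groups and NACLs.
iam.create_login_profile(
    UserName="external-auditor",
    Password="<temporary-password>",
    PasswordResetRequired=True,
)
```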

You have an OpsWorks stack set up in AWS. You want to install some updates to the Linux instances in the stack. Which of the following can be used to publish those updates? Choose 2 answers from the options given below


Options are :

  • Use Auto Scaling to launch new instances and then delete the older instances
  • On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command (Correct)
  • Create and start new instances to replace your current online instances. Then delete the current instances. (Correct)
  • Delete the stack and create a new stack with the instances and their relevant updates

Answer : On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command. Create and start new instances to replace your current online instances. Then delete the current instances.
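
The Update Dependencies command can be issued through the OpsWorks API as a deployment; a sketch with boto3 (the stack ID is a placeholder):

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Run the Update Dependencies stack command on a Chef 11.10 (or older) stack.
opsworks.create_deployment(
    StackId="00000000-0000-0000-0000-000000000000",
    Command={"Name": "update_dependencies"},
)
```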

You currently have an application with an Auto Scaling group and an Elastic Load Balancer configured in AWS. After deployment, users are complaining of slow response times from your application. Which of the following can be used as a start to diagnose the issue?


Options are :

  • Use CloudWatch to monitor the ELB Latency metric (Correct)
  • Use CloudWatch to monitor CPU utilization
  • Use CloudWatch to monitor memory utilization
  • Use CloudWatch to monitor the HealthyHostCount metric

Answer : Use CloudWatch to monitor the ELB Latency metric
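
A sketch of pulling the ELB Latency metric with boto3 (the load balancer name is a placeholder):

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average latency over the last hour, in 5-minute periods.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-load-balancer"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```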

You have a video processing application hosted in AWS. The videos are uploaded by users onto the site. You have a program that is custom built to process those videos. The program is able to recover in case there are any failures when processing the videos. Which of the following mechanisms can be used to deploy the instances for carrying out the video processing activities, ensuring that the cost is kept at a minimum?


Options are :

  • Create a launch configuration with Reserved Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration.
  • Create a launch configuration with On-Demand Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration.
  • Create a launch configuration with Spot Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration. (Correct)
  • Create a launch configuration with Dedicated Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration.

Answer : Create a launch configuration with Spot Instances. Ensure the User Data section details the installation of the custom software. Create an Auto Scaling group with the launch configuration.
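
A sketch of the Spot-based launch configuration with boto3 (AMI ID, bid price, package name, and group sizing are all illustrative assumptions):

```python
import base64
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Bootstrap script that installs the (hypothetical) video-processing software.
script = """#!/bin/bash
yum install -y my-video-processor
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName="video-processing-spot",
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c4.large",
    SpotPrice="0.05",                  # maximum hourly bid keeps cost low
    # The CreateLaunchConfiguration API expects user data as base64 text.
    UserData=base64.b64encode(script.encode()).decode(),
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="video-processing-asg",
    LaunchConfigurationName="video-processing-spot",
    MinSize=0,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```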

There is a requirement for a vendor to have access to an S3 bucket in your account. The vendor already has an AWS account. How can you provide access to the vendor on this bucket? Please select:


Options are :

  • Create an S3 bucket policy that allows the vendor to read from the bucket from their AWS account.
  • Create a cross-account role for the vendor account and grant that role access to the S3 bucket. (Correct)
  • Create a new IAM group and grant the relevant access to the vendor on that bucket.
  • Create a new IAM user and grant the relevant access to the vendor on that bucket.

Answer : Create a cross-account role for the vendor account and grant that role access to the S3 bucket.
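
A sketch of the cross-account role setup with boto3; the vendor account ID, role name, and bucket name are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

VENDOR_ACCOUNT_ID = "111122223333"  # placeholder vendor account

# Trust policy: lets principals in the vendor account assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="vendor-s3-access",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permission policy: read-only access to the shared bucket.
read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::shared-bucket",
            "arn:aws:s3:::shared-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="vendor-s3-access",
    PolicyName="vendor-s3-read",
    PolicyDocument=json.dumps(read_policy),
)
```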

You need to grant a vendor access to your AWS account. They need to be able to read protected messages in a private S3 bucket at their leisure. They also use AWS. What is the best way to accomplish this?


Options are :

  • Create an EC2 instance profile on your account. Grant the associated IAM role full access to the bucket. Start an EC2 instance with this profile and give SSH access to the instance to the vendor.
  • Generate a signed S3 GET URL and a signed S3 PUT URL, both with wildcard values and 2-year durations. Pass the URLs to the vendor.
  • Create an IAM user with API access keys. Grant the user permissions to access the bucket. Give the vendor the AWS Access Key ID and AWS Secret Access Key for the user.
  • Create a cross-account IAM role with permission to access the bucket, and grant permission to use the role to the vendor's AWS account. (Correct)

Answer : Create a cross-account IAM role with permission to access the bucket, and grant permission to use the role to the vendor's AWS account.
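
For completeness, the vendor side would assume the role with STS and use the returned temporary credentials; a sketch (the role ARN, bucket, and key are placeholders):

```python
import boto3

# Run from the vendor's AWS account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999988887777:role/vendor-s3-access",
    RoleSessionName="vendor-read-session",
)["Credentials"]

# Use the temporary credentials to read the protected messages.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
obj = s3.get_object(Bucket="shared-bucket", Key="messages/msg-001.txt")
print(obj["Body"].read())
```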

You currently have an Auto Scaling group with the following settings: Min capacity 2, Desired capacity 2, Maximum capacity 2. Your launch configuration uses AMIs based on the t2.micro instance type. The application running on these instances is now experiencing issues, and you have identified that the solution is to change the instance type of the instances running in the Auto Scaling group. Which of the below solutions will meet this demand?


Options are :

  • Delete the current launch configuration. Create a new launch configuration with the new instance type and add it to the Auto Scaling group. This will then launch the new instances.
  • Make a copy of the launch configuration. Change the instance type in the new launch configuration. Attach it to the Auto Scaling group. Change the maximum and desired size of the Auto Scaling group to 4. Once the new instances are launched, change the desired and maximum size back to 2. (Correct)
  • Change the instance type in the current launch configuration. Change the desired value of the Auto Scaling group to 4. Ensure the new instances are launched.
  • Change the desired and maximum size of the Auto Scaling group to 4. Make a copy of the launch configuration. Change the instance type in the new launch configuration. Attach it to the Auto Scaling group. Change the maximum and desired size of the Auto Scaling group to 2.

Answer : Make a copy of the launch configuration. Change the instance type in the new launch configuration. Attach it to the Auto Scaling group. Change the maximum and desired size of the Auto Scaling group to 4. Once the new instances are launched, change the desired and maximum size back to 2.
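
The copy-and-resize procedure can be scripted with boto3; a sketch assuming hypothetical names myapp-lc and myapp-asg:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# 1. Copy the existing launch configuration, changing only the instance type.
old = autoscaling.describe_launch_configurations(
    LaunchConfigurationNames=["myapp-lc"]
)["LaunchConfigurations"][0]

autoscaling.create_launch_configuration(
    LaunchConfigurationName="myapp-lc-m4large",
    ImageId=old["ImageId"],
    InstanceType="m4.large",            # the new instance type
    SecurityGroups=old["SecurityGroups"],
)

# 2. Attach the new launch configuration and temporarily double capacity
#    so instances of the new type are launched.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="myapp-asg",
    LaunchConfigurationName="myapp-lc-m4large",
    MaxSize=4,
    DesiredCapacity=4,
)

# 3. Once the new instances are InService, scale back down to 2; the default
#    termination policy removes instances on the oldest launch configuration first.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="myapp-asg",
    MaxSize=2,
    DesiredCapacity=2,
)
```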

You need to investigate one of the instances which is part of your Auto Scaling group. How would you implement this?


Options are :

  • Put the instance into a Standby state (Correct)
  • Suspend the AZRebalance process so that Auto Scaling will not terminate the instance
  • Put the instance into an InService state
  • Suspend the AddToLoadBalancer process

Answer : Put the instance into a Standby state
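
A sketch of moving an instance to Standby and back with boto3 (the instance and group names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Move the instance to Standby so Auto Scaling stops routing traffic to it
# and will not replace it while you investigate.
autoscaling.enter_standby(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="myapp-asg",
    ShouldDecrementDesiredCapacity=True,
)

# ... investigate the instance ...

# Return it to service afterwards.
autoscaling.exit_standby(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="myapp-asg",
)
```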

Which of the following services can be used to implement DevOps in your company?


Options are :

  • All of the above (Correct)
  • AWS Elastic Beanstalk
  • AWS CloudFormation
  • AWS OpsWorks

Answer : All of the above

You have a legacy application running on an m4.large instance that cannot scale with Auto Scaling and only has peak performance 5% of the time. This is a huge waste of resources and money, so your senior technical manager has set you the task of trying to reduce costs while still keeping the legacy application running as it should. Which of the following would best accomplish the task your manager has set you? Choose the correct answer from the options below.


Options are :

  • Use a t2.nano instance and add Spot Instances when they are required.
  • Use two t2.nano instances that have Single Root I/O Virtualization.
  • Use a T2 burstable performance instance. (Correct)
  • Use a c4.large instance with enhanced networking.

Answer : Use a T2 burstable performance instance.

You have an application hosted in AWS, which sits on EC2 instances behind an Elastic Load Balancer. You have added a new feature to your application and are now receiving complaints from users that the site has a slow response. Which of the below actions can you carry out to help pinpoint the issue?


Options are :

  • Create some custom CloudWatch metrics which are pertinent to the key features of your application (Correct)
  • Use CloudWatch to monitor CPU utilization and see the times when the CPU peaked
  • Use CloudTrail to log all the API calls, and then traverse the log files to locate the issue
  • Review the Elastic Load Balancer logs

Answer : Create some custom CloudWatch metrics which are pertinent to the key features of your application
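
Custom metrics are published with the PutMetricData API; a sketch assuming a hypothetical per-feature latency metric:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish an application-level metric, e.g. how long the new feature's
# request handler took (namespace, names, and values are illustrative).
cloudwatch.put_metric_data(
    Namespace="MyApp/Features",
    MetricData=[{
        "MetricName": "NewFeatureLatency",
        "Dimensions": [{"Name": "Feature", "Value": "checkout"}],
        "Value": 412.0,
        "Unit": "Milliseconds",
    }],
)
```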

You need to store a large volume of data. The data needs to be readily accessible for a short period, but then needs to be archived indefinitely after that. What is a cost-effective solution that can help fulfil this requirement?


Options are :

  • Store your data in an EBS volume, and use lifecycle policies to archive to Amazon Glacier.
  • Store your data in Amazon S3, and use lifecycle policies to archive to Amazon Glacier. (Correct)
  • Keep all your data in S3, since this is durable storage.
  • Store your data in Amazon S3, and use lifecycle policies to archive to S3 Infrequent Access.

Answer : Store your data in Amazon S3, and use lifecycle policies to archive to Amazon Glacier.
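
A sketch of such a lifecycle rule with boto3 (the bucket name and the 30-day transition window are assumptions):

```python
import boto3

s3 = boto3.client("s3")

# Keep objects readily accessible for 30 days, then archive to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }],
    },
)
```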

Which of the following features of the Elastic Beanstalk service will allow you to perform a Blue/Green deployment?


Options are :

  • Swap Environment
  • Environment Configuration
  • Rebuild Environment
  • Swap URLs (Correct)

Answer : Swap URLs
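
The URL swap corresponds to the SwapEnvironmentCNAMEs API call; a sketch with boto3 (the environment names are placeholders):

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Swap the CNAMEs of the blue (live) and green (staging) environments,
# which cuts user traffic over to the new version.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-blue",
    DestinationEnvironmentName="myapp-green",
)
```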

You have been requested to use CloudFormation to maintain version control and achieve automation for the applications in your organization. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping costs down?


Options are :

  • Use CloudFormation custom resources to handle dependencies between stacks
  • Create multiple templates in one CloudFormation stack.
  • Create separate templates based on functionality, and create nested stacks with CloudFormation. (Correct)
  • Combine all resources into one template for version control and automation.

Answer : Create separate templates based on functionality, and create nested stacks with CloudFormation.
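
With nested stacks, a parent template declares AWS::CloudFormation::Stack resources that point at child templates stored in S3. A sketch (template URLs and stack names are placeholders):

```python
import json
import boto3

# Parent template nesting two functional child templates stored in S3.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/network.yaml"
            },
        },
        "AppStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/app.yaml"
            },
        },
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="myapp-parent",
    TemplateBody=json.dumps(parent_template),
)
```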

The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? Choose two from the options below. Please select:


Options are :

  • Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail.
  • Using configuration management, set up remote logging to send events to Amazon Kinesis and insert them into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools. (Correct)
  • Using AWS CloudFormation, create a CloudWatch Logs log group and send the operating system and application logs of interest using the CloudWatch Logs agent. (Correct)
  • Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM roles to allow both teams to have access to view console output from Amazon EC2.

Answer : Using configuration management, set up remote logging to send events to Amazon Kinesis and insert them into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools. Using AWS CloudFormation, create a CloudWatch Logs log group and send the operating system and application logs of interest using the CloudWatch Logs agent.

Which of the following services can be used in conjunction with CloudWatch Logs? Choose the 3 most viable services from the options given below


Options are :

  • AWS Lambda (Correct)
  • Amazon Kinesis
  • Amazon S3 (Correct)
  • Amazon SQS

Answer : AWS Lambda, Amazon S3

You have a requirement to automate the creation of EBS snapshots. Which of the following can be used to achieve this in the best way possible?


Options are :

  • Create a PowerShell script which uses the AWS CLI to get the volumes and then run the script as a cron job
  • Use the AWS CodeDeploy service to create a snapshot of the EBS volumes
  • Use CloudWatch Events to trigger the snapshots of EBS volumes (Correct)
  • Use the AWS Config service to create a snapshot of the EBS volumes

Answer : Use CloudWatch Events to trigger the snapshots of EBS volumes
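
One common pattern is a CloudWatch Events schedule rule (e.g. rate(1 day)) invoking a Lambda function that calls the EC2 CreateSnapshot API. A sketch of such a handler, assuming the volumes to back up carry a hypothetical Backup=true tag:

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Invoked on a schedule by a CloudWatch Events rule."""
    # Snapshot every volume tagged Backup=true (tag is illustrative).
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]
    for volume in volumes:
        ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description=f"Scheduled snapshot of {volume['VolumeId']}",
        )
```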

Your company has a requirement to set up instances running as part of an Auto Scaling group. Part of the requirement is to use lifecycle hooks to set up custom software and do the necessary configuration on the instances. The time required for this setup might take an hour, or might finish before the hour is up. How should you set up lifecycle hooks for the Auto Scaling group? Choose 2 ideal actions you would include as part of the lifecycle hook.


Options are :

  • If the software installation and configuration is complete, then send a signal to complete the launch of the instance. (Correct)
  • If the software installation and configuration is complete, then restart the time period.
  • Configure the lifecycle hook to record heartbeats. If the hour is up, restart the timeout period. (Correct)
  • Configure the lifecycle hook to record heartbeats. If the hour is up, choose to terminate the current instance and start a new one.

Answer : If the software installation and configuration is complete, then send a signal to complete the launch of the instance. Configure the lifecycle hook to record heartbeats. If the hour is up, restart the timeout period.
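
Heartbeats and the completion signal map to the RecordLifecycleActionHeartbeat and CompleteLifecycleAction APIs; a sketch (hook, group, and instance names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

HOOK = dict(
    LifecycleHookName="setup-hook",
    AutoScalingGroupName="myapp-asg",
    InstanceId="i-0123456789abcdef0",
)

# While installation is still running: extend the timeout with a heartbeat.
autoscaling.record_lifecycle_action_heartbeat(**HOOK)

# When installation and configuration finish: signal a successful launch.
autoscaling.complete_lifecycle_action(
    LifecycleActionResult="CONTINUE",
    **HOOK,
)
```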

You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? Choose two answers from the options given below.


Options are :

  • On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour. (Correct)
  • Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics. (Correct)
  • On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to process the data in Amazon Glacier and run reports every hour.
  • Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define Metric Filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.

Answer : On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour. Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define Metric Filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.

You have an Auto Scaling group configured to launch EC2 instances for your application, but you notice that the Auto Scaling group is not launching instances in the right proportion. In fact, instances are being launched too fast. What can you do to mitigate this issue? Choose 2 answers from the options given below. Please select:


Options are :

  • Adjust the memory threshold set for the Auto Scaling scale-in and scale-out process.
  • Adjust the cooldown period set for the Auto Scaling group (Correct)
  • Set a custom metric which monitors a key application functionality for the scale-in and scale-out process. (Correct)
  • Adjust the CPU threshold set for the Auto Scaling scale-in and scale-out process.

Answer : Adjust the cooldown period set for the Auto Scaling group. Set a custom metric which monitors a key application functionality for the scale-in and scale-out process.
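
The cooldown is a property of the Auto Scaling group itself; a sketch of lengthening it with boto3 (the group name and the 300-second value are illustrative):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Lengthen the default cooldown so a scaling activity must settle for
# 5 minutes before another scale-out can fire.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="myapp-asg",
    DefaultCooldown=300,
)
```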

Your security officer has told you that you need to tighten up the logging of all events that occur on your AWS account. He wants to be able to access all events that occur on the account across all regions quickly and in the simplest way possible. He also wants to make sure he is the only person that has access to these events in the most secure way possible. Which of the following would be the best solution to ensure his requirements are met? Choose the correct answer from the options below.


Options are :

  • Use CloudTrail to send all API calls to CloudWatch and send an email to the security officer every time an API call is made. Make sure the emails are encrypted.
  • Use CloudTrail to log all events to one S3 bucket. Make this S3 bucket only accessible by your security officer with a bucket policy that restricts access to his user only, and also add MFA to the policy for a further level of security. (Correct)
  • Use CloudTrail to log all events to an Amazon Glacier vault. Make sure the vault access policy only grants access to the security officer's IP address.
  • Use CloudTrail to log all events to a separate S3 bucket in each region, as CloudTrail cannot write to a bucket in a different region. Use MFA and bucket policies on all the different buckets.

Answer : Use CloudTrail to log all events to one S3 bucket. Make this S3 bucket only accessible by your security officer with a bucket policy that restricts access to his user only, and also add MFA to the policy for a further level of security.
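
A sketch of the single multi-region trail with boto3; the bucket policy restricting access to the officer's user would additionally use an aws:MultiFactorAuthPresent condition (trail and bucket names are placeholders):

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# One trail that records events from all regions into one bucket.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="security-officer-audit-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```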

Your company has an on-premise Active Directory setup in place. The company has extended its footprint on AWS, but still wants to have the ability to use the on-premise Active Directory for authentication. Which of the following AWS services can be used to ensure that AWS resources such as Amazon WorkSpaces can continue to use the existing credentials stored in the on-premise Active Directory?


Options are :

  • Use the Active Directory Connector service on AWS (Correct)
  • Use the ClassicLink feature on AWS
  • Use the AWS Simple AD service
  • Use the Active Directory service on AWS

Answer : Use the Active Directory Connector service on AWS

Your application is experiencing very high traffic, so you have enabled Auto Scaling across multiple Availability Zones to meet the needs of your application, but you observe that one of the Availability Zones is not receiving any traffic. What can be wrong here? Please select:


Options are :

  • Auto Scaling only works for a single Availability Zone
  • The Availability Zone is not added to the Elastic Load Balancer (Correct)
  • Auto Scaling can be enabled for multi-AZ only in the North Virginia region
  • Instances need to be manually added to the Availability Zone

Answer : The Availability Zone is not added to the Elastic Load Balancer
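
For a Classic Load Balancer, the missing zone is added with EnableAvailabilityZonesForLoadBalancer; a sketch (the names are placeholders):

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # Classic Load Balancer API

# Add the missing Availability Zone so instances launched there
# start receiving traffic.
elb.enable_availability_zones_for_load_balancer(
    LoadBalancerName="myapp-elb",
    AvailabilityZones=["us-east-1c"],
)
```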

Your development team is using access keys to develop an application that has access to S3 and DynamoDB. A new security policy has outlined that the credentials should not be older than 2 months and should be rotated. How can you achieve this?


Options are :

  • Delete the user associated with the keys after every 2 months. Then recreate the user again.
  • Use a script which will query the date the keys were created. If older than 2 months, delete them and create new keys. (Correct)
  • Use the application to rotate the keys every 2 months via the SDK
  • Delete the IAM role associated with the keys after every 2 months. Then recreate the IAM role again.

Answer : Use a script which will query the date the keys were created. If older than 2 months, delete them and create new keys.
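
A sketch of such a rotation script with boto3, assuming a hypothetical user name and treating 2 months as 60 days:

```python
import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client("iam")
MAX_AGE = timedelta(days=60)

def rotate_old_keys(user_name: str) -> None:
    """Delete access keys older than ~2 months and issue replacements."""
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        age = datetime.now(timezone.utc) - key["CreateDate"]
        if age > MAX_AGE:
            iam.delete_access_key(
                UserName=user_name, AccessKeyId=key["AccessKeyId"]
            )
            new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
            # Hand the new credentials to the application's secret store here.
            print("Rotated key for", user_name, "->", new_key["AccessKeyId"])

rotate_old_keys("dev-app-user")  # placeholder user name
```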

You are in charge of designing CloudFormation templates for your company. One of the key requirements is to ensure that if a CloudFormation stack is deleted, a snapshot of the relational database which is part of the stack is created. How can you achieve this in the best possible way?


Options are :

  • Use the DeletionPolicy of the CloudFormation template to ensure a snapshot is created of the relational database. (Correct)
  • Use the UpdatePolicy of the CloudFormation template to ensure a snapshot is created of the relational database.
  • Create a new CloudFormation template to create a snapshot of the relational database.
  • Create a snapshot of the relational database beforehand so that when the CloudFormation stack is deleted, the snapshot of the database will be present.

Answer : Use the DeletionPolicy of the CloudFormation template to ensure a snapshot is created of the relational database.
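
A sketch of a template carrying DeletionPolicy: Snapshot on the database resource, submitted with boto3 (all property values are placeholders):

```python
import json
import boto3

# DeletionPolicy "Snapshot" makes CloudFormation take a final DB snapshot
# when the stack (and with it this resource) is deleted.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.m4.large",
                "AllocatedStorage": "100",
                "MasterUsername": "admin",
                "MasterUserPassword": "ChangeMe123!",  # placeholder only
            },
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="myapp-db-stack",
    TemplateBody=json.dumps(template),
)
```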

You're building a mobile application game. The application needs permissions for each user to communicate and store data in DynamoDB tables. What is the best method for granting each mobile device that installs your application access to DynamoDB tables for storage when required? Choose the correct answer from the options below


Options are :

  • Create an IAM role with the proper permission policy to communicate with the DynamoDB table. Use web identity federation, which assumes the IAM role using AssumeRoleWithWebIdentity when the user signs in, granting temporary security credentials using STS. (Correct)
  • Create an Active Directory server and an AD user for each mobile application user. When the user signs in to the AD sign-on, allow the AD server to federate using SAML 2.0 to IAM and assign a role to the AD user, which is then assumed with AssumeRoleWithSAML.
  • During the install and game configuration process, have each user create an IAM credential and assign the IAM user to a group with proper permissions to communicate with DynamoDB.
  • Create an IAM group that only gives access to your application and to the DynamoDB tables. Then, when writing to DynamoDB, simply include the unique device ID to associate the data with that specific user.

Answer : Create an IAM role with the proper permission policy to communicate with the DynamoDB table. Use web identity federation, which assumes the IAM role using AssumeRoleWithWebIdentity when the user signs in, granting temporary security credentials using STS.
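
A sketch of the client-side flow with boto3; the role ARN, identity token, and table are placeholders:

```python
import boto3

# Called from the mobile client after the user signs in with a
# web identity provider (Login with Amazon, Facebook, Google, etc.).
sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::999988887777:role/mobile-game-dynamodb",
    RoleSessionName="player-session",
    WebIdentityToken="<token-from-identity-provider>",
)["Credentials"]

# Temporary credentials, scoped by the role's policy, talk to DynamoDB.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
dynamodb.put_item(
    TableName="GameState",  # illustrative table
    Item={"PlayerId": {"S": "player-42"}, "Score": {"N": "1337"}},
)
```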

You need to deploy a multi-container Docker environment onto Elastic Beanstalk. Which of the following files can be used to deploy a set of Docker containers to Elastic Beanstalk? Please select:


Options are :

  • Docker run
  • Dockerfile
  • Dockerrun.aws.json (Correct)
  • Docker Multi file

Answer : Dockerrun.aws.json
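
A minimal version-2 Dockerrun.aws.json for a multi-container environment, generated here from Python for illustration (container names, images, and sizes are assumptions):

```python
import json

# Version 2 of the file format is the one used for multi-container
# Docker environments on Elastic Beanstalk.
dockerrun = {
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "essential": True,
            "memory": 128,
            "portMappings": [{"hostPort": 80, "containerPort": 80}],
        },
        {
            "name": "app",
            "image": "my-account/my-app:1.0",
            "essential": True,
            "memory": 256,
        },
    ],
}

with open("Dockerrun.aws.json", "w") as f:
    json.dump(dockerrun, f, indent=2)
```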

You currently have an application deployed via Elastic Beanstalk. You are now deploying a new application version and have ensured that Elastic Beanstalk has detached the current instances and deployed and reattached new instances. But the new instances are still not receiving any traffic. Why is this the case?


Options are :

  • You need to create a new Elastic Beanstalk application, because you cannot detach and then reattach instances to an ELB within an Elastic Beanstalk application
  • The instances are of the wrong AMI, hence they are not being detected by the ELB.
  • It takes time for the ELB to register the instances, hence there is a small time frame before your instances can start receiving traffic (Correct)
  • The instances needed to be reattached before the new application version was deployed

Answer : It takes time for the ELB to register the instances, hence there is a small time frame before your instances can start receiving traffic
