AWS DevOps Engineer Professional Certified Practice Exam Set 13

You have been asked to de-risk deployments at your company. Specifically, the CEO is concerned about outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass. You already use Docker to get high consistency between Staging and Production for the application environment on your EC2 instances. How do you further de-risk the rest of the execution environment, since in AWS there are many service components you may use beyond EC2 virtual machines?


Options are :

  • Use AWS ECS and Docker clustering. This will make sure that the AMIs and machine sizes are the same across both environments.
  • Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity. (Correct)
  • Use AMIs to ensure the whole machine, including the kernel of the virtual machines, is consistent, since Docker uses Linux Containers (LXC) technology, and we need to make sure the container environment is consistent.
  • Use AWS Config to force the Staging and Production stacks to have configuration parity. Any differences will be detected for you so you are aware of risks.

Answer : Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity.
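To picture the CloudFormation approach, imagine one template, parameterized only by environment, deployed to both stacks so they cannot drift structurally. A minimal sketch in Python; every stack, parameter, and resource name here is illustrative, not from the question:

```python
import json

# One template serves both environments; only the parameter value differs.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "EnvironmentName": {"Type": "String",
                            "AllowedValues": ["staging", "production"]},
    },
    "Resources": {
        "AppSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": {"Fn::Sub": "${EnvironmentName} app tier"},
            },
        },
    },
}

def stack_params(environment):
    """Build create_stack arguments; both stacks share one TemplateBody."""
    return {
        "StackName": f"app-{environment}",
        "TemplateBody": json.dumps(template),
        "Parameters": [{"ParameterKey": "EnvironmentName",
                        "ParameterValue": environment}],
    }

# Identical template bodies are what deliver environment parity.
staging, production = stack_params("staging"), stack_params("production")
assert staging["TemplateBody"] == production["TemplateBody"]
```

In practice each dict would be passed to `boto3.client("cloudformation").create_stack(**...)`; the point is that the model, not two hand-maintained stacks, is the source of truth.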

You need to perform ad-hoc business analytics queries on well-structured data. Data comes in constantly at a high velocity. Your business intelligence team can understand SQL. What AWS service(s) should you look to first?


Options are :

  • EMR running Apache Spark
  • Kinesis Firehose + RDS
  • Kinesis Firehose + Redshift (Correct)
  • EMR using Hive

Answer : Kinesis Firehose + Redshift
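A Firehose delivery stream with a Redshift destination stages records in S3 and then issues a COPY into the cluster, which is why this pairing handles high-velocity ingest for SQL analysts. A hedged sketch of the boto3 parameters; every ARN, URL, name, and credential below is a placeholder:

```python
# Sketch of create_delivery_stream parameters for a Firehose stream that
# stages records in S3 and COPYs them into Redshift. All values illustrative.
firehose_params = {
    "DeliveryStreamName": "clickstream-to-redshift",
    "DeliveryStreamType": "DirectPut",
    "RedshiftDestinationConfiguration": {
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "ClusterJDBCURL": ("jdbc:redshift://analytics.example."
                           "us-east-1.redshift.amazonaws.com:5439/warehouse"),
        "CopyCommand": {"DataTableName": "events"},   # target table for COPY
        "Username": "bi_loader",
        "Password": "placeholder-password",
        "S3Configuration": {                          # intermediate staging bucket
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::firehose-staging-bucket",
        },
    },
}
# boto3.client("firehose").create_delivery_stream(**firehose_params) would
# create the stream; producers PutRecord, and the BI team queries Redshift.
assert "CopyCommand" in firehose_params["RedshiftDestinationConfiguration"]
```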

You are planning on using Amazon RDS for fault tolerance for your application. How does the Amazon RDS Multi-AZ model work? Please select:


Options are :

  • A second, standby database is deployed and maintained in a different Availability Zone from the master, using synchronous replication. (Correct)
  • A second, standby database is deployed and maintained in a different region from the master, using asynchronous replication.
  • A second, standby database is deployed and maintained in a different region from the master, using synchronous replication.
  • A second, standby database is deployed and maintained in a different Availability Zone from the master, using asynchronous replication.

Answer : A second, standby database is deployed and maintained in a different Availability Zone from the master, using synchronous replication.

You have an application running on an Amazon EC2 instance and you are using IAM roles to securely access AWS service APIs. How can you configure your application running on that instance to retrieve the API keys for use with the AWS SDKs?


Options are :

  • Within your application code, make a GET request to the IAM service API to retrieve credentials for your user.
  • Within your application code, configure the AWS SDK to get the API keys from environment variables, because assigning an Amazon EC2 role stores keys in environment variables on launch.
  • When using AWS SDKs and Amazon EC2 roles, you do not have to explicitly retrieve API keys, because the SDK handles retrieving them from the Amazon EC2 instance metadata service. (Correct)
  • When assigning an EC2 IAM role to your instance in the console, select the SDK that you are using in the SDK dropdown list, and the instance will configure the correct SDK on launch with the API keys.

Answer : When using AWS SDKs and Amazon EC2 roles, you do not have to explicitly retrieve API keys, because the SDK handles retrieving them from the Amazon EC2 instance metadata service.
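Under the hood, the SDK polls the instance metadata service for short-lived role credentials and refreshes them before expiry. A rough illustration of that retrieval; the HTTP call is shown only as a comment, and the credential document below is a fabricated sample, not real output:

```python
import json

# The SDK queries this well-known link-local address from inside the instance.
METADATA_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# body = urllib.request.urlopen(METADATA_URL + role_name).read()  # on EC2
sample_body = json.dumps({           # fabricated sample response
    "Code": "Success",
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "example-secret",
    "Token": "example-session-token",
    "Expiration": "2024-01-01T00:00:00Z",
})

def parse_credentials(body):
    """Extract the temporary credential triple the SDK signs requests with."""
    doc = json.loads(body)
    return {"access_key": doc["AccessKeyId"],
            "secret_key": doc["SecretAccessKey"],
            "token": doc["Token"]}

creds = parse_credentials(sample_body)
assert creds["access_key"] == "ASIAEXAMPLE"
```

This is exactly why no keys ever need to be baked into the AMI or environment: the credentials are temporary and rotated for you.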

Your API requires the ability to stay online during AWS regional failures. Your API does not store any state, it only aggregates data from other sources - you do not have a database. What is a simple but effective way to achieve this uptime goal?


Options are :

  • Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs. (Correct)
  • Create a Route53 Weighted Round Robin record, and if one region goes down, have that region redirect to the other region.
  • Use a CloudFront distribution to serve up your API. Even if the region your API is in goes down, the edge locations CloudFront uses will be fine.
  • Use an ELB and a cross-zone ELB deployment to create redundancy across datacenters. Even if one AZ fails, the other AZ will stay online.

Answer : Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.

You have an application hosted in AWS. This application was created using CloudFormation templates and Auto Scaling. Your application has now seen a surge of users, which is decreasing its performance. As per your analysis, a change in the instance type to C3 would resolve the issue. Which of the below options can introduce this change while minimizing downtime for end users? Please select:


Options are :

  • Update the launch configuration in the AWS CloudFormation template with the new C3 instance type. Add an UpdatePolicy attribute to the Auto Scaling group that specifies an Auto Scaling rolling update. Run a stack update with the updated template. (Correct)
  • Update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies an Auto Scaling rolling update in order to avoid downtime.
  • Update the AWS CloudFormation template that contains the launch configuration with the new C3 instance type. Run a stack update with the updated template, and Auto Scaling will then update the instances one at a time with the new instance type.
  • Copy the old launch configuration, and create a new launch configuration with the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all instances.

Answer : Update the launch configuration in the AWS CloudFormation template with the new C3 instance type. Add an UpdatePolicy attribute to the Auto Scaling group that specifies an Auto Scaling rolling update. Run a stack update with the updated template.
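The UpdatePolicy in question looks roughly like the fragment below inside the template's Auto Scaling group resource. The batch sizes, pause time, and logical names are example choices, not requirements:

```python
# Illustrative CloudFormation fragment (as a Python dict) showing an
# UpdatePolicy, so a stack update replaces instances in small batches
# instead of all at once, keeping capacity in service for end users.
asg_resource = {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "Properties": {
        "LaunchConfigurationName": {"Ref": "AppLaunchConfig"},  # now a C3 type
        "MinSize": "4",
        "MaxSize": "8",
    },
    "UpdatePolicy": {
        "AutoScalingRollingUpdate": {
            "MinInstancesInService": "4",   # keep serving users during update
            "MaxBatchSize": "2",            # replace two instances at a time
            "PauseTime": "PT5M",            # wait for new instances to warm up
        }
    },
}

rolling = asg_resource["UpdatePolicy"]["AutoScalingRollingUpdate"]
assert rolling["MinInstancesInService"] == "4"
```

Without the UpdatePolicy, a launch-configuration change on its own only affects future instances, which is exactly why the distractor options fail to minimize downtime.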

You have an application hosted in AWS. You want to ensure that a DevOps engineer is notified when certain thresholds are reached. Choose 3 answers from the options given below. Please select:


Options are :

  • Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm to that threshold. (Correct)
  • Pipe data from EC2 to the application logs using AWS Data Pipeline and CloudWatch.
  • Use the CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances. (Correct)
  • Once a CloudWatch alarm is triggered, use SNS to notify the senior DevOps engineer. (Correct)

Answer : Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm to that threshold. Use the CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances. Once a CloudWatch alarm is triggered, use SNS to notify the senior DevOps engineer.
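The alarm half of this pipeline can be sketched as the parameters you would pass to CloudWatch's `put_metric_alarm`, assuming a metric filter on the log group already publishes a custom `ErrorCount` metric. The namespace, names, threshold, and topic ARN are all placeholders:

```python
# Sketch of the alarm that pages the engineer: the log group's metric
# filter feeds a custom metric, and the alarm notifies an SNS topic the
# engineer is subscribed to. All names and ARNs below are illustrative.
alarm_params = {
    "AlarmName": "app-error-rate-high",
    "Namespace": "App/Logs",             # custom metric from the metric filter
    "MetricName": "ErrorCount",
    "Statistic": "Sum",
    "Period": 60,                        # evaluate one-minute windows
    "EvaluationPeriods": 5,
    "Threshold": 100.0,                  # the tolerated threshold
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:devops-alerts"],
}
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params) would create it;
# the SNS topic delivers the notification (email, SMS, etc.).
assert alarm_params["AlarmActions"][0].startswith("arn:aws:sns:")
```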

You are creating an application which stores extremely sensitive financial information. All information in the system must be encrypted at rest and in transit. Which of these is a violation of this policy?


Options are :

  • CloudFront Viewer Protocol Policy set to HTTPS redirection.
  • ELB using Proxy Protocol v1.
  • Telling S3 to use AES-256 on the server side.
  • ELB SSL termination (Correct)

Answer : ELB SSL termination

Your application's Auto Scaling Group scales up too quickly, too much, and stays scaled when traffic decreases. What should you do to fix this?


Options are :

  • Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency. (Correct)
  • Raise the CloudWatch alarm threshold associated with your Auto Scaling group, so the scaling takes more of an increase in demand before beginning.
  • Use larger instances instead of lots of smaller ones, so the Group stops scaling out so much and wasting resources at the OS level, since the OS uses a higher proportion of resources on smaller instances.
  • Set a longer cooldown period on the Group, so the system stops overshooting the target capacity. The issue is that the scaling system doesn't allow enough time for new instances to begin servicing requests before measuring aggregate load again.

Answer : Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency.

You have an asynchronous processing application using an Auto Scaling Group and an SQS queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase. What is a possible issue?


Options are :

  • The scaling metric is not functioning correctly.
  • Someone changed the IAM Role Policy.
  • Some of the new jobs coming in are malformed and unprocessable. (Correct)
  • The routing tables changed and none of the workers can process events anymore.

Answer : Some of the new jobs coming in are malformed and unprocessable.

You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that there is a resource type you need to create and model that is unsupported by CloudFormation. How should you overcome this challenge?


Options are :

  • Submit a ticket to the AWS Forums. AWS extends CloudFormation resource types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two.
  • Create a CloudFormation Custom Resource type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda. (Correct)
  • Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
  • Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.

Answer : Create a CloudFormation Custom Resource type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda.
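A Lambda-backed custom resource boils down to a handler that branches on the request type and reports a status document back to the pre-signed ResponseURL that CloudFormation provides. A skeleton under those assumptions, with the HTTP PUT left as a comment so the sketch stays self-contained:

```python
import json

def handler(event, context=None):
    """Skeleton custom-resource handler; the pass branches are where the
    provisioning logic for the unsupported resource type would go."""
    request_type = event["RequestType"]          # Create | Update | Delete
    physical_id = event.get("PhysicalResourceId", "my-custom-resource")

    if request_type == "Create":
        pass  # provision the unsupported resource here
    elif request_type == "Update":
        pass  # reconcile changes to the resource
    elif request_type == "Delete":
        pass  # tear the resource down

    response = {
        "Status": "SUCCESS",                     # or FAILED, with a Reason
        "PhysicalResourceId": physical_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }
    # In a real handler: urllib.request PUT of json.dumps(response)
    # to event["ResponseURL"], so the stack operation can proceed.
    return response

result = handler({
    "RequestType": "Create",
    "StackId": "arn:aws:cloudformation:us-east-1:123456789012:stack/demo/abc",
    "RequestId": "req-1",
    "LogicalResourceId": "MyResource",
    "ResponseURL": "https://example.com/presigned",
})
assert result["Status"] == "SUCCESS"
```

If the handler never responds, the stack hangs until timeout, which is why production handlers wrap the work in try/except and report FAILED on error.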

You are designing a system which needs, at minimum, 8 m4.large instances operating to service traffic. When designing a system for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle the loss of a full Availability Zone. How should you distribute the servers to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1's AZs a through f, inclusive.


Options are :

  • 4 servers in each of AZs a through c, inclusive.
  • 3 servers in each of AZs a through d, inclusive.
  • 2 servers in each of AZs a through e, inclusive. (Correct)
  • 8 servers in each of AZs a and b.

Answer : 2 servers in each of AZs a through e, inclusive.
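The arithmetic behind the correct option: with n AZs and s servers per AZ, losing one AZ leaves (n - 1) × s servers, which must still be at least 8. Spreading thinner across more AZs lowers the total. A quick check:

```python
import math

def servers_needed(azs, min_capacity=8):
    """Servers per AZ so that losing one full AZ still leaves min_capacity,
    plus the total instance count as a cost proxy."""
    per_az = math.ceil(min_capacity / (azs - 1))
    return per_az, per_az * azs

options = {azs: servers_needed(azs) for azs in (2, 3, 4, 5)}
# 2 AZs -> 8 each (16 total), 3 AZs -> 4 each (12 total),
# 4 AZs -> 3 each (12 total), 5 AZs -> 2 each (10 total):
# the 5-AZ layout is the cheapest that tolerates an AZ loss.
assert options[5] == (2, 10)
```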

You are building a Ruby on Rails application for internal, non-production use which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command-line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?


Options are :

  • AWS CloudFormation
  • AWS OpsWorks
  • AWS ELB + EC2 with CLI push
  • AWS Elastic Beanstalk (Correct)

Answer : AWS Elastic Beanstalk

You need to create a Route53 record automatically in CloudFormation when not running in production, during all launches of a template. How should you implement this?


Options are :

  • Use a Parameter for environment, and add a Condition on the Route53 resource in the template to create the record with a null string when environment is production.
  • Use a Parameter for environment, and add a Condition on the Route53 resource in the template to create the record only when environment is not production. (Correct)
  • Create two templates, one with the Route53 record and one without it. Use the one without it when deploying to production.
  • Create two templates, one with the Route53 record value and one with a null value for the record. Use the one with the null value when deploying to production.

Answer : Use a Parameter for environment, and add a Condition on the Route53 resource in the template to create the record only when environment is not production.
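In template terms, the winning option combines a Parameter, a Condition built with Fn::Not/Fn::Equals, and a Condition attribute on the record resource. A minimal sketch as a Python dict; the zone and record names are made up:

```python
# Illustrative CloudFormation fragment: the Route53 record is created only
# when the Environment parameter is anything other than "production".
template = {
    "Parameters": {
        "Environment": {"Type": "String",
                        "AllowedValues": ["dev", "staging", "production"]},
    },
    "Conditions": {
        "CreateDnsRecord": {
            "Fn::Not": [{"Fn::Equals": [{"Ref": "Environment"}, "production"]}],
        },
    },
    "Resources": {
        "AppDnsRecord": {
            "Type": "AWS::Route53::RecordSet",
            "Condition": "CreateDnsRecord",   # skipped entirely in production
            "Properties": {"Name": "app.dev.example.com.",
                           "Type": "CNAME", "TTL": "300"},
        },
    },
}

assert template["Resources"]["AppDnsRecord"]["Condition"] == "CreateDnsRecord"
```

One template serves every environment, and no null-string hacks or duplicate templates are needed.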

You are planning on using encrypted snapshots in the design of your AWS infrastructure. Which of the following statements is true with regard to EBS encryption? Please select:


Options are :

  • Snapshotting an encrypted volume makes an encrypted snapshot when specified/requested; restoring an encrypted snapshot always creates an encrypted volume.
  • Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot creates an encrypted volume when specified/requested.
  • Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot always creates an encrypted volume. (Correct)
  • Snapshotting an encrypted volume makes an encrypted snapshot when specified/requested; restoring an encrypted snapshot creates an encrypted volume when specified/requested.

Answer : Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot always creates an encrypted volume.

What is required to achieve 10 gigabit network throughput on EC2? You already selected cluster-compute, 10 gigabit instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.


Options are :

  • Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput.
  • Enable duplex networking on your servers, so packets are non-blocking in both directions and there's no switching overhead.
  • Ensure the instances are in different VPCs so you don't saturate the Internet Gateway on any one VPC.
  • Use a placement group for your instances so the instances are physically near each other in the same Availability Zone. (Correct)

Answer : Use a placement group for your instances so the instances are physically near each other in the same Availability Zone.

Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time. What is the simplest and cheapest way to reduce costs and scale with spikes like this?


Options are :

  • Create another ELB and Auto Scaling Group layer mounted on top of the other system, adding a tier to the system. Serve most read requests out of the top layer.
  • Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviors to proxy cache requests which can be served late. (Correct)
  • Synchronously replicate common request responses into S3 objects. When a request comes in for a precomputed response, redirect to AWS S3.
  • Create a Memcached cluster in AWS ElastiCache. Create cache logic to serve requests which can be served late from the in-memory cache for increased performance.

Answer : Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviors to proxy cache requests which can be served late.

You run a 2000-engineer organization. You are about to begin using AWS at a large scale for the first time. You want to integrate with your existing identity management system running on Microsoft Active Directory, because your organization is a power user of Active Directory. How should you manage your AWS identities in the simplest manner?


Options are :

  • Use AWS Directory Service AD Connector (Correct)
  • Use a Sync Domain running on AWS Directory Service.
  • Use an AWS Directory Sync Domain running on AWS Lambda
  • Use AWS Directory Service Simple AD.

Answer : Use AWS Directory Service AD Connector

You need your API, backed by DynamoDB, to stay online during a total regional AWS failure. You can tolerate a couple of minutes of lag or slowness during a large failure event, but the system should recover to normal operation after those few minutes. What is a good approach?


Options are :

  • Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.
  • Set up a DynamoDB Global Table. Create an Auto Scaling Group behind an ELB in each of the two regions in which the DynamoDB table is running, for your application layer. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records. (Correct)
  • Set up a DynamoDB Multi-Region table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.
  • Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions for your application layer in which DynamoDB is running. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.

Answer : Set up a DynamoDB Global Table. Create an Auto Scaling Group behind an ELB in each of the two regions in which the DynamoDB table is running, for your application layer. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.
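The two building blocks named in the answer can be sketched as boto3-style parameter dicts: a Global Table replicated to two regions (the 2017 `create_global_table` shape), plus latency-routed Route53 records with health checks so a dead region is withdrawn automatically. Regions, names, and health-check IDs below are placeholders:

```python
# Global Table spanning two regions (2017-era API shape; both regional
# tables must already exist with identical key schemas and streams enabled).
global_table_params = {
    "GlobalTableName": "api-data",
    "ReplicationGroup": [{"RegionName": "us-east-1"},
                         {"RegionName": "eu-west-1"}],
}

# One latency-routed alias record per regional ELB; the attached health
# check lets Route53 stop answering with a failed region's endpoint.
dns_records = [
    {"Name": "api.example.com.", "Type": "A",
     "SetIdentifier": region, "Region": region,
     "HealthCheckId": f"hc-{region}",          # placeholder health-check ID
     "AliasTarget": {"DNSName": f"elb-{region}.example.com."}}
    for region in ("us-east-1", "eu-west-1")
]

assert len(dns_records) == len(global_table_params["ReplicationGroup"]) == 2
```

During a regional failure, reads and writes continue against the surviving replica, and DNS converges on the healthy ELB within the tolerated few minutes.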

You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use?


Options are :

  • Model an AWS EMR job in AWS CloudFormation. (Correct)
  • Model an AWS EMR job in AWS OpsWorks.
  • Model an AWS EMR job in AWS Elastic Beanstalk.
  • Model an AWS EMR job in AWS CLI Composer.

Answer : Model an AWS EMR job in AWS CloudFormation.

When thinking of AWS Elastic Beanstalk, the "Swap Environment URLs" feature most directly aids in what?


Options are :

  • Mutable Rolling Deployments
  • Blue-Green Deployments (Correct)
  • Canary Deployments
  • Immutable Rolling Deployments

Answer : Blue-Green Deployments

If you're trying to configure an AWS Elastic Beanstalk worker tier for easy debugging if there are problems finishing queue jobs, what should you configure?


Options are :

  • Configure Blue-Green Deployments.
  • Configure a Dead Letter Queue (Correct)
  • Configure Rolling Deployments.
  • Configure Enhanced Health Reporting

Answer : Configure a Dead Letter Queue

You are building a mobile app for consumers to post cat pictures online. You will be storing the images in AWS S3. You want to run the system very cheaply and simply. Which one of these options allows you to build a photo sharing application with the right authentication/authorization implementation?


Options are :

  • Create an AWS Auth Service Domain and grant public signup and access to the domain. During setup, add at least one major social media site as a trusted Identity Provider for users.
  • Use JWT or SAML compliant systems to build authorization policies. Users log in with a username and password, and are given a token they can use indefinitely to make calls against the photo infrastructure.
  • Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3. (Correct)
  • Use AWS API Gateway with a constantly rotating API key to allow access from the client side. Construct a custom build of the SDK and include S3 access in it.

Answer : Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3.

You have a development team that is continuously spending a lot of time rolling back updates for an application. They work on changes, and if a change fails, they spend more than 5-6 hours rolling back the update. Which of the below options can help reduce the time for rolling back application versions?


Options are :

  • Use Elastic Beanstalk and re-deploy using Application Versions (Correct)
  • Use S3 to store each version and then re-deploy with Elastic Beanstalk
  • Use OpsWorks and redeploy using the rollback feature.
  • Use CloudFormation and update the stack with the previous template.

Answer : Use Elastic Beanstalk and re-deploy using Application Versions

You run accounting software in the AWS cloud. This software needs to be online continuously during the day, every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost?


Options are :

  • Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
  • Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
  • Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs. (Correct)
  • Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

Answer : Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

You want to pass queue messages that are 1 GB each. How should you achieve this? Please select:


Options are :

  • Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies. (Correct)
  • Use SQS's support for message partitioning and multi-part uploads on Amazon S3.
  • Use Kinesis as a buffer stream for message bodies. Store the checkpoint id for the placement in the Kinesis Stream in SQS.
  • Use AWS EFS as a shared pool storage medium. Store file system pointers to the files on disk in the SQS message bodies.

Answer : Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.
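The extended-client pattern is easy to simulate: the large payload goes to S3, and the SQS message (capped at 256 KB) carries only a small pointer. The sketch below uses in-memory stand-ins for S3 and SQS; the bucket name and pointer format are illustrative, not the library's actual wire format:

```python
import json
import uuid

s3_store = {}    # stands in for the S3 bucket holding message bodies
queue = []       # stands in for the SQS queue (real limit: 256 KB/message)

def send_large_message(body):
    """Store the payload in 'S3' and enqueue only a small pointer to it."""
    key = f"messages/{uuid.uuid4()}"
    s3_store[key] = body
    queue.append(json.dumps({"s3_bucket": "large-msg-bucket", "s3_key": key}))

def receive_large_message():
    """Dequeue the pointer and fetch the real payload from 'S3'."""
    pointer = json.loads(queue.pop(0))
    return s3_store[pointer["s3_key"]]

send_large_message("x" * (1024 ** 2))   # stand-in for a 1 GB payload
assert receive_large_message().startswith("x")
```

The Java Extended Client library implements exactly this store-and-pointer flow for you, including deleting the S3 object once the message is consumed.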

Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic, to a sustained 2000 requests per second, and a dramatic increase in failure rates. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and table primary keys are designed correctly. What is the most likely issue?


Options are :

  • You did not request a limit increase on concurrent Lambda function executions. (Correct)
  • Your API Gateway deployment is throttling your requests.
  • You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.
  • Your AWS API Gateway deployment is bottlenecking on requests.

Answer : You did not request a limit increase on concurrent Lambda function executions.
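Little's Law explains the failures: sustained arrival rate times average duration equals the number of in-flight executions. At 2000 requests per second and 0.5 seconds each, you need about 1000 concurrent Lambda executions, which can exceed the account's concurrency limit (historically a low default) unless an increase is requested:

```python
def required_concurrency(requests_per_second, avg_duration_seconds):
    """Little's Law: concurrent executions = arrival rate * duration."""
    return requests_per_second * avg_duration_seconds

# The scenario's numbers: 2000 req/s sustained, 500 ms average duration.
needed = required_concurrency(2000, 0.5)
assert needed == 1000.0
```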

Which of the following tools does not directly support AWS OpsWorks, for monitoring your stacks?


Options are :

  • AWS Config (Correct)
  • Amazon CloudWatch Metrics
  • AWS CloudTrail
  • Amazon CloudWatch Logs

Answer : AWS Config
