AWS DevOps Engineer Professional Certification Practice Exam Set 12

Your CTO thinks your AWS account was hacked. What is the only way to know for certain whether there was unauthorized access and what was done, assuming the attackers are sophisticated AWS engineers doing everything they can to cover their tracks?


Options are :

  • Use CloudTrail Log File Integrity Validation (Correct)
  • Use AWS Config SNS Subscriptions and process events in real time.
  • Use AWS Config Timeline forensics.
  • Use CloudTrail backed up to AWS S3 and Glacier.

Answer : Use CloudTrail Log File Integrity Validation
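
CloudTrail log file integrity validation writes signed digest files alongside the delivered logs, so tampering or deletion can be detected afterwards (for example with the `aws cloudtrail validate-logs` CLI command). A minimal boto3 sketch of enabling it on an existing trail follows; the trail name is a hypothetical placeholder.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Turn on log file integrity validation so CloudTrail writes signed digest
# files to S3 alongside the log files it delivers.
cloudtrail.update_trail(
    Name="management-events",          # hypothetical trail name
    EnableLogFileValidation=True,
)

# Confirm the setting took effect.
trail = cloudtrail.describe_trails(trailNameList=["management-events"])["trailList"][0]
print("Log file validation enabled:", trail["LogFileValidationEnabled"])
```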

You are creating a new API for video game scores. Reads are 100 times more common than writes, and the top 1% of scores are read 100 times more frequently than the rest of the scores. What's the best design for this system, using DynamoDB?


Options are :

  • DynamoDB table with roughly equal read and write throughput, with ElastiCache caching. (Correct)
  • DynamoDB table with 100x higher read than write throughput, with CloudFront caching.
  • DynamoDB table with roughly equal read and write throughput, with CloudFront caching.
  • DynamoDB table with 100x higher read than write throughput, with ElastiCache caching.

Answer : DynamoDB table with roughly equal read and write throughput, with ElastiCache caching.
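
Because the hot 1% of scores is served from a cache (such as ElastiCache) in front of the table, the table itself can be provisioned with roughly equal read and write capacity. A minimal boto3 sketch of such a table follows; the table name, key names, and capacity numbers are hypothetical placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Roughly equal read and write capacity; the hot 1% of scores is served
# from a cache in front of the table, so the table does not need 100x reads.
dynamodb.create_table(
    TableName="video-game-scores",            # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "GameId", "KeyType": "RANGE"},
    ],
    ProvisionedThroughput={
        "ReadCapacityUnits": 50,
        "WriteCapacityUnits": 50,
    },
)
```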

You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use? Please select:


Options are :

  • Create a new Auto Scaling Launch Configuration with User Data scripts configured to pull the latest code at all times.
  • SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes.
  • Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use User Data in the Auto Scaling Launch Configuration to pull down the Dockerfile from S3 and run it when new instances launch.
  • Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration. (Correct)

Answer : Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration.
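
Baking the code into the AMI means a newly launched instance is ready as soon as it boots, with no per-instance deployment step. A rough boto3 sketch of the bake-and-point workflow follows; the instance ID, AMI name, and instance type are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Bake an AMI from an instance that already has the new code installed.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical build instance
    Name="app-v2-baked",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Point a new Launch Configuration at the baked AMI; instances launched by
# the Auto Scaling Group boot with the code already in place, so scale-out
# is as fast as the instance boot itself.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-v2-lc",
    ImageId=image["ImageId"],
    InstanceType="t3.medium",
)
```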

You need to create a simple, holistic check for your system's general availability and uptime. Your system presents itself as an HTTP-speaking API. What is the simplest tool on AWS to achieve this with? Please select:


Options are :

  • Route53 Health Checks (Correct)
  • AWS ELB Health Checks
  • EC2 Health Checks
  • CloudWatch Health Checks

Answer : Route53 Health Checks
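
A single Route 53 health check can probe the HTTP API from multiple AWS locations with no infrastructure to manage. A minimal boto3 sketch follows; the domain name and health-check path are hypothetical.

```python
import boto3
import uuid

route53 = boto3.client("route53")

# One health check pings the public API endpoint over HTTP from multiple
# AWS locations -- a simple, holistic availability check.
route53.create_health_check(
    CallerReference=str(uuid.uuid4()),   # must be unique per request
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": "api.example.com",   # hypothetical
        "Port": 80,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
```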

Your system automatically provisions EIPs to EC2 instances in a VPC on boot. The system provisions the whole VPC and stack at once, and you use two EIPs per VPC. On your new AWS account, your attempt to create a Development environment failed, after successfully creating Staging and Production environments in the same region. What happened?


Options are :

  • You hit the soft limit of 5 EIPs per region and requested a 6th. (Correct)
  • You hit the soft limit of 2 VPCs per region and requested a 3rd.
  • You didn't set the Development flag to true when deploying EC2 instances.
  • You didn't choose the Development version of the AMI you are using.

Answer : You hit the soft limit of 5 EIPs per region and requested a 6th.

When thinking of AWS Elastic Beanstalk's model, which is true?


Options are :

  • Applications have many environments, environments have many deployments. (Correct)
  • Environments have many applications, applications have many deployments
  • Deployments have many environments, environments have many applications.
  • Applications have many deployments, deployments have many environments.

Answer : Applications have many environments, environments have many deployments.

Your company wants to understand where cost is coming from in the company's production AWS account. There are a number of applications and services running at any given time. Without expending too much initial development time, how best can you give the business a good understanding of which applications cost the most per month to operate?


Options are :

  • Use AWS Cost Allocation Tagging for all resources which support it. Use the Cost Explorer to analyze costs throughout the month. (Correct)
  • Use custom CloudWatch Metrics in your system, and put a metric data point whenever cost is incurred.
  • Use the AWS Price API and constantly running resource inventory scripts to calculate total price based on multiplication of consumed resources over time.
  • Create an automation script which periodically creates AWS Support tickets requesting detailed intra-month information about your bill.

Answer : Use AWS Cost Allocation Tagging for all resources which support it. Use the Cost Explorer to analyze costs throughout the month.
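
Once cost allocation tags are activated in the billing console, Cost Explorer can break the month's spend down per tag value. A minimal boto3 sketch using the Cost Explorer API follows; the tag key and date range are hypothetical.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Month-to-date cost grouped by a cost allocation tag (hypothetical key).
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-06-01", "End": "2024-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "application"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```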

Your company releases new features with high frequency while demanding high application availability. As part of the application's A/B testing, logs from each updated Amazon EC2 instance of the application need to be analyzed in near real time, to ensure that the application is working flawlessly after each deployment. If the logs show any anomalous behavior, then the application version of the instance is changed to a more stable one. Which of the following methods should you use for shipping and analyzing the logs in a highly available manner?


Options are :

  • Ship the logs to Amazon S3 for durability and use Amazon EMR to analyze the logs in a batch manner each hour.
  • Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour.
  • Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner. (Correct)
  • Ship the logs to a large Amazon EC2 instance and analyze the logs in a live manner

Answer : Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner.
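
Shipping each instance's log lines into a Kinesis stream lets consumers analyze them within seconds of a deployment. A minimal boto3 producer sketch follows; the stream name and payload fields are hypothetical.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Each instance ships its log lines to a Kinesis stream; consumers read the
# stream in near real time and flag anomalies after every deployment.
def ship_log_line(instance_id: str, line: str) -> None:
    kinesis.put_record(
        StreamName="app-logs",                                   # hypothetical
        Data=json.dumps({"instance": instance_id, "line": line}).encode(),
        PartitionKey=instance_id,   # keeps one instance's logs ordered
    )

ship_log_line("i-0123456789abcdef0", "GET /health 200 12ms")
```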

There is a requirement to monitor API calls against your AWS account by different users and entities. There needs to be a history of those calls, and that history is needed in bulk for later review. Which two services can be used in this scenario?


Options are :

  • AWS Config; AWS Inspector
  • AWS CloudTrail; AWS Config
  • AWS CloudTrail; CloudWatch Events (Correct)
  • AWS Config; AWS Lambda

Answer : AWS CloudTrail; CloudWatch Events

You need to deploy a new application version to production. Because the deployment is high-risk, you need to roll the new version out to users over a number of hours, to make sure everything is working correctly. You need to be able to control the proportion of users seeing the new version of the application down to the percentage point. You use ELB and EC2 with Auto Scaling Groups and custom AMIs with your code pre-installed assigned to Launch Configurations. There are no database-level changes during your deployment. You have been told you cannot spend too much money, so you must not increase the number of EC2 instances much at all during the deployment, but you also need to be able to switch back to the original version of code quickly if something goes wrong. What is the best way to meet these requirements?


Options are :

  • Use the Blue-Green deployment method to enable the fastest possible rollback if needed. Create a full second stack of instances and cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.
  • Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs. (Correct)
  • Migrate to use AWS Elastic Beanstalk. Use the established and well-tested Rolling Deployment setting AWS provides on the new Application Environment, publishing a zip bundle of the new code and adjusting the wait period to spread the deployment over time. Re-deploy the old code bundle to roll back if needed.
  • Create AMIs with all code pre-installed. Assign the new AMI to the Auto Scaling Launch Configuration, to replace the old one. Gradually terminate instances running the old code (launched with the old Launch Configuration) and allow the new AMIs to boot to adjust the traffic balance to the new code. On rollback, reverse the process by doing the same thing, but changing the AMI on the Launch Config back to the original code.

Answer : Create a second ELB, Auto Scaling Launch Configuration, and Auto Scaling Group using the Launch Configuration. Create AMIs with all code pre-installed. Assign the new AMI to the second Auto Scaling Launch Configuration. Use Route53 Weighted Round Robin Records to adjust the proportion of traffic hitting the two ELBs.
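
Route 53 weighted records make the traffic split explicit and adjustable to the percentage point, and rolling back is just a weight change. A minimal boto3 sketch follows; the hosted zone ID, record name, and ELB DNS names are hypothetical.

```python
import boto3

route53 = boto3.client("route53")

# Weighted records split traffic between the old and new ELBs; weights are
# relative, so 99/1 sends roughly 99% of users to the old stack.
def set_weight(identifier: str, elb_dns: str, weight: int) -> None:
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",          # hypothetical zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "ResourceRecords": [{"Value": elb_dns}],
                },
            }],
        },
    )

set_weight("blue", "old-elb.us-east-1.elb.amazonaws.com", 99)
set_weight("green", "new-elb.us-east-1.elb.amazonaws.com", 1)
```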

If I want CloudFormation stack status updates to show up in a continuous delivery system in as close to real time as possible, how should I achieve this?


Options are :

  • Subscribe your continuous delivery system to an SQS queue that you also tell your CloudFormation stack to publish events into.
  • Use a long-poll on the Resources object in your CloudFormation stack and display those state changes in the UI for the system.
  • Subscribe your continuous delivery system to an SNS topic that you also tell your CloudFormation stack to publish events into. (Correct)
  • Use a long-poll on the ListStacks API call for your CloudFormation stack and display those state changes in the UI for the system.

Answer : Subscribe your continuous delivery system to an SNS topic that you also tell your CloudFormation stack to publish events into.
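
CloudFormation can publish every stack and resource status change to an SNS topic passed at stack creation, which the continuous delivery system then subscribes to. A minimal boto3 sketch follows; the stack name, template URL, and topic ARN are hypothetical.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Point the stack at an SNS topic; CloudFormation publishes every stack and
# resource status change to it, so subscribers see updates in near real time.
cloudformation.create_stack(
    StackName="app-stack",
    TemplateURL="https://s3.amazonaws.com/example-bucket/app-template.json",
    NotificationARNs=[
        "arn:aws:sns:us-east-1:123456789012:stack-events",   # hypothetical topic
    ],
)
```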

You have deployed a CloudFormation template which is used to spin up resources in your account. Which of the following CloudFormation statuses represents a failure?


Options are :

  • UPDATE_IN_PROGRESS
  • UPDATE_COMPLETE_CLEANUP_IN_PROGRESS
  • ROLLBACK_IN_PROGRESS (Correct)
  • DELETE_COMPLETE

Answer : ROLLBACK_IN_PROGRESS

You need to scale an RDS deployment. Based on your logging, you are operating at 10% writes and 90% reads. How best can you scale this in a simple way?


Options are :

  • Create a second master RDS instance and peer the RDS groups.
  • Cache all the database responses on the read side with CloudFront.
  • Create a Multi-AZ RDS install and route read traffic to the standby.
  • Create read replicas for RDS since the load is mostly reads. (Correct)

Answer : Create read replicas for RDS since the load is mostly reads
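
A read replica offloads the 90% read traffic from the primary with a single API call. A minimal boto3 sketch follows; the instance identifiers and instance class are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# With a 90/10 read/write split, read replicas absorb the read traffic while
# the primary handles the writes.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",     # hypothetical replica name
    SourceDBInstanceIdentifier="app-db",         # hypothetical primary name
    DBInstanceClass="db.r5.large",
)
```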

What is web identity federation?


Options are :

  • Use of AWS STS Tokens to log in as a Google or Facebook user.
  • Use of AWS IAM User tokens to log in as a Google or Facebook user.
  • Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials. (Correct)
  • Use of an identity provider like Google or Facebook to become an AWS IAM User.

Answer : Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
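
In web identity federation the application exchanges the provider-issued token for temporary AWS credentials via STS, so no IAM user is created per end user. A minimal boto3 sketch follows; the role ARN and token are placeholders.

```python
import boto3

sts = boto3.client("sts")

# The app sends the token it received from Google, Facebook, or another
# identity provider to STS and gets temporary AWS credentials in return.
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/web-identity-role",   # hypothetical
    RoleSessionName="player-session",
    WebIdentityToken="<token issued by the identity provider>",
)["Credentials"]

print("Temporary access key:", creds["AccessKeyId"],
      "expires at", creds["Expiration"])
```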

You are designing a service that aggregates clickstream data in batch and delivers reports to subscribers via email only once per week. Data is extremely spiky, geographically distributed, high-scale, and unpredictable. How should you design this system?


Options are :

  • Use API Gateway invoking Lambdas which PutRecords into Kinesis, and EMR running Spark performing GetRecords on Kinesis to scale with spikes. Spark on EMR outputs the analysis to S3, which is sent out via email.
  • Use AWS Elasticsearch Service and EC2 Auto Scaling Groups. The Auto Scaling Groups scale based on click throughput and stream into the Elasticsearch domain, which is also scalable. Use Kibana to generate reports periodically.
  • Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as query string GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3. (Correct)
  • Use a large Redshift cluster to perform the analysis, and a fleet of Lambdas to perform record inserts into the Redshift tables. Lambda will scale rapidly enough for the traffic spikes.

Answer : Use a CloudFront distribution with access log delivery to S3. Clicks should be recorded as query string GETs to the distribution. Reports are built and sent by periodically running EMR jobs over the access logs in S3.

You have decided to migrate your application to the cloud. You cannot afford any downtime. You want to migrate gradually so that you can test the application with a small percentage of users and increase that percentage over time. Which of these options should you implement?


Options are :

  • Implement a Route 53 weighted routing policy that distributes the traffic between your on-premises application and the AWS application depending on weight. (Correct)
  • Use Direct Connect to route traffic to the on-premises location. In Direct Connect, configure the amount of traffic to be routed to the on-premises location.
  • Implement a Route 53 failover routing policy that sends traffic back to the on-premises application if the AWS application fails.
  • Configure an Elastic Load Balancer to distribute the traffic between the on-premises application and the AWS application.

Answer : Implement a Route 53 weighted routing policy that distributes the traffic between your on-premises application and the AWS application depending on weight.

For AWS Auto Scaling, what is the first transition state an instance enters after leaving steady state when scaling in due to health check failure or decreased load?


Options are :

  • Detaching
  • Terminating (Correct)
  • Terminating:Wait
  • Entering Standby

Answer : Terminating

Your CTO has asked you to make sure that you know what all users of your AWS account are doing to change resources at all times. She wants a report of who is doing what over time, reported to her once per week, for as broad a resource type group as possible. How should you do this?


Options are :

  • Create a global AWS CloudTrail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO. (Correct)
  • Use CloudWatch Events Rules with an SNS topic subscribed to all AWS API calls. Subscribe the CTO to an email type delivery on this SNS Topic.
  • Use AWS IAM credential reports to deliver a CSV of all uses of IAM User Tokens over time to the CTO.
  • Use AWS Config with an SNS subscription on a Lambda, and insert these changes over time into a DynamoDB table. Generate reports based on the contents of this table.

Answer : Create a global AWS CloudTrail. Configure a script to aggregate the log data delivered to S3 once per week and deliver this to the CTO.
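
A multi-region ("global") trail captures management API calls across the account and delivers them to S3, where a weekly script can aggregate them into the CTO's report. A minimal boto3 sketch of creating such a trail follows; the trail and bucket names are hypothetical.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One multi-region trail records management API calls account-wide and
# delivers the logs to S3 for later weekly aggregation.
cloudtrail.create_trail(
    Name="account-wide-trail",                # hypothetical trail name
    S3BucketName="example-cloudtrail-logs",   # hypothetical, pre-created bucket
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
cloudtrail.start_logging(Name="account-wide-trail")
```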

Your company uses AWS to host its resources. They have the following requirements: 1) record all API calls and transitions, 2) help in understanding what resources are in the account, and 3) provide a facility for auditing credentials and logins. Which services would satisfy these requirements? Please select:


Options are :

  • CloudTrail; IAM Credential Reports; AWS Config
  • CloudTrail; AWS Config; IAM Credential Reports (Correct)
  • AWS Config; IAM Credential Reports; CloudTrail
  • AWS Config; CloudTrail; IAM Credential Reports

Answer : CloudTrail; AWS Config; IAM Credential Reports

You are building a game high score table in DynamoDB. You will store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What's the best DynamoDB key structure?


Options are :

  • Game ID as the hash / only key.
  • Game ID as the hash key, Highest Score as the range key. (Correct)
  • Game ID as the range / only key.
  • Highest Score as the hash / only key.

Answer : Game ID as the hash key, Highest Score as the range key.
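
With Game ID as the hash key and Highest Score as the range key, the top score for any game is a single Query that sorts descending and takes one item. A minimal boto3 sketch follows; the table, attribute, and game names are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("game-high-scores")   # hypothetical table name

# Hash key GameId, range key HighestScore: the top score for one game is a
# single Query, sorted descending on the range key, limited to one item.
response = table.query(
    KeyConditionExpression=Key("GameId").eq("tetris"),
    ScanIndexForward=False,   # descending by HighestScore
    Limit=1,
)
print(response["Items"])
```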

You have been given a business requirement to retain log files for your application for 10 years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging system must be cost-effective, given the large volume of logs. What technique should you use to meet these requirements?


Options are :

  • Store your logs in Amazon Glacier.
  • Store your logs on Amazon EBS, and use Amazon EBS snapshots to archive them.
  • Store your logs in Amazon CloudWatch Logs.
  • Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier. (Correct)

Answer : Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier.
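
An S3 lifecycle rule keeps recent logs immediately retrievable, transitions older ones to Glacier, and can expire them after the ten-year retention period. A minimal boto3 sketch follows; the bucket name, prefix, and day counts are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Recent logs stay in S3 for quick troubleshooting, older logs move to
# Glacier for cheap long-term storage, and everything expires after ~10 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-logs",                  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 3650},
        }],
    },
)
```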

You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use?


Options are :

  • Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times.
  • Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration. (Correct)
  • Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use User Data in the Auto Scaling Launch Configuration to pull down the Dockerfile from S3 and run it when new instances launch.
  • SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes.

Answer : Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration.

Your CTO is very worried about the security of your AWS account. How best can you prevent hackers from completely hijacking your account?


Options are :

  • Use AWS IAM Geo-Lock and disallow anyone from logging in except from your city.
  • Use MFA on all users and accounts, especially on the root account. (Correct)
  • Don't write down or remember the root account password after creating the AWS account.
  • Use a short but complex password on the root account and any administrators.

Answer : Use MFA on all users and accounts, especially on the root account.

Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this deployment's evolution as it changes over time, and carefully control any alterations. What is a good way to automate a stack to meet these requirements?


Options are :

  • Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud. (Correct)
  • Use AWS Config to declare a configuration set that AWS should roll out to your cloud.
  • Use Elastic Beanstalk Linked Applications, passing the important DNS entries between layers using the metadata interface.
  • Use OpsWorks Stacks with three layers to model the layering in your stack.

Answer : Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your cloud.
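
With nested stacks, a parent template owns one AWS::CloudFormation::Stack resource per layer, so the whole deployment is versioned and changed through a single, reviewable template. A rough boto3 sketch follows; the stack name, layer names, and child template URLs are hypothetical.

```python
import json
import boto3

# A parent template whose three resources are nested child stacks, one per
# logical layer; each layer is tracked and updated through the parent.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        layer: {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": f"https://s3.amazonaws.com/example-templates/{layer}.json"
            },
        }
        for layer in ("NetworkLayer", "DataLayer", "AppLayer")
    },
}

boto3.client("cloudformation").create_stack(
    StackName="three-layer-parent",
    TemplateBody=json.dumps(parent_template),
)
```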

You are using Chef in your data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?


Options are :

  • AWS OpsWorks (Correct)
  • AWS CloudFormation
  • AWS Elastic Beanstalk
  • Amazon Simple Workflow Service

Answer : AWS OpsWorks

There is a very serious outage at AWS. EC2 is not affected, but your EC2 instance deployment scripts stopped working in the region with the outage. What might be the issue?


Options are :

  • The AWS Console is down, so your CLI commands do not work.
  • None of the other answers make sense. If EC2 is not affected, it must be some other issue.
  • AWS turns off the Deploy Code API call when there are major outages, to protect from system floods.
  • S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes. (Correct)

Answer : S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes.

You meet once per month with your operations team to review the past month's data. During the meeting, you realize that 3 weeks ago, your monitoring system, which pings over HTTP from outside AWS, recorded a large spike in latency on your 3-tier web service API. You use DynamoDB for the database layer, ELB, EBS, and EC2 for the business logic tier, and SQS, ELB, and EC2 for the presentation layer. Which of the following techniques will NOT help you figure out what happened? Please select:


Options are :

  • Review CloudWatch Metrics for one-minute interval graphs to determine which component(s) slowed the system down. (Correct)
  • Check your CloudTrail log history around the spike's time for any API calls that caused slowness.
  • Analyze your logs to detect bursts in traffic at that time.
  • Review your ELB access logs in S3 to see if any ELBs in your system saw the latency.

Answer : Review CloudWatch Metrics for one-minute interval graphs to determine which component(s) slowed the system down.

You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?


Options are :

  • Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the Put Bucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
  • Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster. (Correct)
  • Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the Put Bucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
  • Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad hoc MapReduce analysis and write new queries when needed.

Answer : Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.
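
Subscribing every CloudWatch Logs group to a forwarder that indexes events into the Elasticsearch domain centralizes log search in Kibana. A rough boto3 sketch follows; the forwarding Lambda ARN is hypothetical, and the permission allowing CloudWatch Logs to invoke it is omitted.

```python
import boto3

logs = boto3.client("logs")

# Subscribe every log group to a forwarding Lambda (hypothetical ARN) that
# indexes each log event into the Elasticsearch/Kibana domain.
forwarder_arn = "arn:aws:lambda:us-east-1:123456789012:function:logs-to-es"

paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate():
    for group in page["logGroups"]:
        logs.put_subscription_filter(
            logGroupName=group["logGroupName"],
            filterName="stream-to-elasticsearch",
            filterPattern="",          # empty pattern forwards every event
            destinationArn=forwarder_arn,
        )
```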

For AWS Auto Scaling, what is the first transition state an existing instance enters after leaving Standby state?


Options are :

  • Entering Standby
  • Detaching
  • Terminating:Wait
  • Pending (Correct)

Answer : Pending

You need to create an audit log of all changes to customer banking data. You use DynamoDB to store this customer banking data. It's important not to lose any information due to server failures. What is an elegant way to accomplish this?


Options are :

  • Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically rotate these log files into S3.
  • Use a DynamoDB Stream Specification and AWS Lambda. Log the changes to AWS CloudWatch Logs, removing sensitive information before logging. (Correct)
  • Use a DynamoDB Stream Specification and periodically flush to an EC2 instance store, removing sensitive information before putting the objects. Periodically flush these batches to S3.
  • Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically pipe these files into CloudWatch Logs.

Answer : Use a DynamoDB Stream Specification and AWS Lambda. Log the changes to AWS CloudWatch Logs, removing sensitive information before logging.
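
Enabling a stream on the table captures every item change durably, and a subscribed Lambda can write a sanitized audit record to CloudWatch Logs. A rough sketch follows; the table name, redacted field names, and handler wiring (event source mapping, IAM role) are hypothetical or omitted.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable a stream on the banking table so every item change is captured.
dynamodb.update_table(
    TableName="customer-banking",              # hypothetical table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

SENSITIVE_FIELDS = {"account_number", "ssn"}   # hypothetical fields to redact

def handler(event, context):
    """Lambda handler for the DynamoDB stream; print output goes to CloudWatch Logs."""
    for record in event["Records"]:
        new_image = record["dynamodb"].get("NewImage", {})
        audit = {k: v for k, v in new_image.items() if k not in SENSITIVE_FIELDS}
        print(record["eventName"], audit)
```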
