AWS DevOps Engineer Professional Certified Practice Exam Set 1

Your company has multiple applications running on AWS. Your company wants to develop a tool that notifies on-call teams immediately via email when an alarm is triggered in your environment. You have multiple on-call teams that work different shifts, and the tool should handle notifying the correct teams at the correct times. How should you implement this solution?


Options are :

  • Create an Amazon SNS topic and configure your on-call team email addresses as subscribers. Create a secondary Amazon SNS topic for alarms and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the first topic so that on-call engineers receive alerts.
  • Create an Amazon SNS topic and an Amazon SQS queue. Configure the Amazon SQS queue as a subscriber to the Amazon SNS topic. Configure CloudWatch alarms to notify this topic when an alarm is triggered. Create an Amazon EC2 Auto Scaling group with both minimum and desired instances configured to 0. Worker nodes in this group spawn when messages are added to the queue. Workers then use Amazon Simple Email Service to send messages to your on-call teams.
  • Create an Amazon SNS topic for each on-call group, and configure each of these with the team member emails as subscribers. Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift. (Correct)
  • Create an Amazon SNS topic and configure your on-call team email addresses as subscribers. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to this new topic. Notifications will be sent to on-call users when a CloudWatch alarm is triggered.

Answer : Create an Amazon SNS topic for each on-call group, and configure each of these with the team member emails as subscribers. Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift.
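
One way to wire the final step (a minimal sketch, not the only approach): the alerting application looks up the team currently on shift and publishes the alarm to that team's topic using the AWS SDK. The topic ARNs and team names below are illustrative assumptions.

```python
import boto3

# Hypothetical mapping of team name -> SNS topic ARN (one topic per on-call team).
TEAM_TOPICS = {
    "team-a": "arn:aws:sns:us-east-1:123456789012:oncall-team-a",
    "team-b": "arn:aws:sns:us-east-1:123456789012:oncall-team-b",
}

def notify_on_shift_team(team_name, alarm_message):
    """Publish the alarm details to the topic of the team currently on shift."""
    sns = boto3.client("sns")
    sns.publish(
        TopicArn=TEAM_TOPICS[team_name],
        Subject="Alarm triggered",
        Message=alarm_message,
    )
```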

You use Amazon CloudWatch as your primary monitoring system for your web application. After a recent software deployment, your users are getting intermittent 500 Internal Server Errors when using the web application. You want to create a CloudWatch alarm and notify an on-call engineer when these occur. How can you accomplish this using AWS services? Choose three answers from the options given below. Please select:


Options are :

  • Deploy your web application as an AWS Elastic Beanstalk application. Use the default Elastic Beanstalk CloudWatch metrics to capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.
  • Use Amazon Simple Email Service to notify an on-call engineer when a CloudWatch alarm is triggered.
  • Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered. (Correct)
  • Install a CloudWatch Logs agent on your servers to stream web application logs to CloudWatch. (Correct)
  • Create a CloudWatch Logs group and define metric filters that capture 500 Internal Server Errors. Set a CloudWatch alarm on that metric.

Answer : Use Amazon Simple Notification Service to notify an on-call engineer when a CloudWatch alarm is triggered. Install a CloudWatch Logs agent on your servers to stream web application logs to CloudWatch.
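
For the metric-filter and alarm portion of this pattern, a minimal boto3 sketch (the log group name, filter pattern, namespace, and topic ARN are assumptions for illustration):

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count HTTP 500 responses in the streamed web server access logs.
logs.put_metric_filter(
    logGroupName="web-app-access-logs",
    filterName="Http500Errors",
    filterPattern="[ip, id, user, timestamp, request, status=500, size]",
    metricTransformations=[{
        "metricName": "500Errors",
        "metricNamespace": "WebApp",
        "metricValue": "1",
    }],
)

# Alarm on that metric and notify an SNS topic that the on-call engineer subscribes to.
cloudwatch.put_metric_alarm(
    AlarmName="web-app-500-errors",
    Namespace="WebApp",
    MetricName="500Errors",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall"],
)
```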

You have an Auto Scaling group with an Elastic Load Balancer. You decide to suspend the Auto Scaling AddToLoadBalancer process for a short period of time. What will happen to the instances launched during the suspension period?


Options are :

  • The instances will be registered with the ELB once the process has resumed
  • The instances will not be registered with the ELB. You must manually register them when the process is resumed. (Correct)
  • Auto Scaling will not launch the instances during this period because of the suspension
  • It is not possible to suspend the Add To Load Balancer process

Answer : The instances will not be registered with the ELB. You must manually register them when the process is resumed.

You currently have the following setup in AWS: 1) an Elastic Load Balancer, 2) an Auto Scaling group which launches EC2 instances, and 3) AMIs with your code pre-installed. You want to deploy the updates to your app to only a certain number of users. You want to have a cost-effective solution. You should also be able to revert back quickly. Which of the below solutions is the most feasible one?


Options are :

  • Create a full second stack of instances, cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.
  • Redeploy with AWS Elastic Beanstalk and Elastic Beanstalk versions. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs.
  • Create a second ELB and Auto Scaling group. Create an AMI with the new app and use it in a new launch configuration. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs. (Correct)
  • Create new AMIs with the new app. Then use the new EC2 instances in half proportion to the older instances.

Answer : Create a second ELB and Auto Scaling group. Create an AMI with the new app and use it in a new launch configuration. Use Route 53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs.
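
The weighted-record step could look roughly like the sketch below (zone ID, record name, and ELB DNS names are assumptions):

```python
import boto3

route53 = boto3.client("route53")

def set_weighted_records(zone_id, record_name, old_elb_dns, new_elb_dns, new_weight):
    """Split traffic between the old and new ELBs with weighted CNAME records."""
    changes = []
    for set_id, target, weight in [
        ("old-stack", old_elb_dns, 100 - new_weight),
        ("new-stack", new_elb_dns, new_weight),
    ]:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": changes},
    )

# Example: start by sending 10% of traffic to the new stack, and raise the weight
# (or set it back to 0 to revert) as testing progresses.
# set_weighted_records("Z123EXAMPLE", "app.example.com",
#                      "old-elb.us-east-1.elb.amazonaws.com",
#                      "new-elb.us-east-1.elb.amazonaws.com", 10)
```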

You have deployed an application to AWS which makes use of Auto Scaling to launch new instances. You now want to change the instance type for the new instances. Which of the following is one of the action items to achieve this deployment?


Options are :

  • Create a new launch configuration with the new instance type (Correct)
  • Use Elastic Beanstalk to deploy the new application with the new instance type
  • Use CloudFormation to deploy the new application with the new instance type
  • Create new EC2 instances with the new instance type and attach them to the Auto Scaling group

Answer : Create a new launch configuration with the new instance type
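
A minimal boto3 sketch of that action item (launch configuration name, AMI, and group name are assumptions); launch configurations are immutable, so a new one is created and the group is updated to use it:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create a new launch configuration with the new instance type.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Point the Auto Scaling group at the new launch configuration; instances launched
# from now on use the new instance type.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v2",
)
```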

You have a code repository that uses Amazon S3 as a data store. During a recent audit of your security controls, some concerns were raised about maintaining the integrity of the data in the Amazon S3 bucket. Another concern was raised around securely deploying code from Amazon S3 to applications running on Amazon EC2 in a virtual private cloud. What are some measures that you can implement to mitigate these concerns? Choose two answers from the options given below.


Options are :

  • Add an Amazon S3 bucket policy with a condition statement that requires multi-factor authentication in order to delete objects and enable bucket versioning.
  • Use AWS Data Pipeline with multi-factor authentication to securely deploy code from the Amazon S3 bucket to your Amazon EC2 instances.
  • Add an Amazon S3 bucket policy with a condition statement to allow access only from Amazon EC2 instances with RFC 1918 IP addresses and enable bucket versioning.
  • Create an AWS Identity and Access Management role with authorization to access the Amazon S3 bucket, and launch all of your application's Amazon EC2 instances with this role.
  • Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC2 instances. Use these credentials to securely access the Amazon S3 bucket when deploying code. (Correct)
  • Use AWS Data Pipeline to lifecycle the data in your Amazon S3 bucket to Amazon Glacier on a weekly basis.

Answer : Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC2 instances. Use these credentials to securely access the Amazon S3 bucket when deploying code.

Your application stores sensitive information on an EBS volume attached to your EC2 instance. How can you protect your information? Choose two answers from the options given below. Please select:


Options are :

  • Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume (Correct)
  • Copy an unencrypted snapshot of an unencrypted volume; you can encrypt the copy. Volumes restored from this encrypted copy will also be encrypted.
  • It is not possible to encrypt an EBS volume; you must use a lifecycle policy to transfer data to S3 for encryption
  • Unmount the EBS volume, take a snapshot and encrypt the snapshot. Re-mount the Amazon EBS volume

Answer : Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume
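
A rough sketch of the first step with boto3 (availability zone, size, and instance ID are assumptions); the data copy and cleanup happen afterwards at the OS level:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a new encrypted volume in the same AZ as the instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
    Encrypted=True,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it to the instance; mount it, copy the data across, then detach and
# delete the old unencrypted volume.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```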

You have been tasked with deploying a scalable distributed system using AWS OpsWorks. Your distributed system is required to scale on demand. As it is distributed, each node must hold a configuration file that includes the hostnames of the other instances within the layer. How should you configure AWS OpsWorks to manage scaling this application dynamically?


Options are :

  • Configure your AWS OpsWorks layer to use the AWS-provided recipe for distributed host configuration, and configure the instance hostname and file path parameters in your recipe's settings.
  • Create a Chef recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to the Configure lifecycle event of the specific layer. (Correct)
  • Create a Chef recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to execute when instances are launched.
  • Update this configuration file by writing a script to poll the AWS OpsWorks service API for new instances. Configure your base AMI to execute this script on operating system startup.

Answer : Create a Chef recipe to update this configuration file, configure your AWS OpsWorks stack to use custom cookbooks, and assign this recipe to the Configure lifecycle event of the specific layer.
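
A hedged sketch of assigning a custom recipe to the Configure lifecycle event with boto3 (the stack ID, layer names, and cookbook::recipe name are assumptions):

```python
import boto3

opsworks = boto3.client("opsworks")

# The Configure event runs on every online instance in the stack whenever an
# instance enters or leaves the online state, so the hosts file stays current.
opsworks.create_layer(
    StackId="STACK_ID",
    Type="custom",
    Name="distributed-app",
    Shortname="distapp",
    CustomRecipes={
        "Setup": [],
        "Configure": ["distapp::render_hosts_file"],  # hypothetical cookbook::recipe
        "Deploy": [],
        "Undeploy": [],
        "Shutdown": [],
    },
)
```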

You have a set of EC2 instances hosted in AWS. You have created a role named DemoRole and attached a policy to that role, but you are unable to use that role with an instance. Why is this the case?


Options are :

  • You are not able to associate an IAM role with an instance
  • You won't be able to use that role with an instance unless you also create a user and associate it with that specific role
  • You won't be able to use that role with an instance unless you also create a user group and associate it with that specific role.
  • You need to create an instance profile and associate it with that specific role. (Correct)

Answer : You need to create an instance profile and associate it with that specific role.
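
A minimal sketch of creating the instance profile and launching with it (profile, role, and AMI names are assumptions):

```python
import boto3

iam = boto3.client("iam")

# An EC2 instance can only consume a role through an instance profile.
iam.create_instance_profile(InstanceProfileName="DemoProfile")
iam.add_role_to_instance_profile(
    InstanceProfileName="DemoProfile",
    RoleName="DemoRole",
)

# Launch with the instance profile (not the role itself).
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "DemoProfile"},
)
```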

You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. Which two approaches will meet these requirements? Choose two answers from the options given below.


Options are :

  • On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to process the data in Amazon Glacier and run reports every hour.
  • Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define metric filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics. (Correct)
  • On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour. (Correct)
  • Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define metric filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.

Answer : Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define metric filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour.

Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that the API keys for access to your data in DynamoDB are kept secure?


Options are :

  • Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.
  • Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website.
  • Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials. (Correct)
  • Configure S3 bucket tags with your AWS access keys for the bucket hosting your website so that the application can query them for access.

Answer : Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials.
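
In practice the browser app would use the AWS SDK for JavaScript for this, but the following Python sketch illustrates the underlying exchange: a web identity token is traded for temporary credentials via STS, so no long-lived API keys live in the application. The role ARN and token are assumptions.

```python
import boto3

# Exchange a token from the web identity provider for temporary credentials.
sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoDBAccess",
    RoleSessionName="web-user-session",
    WebIdentityToken="<token from the identity provider>",
)["Credentials"]

# Use the short-lived credentials to talk to DynamoDB.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```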

As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?


Options are :

  • Ensure that snapshots of the Amazon EBS volumes are created as a backup.
  • Ensure that the Amazon EBS volume is encrypted.
  • Ensure that the I/O block sizes for the test are randomly selected.
  • Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test. (Correct)

Answer : Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.

As an architect, you have decided to use CloudFormation instead of OpsWorks or Elastic Beanstalk for deploying the applications in your company. Unfortunately, you have discovered that there is a resource type that is not supported by CloudFormation. What can you do to get around this? Please select:


Options are :

  • Create a custom resource type using a template developer, a custom resource template, and CloudFormation. (Correct)
  • Use a configuration management tool such as Chef, Puppet, or Ansible.
  • Specify more mappings and separate your template into multiple templates by using nested stacks.
  • Specify the custom resource by separating your template into multiple templates by using nested stacks

Answer : Create a custom resource type using a template developer, a custom resource template, and CloudFormation.
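
A custom resource is typically backed by a Lambda function (or SNS topic) referenced through a `ServiceToken`, and CloudFormation waits until the function replies to the pre-signed URL it passes in the event. The sketch below is a minimal handler under those assumptions; the actual provisioning logic and the physical resource ID are placeholders.

```python
import json
import urllib.request

def handler(event, context):
    """Minimal Lambda handler for a CloudFormation custom resource."""
    # Provision, update, or delete the unsupported resource here based on
    # event["RequestType"] ("Create" / "Update" / "Delete") -- omitted.

    response = {
        "Status": "SUCCESS",
        "PhysicalResourceId": "my-custom-resource",  # hypothetical ID
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {},
    }
    # CloudFormation waits until this PUT arrives at the pre-signed response URL.
    body = json.dumps(response).encode("utf-8")
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    req.add_header("Content-Type", "")
    urllib.request.urlopen(req)
```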

You work for an insurance company and are responsible for the day-to-day operations of your company's online quote system used to provide insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you retain all application logs for the system indefinitely in order to investigate fraudulent claims in the future. You have been tasked with designing a log management system with the following requirements: all log entries must be retained by the system, even during unplanned instance failure; the customer insight team requires immediate access to the logs from the past seven days; the fraud investigation team requires access to all historic logs, but will wait up to 24 hours before these logs are available. How would you meet these requirements in a cost-effective manner? Choose three answers from the options below.


Options are :

  • Configure your application to write logs to a separate Amazon EBS volume with the delete-on-termination field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour. (Correct)
  • Configure your application to write logs to the instance's ephemeral disk, because this storage is free and has good write performance. Create a script that moves the logs from the instance to Amazon S3 once an hour.
  • Configure your application to write logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that moves the logs from the instance to Amazon S3 once an hour.
  • Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days. (Correct)
  • Write a script that is configured to be executed when the instance is stopped or terminated and that will upload any remaining logs on the instance to Amazon S3.
  • Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. (Correct)

Answer : Configure your application to write logs to a separate Amazon EBS volume with the delete-on-termination field set to false. Create a script that moves the logs from the instance to Amazon S3 once an hour. Create an Amazon S3 lifecycle configuration to move log files from Amazon S3 to Amazon Glacier after seven days. Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability.
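
A minimal sketch of the lifecycle rule (bucket name and key prefix are assumptions): objects stay in S3 for the seven days the customer insight team needs, then transition to Glacier for the fraud team's slower, indefinite retention.

```python
import boto3

s3 = boto3.client("s3")

# Transition log objects to Glacier after 7 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-logs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "logs-to-glacier",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }],
    },
)
```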

Your application is currently running on Amazon EC2 instances behind a load balancer. Your management has decided to use a Blue/Green deployment strategy. How should you implement this for each deployment? Please select:


Options are :

  • Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2 instance from the load balancer, upgrade it and test it, and then register it again with the load balancer.
  • Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to.
  • Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing. (Correct)
  • Using AWS CloudFormation, create a test stack for validating the code, and then deploy the code to each production Amazon EC2 instance.

Answer : Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing.

You currently run your infrastructure on Amazon EC2 instances behind an Auto Scaling group. All logs for your application are currently written to ephemeral storage. Recently your company experienced a major bug in code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could successfully retrieve the logs off your server to better assist you in troubleshooting the bug. Which technique should you use to make sure you are able to review your logs after your instances have shut down?


Options are :

  • Install the CloudWatch Logs agent on your AMI, and configure the CloudWatch Logs agent to stream your logs. (Correct)
  • Install the CloudWatch monitoring agent on your AMI, and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to back up all logs on the ephemeral drive.
  • Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate.
  • Configure the ephemeral policies on your Auto Scaling group to back up on terminate.

Answer : Install the CloudWatch Logs agent on your AMI, and configure the CloudWatch Logs agent to stream your logs.

You have been requested to use CloudFormation to maintain version control and achieve automation for the applications in your organization. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping cost down?


Options are :

  • Create separate templates based on functionality, and create nested stacks with CloudFormation. (Correct)
  • Use CloudFormation custom resources to handle dependencies between stacks
  • Create multiple templates in one CloudFormation stack
  • Combine all resources into one template for version control and automation.

Answer : Create separate templates based on functionality, and create nested stacks with CloudFormation.
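
A rough illustration of the nested-stack pattern, expressed here as a Python dict for brevity (the template URLs, parameter names, and output names are assumptions):

```python
# Parent template that composes per-function child templates as nested stacks.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/network.yaml",
            },
        },
        "AppStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/app.yaml",
                # Pass outputs of the network stack into the app stack.
                "Parameters": {
                    "VpcId": {"Fn::GetAtt": ["NetworkStack", "Outputs.VpcId"]},
                },
            },
        },
    },
}
```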

After a daily scrum with your development teams, you've agreed that using Blue/Green style deployments would benefit the team. Which technique should you use to deliver this new requirement? Please select:


Options are :

  • Using an AWS CloudFormation template, re-deploy your application behind a load balancer, launch a new AWS CloudFormation stack during each deployment, update your load balancer to send half your traffic to the new stack while you test, and after verification update the load balancer to send 100% of traffic to the new stack, and then terminate the old stack.
  • Re-deploy your application on AWS Elastic Beanstalk, and take advantage of Elastic Beanstalk deployment types.
  • Using an AWS OpsWorks stack, re-deploy your application behind an Elastic Load Balancing load balancer and take advantage of OpsWorks stack versioning; during deployment create a new version of your application, tell OpsWorks to launch the new version behind your load balancer, and when the new version is launched, terminate the old OpsWorks stack.
  • Re-deploy your application behind a load balancer that uses Auto Scaling groups, create a new identical Auto Scaling group, and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group. (Correct)

Answer : Re-deploy your application behind a load balancer that uses Auto Scaling groups, create a new identical Auto Scaling group, and associate it to the load balancer. During deployment, set the desired number of instances on the old Auto Scaling group to zero, and when all instances have terminated, delete the old Auto Scaling group.

You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. Which two approaches will meet these requirements? Choose two answers from the options given below.


Options are :

  • On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to process the data in Amazon Glacier and run reports every hour.
  • Install an AWS Data Pipeline Logs Agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define metric filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.
  • Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define metric filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.
  • On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour. (Correct)

Answer : On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour.

After reviewing the last quarter's monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is doing a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application's bucket. Your boss has asked you to come up with a new cost-effective way to help reduce the amount of these new GET Bucket API calls. What process should you use to help mitigate the cost?


Options are :

  • Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table. (Correct)
  • Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB.
  • Update your Amazon S3 bucket's lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application's bucket.
  • Upload all files to an ElastiCache file cache server. Update your application to now read all file metadata from the ElastiCache file cache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.

Answer : Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
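
A minimal sketch of wiring the bucket to an SNS topic (bucket name and topic ARN are assumptions; the topic's access policy must also allow S3 to publish to it):

```python
import boto3

s3 = boto3.client("s3")

# Publish an event for every new object so a subscriber can write the object's
# metadata to DynamoDB, instead of the application repeatedly listing the bucket.
s3.put_bucket_notification_configuration(
    Bucket="my-app-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:new-object-events",
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)
```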

You are using CloudFormation to launch an EC2 instance and then configure an application after the instance is launched. You need the stack creation of the ELB and Auto Scaling resources to wait until the EC2 instance is launched and configured properly. How do you do this? Please select:


Options are :

  • It is not possible for the stack creation to wait until one service is created and launched
  • Use the WaitCondition resource to hold the creation of the other dependent resources
  • Use a CreationPolicy to wait for the creation of the other dependent resources (Correct)
  • Use the Hold Condition resource to hold the creation of the other dependent resources

Answer : Use a CreationPolicy to wait for the creation of the other dependent resources
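
The fragments below (expressed as a Python dict for brevity; AMI, instance type, and resource names are assumptions) sketch the idea: stack creation pauses at the instance until cfn-signal reports success, and the dependent resources only begin afterwards.

```python
# CloudFormation resource fragments illustrating a CreationPolicy.
resources = {
    "AppInstance": {
        "Type": "AWS::EC2::Instance",
        "Properties": {"ImageId": "ami-0123456789abcdef0", "InstanceType": "t3.micro"},
        "CreationPolicy": {"ResourceSignal": {"Count": 1, "Timeout": "PT15M"}},
        # The instance's user data (omitted) runs the configuration steps and then calls
        # cfn-signal --success true --stack <stack> --resource AppInstance --region <region>
    },
    "LoadBalancer": {
        "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
        "DependsOn": "AppInstance",
        "Properties": {},  # listeners and other required properties omitted for brevity
    },
}
```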

The operations team and the development team want a single place to view both operating system and application logs. How should you implement this using AWS services? Choose two from the options below


Options are :

  • Using AWS CloudFormation, create a CloudWatch Logs log group and send the operating system and application logs of interest using the CloudWatch Logs agent. (Correct)
  • Using configuration management, set up remote logging to send events to Amazon Kinesis and insert them into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools. (Correct)
  • Using AWS CloudFormation, merge the application logs with the operating system logs, and use IAM roles to allow both teams to have access to view console output from Amazon EC2.
  • Using AWS CloudFormation and configuration management, set up remote logging to send events via UDP packets to CloudTrail.

Answer : Using AWS CloudFormation, create a CloudWatch Logs log group and send the operating system and application logs of interest using the CloudWatch Logs agent. Using configuration management, set up remote logging to send events to Amazon Kinesis and insert them into Amazon CloudSearch or Amazon Redshift, depending on available analytic tools.

You need to monitor specific metrics from your application and send real-time alerts to your DevOps engineer. Which of the below services will fulfill this requirement? Choose two answers. Please select:


Options are :

  • Amazon CloudWatch (Correct)
  • Amazon Simple Notification Service (Correct)
  • Amazon Simple Email Service
  • Amazon Simple Queue Service

Answer : Amazon CloudWatch, Amazon Simple Notification Service

Management has reported an increase in the monthly bill from Amazon Web Services, and they are extremely concerned with this increased cost. Management has asked you to determine the exact cause of this increase. After reviewing the billing report, you notice an increase in the data transfer cost. How can you provide management with better insight into data transfer use?


Options are :

  • Using Amazon CloudWatch metrics, pull your Elastic Load Balancing outbound data transfer metrics monthly, and include them with your billing report to show which application is causing higher bandwidth usage.
  • Deliver custom metrics to Amazon CloudWatch per application that break down application data transfer into multiple, more specific data points. (Correct)
  • Use Amazon CloudWatch Logs to run a map-reduce on your logs to determine high usage and data transfer
  • Update your Amazon CloudWatch metrics to use five-second granularity, which will give better detailed metrics that can be combined with your billing data to pinpoint anomalies.

Answer : Deliver custom metrics to Amazon CloudWatch per application that break down application data transfer into multiple, more specific data points.
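
Publishing such a custom metric could look like the sketch below (namespace, dimension, and values are illustrative assumptions):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a per-application data-transfer metric so transfer cost can be broken
# down by application rather than reported as a single line item.
cloudwatch.put_metric_data(
    Namespace="MyCompany/DataTransfer",
    MetricData=[{
        "MetricName": "BytesSent",
        "Dimensions": [{"Name": "Application", "Value": "checkout-service"}],
        "Value": 52428800,
        "Unit": "Bytes",
    }],
)
```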

You have an application running a specific process that is critical to the application's functionality, and you have added the health check process to your Auto Scaling group. The instances are showing healthy but the application itself is not working as it should. What could be the issue with the health check, since it is still showing the instances as healthy?


Options are :

  • The health check is not checking the application process (Correct)
  • You do not have the time range in the health check property configured
  • It is not possible for a health check to monitor a process that involves the application
  • The health check is not configured properly

Answer : The health check is not checking the application process
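
One common remedy (a hedged sketch, assuming a process name and an instance-resident check) is a custom health check that marks the instance unhealthy in Auto Scaling when the critical process is down, so the group replaces it:

```python
import boto3
import subprocess

def report_process_health(instance_id):
    """Mark the instance unhealthy in Auto Scaling if the critical process is down."""
    # "critical-app" is a hypothetical process name; any application-level check works.
    running = subprocess.run(
        ["pgrep", "-x", "critical-app"], capture_output=True
    ).returncode == 0
    if not running:
        boto3.client("autoscaling").set_instance_health(
            InstanceId=instance_id,
            HealthStatus="Unhealthy",
        )
```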

If your application performs operations or workflows that take a long time to complete, you can offload them to a dedicated worker environment in AWS Elastic Beanstalk. What does the Elastic Beanstalk worker environment tier do?


Options are :

  • Manages the ELB and runs a daemon process on each instance
  • Manages an Amazon SQS queue and runs a daemon process on each instance (Correct)
  • Manages an Amazon SNS topic and runs a daemon process on each instance
  • Manages Lambda functions and runs a daemon process on each instance

Answer : Manages an Amazon SQS queue and runs a daemon process on each instance

You have deployed an Elastic Beanstalk application in a new environment and want to save the current state of your environment in a document. You want to be able to restore your environment to the current state later or possibly create a new environment. You also want to make sure you have a restore point. How can you achieve this?


Options are :

  • Use CloudFormation templates
  • Configuration Management Templates
  • Saved Configurations (Correct)
  • Saved Templates

Answer : Saved Configurations
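
A brief sketch of saving and reusing a configuration with boto3 (application, template, and environment identifiers are assumptions):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Save the running environment's current settings as a reusable saved configuration.
eb.create_configuration_template(
    ApplicationName="my-app",
    TemplateName="prod-baseline",
    EnvironmentId="e-abcdefghij",
)

# The saved configuration can later seed a new environment (or restore this one), e.g.:
# eb.create_environment(ApplicationName="my-app",
#                       EnvironmentName="my-app-restore",
#                       TemplateName="prod-baseline")
```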

Your development team wants account-level access to production instances in order to do live debugging of a highly secure environment. Which of the following should you do?


Options are :

  • Place each developer's own public key into a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on all instances, and place the users' public keys into the appropriate account. (Correct)
  • Place the credentials provided by Amazon Elastic Compute Cloud (EC2) into a secure Amazon Simple Storage Service (S3) bucket with encryption enabled. Assign AWS Identity and Access Management (IAM) users to each developer so they can download the credentials file.
  • Place the credentials provided by Amazon EC2 onto an MFA-encrypted USB drive, and physically share it with each developer so that the private key never leaves the office.
  • Place an internally created private key into a secure S3 bucket with server-side encryption using customer keys and configuration management, create a service account on all the instances using this private key, and assign IAM users to each developer so they can download the file.

Answer : Place each developer's own public key into a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on all instances, and place the users' public keys into the appropriate account.
