AWS DevOps Engineer Professional Certified Practice Exam Set 6

You are in charge of designing a number of CloudFormation templates for your organization. You need to ensure that no one can update the production-based resources in the stack. How can this be achieved in the most efficient way?


Options are :

  • Use S3 bucket policies to protect the resources.
  • Use a stack-based policy to protect the production-based resources. (Correct)
  • Use MFA to protect the resources.
  • Create tags for the resources and then create IAM policies to protect the resources.

Answer : Use a stack-based policy to protect the production-based resources.
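As a sketch of what such a stack policy can look like: the policy below allows all updates except to resources whose logical IDs match a pattern. The `Production*` pattern is a hypothetical naming convention assumed for illustration.

```python
import json

# A CloudFormation stack policy: allow Update actions on everything,
# then deny them on resources whose logical IDs begin with "Production"
# (the ID pattern is an assumption for this example).
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*",
         "Principal": "*", "Resource": "*"},
        {"Effect": "Deny", "Action": "Update:*",
         "Principal": "*", "Resource": "LogicalResourceId/Production*"},
    ]
}

# The policy is passed to CloudFormation as a JSON string
# (e.g. the StackPolicyBody parameter of SetStackPolicy).
stack_policy_body = json.dumps(stack_policy)
```

In a stack policy an explicit Deny overrides the blanket Allow, so only the production resources are locked while the rest of the stack stays updatable.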

A company has recently started using Docker Cloud. This is a SaaS solution for managing Docker containers on the AWS cloud, and the solution provider is also on the same cloud platform. There is a requirement for the SaaS solution to access AWS resources. Which of the following would meet the requirement for enabling the SaaS solution to work with AWS resources in the most secure manner? Please select:


Options are :

  • Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, then create a new access and secret key for the user and provide these credentials to the SaaS provider.
  • From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
  • Create an IAM role for cross-account access, allow the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application. (Correct)
  • Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances. Many SaaS platforms can access AWS resources via cross-account access created in AWS. If you go to Roles in your identity management, you will see the ability to add a cross-account role.

Answer : Create an IAM role for cross-account access, allow the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
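A minimal sketch of the trust policy attached to such a cross-account role (the provider's account ID and the external ID are placeholders):

```python
# Trust policy for the cross-account IAM role: only the SaaS provider's
# AWS account may call sts:AssumeRole on it. The account ID below is a
# placeholder, not a real provider account.
SAAS_PROVIDER_ACCOUNT_ID = "111122223333"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SAAS_PROVIDER_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        # An ExternalId condition is commonly added for third-party access
        # to guard against the confused-deputy problem.
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }]
}
```

The role's separate permissions policy then grants only the specific actions the SaaS application needs, so no long-lived keys ever leave your account.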

You are writing an AWS CloudFormation template and you want to assign values to properties that will not be available until runtime. You know that you can use intrinsic functions to do this but are unsure as to which part of the template they can be used in. Which of the following is correct in describing how you can currently use intrinsic functions in an AWS CloudFormation template?


Options are :

  • You can use intrinsic functions only in the resource properties part of a template.
  • You can use intrinsic functions in any part of a template, except the AWSTemplateFormatVersion section.
  • You can use intrinsic functions in any part of a template.
  • You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes. (Correct)

Answer : You can only use intrinsic functions in specific parts of a template. You can use intrinsic functions in resource properties, metadata attributes, and update policy attributes.
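To make this concrete, here is a small template fragment (expressed as a Python dict; resource names and the AMI ID are illustrative) using intrinsic functions in two of the permitted places, a resource property and a metadata attribute:

```python
# Fragment of a CloudFormation template showing intrinsic functions in
# resource properties (Fn::GetAtt) and metadata attributes (Fn::Sub).
# WebServer/WebSecurityGroup and the AMI ID are made-up examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {"GroupDescription": "web traffic"},
        },
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            # Intrinsic function inside a metadata attribute:
            "Metadata": {"Comment": {"Fn::Sub": "Instance in ${AWS::Region}"}},
            "Properties": {
                "ImageId": "ami-12345678",  # placeholder AMI
                # Intrinsic function inside a resource property, resolving
                # a value only known at runtime:
                "SecurityGroupIds": [{"Fn::GetAtt": ["WebSecurityGroup", "GroupId"]}],
            },
        },
    },
}
```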

There is a requirement for an application hosted on a VPC to access an on-premises LDAP server. The VPC and the on-premises location are connected via an IPSec VPN. Which of the below are the right options for the application to authenticate each user? Choose 2 answers from the options below.


Options are :

  • Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials.
  • The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) Security Token Service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate AWS service.
  • The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access any AWS resources. (Correct)
  • Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate AWS service. (Correct)

Answer : The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access any AWS resources. Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate AWS service.

A user is accessing RDS from an application. The user has enabled the Multi-AZ feature with an MS SQL RDS DB. During a planned outage, how will AWS ensure that a switchover from the primary DB to the standby replica will not affect access to the application?


Options are :

  • RDS will have an internal IP which will redirect all requests to the new DB.
  • RDS uses DNS to switch over to the standby replica for a seamless transition. (Correct)
  • The switchover changes hardware, so RDS does not need to worry about access.
  • RDS will have both DBs running independently, and the user has to manually switch over.

Answer : RDS uses DNS to switch over to the standby replica for a seamless transition.

A custom script needs to be passed to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this?


Options are :

  • User data (Correct)
  • AWS Config
  • EC2Config service
  • lAM roles

Answer : User data
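A sketch of how user data is supplied in practice: the bootstrap script below is a made-up example, and on Amazon Linux it runs as root on first boot.

```python
import base64

# A hypothetical bootstrap script passed as EC2 user data; it runs once
# on first boot, which makes it the natural hook for custom setup.
user_data_script = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

# The EC2 RunInstances API expects user data base64-encoded; most SDKs
# and the console perform this encoding for you.
encoded_user_data = base64.b64encode(user_data_script.encode()).decode()
```

In a launch configuration or launch template for the Auto Scaling group, this string is set once and every new instance receives it automatically.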

You are currently planning on using Auto Scaling to launch instances which have an application installed. Which of the following methods will help ensure the instances are up and running in the shortest span of time to take in traffic from the users?


Options are :

  • Use user data to launch scripts to install the software.
  • Log into each instance and install the software.
  • Use AMIs which already have the software installed. (Correct)
  • Use Docker containers to launch the software.

Answer : Use AMIs which already have the software installed.

Your company is planning to develop an application in which the front end is in .NET and the backend is in DynamoDB. There is an expectation of a high load on the application. How could you ensure the scalability of the application to reduce the load on the DynamoDB database? Choose an answer from the options below.


Options are :

  • Use SQS to assist and let the application pull messages and then perform the relevant operation in DynamoDB. (Correct)
  • Launch DynamoDB in a Multi-AZ configuration with a global index to balance writes.
  • Add more DynamoDB databases to handle the load.
  • Increase the write capacity of DynamoDB to meet the peak loads.

Answer : Use SQS to assist and let the application pull messages and then perform the relevant operation in DynamoDB.
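A sketch of the worker side of this decoupling pattern, kept as pure functions so the SQS/DynamoDB calls themselves are out of scope; the message shape and the `AppEvents` table name are assumptions for illustration:

```python
import json

def to_put_request(message_body: str) -> dict:
    """Convert one SQS message body (JSON) into a DynamoDB PutRequest.
    The user_id/action fields are an assumed message schema."""
    payload = json.loads(message_body)
    return {"PutRequest": {"Item": {
        "UserId": {"S": payload["user_id"]},
        "Action": {"S": payload["action"]},
    }}}

def build_batch(messages: list, table_name: str = "AppEvents") -> dict:
    """Group messages into one BatchWriteItem request.
    DynamoDB caps a batch write at 25 items, so we slice accordingly."""
    return {table_name: [to_put_request(m) for m in messages[:25]]}
```

The front end only enqueues; a worker drains the queue at a rate matched to the table's provisioned write capacity, so traffic spikes land in SQS instead of on DynamoDB.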

You work for a company that automatically tags photographs using artificial neural networks (ANNs), which run on GPUs using C++. You receive millions of images at a time, but only 3 times per day on average. These images are loaded in a batch into an AWS S3 bucket you control, and then the customer publishes a JSON-formatted manifest into another S3 bucket you control as well. Each image takes 10 milliseconds to process using a full GPU. Your neural network software requires 5 minutes to bootstrap. Image tags are JSON objects, and you must publish them to an S3 bucket. Which of these is the best system architecture for this system?


Options are :

  • Create an Auto Scaling, load-balanced Elastic Beanstalk worker tier application and environment. Deploy the artificial neural network code to G2 instances in this tier. Set the desired capacity to 1. Make the code periodically check S3 for new manifests. When a new manifest is detected, push all of the images in the manifest into the SQS queue associated with the Elastic Beanstalk worker tier.
  • Make an S3 notification configuration which publishes to AWS Lambda on the manifest bucket. Make the Lambda create a CloudFormation stack which contains the logic to construct an autoscaling worker tier of EC2 G2 instances with the artificial neural network code on each instance. Create an SQS queue of the images in the manifest. Tear the stack down when the queue is empty. (Correct)
  • Create an OpsWorks stack with two layers. The first contains lifecycle scripts for launching and bootstrapping an HTTP API on G2 instances for image processing, and the second has an always-on instance which monitors the S3 manifest bucket for new files. When a new file is detected, request instances to boot on the artificial neural network layer. When the instances are booted and the HTTP APIs are up, submit processing requests to individual instances.
  • Deploy your artificial neural network code to AWS Lambda as a bundled binary for the C++ extension. Make an S3 notification configuration on the manifest which publishes to another AWS Lambda running controller code. This controller code publishes all the images in the manifest to AWS Kinesis. Your ANN code Lambda function uses the Kinesis stream as an event source. The system automatically scales when the stream contains image events.

Answer : Make an S3 notification configuration which publishes to AWS Lambda on the manifest bucket. Make the Lambda create a CloudFormation stack which contains the logic to construct an autoscaling worker tier of EC2 G2 instances with the artificial neural network code on each instance. Create an SQS queue of the images in the manifest. Tear the stack down when the queue is empty.

Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to process this data and used RabbitMQ, an open-source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct?


Options are :

  • Change the storage class of the S3 objects to Reduced Redundancy Storage. Set up Auto Scaled workers triggered by queue depth that use Spot Instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
  • Use SNS to pass job messages. Use CloudWatch alarms to terminate Spot worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Glacier.
  • Use SQS for passing job messages. Use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
  • Set up Auto Scaled workers triggered by queue depth that use Spot Instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier. (Correct)

Answer : Set up Auto Scaled workers triggered by queue depth that use Spot Instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
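One common way to move processed objects to Glacier is an S3 lifecycle rule; a sketch of such a configuration follows (the `processed/` prefix and the one-day delay are illustrative assumptions):

```python
# S3 lifecycle configuration transitioning processed objects to Glacier.
# The prefix and timing are assumptions for this example; in practice the
# worker could also tag or re-prefix objects as it finishes them.
lifecycle_configuration = {
    "Rules": [{
        "ID": "ArchiveProcessedImagery",
        "Filter": {"Prefix": "processed/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
    }]
}
```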

A user is using CloudFormation to launch an EC2 instance and then configure an application after the instance is launched. The user wants the stack creation of the ELB and Auto Scaling to wait until the EC2 instance is launched and configured properly. How can the user configure this?


Options are :

  • The user can use the Hold Condition resource to wait for the creation of the other dependent resources.
  • The user can use the Wait Condition resource to hold the creation of the other dependent resources. (Correct)
  • It is not possible for stack creation to wait until one service is created and launched.
  • The user can use the Dependent Condition resource to hold the creation of the other dependent resources.

Answer : The user can use the Wait Condition resource to hold the creation of the other dependent resources.
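A sketch of how the pieces fit together in the Resources section (resource names, timeout, and AMI are illustrative): the instance signals the wait condition handle when its configuration finishes, and the ELB declares a dependency on the wait condition.

```python
# CloudFormation Resources fragment: the load balancer is only created
# after AppWaitCondition receives a success signal (e.g. via cfn-signal
# or a curl to the presigned handle URL) from the configured instance.
resources = {
    "AppInstance": {
        "Type": "AWS::EC2::Instance",
        "Properties": {"ImageId": "ami-12345678"},  # placeholder AMI
    },
    "AppWaitHandle": {"Type": "AWS::CloudFormation::WaitConditionHandle"},
    "AppWaitCondition": {
        "Type": "AWS::CloudFormation::WaitCondition",
        "DependsOn": "AppInstance",
        "Properties": {"Handle": {"Ref": "AppWaitHandle"},
                       "Timeout": "600",  # fail the stack if no signal in 10 min
                       "Count": "1"},
    },
    "AppLoadBalancer": {
        "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
        "DependsOn": "AppWaitCondition",  # created only after the signal
        "Properties": {
            "AvailabilityZones": ["us-east-1a"],
            "Listeners": [{"LoadBalancerPort": "80",
                           "InstancePort": "80", "Protocol": "HTTP"}],
        },
    },
}
```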

You are in charge of designing a number of CloudFormation templates for your organization. You are required to make changes to the stack resources every now and then based on the requirements. How can you check the impact of a change to the resources in a CloudFormation stack before deploying changes to the stack?


Options are :

  • There is no way to control this. You need to check for the impact beforehand.
  • Use CloudFormation stack policies to check for the impact of the changes.
  • Use CloudFormation rolling updates to check for the impact of the changes.
  • Use CloudFormation change sets to check for the impact of the changes. (Correct)

Answer : Use CloudFormation change sets to check for the impact of the changes.
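As a sketch, these are the kind of parameters a CreateChangeSet call takes (stack and change set names are placeholders); reviewing the resulting change set's list of changes shows the impact before anything is executed:

```python
# Parameters as they would be passed to the CreateChangeSet API
# (or `aws cloudformation create-change-set`); names are placeholders.
create_change_set_params = {
    "StackName": "production-stack",
    "ChangeSetName": "resize-instances",
    "TemplateBody": "...",     # the updated template goes here
    "ChangeSetType": "UPDATE",  # UPDATE against an existing stack
}
# After reviewing the proposed changes, the change set is either
# executed (ExecuteChangeSet) or discarded (DeleteChangeSet) with no
# effect on the running stack.
```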

You are working for a company that has an on-premises infrastructure. There is now a decision to move to AWS. The plan is to move the development environment first. There are a lot of custom-based applications that need to be deployed for the development community. Which of the following can help to implement the applications for the development team?


Options are :

  • Use CloudFormation to deploy the Docker containers.
  • Use Elastic Beanstalk to deploy the Docker containers. (Correct)
  • Use OpsWorks to deploy the Docker containers.
  • Create Docker containers for the custom application components. (Correct)

Answer : Use Elastic Beanstalk to deploy the Docker containers. Create Docker containers for the custom application components.

Which of the following run command types are available for OpsWorks stacks? Choose 3 answers from the options given below.


Options are :

  • Execute Recipes (Correct)
  • Update Custom Cookbooks (Correct)
  • Configure (Correct)
  • Undeploy

Answer : Execute Recipes Update Custom Cookbooks Configure

Of the 6 available sections in a CloudFormation template (Template Description Declaration, Template Format Version Declaration, Parameters, Resources, Mappings, Outputs), which is the only one required for a CloudFormation template to be accepted? Choose an answer from the options below.


Options are :

  • Parameters
  • Mappings
  • Template Declaration
  • Resources (Correct)

Answer : Resources
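The smallest template CloudFormation will accept illustrates the point: a Resources section with at least one resource, and nothing else (the bucket's logical name here is illustrative).

```python
# The minimal valid CloudFormation template: every top-level section is
# optional except Resources, which must declare at least one resource.
minimal_template = {
    "Resources": {
        "MyBucket": {"Type": "AWS::S3::Bucket"}  # logical name is illustrative
    }
}
```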

Which of the following is not a component of Elastic Beanstalk?


Options are :

  • Application
  • Application Version
  • Environment
  • Docker (Correct)

Answer : Docker

You have created a DynamoDB table for an application that needs to support thousands of users. You need to ensure that each user can only access their own data in a particular table. Many users already have accounts with a third-party identity provider, such as Facebook, Google, or Login with Amazon. How would you implement this requirement? Choose 2 answers from the options given below.


Options are :

  • Create an IAM role which has specific access to the DynamoDB table. (Correct)
  • Create an IAM user for all users so that they can access the application.
  • Use a third-party identity provider such as Google, Facebook, or Amazon so users can become an AWS IAM user with access to the application.
  • Use web identity federation and register your application with a third-party identity provider such as Google, Amazon, or Facebook. (Correct)

Answer : Create an IAM role which has specific access to the DynamoDB table. Use web identity federation and register your application with a third-party identity provider such as Google, Amazon, or Facebook.
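A sketch of the role policy that ties the two answers together: IAM policy variables scope each federated user to items whose partition key equals their provider-issued user ID. The table name and account ID are placeholders; the variable shown is the one used for Login with Amazon.

```python
# Fine-grained access policy attached to the web-identity-federation role.
# The dynamodb:LeadingKeys condition restricts every operation to items
# whose partition key matches the caller's federated user ID.
fine_grained_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        # Placeholder account ID and table name:
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {"ForAllValues:StringEquals": {
            "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
        }},
    }]
}
```

With this in place, thousands of users can share one table and one role, yet each can read and write only their own items.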

A user is trying to save some cost on AWS services. Which of the below mentioned options will not help him save cost? Please select:


Options are :

  • Delete the Auto Scaling launch configuration after the instances are terminated (Correct)
  • Delete the unutilized EBS volumes once the instance is terminated
  • Release the Elastic IP if not required once the instance is terminated
  • Delete the AWS ELB after the instances are terminated

Answer : Delete the Auto Scaling launch configuration after the instances are terminated

An organization is planning to use AWS for their production rollout. The organization wants to implement automation for deployment, such that it will automatically create a LAMP stack, deploy an RDS MySQL DB instance, download the latest PHP installable from S3, and set up the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?


Options are :

  • AWS CloudFront
  • AWS Elastic Beanstalk
  • AWS DevOps
  • AWS CloudFormation (Correct)

Answer : AWS CloudFormation

Which of the following are components of the AWS Data Pipeline service? Choose 2 answers from the options given below.


Options are :

  • Task Runner (Correct)
  • Task History
  • Workflow Runner
  • Pipeline definition (Correct)

Answer : Task Runner Pipeline definition

Your company is using an Auto Scaling group to scale out and scale in instances. There is an expectation of a peak in traffic every Monday at 8 AM. The traffic is then expected to come down before the weekend, on Friday at 5 PM. How should you configure Auto Scaling in this case?


Options are :

  • Create dynamic scaling policies to scale up on Monday and scale down on Friday.
  • Create a scheduled policy to scale up on Monday and scale down on Friday. (Correct)
  • Manually add instances to the Auto Scaling group on Monday and remove them on Friday.
  • Create a scheduled policy to scale up on Friday and scale down on Monday.

Answer : Create a scheduled policy to scale up on Monday and scale down on Friday.
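A sketch of the two scheduled actions (parameters as for the PutScheduledUpdateGroupAction API; the group name and capacity numbers are assumptions). Recurrence uses cron syntax, evaluated in UTC:

```python
# Scale out for the Monday 8 AM peak, scale in for the Friday 5 PM lull.
# Group name and capacities are illustrative; cron times are in UTC.
scale_out = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "monday-peak",
    "Recurrence": "0 8 * * MON",
    "MinSize": 4, "MaxSize": 12, "DesiredCapacity": 8,
}
scale_in = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "friday-wind-down",
    "Recurrence": "0 17 * * FRI",
    "MinSize": 1, "MaxSize": 4, "DesiredCapacity": 2,
}
```

Because the traffic pattern is known in advance, scheduled actions add capacity before the spike arrives, whereas dynamic policies would only react after load had already climbed.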

You are using Elastic Beanstalk for managing the instances in your AWS environment. You need to deploy a new version of your application. You'd prefer to use all new instances if possible, but you cannot have any downtime. You also don't want to swap any environment URLs. Which of the following deployment methods would you implement?


Options are :

  • Using the “Blue-Green” with “All at once” deployment method.
  • Using the “All at once” deployment method.
  • Using the “Rolling Updates” deployment method. (Correct)
  • Using the “Blue-Green” deployment method.

Answer : Using the “Rolling Updates” deployment method.

Which of the below 3 things can you achieve with the CloudWatch Logs service? Choose 3 options. Please select:


Options are :

  • Send the log data to AWS Lambda for custom processing or to load into other systems. (Correct)
  • Stream the log data to Amazon Kinesis.
  • Record API calls for your AWS account and deliver log files containing API calls to your Amazon S3 bucket.
  • Stream the log data into Amazon Elasticsearch in near real-time with CloudWatch Logs subscriptions. (Correct)

Answer : Send the log data to AWS Lambda for custom processing or to load into other systems. Stream the log data into Amazon Elasticsearch in near real-time with CloudWatch Logs subscriptions.
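As a sketch, the Lambda route is wired up with a subscription filter; these are the kind of parameters a PutSubscriptionFilter call takes (log group name, filter pattern, and function ARN are placeholders):

```python
# Parameters for a PutSubscriptionFilter call that streams matching log
# events from a log group to a Lambda function for custom processing.
# All names and the ARN below are placeholders.
subscription_filter_params = {
    "logGroupName": "/app/production",
    "filterName": "forward-errors",
    "filterPattern": "ERROR",  # only events containing ERROR are forwarded
    "destinationArn": "arn:aws:lambda:us-east-1:123456789012:function:LogProcessor",
}
```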

Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests, you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements?


Options are :

  • Deploy an ElastiCache in-memory cache running in each Availability Zone. (Correct)
  • Increase the RDS MySQL instance size and implement Provisioned IOPS.
  • Add an RDS MySQL read replica in each Availability Zone. (Correct)
  • Implement sharding to distribute load to multiple RDS MySQL instances.

Answer : Deploy an ElastiCache in-memory cache running in each Availability Zone. Add an RDS MySQL read replica in each Availability Zone.
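The read pattern an ElastiCache layer enables is cache-aside: consult the cache first and only fall through to the database on a miss. A minimal sketch, with a plain dict standing in for the ElastiCache client and a callback standing in for the RDS query:

```python
def cached_read(key, cache, load_from_db):
    """Return the value for key, consulting the cache before the database.

    cache        -- mapping acting as the in-memory cache (stand-in for
                    an ElastiCache client)
    load_from_db -- callable performing the real database read
    """
    if key in cache:              # cache hit: the DB sees no read at all
        return cache[key]
    value = load_from_db(key)     # cache miss: exactly one DB read
    cache[key] = value            # populate so later readers hit the cache
    return value
```

Hot keys (popular profiles, trending posts) are then served from memory, which directly relieves the read contention described in the question; the per-AZ placement keeps cache reads local to each web tier.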

Which of the following is false when it comes to using the Elastic Load Balancer with OpsWorks stacks?


Options are :

  • Each load balancer can handle only one layer.
  • You can use either the Application or Classic Load Balancer with OpsWorks stacks. (Correct)
  • You can attach only one load balancer to a layer.
  • You need to create the load balancer beforehand and then attach it to the OpsWorks stack.

Answer : You can use either the Application or Classic Load Balancer with OpsWorks stacks.

A vendor needs access to your AWS account. They need to be able to read protected messages in a private S3 bucket. They have a separate AWS account. Which of the solutions below is the best way to do this?


Options are :

  • Create an S3 bucket policy that allows the vendor to read from the bucket from their AWS account.
  • Create a cross-account IAM role with permission to access the bucket, and grant permission to use the role to the vendor AWS account. (Correct)
  • Allow the vendor to SSH into your EC2 instance and grant them an IAM role with full access to the bucket.
  • Create an IAM user with API access keys. Give the vendor the AWS Access Key ID and AWS Secret Access Key for the user.

Answer : Create a cross-account IAM role with permission to access the bucket, and grant permission to use the role to the vendor AWS account.

You have a number of CloudFormation stacks in your IT organization. Which of the following commands will help you see all the CloudFormation stacks which have a completed status? Please select:


Options are :

  • describe-stacks
  • list-stacks (Correct)
  • list-templates
  • stacks-complete

Answer : list-stacks
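The filtering itself is done with a stack status filter; a sketch of the argument as it would be passed to the ListStacks API (the same values work with `aws cloudformation list-stacks --stack-status-filter`):

```python
# Status filter restricting ListStacks to stacks in a completed state.
# CREATE_COMPLETE and UPDATE_COMPLETE are the two most common "done"
# statuses; others exist (e.g. for deletes and rollbacks).
list_stacks_params = {
    "StackStatusFilter": ["CREATE_COMPLETE", "UPDATE_COMPLETE"]
}
```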

You currently have a set of instances running on your OpsWorks stacks. You need to install security updates on these servers. What does AWS recommend in terms of how the security updates should be deployed? Choose 2 answers from the options given below.


Options are :

  • Create a CloudFormation template which can be used to replace the instances.
  • Create a new OpsWorks stack with the new instances.
  • On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command. (Correct)
  • Create and start new instances to replace your current online instances. Then delete the current instances. (Correct)

Answer : On Linux-based instances in Chef 11.10 or older stacks, run the Update Dependencies stack command. Create and start new instances to replace your current online instances. Then delete the current instances.

Your IT company is currently hosting a production environment in Elastic Beanstalk. You understand that the Elastic Beanstalk service provides a facility known as managed updates, which are minor and patch version updates that are periodically required for your system. Your IT supervisor is worried about the impact that these updates would have on the system. What can you tell about the Elastic Beanstalk service with regards to managed updates?


Options are :

  • Elastic Beanstalk applies managed updates with no reduction in capacity.
  • All of the above (Correct)
  • Elastic Beanstalk applies managed updates with no downtime.
  • Package updates can be configured in a weekly maintenance window.

Answer : All of the above

If you're trying to configure an AWS Elastic Beanstalk worker tier for easy debugging if there are problems finishing queue jobs, what should you configure?


Options are :

  • Configure Enhanced Health Reporting.
  • Configure Blue-Green Deployments.
  • Configure a Dead Letter Queue. (Correct)
  • Configure Rolling Deployments.

Answer : Configure a Dead Letter Queue.
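A sketch of how a dead letter queue is attached to the worker tier's SQS queue via its RedrivePolicy attribute (the DLQ ARN and the receive count threshold are illustrative):

```python
import json

# RedrivePolicy for the worker tier's SQS queue: after five failed
# receives, a job message is moved to the dead letter queue, where it
# can be inspected for debugging. The DLQ ARN is a placeholder.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:worker-dlq",
    "maxReceiveCount": "5",
}

# SQS queue attributes take the redrive policy as a JSON string.
queue_attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
```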
