AWS DevOps Engineer Professional Practice Final Exam Set 9

A web startup runs a very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application runs best on m2.xlarge instances because it is heavily memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a long time and is therefore done only once a week. Recently, a new chat feature was implemented in Node.js and has to be integrated into the architecture. Initial tests show that the new component is CPU-bound. Because the company has some experience using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle management tool to simplify application management and reduce deployment cycle times. What AWS OpsWorks configuration is necessary to integrate the new chat module in the most cost-effective and flexible way?


Options are :

  • Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe
  • Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe
  • Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes
  • None
  • Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe

Answer : Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe

Your company has uploaded a large amount of aerial image data to S3. In your previous on-premises environment, you used a dedicated group of servers to process this data and used RabbitMQ, an open-source messaging system, to deliver job notifications to the servers. Once processed, the data went to tape and was shipped offsite. Your boss told you to keep the current model and take advantage of AWS archiving technologies and services to minimize cost. Which is correct?


Options are :

  • Change the storage class of the S3 objects to Reduced Redundancy Storage. Set up auto-scaled workers triggered by queue depth that use Spot Instances to process messages from SQS. Once the data has been processed, change the storage class of the S3 objects to Glacier.
  • Use SQS to deliver the job messages. Use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once the data has been processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
  • Set up auto-scaled workers triggered by queue depth that use Spot Instances to process messages from SQS. Once the data has been processed, change the storage class of the S3 objects to Glacier.
  • None
  • Use SNS to deliver the job messages. Use CloudWatch alarms to terminate Spot worker instances when they become idle. Once the data has been processed, change the storage class of the S3 objects to Glacier.

Answer : Set up auto-scaled workers triggered by queue depth that use Spot Instances to process messages from SQS. Once the data has been processed, change the storage class of the S3 objects to Glacier.
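The correct answer scales the worker fleet from the SQS backlog. A minimal local sketch of that scaling decision (the function name and the 100-messages-per-worker ratio are illustrative assumptions, not an AWS API):

```python
# Illustrative sketch (not the AWS API): deciding how many Spot worker
# instances to run from the SQS queue depth, as in the correct answer.
def desired_workers(queue_depth, msgs_per_worker=100, max_workers=20):
    """Scale the worker fleet in proportion to the queue backlog."""
    if queue_depth == 0:
        return 0  # no backlog: let the idle Spot workers terminate
    needed = -(-queue_depth // msgs_per_worker)  # ceiling division
    return min(needed, max_workers)

print(desired_workers(0))      # 0
print(desired_workers(250))    # 3
print(desired_workers(5000))   # capped at 20
```

In practice this is what a target-tracking or step-scaling policy on the `ApproximateNumberOfVisibleMessages` CloudWatch metric does for you.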

AWS BDS-C00 Certified Big Data Speciality Practice Test Set 6

You are using Elastic Beanstalk to manage instances in your environment. You need to install a new version of your application. You'd prefer to use all new instances if possible, but you cannot have any downtime, and you also don't want to change the environment URL. Which of the following deployment methods would you implement?


Options are :

  • Use the Rolling updates deployment method.
  • Use the Blue/Green all-at-once deployment method.
  • Use the All-at-once deployment method.
  • None
  • Use the Blue/Green deployment method.

Answer : Use the Rolling updates deployment method.
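A rolling update replaces instances in small batches so that the fleet keeps serving traffic and the environment URL never changes. A hypothetical simulation of that batching (instance names and version tags here are made up for illustration):

```python
# Hypothetical sketch of what a rolling update does: replace instances in
# small batches so capacity never drops to zero and the URL stays the same.
def rolling_update(instances, batch_size=2):
    """Yield the fleet state after each batch is replaced with the new version."""
    fleet = list(instances)
    for start in range(0, len(fleet), batch_size):
        for i in range(start, min(start + batch_size, len(fleet))):
            fleet[i] = fleet[i].replace("v1", "v2")
        yield list(fleet)

fleet = ["app-v1-a", "app-v1-b", "app-v1-c", "app-v1-d"]
for step in rolling_update(fleet):
    # At every intermediate step, some instances are still serving traffic.
    print(step)
```

Contrast this with all-at-once (a capacity gap, so downtime) and blue/green (no downtime, but the CNAME swap changes which environment the URL points to).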

The company is preparing for a substantial public launch of a social media site on AWS. The site runs on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB instance. The site performs a large number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you notice that there is read contention on RDS MySQL. Which are the best practices to meet these requirements? (Choose 2)


Options are :

  • Deploy ElastiCache in-memory caches running in each Availability Zone
  • Increase the size of the RDS MySQL instance and implement provisioned IOPS
  • Implement sharding to distribute the load across multiple RDS MySQL instances
  • Add an RDS MySQL read replica in each Availability Zone

Answer : Deploy ElastiCache in-memory caches running in each Availability Zone. Add an RDS MySQL read replica in each Availability Zone.
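The caching half of the answer relieves read contention via the cache-aside pattern: the application checks the cache first and only falls through to the database on a miss. A minimal sketch, with plain dicts standing in for ElastiCache and RDS MySQL:

```python
# Minimal cache-aside sketch: how an ElastiCache node relieves read
# contention on the primary RDS MySQL instance. The "database" and
# "cache" here are plain dicts standing in for the real services.
database = {"user:1": "alice", "user:2": "bob"}   # stand-in for RDS MySQL
cache = {}                                        # stand-in for ElastiCache

def get_user(key):
    if key in cache:                 # cache hit: no load reaches the database
        return cache[key], "cache"
    value = database[key]            # cache miss: one read hits the DB
    cache[key] = value               # populate the cache for next time
    return value, "db"

print(get_user("user:1"))  # first read comes from the database
print(get_user("user:1"))  # repeat read is served from the cache
```

Read replicas handle the reads that do reach the database tier; the cache absorbs the repeat reads before they get there.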

A user is trying to save costs on AWS services. Which of the options below does not help save costs? Please choose:


Options are :

  • Delete unused EBS volumes once the instances are terminated
  • Release the Elastic IP if it is not required once the instances are terminated
  • Delete the Auto Scaling launch configuration after the instances are terminated
  • Delete the AWS ELB after the instances are terminated
  • None

Answer : Delete the Auto Scaling launch configuration after the instances are terminated

AWS DevOps Engineer Professional Practice Final Exam Set 10

You have several CloudFormation stacks in your IT organization. Which of the following commands will help you see all the CloudFormation stacks that have a completed status? Please choose:


Options are :

  • complete-stacks
  • list-stacks
  • list-templates
  • None
  • describe-stacks

Answer : list-stacks
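The actual command is roughly `aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE`. A local illustration of the status filtering it performs (the sample stack summaries below are made up):

```python
# The real command is roughly:
#   aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE
# Below is only a local illustration of the status filtering that
# list-stacks performs; the sample stack summaries are invented.
stacks = [
    {"StackName": "network", "StackStatus": "CREATE_COMPLETE"},
    {"StackName": "app",     "StackStatus": "UPDATE_IN_PROGRESS"},
    {"StackName": "db",      "StackStatus": "CREATE_COMPLETE"},
]

def list_stacks(summaries, status_filter):
    return [s["StackName"] for s in summaries
            if s["StackStatus"] in status_filter]

print(list_stacks(stacks, ["CREATE_COMPLETE"]))  # ['network', 'db']
```

Unlike describe-stacks, list-stacks also returns deleted stacks and accepts this status filter.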

Which of the following is not a component of Elastic Beanstalk?


Options are :

  • Environment
  • None
  • Application version
  • Docker
  • Application

Answer : Docker

You want to build and deploy code that is hosted in a Git repository. Which of the following additional services can help meet this requirement?


Options are :

  • Use the CodeCommit service
  • Use the CodeBuild service
  • None
  • Use the SQS service
  • Use the CodePipeline service

Answer : Use the CodeBuild service

AWS Solutions Architect Associate 2019 with Practice Test Set 7

You are currently using SQS to deliver messages to EC2 instances. You need to send messages that are larger than 5 MB. Which of the following can help you achieve this?


Options are :

  • Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.
  • Use SQS's support for message attachments and multipart uploads to Amazon S3.
  • Use Kinesis as a buffer stream for message bodies. Store the checkpoint ID of the placement in the Kinesis stream in SQS.
  • Use AWS EFS as a shared pool storage medium. Store file system pointers to the files on disk in the SQS message bodies.
  • None

Answer : Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.
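The Extended Client works around the 256 KB SQS message-size limit by storing large payloads in S3 and sending only a pointer through the queue. A sketch of that pattern, with dicts standing in for S3 and SQS (function names are illustrative, not the library's API):

```python
# Sketch of the pattern the Amazon SQS Extended Client Library implements:
# payloads over the SQS limit are stored in S3 and the queue carries only a
# pointer. The dicts below are stand-ins for the real S3 and SQS services.
import uuid

SQS_LIMIT = 256 * 1024          # SQS caps message bodies at 256 KB
s3_bucket, queue = {}, []       # stand-ins for S3 and SQS

def send_message(body):
    if len(body) <= SQS_LIMIT:
        queue.append({"body": body})
    else:
        key = str(uuid.uuid4())
        s3_bucket[key] = body                      # large payload goes to S3
        queue.append({"s3_pointer": key})          # queue carries a reference

def receive_message():
    msg = queue.pop(0)
    if "s3_pointer" in msg:                        # resolve pointer on receipt
        return s3_bucket[msg["s3_pointer"]]
    return msg["body"]

send_message("small job")
send_message("x" * (5 * 1024 * 1024))              # a 5 MB payload
print(receive_message())                           # 'small job'
print(len(receive_message()))                      # 5242880
```

The real library does this transparently on both the sending and receiving side, so consumers see the full 5 MB body.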

The company has recently started using Docker Cloud. This is a SaaS solution for managing Docker containers in the AWS cloud, and the solution provider is also hosted on the same cloud platform. A prerequisite for the SaaS solution is access to your AWS resources. Which of the following would allow the SaaS solution to work with your AWS resources in the most secure manner? Please choose:


Options are :

  • Create an IAM user within the enterprise account, assign the user an IAM policy that permits only the actions required by the SaaS application, then create a new access key and secret key for the user and supply these credentials to the SaaS provider.
  • None
  • From the AWS Management Console, navigate to the Security Credentials page and retrieve the access key and secret key for your account.
  • Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
  • Create an IAM role for EC2 instances, assign it a policy that allows only the actions required by the SaaS application, and provide the role ARN to the SaaS provider to use when launching instances of the application.

Answer : Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application. Many SaaS platforms can access AWS resources through a cross-account role created in your AWS account; if you go to Roles in the IAM console, you will see the option to create a cross-account role.
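A hedged example of the trust policy behind a cross-account role like this: your account allows the SaaS provider's account to call sts:AssumeRole, ideally with an ExternalId condition. The account ID and ExternalId below are placeholders, not real values:

```python
# Hedged example of a cross-account trust policy: a role in your account
# that the SaaS provider's account may assume. The account ID and
# ExternalId below are placeholders.
import json

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # SaaS account (placeholder)
        "Action": "sts:AssumeRole",
        # ExternalId guards against the confused-deputy problem
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

print(json.dumps(trust_policy, indent=2))
```

A separate permissions policy on the same role then grants only the actions the SaaS application needs.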

The company owns a number of AWS accounts: currently one development account and one production account. You must grant the development team access to an S3 bucket in the production account. How can you achieve this?


Options are :

  • Create an IAM user in the production account that allows users of the development account (the trusted account) to use the S3 bucket in the production account.
  • When creating a role, define the development account as a trusted entity and attach a permissions policy that allows trusted users to update the S3 bucket.
  • Use web identity federation with a third-party identity provider so that AWS STS grants temporary credentials mapped to an IAM user in the production account.
  • Create a cross-account IAM role in the production account that allows users of the development account to access the S3 bucket in the production account.
  • None

Answer : Create a cross-account IAM role in the production account that allows users of the development account to access the S3 bucket in the production account.

AWS DevOps Engineer Professional Certified Practice Test Set 2

You are currently planning on using Auto Scaling to launch instances that have your application installed. Which of the below methods will help ensure the instances are up and running in the shortest span of time to take in traffic from users?


Options are :

  • Use UserData launch scripts to install the software.
  • None
  • Use Docker containers to launch the software.
  • Use AMIs which have the software already installed.
  • Log in to each instance and install the software.

Answer : Use AMIs which have the software already installed.

You are working for a company that has an on-premises infrastructure and has now decided to move to AWS. The plan is to move the development environment first. There are a lot of custom-based applications that the development community needs to work with. Which of the following can help the development team deploy these applications? (Choose 2)


Options are :

  • Create Docker containers for the custom application components.
  • Use CloudFormation to deploy the Docker containers.
  • Use OpsWorks to deploy the Docker containers.
  • Use Elastic Beanstalk to deploy the Docker containers.

Answer : Create Docker containers for the custom application components. Use Elastic Beanstalk to deploy the Docker containers.

If you are trying to debug an AWS Elastic Beanstalk worker-tier environment and are having trouble with jobs failing in the queue, what should you configure?


Options are :

  • Specify a Dead Letter Queue.
  • Specify a CNAME.
  • Specify Rolling deployments.
  • None
  • Configure Enhanced Health Reporting.

Answer : Specify a Dead Letter Queue.
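A dead letter queue captures messages that repeatedly fail processing so they can be inspected instead of being retried forever. An illustrative SQS redrive policy plus a local simulation of its behaviour (the queue ARN is a placeholder):

```python
# Illustrative redrive policy for a dead letter queue: after maxReceiveCount
# failed receives, SQS moves the message to the DLQ for inspection.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:worker-dlq",  # placeholder ARN
    "maxReceiveCount": 3,
}

# Local simulation of the redrive decision:
def route(receive_count, max_receive=redrive_policy["maxReceiveCount"]):
    """Failed messages retry until the receive count exceeds the limit."""
    return "dlq" if receive_count > max_receive else "retry"

print(route(2))  # retry
print(route(4))  # dlq
```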

Certification : AWS Certified Solutions Architect Associate Practice Exams Set 11

A user is running an application on RDS. The user has enabled the Multi-AZ feature on an MS SQL RDS DB instance. During a planned AWS outage, how is it ensured that the switch to the standby DB copy does not affect access from the application?


Options are :

  • The switch-over is a hardware change, so RDS does not have to worry about access
  • RDS uses an internal IP that redirects all requests to the new DB
  • None
  • RDS keeps both DBs operating independently and the user must switch over manually
  • RDS uses DNS to switch over to the standby replica for a seamless transition

Answer : RDS uses DNS to switch over to the standby replica for a seamless transition

What would you use in a CloudFormation template to spin up different instance sizes based on the type of environment?


Options are :

  • None
  • mappings
  • outputs
  • resources
  • conditions

Answer : conditions
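A hedged sketch of a template fragment that picks an instance size by environment type using a condition and Fn::If (the parameter, condition, and resource names are made up), expressed here as a Python dict with a tiny local evaluator:

```python
# Hedged sketch of a CloudFormation fragment that selects an instance size
# via Conditions and Fn::If. Names (EnvType, IsProd, WebServer) are made up.
template = {
    "Parameters": {
        "EnvType": {"Type": "String", "AllowedValues": ["prod", "dev"]}
    },
    "Conditions": {
        "IsProd": {"Fn::Equals": [{"Ref": "EnvType"}, "prod"]}
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": {"Fn::If": ["IsProd", "m5.large", "t3.micro"]}
            },
        }
    },
}

# A tiny local evaluator for the Fn::If above, just for illustration:
def instance_type(env):
    branch = template["Resources"]["WebServer"]["Properties"]["InstanceType"]
    return branch["Fn::If"][1] if env == "prod" else branch["Fn::If"][2]

print(instance_type("prod"))  # m5.large
print(instance_type("dev"))   # t3.micro
```

Mappings with FindInMap are a common alternative for the same goal; this question's answer key uses conditions.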


AWS Solutions Architect - Associate SAA-C01 Practice Exams Set 27

A company tags employee ID photos automatically using artificial neural networks (ANNs), which run on GPUs using C++. The system processes millions of images at a time, but only about 3 times a day on average. The images are uploaded in batches to an S3 bucket you control, after which the client publishes a JSON manifest into another S3 bucket you control as well. Each image takes 10 milliseconds to process on a full GPU. Your neural network software requires 5 minutes to bootstrap. Image tags are JSON objects, and you need to publish them back to the source bucket. Which of these is the best system architecture for this system?


Options are :

  • Create an S3 notification that publishes to an AWS Lambda function on manifest upload. Have the Lambda function create a CloudFormation stack containing the logic to build an auto-scaling worker tier of G2 EC2 instances with the ANN code on each instance. Create an SQS queue of the images in the manifest. Tear the stack down when the queue is empty.
  • None
  • Deploy the ANN code to AWS Lambda as a bundled binary for the C++ extension. Create an S3 notification on manifest upload that publishes to a second AWS Lambda function running controller code. This controller code publishes all the images in the manifest to AWS Kinesis. Your ANN code Lambda function uses Kinesis as an event source. The system automatically scales to the contents of the current manifest.
  • Create an OpsWorks stack with two layers. The first contains lifecycle scripts for launching and bootstrapping an HTTP API on G2 instances for image processing, and the second is an always-on instance that monitors the manifest S3 bucket for new files. When a new file is detected, it requests instances to be launched on the ANN layer. When the instances are up and the HTTP API is ready, it submits processing requests to the individual instances.
  • Create an Auto Scaling, load-balanced Elastic Beanstalk worker-tier environment. Deploy the ANN code to G2 instances in this tier. Set the desired capacity to 1. Have the code periodically check S3 for new manifests. When a new manifest is detected, push all of the images in the manifest into the SQS queue associated with the Elastic Beanstalk worker tier.

Answer : Create an S3 notification that publishes to an AWS Lambda function on manifest upload. Have the Lambda function create a CloudFormation stack containing the logic to build an auto-scaling worker tier of G2 EC2 instances with the ANN code on each instance. Create an SQS queue of the images in the manifest. Tear the stack down when the queue is empty.

You have created a DynamoDB-backed application that needs to support thousands of users. You need to make sure that each user can only access their own data in a particular table. Many users already have accounts with a third-party identity provider, such as Facebook, Google, or Login with Amazon. How would you implement this requirement? (Choose 2)


Options are :

  • Create an IAM role that has specific access to the DynamoDB table.
  • Create an IAM user for every user so that they can use the application.
  • Use a third-party identity provider such as Google, Facebook, or Amazon so that users can become IAM users with access to the application.
  • Use web identity federation and register your application with a third-party identity provider such as Google, Amazon, or Facebook.

Answer : Create an IAM role that has specific access to the DynamoDB table. Use web identity federation and register your application with a third-party identity provider such as Google, Amazon, or Facebook.
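The per-user restriction is typically enforced with a fine-grained access policy on the federated role using the dynamodb:LeadingKeys condition, which limits each caller to items whose partition key matches their identity. A hedged example (the table name and region are placeholders):

```python
# Hedged example of a fine-grained DynamoDB access policy for a federated
# role: each user may only touch items keyed by their own identity.
# Table ARN is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Partition key must equal the federated user's identity
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        },
    }],
}

cond = policy["Statement"][0]["Condition"]["ForAllValues:StringEquals"]
print(cond["dynamodb:LeadingKeys"])
```

The `${www.amazon.com:user_id}` policy variable is substituted at request time with the identity returned by the provider, so one role serves thousands of users.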

You are responsible for designing several CloudFormation templates in your organization. You need to make sure that no one can update the production-based resources in a stack. How can this be achieved effectively?


Options are :

  • Use IAM to protect the resources
  • Create tags for the resources, and then create IAM policies to protect the resources.
  • None
  • Use a stack-based policy to protect the production-based resources.
  • Use S3 bucket policies to protect the resources.

Answer : Use a stack-based policy to protect the production-based resources.
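A stack policy is a JSON document attached to the stack that denies update actions on protected resources while allowing the rest. A sketch (the logical resource ID is made up):

```python
# Sketch of a CloudFormation stack policy that blocks updates to one
# production resource while allowing updates elsewhere. The logical ID
# "ProductionDatabase" is made up for illustration.
import json

stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*",
         "Principal": "*", "Resource": "*"},
        {"Effect": "Deny", "Action": "Update:*",
         "Principal": "*", "Resource": "LogicalResourceId/ProductionDatabase"},
    ]
}

print(json.dumps(stack_policy, indent=2))
```

Because the explicit Deny wins, any stack update that would touch the protected resource fails, regardless of the caller's IAM permissions.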

Exam : AWS Certified Solutions Architect Associate

Which of the following is not a supported Elastic Beanstalk platform?


Options are :

  • Node.js
  • Kubernetes
  • Packer Builder
  • Java SE
  • Go

Answer : Kubernetes

The company has recently extended its datacenter into a VPC on AWS. There is a requirement for on-premises users to manage AWS resources from the AWS console, and you don't want to create IAM users for them again. Which of the options below fits your needs for authentication?


Options are :

  • Use an on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable your members to sign in to the AWS Management Console.
  • Use an on-premises SAML 2.0-compliant identity provider (IdP) to grant your members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
  • None
  • Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your members to sign in to the AWS Management Console.
  • Use web identity federation to retrieve temporary AWS security credentials to enable your members to sign in to the AWS Management Console.

Answer : Use an on-premises SAML 2.0-compliant identity provider (IdP) to grant your members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.

You are responsible for designing several CloudFormation templates in your organization. You need to make changes to stack resources from time to time based on requirements. How can you check the impact of the changes on the resources in a CloudFormation stack before deploying the changes to the stack?


Options are :

  • Use CloudFormation change sets to check the effect of the changes.
  • Use CloudFormation rolling updates to check the effect of the changes.
  • None
  • Use CloudFormation stack policies to check the effect of the changes.
  • There is no way to check this. You need to verify the effect beforehand.

Answer : Use CloudFormation change sets to check the effect of the changes.

AWS BDS-C00 Certified Big Data Speciality Practice Test Set 4

You are a DevOps Engineer for a large organization. The company wants to start using CloudFormation templates to build its AWS resources. The templates need to cater to requirements from different departments, such as networking, security, application, etc. What is the best way to architect these CloudFormation templates?


Options are :

  • None
  • Consider using Elastic Beanstalk to create the environments, since CloudFormation is not built for this kind of customization.
  • Use a single CloudFormation template, as this would reduce the maintenance overhead for the templates themselves.
  • Create separate logical templates, for example a separate template for networking, security, application, etc. Then nest the relevant templates.
  • Consider using OpsWorks to create the environments, since CloudFormation is not built for this kind of customization.

Answer : Create separate logical templates, for example a separate template for networking, security, application, etc. Then nest the relevant templates.
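Nesting is done with AWS::CloudFormation::Stack resources in a parent template that point at the child templates in S3. A hedged sketch of such a parent template as a Python dict (the S3 URLs and logical names are placeholders):

```python
# Hedged sketch of a parent template that nests per-department templates
# via AWS::CloudFormation::Stack. The S3 URLs below are placeholders.
parent_template = {
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL":
                "https://s3.amazonaws.com/example-bucket/network.yaml"},
        },
        "SecurityStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL":
                "https://s3.amazonaws.com/example-bucket/security.yaml"},
        },
        "ApplicationStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {"TemplateURL":
                "https://s3.amazonaws.com/example-bucket/application.yaml"},
        },
    }
}

nested = [r for r in parent_template["Resources"].values()
          if r["Type"] == "AWS::CloudFormation::Stack"]
print(len(nested))  # 3
```

Each department owns its child template, while the parent wires their outputs and parameters together.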

An IT company currently hosts a production environment in Elastic Beanstalk. You understand that the Elastic Beanstalk service provides a facility known as managed updates for the minor and patch version updates that the system needs from time to time. The IT administrator is concerned about the impact these updates would have on the system. What can you say about how the Elastic Beanstalk service performs managed updates?


Options are :

  • Managed updates can be applied in a configurable weekly maintenance window
  • Elastic Beanstalk applies managed updates with no reduction in capacity
  • Elastic Beanstalk applies managed updates with no downtime
  • All the above
  • None

Answer : All the above

You are using lifecycle hooks in your Auto Scaling group. Because there is a lifecycle hook, the instance is put into the Pending:Wait state, which means it is not yet able to handle traffic. While the instance is in the wait state, other scaling actions are suspended. After some time, the instance state changes to Pending:Proceed and finally InService, at which point instances that are part of the Auto Scaling group can begin taking traffic. But note that the bootstrap process may finish much earlier, long before the state changes from Pending:Wait. What can you do to ensure instances are placed into the correct state as soon as the bootstrap process is complete? Please choose:


Options are :

  • Use the complete-lifecycle-action call to complete the lifecycle action. Run the command from a second EC2 instance.
  • Use the complete-lifecycle-action call to complete the lifecycle action. Run the command from the command line.
  • Use the complete-lifecycle-action call to complete the lifecycle action. Run the command from an SQS queue.
  • Use the complete-lifecycle-action call to complete the lifecycle action. Run the command from a Simple Notification Service topic.
  • None

Answer : Use the complete-lifecycle-action call to complete the lifecycle action. Run the command from the command line.
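In practice the last step of the bootstrap script runs something like `aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE ...` so the instance does not sit in Pending:Wait until the hook times out. A local simulation of just the state transitions (the function below is illustrative, not the AWS API):

```python
# Local illustration of the launch lifecycle hook state machine. The real
# transition is triggered by running complete-lifecycle-action from the
# instance's bootstrap script; this only mirrors the states.
def launch_state(bootstrap_done, complete_action_called):
    state = "Pending:Wait"               # the lifecycle hook holds the instance here
    if bootstrap_done and complete_action_called:
        state = "Pending:Proceed"        # complete-lifecycle-action fired
        state = "InService"              # instance starts taking traffic
    return state

print(launch_state(bootstrap_done=True, complete_action_called=False))  # still waiting
print(launch_state(bootstrap_done=True, complete_action_called=True))   # InService
```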

AWS Certified Cloud Practitioner 6 full practice tests 2020 Set 9

What does the following resource in a CloudFormation template do? Select the best answer.

SNSTopic:
  Type: AWS::SNS::Topic
  Properties:
    Subscription:
      - Protocol: sqs
        Endpoint: { Fn::GetAtt: [SQSQueue, Arn] }


Options are :

  • Creates an SNS topic with an SQS subscription whose endpoint is passed in as a template parameter
  • Creates an SNS topic and then invokes a call to create an SQS queue with the logical resource name SQSQueue
  • None
  • Creates an SNS topic and adds a subscription whose endpoint is the ARN of the SQS resource created under the logical name SQSQueue
  • Creates an SNS topic that enables SQS subscription endpoints

Answer : Creates an SNS topic and adds a subscription whose endpoint is the ARN of the SQS resource created under the logical name SQSQueue
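Rebuilding the resource from the question as a Python dict makes its effect easier to see: the subscription endpoint is not a literal value but a GetAtt reference to the Arn attribute of the queue declared elsewhere in the template under the logical name SQSQueue:

```python
# The question's resource, reconstructed as a dict: an SNS topic whose
# subscription endpoint is the ARN of the SQSQueue resource declared
# elsewhere in the same template.
sns_topic = {
    "SNSTopic": {
        "Type": "AWS::SNS::Topic",
        "Properties": {
            "Subscription": [{
                "Protocol": "sqs",
                "Endpoint": {"Fn::GetAtt": ["SQSQueue", "Arn"]},
            }]
        },
    }
}

sub = sns_topic["SNSTopic"]["Properties"]["Subscription"][0]
print(sub["Endpoint"]["Fn::GetAtt"])  # ['SQSQueue', 'Arn']
```

Fn::GetAtt resolves at stack creation time, which is why the topic ends up subscribed to the queue's real ARN without the template author ever typing it.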

Which of the following features of an Auto Scaling group ensures that additional instances are neither launched nor terminated before the previous scaling activity takes effect?


Options are :

  • scaling policies
  • cooldown period
  • ramp-up time
  • None
  • termination policy

Answer : cooldown period
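The cooldown period simply suppresses further scaling actions until the previous one has had time to take effect. A minimal local sketch of that timing check (the 300-second default is the real Auto Scaling default; the function itself is illustrative):

```python
# Local sketch of the cooldown period: a scaling action requested inside
# the cooldown window is ignored so the previous action can take effect.
COOLDOWN_SECONDS = 300  # the Auto Scaling default cooldown

def should_scale(now, last_scaling_time):
    """Allow a new scaling action only after the cooldown has elapsed."""
    return (now - last_scaling_time) >= COOLDOWN_SECONDS

print(should_scale(now=100, last_scaling_time=0))  # False: still cooling down
print(should_scale(now=600, last_scaling_time=0))  # True: cooldown elapsed
```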

You have an application hosted on AWS that sits on EC2 instances behind an Elastic Load Balancer. You have added a new feature to the application and are now receiving complaints from users that the site has a slow response. Which of the below steps can you carry out to help identify the problem?


Options are :

  • Check the Elastic Load Balancer logs
  • Use CloudTrail to log all API calls, then traverse the log files to locate the problem
  • Create custom CloudWatch metrics that are relevant to the key features of your application
  • None
  • Use CloudWatch to monitor CPU utilization to see the times when the CPU peaked

Answer : Create custom CloudWatch metrics that are relevant to the key features of your application

AWS BDS-C00 Certified Big Data Speciality Practice Test Set 5

Your security officer has told you that you need to tighten up the logging of all events in your AWS account. He wants to be able to access all account events across all regions quickly and as simply as possible. He also wants to make sure he is the only person who has access to these events, in the safest way possible. Which of the following would be the best solution to ensure his demands are met? Choose the correct answer from the options below:


Options are :

  • Use CloudTrail to log all events to one S3 bucket. Make this S3 bucket only accessible by your security officer, with a bucket policy that restricts access to his user only, and also add an IAM policy for a further level of security.
  • Use CloudTrail to log all events to an Amazon Glacier vault. Make sure the vault access policy only grants access to the security officer's IP address.
  • Use CloudTrail to log all events to a separate S3 bucket in each region, since CloudTrail cannot write to a bucket in a different region. Use IAM policies and bucket policies on all the different buckets.
  • Use CloudTrail to send all API calls to CloudWatch and send an email to the security officer for every API call. Make sure the emails are encrypted.
  • None

Answer : Use CloudTrail to log all events to one S3 bucket. Make this S3 bucket only accessible by your security officer, with a bucket policy that restricts access to his user only, and also add an IAM policy for a further level of security.
