AWS DevOps Engineer Professional Practice Final Exam Set 13

You work for an insurance company and are responsible for the day-to-day operation of the company's website, which provides insurance quotes to members of the public. Your company wants to use the application logs generated by the system to better understand customer behavior. Industry regulations also require that you keep all application logs indefinitely in order to investigate fraud in the future. You have been tasked with designing a log management system with the following requirements: all log data must be preserved, even during unplanned instance failures; the Customer Insight team requires immediate access to the logs from the last seven days; and the fraud investigation team requires access to all historical logs, but can wait up to 24 hours for those logs to become available. How do you meet these requirements in a cost-effective manner? Select three answers from the options below.


Options are :

  • Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. (Correct)
  • Create an Amazon S3 lifecycle policy to move the log files from Amazon S3 to Amazon Glacier after seven days. (Correct)
  • Set the application to write its logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that transfers the logs from the instance to Amazon S3 every hour. (Correct)
  • Write a script that is configured to run when the instance is stopped or terminated and that will upload any remaining logs from the instance to Amazon S3.
  • Set the application to write its logs to the instance's default Amazon EBS boot volume, because this storage already exists. Create a script that transfers the logs from the instance to Amazon S3 every hour.
  • Set the application to write its logs to the instance's ephemeral (instance store) disk, because this storage is free and has good write performance. Create a script that transfers the logs to Amazon S3 once an hour.

Answer : Create a housekeeping script that runs on a T2 micro instance managed by an Auto Scaling group for high availability. Create an Amazon S3 lifecycle policy to move the log files from Amazon S3 to Amazon Glacier after seven days. Set the application to write its logs to a separate Amazon EBS volume with the "delete on termination" field set to false. Create a script that transfers the logs from the instance to Amazon S3 every hour.
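The seven-day archival rule in the correct answers maps directly onto an S3 lifecycle configuration. The following is a minimal sketch; the `logs/` prefix and rule ID are assumptions for illustration, not details from the question:

```json
{
  "Rules": [
    {
      "ID": "ArchiveLogsAfterSevenDays",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 7, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

A configuration of this shape can be applied with `aws s3api put-bucket-lifecycle-configuration`. Objects stay immediately readable for the Customer Insight team during their first seven days, then transition to Glacier, where retrieval within 24 hours satisfies the fraud team's requirement.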

As part of your continuous deployment process, your application undergoes an I/O load test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that the I/O load tests yield the correct results in a repeatable manner?


Options are :

  • Ensure that snapshots of the Amazon EBS volumes are created as a backup.
  • Ensure that the Amazon EBS volumes are pre-warmed by reading all of their blocks before the test. (Correct)
  • None
  • Ensure that the I/O block sizes for the test are selected at random.
  • Ensure that the Amazon EBS volumes are encrypted.

Answer : Ensure that the Amazon EBS volumes are pre-warmed by reading all of their blocks before the test.

AWS DevOps Engineer Professional Practice Final Exam Set 9

A company has developed a web application hosted in an Amazon S3 bucket configured for static website hosting. The application uses the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that the API keys used to access the data in DynamoDB are kept secure?


Options are :

  • Store the AWS access keys in the S3 bucket's website configuration so that the application can query them for access.
  • None
  • Keep the keys in global variables within the application and configure the application to use these credentials when making requests.
  • Create an IAM role with access to the specific DynamoDB tables, and assign it to the bucket hosting the site.
  • Configure web identity federation with an IAM role that grants access to the required DynamoDB resources, and have the application request temporary credentials. (Correct)

Answer : Configure web identity federation with an IAM role that grants access to the required DynamoDB resources, and have the application request temporary credentials.
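With web identity federation, the browser exchanges a token from an identity provider for temporary AWS credentials by calling `sts:AssumeRoleWithWebIdentity`, so no long-lived API keys are ever shipped to the client. A sketch of the trust policy such a role might use, assuming Amazon Cognito as the provider (the identity pool ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "cognito-identity.amazonaws.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "us-east-1:EXAMPLE-identity-pool-id"
        }
      }
    }
  ]
}
```

Permissions policies attached to the same role would then scope access down to the specific DynamoDB table the application reads.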

You have a set of EC2 instances hosted in AWS. You have created a role named DemoRole with the appropriate policy attached, but you cannot associate that role with an instance. Why is this the case?


Options are :

  • You cannot assign a role to an instance unless you also create a user and associate it with that particular role
  • You cannot assign a role to an instance unless you also create a user group and associate it with that particular role.
  • None
  • You cannot associate an IAM role with an EC2 instance
  • You need to create an instance profile and associate it with that particular role. (Correct)

Answer : You need to create an instance profile and associate it with that particular role.
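The distinction in the correct answer is that EC2 consumes roles through an instance profile, which is a container for the role. A minimal CloudFormation sketch (resource names reuse the question's DemoRole and are otherwise illustrative):

```json
{
  "Resources": {
    "DemoRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": { "Service": "ec2.amazonaws.com" },
              "Action": "sts:AssumeRole"
            }
          ]
        }
      }
    },
    "DemoInstanceProfile": {
      "Type": "AWS::IAM::InstanceProfile",
      "Properties": {
        "Roles": [ { "Ref": "DemoRole" } ]
      }
    }
  }
}
```

The instance is then launched with `DemoInstanceProfile`, not with the role directly. The AWS console creates the profile for you automatically, which is why the extra step is easy to miss when working from the CLI or CloudFormation.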

Your Auto Scaling group rotates Amazon Elastic Compute Cloud (EC2) instances so the application can quickly scale up and down in response to load within a 10-minute window; however, when the load spikes, you start to see problems in your configuration management system, where previously terminated Amazon EC2 resources continue to appear active. What would be a reliable and efficient way to handle the deregistration of Amazon EC2 resources in your configuration management system? Choose two answers from the options below.


Options are :

  • Write a small script that runs on the Amazon EC2 instance at shutdown to deregister the resource from the configuration management system. (Correct)
  • Write a script that runs as a daily cron job on an Amazon EC2 instance, calls the EC2 Auto Scaling API to describe the Auto Scaling group, and removes terminated instances from the configuration management system. (Correct)
  • Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation.
  • Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions, with a script that listens for new messages and removes terminated instances from the configuration management system.

Answer : Write a small script that runs on the Amazon EC2 instance at shutdown to deregister the resource from the configuration management system. Write a script that runs as a daily cron job on an Amazon EC2 instance, calls the EC2 Auto Scaling API to describe the Auto Scaling group, and removes terminated instances from the configuration management system.

AWS Solutions Architect - Associate SAA-C01 Practice Exams Set 21

You are a DevOps Engineer for your company. You have been asked to create a deployment solution that is cost-effective with minimal downtime. How do you achieve this? Select two answers from the options below.


Options are :

  • Use the UpdatePolicy attribute to control how CloudFormation handles updates to the Auto Scaling group resource (Correct)
  • After each new stack is in use, tear down the old stack
  • Re-deploy the application with Elastic Beanstalk using a CloudFormation template
  • Re-deploy with a CloudFormation template; define an update policy for the Auto Scaling groups in your CloudFormation template (Correct)

Answer : Use the UpdatePolicy attribute to control how CloudFormation handles updates to the Auto Scaling group resource. Re-deploy with a CloudFormation template; define an update policy for the Auto Scaling groups in your CloudFormation template.
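The UpdatePolicy attribute referenced in both correct answers is attached to the Auto Scaling group resource itself. A sketch of a rolling update, assuming a launch configuration named `WebLaunchConfig` elsewhere in the same template:

```json
{
  "Resources": {
    "WebServerGroup": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "UpdatePolicy": {
        "AutoScalingRollingUpdate": {
          "MinInstancesInService": 2,
          "MaxBatchSize": 1,
          "PauseTime": "PT5M"
        }
      },
      "Properties": {
        "AvailabilityZones": { "Fn::GetAZs": "" },
        "LaunchConfigurationName": { "Ref": "WebLaunchConfig" },
        "MinSize": "2",
        "MaxSize": "4"
      }
    }
  }
}
```

Keeping `MinInstancesInService` above zero is what delivers the minimal-downtime behavior the question asks for: CloudFormation replaces instances one batch at a time instead of all at once.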

You are using a load testing tool to test your application hosted on AWS. While testing your Amazon RDS MySQL DB instance, you find that once you hit 100% CPU utilization on it, the application stops responding. Your application is read-heavy. What methods can scale your data tier to meet the application's needs? Choose three answers from the options below:


Options are :

  • Use ElastiCache in front of your Amazon RDS DB instance to cache common queries. (Correct)
  • Add your Amazon RDS DB instance to an Auto Scaling group and configure a CloudWatch metric based on CPU utilization.
  • Shard your data set across multiple Amazon RDS DB instances. (Correct)
  • Add Amazon RDS DB read replicas, and have your application direct read queries to them. (Correct)
  • Use an Amazon SQS queue to throttle the requests going to your Amazon RDS DB instance.
  • Enable Multi-AZ for your Amazon RDS DB instance.

Answer : Use ElastiCache in front of your Amazon RDS DB instance to cache common queries. Shard your data set across multiple Amazon RDS DB instances. Add Amazon RDS DB read replicas, and have your application direct read queries to them.
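The read-replica option from the correct answers can also be declared in CloudFormation by pointing a second DB instance at the primary. A sketch (the source identifier and instance class are placeholders):

```json
{
  "Resources": {
    "ReadReplica": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {
        "SourceDBInstanceIdentifier": "mydb-primary",
        "DBInstanceClass": "db.m3.large"
      }
    }
  }
}
```

The application then sends its read-heavy queries to the replica's endpoint while writes continue to go to the primary.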

You are using Elastic Beanstalk to manage your application. You have a SQL script that needs to be executed only once, irrespective of the number of EC2 instances you are running. How can you do this?


Options are :

  • Use an Elastic Beanstalk container command within a configuration file to run the script, making sure that the "leader_only" flag is set to false.
  • Use the Elastic Beanstalk version configuration file to run the script, making sure that the "leader_only" flag is set to true.
  • Use an Elastic Beanstalk container command within a configuration file to run the script, making sure that the "leader_only" flag is set to true. (Correct)
  • Use an Elastic Beanstalk "leader command" within the configuration file to run the script, making sure that the "container only" flag is set to true.
  • None

Answer : Use an Elastic Beanstalk container command within a configuration file to run the script, making sure that the "leader_only" flag is set to true.
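An `.ebextensions` configuration file expressing the correct answer might look like the sketch below (shown in JSON, which Elastic Beanstalk accepts as valid YAML; the command and script path are placeholders):

```json
{
  "container_commands": {
    "01_run_schema_script": {
      "command": "bash scripts/apply-schema-once.sh",
      "leader_only": true
    }
  }
}
```

With `leader_only` set to true, Elastic Beanstalk runs the command only on a single designated leader instance during a deployment, so the SQL script executes exactly once regardless of how many instances are in the environment.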

AWS SOA-C00 Certified SysOps Administrator Associate Exam Set 9

Your Auto Scaling group currently sits behind an Elastic Load Balancer, and you need to retire all of the instances and replace them with a new instance type. What are two ways in which this can be achieved? Please choose:


Options are :

  • Attach an additional ELB to your Auto Scaling configuration and phase in newer instances while removing the older instances
  • Attach an additional Auto Scaling configuration behind the ELB and phase in newer instances while removing the older instances. (Correct)
  • Create a new launch configuration with the new instance type and terminate the instances that were launched with the previous configuration. (Correct)
  • Keep the oldest launch configuration and gradually relaunch all instances using the previous configuration.

Answer : Attach an additional Auto Scaling configuration behind the ELB and phase in newer instances while removing the older instances. Create a new launch configuration with the new instance type and terminate the instances that were launched with the previous configuration.

You have a multi-container environment that you want to deploy to AWS. Which of the following configuration files can be used to deploy a set of Docker containers as an Elastic Beanstalk application? Please choose:


Options are :

  • Dockerrun.aws.json (Correct)
  • Dockerrun.json
  • .ebextensions
  • None
  • Dockerfile

Answer : Dockerrun.aws.json
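For a multicontainer environment, `Dockerrun.aws.json` uses version 2 of the format. A minimal sketch with one illustrative container (the name, image, and ports are assumptions):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    }
  ]
}
```

Additional containers are simply appended to `containerDefinitions`; Elastic Beanstalk translates the file into an ECS task definition behind the scenes.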

You have a code repository that uses Amazon S3 as a data store. During a recent audit of your security controls, some concerns were raised about maintaining the integrity of the Amazon S3 bucket. Another concern was raised about securely deploying code from Amazon S3 to applications running on Amazon EC2 in a virtual private cloud. What are some measures that you can take to mitigate these concerns? Choose two answers from the options below.


Options are :

  • Add a condition to the Amazon S3 bucket policy to allow access only from Amazon EC2 instances with RFC 1918 IP addresses, and enable bucket versioning.
  • Use a configuration management service to pass AWS Identity and Access Management credentials via user data to your Amazon EC2 instances. Use these credentials to securely access the Amazon S3 bucket when deploying code. (Correct)
  • Use AWS Data Pipeline with multi-factor authentication to securely deploy code from the Amazon S3 bucket to your Amazon EC2 instances.
  • Use AWS Data Pipeline to lifecycle the data in the Amazon S3 bucket to Amazon Glacier on a weekly basis.
  • Add a condition to the Amazon S3 bucket policy statement that requires multi-factor authentication in order to delete artifacts, and enable bucket versioning.
  • Create an AWS Identity and Access Management role with permission to access the Amazon S3 bucket, and launch all of the application's Amazon EC2 instances with this role.

Answer : Use a configuration management service to pass AWS Identity and Access Management credentials via user data to your Amazon EC2 instances. Use these credentials to securely access the Amazon S3 bucket when deploying code.
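The integrity concern in the MFA option can be expressed as a bucket policy condition. A sketch, with the bucket name as a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteWithoutMFA",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [ "s3:DeleteObject", "s3:DeleteObjectVersion" ],
      "Resource": "arn:aws:s3:::example-code-bucket/*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

Combined with bucket versioning, this means deleted artifacts remain recoverable and deletions require an MFA-authenticated session.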

AWS DVA-C00 Certified Developer Associate Practice Exam Set 2

One of the instances in your Auto Scaling group has its health status revert to impaired. What will Auto Scaling do in this case?


Options are :

  • Terminates the instance and launches a new instance (Correct)
  • Send an SNS notification
  • None
  • Perform a health check cooldown before declaring that the instance has failed
  • Wait for the instance to become healthy before sending traffic

Answer : Terminates the instance and launches a new instance

You have multiple web servers in an Auto Scaling group behind a load balancer. Every hour, you want to filter and process the logs to collect information about visitors, and then put that information into durable storage so that reports can be generated. Web servers in the Auto Scaling group are constantly launched and terminated based on your scaling policy, but you do not want to lose any of the log data from these servers during a stop/start or termination of an instance by Auto Scaling. Which two approaches will meet these requirements? Choose two answers from the options below.


Options are :

  • On the web servers, create a scheduled task that runs a script to rotate and send the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a log transfer when the Amazon EC2 instance is stopped/terminated. Use Amazon Data Pipeline to process the data in Amazon Glacier and generate reports every hour.
  • Install an AWS Data Pipeline Logs Agent on every web server during cold start. Create a log group object in AWS Data Pipeline, and define filters to move the processed log data directly from the web servers to Amazon Redshift and generate reports every hour.
  • None
  • Install the Amazon CloudWatch Logs Agent on every web server during cold start. Create a CloudWatch log group, and define metric filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.
  • On the web servers, create a scheduled task that runs a script to rotate and send the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a log transfer when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move the log data from the Amazon S3 bucket to Amazon Redshift for processing, and run reports every hour. (Correct)

Answer : On the web servers, create a scheduled task that runs a script to rotate and send the logs to an Amazon S3 bucket. Ensure that the operating system shutdown procedure triggers a log transfer when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move the log data from the Amazon S3 bucket to Amazon Redshift for processing, and run reports every hour.

As an architect, you have chosen to use CloudFormation instead of OpsWorks or Elastic Beanstalk for deploying your enterprise applications. Unfortunately, you have discovered that there is a resource type that is not supported by CloudFormation. What can you do to get around this? Please choose:


Options are :

  • Use a configuration management tool such as Chef or Puppet.
  • Specify more descriptive resources and split the template into several templates using nested stacks.
  • Specify a custom resource and split the template into several templates using nested stacks.
  • None
  • Create a custom resource type, allowing the developer to model the unsupported resource as a custom resource that CloudFormation manages. (Correct)

Answer : Create a custom resource type, allowing the developer to model the unsupported resource as a custom resource that CloudFormation manages.
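A custom resource is declared like any other resource, with a `ServiceToken` pointing at the Lambda function (or SNS topic) that implements it. A sketch with placeholder names and ARN:

```json
{
  "Resources": {
    "AmiLookup": {
      "Type": "Custom::AmiLookup",
      "Properties": {
        "ServiceToken": "arn:aws:lambda:us-east-1:123456789012:function:ami-lookup",
        "Region": { "Ref": "AWS::Region" }
      }
    }
  }
}
```

CloudFormation calls the backing function on create, update, and delete, so the otherwise unsupported resource participates in the stack lifecycle like a native type.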

Certification : AWS Certified Solutions Architect Associate Practice Exams Set 3

You work for a startup that has developed a new photo-sharing application for mobile devices. In recent months, your application has grown in popularity; this has led to a decrease in application performance due to the increased load. The application has a two-tier architecture consisting of an Auto Scaling PHP application tier and a MySQL RDS instance, initially deployed with AWS CloudFormation. Your Auto Scaling group has a minimum value of 4 and a maximum value of 8. The desired capacity is now 8 because of the high CPU utilization of the instances. After some analysis, you are confident that the performance problems stem from a constraint in CPU capacity, while memory utilization remains low. You therefore decide to move from general-purpose M3 instances to compute-optimized C3 instances. How do you make this change while minimizing interruption to your end users?


Options are :

  • None
  • Sign in to the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all currently running instances.
  • Update the launch configuration defined in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type.
  • Update the launch configuration defined in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate. (Correct)
  • Sign in to the AWS Management Console, and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate.

Answer : Update the launch configuration defined in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate.

Your application is currently running on Amazon EC2 instances behind a load balancer. Your management has decided to use a blue/green deployment strategy. How do you implement this for each deployment? Please choose:


Options are :

  • None
  • Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2 instance from the load balancer, upgrade it, and test it, and then re-register it with the load balancer.
  • Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to.
  • Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing. (Correct)
  • Using AWS CloudFormation, create a test stack to validate the code, and then deploy the code to each production Amazon EC2 instance.

Answer : Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then switch DNS over to the new load balancer using Amazon Route 53 after testing.

You have been asked to use CloudFormation to maintain version control and achieve automation for the applications in your organization. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping cost down?


Options are :

  • Create multiple templates in one CloudFormation stack
  • Create separate templates based on functionality, and create nested stacks with CloudFormation. (Correct)
  • Combine all resources into one template for version control and automation purposes.
  • Use custom CloudFormation resources to handle dependencies between stacks
  • None

Answer : Create separate templates based on functionality, and create nested stacks with CloudFormation.
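Nested stacks are declared with the `AWS::CloudFormation::Stack` resource type, one per functional template. A sketch (the template URL and parameter are placeholders):

```json
{
  "Resources": {
    "NetworkStack": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/example-templates/network.json",
        "Parameters": { "EnvironmentName": "staging" }
      }
    }
  }
}
```

A parent template composed of such references lets each functional template be versioned and reused independently across environments.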

AWS SOA-C00 Certified SysOps Administrator Associate Exam Set 8

You are responsible for your company's large multi-tier Windows-based web application running on Amazon EC2 instances behind a load balancer. While reviewing metrics, you have begun noticing an upward trend of slow customer page load times. Your manager has asked you to come up with a solution to ensure that customer load time is not affected by too many requests per second. Which technique would you use to solve this problem?


Options are :

  • None
  • Re-deploy your infrastructure using an AWS CloudFormation template. Configure Elastic Load Balancing health checks to initiate a new AWS CloudFormation stack when health checks return failed.
  • Re-deploy your application using an Auto Scaling template. Configure the Auto Scaling template to spin up a new Elastic Beanstalk application when customer load time surpasses your threshold.
  • Re-deploy your infrastructure using an AWS CloudFormation template. Spin up a second AWS CloudFormation stack. Configure Elastic Load Balancing SpillOver functionality to spread any slow connections to the second AWS CloudFormation stack.
  • Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling. Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time. (Correct)

Answer : Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling. Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time.
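One way to express "scale on requests per second" is a target tracking policy on the ALB request count per target. This is a more modern mechanism than the era of the question, so treat it as an illustrative sketch; the group reference, resource label, and target value are placeholders:

```json
{
  "Resources": {
    "RequestRatePolicy": {
      "Type": "AWS::AutoScaling::ScalingPolicy",
      "Properties": {
        "AutoScalingGroupName": { "Ref": "WebServerGroup" },
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
          "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            "ResourceLabel": "app/my-alb/0123456789abcdef/targetgroup/my-tg/0123456789abcdef"
          },
          "TargetValue": 1000
        }
      }
    }
  }
}
```

Auto Scaling then adds or removes instances to hold the per-target request rate near the target, which keeps customer load time steady as traffic grows.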

A company is developing a variety of web applications, using multiple platforms and programming languages with different application dependencies. Each application must be developed and deployed quickly and be highly available to satisfy the business requirements. Which of the following methods should be used to deploy these applications rapidly?


Options are :

  • Use the AWS CloudFormation Docker import service to build and deploy the applications with high availability in multiple Availability Zones.
  • None
  • Develop the applications in Docker containers, and then deploy them to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing. (Correct)
  • Store each application's code in a DynamoDB table, and then use hooks to deploy it to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
  • Store each application's code in a Git repository, develop a custom package repository manager for each application's dependencies, and deploy to AWS OpsWorks in multiple Availability Zones.

Answer : Develop the applications in Docker containers, and then deploy them to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.

A company has a number of applications running on AWS. The company wants to develop a tool that notifies on-call teams immediately by e-mail when an alarm is triggered in the environment. You have several on-call teams that work different shifts, and the tool must handle notifying the correct teams at the correct times. How do you implement this solution?


Options are :

  • Create an Amazon SNS topic and an Amazon SQS queue. Configure the Amazon SQS queue as a subscriber to the Amazon SNS topic. Configure CloudWatch alarms to notify this topic when an alarm is triggered. Create an Amazon EC2 Auto Scaling group with both the minimum and desired number of instances configured to 0. Worker nodes in this group spawn when messages are added to the queue. The workers then use Amazon Simple Email Service to send messages to the on-call teams.
  • Create an Amazon SNS topic and configure your on-call teams' email addresses as subscribers. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to this new topic. Notifications will be sent to the on-call users when a CloudWatch alarm is triggered.
  • None
  • Create an Amazon SNS topic for each on-call group, and configure each of these with the team members' emails as subscribers. Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift. (Correct)
  • Create an Amazon SNS topic and configure your on-call teams' email addresses as subscribers. Create a secondary Amazon SNS topic for alarms and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the first topic so that on-call engineers receive alerts.

Answer : Create an Amazon SNS topic for each on-call group, and configure each of these with the team members' emails as subscribers. Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift.
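A per-team topic with email subscribers can be declared directly in CloudFormation. A sketch for one team (the topic name and address are placeholders):

```json
{
  "Resources": {
    "OnCallTeamATopic": {
      "Type": "AWS::SNS::Topic",
      "Properties": {
        "TopicName": "oncall-team-a",
        "Subscription": [
          { "Protocol": "email", "Endpoint": "team-a@example.com" }
        ]
      }
    }
  }
}
```

The application's shift logic then only has to choose which team topic ARN to publish to; SNS fans the message out to every subscribed address.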

AWS Certification

You have an application that consists of EC2 instances in an Auto Scaling group. During a specific period of time every day, traffic to your website increases, and users complain of poor application response time. You have configured the Auto Scaling group to launch one new EC2 instance when CPU utilization is greater than 60% for two consecutive periods of 5 minutes. What is the least cost-effective way to solve this problem?


Options are :

  • Increase the minimum number of instances in the Auto Scaling group (Correct)
  • Decrease the CPU utilization percentage threshold that triggers the launch of a new instance
  • None
  • Decrease the collection period to ten minutes
  • Decrease the number of consecutive collection periods

Answer : Increase the minimum number of instances in the Auto Scaling group

Your application uses CloudFormation to orchestrate its resources. During your testing phase before the application went live, your Amazon RDS instance type was changed, which caused the instance to be re-created, resulting in the loss of test data. How should you prevent this from happening in the future?


Options are :

  • Subscribe to the AWS CloudFormation "BeforeResourceUpdate" notification and call CancelStackUpdate if the resource identified is the Amazon RDS instance.
  • In the AWS CloudFormation template, set the DeletionPolicy attribute on the AWS::RDS::DBInstance resource to "Retain". (Correct)
  • Use an AWS CloudFormation stack policy to deny updates to the instance, and allow UpdateStack permission only to IAM principals that are exempted by the stack policy.
  • In the AWS CloudFormation template, create a parameter that lets users select the Amazon RDS instance type, and set the allowed values to include only the current instance type.
  • In the AWS CloudFormation template, set the AWS::RDS::DBInstance resource's DBInstanceClass property to be read-only.

Answer : In the AWS CloudFormation template, set the DeletionPolicy attribute on the AWS::RDS::DBInstance resource to "Retain".
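The DeletionPolicy attribute sits at the resource level, alongside `Type` and `Properties` rather than inside them. A sketch (all property values are placeholders):

```json
{
  "Resources": {
    "TestDatabase": {
      "Type": "AWS::RDS::DBInstance",
      "DeletionPolicy": "Retain",
      "Properties": {
        "Engine": "mysql",
        "DBInstanceClass": "db.m3.medium",
        "AllocatedStorage": "20",
        "MasterUsername": "admin",
        "MasterUserPassword": "change-me-example"
      }
    }
  }
}
```

With "Retain", CloudFormation leaves the underlying instance (and its data) in place when the resource is removed or replaced, instead of deleting it.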

If your application performs operations or workflows that take a long time to complete, what should you use to process the work reliably?


Options are :

  • Manage an ELB and a daemon process running on each instance
  • Manage AWS Lambda functions and a daemon process running on each instance
  • Manage an Amazon SNS topic and a daemon process running on each instance
  • None
  • Manage an Amazon SQS queue and a daemon process running on each instance (Correct)

Answer : Manage an Amazon SQS queue and a daemon process running on each instance

AWS SOA-C00 Certified SysOps Administrator Associate Exam Set 7

You have an application running a specific process that is critical to the application's functionality, and you have added a health check process to your Auto Scaling group. The instances show as healthy, but the application itself is not working as it should. What could be the problem with the health check, since it is still showing the instances as healthy?


Options are :

  • The health check is not configured correctly
  • None
  • You do not have a time range specified for the health check
  • It is not possible for the health check to monitor the process that the application runs
  • The health check is not checking the application process (Correct)

Answer : The health check is not checking the application process

Your development team wants account-level access to production instances in order to do live debugging in a very secure environment. Which of the following should you do?


Options are :

  • Place each developer's own public key in a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on every instance, and place the users' public keys in the appropriate accounts. (Correct)
  • None
  • Place the credentials provided by Amazon EC2 onto an encrypted USB drive, and physically share it with each developer so that the private key never leaves the office.
  • Place the credentials provided by Amazon Elastic Compute Cloud (EC2) into a secure Amazon Simple Storage Service (S3) bucket with encryption enabled. Assign AWS Identity and Access Management (IAM) users to each developer so they can download the credentials file.
  • Place an internally generated private key into a secure S3 bucket with server-side encryption using customer keys, use configuration management to create a service account on all the instances using this private key, and assign IAM users to each developer so they can download the file.

Answer : Place each developer's own public key in a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on every instance, and place the users' public keys in the appropriate accounts.

You have been asked to use CloudFormation to maintain version control and achieve automation for the applications in your organization. How can you best use CloudFormation to keep everything agile and maintain multiple environments while keeping cost down?


Options are :

  • Create multiple templates in one CloudFormation stack.
  • None
  • Combine all resources into one template for version control and automation purposes.
  • Use custom CloudFormation resources to handle dependencies between stacks
  • Create separate templates based on functionality, and create nested stacks with CloudFormation. (Correct)

Answer : Create separate templates based on functionality, and create nested stacks with CloudFormation.

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 2

After examining last quarter's monthly bills, management has noticed an increase in the overall Amazon bill. After researching this increase in cost, you find that one of your new services is making a lot of GET Bucket API calls to Amazon S3 in order to build a metadata cache of all the objects in the application's bucket. Your boss has asked you to come up with a new cost-effective way to reduce the number of these GET Bucket API calls. What process should you use to mitigate the cost?


Options are :

  • None
  • Upload all files to an ElastiCache file cache server. Update your application to read all file metadata from the ElastiCache file cache server, and configure the ElastiCache policy to push all files to Amazon S3 for long-term storage.
  • Use Amazon SNS. Create a notification for new Amazon S3 objects that automatically updates a new DynamoDB table, which stores all of the metadata about the new objects. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table. (Correct)
  • Update your Amazon S3 bucket's lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view the objects associated with the application's bucket.
  • Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB.

Answer : Use Amazon SNS. Create a notification for new Amazon S3 objects that automatically updates a new DynamoDB table, which stores all of the metadata about the new objects. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
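The S3-to-SNS hookup in the correct answer is configured as a bucket notification. A sketch of the notification configuration document (the topic ARN is a placeholder):

```json
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:s3-new-object",
      "Events": [ "s3:ObjectCreated:*" ]
    }
  ]
}
```

Applied with `aws s3api put-bucket-notification-configuration`, every new object publishes an event instead of forcing the application to poll with GET Bucket (List Objects) calls.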

You have deployed an Elastic Beanstalk application in a new environment, and you want to save the current state of the environment in a document. You want to be able to restore the environment to that state later, or possibly use it to create a new environment. You also want to make sure that you have a restore point. How can you achieve this?


Options are :

  • Saved configurations (Correct)
  • Use CloudFormation templates
  • None
  • Saved Templates
  • Configuration Management Templates

Answer : Saved configurations
