AWS SAP-C00 Certified Solution Architect Professional Exam Set 8

You are designing Internet connectivity for your VPC. The web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider? (Choose 2 answers)


Options are :

  • Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your web servers. Configure a Route53 CNAME record to your CloudFront distribution.
  • Assign EIPs to all web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover. (Correct)
  • Place all your web servers behind an ELB. Configure a Route53 CNAME to point to the ELB DNS name. (Correct)
  • Configure ELB with an EIP. Place all your web servers behind the ELB. Configure a Route53 A record that points to the EIP.
  • Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance public IP address.

Answer : Assign EIPs to all web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover. Place all your web servers behind an ELB. Configure a Route53 CNAME to point to the ELB DNS name.

Questions and Answers : AWS Certified Security Specialty

Your company is in the process of developing a next-generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform, ensuring the following requirements are met: provide the ability for real-time analytics of the inbound biometric data; ensure processing of the biometric data is highly durable, elastic, and parallel; the results of the analytic processing should be persisted for data mining. Which architecture outlined below will meet the initial requirements for the collection platform?


Options are :

  • Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis, and save the results to DynamoDB.
  • Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis, and save the results to a Microsoft SQL Server RDS instance.
  • Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR. (Correct)
  • Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline, and save the results to a Redshift cluster.

Answer : Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to a Redshift cluster using EMR.
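To see why Kinesis fits the stated ingest rate, a back-of-the-envelope shard sizing helps. This sketch assumes the per-shard limits of 1 MB/s and 1,000 records/s; the fleet size of 10,000 collars is a hypothetical number, while the 30 KB record every 2 seconds comes from the question.

```python
# Rough Kinesis shard sizing for the collar workload described above.
# Assumed per-shard ingest limits: 1 MB/s and 1,000 records/s.
import math

SHARD_BYTES_PER_SEC = 1_000_000
SHARD_RECORDS_PER_SEC = 1_000

def shards_needed(collars: int, record_kb: int = 30, interval_s: int = 2) -> int:
    records_per_sec = collars / interval_s
    bytes_per_sec = records_per_sec * record_kb * 1_000
    # The stream must satisfy both the byte limit and the record-count limit.
    return max(
        math.ceil(bytes_per_sec / SHARD_BYTES_PER_SEC),
        math.ceil(records_per_sec / SHARD_RECORDS_PER_SEC),
    )

# 10,000 collars -> 5,000 records/s at 150 MB/s: bytes, not record count,
# are the binding limit for these large records.
print(shards_needed(10_000))  # -> 150
```

With 30 KB records, throughput dominates the shard count, which is exactly the kind of elastic, parallel ingestion the correct answer calls for.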

You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50 KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable, and secure. How would you design a solution to meet the above requirements?


Options are :

  • Setup an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials.
  • Setup an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access system to manage security and access credentials.
  • Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access. (Correct)
  • Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.

Answer : Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access.
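The fine-grained access control piece of the correct answer can be sketched as an IAM policy (shown as a Python dict) that restricts a federated user to items whose partition key equals their web-identity user id. The table name, account id, and region are hypothetical; the `dynamodb:LeadingKeys` condition and the `${www.amazon.com:user_id}` substitution variable are the documented mechanism for this pattern.

```python
# Sketch of a fine-grained access policy for per-user DynamoDB items.
# Table/account/region are hypothetical placeholders.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserPreferences",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Items are only reachable when their partition key matches
                # the identity injected by STS web identity federation.
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        },
    }],
}

print(json.dumps(policy["Statement"][0]["Condition"], sort_keys=True))
```

The mobile app then talks to DynamoDB directly with temporary STS credentials; no proxy tier is needed, which is what makes the design cost-effective at 5 million users.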

Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? Choose 3 answers


Options are :

  • Encrypt data using native data encryption drivers at the file system level (Correct)
  • Implement SSL/TLS for all services running on the server
  • Do nothing, as EBS volumes are encrypted by default
  • Encrypt data inside your applications before storing it on EBS (Correct)
  • Implement third-party volume encryption tools (Correct)

Answer : Encrypt data using native data encryption drivers at the file system level. Encrypt data inside your applications before storing it on EBS. Implement third-party volume encryption tools.

AWS DVA-C00 Certified Developer Associate Practice Exam Set 14

You are designing a multi-platform web application for AWS. The application will run on EC2 instances and will be accessed from PCs, tablets, and smartphones. Supported accessing platforms are Windows, Mac OS, iOS, and Android. Separate sticky session and SSL certificate setups are required for different platform types. Which of the following describes the most cost-effective and performance-efficient architecture setup?


Options are :

  • Setup a hybrid architecture to handle session state and SSL certificates on-premises, and separate EC2 instance groups running web applications for different platform types running in a VPC.
  • Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs. (Correct)
  • Set up two ELBs: the first ELB handles SSL certificates for all platforms and the second ELB handles session stickiness for all platforms. For each ELB, run separate EC2 instance groups to handle the web application for each platform.
  • Set up one ELB for all platforms to distribute load among multiple instances under it. Each EC2 instance implements all functionality for a particular platform.

Answer : Assign multiple ELBs to an EC2 instance or group of EC2 instances running the common components of the web application, one ELB for each platform type. Session stickiness and SSL termination are done at the ELBs.

Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 answers)


Options are :

  • Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
  • Tagging each folder in the bucket
  • Setting up a federation proxy or identity provider (Correct)
  • Using AWS Security Token Service to generate temporary tokens (Correct)
  • Configuring an IAM role (Correct)

Answer : Setting up a federation proxy or identity provider. Using AWS Security Token Service to generate temporary tokens. Configuring an IAM role.

Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises environment, you used a dedicated group of servers to batch process this data, and used RabbitMQ, an open source messaging system, to get job information to the servers. Once processed, the data would go to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS archival storage and messaging services to minimize cost. Which is correct?


Options are :

  • Use SNS to pass job messages, and use CloudWatch alarms to terminate spot worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Glacier.
  • Use SQS for passing job messages, and use CloudWatch alarms to terminate EC2 worker instances when they become idle. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
  • Setup Auto Scaling workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy Storage.
  • Setup Auto Scaling workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier. (Correct)

Answer : Setup Auto Scaling workers triggered by queue depth that use spot instances to process messages in SQS. Once data is processed, change the storage class of the S3 objects to Glacier.

AWS SAP-C00 Certified Solution Architect Professional Exam Set 7

You are designing a personal document-archiving solution for your global enterprise with thousands of employees. Each employee has potentially gigabytes of data to be backed up in this archiving solution. The solution will be exposed to the employees as an application, where they can just drag and drop their files to the archiving system. Employees can retrieve their archives through a web interface. The corporate network has high-bandwidth AWS Direct Connect connectivity to AWS. You have a regulatory requirement that all data needs to be encrypted before being uploaded to the cloud. How do you implement this in a highly available and cost-efficient way?


Options are :

  • Manage encryption keys in a Hardware Security Module (HSM) appliance on-premises, with a server with sufficient storage to temporarily store, encrypt, and upload files directly into Amazon Glacier.
  • Manage encryption keys in an AWS CloudHSM appliance. Encrypt files prior to uploading on the employee desktop, and then upload directly into Amazon Glacier.
  • Manage encryption keys in Amazon Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier. (Correct)
  • Manage encryption keys on-premises in an encrypted relational database. Set up an on-premises server with sufficient storage to temporarily store files, and then upload them to Amazon S3, providing a client-side master key.

Answer : Manage encryption keys in Amazon Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier.
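The lifecycle half of the correct answer can be sketched as the rule document S3 expects (shown as a Python dict with boto3-style keys; the rule id and prefix are hypothetical).

```python
# A minimal S3 lifecycle configuration that transitions objects to the
# Glacier storage class. Rule id and prefix are hypothetical.
lifecycle = {
    "Rules": [{
        "ID": "archive-to-glacier",
        "Status": "Enabled",
        "Filter": {"Prefix": "archive/"},
        # Days: 0 moves objects to Glacier as soon as the rule allows.
        "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
    }]
}
print(lifecycle["Rules"][0]["Transitions"][0]["StorageClass"])  # -> GLACIER
```

Pairing this with KMS client-side encryption satisfies the "encrypted before upload" requirement while keeping storage costs at Glacier rates.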

You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers, and one NAT instance, for a total of seven EC2 instances. The web, application, and database servers are deployed across two availability zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load. Unfortunately, some of these new instances fail to launch. Which of the following could be the root cause? (Choose 2 answers)


Options are :

  • The Internet Gateway (IGW) of your VPC has scaled up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches
  • AWS reserves the first four and the last IP address in each subnet's CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances. (Correct)
  • AWS reserves one IP address in each subnet's CIDR block for Route53, so you do not have enough addresses left to launch all of the new EC2 instances
  • The ELB has scaled up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches (Correct)
  • AWS reserves the first and the last private IP address in each subnet's CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances

Answer : AWS reserves the first four and the last IP address in each subnet's CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances. The ELB has scaled up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches.
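The address arithmetic behind the reserved-IP answer is easy to check with the standard library: AWS reserves five addresses per subnet (network, VPC router, DNS, future use, broadcast), so a /28 VPC has almost no headroom.

```python
# How many instance-usable addresses a subnet CIDR really provides,
# after AWS's five reserved addresses per subnet.
import ipaddress

RESERVED_PER_SUBNET = 5  # network, router, DNS, future use, broadcast

def usable_addresses(cidr: str) -> int:
    return ipaddress.ip_network(cidr).num_addresses - RESERVED_PER_SUBNET

# The whole VPC is 10.0.0.0/28: 16 addresses, only 11 usable even as a
# single subnet -- and the tiers span two AZs, so it must be split further.
print(usable_addresses("10.0.0.0/28"))  # -> 11
# Split into two /29 subnets (one per AZ), each offers just 3 addresses.
print(usable_addresses("10.0.0.0/29"))  # -> 3
```

Doubling seven instances to fourteen, plus the ELB's own network interfaces, simply cannot fit, which is why some launches fail.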

You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customer clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?


Options are :

  • Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce
  • Write click events directly to Amazon Redshift and then analyze with SQL
  • Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL.
  • Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers (Correct)

Answer : Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers.

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 17

You control access to S3 buckets and objects with:


Options are :

  • Identity and Access Management (IAM) Policies.
  • Bucket Policies.
  • All of the above (Correct)
  • Access Control Lists (ACLs).

Answer : All of the above

A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end; however, the customer is unable to connect from EC2 instances inside its VPC to servers residing in its datacenter. Which of the following options provide a viable solution to remedy this situation? (Choose 2 answers)


Options are :

  • Enable route propagation to the customer gateway (CGW).
  • Add a route to the route table with an IPsec VPN connection as the target.
  • Enable route propagation to the virtual private gateway (VGW). (Correct)
  • Modify the route table of all instances using the 'route' command.
  • Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment. (Correct)

Answer : Enable route propagation to the virtual private gateway (VGW). Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment.

You are responsible for a web application that consists of an Elastic Load Balancing (ELB) load balancer in front of an Auto Scaling group of Amazon Elastic Compute Cloud (EC2) instances. For a recent deployment of a new version of the application, a new Amazon Machine Image (AMI) was created, and the Auto Scaling group was updated with a new launch configuration that refers to this new AMI. During the deployment, you received complaints from users that the website was responding with errors. All instances passed the ELB health checks. What should you do in order to avoid errors for future deployments? (Choose 2 answers)


Options are :

  • Set the Elastic Load Balancing health check configuration to target a part of the application that fully tests application health and returns an error if the tests fail. (Correct)
  • Enable EC2 instance CloudWatch alerts to change the launch configuration's AMI to the previous one. Gradually terminate instances that are using the new AMI.
  • Create a new launch configuration that refers to the new AMI, and associate it with the group. Double the size of the group, wait for the new instances to become healthy, and reduce back to the original size. If new instances do not become healthy, associate the previous launch configuration. (Correct)
  • Increase the Elastic Load Balancing Unhealthy Threshold to a higher value to prevent an unhealthy instance from going into service behind the load balancer.
  • Add an Elastic Load Balancing health check to the Auto Scaling group. Set a short period for the health checks to operate as soon as possible in order to prevent premature registration of the instance to the load balancer.

Answer : Set the Elastic Load Balancing health check configuration to target a part of the application that fully tests application health and returns an error if the tests fail. Create a new launch configuration that refers to the new AMI, and associate it with the group. Double the size of the group, wait for the new instances to become healthy, and reduce back to the original size. If new instances do not become healthy, associate the previous launch configuration.

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 8

You are designing the network infrastructure for an application server in Amazon VPC. Users will access all application instances from the Internet, as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements?


Options are :

  • Configure two routing tables: one that has a default route via the Internet gateway, and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.
  • Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
  • Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in the VPC.
  • Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets. (Correct)

Answer : Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.

After launching an instance that you intend to serve as a NAT (Network Address Translation) device in a public subnet, you modify your route tables to have the NAT device be the target of Internet-bound traffic of your private subnet. When you try to make an outbound connection to the Internet from an instance in the private subnet, you are not successful. Which of the following steps could resolve the issue?


Options are :

  • Disabling the Source/Destination Check attribute on the NAT instance (Correct)
  • Attaching an Elastic IP address to the instance in the private subnet
  • Attaching a second Elastic Network Interface (ENI) to the instance in the private subnet, and placing it in the public subnet
  • Attaching a second Elastic Network Interface (ENI) to the NAT instance, and placing it in the private subnet

Answer : Disabling the Source/Destination Check attribute on the NAT instance

You have been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access. Which approach provides a cost-effective, scalable mitigation to this kind of attack?


Options are :

  • Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC.
  • Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.
  • Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.
  • Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group. (Correct)

Answer : Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 8

Which is a valid Amazon Resource Name (ARN) for IAM?


Options are :

  • arn:aws:iam::123456789012:instance-profile/Webserver (Correct)
  • arn:aws:iam::123456789012::instance-profile/Webserver
  • 123456789012:aws:iam::instance-profile/Webserver
  • aws:iam::123456789012:instance-profile/Webserver

Answer : arn:aws:iam::123456789012:instance-profile/Webserver
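A valid IAM ARN follows the shape `arn:partition:service:region:account-id:resource`, with the region field left empty because IAM is a global service. A minimal checker (the validation rules here are a simplification for illustration, not the full ARN grammar):

```python
# Simplified validity check for IAM instance-profile ARNs.
def is_valid_iam_arn(arn: str) -> bool:
    parts = arn.split(":", 5)
    if len(parts) != 6:
        return False
    prefix, partition, service, region, account, resource = parts
    return (prefix == "arn" and partition == "aws" and service == "iam"
            and region == ""            # IAM is global: region is empty
            and account.isdigit()       # 12-digit account id
            and "/" in resource)        # e.g. instance-profile/Webserver

print(is_valid_iam_arn("arn:aws:iam::123456789012:instance-profile/Webserver"))
# -> True
```

The distractor options fail because they drop the `arn:` prefix, omit the empty region field, or reorder the components.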

You need a persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minutes timeframe. Each traced call can be either active or terminated. An external application needs to know each minute the list of currently active calls. Usually there are a few calls/second, but once per month there is a periodic peak up to 1000 calls/second for a few hours. The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project. What database implementation would better fit this scenario, keeping costs as low as possible?


Options are :

  • Use RDS Multi-AZ with a 'CALLS' table and an indexed 'STATE' field that can be equal to 'ACTIVE' or 'TERMINATED'. In this way the SQL query is optimized by the use of the index.
  • Use DynamoDB with a 'Calls' table and a Global Secondary Index on a 'State' attribute that can equal 'active' or 'terminated'. In this way the Global Secondary Index can be used for all items in the table.
  • Use DynamoDB with a 'Calls' table and a Global Secondary Index on an 'IsActive' attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective. (Correct)
  • Use RDS Multi-AZ with two tables, one for 'ACTIVE_CALLS' and one for 'TERMINATED_CALLS'. In this way the 'ACTIVE_CALLS' table is always small and effective to access.

Answer : Use DynamoDB with a 'Calls' table and a Global Secondary Index on an 'IsActive' attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.
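The sparse-index idea behind the DynamoDB option can be sketched in plain Python: every call is an item in the table, but only active calls carry the indexed attribute, so the index stays tiny and the per-minute "currently active" query never scans the full table. The attribute and table names below are hypothetical, and a dict plus a set stand in for the table and its secondary index.

```python
# Plain-Python model of a DynamoDB sparse Global Secondary Index:
# only items that carry the "is_active" attribute appear in the index.
calls = {}            # full table: call_id -> item
active_index = set()  # sparse index: only items with "is_active"

def start_call(call_id: str) -> None:
    calls[call_id] = {"call_id": call_id, "state": "active", "is_active": "x"}
    active_index.add(call_id)

def terminate_call(call_id: str) -> None:
    # Removing the attribute removes the item from the sparse index;
    # the historical record itself stays in the table.
    calls[call_id]["state"] = "terminated"
    calls[call_id].pop("is_active", None)
    active_index.discard(call_id)

start_call("c1")
start_call("c2")
terminate_call("c1")
print(sorted(active_index))  # the per-minute "currently active" query -> ['c2']
```

With calls lasting 2-3 minutes, the active set is always a few hundred items even at the 1000 calls/second peak, so reading the sparse index is cheap regardless of how much history accumulates.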

Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers


Options are :

  • Each subnet spans at least 2 Availability Zones to provide a high-availability environment.
  • Each subnet maps to a single Availability Zone. (Correct)
  • CIDR block mask of /25 is the smallest range supported.
  • Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
  • By default, all subnets can route between each other, whether they are private or public. (Correct)

Answer : Each subnet maps to a single Availability Zone. By default, all subnets can route between each other, whether they are private or public.

AWS Solutions Architect Associate 2019 with Practice Test Set 2

Your system recently experienced downtime. During the troubleshooting process you found that a new administrator mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future? The administrator still must be able to: launch, start, stop, and terminate development resources; and launch and start production instances.


Options are :

  • Leverage resource-based tagging, along with an IAM user which can prevent specific users from terminating production EC2 resources. (Correct)
  • Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.
  • Create an IAM user, which is not allowed to terminate instances, by leveraging production EC2 termination protection.
  • Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances.

Answer : Leverage resource-based tagging, along with an IAM user which can prevent specific users from terminating production EC2 resources.
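The tag-based approach from the correct answer can be sketched as an IAM policy statement (a Python dict): termination is only allowed when the instance carries a non-production tag. The tag key/value (`environment=dev`) and account id are hypothetical; the `ec2:ResourceTag/...` condition key is the documented mechanism.

```python
# Sketch of a tag-conditioned IAM statement: the administrator may
# terminate only instances tagged environment=dev. Tag names hypothetical.
policy_statement = {
    "Effect": "Allow",
    "Action": "ec2:TerminateInstances",
    "Resource": "arn:aws:ec2:*:123456789012:instance/*",
    "Condition": {
        # Production instances (tagged otherwise) fall outside this Allow,
        # so termination is implicitly denied for them.
        "StringEquals": {"ec2:ResourceTag/environment": "dev"}
    },
}
print(policy_statement["Condition"]["StringEquals"]["ec2:ResourceTag/environment"])
```

Separate statements would still allow `ec2:RunInstances` and `ec2:StartInstances` on production resources, matching the requirement that the administrator can launch and start, but not terminate, production instances.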

A company is storing data on Amazon Simple Storage Service (S3). The company's security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose 3 answers


Options are :

  • Use Amazon S3 bucket policies to restrict access to the data at rest.
  • Use Amazon S3 server-side encryption with AWS Key Management Service managed keys. (Correct)
  • Use Amazon S3 server-side encryption with EC2 key pair.
  • Use Amazon S3 server-side encryption with customer-provided keys. (Correct)
  • Use SSL to encrypt the data while in transit to Amazon S3.
  • Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key (Correct)

Answer : Use Amazon S3 server-side encryption with AWS Key Management Service managed keys. Use Amazon S3 server-side encryption with customer-provided keys. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key
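The three correct options map to different upload-time parameters. A minimal sketch using boto3-style argument names (no AWS call is made here; the key alias is a hypothetical placeholder):

```python
# Map each at-rest encryption mode to the extra put-object arguments it
# needs in boto3-style naming. The key alias is hypothetical.
def encryption_args(mode: str, key: str = "alias/hypothetical-key") -> dict:
    if mode == "sse-kms":      # server-side encryption with a KMS-managed key
        return {"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": key}
    if mode == "sse-c":        # server-side encryption, customer-provided key
        return {"SSECustomerAlgorithm": "AES256", "SSECustomerKey": key}
    if mode == "client-side":  # caller encrypts before upload; no S3 args
        return {}
    raise ValueError(f"unknown mode: {mode}")

print(encryption_args("sse-kms")["ServerSideEncryption"])  # -> aws:kms
```

Note the distractors: bucket policies control access, not encryption, and SSL protects data in transit, not at rest.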

You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3. An Elastic MapReduce job is periodically analyzing the logs on S3 to build a usage dashboard that you share with your CIO. You recently improved overall performance of the website using CloudFront for dynamic content delivery, with your website as the origin. After this architectural change, the usage dashboard shows that the traffic on your website dropped by an order of magnitude. How do you fix your usage dashboard?


Options are :

  • Use Elastic Beanstalk's "Restart App server(s)" option to update log delivery to the Elastic MapReduce job.
  • Change your log collection process to use CloudWatch ELB metrics as input of the Elastic MapReduce job.
  • Use Elastic Beanstalk's "Rebuild Environment" option to update log delivery to the Elastic MapReduce job.
  • Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job. (Correct)
  • Turn on CloudTrail and use trail log files on S3 as input of the Elastic MapReduce job.

Answer : Enable CloudFront to deliver access logs to S3 and use them as input of the Elastic MapReduce job.

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 3

The following policy can be attached to an IAM group. It lets an IAM user in that group access a "home directory" in AWS S3 that matches their user name using the console.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:*"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::bucket-name"],
      "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}}
    },
    {
      "Action": ["s3:*"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::bucket-name/home/${aws:username}/*"]
    }
  ]
}


Options are :

  • TRUE
  • FALSE (Correct)

Answer : FALSE

Auto Scaling requests are signed with a signature calculated from the request and the user's private key.


Options are :

  • SSL
  • X.509
  • AES-256
  • HMAC-SHA1 (Correct)

Answer : HMAC-SHA1
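What HMAC-SHA1 request signing looks like with the standard library: hash a canonicalized request string with the secret key, then Base64-encode the digest. The string-to-sign layout and the secret key below are illustrative placeholders, not the exact legacy Query API canonicalization.

```python
# HMAC-SHA1 request signing, as used by legacy AWS Query APIs.
import base64
import hashlib
import hmac

def sign(secret_key: bytes, string_to_sign: bytes) -> str:
    digest = hmac.new(secret_key, string_to_sign, hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

signature = sign(
    b"hypothetical-secret-key",
    b"GET\nautoscaling.amazonaws.com\n/\nAction=DescribeAutoScalingGroups",
)
# SHA-1 digests are always 20 bytes, whatever the input length.
print(len(base64.b64decode(signature)))  # -> 20
```

Because the signature depends on both the request and the secret key, the server can verify integrity and authenticity without the key ever crossing the wire.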

An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?


Options are :

  • Take hourly DB backups to EC2 instance store volumes, with transaction logs stored in S3 every 5 minutes.
  • Take 15-minute DB backups stored in Glacier, with transaction logs stored in S3 every 5 minutes.
  • Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes. (Correct)
  • Use synchronous database master-slave replication between two availability zones.

Answer : Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes. (Synchronous replication cannot help here, because the corruption from 1.5 hours ago would have been replicated too.)
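The arithmetic behind the hourly-backup option: restoring the last full backup and replaying shipped transaction logs loses at most one log-shipping interval of writes, which sits comfortably inside the 15-minute RPO. A trivial sketch:

```python
# Worst-case data loss for a backup-plus-log-shipping strategy:
# restore the last full backup, replay logs up to the newest shipped one;
# only writes after the last shipped log are lost.
def worst_case_rpo_minutes(log_interval_min: int) -> int:
    return log_interval_min

RPO_TARGET_MIN = 15
print(worst_case_rpo_minutes(5) <= RPO_TARGET_MIN)  # -> True
```

The point-in-time restore also addresses the corruption scenario: logs can be replayed only up to just before the corruption 1.5 hours ago, something no replication scheme can offer.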

AWS DevOps Engineer Professional Practice Final Exam Set 4

You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?


Options are :

  • Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata. (Correct)
  • Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user, and retrieve the IAM user's credentials from the EC2 instance user data.
  • Create an IAM user for the application with permissions that allow list access to the S3 bucket; the application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
  • Use the AWS account access keys; the application retrieves the credentials from the source code of the application.

Answer : Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata.

You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16 KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all. What is the problem and a valid solution?


Options are :

  • RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
  • Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64KB blocks to increase throughput.
  • The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume. (Correct)
  • The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS Optimized instance that provides larger throughput.
  • Larger storage volumes support higher Provisioned IOPS rates; increase the provisioned volume storage of each of the 6 EBS volumes to 1TB.

Answer : The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.
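A useful habit with questions like this is to make the aggregate-throughput arithmetic explicit before reasoning about which shared ceiling (root volume, EBS-Optimized bandwidth, instance limits) is being hit. The helper below is only a back-of-the-envelope calculation using the figures from the question:

```python
def demanded_throughput_mb_s(volumes, iops_per_volume, io_size_kb=16):
    """Total MB/s a RAID 0 stripe set would need at full provisioned IOPS.

    Each I/O in the question is 16KB, so throughput = volumes * IOPS * 16KB.
    """
    return volumes * iops_per_volume * io_size_kb / 1024

# Before: 4 volumes x 4,000 IOPS of 16KB I/O; after: 6 volumes x 4,000 IOPS.
before = demanded_throughput_mb_s(4, 4000)  # 250.0 MB/s
after = demanded_throughput_mb_s(6, 4000)   # 375.0 MB/s
print(before, after)
```

The 50% jump in demanded throughput with no measured IOPS gain is the clue that some component outside the data volumes themselves is capping the instance, which is what the correct option points at.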

A company is building a voting system for a popular TV show; viewers will watch the performances, then visit the show's website to vote for their favorite performer. It is expected that in a short period of time after the show has finished, the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com credentials and then submit their vote. After the voting is completed, the page will display the vote totals. The company needs to build the site such that it can handle the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use?


Options are :

  • Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result into an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result into a DynamoDB table. (Correct)
  • Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user, using IAM Roles to gain permissions to a DynamoDB table to store the user's vote.
  • Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result into a multi-AZ Relational Database Service instance.
  • Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result into a DynamoDB table using IAM Roles for EC2 instances to gain permissions to the DynamoDB table.

Answer : Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result into an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result into a DynamoDB table.
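The winning pattern is queue-based decoupling: the web tier acknowledges each vote immediately, and a worker tier drains the queue and persists totals at its own pace, which is what absorbs the post-show traffic spike. A minimal in-process illustration of that shape, with `queue.Queue` standing in for SQS and a plain dict standing in for the DynamoDB totals table (both stand-ins, not the real services):

```python
import queue

votes = queue.Queue()   # stand-in for the SQS queue
tally = {}              # stand-in for the DynamoDB vote-totals table


def submit_vote(performer):
    """Web tier: accept the vote instantly and defer the durable write."""
    votes.put(performer)


def drain_votes():
    """App tier: pull queued votes and persist the running totals."""
    while not votes.empty():
        performer = votes.get()
        tally[performer] = tally.get(performer, 0) + 1
```

A burst of `submit_vote` calls returns quickly no matter how slow the storage side is; `drain_votes` can run on as many workers as needed, which is exactly the elasticity argument behind the correct option.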

AWS BDS-C00 Certified Big Data Speciality Practice Test Set 2

Your team has a Tomcat-based Java application you need to deploy into development, test, and production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools, and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following:


Options are :

  • Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
  • Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block.
  • Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.
  • Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets. (Correct)

Answer : Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets.
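However the RDS instance is created, the security-group change the correct option describes is the same: open the database port to the application subnets. The sketch below just builds that ingress request as a dict; the group ID and subnet CIDRs are placeholders, and with boto3 the result would be passed to `ec2.authorize_security_group_ingress(**rule)`.

```python
APP_SUBNET_CIDRS = ["10.0.1.0/24", "10.0.2.0/24"]  # hypothetical app subnets


def rds_ingress_rule(db_sg_id, cidrs, port=3306):
    """Build an ingress request allowing the given subnets to reach the DB
    on its listener port (3306 shown for MySQL)."""
    return {
        "GroupId": db_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": c} for c in cidrs],
        }],
    }


rule = rds_ingress_rule("sg-0123abcd", APP_SUBNET_CIDRS)
```

Scoping the rule to the application subnets (rather than the whole VPC block, as one distractor suggests) keeps the nightly-restored data reachable by the teams that need it while excluding everything else.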

A web company is looking to implement an intrusion detection and prevention system into their deployed VPC. This platform should have the ability to scale to thousands of instances running inside of the VPC. How should they architect their solution to achieve these goals?


Options are :

  • Create a second VPC and route all traffic from the primary application VPC through the second VPC where the scalable virtualized IDS/IPS platform resides.
  • Configure an instance with monitoring software and the elastic network interface (ENI) set to promiscuous mode packet sniffing to see all traffic across the VPC.
  • Configure servers running in the VPC using the host-based 'route' commands to send all traffic through the platform to a scalable virtualized IDS/IPS. (Correct)
  • Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection.

Answer : Configure servers running in the VPC using the host-based 'route' commands to send all traffic through the platform to a scalable virtualized IDS/IPS.
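The chosen option pushes routing down to each host rather than relying on VPC route tables (which cannot steer intra-VPC traffic through an instance), so every server's default route points at the inspection appliance. A minimal sketch of composing that per-host change, assuming a Linux fleet and a placeholder appliance IP:

```python
def host_route_commands(appliance_ip, hosts):
    """Map each host name to the Linux 'route' command that steers its
    outbound traffic through the IDS/IPS appliance."""
    cmd = "route add default gw " + appliance_ip
    return {h: cmd for h in hosts}


# Hypothetical fleet and appliance address, purely for illustration.
cmds = host_route_commands("10.0.0.10", ["web-1", "web-2"])
```

In practice such a command would be distributed via configuration management so that newly launched instances in the auto-scaled fleet pick up the route automatically.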
