AWS SAP-C00 Certified Solutions Architect Professional Exam Set 7

An AWS customer is deploying an application that is composed of an Auto Scaling group of EC2 instances. The customer's security policy requires that every outbound connection from these instances to any other service within the customer's Virtual Private Cloud must be authenticated using a unique X.509 certificate that contains the specific instance-id. In addition, an X.509 certificate must be signed by the customer's key management service in order to be trusted for authentication. Which of the following configurations will support these requirements?


Options are :

  • Configure the launched instances to generate a new certificate upon first boot. Have the key management service poll the Auto Scaling group for associated instances and send new instances a certificate signature that contains the specific instance-id.
  • Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the key management service generate a signed certificate and send it directly to the newly launched instance.
  • Configure an IAM role that grants access to an Amazon S3 object containing a signed certificate and configure the Auto Scaling group to launch instances with this role. Have the instances fetch the certificate from Amazon S3 upon first boot. (Correct)
  • Embed a certificate into the Amazon Machine Image that is used by the Auto Scaling group. Have the launched instances generate a certificate signing request with the instance's assigned instance-id and submit it to the key management service for signature.

Answer : Configure an IAM role that grants access to an Amazon S3 object containing a signed certificate and configure the Auto Scaling group to launch instances with this role. Have the instances fetch the certificate from Amazon S3 upon first boot.
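
A minimal sketch of the bootstrap step in the correct option: an instance launched with an IAM role that allows s3:GetObject on its certificate object downloads its own signed certificate on first boot. The bucket name, key layout, and local path are assumptions for illustration.

```python
# First-boot bootstrap sketch (e.g. run from user data). Assumes the instance
# profile grants s3:GetObject on the certificate objects and that IMDSv1 is
# enabled (IMDSv2 would additionally need a session token). Names are hypothetical.
import urllib.request
import boto3

METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"
CERT_BUCKET = "example-instance-certs"          # assumed bucket name

def fetch_instance_certificate(local_path="/etc/pki/tls/certs/instance.pem"):
    # The instance discovers its own instance-id from the metadata service.
    instance_id = urllib.request.urlopen(METADATA_URL).read().decode()

    # Credentials come from the instance profile attached by the launch
    # configuration, so no keys are baked into the AMI.
    s3 = boto3.client("s3")
    s3.download_file(CERT_BUCKET, f"signed-certs/{instance_id}.pem", local_path)
    return local_path

if __name__ == "__main__":
    print("Certificate written to", fetch_instance_certificate())
```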

A user is hosting a public website on AWS. The user wants to have the database and the application server in an AWS VPC. The user wants to set up a database that can connect to the internet for patch upgrades but cannot receive any requests from the internet. How can the user set this up?


Options are :

  • Set up the DB in a public subnet with the security group allowing only inbound traffic.
  • Set up the DB in a local data center and use a private gateway to connect the application with the DB.
  • Set up the DB in a private subnet that is connected to the internet via NAT for outbound traffic. (Correct)
  • Set up the DB in a private subnet with the security group allowing only outbound traffic.

Answer : Set up the DB in a private subnet that is connected to the internet via NAT for outbound traffic.

You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? (Choose 3 answers)


Options are :

  • Add Amazon CloudWatch alarms to look for high NetworkIn and CPU utilization. (Correct)
  • Use Dedicated Instances to ensure that each instance has the maximum performance possible.
  • Create processes and capabilities to quickly add and remove rules to the instance OS firewall.
  • Use an Elastic Load Balancer with Auto Scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers.
  • Use an Amazon CloudFront distribution for both static and dynamic content. (Correct)
  • Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth. (Correct)

Answer : Add Amazon CloudWatch alarms to look for high NetworkIn and CPU utilization. Use an Amazon CloudFront distribution for both static and dynamic content. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth.

In Amazon ElastiCache, which of the following statements is correct?


Options are :

  • When you launch an ElastiCache cluster into an Amazon VPC private subnet, every cache node is assigned a public IP address within that subnet.
  • ElastiCache is not fully integrated with Amazon Virtual Private Cloud (VPC).
  • You cannot use ElastiCache in a VPC that is configured for dedicated instance tenancy. (Correct)
  • If your AWS account supports only the EC2-VPC platform, ElastiCache will never launch your cluster in a VPC.

Answer : You cannot use ElastiCache in a VPC that is configured for dedicated instance tenancy.

In Amazon ElastiCache, what is the default cache port?


Options are :

  • For Memcached 11211 and for Redis 6380
  • For Memcached 11210 and for Redis 6379
  • For Memcached 11210 and for Redis 6380
  • For Memcached 11211 and for Redis 6379 (Correct)

Answer : For Memcached 11211 and for Redis 6379
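
A small sketch that checks reachability of the two default ports from an instance in the same VPC, using only the standard library; the endpoint hostnames are placeholders, not real clusters.

```python
# Quick connectivity check against the ElastiCache default ports
# (Memcached 11211, Redis 6379). Endpoints below are hypothetical.
import socket

DEFAULT_PORTS = {"memcached": 11211, "redis": 6379}
ENDPOINTS = {
    "memcached": "my-memcached.example.cache.amazonaws.com",  # assumed endpoint
    "redis": "my-redis.example.cache.amazonaws.com",          # assumed endpoint
}

def port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for engine, host in ENDPOINTS.items():
    print(engine, DEFAULT_PORTS[engine], port_open(host, DEFAULT_PORTS[engine]))
```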

Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months, resulting in significant financial losses. Your CIO is strongly in favor of moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs?


Options are :

  • Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.
  • Create an EBS-backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. (Correct)
  • Create an EBS-backed private AMI which includes a fresh install of your application. Set up a script in your data center to back up the local database every hour and to encrypt and copy the resulting file to an S3 bucket using multipart upload.
  • Deploy your application on EC2 instances within an Auto Scaling group across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.

Answer : Create an EBS-backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
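
A quick back-of-the-envelope check of the numbers in the question helps explain why shipping full 200GB backups cannot meet a 1-hour RPO over a 20Mbps link, whereas continuous asynchronous replication of transactions can:

```python
# Rough transfer-time estimate: 200 GB database over a 20 Mbps link.
db_size_bits = 200 * 10**9 * 8        # 200 GB expressed in bits (decimal GB)
link_bps = 20 * 10**6                  # 20 Mbps

seconds = db_size_bits / link_bps      # ~80,000 s
print(f"Full copy takes ~{seconds / 3600:.1f} hours")  # ~22 hours

# A full dump every hour is therefore impossible within a 1-hour RPO; only the
# initial seed needs the full copy, after which asynchronous replication of
# individual transactions over the VPN keeps the RPO under an hour.
```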

You must architect the migration of a web application to AWS. The application consists of Linux web servers running a custom web server. You are required to save the logs generated from the application to a durable location. What options could you select to migrate the application to AWS? (Choose 2)


Options are :

  • Create a Dockerfile for the application. Create an AWS OpsWorks stack consisting of a custom layer. Create custom recipes to install Docker and to deploy your Docker container using the Dockerfile. Create custom recipes to install and configure the application to publish the logs to Amazon CloudWatch Logs.
  • Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). (Correct)
  • Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging in the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3. (Correct)
  • Create a Dockerfile for the application. Create an AWS OpsWorks stack consisting of a Docker layer that uses the Dockerfile. Create custom recipes to install and configure Amazon Kinesis to publish the logs into Amazon CloudWatch.
  • Use VM Import/Export to import a virtual machine image of the server into AWS as an AMI. Create an Amazon Elastic Compute Cloud (EC2) instance from the AMI, and install and configure the Amazon CloudWatch Logs agent. Create a new AMI from the instance. Create an AWS Elastic Beanstalk application using the AMI platform and the new AMI.

Answer : Create an AWS Elastic Beanstalk application using the custom web server platform. Specify the web server executable and the application project and source files. Enable log file rotation to Amazon Simple Storage Service (S3). Create a Dockerfile for the application. Create an AWS Elastic Beanstalk application using the Docker platform and the Dockerfile. Enable logging in the Docker configuration to automatically publish the application logs. Enable log file rotation to Amazon S3.

An organization has set up RDS with a VPC. The organization wants the RDS instance to be accessible from the internet. Which of the below mentioned configurations is not required in this scenario?


Options are :

  • The organization must set up RDS with a subnet group which has an external IP. (Correct)
  • The organization must enable the VPC attributes DNS hostnames and DNS resolution.
  • The organization must allow access from the internet in the RDS VPC security group.
  • The organization must enable the parameter in the console which makes the RDS instance publicly accessible.

Answer : The organization must set up RDS with a subnet group which has an external IP.
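
A sketch of the "publicly accessible" step that is genuinely required, using boto3; the instance identifier is a placeholder.

```python
# Flip the publicly-accessible flag on an existing RDS instance (one of the
# steps that really is required above). The identifier is hypothetical.
import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="my-public-db",   # assumed instance name
    PubliclyAccessible=True,
    ApplyImmediately=True,
)
# DNS hostnames/resolution on the VPC and an inbound rule in the DB security
# group are still needed; a subnet group with an "external IP" is not an RDS
# concept, which is why that option is the one not required.
```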

A user has created a VPC with public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.0.0/24. The NAT instance ID is i-a12345. Which of the below mentioned entries are required in the main route table attached to the private subnet to allow instances to connect with the internet?


Options are :

  • Destination: 20.0.0.0/24 and Target: i-a12345
  • Destination: 20.0.0.0/0 and Target: i-a12345
  • Destination: 0.0.0.0/0 and Target: i-a12345 (Correct)
  • Destination: 20.0.0.0/0 and Target: 80

Answer : Destination: 0.0.0.0/0 and Target: i-a12345
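
A sketch of adding that route with boto3; the route table ID is a placeholder and the NAT instance ID is taken from the question.

```python
# Add a default route in the private subnet's route table pointing at the NAT
# instance. The route table ID is hypothetical; the instance ID comes from the
# question.
import boto3

ec2 = boto3.client("ec2")
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # assumed main route table of the private subnet
    DestinationCidrBlock="0.0.0.0/0",        # all internet-bound traffic
    InstanceId="i-a12345",                   # the NAT instance
)
```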

You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?


Options are :

  • Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
  • Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
  • Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.
  • Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. (Correct)

Answer : Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
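
A minimal sketch of creating such a trail with the global services option enabled; the trail and bucket names are placeholders, and the bucket policy / MFA Delete configuration is omitted.

```python
# Create a CloudTrail trail that also records global-service (e.g. IAM) events
# and start logging. Names are hypothetical, and the bucket is assumed to
# already carry the CloudTrail bucket policy.
import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(
    Name="compliance-trail",
    S3BucketName="example-cloudtrail-logs",
    IncludeGlobalServiceEvents=True,   # captures IAM changes, not just regional APIs
)
cloudtrail.start_logging(Name="compliance-trail")
```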

Your department creates regular analytics reports from your company's log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CEO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?


Options are :

  • Use Reduced Redundancy Storage (RRS) for PDF and CSV data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. (Correct)
  • Use Reduced Redundancy Storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
  • Use Reduced Redundancy Storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
  • Use Reduced Redundancy Storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift.

Answer : Use Reduced Redundancy Storage (RRS) for PDF and CSV data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
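
A sketch of the "add Spot Instances to the EMR job" part of the correct option: the master and core groups stay on-demand so data integrity is preserved, while a task group runs on Spot. Cluster name, release label, instance types, and counts are assumptions, not taken from the question.

```python
# Launch a daily EMR cluster whose task nodes run on Spot Instances while the
# master and core (HDFS-bearing) nodes stay on-demand. All names/sizes below
# are illustrative.
import boto3

emr = boto3.client("emr")
emr.run_job_flow(
    Name="daily-log-reports",
    ReleaseLabel="emr-6.10.0",           # assumed release
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
            # Spot task nodes: cheap extra compute; losing them never loses HDFS data.
            {"Name": "spot-tasks", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 4, "BidPrice": "0.10"},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```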

Which of the following components of AWS Data Pipeline polls for tasks and then performs those tasks?


Options are :

  • Amazon Elastic MapReduce (EMR)
  • AWS Direct Connect
  • Pipeline Definition
  • Task Runner (Correct)

Answer : Task Runner

You are implementing a URL whitelisting system for a company that wants to restrict outbound HTTPS connections to specific domains from their EC2-hosted applications. You deploy a single EC2 instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist configuration. You have a nightly maintenance window of 10 minutes where all instances fetch new software updates. Each update is about 200MB in size and there are 500 instances in the VPC that routinely fetch updates. After a few days you notice that some machines are failing to successfully download some, but not all, of their updates within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration and you are able to access them manually using a web browser on the instances. What might be happening? (Choose 2 answers)


Options are :

  • The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy.
  • You have not allocated enough storage to the EC2 instance running the proxy so the network
  • You are running the proxy on a sufficiently-sized EC2 instance in a private subnet, and network throughput is being throttled by a NAT instance running on an undersized EC2 instance.
  • You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time. (Correct)

Answer : You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time.
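
The numbers in the question make the bottleneck concrete: pushing 500 × 200MB through a single proxy in a 10-minute window requires well over a gigabit per second of sustained throughput, more than an undersized proxy instance (or an undersized NAT in its path) can deliver.

```python
# Aggregate throughput the single proxy must sustain during the window.
instances = 500
update_mb = 200
window_s = 10 * 60

total_bits = instances * update_mb * 10**6 * 8
print(f"~{total_bits / window_s / 10**9:.2f} Gbps sustained")  # ~1.33 Gbps
```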

An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements. Which design would you choose to meet these requirements?


Options are :

  • Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region.
  • Also send each write into an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region.
  • Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day; create a "LastUpdated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter. (Correct)
  • Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.

Answer : Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day; create a "LastUpdated" attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.
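
A rough sketch of the incremental-copy idea behind the correct option: scan only items whose LastUpdated attribute changed since the previous run and write them to the table in the DR region. Table names, regions, and the cutoff value are assumptions; in practice Data Pipeline's scheduled copy activity would drive this.

```python
# Incremental cross-region copy: only items modified since the last run are
# scanned and written to the DR table. Names, regions and the timestamp cutoff
# are illustrative.
import boto3
from boto3.dynamodb.conditions import Attr

src = boto3.resource("dynamodb", region_name="us-east-1").Table("orders")
dst = boto3.resource("dynamodb", region_name="eu-west-1").Table("orders-dr")

def copy_modified_items(since_iso: str):
    scan_kwargs = {"FilterExpression": Attr("LastUpdated").gt(since_iso)}
    with dst.batch_writer() as batch:
        while True:
            page = src.scan(**scan_kwargs)
            for item in page["Items"]:
                batch.put_item(Item=item)          # replays only changed items
            if "LastEvaluatedKey" not in page:
                break
            scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

copy_modified_items("2024-01-01T00:00:00Z")   # timestamp of the previous run
```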

Your company is getting ready to do a major public announcement of a social media site on AWS. The website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)


Options are :

  • Implement sharding to distribute load to multiple RDS MySQL instances.
  • Increase the RDS MySQL instance size and implement provisioned IOPS. (Correct)
  • Deploy an ElastiCache in-memory cache running in each Availability Zone. (Correct)
  • Add an RDS MySQL read replica in each Availability Zone.

Answer : Increase the RDS MySQL instance size and implement provisioned IOPS. Deploy an ElastiCache in-memory cache running in each Availability Zone.
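
A minimal cache-aside sketch showing how the ElastiCache tier in the correct answer relieves read contention: reads hit Redis first and only fall back to RDS MySQL on a miss. The endpoint, key scheme, TTL, and the query helper are all illustrative.

```python
# Cache-aside read path: serve hot reads from ElastiCache (Redis) and only hit
# RDS MySQL on a cache miss. Endpoint, TTL and the DB helper are hypothetical.
import json
import redis

cache = redis.Redis(host="my-redis.example.cache.amazonaws.com", port=6379)

def query_user_from_rds(user_id):
    # Placeholder for the real RDS MySQL query.
    raise NotImplementedError

def get_user(user_id, ttl_seconds=60):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # read served from memory
    user = query_user_from_rds(user_id)      # miss: fall back to the database
    cache.setex(key, ttl_seconds, json.dumps(user))
    return user
```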

To serve web traffic for a popular product, your chief financial officer and IT director have purchased 10 m1.large heavy utilization Reserved Instances evenly spread across two Availability Zones; Route 53 is used to deliver the traffic to an Elastic Load Balancer. After several months, the product grows even more popular and you need additional capacity. As a result, your company purchases two c3.2xlarge medium utilization Reserved Instances. You register the two c3.2xlarge instances with your ELB and quickly find that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant capacity that is unused. Which option is the most cost-effective and uses EC2 capacity most effectively?


Options are :

  • Route traffic to EC2 m1.large and c3.2xlarge instances directly using Route 53 latency-based routing and health checks. Shut off ELB.
  • Use a separate ELB for each instance type and distribute load to the ELBs with Route 53 weighted round robin.
  • Configure ELB with the two c3.2xlarge instances and use an on-demand Auto Scaling group for up to two additional c3.2xlarge instances. Shut off the m1.large instances. (Correct)
  • Configure an Auto Scaling group and launch configuration with ELB to add more on-demand m1.large instances when triggered by CloudWatch. Shut off the c3.2xlarge instances.

Answer : Configure ELB with the two c3.2xlarge instances and use an on-demand Auto Scaling group for up to two additional c3.2xlarge instances. Shut off the m1.large instances.

A read-only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically. What AWS services should be used to meet these requirements?


Options are :

  • Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas.
  • Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and Multi-AZ RDS.
  • Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas. (Correct)
  • Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and Multi-AZ RDS.

Answer : Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas.

You have deployed a web application targeting a global audience across multiple AWS regions under the domain name example.com. You decide to use Route 53 latency-based routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions, Route 53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)


Options are :

  • You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers. (Correct)
  • Latency resource record sets cannot be used in combination with weighted resource record sets.
  • The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
  • You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers. (Correct)
  • One of the two working web servers in the other region did not pass its HTTP health check.

Answer : You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
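
A sketch of the two fixes the correct options describe: the latency alias record evaluates target health, and each weighted record carries an HTTP health check. Hosted zone IDs, DNS names, IPs, and the health check ID are placeholders.

```python
# Route 53 records that actually fail over: the latency alias evaluates target
# health, and the weighted record below it has an HTTP health check attached.
# All IDs and DNS names here are placeholders.
import boto3

r53 = boto3.client("route53")
r53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "example.com.", "Type": "A",
            "SetIdentifier": "us-east-1", "Region": "us-east-1",
            "AliasTarget": {
                "HostedZoneId": "ZELBEXAMPLE",       # the ELB's hosted zone ID
                "DNSName": "lb-east.example.elb.amazonaws.com.",
                "EvaluateTargetHealth": True,        # fix #1
            }}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "east.example.com.", "Type": "A",
            "SetIdentifier": "east-web-1", "Weight": 50, "TTL": 60,
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # fix #2
            "ResourceRecords": [{"Value": "203.0.113.10"}]}},
    ]},
)
```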

An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise's account. The enterprise has internal security policies that require that any outside access to their environment must conform to the principle of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?


Options are :

  • Create an IAM user within the enterprise account that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider.
  • Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application. (Correct)
  • Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.
  • From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.

Answer : Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
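
A sketch of what the SaaS provider's side of the correct option looks like: assuming the customer's cross-account role (scoped to only the EC2 discovery actions it needs) with an external ID so the credentials cannot be reused by another party. The role ARN and external ID are placeholders.

```python
# SaaS-provider side: assume the customer's cross-account role and use the
# temporary credentials to discover EC2 resources. ARN and ExternalId are
# placeholders; the role's policy should allow only the required EC2 actions.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/SaaSDiscoveryRole",  # customer's role
    RoleSessionName="saas-discovery",
    ExternalId="unique-customer-external-id",   # guards against the confused-deputy problem
)["Credentials"]

ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_instances()["Reservations"])
```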

Identify a true statement about the statement ID (Sid) in IAM.


Options are :

  • You cannot expose the Sid in the IAM API. (Correct)
  • You can expose the Sid in the IAM API.
  • You cannot assign a Sid value to each statement in a statement array.
  • You cannot use a Sid value as a sub-ID of a policy document's ID for services provided by SQS and SNS.

Answer : You cannot expose the Sid in the IAM API.

Your customer wishes to deploy an enterprise application to AWS, which will consist of several web servers, several application servers, and a small (50GB) Oracle database. Information is stored both in the database and the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements?


Options are :

  • Back up RDS using automated daily DB backups. Back up the EC2 instances using AMIs, and supplement with file-level backups to S3 using traditional enterprise backup software to provide file-level restore. (Correct)
  • Back up RDS using a Multi-AZ deployment. Back up the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore.
  • Back up the RDS database to S3 using Oracle RMAN. Back up the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.
  • Back up RDS using automated daily DB backups. Back up the EC2 instances using EBS snapshots, and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file-level restore.

Answer : Back up RDS using automated daily DB backups. Back up the EC2 instances using AMIs, and supplement with file-level backups to S3 using traditional enterprise backup software to provide file-level restore.

You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPSec tunnels over the Internet. You will be using VPN gateways and terminating the IPSec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPSec tunnel as outlined above? (Choose 4 answers)


Options are :

  • End-to-end protection of data in transit
  • Data encryption across the Internet (Correct)
  • Peer identity authentication between VPN gateway and customer gateway (Correct)
  • Data integrity protection across the Internet (Correct)
  • Protection of data in transit over the Internet (Correct)
  • End-to-end Identity authentication

Answer : Data encryption across the Internet. Peer identity authentication between VPN gateway and customer gateway. Data integrity protection across the Internet. Protection of data in transit over the Internet.

Your company hosts a social media website for storing and sharing documents. The web application allows users to upload large files while resuming and pausing the upload as needed. Currently, files are uploaded to your PHP front end backed by Elastic Load Balancing and an Auto Scaling fleet of Amazon Elastic Compute Cloud (EC2) instances that scale based on the average bytes received (NetworkIn). After a file has been uploaded, it is copied to Amazon Simple Storage Service (S3). Amazon EC2 instances use an AWS Identity and Access Management (IAM) role that allows Amazon S3 uploads. Over the last six months, your user base and scale have increased significantly, forcing you to increase the Auto Scaling group's Max parameter a few times. Your CFO is concerned about rising costs and has asked you to adjust the architecture where needed to better optimize costs. Which architecture change could you introduce to reduce costs and still keep your web application secure and scalable?


Options are :

  • Re-architect your ingest pattern, have the app authenticate against your identity provider, and use your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic to directly upload the file to Amazon S3 using the given credentials and S3 prefix.
  • Re-architect your ingest pattern, and move your web application instances into a VPC public subnet. Attach a public IP address to each EC2 instance (using the Auto Scaling launch configuration settings). Use an Amazon Route 53 round robin record set and HTTP health checks to DNS load balance the app requests; this approach will significantly reduce the cost by bypassing Elastic Load Balancing. (Correct)
  • Replace the Auto Scaling launch configuration to include c3.8xlarge instances; those instances can potentially yield a network throughput of 10 Gbps.
  • Re-architect your ingest pattern, have the app authenticate against your identity provider, and use your identity provider as a broker fetching temporary AWS credentials from AWS Security Token Service (GetFederationToken). Securely pass the credentials and S3 endpoint/prefix to your app. Implement client-side logic that uses the S3 multipart upload API to directly upload the file to Amazon S3 using the given credentials and S3 prefix.

Answer : Re-architect your ingest pattern, and move your web application instances into a VPC public subnet. Attach a public IP address to each EC2 instance (using the Auto Scaling launch configuration settings). Use an Amazon Route 53 round robin record set and HTTP health checks to DNS load balance the app requests; this approach will significantly reduce the cost by bypassing Elastic Load Balancing.

A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes. Which AWS storage and database architecture meets the requirements of the application?


Options are :

  • Web servers: store read-only data in S3, and copy from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
  • Web servers: store read-only data in S3, and copy from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
  • Web servers: store read-only data on an EC2 NFS server; mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more read replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
  • Web servers: store read-only data in S3, and copy from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more read replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots. (Correct)

Answer : Web servers: store read-only data in S3, and copy from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more read replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

What does elasticity mean to AWS?


Options are :

  • The ability to scale computing resources up and down easily, with minimal friction. (Correct)
  • The ability to scale computing resources up easily, with minimal friction and down with latency.
  • The ability to recover from business continuity events with minimal friction.
  • The ability to provision cloud computing resources in expectation of future demand.

Answer : The ability to scale computing resources up and down easily, with minimal friction.

Dave is the main administrator in Example Corp., and he decides to use paths to help delineate the users in the company and set up a separate administrator group for each path-based division. The following is a subset of the full list of paths he plans to use: /marketing, /sales, /legal. Dave creates an administrator group for the marketing part of the company and calls it Marketing_Admin. He assigns it the /marketing path. The group's ARN is arn:aws:iam::123456789012:group/marketing/Marketing_Admin. Dave assigns the following policy to the Marketing_Admin group that gives the group permission to use all IAM actions with all groups and users in the /marketing path. The policy also gives the Marketing_Admin group permission to perform any Amazon S3 actions on the objects in the marketing portion of the corporate bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "iam:*",
      "Resource": [
        "arn:aws:iam::123456789012:group/marketing/*",
        "arn:aws:iam::123456789012:user/marketing/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::example_bucket/marketing/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket*",
      "Resource": "arn:aws:s3:::example_bucket",
      "Condition": {"StringLike": {"s3:prefix": "marketing/*"}}
    }
  ]
}


Options are :

  • FALSE (Correct)
  • TRUE

Answer : FALSE

Your company has an on-premises multi-tier PHP web application, which recently experienced downtime due to a large burst in web traffic following a company announcement. Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and are looking to find ways to quickly improve your infrastructure's ability to handle unexpected increases in traffic. The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server running a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short timeframe required?


Options are :

  • Migrate to AWS: Use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group, which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.
  • Hybrid environment: Create an AMI, which can be used to launch web servers in EC2. Create an Auto Scaling group, which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
  • Offload traffic from the on-premises environment: Set up a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in the cache. (Correct)
  • Failover environment: Create an S3 bucket and configure it for website hosting. Migrate your DNS to Route 53 using zone file import, and leverage Route 53 DNS failover to fail over to the S3-hosted website.

Answer : Offload traffic from the on-premises environment: Set up a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in the cache.

You are designing a data leak prevention solution for your VPC environment. You want your VPC instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third-party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the internet. Which of the following options would you consider?


Options are :

  • Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes. (Correct)
  • Implement network access control lists to allow specific destinations, with an implicit deny for everything else.
  • Implement security groups and configure outbound rules to only permit traffic to the software depots.
  • Move all your instances into private VPC subnets, remove default routes from all routing tables, and add specific routes to the software depots and distributions only.

Answer : Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.

A benefits enrollment company is hosting a 3-tier web application running in a VPC on AWS which includes a NAT (Network Address Translation) instance in the public web tier. There is enough provisioned capacity for the expected workload for the new fiscal year benefit enrollment period plus some extra overhead. Enrollment proceeds nicely for two days and then the web tier becomes unresponsive. Upon investigation using CloudWatch and other monitoring tools, it is discovered that there is an extremely large and unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over port 80 from a country where the benefits company has no customers. The web tier instances are so overloaded that benefit enrollment administrators cannot even SSH into them. Which activity would be useful in defending against this attack?


Options are :

  • Create 15 Security Group rules to block the attacking IP addresses over port 80.
  • Create an inbound NACL (Network Access Control List) associated with the web tier subnet with deny rules to block the attacking IP addresses. (Correct)
  • Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the main route table with the new EIP.
  • Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (Internet Gateway).

Answer : Create an inbound NACL (Network Access Control List) associated with the web tier subnet with deny rules to block the attacking IP addresses.
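
A sketch of adding the deny rules the correct option describes: one numbered inbound DENY entry per attacking address, placed before the subnet's allow rules. The NACL ID and the addresses are placeholders.

```python
# Add inbound DENY entries for each attacking IP to the web tier subnet's NACL.
# Rule numbers are evaluated lowest-first, so these sit ahead of the allow
# rules. NACL ID and addresses are placeholders.
import boto3

ec2 = boto3.client("ec2")
ATTACKERS = ["198.51.100.7", "198.51.100.8"]   # ... up to the 15 observed IPs

for offset, ip in enumerate(ATTACKERS):
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",   # web tier subnet's NACL
        RuleNumber=10 + offset,                  # low numbers evaluate first
        Protocol="6",                            # TCP
        RuleAction="deny",
        Egress=False,                            # inbound rule
        CidrBlock=f"{ip}/32",
        PortRange={"From": 80, "To": 80},
    )
```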

A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPSec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user. Which two approaches can satisfy these objectives? (Choose 2 answers)


Options are :

  • Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket. (Correct)
  • The application authenticates against the IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the appropriate S3 bucket.
  • Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
  • The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket. (Correct)
  • The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) security service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.

Answer : Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
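
A sketch of the identity-broker half of the answer: after LDAP verification (not shown), the broker calls STS GetFederationToken with a policy scoped to the user's own S3 prefix and hands the temporary credentials back to the application. The bucket name, prefix scheme, and LDAP check are assumptions.

```python
# Identity-broker sketch: after the user's LDAP credentials are verified
# (omitted here), mint federated credentials limited to that user's S3 prefix.
# Bucket name and prefix layout are hypothetical.
import json
import boto3

BUCKET = "example-corp-user-data"

def federated_credentials_for(username: str, duration=3600):
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/home/{username}/*",  # per-user keyspace
        }],
    }
    sts = boto3.client("sts")
    resp = sts.get_federation_token(
        Name=username[:32],            # federated user name (2-32 chars)
        Policy=json.dumps(policy),     # further restricts the broker's own permissions
        DurationSeconds=duration,
    )
    return resp["Credentials"]         # AccessKeyId / SecretAccessKey / SessionToken
```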
