AWS SAP-C00 Certified Solution Architect Professional Exam Set 6

You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations: the VM's single 10 GB VMDK is almost full; the virtual network interface still uses the 10 Mbps driver, which leaves your 100 Mbps WAN connection completely underutilized; it is currently running on a highly customized Windows VM within a VMware environment; and you do not have the installation media. This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements?


Options are :

  • Use Import/Export to import the VMDK as an EBS snapshot and attach it to EC2.
  • Use the ec2-bundle-instance API to import an image of the VM into EC2.
  • Use the EC2 VM Import Connector for vCenter to import the VM into EC2. (Correct)
  • Use S3 to create a backup of the VM and restore the data into EC2.

Answer : Use the EC2 VM Import Connector for vCenter to import the VM into EC2.

Which of the following statements is correct about AWS Direct Connect?


Options are :

  • Connections to AWS Direct Connect require double clad fiber for 1 gigabit Ethernet with auto-negotiation enabled for the port.
  • To use AWS Direct Connect, your network must be colocated with a new AWS Direct Connect location.
  • AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 50 gigabit Ethernet cable.
  • An AWS Direct Connect location provides access to Amazon Web Services in the region it is associated with. (Correct)

Answer : An AWS Direct Connect location provides access to Amazon Web Services in the region it is associated with.

You are designing a photo-sharing mobile app. The application will store all pictures in a single Amazon S3 bucket. Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3. You want to configure security to handle potentially millions of users in the most secure manner possible. What should your server-side application do when a new user registers on the photo-sharing mobile application?


Options are :

  • Record the users' information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
  • Record the users' information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service AssumeRole function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app. (Correct)
  • Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.
  • Create a set of long-term credentials using the AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
  • Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.

Answer : Record the users' information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service AssumeRole function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
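The winning pattern above can be sketched server-side. The Python below builds the parameters for an STS AssumeRole call that scopes the temporary credentials down to one user's S3 prefix; the role ARN, bucket, and helper names are illustrative, and the actual boto3 call is shown only as a comment so the sketch stays self-contained:

```python
import json

def build_assume_role_request(role_arn, user_id, bucket):
    """Build parameters for an STS AssumeRole call that restricts a
    mobile user to their own S3 prefix (names are illustrative)."""
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{user_id}/*",
        }],
    }
    return {
        "RoleArn": role_arn,
        "RoleSessionName": f"mobile-{user_id}",
        "Policy": json.dumps(session_policy),   # session policy narrows the role
        "DurationSeconds": 3600,                # temporary credentials expire
    }

# Server side, with boto3, the flow would then be roughly:
#   creds = boto3.client("sts").assume_role(
#       **build_assume_role_request(role_arn, user_id, bucket))["Credentials"]
params = build_assume_role_request(
    "arn:aws:iam::123456789012:role/MobileAppS3Role", "user42", "photo-bucket")
```

Because the mobile app only ever holds these short-lived, per-user credentials, a leaked device never exposes long-term keys or another user's photos.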

You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives?


Options are :

  • Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the instance.
  • Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Configure synchronous, block-level replication to an identically configured instance in us-east-1b.
  • Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS snapshots are performed every 15 minutes.
  • Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume. (Correct)
  • Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that EBS snapshots are performed every 15 minutes.

Answer : Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800 GB SSD ephemeral disks provided with the instance. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume.

An organization is setting up their website on AWS. The organization is working on various security measures to be performed on the AWS EC2 instances. Which of the below mentioned security mechanisms will not help the organization avoid future data leaks and identify security weaknesses?


Options are :

  • Perform a hardening test on the AWS instance
  • Perform a code check for any memory leaks (Correct)
  • Run penetration testing on AWS with prior approval from Amazon
  • Perform SQL injection for application testing

Answer : Perform a code check for any memory leaks

Is there any way to own a direct connection to Amazon Web Services?


Options are :

  • Yes, you can via an Amazon dedicated connection.
  • No, you can create an encrypted tunnel to a VPC, but you cannot own the connection.
  • Yes, you can via AWS Direct Connect. (Correct)
  • No, AWS only allows access from the public internet.

Answer : Yes, you can via AWS Direct Connect.

You are designing an intrusion detection and prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet. Which of the following options would you consider? (Choose 2 answers)


Options are :

  • Implement Elastic Load Balancing with SSL listeners in front of the web applications.
  • Implement a reverse proxy layer in front of the web servers and configure IDS/IPS agents on each reverse proxy server. (Correct)
  • Implement IDS/IPS agents on each instance running in the VPC.
  • Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic. (Correct)

Answer : Implement a reverse proxy layer in front of the web servers and configure IDS/IPS agents on each reverse proxy server; configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic.

A customer has a 10 Gbps AWS Direct Connect connection to an AWS region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventually consistent) rather than an ACID (Atomicity, Consistency, Isolation, Durability) consistency model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes. How can you reduce the load on your on-premises database resources in the most cost-effective way?


Options are :

  • Provision an RDS read replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.
  • Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS. (Correct)
  • Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.
  • Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.

Answer : Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.

A web design company currently runs several FTP servers that their 250 customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum. What AWS architecture would you recommend?


Options are :

  • Create an auto-scaling group of FTP servers with a scaling policy to automatically scale in when minimum network traffic on the auto-scaling group is below a given threshold. Load a central list of FTP users from S3 as part of the user data startup script on each instance.
  • Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.
  • Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process.
  • Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' policy variable. (Correct)
  • Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.

Answer : Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' policy variable.
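The winning option leans on the IAM `${aws:username}` policy variable, which lets one group policy confine every user to their own prefix. A hedged sketch of what such a policy document might look like, built with plain Python (the bucket name and statement Sids are made up):

```python
import json

def per_user_prefix_policy(bucket):
    """Group policy sketch: each IAM user may only list and touch keys
    under a prefix matching their own user name, via ${aws:username}."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListOwnPrefix",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                # Only list keys under the caller's own prefix
                "Condition": {"StringLike": {"s3:prefix": "${aws:username}/*"}},
            },
            {
                "Sid": "ReadWriteOwnPrefix",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                # ${{ }} escapes the literal ${aws:username} in the f-string
                "Resource": f"arn:aws:s3:::{bucket}/${{aws:username}}/*",
            },
        ],
    }

policy_json = json.dumps(per_user_prefix_policy("design-uploads"), indent=2)
```

Because the variable is resolved per request, one policy attached to one group covers all 250 customers without per-user statements.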

You require the ability to analyze a large amount of data, which is stored on Amazon S3, using Amazon Elastic MapReduce. You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing. Which of the below would be the most cost-efficient way to reduce the runtime of the job?


Options are :

  • Use smaller instances that have higher aggregate I/O performance. (Correct)
  • Create more, smaller files on Amazon S3.
  • Create fewer, larger files on Amazon S3.
  • Add additional cc2.8xlarge instances by introducing a task group.

Answer : Use smaller instances that have higher aggregate I/O performance.

Company B is launching a new game app for mobile devices. Users will log into the game using their existing social media account to streamline data capture. Company B would like to directly save player data and scoring information from the mobile app to a DynamoDB table named Score Data. When a user saves their game, the progress data will be stored in the Game State S3 bucket. What is the best approach for storing data to DynamoDB and S3?


Options are :

  • Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation. (Correct)
  • Use Login with Amazon, allowing users to sign in with an Amazon account, providing the mobile app with access to the Score Data DynamoDB table and the Game State S3 bucket.
  • Use an IAM user with access credentials assigned a role providing access to the Score Data DynamoDB table and the Game State S3 bucket for distribution with the mobile app.
  • Use an EC2 instance that is launched with an EC2 role providing access to the Score Data DynamoDB table and the Game State S3 bucket, and that communicates with the mobile app via web services.

Answer : Use temporary security credentials that assume a role providing access to the Score Data DynamoDB table and the Game State S3 bucket using web identity federation.

Refer to the architecture diagram above of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances, which are used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost-effective and efficient manner?


Options are :

  • Handle high-priority jobs before lower-priority jobs by assigning a priority metadata field to SQS messages.
  • Implement message passing between EC2 instances within a batch by exchanging messages through SQS.
  • Implement fault tolerance against EC2 instance failure, since messages would remain in SQS and work can continue with recovery of the EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3.
  • Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.
  • Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness. (Correct)

Answer : Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness.
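The correct option's queue-driven scaling can be illustrated with a pure scaling rule of the kind a pair of CloudWatch alarms approximates: one worker per N queued messages, clamped to the Auto Scaling group's min/max. The thresholds below are illustrative, not AWS defaults:

```python
def desired_workers(queue_depth, msgs_per_worker=10, min_workers=1, max_workers=20):
    """Target number of batch instances for a given SQS queue depth:
    one worker per `msgs_per_worker` queued messages, clamped to the
    Auto Scaling group's configured min/max size."""
    needed = -(-queue_depth // msgs_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, needed))
```

For example, `desired_workers(95)` targets 10 instances, while an empty queue falls back to the group minimum of 1; in AWS the same effect comes from scale-out/scale-in alarms on the `ApproximateNumberOfMessagesVisible` metric.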

You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be created in the second region? (Choose 2 answers)


Options are :

  • Security Groups
  • Launch configurations
  • EC2 Key Pairs
  • Route 53 Record Sets (Correct)
  • IAM Roles
  • Elastic IP Addresses (EIP) (Correct)

Answer : Route 53 Record Sets; Elastic IP Addresses (EIP)

Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as a payment failure. Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders. How can you implement the order fulfillment process while making sure that the emails are delivered reliably?


Options are :

  • Use an SQS queue to manage all process tasks. Use an auto-scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
  • Use SWF with an auto-scaling group of activity workers and a decider instance in another auto-scaling group with min/max=1. Use the decider instance to send emails to customers.
  • Use SWF with an auto-scaling group of activity workers and a decider instance in another auto-scaling group with min/max=1. Use SES to send emails to customers. (Correct)
  • Add a business process management application to your Elastic Beanstalk app server and reuse the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.

Answer : Use SWF with an auto-scaling group of activity workers and a decider instance in another auto-scaling group with min/max=1. Use SES to send emails to customers.

Which of the following cannot be used to manage Amazon ElastiCache and perform administrative tasks?


Options are :

  • AWS CloudWatch (Correct)
  • ElastiCache command line interface (CLI)
  • Amazon S3
  • AWS software development kits (SDKs)

Answer : AWS CloudWatch

Which of the following statements is correct about the number of security groups and rules applicable for an EC2-Classic instance and an EC2-VPC network interface?


Options are :

  • In EC2-Classic, you can associate an instance with up to 500 security groups and add up to 50 rules to a security group. In EC2-VPC, you can associate a network interface with up to 5 security groups and add up to 100 rules to a security group.
  • In EC2-Classic, you can associate an instance with up to 5 security groups and add up to 100 rules to a security group. In EC2-VPC, you can associate a network interface with up to 500 security groups and add up to 50 rules to a security group.
  • In EC2-Classic, you can associate an instance with up to 5 security groups and add up to 50 rules to a security group. In EC2-VPC, you can associate a network interface with up to 500 security groups and add up to 100 rules to a security group.
  • In EC2-Classic, you can associate an instance with up to 500 security groups and add up to 100 rules to a security group. In EC2-VPC, you can associate a network interface with up to 5 security groups and add up to 50 rules to a security group. (Correct)

Answer : In EC2-Classic, you can associate an instance with up to 500 security groups and add up to 100 rules to a security group. In EC2-VPC, you can associate a network interface with up to 5 security groups and add up to 50 rules to a security group.

A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases; under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses are allowed at a time and can be added through an API. How should they architect their solution?


Options are :

  • Whitelist the ELB IP addresses and route payment requests from the application servers through the ELB.
  • Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances.
  • Whitelist the VPC Internet Gateway public IP and route payment requests through the Internet Gateway.
  • Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API. (Correct)

Answer : Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance's public IP address to the payment validation whitelist API.
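The boot script in the marked answer could look roughly like the following Python sketch. The whitelist API endpoint is hypothetical, and the instance-metadata fetch plus the actual HTTP call are shown only as comments so the sketch stays self-contained:

```python
import ipaddress
import json
import urllib.request

# Standard EC2 instance-metadata URL for the instance's public IPv4 address
METADATA_IP_URL = "http://169.254.169.254/latest/meta-data/public-ipv4"

def build_whitelist_call(public_ip, api_url="https://payments.example.com/whitelist"):
    """Validate the instance's public IP and build the POST request the
    boot script would send to the (hypothetical) whitelist API."""
    ipaddress.ip_address(public_ip)  # raises ValueError on garbage input
    body = json.dumps({"ip": public_ip}).encode()
    return urllib.request.Request(
        api_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# On boot, the real script would first fetch its own address, then call the API:
#   ip = urllib.request.urlopen(METADATA_IP_URL, timeout=2).read().decode()
#   urllib.request.urlopen(build_whitelist_call(ip))
req = build_whitelist_call("203.0.113.10")
```

Note the 4-address cap in the question means the script would also need to retire stale entries as instances cycle, which the API's delete operation (not shown) would handle.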

Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three Availability Zones (AZs), which architecture provides high availability?


Options are :

  • A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ.
  • A web tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
  • A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB, and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment. (Correct)
  • A web tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and one RDS instance deployed with read replicas in the two other AZs.

Answer : A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling group behind an ELB, and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.

You have a periodic image analysis application that gets some files as input, analyzes them, and for each input file writes some data in output to a text file. The number of input files per day is high and concentrated in a few hours of the day. Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results; it takes almost 20 hours per day to complete the process. Which services could be used to reduce the elaboration time and improve the availability of the solution?


Options are :

  • EBS with Provisioned IOPS to store I/O files, SNS to distribute elaboration commands to a group of hosts working in parallel, and Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
  • S3 to store I/O files, SQS to distribute elaboration commands to a group of hosts working in parallel, and Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
  • S3 to store I/O files, SNS to distribute elaboration commands to a group of hosts working in parallel, and Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications. (Correct)
  • EBS with Provisioned IOPS to store I/O files, SQS to distribute elaboration commands to a group of hosts working in parallel, and Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.

Answer : S3 to store I/O files, SNS to distribute elaboration commands to a group of hosts working in parallel, and Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.

An AWS customer runs a public blogging website. The site's users upload two million blog entries a month. The average blog entry size is 200 KB. The access rate to blog entries drops to negligible 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication; this drops to no updates after 6 months. The customer wants to use CloudFront to improve their users' load times. Which of the following recommendations would you make to the customer?


Options are :

  • Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity.
  • Create a CloudFront distribution with S3 access restricted only to the CloudFront identity, and partition the blog entries' location in S3 according to the month they were uploaded, to be used with CloudFront behaviors. (Correct)
  • Create a CloudFront distribution with Restrict Viewer Access and Forward Query String set to true and a minimum TTL of 0.
  • Create a CloudFront distribution with the 'US/Europe' price class for US/Europe users and a different CloudFront distribution with 'All Edge Locations' for the remaining users.

Answer : Create a CloudFront distribution with S3 access restricted only to the CloudFront identity, and partition the blog entries' location in S3 according to the month they were uploaded, to be used with CloudFront behaviors.

You are looking to migrate your development and test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a master AWS account using consolidated billing. To make sure you keep within budget, you would like to implement a way for administrators in the master account to have access to stop, delete, and/or terminate resources in both the dev and test accounts. Identify which option will allow you to achieve this goal.


Options are :

  • Link the accounts using consolidated billing. This will give IAM users in the master account access to resources in the dev and test accounts.
  • Create IAM users and a cross-account role in the master account that grants full admin permissions to the dev and test accounts.
  • Create IAM users in the master account. Create cross-account roles in the dev and test accounts that have full admin permissions and grant the master account access. (Correct)
  • Create IAM users in the master account with full admin permissions. Create cross-account roles in the dev and test accounts that grant the master account access to the resources in the account by inheriting permissions from the master account.

Answer : Create IAM users in the master account. Create cross-account roles in the dev and test accounts that have full admin permissions and grant the master account access.
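The cross-account role in each dev/test account needs a trust policy naming the master account as the principal allowed to assume it. A minimal sketch of that document, built with plain Python (the account ID is illustrative):

```python
import json

def cross_account_trust_policy(master_account_id):
    """Trust policy placed on the admin role in the dev/test account so
    that IAM users in the master (billing) account can assume it."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # :root here means "any principal in that account that IAM
            # policies in the master account permit to assume roles"
            "Principal": {"AWS": f"arn:aws:iam::{master_account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

doc = json.dumps(cross_account_trust_policy("111122223333"))
```

The role's permissions policy (full admin, in this scenario) is attached separately; the trust policy only controls who may assume the role.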

A user is hosting a public website on AWS. The user wants to have the database and the app server on an AWS VPC. The user wants to set up a database that can connect to the internet for any patch upgrade but cannot receive any requests from the internet. How can the user set this up?


Options are :

  • Set up the DB in a public subnet with a security group allowing only inbound data.
  • Set up the DB in a local data center and use a private gateway to connect the application with the DB.
  • Set up the DB in a private subnet which is connected to the internet via a NAT for outbound traffic. (Correct)
  • Set up the DB in a private subnet with a security group allowing only outbound traffic.

Answer : Set up the DB in a private subnet which is connected to the internet via a NAT for outbound traffic.
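The private-subnet-plus-NAT arrangement boils down to the DB subnet's route table: the default route points at a NAT device, so outbound patch traffic works while the internet cannot initiate connections to the database. A small illustrative sketch (CIDRs and gateway IDs are made up):

```python
# Route tables modeled as lists of {destination, target} entries.
private_db_route_table = [
    {"destination": "10.0.0.0/16", "target": "local"},       # intra-VPC traffic
    {"destination": "0.0.0.0/0",   "target": "nat-0abc123"}, # outbound via NAT only
]

public_route_table = [
    {"destination": "10.0.0.0/16", "target": "local"},
    {"destination": "0.0.0.0/0",   "target": "igw-0def456"}, # direct internet access
]

def default_target(table):
    """Return the target of the subnet's default (0.0.0.0/0) route."""
    return next(r["target"] for r in table if r["destination"] == "0.0.0.0/0")
```

Because the NAT device performs address translation for outbound flows only, no inbound route from the internet ever reaches the DB subnet.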

An organization is hosting a scalable web application using AWS. The organization has configured an internet-facing ELB and Auto Scaling to make the application scalable. Which of the below mentioned statements must be followed when the organization plans to host the web application on a VPC?


Options are :

  • The ELB can be in a public or a private subnet but should have an ENI which is attached to an Elastic IP.
  • The ELB can be in a public or a private subnet but must have routing tables attached to divert the internet traffic to it.
  • The ELB must be in a public subnet of the VPC to face internet traffic. (Correct)
  • The ELB must not be in any subnet; instead it should face the internet directly.

Answer : The ELB must be in a public subnet of the VPC to face internet traffic.

You are running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible, block-based storage. You have 140 TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. Which backup solution would be most appropriate for this use case?


Options are :

  • Configure your backup software to use S3 as the target of your backups.
  • Use Storage Gateway and configure it to use gateway-stored volumes.
  • Use Storage Gateway and configure it to use gateway-cached volumes. (Correct)
  • Configure your backup software to use Glacier as the target of your data backups.

Answer : Use Storage Gateway and configure it to use gateway-cached volumes.

An organization, which has the AWS account ID 999988887777, has created 50 IAM users. All the users are added to the same group, examkiller. If the organization has enabled each IAM user to log in via the AWS console, which AWS login URL will the IAM users use?


Options are :

  • https://signin.aws.amazon.com/examkiller/
  • https://999988887777.signin.aws.amazon.com/console/ (Correct)
  • https://examkiller.signin.aws.amazon.com/999988887777/console/
  • https://999988887777.aws.amazon.com/examkiller/

Answer : https://999988887777.signin.aws.amazon.com/console/
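The account-ID-based sign-in URL follows a fixed pattern, which a one-liner captures (if an account alias is configured, it replaces the numeric ID in the same position):

```python
def iam_signin_url(account_id):
    """Default IAM console sign-in URL for an AWS account; an account
    alias, when set, takes the place of the numeric account ID."""
    return f"https://{account_id}.signin.aws.amazon.com/console/"
```

All 50 IAM users in the example share this one URL and differ only in the user name and password they enter on the sign-in page.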

A large real-estate brokerage is exploring the option of adding a cost-effective, location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count; the existing mobile app has 5 million users across the US. Which one of the following architectural suggestions would you make to the customer?


Options are :

  • The mobile application will send device location using SQS. EC2 instances will retrieve the relevant others from Dynamo DB AWS Mobile Push will be used to send offers to the mobile application
  • The mobile application will send device location using AWS Nobile Push EC2 instances will retrieve the relevant offers from Dynamo DB EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
  • The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances: Dynamo DB will be used to store and retrieve relevant offers EC2 instances will communicate with mobile earners/device providers to push alerts back to mobile application. (Correct)
  • Use AWS Direct Connect or VPN to establish connect Myth with mobile carriers EC2 instances will receive the mobile applications ?location through carrier connection: RDS will be used to store and relevant offers EC2 instances will communicate with mobile carriers to push alerts back to the mobile application

Answer : The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
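The question leaves the offer-matching step unspecified; as a hypothetical sketch, the EC2 tier in the correct answer could filter offers by great-circle distance from the user's reported location. The haversine formula and all coordinates below are illustrative, not part of the question:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def offers_near(user_loc, offers, radius_km=5.0):
    """Return the offers within radius_km of the user's reported location."""
    lat, lon = user_loc
    return [o for o in offers
            if haversine_km(lat, lon, o["lat"], o["lon"]) <= radius_km]

# Hypothetical offer records (would come from DynamoDB in the scenario).
offers = [
    {"id": "a", "lat": 40.7128, "lon": -74.0060},   # New York
    {"id": "b", "lat": 34.0522, "lon": -118.2437},  # Los Angeles
]
print([o["id"] for o in offers_near((40.73, -74.0), offers)])  # -> ['a']
```

At 5 million users, a production system would partition offers geographically (e.g. geohash keys in DynamoDB) rather than scan every offer per request.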

You are the new IT architect in a company that operates a mobile sleep-tracking application. When activated at night, the mobile app is sending collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table. Every morning, you scan the table to extract and aggregate last night's data on a per-user basis, and store the results in Amazon S3. Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users who are mostly based out of North America. You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose 2 answers)


Options are :

  • Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput (Correct)
  • Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.
  • Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
  • Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
  • Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3. (Correct)

Answer : Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput; have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
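The caching option lowers cost because repeat reads of the same items are served from memory, so the DynamoDB table can be provisioned with less read throughput. A generic read-through cache sketch in plain Python (no real ElastiCache or DynamoDB clients; the table read is a hypothetical stand-in function):

```python
class ReadThroughCache:
    """Minimal read-through cache: repeat reads are served from memory,
    so the backing table (DynamoDB in the scenario) sees only cache
    misses and can be provisioned with lower read throughput."""

    def __init__(self, fetch):
        self._fetch = fetch   # function that reads from the real table
        self._store = {}
        self.misses = 0       # count of reads that actually hit the table

    def get(self, key):
        if key not in self._store:
            self.misses += 1
            self._store[key] = self._fetch(key)
        return self._store[key]

# Hypothetical table read (stands in for a DynamoDB GetItem call).
def read_from_table(user_id):
    return {"user": user_id, "points": []}

cache = ReadThroughCache(read_from_table)
for _ in range(3):
    cache.get("user-42")
print(cache.misses)  # -> 1 table read served 3 requests
```

A real deployment would use ElastiCache (Redis or Memcached) with a TTL and an invalidation strategy rather than an unbounded in-process dict.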

You are tasked with moving a legacy application from a virtual machine running inside your data center to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there is no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)


Options are :

  • Entries in Amazon Route 53 that allow the instance to resolve its dependencies' IP addresses.
  • An Elastic IP address on the VPC instance
  • An AWS Direct Connect link between the VPC and the network housing the internal services. (Correct)
  • A VM Import of the current virtual machine (Correct)
  • An internet gateway to allow a VPN connection.
  • An IP address space that does not conflict with the one on premises (Correct)

Answer : An AWS Direct Connect link between the VPC and the network housing the internal services; a VM Import of the current virtual machine; an IP address space that does not conflict with the one on premises
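The non-conflicting address-space requirement can be checked mechanically before creating the VPC: if the VPC CIDR overlaps the on-premises range, routing back over Direct Connect becomes ambiguous. A sketch using Python's standard `ipaddress` module (the CIDR ranges below are illustrative):

```python
import ipaddress

def cidrs_conflict(vpc_cidr: str, on_prem_cidr: str) -> bool:
    """True if the proposed VPC CIDR overlaps the on-premises range.

    Overlapping ranges would make routes back to the internal
    dependencies over Direct Connect (or VPN) ambiguous.
    """
    return ipaddress.ip_network(vpc_cidr).overlaps(
        ipaddress.ip_network(on_prem_cidr))

print(cidrs_conflict("10.0.0.0/16", "10.0.1.0/24"))    # -> True  (conflict)
print(cidrs_conflict("172.31.0.0/16", "10.0.1.0/24"))  # -> False (safe)
```

Since the app cannot be reconfigured, the VM Import preserves its installed state, and the clean address space plus the Direct Connect link let its hard-coded internal hostnames and IPs keep resolving and routing as before.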

Your company has recently extended its data center into AWS to add burst computing capacity as needed. Members of your network operations center (NOC) need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You do not want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs of your NOC members?


Options are :

  • Use your on-premises SAML 2.0 compliant identity provider to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on endpoint.
  • Use web identity federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.
  • Use your on-premises SAML 2.0 compliant identity provider to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console. (Correct)
  • Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.

Answer : Use your on-premises SAML 2.0 compliant identity provider to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.
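Under the hood, SAML federation exchanges the IdP's assertion for temporary credentials via the STS AssumeRoleWithSAML API. A sketch of that call's parameter shape for boto3's `sts.assume_role_with_saml()`; the ARNs and the assertion below are placeholders, and a real call needs a base64-encoded assertion issued by your identity provider:

```python
def saml_assume_role_params(role_arn, principal_arn, saml_assertion,
                            duration=3600):
    """Build the keyword arguments for sts.assume_role_with_saml().

    The real call, e.g. boto3.client("sts").assume_role_with_saml(**params),
    returns temporary credentials (AccessKeyId, SecretAccessKey,
    SessionToken) that federated NOC members use instead of IAM users.
    """
    return {
        "RoleArn": role_arn,              # role the NOC member assumes
        "PrincipalArn": principal_arn,    # the SAML provider registered in IAM
        "SAMLAssertion": saml_assertion,  # base64 assertion from the IdP
        "DurationSeconds": duration,
    }

params = saml_assume_role_params(
    "arn:aws:iam::999988887777:role/NOCOperators",      # hypothetical role
    "arn:aws:iam::999988887777:saml-provider/CorpIdP",  # hypothetical provider
    "<base64-encoded-assertion>",
)
print(sorted(params))
# -> ['DurationSeconds', 'PrincipalArn', 'RoleArn', 'SAMLAssertion']
```

Because the credentials are temporary and tied to the corporate identity, no per-member IAM users or second sign-in are needed, which is exactly what the question asks for.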
