AWS SAP-C00 Certified Solution Architect Professional Exam Set 9

You are implementing AWS Direct Connect. You intend to use AWS public service endpoints such as Amazon S3 across the AWS Direct Connect link. You want other Internet traffic to use your existing link to an Internet Service Provider. What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?


Options are :

  • Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.
  • Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific routes to your network in your VPC.
  • Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure; advertise specific routes for your network to AWS. (Correct)
  • Configure a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default route to AWS using BGP.

Answer : Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure; advertise specific routes for your network to AWS.
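
For illustration only: a public virtual interface can be provisioned with the AWS SDK, as in the boto3 sketch below. The connection ID, VLAN, ASN, peer addresses and advertised prefixes are placeholder values, not part of the question.

    import boto3

    dx = boto3.client('directconnect')

    # Create a public virtual interface on the existing Direct Connect connection.
    # routeFilterPrefixes lists the specific public routes advertised to AWS over BGP.
    dx.create_public_virtual_interface(
        connectionId='dxcon-EXAMPLE',                       # placeholder connection ID
        newPublicVirtualInterface={
            'virtualInterfaceName': 's3-public-vif',
            'vlan': 101,
            'asn': 65000,                                   # your on-premises BGP ASN
            'amazonAddress': '175.45.176.1/30',
            'customerAddress': '175.45.176.2/30',
            'addressFamily': 'ipv4',
            'routeFilterPrefixes': [{'cidr': '203.0.113.0/24'}],  # specific routes for your network
        },
    )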

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 3

The IT infrastructure that AWS provides complies with the following IT security standards, including:


Options are :

  • PCI DSS Level 1, ISO 27001, ITAR and FIPS 140-2 (Correct)
  • HIPAA, Cloud Security Alliance (CSA) and Motion Picture Association of America (MPAA)
  • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70 Type II), SOC 2 and SOC 3 (Correct)
  • FISMA, DIACAP, and FedRAMP (Correct)
  • All of the above

Answer : PCI DSS Level 1, ISO 27001, ITAR and FIPS 140-2; SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70 Type II), SOC 2 and SOC 3; FISMA, DIACAP, and FedRAMP

Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency-sensitive portions of the web site. The most latency-sensitive component of the application involves reading user preferences to support web site personalization and ad selection. In addition to running your application in multiple regions, which option will support this application's requirements?


Options are :

  • Serve user content from S3, CloudFront and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers propagating updates to each table. (Correct)
  • Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront and Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers propagating DynamoDB updates.
  • Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront with dynamic content and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region.
  • Serve user content from S3, CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.

Answer : Serve user content from S3, CloudFront and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers propagating updates to each table.
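
As a rough sketch of the correct option, an SQS worker in each region can drain preference-change messages and apply them to that region's local DynamoDB table. The queue URL, table name and message format below are assumptions for illustration.

    import json
    import boto3

    QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/pref-changes'  # placeholder queue
    table = boto3.resource('dynamodb').Table('UserPreferences')                  # placeholder table
    sqs = boto3.client('sqs')

    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
        for msg in resp.get('Messages', []):
            change = json.loads(msg['Body'])          # e.g. {"userId": "42", "adCategory": "ski"}
            table.put_item(Item=change)               # apply the preference change to the local table
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])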

You are running a news website in the eu-west-1 region that updates every 15 minutes. The website has a worldwide audience. It uses an Auto Scaling group behind an Elastic Load Balancer and an Amazon RDS database. Static content resides on Amazon S3 and is distributed through Amazon CloudFront. Your Auto Scaling group is set to trigger a scale-up event at 60% CPU utilization. You use an Amazon RDS extra large DB instance with 10,000 Provisioned IOPS; its CPU utilization is around 80%, while freeable memory is in the 2 GB range. Web analytics reports show that the average load time of your web pages is around 1.5 to 2 seconds, but your SEO consultant wants to bring down the average load time to under 0.5 seconds. How would you improve page load times for your users? (Choose 3 answers)


Options are :

  • Switch the Amazon RDS database to the high memory extra large instance type; set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region. (Correct)
  • Lower the scale-up trigger of your Auto Scaling group to 30% so it scales more aggressively. (Correct)
  • Configure Amazon CloudFront dynamic content support to enable caching of re-usable content from your site.
  • Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries. (Correct)

Answer : Switch the Amazon RDS database to the high memory extra large instance type; set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region. Lower the scale-up trigger of your Auto Scaling group to 30% so it scales more aggressively. Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries.
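
For reference, lowering the scale-up trigger amounts to pointing a 30% CPU CloudWatch alarm at the group's scale-up policy, as in this boto3 sketch; the group and alarm names are placeholders.

    import boto3

    autoscaling = boto3.client('autoscaling')
    cloudwatch = boto3.client('cloudwatch')

    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName='news-web-asg',        # placeholder group name
        PolicyName='scale-up',
        AdjustmentType='ChangeInCapacity',
        ScalingAdjustment=2,
    )

    cloudwatch.put_metric_alarm(
        AlarmName='news-web-cpu-30',
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'news-web-asg'}],
        Statistic='Average',
        Period=300,
        EvaluationPeriods=1,
        Threshold=30.0,                             # scale up earlier than the previous 60%
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=[policy['PolicyARN']],
    )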

AWS DVA-C00 Certified Developer Associate Practice Exam Set 4

Your company is storing millions of sensitive transactions across thousands of 100-GB files that must be encrypted in transit and at rest. Analysts concurrently depend on subsets of files, which can consume up to 5 TB of space, to generate simulations that can be used to steer business decisions. You are required to design an AWS solution that can cost-effectively accommodate the long-term storage and in-flight subsets of data.


Options are :

  • Use HDFS on Amazon EMR, and run simulations on subsets in ephemeral drives on Amazon EC2.
  • Use Amazon Simple Storage Service (S3) with server-side encryption, and run simulations on subsets in ephemeral drives on Amazon EC2.
  • Use HDFS on Amazon Elastic MapReduce (EMR), and run simulations on subsets in-memory on Amazon Elastic Compute Cloud (EC2). (Correct)
  • Use Amazon S3 with server-side encryption, and run simulations on subsets in-memory on Amazon EC2.
  • Store the full data set in encrypted Amazon Elastic Block Store (EBS) volumes, and regularly capture snapshots that can be cloned to EC2 workstations.

Answer : Use HDFS on Amazon Elastic MapReduce (EMR), and run simulations on subsets in-memory on Amazon Elastic Compute Cloud (EC2).

Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don't want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs of your NOC members?


Options are :

  • Use your on-premises SAML 2.0-compliant identity provider (IDP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
  • Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.
  • Use your on-premises SAML 2.0-compliant identity provider (IDP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console. (Correct)
  • Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.

Answer : Use your on-premises SAML 2.0-compliant identity provider (IDP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.
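
A minimal sketch of the SAML flow, assuming the IdP has already returned a base64-encoded SAML assertion; the role and provider ARNs are placeholders. The temporary credentials returned here can then be exchanged for a console sign-in URL through the AWS federation endpoint.

    import boto3

    sts = boto3.client('sts')

    saml_assertion_b64 = '<base64-encoded SAMLResponse obtained from the on-premises IdP>'

    resp = sts.assume_role_with_saml(
        RoleArn='arn:aws:iam::123456789012:role/NOC-EC2-Admin',          # placeholder role
        PrincipalArn='arn:aws:iam::123456789012:saml-provider/CorpIdP',  # placeholder IdP
        SAMLAssertion=saml_assertion_b64,
        DurationSeconds=3600,
    )

    creds = resp['Credentials']   # AccessKeyId / SecretAccessKey / SessionToken for the NOC member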

Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and uses company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise, and if required you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?


Options are :

  • A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
  • A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
  • Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
  • Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3. (Correct)

Answer : Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
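
For illustration, the lifecycle rule that archives the original uploads to Glacier could be applied as below; the bucket name, prefix and 7-day window are assumptions.

    import boto3

    s3 = boto3.client('s3')

    s3.put_bucket_lifecycle_configuration(
        Bucket='training-videos',                         # placeholder bucket
        LifecycleConfiguration={
            'Rules': [{
                'ID': 'archive-original-mp4',
                'Filter': {'Prefix': 'originals/'},       # only the high-resolution source files
                'Status': 'Enabled',
                'Transitions': [{'Days': 7, 'StorageClass': 'GLACIER'}],
            }],
        },
    )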

AWS DVA-C01 Certified Developer Associate Practice Exam Set 7

You have an application running on an EC2 instance that will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3. How should the application use AWS credentials to access the S3 bucket securely?


Options are :

  • Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user's credentials from the EC2 instance user data.
  • Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
  • Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata. (Correct)
  • Use the AWS account access keys; the application retrieves the credentials from the source code of the application.

Answer : Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata.
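
A minimal sketch of the correct option: when the application runs under an instance role, boto3 picks up the temporary credentials from the instance metadata automatically, so the code only needs to check the object and sign the URL. Bucket and key names are placeholders.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3')   # credentials come from the EC2 instance role via instance metadata

    def presign_download(bucket, key, expires=300):
        try:
            s3.head_object(Bucket=bucket, Key=key)        # verify the object exists first
        except ClientError:
            return None                                   # missing object (or no permission)
        return s3.generate_presigned_url(
            'get_object', Params={'Bucket': bucket, 'Key': key}, ExpiresIn=expires)

    url = presign_download('private-downloads', 'reports/2014-Q1.pdf')   # placeholder names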

A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS and produce a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?


Options are :

  • Use a CloudFront download distribution to serve the JPEGs to the end users, and install the current commercial search product, along with a Java container for the website, on EC2 instances and use Route53 with DNS round-robin.
  • Use S3 with reduced redundancy to store and serve the scanned files; install the commercial search application on EC2 instances and configure with auto scaling and an Elastic Load Balancer.
  • Model the environment using CloudFormation; use an EC2 instance running an Apache web server and an open-source search application; stripe multiple standard EBS volumes together to store the JPEGs and search index.
  • Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL.
  • Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple Availability Zones. (Correct)

Answer : Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple Availability Zones.

How is AWS readily distinguished from other vendors in the traditional IT computing landscape?


Options are :

  • Experienced. Scalable and elastic. Secure. Cost-effective. Reliable
  • Secure. Flexible. Cost-effective. Scalable and elastic. Global
  • Flexible. Cost-effective. Dynamic. Secure. Experienced.
  • Secure. Flexible. Cost-effective. Scalable and elastic. Experienced (Correct)

Answer : Secure. Flexible. Cost-effective. Scalable and elastic. Experienced

AWS Solutions Architect Associate 2019 with Practice Test Set 6

Your customer is willing to consolidate their log streams (access logs, application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to data samples extracted from the last 12 hours. Which approach meets these requirements?


Options are :

  • Configure Amazon CloudTrail to receive custom logs, and use EMR to apply heuristics on the logs.
  • Set up an Auto Scaling group of EC2 syslog servers, store the logs on S3, and use EMR to apply heuristics on the logs.
  • Send all the log events to Amazon SQS, and set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.
  • Send all the log events to Amazon Kinesis, and develop a client process to apply heuristics on the logs. (Correct)

Answer : Send all the log events to Amazon Kinesis, and develop a client process to apply heuristics on the logs.
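
A rough sketch of the Kinesis-based option: producers put each log event onto a stream, and a client process reads records and applies the heuristics. The stream name, event format and single-shard consumer are simplified placeholders (a real consumer would iterate over all shards).

    import json
    import boto3

    kinesis = boto3.client('kinesis')
    STREAM = 'consolidated-logs'   # placeholder stream name

    def apply_heuristics(event):
        print('analyzing', event)  # placeholder for the real-time heuristic analysis

    # Producer: one call per log event, keyed by its source so related events share a shard.
    kinesis.put_record(StreamName=STREAM,
                       Data=json.dumps({'source': 'web-01', 'line': 'GET /index 200'}).encode(),
                       PartitionKey='web-01')

    # Consumer: read from a single shard and apply heuristics to each record.
    it = kinesis.get_shard_iterator(StreamName=STREAM,
                                    ShardId='shardId-000000000000',
                                    ShardIteratorType='TRIM_HORIZON')['ShardIterator']
    for rec in kinesis.get_records(ShardIterator=it, Limit=100)['Records']:
        apply_heuristics(json.loads(rec['Data']))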

When you put objects in Amazon S3, what is the indication that an object was successfully stored?


Options are :

  • An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful. (Correct)
  • Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.
  • A success code is inserted into the S3 object metadata.
  • Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.

Answer : An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
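
For illustration, a single-part PUT can be verified by comparing the MD5 of the payload with the ETag returned alongside the HTTP 200 (this comparison does not hold for multipart or SSE-KMS uploads); bucket and key names are placeholders.

    import hashlib
    import boto3

    s3 = boto3.client('s3')
    body = b'hello, durable world'

    resp = s3.put_object(Bucket='my-bucket', Key='greeting.txt', Body=body)   # placeholder names

    ok = (resp['ResponseMetadata']['HTTPStatusCode'] == 200 and
          resp['ETag'].strip('"') == hashlib.md5(body).hexdigest())
    print('stored successfully' if ok else 'verification failed')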

How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?


Options are :

  • Simply create a new volume in the other AZ and specify the original volume as the source.
  • Detach the volume, then use the ec2-migrate-volume command to move it to another AZ.
  • Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ. (Correct)
  • Detach the volume and attach it to another EC2 instance in the other AZ.

Answer : Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ.
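
A minimal sketch of the snapshot approach; the volume ID and Availability Zones are placeholders.

    import boto3

    ec2 = boto3.client('ec2')

    snap = ec2.create_snapshot(VolumeId='vol-0123456789abcdef0',       # placeholder source volume
                               Description='migrate to another AZ')
    ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

    # Snapshots are regional, so the new volume can be created in any AZ of the region.
    new_vol = ec2.create_volume(SnapshotId=snap['SnapshotId'],
                                AvailabilityZone='eu-west-1b')          # target AZ
    print(new_vol['VolumeId'])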

AWS DVA-C00 Certified Developer Associate Practice Exam Set 3

Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs, and a Multi-AZ RDS instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load? Why?


Options are :

  • No, if the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact.
  • No, if the cache node fails you can always get the same data from the DB without having any availability impact.
  • Yes, you should deploy the Memcached ElastiCache cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails.
  • Yes, you should deploy two Memcached ElastiCache clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails. (Correct)

Answer : Yes, you should deploy two Memcached ElastiCache clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails.

In AWS, which security aspects are the customer's responsibility? (Choose 3 answers)


Options are :

  • Encryption of EBS (Elastic Block Storage) volumes (Correct)
  • Decommissioning storage devices
  • Security Group and ACL (Access Control List) settings (Correct)
  • Patch management on the EC2 instance's operating system (Correct)
  • Life-cycle management of IAM credentials
  • Controlling physical access to compute resources

Answer : Encryption of EBS (Elastic Block Storage) volumes; Security Group and ACL (Access Control List) settings; Patch management on the EC2 instance's operating system

Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of Spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority. How should you implement such a system?


Options are :

  • Use two SQS queues, one for high-priority messages, the other for default priority. Transformation instances first poll the high-priority queue; if there is no message, they poll the default-priority queue. (Correct)
  • Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
  • Use Route 53 latency-based routing to send high-priority tasks to the closest transformation instances.
  • Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

Answer : Use two SQS queues, one for high-priority messages, the other for default priority. Transformation instances first poll the high-priority queue; if there is no message, they poll the default-priority queue.
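
A compact sketch of the two-queue pattern: each worker polls the high-priority queue first and only falls back to the default queue when it is empty. The queue URLs and the transformation step are placeholders.

    import boto3

    sqs = boto3.client('sqs')
    HIGH = 'https://sqs.us-east-1.amazonaws.com/123456789012/transform-high'      # placeholder URLs
    DEFAULT = 'https://sqs.us-east-1.amazonaws.com/123456789012/transform-default'

    def transform(body):
        print('transforming', body)   # placeholder for the real transformation work

    def next_task():
        for url in (HIGH, DEFAULT):                       # premium work is always checked first
            resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1, WaitTimeSeconds=0)
            msgs = resp.get('Messages', [])
            if msgs:
                return url, msgs[0]
        return None, None

    url, msg = next_task()
    if msg:
        transform(msg['Body'])
        sqs.delete_message(QueueUrl=url, ReceiptHandle=msg['ReceiptHandle'])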

AWS SAP-C00 Certified Solution Architect Professional Exam Set 1

You are running a successful multitier web application on AWS and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You have also implemented ElastiCache as a database caching layer between the application tier and the database tier. Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database.


Options are :

  • Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica. (Correct)
  • Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte-range requests.
  • Generate the reports by querying the ElastiCache database caching tier.
  • Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.

Answer : Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
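
For reference, the reporting replica can be created in one call; the instance identifiers and instance class below are placeholders.

    import boto3

    rds = boto3.client('rds')

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier='reporting-replica',          # placeholder name for the replica
        SourceDBInstanceIdentifier='webapp-master',        # the existing Multi-AZ master
        DBInstanceClass='db.m3.xlarge',
    )
    # Point the reporting tier's read-only connection string at the replica's endpoint.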

You've been brought in as solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC. The configuration is as follows: VPC: vpc-2f8bc447; IGW: igw-2d8bc445; NACL: acl-208bc448. Subnets: Web servers: subnet-258bc44d; Application servers: subnet-248bc44c; Database servers: subnet-9189c6f9. Route Tables: rtb-238bc44b, rtb-218bc449. Associations: subnet-258bc44d : rtb-218bc449; subnet-248bc44c : rtb-238bc44b; subnet-9189c6f9 : rtb-238bc44b. You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet; application and database servers cannot have direct access to the Internet. Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?


Options are :

  • Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance. (Correct)
  • Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c.
  • Create a bastion and NAT instance in subnet-248bc44c, and add a route from rtb-238bc44b to subnet-258bc44d.
  • Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.

Answer : Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance.
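
As a sketch of the correct option (the NAT instance ID is a placeholder; the route table ID comes from the question): disable source/destination checking on the NAT instance and give the private route table a default route through it.

    import boto3

    ec2 = boto3.client('ec2')
    NAT_INSTANCE = 'i-0abc1234def567890'   # placeholder: NAT instance launched in subnet-258bc44d

    # A NAT instance forwards traffic it did not originate, so source/dest check must be off.
    ec2.modify_instance_attribute(InstanceId=NAT_INSTANCE, SourceDestCheck={'Value': False})

    # Default route for the private route table used by the application and database subnets.
    ec2.create_route(RouteTableId='rtb-238bc44b',
                     DestinationCidrBlock='0.0.0.0/0',
                     InstanceId=NAT_INSTANCE)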

You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months; each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?


Options are :

  • Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS
  • Replace the RDS instance with a 6-node Redshift cluster with 96TB of storage (Correct)
  • Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
  • Ingest data into a DynamoDB table and move old data to a Redshift cluster

Answer : Replace the RDS instance with a 6-node Redshift cluster with 96TB of storage
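
For illustration, a 6-node dense-storage cluster (16 TB per node, 96 TB total) could be provisioned as below; the identifiers, node type and credentials are placeholders.

    import boto3

    redshift = boto3.client('redshift')

    redshift.create_cluster(
        ClusterIdentifier='sensor-warehouse',     # placeholder cluster name
        ClusterType='multi-node',
        NodeType='ds2.8xlarge',                   # dense-storage node, roughly 16 TB each
        NumberOfNodes=6,                          # 6 x 16 TB = 96 TB
        DBName='sensors',
        MasterUsername='admin',
        MasterUserPassword='CHANGE_ME_123',       # placeholder credential
    )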

AWS Solutions Architect Associate 2019 with Practice Test Set 6

Which of the following are AWS storage services? (Choose 2 answers)


Options are :

  • AWS ElastiCache (Correct)
  • Import/Export (Correct)
  • AWS Relational Database Service (AWS RDS)
  • AWS Glacier

Answer : AWS ElastiCache; Import/Export

You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the Web server using client certificate authentication. The solution must be resilient. Which of the following options would you consider for configuring the web server infrastructure? (Choose 2 answers)


Options are :

  • Configure ELB with HTTPS listeners, and place the Web servers behind it.
  • Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.
  • Configure your Web servers with EIPs. Place the Web servers in a Route53 Record Set and configure health checks against all Web servers. (Correct)
  • Configure ELB with TCP listeners on TCP/443, and place the Web servers behind it. (Correct)

Answer : Configure your Web servers with EIPs. Place the Web servers in a Route53 Record Set and configure health checks against all Web servers. Configure ELB with TCP listeners on TCP/443, and place the Web servers behind it.

Select the correct set of options. These are the initial settings for the default security group:


Options are :

  • Allow no inbound traffic, allow all outbound traffic and Allow instances associated with this security group to talk to each other (Correct)
  • Allow no inbound traffic, allow all outbound traffic and does NOT allow instances associated with this security group to talk to each other
  • Allow all inbound traffic, allow no outbound traffic and allow instances associated with this security group to talk to each other
  • Allow all inbound traffic, allow all outbound traffic and does NOT allow instances associated with this security group to talk to each other

Answer : Allow no inbound traffic, allow all outbound traffic and Allow instances associated with this security group to talk to each other

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 9

Your company previously configured a heavily used, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a Direct Connect connection and would like to start using the new connection. After configuring Direct Connect settings in the AWS Console, which of the following options will provide the most seamless transition for your users?


Options are :

  • Delete your existing VPN connection to avoid routing loops, configure your Direct Connect router with the appropriate settings, and verify network traffic is leveraging Direct Connect.
  • Update your VPC route tables to point to the Direct Connect connection, configure your Direct Connect router with the appropriate settings, verify network traffic is leveraging Direct Connect, and then delete the VPN connection.
  • Configure your Direct Connect router with a higher BGP priority than your VPN router, verify network traffic is leveraging Direct Connect, and then delete your existing VPN connection.
  • Configure your Direct Connect router, update your VPC route tables to point to the Direct Connect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the Direct Connect connection. (Correct)

Answer : Configure your Direct Connect router, update your VPC route tables to point to the Direct Connect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the Direct Connect connection.

A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team. How would you optimize this scenario to solve performance issues and automate the process as much as possible?


Options are :

  • Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
  • Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
  • Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard.
  • Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard. (Correct)

Answer : Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard.
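
A minimal sketch of the notification half of the correct option: the batch job publishes to an SNS topic to which the on-premises dashboard system is subscribed over HTTPS. The topic name and endpoint are placeholders.

    import boto3

    sns = boto3.client('sns')

    topic_arn = sns.create_topic(Name='dashboard-refresh')['TopicArn']     # idempotent

    # One-time setup: the on-premises system exposes an HTTPS endpoint and confirms the subscription.
    sns.subscribe(TopicArn=topic_arn, Protocol='https',
                  Endpoint='https://dashboard.example.com/sns')            # placeholder endpoint

    # At the end of each hourly batch run against Redshift:
    sns.publish(TopicArn=topic_arn, Subject='Batch complete',
                Message='New aggregates are available; refresh the dashboard.')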

Your company produces customer-commissioned, one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to head-up displays, GPS rear-view cams and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA, across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?


Options are :

  • Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of G2 instances in a placement group. (Correct)
  • Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
  • Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an Auto Scaling group of G2 instances in a placement group.
  • None

Answer : Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of G2 instances in a placement group.
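
For reference, the GPU workers would be launched into a cluster placement group to get the low-latency networking the assessments need; the AMI, instance count and instance type below are placeholders (in practice these instances would come from the Auto Scaling group named in the answer).

    import boto3

    ec2 = boto3.client('ec2')

    ec2.create_placement_group(GroupName='cuda-assessments', Strategy='cluster')

    ec2.run_instances(
        ImageId='ami-0123456789abcdef0',            # placeholder CUDA-enabled AMI
        InstanceType='g2.8xlarge',                  # GPU instances for the failure-mode models
        MinCount=4, MaxCount=4,                     # placeholder cluster size
        Placement={'GroupName': 'cuda-assessments'},
    )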

Practice Exam : AWS Certified Solutions Architect Associate

Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a database hosted on AWS, which service should you use?


Options are :

  • Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database. (Correct)
  • Amazon ElastiCache to store the writes until the writes are committed to the database.
  • Amazon RDS with Provisioned IOPS up to the anticipated peak write throughput.
  • Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.

Answer : Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.

Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and uses company tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise, and if required you may need to pay for a consultant. How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?


Options are :

  • A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
  • A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
  • Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
  • Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3. (Correct)

Answer : Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.

You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability?


Options are :

  • File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs. (Correct)
  • File a change request to implement Alias Resource support in the application. Use a Route 53 Alias Resource Record to distribute load on two application servers in different AZs.
  • File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs.
  • File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP Listener and Cross-Zone Load Balancing enabled, with two application servers in different AZs.

Answer : File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP Listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.
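
A short sketch of enabling Proxy Protocol on a Classic Load Balancer with a TCP listener, so the application servers can recover the client IP from the prepended Proxy Protocol header; the load balancer name and backend port are placeholders.

    import boto3

    elb = boto3.client('elb')
    LB = 'legacy-app-elb'          # placeholder Classic Load Balancer with a TCP listener

    elb.create_load_balancer_policy(
        LoadBalancerName=LB,
        PolicyName='EnableProxyProtocol',
        PolicyTypeName='ProxyProtocolPolicyType',
        PolicyAttributes=[{'AttributeName': 'ProxyProtocol', 'AttributeValue': 'true'}],
    )

    # Attach the policy to the backend port the application servers listen on.
    elb.set_load_balancer_policies_for_backend_server(
        LoadBalancerName=LB, InstancePort=8080, PolicyNames=['EnableProxyProtocol'])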

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 10

A customer is deploying an SSL-enabled web application to AWS and would like to implement a separation of roles between the EC2 service administrators, who are entitled to log in to instances and make API calls, and the security officers, who will maintain and have exclusive access to the application's X.509 certificate that contains the private key.


Options are :

  • Configure system permissions on the web servers to restrict access to the certificate only to the authorized security officers
  • Upload the certificate to an S3 bucket owned by the security officers and accessible only by the EC2 role of the web servers.
  • Configure the web servers to retrieve the certificate upon boot from a CloudHSM that is managed by the security officers.
  • Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on an ELB. (Correct)

Answer : Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on an ELB.
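
As an illustration of the separation of roles, an IAM policy limited to the security officers' group could restrict the server-certificate actions as sketched below; the policy name and account ID are placeholders.

    import json
    import boto3

    iam = boto3.client('iam')

    cert_admin_policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': [
                'iam:UploadServerCertificate',
                'iam:GetServerCertificate',
                'iam:UpdateServerCertificate',
                'iam:DeleteServerCertificate',
                'iam:ListServerCertificates',
            ],
            'Resource': 'arn:aws:iam::123456789012:server-certificate/*',   # placeholder account
        }],
    }

    iam.create_policy(PolicyName='SecurityOfficerCertAccess',
                      PolicyDocument=json.dumps(cert_admin_policy))
    # Attach this policy only to the security officers' group; the EC2 administrators get no
    # iam:*ServerCertificate* actions, and SSL terminates on the ELB.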

A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as data store. The main web application best runs on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week. Recently, a new chat feature has been implemented in Node.js and needs to be integrated in the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?


Options are :

  • Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe
  • Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe (Correct)
  • Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes (Correct)
  • Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe

Answer : Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes.
