Certification : AWS Certified Solutions Architect Associate

A company uses CloudFront to provide low-latency access to cached files. An Architect is considering the implications of using CloudFront Regional Edge Caches. Which statements are correct in relation to this service? (choose 2)   

Options are :

  • Regional Edge Caches are enabled by default for CloudFront Distributions (Correct)
  • There are additional charges for using Regional Edge Caches
  • Regional Edge Caches have larger cache-width than any individual edge location, so your objects remain in cache longer at these locations (Correct)
  • Regional Edge Caches are read-only
  • Distributions must be updated to use Regional Edge Caches

Answer : Regional Edge Caches are enabled by default for CloudFront Distributions; Regional Edge Caches have larger cache-width than any individual edge location, so your objects remain in cache longer at these locations

Explanation : Regional Edge Caches are located between origin web servers and global edge locations and have a larger cache than any individual edge location, so your objects remain in cache longer at these locations. Regional Edge Caches aim to get content closer to users and are enabled by default for CloudFront distributions, so you don't need to update your distributions. There are no additional charges for using Regional Edge Caches, and you can write to Regional Edge Caches too.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/
https://aws.amazon.com/about-aws/whats-new/2016/11/announcing-regional-edge-caches-for-amazon-cloudfront/

The company you work for has a presence across multiple AWS regions. As part of disaster recovery planning you are formulating a solution to provide a regional DR capability for an application running on a fleet of Amazon EC2 instances that are provisioned by an Auto Scaling Group (ASG). The application is stateless and reads and writes data to an S3 bucket. You would like to utilize the current AMI used by the ASG as it has some customizations made to it.

What are the steps you might take to enable a regional DR capability for this application? (choose 2)

Options are :

  • Enable cross region replication on the S3 bucket and specify a destination bucket in the DR region (Correct)
  • Enable multi-AZ for the S3 bucket to enable synchronous replication to the DR region
  • Modify the permissions of the AMI so it can be used across multiple regions
  • Copy the AMI to the DR region and create a new launch configuration for the ASG that uses the AMI (Correct)
  • Modify the launch configuration for the ASG in the DR region and specify the AMI

Answer : Enable cross region replication on the S3 bucket and specify a destination bucket in the DR region; Copy the AMI to the DR region and create a new launch configuration for the ASG that uses the AMI

Explanation : There are two parts to this solution. First you need to copy the S3 data to the DR region (as the instances are stateless), then you need to be able to deploy instances from an ASG using the same AMI in that region. CRR is an Amazon S3 feature that automatically replicates data across AWS Regions. With CRR, every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a different AWS Region that you choose, which enables you to copy the existing data across to the DR region. Both Amazon EBS-backed AMIs and instance store-backed AMIs can be copied between regions. You can then use the copied AMI to create a new launch configuration (remember that you cannot modify an ASG launch configuration, you must create a new one). There is no such thing as Multi-AZ for an S3 bucket (it is an RDS concept). Changing permissions on an AMI does not make it usable from another region; the AMI needs to be present within each region to be used.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
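
For illustration, a minimal boto3 sketch of the steps described above; all bucket names, ARNs, AMI IDs and instance types are placeholders, and versioning must already be enabled on both buckets for CRR:

import boto3

SOURCE_REGION = "us-east-1"
DR_REGION = "eu-west-1"

# 1. Replicate the S3 data to a bucket in the DR region
s3 = boto3.client("s3", region_name=SOURCE_REGION)
s3.put_bucket_replication(
    Bucket="app-data-source",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",
            "Destination": {"Bucket": "arn:aws:s3:::app-data-dr"},
        }],
    },
)

# 2. Copy the customized AMI into the DR region (the call is made from the
#    destination region and completes asynchronously)
ec2_dr = boto3.client("ec2", region_name=DR_REGION)
copy = ec2_dr.copy_image(
    Name="app-ami-dr-copy",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion=SOURCE_REGION,
)

# 3. Create a new launch configuration in the DR region that uses the copied AMI
asg_dr = boto3.client("autoscaling", region_name=DR_REGION)
asg_dr.create_launch_configuration(
    LaunchConfigurationName="app-lc-dr-v1",
    ImageId=copy["ImageId"],
    InstanceType="t3.medium",
)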

An application hosted in your VPC uses an EC2 instance with a MySQL DB running on it. The database uses a single 1TB General Purpose SSD (GP2) EBS volume. Recently it has been noticed that the database is not performing well and you need to improve the read performance. What are two possible ways this can be achieved? (choose 2)   

Options are :

  • Add multiple EBS volumes in a RAID 1 array
  • Add multiple EBS volumes in a RAID 0 array (Correct)
  • Add an RDS read replica in another AZ
  • Use a provisioned IOPS volume and specify the number of I/O operations required (Correct)
  • Create an active/passive cluster using MySQL

Answer : Add multiple EBS volumes in a RAID 0 array; Use a provisioned IOPS volume and specify the number of I/O operations required

Explanation : RAID 0 (striping) writes data across multiple disks, which increases performance but provides no redundancy. RAID 1 (mirroring) creates two copies of the data but does not increase performance, only redundancy. Provisioned IOPS SSD (io1) provides higher performance than General Purpose SSD (gp2) and you can specify the IOPS required, up to 50 IOPS per GB and a maximum of 32,000 IOPS. RDS read replicas cannot be created from EC2 instances. Creating an active/passive cluster does not improve read performance as the passive node is not servicing requests; this is used for fault tolerance.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
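
For illustration, a minimal boto3 sketch of creating a Provisioned IOPS volume; the region, AZ, size and IOPS figure are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1 TiB io1 volume with an explicit IOPS figure; the value must stay within
# the allowed IOPS-to-size ratio for the volume type
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=1024,          # GiB
    VolumeType="io1",
    Iops=10000,
)
print(volume["VolumeId"])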

Your company is reviewing their information security processes. One of the items that came out of a recent audit is that there is insufficient data recorded about requests made to a few S3 buckets. The security team requires an audit trail for operations on the S3 buckets that includes the requester, bucket name, request time, request action, and response status.

Which action would you take to enable this logging?

Options are :

  • Create a CloudTrail trail that audits S3 bucket operations
  • Enable S3 event notifications for the specific actions and setup an SNS notification
  • Enable server access logging for the S3 buckets to save access logs to a specified destination bucket (Correct)
  • Create a CloudWatch metric that monitors the S3 bucket operations and triggers an alarm

Answer : Enable server access logging for the S3 buckets to save access logs to a specified destination bucket

Explanation : Server access logging provides detailed records for the requests that are made to a bucket. To track requests for access to your bucket, you can enable server access logging. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and an error code, if relevant. For capturing IAM/user identity information in logs you would need to configure AWS CloudTrail Data Events (however this does not audit the bucket operations required in the question). Amazon S3 event notifications can be sent in response to actions in Amazon S3 like PUTs, POSTs, COPYs, or DELETEs; S3 event notifications record the request action but not the other requirements of the security team. CloudWatch metrics do not include the bucket operations specified in the question.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
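
For illustration, a minimal boto3 sketch of enabling server access logging; the bucket names and prefix are placeholders, and the target bucket must already grant the S3 log delivery group permission to write logs:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="audited-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "access-log-bucket",
            "TargetPrefix": "audited-bucket-logs/",
        }
    },
)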

A colleague has asked you some questions about how AWS charges for DynamoDB. He is interested in knowing what type of workload DynamoDB is best suited for in relation to cost and how AWS charges for DynamoDB. (choose 2)

Options are :

  • DynamoDB is more cost effective for read heavy workloads (Correct)
  • DynamoDB is more cost effective for write heavy workloads
  • Priced based on provisioned throughput (read/write) regardless of whether you use it or not (Correct)
  • You provision for expected throughput but are only charged for what you use
  • DynamoDB scales automatically and you are charged for what you use

Answer : DynamoDB is more cost effective for read heavy workloads; Priced based on provisioned throughput (read/write) regardless of whether you use it or not

Explanation : DynamoDB is more cost effective for read heavy workloads and it is priced based on provisioned throughput (read/write) regardless of whether you use it or not. NOTE: With the DynamoDB Auto Scaling feature you can now have DynamoDB dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. However, this is relatively new and may not yet feature on the exam. See the link below for more details.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
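
For illustration, a minimal boto3 sketch of creating a table with provisioned throughput; the table name, key schema and capacity figures are placeholders, and the read/write capacity units below are billed whether or not they are consumed:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    # Read-heavy profile: far more read capacity than write capacity
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 10},
)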

You are a Solutions Architect at Digital Cloud Training. One of your clients runs an application that writes data to a DynamoDB table. The client has asked how they can implement a function that runs code in response to item level changes that take place in the DynamoDB table. What would you suggest to the client?   

Options are :

  • Enable server access logging and create an event source mapping between AWS Lambda and the S3 bucket to which the logs are written
  • Enable DynamoDB Streams and create an event source mapping between AWS Lambda and the relevant stream (Correct)
  • Create a local secondary index that records item level changes and write some custom code that responds to updates to the index
  • Use Kinesis Data Streams and configure DynamoDB as a producer

Answer : Enable DynamoDB Streams and create an event source mapping between AWS Lambda and the relevant stream

Explanation : DynamoDB Streams help you to keep a list of item level changes or provide a list of item level changes that have taken place in the last 24hrs. Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers: pieces of code that automatically respond to events in DynamoDB Streams. If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. An event source mapping identifies a poll-based event source for a Lambda function. It can be either an Amazon Kinesis or DynamoDB stream. Event sources maintain the mapping configuration except for stream-based services (e.g. DynamoDB, Kinesis) for which the configuration is made on the Lambda side and Lambda performs the polling. You cannot configure DynamoDB as a Kinesis Data Streams producer. You can write Lambda functions to process S3 bucket events, such as the object-created or object-deleted events. For example, when a user uploads a photo to a bucket, you might want Amazon S3 to invoke your Lambda function so that it reads the image and creates a thumbnail for the photo. However, the question asks for a solution that runs code in response to changes in a DynamoDB table, not an S3 bucket. A local secondary index maintains an alternate sort key for a given partition key value; it does not record item level changes.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/
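
For illustration, a minimal boto3 sketch of the approach; the table and function names are placeholders and the Lambda function is assumed to already exist:

import boto3

dynamodb = boto3.client("dynamodb")
lambda_client = boto3.client("lambda")

# Enable a stream of item-level changes on the table
resp = dynamodb.update_table(
    TableName="Orders",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
stream_arn = resp["TableDescription"]["LatestStreamArn"]

# Event source mapping: Lambda polls the stream and invokes the function with batches of records
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName="process-order-changes",
    StartingPosition="LATEST",
    BatchSize=100,
)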

Your company is starting to use AWS to host new web-based applications. A new two-tier application will be deployed that provides customers with access to data records. It is important that the application is highly responsive and retrieval times are optimized. You’re looking for a persistent data store that can provide the required performance. From the list below what AWS service would you recommend for this requirement?   

Options are :

  • ElastiCache with the Memcached engine
  • ElastiCache with the Redis engine (Correct)
  • Kinesis Data Streams
  • RDS in a multi-AZ configuration

Answer : ElastiCache with the Redis engine

Explanation : ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads. There are two different database engines with different characteristics, as per below:
Memcached - not persistent, cannot be used as a data store, supports large nodes with multiple cores or threads, scales out and in by adding and removing nodes
Redis - data is persistent, can be used as a datastore, not multi-threaded, scales by adding shards, not nodes
Kinesis Data Streams is used for processing streams of data; it is not a persistent data store. RDS is not the optimum solution due to the requirement to optimize retrieval times, which is a better fit for an in-memory data store such as ElastiCache.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-elasticache/
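
For illustration, a minimal boto3 sketch of provisioning a persistent Redis data store; the group ID, node type and node count are placeholders:

import boto3

elasticache = boto3.client("elasticache")

# Redis is chosen because the data must persist; the replica adds availability
elasticache.create_replication_group(
    ReplicationGroupId="records-store",
    ReplicationGroupDescription="Persistent in-memory store for data records",
    Engine="redis",
    CacheNodeType="cache.r5.large",
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
)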

You are a Solutions Architect at Digital Cloud Training. A client from a large multinational corporation is working on a deployment of a significant amount of resources into AWS. The client would like to be able to deploy resources across multiple AWS accounts and regions using a single toolset and template. You have been asked to suggest a toolset that can provide this functionality?   

Options are :

  • Use a CloudFormation template that creates a stack and specify the logical IDs of each account and region
  • Use a CloudFormation StackSet and specify the target accounts and regions in which the stacks will be created (Correct)
  • Use a third-party product such as Terraform that has support for multiple AWS accounts and regions
  • This cannot be done, use separate CloudFormation templates per AWS account and region

Answer : Use a CloudFormation StackSet and specify the target accounts and regions in which the stacks will be created

Explanation : AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions. An administrator account is the AWS account in which you create stack sets, and a stack set is managed by signing in to the AWS administrator account in which it was created. A target account is the account into which you create, update, or delete one or more stacks in your stack set. Before you can use a stack set to create stacks in a target account, you must set up a trust relationship between the administrator and target accounts. A regular CloudFormation template cannot be used across regions and accounts; you would need to create copies of the template and then manage updates. You do not need to use a third-party product such as Terraform as this functionality can be delivered through native AWS technology.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudformation/
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html
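
For illustration, a minimal boto3 sketch run from the administrator account; the stack set name, template URL, account IDs and regions are placeholders:

import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack_set(
    StackSetName="baseline-resources",
    TemplateURL="https://s3.amazonaws.com/example-bucket/baseline.yaml",
)

# A single operation fans the stack out to every target account/region combination
cfn.create_stack_instances(
    StackSetName="baseline-resources",
    Accounts=["111111111111", "222222222222"],
    Regions=["us-east-1", "eu-west-1"],
)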

Your client is looking for a fully managed directory service in the AWS cloud. The service should provide an inexpensive Active Directory-compatible service with common directory features. The client is a medium-sized organization with 4000 users. As the client has a very limited budget it is important to select a cost-effective solution.

What would you suggest?

Options are :

  • AWS Active Directory Service for Microsoft Active Directory
  • AWS Simple AD (Correct)
  • Amazon Cognito
  • AWS Single Sign-On

Answer : AWS Simple AD

Explanation : Simple AD is an inexpensive Active Directory-compatible service with common directory features. It is a standalone, fully managed directory on the AWS cloud and is generally the least expensive option. It is the best choice for fewer than 5,000 users and when you don't need advanced AD features. AWS Directory Service for Microsoft Active Directory is the best choice if you have more than 5,000 users and/or need a trust relationship set up; it provides advanced AD features that you don't get with Simple AD. Amazon Cognito is an authentication service for web and mobile apps. AWS Single Sign-On (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-directory-service/
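
For illustration, a minimal boto3 sketch of creating a Simple AD directory; the domain name, password, VPC and subnet IDs are placeholders:

import boto3

ds = boto3.client("ds")

# The "Large" size supports organizations of up to roughly 5,000 users
ds.create_directory(
    Name="corp.example.com",
    Password="REPLACE_WITH_ADMIN_PASSWORD",
    Size="Large",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
)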

You have been asked to implement a solution for capturing, transforming and loading streaming data into an Amazon RedShift cluster. The solution will capture data from Amazon Kinesis Data Streams. Which AWS services would you utilize in this scenario? (choose 2)   

Options are :

  • Kinesis Data Firehose for capturing the data and loading it into RedShift (Correct)
  • Kinesis Video Streams for capturing the data and loading it into RedShift
  • EMR for transforming the data
  • Lambda for transforming the data (Correct)
  • AWS Data Pipeline for transforming the data

Answer : Kinesis Data Firehose for capturing the data and loading it into RedShift; Lambda for transforming the data

Explanation : For this solution Kinesis Data Firehose can be used as it can use Kinesis Data Streams as a source and can capture, transform, and load streaming data into a RedShift cluster. Kinesis Data Firehose can invoke a Lambda function to transform data before delivering it to destinations. Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), and other processing; this solution does not involve video streams. AWS Data Pipeline is used for processing and moving data between compute and storage services; it does not work with streaming data as Kinesis does. Elastic Map Reduce (EMR) is used for processing and analyzing data using the Hadoop framework. It is not used for transforming streaming data.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/analytics/amazon-kinesis/
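
For illustration, a minimal boto3 sketch of a Firehose delivery stream that reads from a Kinesis data stream, invokes a Lambda transform and loads into RedShift via an S3 staging bucket; every ARN, the JDBC URL and the credentials are placeholders:

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="clickstream-to-redshift",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/clickstream",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-source-role",
    },
    RedshiftDestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "ClusterJDBCURL": "jdbc:redshift://cluster.abc123.us-east-1.redshift.amazonaws.com:5439/analytics",
        "CopyCommand": {"DataTableName": "page_views"},
        "Username": "firehose_user",
        "Password": "REPLACE_ME",
        # Firehose stages data in S3 and then issues a COPY into the cluster
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
            "BucketARN": "arn:aws:s3:::firehose-staging-bucket",
        },
        # Lambda transform applied to records before delivery
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [{
                "Type": "Lambda",
                "Parameters": [{
                    "ParameterName": "LambdaArn",
                    "ParameterValue": "arn:aws:lambda:us-east-1:123456789012:function:transform-records",
                }],
            }],
        },
    },
)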

You are creating a design for a web-based application that will be based on a web front-end using EC2 instances and a database back-end. This application is a low priority and you do not want to incur costs in general day to day management.

Which AWS database service can you use that will require the least operational overhead?   

Options are :

  • RDS
  • RedShift
  • EMR
  • DynamoDB (Correct)

Answer : DynamoDB

Explanation : Out of the options in the list, DynamoDB requires the least operational overhead as there are no backups, maintenance periods, software updates etc. to deal with. RDS, RedShift and EMR all require some operational overhead to deal with backups, software updates and maintenance periods.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-dynamodb/

A new Big Data application you are developing will use hundreds of EC2 instances to write data to a shared file system. The file system must be stored redundantly across multiple AZs within a region and allow the EC2 instances to concurrently access the file system. The required throughput is multiple GB per second.

From the options presented which storage solution can deliver these requirements?

Options are :

  • Amazon EBS using multiple volumes in a RAID 0 configuration
  • Amazon EFS (Correct)
  • Amazon S3
  • Amazon Storage Gateway

Answer : Amazon EFS

Explanation : Amazon EFS is the best solution as it is the only option that is a file-level storage solution (not block/object-based), stores data redundantly across multiple AZs within a region, and allows you to concurrently connect up to thousands of EC2 instances to a single filesystem. Amazon EBS volumes cannot be accessed concurrently by multiple instances. Amazon S3 is an object store, not a file system. Amazon Storage Gateway is a range of products used for on-premises storage management and can be configured to cache data locally, back up data to the cloud and also provide a virtual tape backup solution.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-efs/

A company has deployed Amazon RedShift for performing analytics on user data. When using Amazon RedShift, which of the following statements are correct in relation to availability and durability? (choose 2)   

Options are :

  • RedShift always keeps three copies of your data (Correct)
  • Single-node clusters support data replication
  • RedShift provides continuous/incremental backups (Correct)
  • RedShift always keeps five copies of your data
  • Manual backups are automatically deleted when you delete a cluster

Answer : RedShift always keeps three copies of your data; RedShift provides continuous/incremental backups

Explanation : RedShift always keeps three copies of your data and provides continuous/incremental backups. Corrections: single-node clusters do not support data replication, and manual backups are not automatically deleted when you delete a cluster.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-redshift/

You are planning to launch a RedShift cluster for processing and analyzing a large amount of data. The RedShift cluster will be deployed into a VPC with multiple subnets. Which construct is used when provisioning the cluster to allow you to specify a set of subnets in the VPC that the cluster will be deployed into?   

Options are :

  • DB Subnet Group
  • Subnet Group
  • Availability Zone (AZ)
  • Cluster Subnet Group (Correct)

Answer : Cluster Subnet Group

Explanation : You create a cluster subnet group if you are provisioning your cluster in your virtual private cloud (VPC). A cluster subnet group allows you to specify a set of subnets in your VPC. When provisioning a cluster you provide the subnet group and Amazon Redshift creates the cluster on one of the subnets in the group. A DB Subnet Group is used by RDS, and a Subnet Group is used by ElastiCache. Availability Zones are part of the AWS global infrastructure; subnets reside within AZs, but in RedShift you provision the cluster into a Cluster Subnet Group.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/database/amazon-redshift/
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-cluster-subnet-groups.html
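
For illustration, a minimal boto3 sketch; the subnet group name, subnet IDs and cluster settings are placeholders:

import boto3

redshift = boto3.client("redshift")

redshift.create_cluster_subnet_group(
    ClusterSubnetGroupName="analytics-subnet-group",
    Description="Subnets available to the RedShift cluster",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
)

# The subnet group is referenced when the cluster itself is provisioned
redshift.create_cluster(
    ClusterIdentifier="analytics-cluster",
    NodeType="dc2.large",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    ClusterSubnetGroupName="analytics-subnet-group",
)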

There is a temporary need to share some video files that are stored in a private S3 bucket. The consumers do not have AWS accounts and you need to ensure that only authorized consumers can access the files. What is the best way to enable this access?   

Options are :

  • Enable public read access for the S3 bucket
  • Use CloudFront to distribute the files using authorization hash tags
  • Generate a pre-signed URL and distribute it to the consumers (Correct)
  • Configure an allow rule in the Security Group for the IP addresses of the consumers

Answer : Generate a pre-signed URL and distribute it to the consumers

Explanation : S3 pre-signed URLs can be used to provide temporary access to a specific object to those who do not have AWS credentials; this is the best option. Enabling public read access does not restrict the content to authorized consumers. You cannot use CloudFront as hash tags are not a CloudFront authentication mechanism. Security Groups do not apply to S3 buckets.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
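
For illustration, a minimal boto3 sketch of generating a pre-signed URL; the bucket, key and expiry are placeholders:

import boto3

s3 = boto3.client("s3")

# The URL inherits the permissions of the credentials that sign it and
# stops working once the expiry period has passed
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "video-share-bucket", "Key": "launch-event.mp4"},
    ExpiresIn=3600,  # seconds
)
print(url)  # distribute this link to the authorized consumers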

A Solutions Architect is deploying an Auto Scaling Group (ASG) and needs to determine what CloudWatch monitoring option to use. Which of the statements below would assist the Architect in making his decision? (choose 2)   

Options are :

  • Basic monitoring is enabled by default if the ASG is created from the console (Correct)
  • Detailed monitoring is enabled by default if the ASG is created from the CLI (Correct)
  • Basic monitoring is enabled by default if the ASG is created from the CLI
  • Detailed monitoring is chargeable and must always be manually enabled
  • Detailed monitoring is free and can be manually enabled

Answer : Basic monitoring is enabled by default if the ASG is created from the console; Detailed monitoring is enabled by default if the ASG is created from the CLI

Explanation : Basic monitoring sends EC2 metrics to CloudWatch about ASG instances every 5 minutes. Detailed monitoring can be enabled and sends metrics every 1 minute (it is always chargeable). When the launch configuration is created from the CLI, detailed monitoring of EC2 instances is enabled by default. When you enable Auto Scaling group metrics, Auto Scaling sends sampled data to CloudWatch every minute.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/
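
For illustration, a minimal boto3 sketch; the launch configuration name, AMI ID, instance type and ASG name are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Explicitly request basic (5-minute) monitoring; omitting InstanceMonitoring
# in a CLI/SDK call gives detailed (1-minute) monitoring by default
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-basic-monitoring",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    InstanceMonitoring={"Enabled": False},
)

# Group-level Auto Scaling metrics (sampled every minute) are enabled separately
autoscaling.enable_metrics_collection(
    AutoScalingGroupName="web-asg",
    Granularity="1Minute",
)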

An application you are designing receives and processes files. The files are typically around 4GB in size and the application extracts metadata from the files which typically takes a few seconds for each file. The pattern of updates is highly dynamic with times of little activity and then multiple uploads within a short period of time.

What architecture will address this workload the most cost efficiently?

Options are :

  • Upload files into an S3 bucket, and use the Amazon S3 event notification to invoke a Lambda function to extract the metadata (Correct)
  • Place the files in an SQS queue, and use a fleet of EC2 instances to extract the metadata
  • Store the file in an EBS volume which can then be accessed by another EC2 instance for processing
  • Use a Kinesis data stream to store the file, and use Lambda for processing

Answer : Upload files into an S3 bucket, and use the Amazon S3 event notification to invoke a Lambda function to extract the metadata

Explanation : Storing the file in an S3 bucket is cost-efficient, and using S3 event notifications to invoke a Lambda function works well for this unpredictable workload. Kinesis data streams consumers run on EC2 instances (not Lambda). SQS queues have a maximum message size of 256KB; you can use the extended client library for Java to use pointers to a payload on S3, but the maximum payload size is 2GB. Storing the file in an EBS volume and using EC2 instances for processing is not cost-efficient.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
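
For illustration, a minimal boto3 sketch of wiring the bucket to the function; the bucket name and Lambda ARN are placeholders, and the function's resource policy must separately allow S3 to invoke it:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="incoming-files",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)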

A Solutions Architect is developing a mobile web app that will provide access to health related data. The web apps will be tested on Android and iOS devices. The Architect needs to run tests on multiple devices simultaneously and to be able to reproduce issues, and record logs and performance data to ensure quality before release.

What AWS service can be used for these requirements?

Options are :

  • AWS Cognito
  • AWS Device Farm (Correct)
  • AWS Workspaces
  • Amazon Appstream 2.0

Answer : AWS Device Farm

Explanation : AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time. Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily; it is not used for testing. Amazon WorkSpaces is a managed, secure cloud desktop service. Amazon AppStream 2.0 is a fully managed application streaming service.
References:
https://aws.amazon.com/device-farm/

A Solutions Architect is designing a highly-scalable system to track records. Records must remain available for immediate download for three months, and then the records must be deleted.

What's the most appropriate decision for this use case?

Options are :

  • Store the files on Amazon EBS, and create a lifecycle policy to remove the files after three months
  • Store the files on Amazon S3, and create a lifecycle policy to remove the files after three months (Correct)
  • Store the files on Amazon Glacier, and create a lifecycle policy to remove the files after three months
  • Store the files on Amazon EFS, and create a lifecycle policy to remove the files after three months

Answer : Store the files on Amazon S3, and create a lifecycle policy to remove the files after three months

Explanation : With S3 you can create a lifecycle action using the "expiration action element" which expires objects (deletes them) at the specified time. S3 lifecycle actions apply to any storage class, including Glacier; however, Glacier would not allow immediate download. There is no lifecycle policy available for deleting files on EBS and EFS. NOTE: The new Amazon Data Lifecycle Manager (DLM) feature automates the creation, retention, and deletion of EBS snapshots but not the individual files within an EBS volume. This is a new feature that may not yet feature on the exam.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/
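
For illustration, a minimal boto3 sketch of an expiration lifecycle rule; the bucket name and the 90-day figure (standing in for "three months") are placeholders:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="records-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-90-days",
            "Filter": {"Prefix": ""},   # apply to every object in the bucket
            "Status": "Enabled",
            "Expiration": {"Days": 90},
        }]
    },
)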

A Solutions Architect is responsible for a web application that runs on EC2 instances that sit behind an Application Load Balancer (ALB). Auto Scaling is used to launch instances across 3 Availability Zones. The web application serves large image files and these are stored on an Amazon EFS file system. Users have experienced delays in retrieving the files and the Architect has been asked to improve the user experience.

What should the Architect do to improve user experience?

Options are :

  • Move the digital assets to EBS
  • Reduce the file size of the images
  • Cache static content using CloudFront (Correct)
  • Use Spot instances

Answer : Cache static content using CloudFront

Explanation : CloudFront is ideal for caching static content such as the files in this scenario and would increase performance. Moving the files to EBS would not make accessing the files easier or improve performance. Reducing the file size of the images may result in better retrieval times; however, CloudFront would still be the preferable option. Using Spot EC2 instances may reduce EC2 costs but it won't improve user experience.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/

A Solutions Architect needs to transform data that is being uploaded into S3. The uploads happen sporadically and the transformation should be triggered by an event. The transformed data should then be loaded into a target data store.

What services would be used to deliver this solution in the MOST cost-effective manner? (choose 2)

Options are :

  • Configure a CloudWatch alarm to send a notification to CloudFormation when data is uploaded
  • Configure S3 event notifications to trigger a Lambda function when data is uploaded and use the Lambda function to trigger the ETL job (Correct)
  • Configure CloudFormation to provision a Kinesis data stream to transform the data and load it into S3
  • Use AWS Glue to extract, transform and load the data into the target data store (Correct)
  • Configure CloudFormation to provision AWS Data Pipeline to transform the data

Answer : Configure S3 event notifications to trigger a Lambda function when data is uploaded and use the Lambda function to trigger the ETL job; Use AWS Glue to extract, transform and load the data into the target data store

Explanation : S3 event notifications triggering a Lambda function is completely serverless and cost-effective. AWS Glue can trigger ETL jobs that will transform the data and load it into a data store such as S3. Kinesis Data Streams is used for processing data, rather than extracting and transforming it; the Kinesis consumers are EC2 instances, which are not as cost-effective as serverless solutions. AWS Data Pipeline can be used to automate the movement and transformation of data, but it relies on other services to actually transform the data.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
https://aws.amazon.com/glue/
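
For illustration, a minimal sketch of the Lambda handler that the S3 notification would invoke, assuming a Glue job named transform-uploads already exists; the job name and argument keys are placeholders:

import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Start the Glue ETL job for each newly uploaded object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="transform-uploads",
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )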

A Solutions Architect is developing an encryption solution. The solution requires that data keys are encrypted using envelope protection before they are written to disk.

Which solution option can assist with this requirement?

Options are :

  • AWS KMS API (Correct)
  • AWS Certificate Manager
  • API Gateway with STS
  • IAM Access Key

Answer : AWS KMS API

Explanation : The AWS KMS API can be used for encrypting data keys (envelope encryption). AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). IAM access keys are used for signing programmatic requests you make to AWS.
References:
https://docs.aws.amazon.com/kms/latest/APIReference/Welcome.html
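
For illustration, a minimal boto3 sketch of envelope encryption with the KMS API; the key alias is a placeholder:

import boto3

kms = boto3.client("kms")

# GenerateDataKey returns the key in two forms: Plaintext (use it to encrypt the
# data, then discard it) and CiphertextBlob (the data key encrypted under the CMK,
# safe to write to disk alongside the data)
resp = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]
encrypted_key = resp["CiphertextBlob"]

# Later, recover the plaintext data key in order to decrypt the data
recovered_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]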

A Solutions Architect has been asked to suggest a solution for analyzing data in S3 using standard SQL queries. The solution should use a serverless technology.

Which AWS service can the Architect use?

Options are :

  • Amazon Athena (Correct)
  • Amazon RedShift
  • AWS Glue
  • AWS Data Pipeline

Answer : Amazon Athena

Explanation : Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Amazon RedShift is used for analytics but cannot analyze data in S3. AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics; it is not used for analyzing data in S3. AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.
References:
https://aws.amazon.com/athena/
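
For illustration, a minimal boto3 sketch of running a query; the database, table and results bucket are placeholders and assume a table has already been defined over the S3 data:

import boto3

athena = boto3.client("athena")

query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},
)
print(query["QueryExecutionId"])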

An application you manage stores encrypted data in S3 buckets. You need to be able to query the encrypted data using SQL queries and write the encrypted results back to the S3 bucket. As the data is sensitive you need to implement fine-grained control over access to the S3 bucket.

What combination of services represent the BEST options to support these requirements? (choose 2)

Options are :

  • Use Athena for querying the data and writing the results back to the bucket (Correct)
  • Use IAM policies to restrict access to the bucket (Correct)
  • Use bucket ACLs to restrict access to the bucket
  • Use AWS Glue to extract the data, analyze it, and load it back to the S3 bucket
  • Use the AWS KMS API to query the encrypted data, and the S3 API for writing the results

Answer : Use Athena for querying the data and writing the results back to the bucket; Use IAM policies to restrict access to the bucket

Explanation : Athena allows you to easily query encrypted data stored in Amazon S3 and write encrypted results back to your S3 bucket. Both server-side encryption and client-side encryption are supported. With IAM policies, you can grant IAM users fine-grained control over your S3 buckets, which is preferable to using bucket ACLs. AWS Glue is an ETL service and is not used for querying and analyzing data in S3. The AWS KMS API can be used for encryption purposes, however it cannot perform analytics so is not suitable.
References:
https://aws.amazon.com/athena/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-iam/

An application tier of a multi-tier web application currently hosts two web services on the same set of instances. The web services each listen for traffic on different ports. Which AWS service should a Solutions Architect use to route traffic to the service based on the incoming request path?   

Options are :

  • Application Load Balancer (ALB) (Correct)
  • Amazon Route 53
  • Classic Load Balancer (CLB)
  • Amazon CloudFront

Answer : Application Load Balancer (ALB)

Explanation : An Application Load Balancer is a type of Elastic Load Balancer that can use layer 7 (HTTP/HTTPS) protocol data to make forwarding decisions. An ALB supports both path-based routing (e.g. /images or /orders) and host-based routing (e.g. example.com). In this scenario a single EC2 instance is listening for traffic for each application on a different port. You can use a target group that listens on a single port (HTTP or HTTPS) and then uses listener rules to selectively route to a different port on the EC2 instance based on the information in the URL path. So you might have example.com/images going to one back-end port and example.com/orders going to a different back-end port. You cannot use host-based or path-based routing with a CLB. Amazon CloudFront caches content; it does not direct traffic to different ports on EC2 instances. Amazon Route 53 is a DNS service. It can be used to load balance, however it does not have the ability to route based on information in the incoming request path.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
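
For illustration, a minimal boto3 sketch of a path-based listener rule; the listener and target group ARNs are placeholders, and each web service would have its own target group registering the instances on its own port:

import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders/123",
    }],
)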

An application runs on two EC2 instances in private subnets split between two AZs. The application needs to connect to a CRM SaaS application running on the Internet. The vendor of the SaaS application restricts authentication to a whitelist of source IP addresses and only 2 IP addresses can be configured per customer.

What is the most appropriate and cost-effective solution to enable authentication to the SaaS application?

Options are :

  • Use a Network Load Balancer and configure a static IP for each AZ
  • Use multiple Internet-facing Application Load Balancers with Elastic IP addresses
  • Configure a NAT Gateway for each AZ with an Elastic IP address (Correct)
  • Configure redundant Internet Gateways and update the routing tables for each subnet

Answer : Configure a NAT Gateway for each AZ with an Elastic IP address

Explanation : In this scenario you need to connect the EC2 instances to the SaaS application with a source address of one of two whitelisted public IP addresses to ensure authentication works. A NAT Gateway is created in a specific AZ and can have a single Elastic IP address associated with it. NAT Gateways are deployed in public subnets, and the route tables of the private subnets where the EC2 instances reside are configured to forward Internet-bound traffic to the NAT Gateway. You do pay for using a NAT Gateway based on hourly usage and data processing; however, this is still a cost-effective solution. A Network Load Balancer can be configured with a single static IP address for each AZ (the other types of ELB cannot). However, using an NLB is not an appropriate solution as the connections are made outbound from the EC2 instances to the SaaS app, and ELBs are used for distributing inbound connection requests to EC2 instances (only return traffic goes back through the ELB). An ALB does not support static IP addresses and is not suitable for a proxy function. Redundant Internet Gateways are not possible as a VPC has a single Internet Gateway, and an Internet Gateway alone would not provide the two fixed source IP addresses required. AWS Route 53 is a DNS service and is not used as an outbound proxy server, so it is not suitable for this scenario.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
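
For illustration, a minimal boto3 sketch of creating a NAT Gateway with an Elastic IP in each AZ; the subnet IDs are placeholders, and the private subnets' route tables still need a default route pointing at each gateway:

import boto3

ec2 = boto3.client("ec2")

for public_subnet in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId=public_subnet, AllocationId=eip["AllocationId"])
    # eip["PublicIp"] is the address to give the SaaS vendor for whitelisting
    print(public_subnet, eip["PublicIp"], nat["NatGateway"]["NatGatewayId"])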

The website for a new application receives around 50,000 requests each second and the company wants to use multiple applications to analyze the navigation patterns of the users on their website so they can personalize the user experience.

What can a Solutions Architect use to collect page clicks for the website and process them sequentially for each user?

Options are :

  • Amazon SQS standard queue
  • Amazon SQS FIFO queue
  • Amazon Kinesis Streams (Correct)
  • AWS CloudTrail trail

Answer : Amazon Kinesis Streams

Explanation : This is a good use case for Amazon Kinesis streams as it is able to scale to the required load, allow multiple applications to access the records and process them sequentially. Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. Amazon Kinesis streams allow up to 1 MiB of data per second or 1,000 records per second for writes per shard. There is no limit on the number of shards, so you can easily scale Kinesis Streams to accept 50,000 requests per second. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream. Standard SQS queues do not ensure that messages are processed sequentially, and FIFO SQS queues do not scale to the required number of transactions a second. CloudTrail is used for auditing and is not useful here.
References:
https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html
https://aws.amazon.com/kinesis/data-streams/faqs/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/analytics/amazon-kinesis/
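
For illustration, a minimal boto3 sketch of a producer; the stream name is a placeholder, and using the user ID as the partition key keeps each user's clicks in the same shard so they are read back in order:

import boto3
import json

kinesis = boto3.client("kinesis")

def record_click(user_id, page):
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps({"user": user_id, "page": page}).encode("utf-8"),
        PartitionKey=user_id,
    )

record_click("user-42", "/products/widget")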

An AWS user has created a Provisioned IOPS EBS volume which is attached to an EBS optimized instance and configured 1000 IOPS. Based on the EC2 SLA, what is the average IOPS the user will achieve for most of the year?   

Options are :

  • 1000
  • 950
  • 990
  • 900 (Correct)

Answer : 900

Explanation : Unlike gp2, which uses a bucket and credit model to calculate performance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers within 10 percent of the provisioned IOPS performance 99.9 percent of the time over a given year. Therefore you should expect to get 900 IOPS most of the year.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

Which of the following approaches provides the lowest cost for Amazon elastic block store snapshots while giving you the ability to fully restore data?   

Options are :

  • Maintain two snapshots: the original snapshot and the latest incremental snapshot
  • Maintain the original snapshot; subsequent snapshots will overwrite one another
  • Maintain a single snapshot; the latest snapshot is both incremental and complete (Correct)
  • Maintain the most current snapshot; archive the original to Amazon Glacier

Answer : Maintain a single snapshot; the latest snapshot is both incremental and complete

Explanation : You can back up data on an EBS volume by periodically taking snapshots of the volume. The scenario is that you need to reduce storage costs by maintaining as few EBS snapshots as possible whilst ensuring you can restore all data when required. If you take periodic snapshots of a volume, the snapshots are incremental, which means only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed such that you need to retain only the most recent snapshot in order to restore the volume. You cannot just keep the original snapshot as it will not be incremental and complete. You do not need to keep the original and latest snapshot as the latest snapshot is all that is needed. There is no need to archive the original snapshot to Amazon Glacier; EBS copies your data across multiple servers in an AZ for durability.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/

You are a Solutions Architect at Digital Cloud Training. One of your clients is expanding their operations into multiple AWS regions around the world. The client has requested some advice on how to leverage their existing AWS Identity and Access Management (IAM) configuration in other AWS regions. What advice would you give to your client?   

Options are :

  • IAM is a regional service and the client will need to copy the configuration items required across to other AWS regions
  • IAM is a global service and the client can use users, groups, roles, and policies in any AWS region (Correct)
  • The client can use Amazon Cognito to create a single sign-on configuration across multiple AWS regions
  • The client will need to create a VPC peering configuration with each remote AWS region and then allow IAM access across regions

Answer : IAM is a global service and the client can use users, groups, roles, and policies in any AWS region

Explanation : IAM is universal (global) and does not apply to regions, so you will use the same IAM configuration no matter whether you use one or all regions. VPC peering is not required. Amazon Cognito is used for authentication with web and mobile apps; it is not required to make IAM work across regions.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-iam/

You have deployed a highly available web application across two AZs. The application uses an Auto Scaling Group (ASG) and an Application Load Balancer (ALB) to distribute connections between the EC2 instances that make up the web front-end. The load has increased and the ASG has launched new instances in both AZs, however you noticed that the ALB is only distributing traffic to the EC2 instances in one AZ.

From the options below, what is the most likely cause of the issue?

Options are :

  • The ASG has not registered the new instances with the ALB
  • The EC2 instances in one AZ are not passing their health checks
  • Cross-zone load balancing is not enabled on the ALB
  • The ALB does not have a public subnet defined in both AZs (Correct)

Answer : The ALB does not have a public subnet defined in both AZs

Explanation : Internet-facing ELB nodes have public IPs and route traffic to the private IP addresses of the EC2 instances. You need one public subnet in each AZ where the ELB is defined, which is the most likely cause of the issue here. Cross-zone load balancing is enabled on the ALB by default; also, if it was disabled the ALB would still send traffic equally to each configured AZ regardless of the number of hosts in each AZ, so some traffic would still get through. The ASG would automatically register new instances with the ALB. EC2 instance health checks are unlikely to be the issue here as the instances in both AZs are all being launched from the same ASG so should be identically configured. Please refer to the AWS article linked below for detailed information on the configuration described in this scenario.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/

One of your clients has asked you for some advice on an issue they are facing regarding storage. The client uses an on-premise block based storage array which is getting close to capacity. The client would like to maintain a configuration where reads/writes to a subset of frequently accessed data are performed on-premise whilst also alleviating the local capacity issues by migrating data into the AWS cloud. What would you suggest as the BEST solution to the client’s current problems?   

Options are :

  • Implement a Storage Gateway Virtual Tape Library, backup the data and then delete the data from the array
  • Implement a Storage Gateway Volume Gateway in cached mode (Correct)
  • Use S3 copy command to copy data into the AWS cloud
  • Archive data that is not accessed regularly straight into Glacier

Answer : Implement a Storage Gateway Volume Gateway in cached mode

Explanation : A Storage Gateway Volume Gateway in cached mode will store the entire dataset on S3 while a cache of the most frequently accessed data is kept on-site, which addresses both the capacity issue and the requirement for local reads/writes to hot data. Backing up the data and then deleting it is not the best solution when much of the data is accessed regularly. The S3 copy command doesn't help here as the data is not in S3. You cannot archive straight into Glacier; you must store data on S3 first, and archiving is not the best solution to this problem anyway.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/aws-storage-gateway/

You are a Solutions Architect at Digital Cloud Training. Your client’s company is growing and now has over 10,000 users. The client would like to start deploying services into the AWS Cloud including AWS Workspaces. The client expects there to be a large take-up of AWS services across their user base and would like to use their existing Microsoft Active Directory identity source for authentication. The client does not want to replicate account credentials into the AWS cloud.

You have been tasked with designing the identity, authorization and access solution for the customer. Which AWS services will you include in your design? (choose 2)

Options are :

  • Use an AWS Cognito user pool
  • Setup trust relationships to extend authentication from the on-premises Microsoft Active Directory into the AWS cloud (Correct)
  • Use a Large AWS Simple AD
  • Use a Large AWS AD Connector
  • Use the Enterprise Edition of AWS Directory Service for Microsoft Active Directory (Correct)

Answer : Setup trust relationships to extend authentication from the on-premises Microsoft Active Directory into the AWS cloud; Use the Enterprise Edition of AWS Directory Service for Microsoft Active Directory

Explanation : The customer wants to leverage their existing directory but not replicate account credentials into the cloud. Therefore they can use AWS Directory Service for Microsoft Active Directory and create a trust relationship with their existing AD domain. This will allow them to authenticate using local user accounts in their existing directory without creating an AD Domain Controller in the cloud (which would entail replicating account credentials). AWS Directory Service for Microsoft Active Directory is the best choice if you have more than 5,000 users and/or need a trust relationship set up. AWS Simple AD does not support trust relationships with other domains and therefore cannot be used in this situation. AD Connector would be a good solution for this scenario, however it does not support the number of users in the organization (up to 5,000 users only). Amazon Cognito is used for mobile and web app authentication.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-directory-service/

An EC2 instance that you manage has an IAM role attached to it that provides it with access to Amazon S3 for saving log data to a bucket. A change in the application architecture means that you now need to provide the additional ability for the application to securely make API requests to Amazon API Gateway.

Which two methods could you use to resolve this challenge? (choose 2)

Options are :

  • Add an IAM policy to the existing IAM role that the EC2 instance is using granting permissions to access Amazon API Gateway (Correct)
  • Create an IAM role with a policy granting permissions to Amazon API Gateway and add it to the EC2 instance as an additional IAM role
  • Delegate access to the EC2 instance from the API Gateway management console
  • Create a new IAM role with multiple IAM policies attached that grants access to Amazon S3 and Amazon API Gateway, and replace the existing IAM role that is attached to the EC2 instance (Correct)
  • You cannot modify the IAM role assigned to an EC2 instance after it has been launched. You’ll need to recreate the EC2 instance and assign a new IAM role

Answer : Add an IAM policy to the existing IAM role that the EC2 instance is using granting permissions to access Amazon API Gateway; Create a new IAM role with multiple IAM policies attached that grants access to Amazon S3 and Amazon API Gateway, and replace the existing IAM role that is attached to the EC2 instance

Explanation : There are two possible solutions here: in one you create a new IAM role with multiple policies, in the other you add a new policy to the existing IAM role. Contrary to one of the incorrect answers, you can modify the IAM role assigned to an EC2 instance after it has been launched (this was changed quite some time ago now). However, you cannot add multiple IAM roles to a single EC2 instance; if you need to attach multiple policies you must attach them to a single IAM role. There is no such thing as delegating access using the API Gateway management console.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/
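
For illustration, minimal boto3 sketches of both approaches; the role, policy, instance profile and instance identifiers are placeholders:

import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Option 1: attach an additional policy to the role the instance already uses
iam.attach_role_policy(
    RoleName="app-instance-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonAPIGatewayInvokeFullAccess",
)

# Option 2: replace the role by swapping the instance profile association
assoc = ec2.describe_iam_instance_profile_associations(
    Filters=[{"Name": "instance-id", "Values": ["i-0123456789abcdef0"]}]
)["IamInstanceProfileAssociations"][0]
ec2.replace_iam_instance_profile_association(
    AssociationId=assoc["AssociationId"],
    IamInstanceProfile={"Name": "app-instance-profile-v2"},
)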

As a Solutions Architect for Digital Cloud Training you are designing an online shopping application for a new client. The application will be composed of distributed, decoupled components to ensure that the failure of a single component does not affect the availability of the application.

You will be using SQS as the message queueing service and the client has stipulated that the messages related to customer orders must be processed in the order that they were submitted in the online application. The client expects that the peak rate of transactions will not exceed 140 transactions a second.

What will you explain to the client?

Options are :

  • This can be achieved by using a FIFO queue which will guarantee the order of messages (Correct)
  • The only way this can be achieved is by configuring the applications to process messages from the queue in the right order based on timestamps
  • This is fine, standard SQS queues can guarantee the order of the messages
  • This is not possible with SQS as you cannot control the order in the queue

Answer : This can be achieved by using a FIFO queue which will guarantee the order of messages

Explanation : Queues can be either standard or first-in-first-out (FIFO). Standard queues provide a loose-FIFO capability that attempts to preserve the order of messages and provide at-least-once delivery, which means that each message is delivered at least once. Therefore you could not use a standard queue for this solution as it would not be guaranteed that the order of the messages would be maintained. FIFO queues preserve the exact order in which messages are sent and received. If you use a FIFO queue, you don't have to place sequencing information in your message, and they provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it. A FIFO queue fits the solution requirements for this question (FIFO queues support up to 300 transactions per second without batching, comfortably above the client's expected peak of 140 transactions a second). Configuring the application to process messages from the queue based on timestamps is more complex and not necessary when you can implement FIFO queues.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sqs/
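
For illustration, a minimal boto3 sketch; the queue name, message body and group ID are placeholders:

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"
queue = sqs.create_queue(
    QueueName="customer-orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# Messages that share a MessageGroupId are delivered strictly in the order sent
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"orderId": "1001", "item": "widget"}',
    MessageGroupId="orders",
)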

A Solutions Architect is designing a shared service for hosting containers from several customers on Amazon ECS. These containers will use several AWS services. A container from one customer must not be able to access data from another customer.

Which solution should the Architect use to meet the requirements?

Options are :

  • Network ACLs
  • IAM Instance Profile for EC2 instances
  • IAM roles for tasks (Correct)
  • IAM roles for EC2 instances

Answer : IAM roles for tasks

Explanation : IAM roles for ECS tasks enable you to secure your infrastructure by assigning an IAM role directly to the ECS task rather than to the EC2 container instance. This means you can have one task that uses a specific IAM role for access to S3 and one task that uses an IAM role to access DynamoDB. With IAM roles for EC2 instances you assign all of the IAM policies required by tasks in the cluster to the EC2 instances that host the cluster; this does not allow the secure separation requested. An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. Again, this does not allow the secure separation requested. Network ACLs are applied at the subnet level and would not assist here.
References:
https://aws.amazon.com/blogs/compute/help-secure-container-enabled-applications-with-iam-roles-for-ecs-tasks/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/
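
For illustration, a minimal boto3 sketch of a per-customer task role; the role ARN, family name and image are placeholders:

import boto3

ecs = boto3.client("ecs")

# Each customer's task definition points at its own task role, so its containers
# receive only that customer's permissions
ecs.register_task_definition(
    family="customer-a-app",
    taskRoleArn="arn:aws:iam::123456789012:role/customer-a-task-role",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/customer-a-app:latest",
        "memory": 512,
        "essential": True,
    }],
)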

You are a Solutions Architect at Digital Cloud Training. One of your clients has a global presence and their web application runs out of multiple AWS regions. The client wants to personalize the experience for the customers in different parts of the world so they receive a customized application interface in the users’ language. The client has created the customized web applications and needs to ensure customers are directed to the correct application based on their location.

How can this be achieved?

Options are :

  • Use Route 53 with a latency based routing policy that will direct users to the closest region
  • Use CloudFront to cache the content in edge locations
  • Use Route 53 with a geolocation routing policy that directs users based on their geographical location (Correct)
  • Use Route 53 with a multi-value answer routing policy that presents multiple options to the users

Answer : Use Route 53 with a geolocation routing policy that directs users based on their geographical location

Explanation : Latency based routing would direct users to the closest region, but geolocation allows you to configure settings based on specified attributes rather than just latency (distance). Geolocation provides:
- Caters to different users in different countries and different languages
- Contains users within a particular geography and offers them a customized version of the workload based on their specific needs
- Can be used for localizing content and presenting some or all of your website in the language of your users
- Can also protect distribution rights
Multi-value answer routing is used for responding to DNS queries with up to eight healthy records selected at random. CloudFront can cache content but would not provide the personalization features requested.
References:
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-route-53/
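
For illustration, a minimal boto3 sketch of geolocation records; the hosted zone ID, domain and endpoint DNS names are placeholders, and a default ("*") location catches users whose location cannot be determined:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "CNAME",
            "SetIdentifier": "germany",
            "GeoLocation": {"CountryCode": "DE"},
            "TTL": 60,
            "ResourceRecords": [{"Value": "de-app.eu-central-1.elb.amazonaws.com"}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "CNAME",
            "SetIdentifier": "default",
            "GeoLocation": {"CountryCode": "*"},
            "TTL": 60,
            "ResourceRecords": [{"Value": "us-app.us-east-1.elb.amazonaws.com"}],
        }},
    ]},
)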
