Practice Questions : AWS Certified Solutions Architect Associate

You are using encryption with several AWS services and are looking for a solution for secure storage of the keys. Which AWS service provides a hardware-based storage solution for cryptographic keys?   

Options are :

  • Virtual Private Cloud (VPC)
  • Public Key Infrastructure (PKI)
  • CloudHSM (Correct)
  • Key Management Service (KMS)

Answer : CloudHSM

Explanation AWS CloudHSM is a cloud-based hardware security module (HSM) that allows you to easily add secure key storage and high-performance crypto operations to your AWS applications. CloudHSM is a managed service that automates time-consuming administrative tasks, such as hardware provisioning, software patching, high availability, and backups. CloudHSM is one of several AWS services, including AWS Key Management Service (KMS), which offer a high level of security for your cryptographic keys. KMS provides an easy, cost-effective way to manage encryption keys on AWS that meets the security needs for the majority of customer data. A VPC is a logical networking construct within an AWS account. PKI is a term used to describe the whole infrastructure responsible for the usage of public key cryptography. References: https://aws.amazon.com/cloudhsm/details/
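
To illustrate the key-management side, here is a minimal boto3 (Python) sketch of encrypting and decrypting data under a managed key; the key alias alias/app-key is a hypothetical placeholder for a key you would create yourself:

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Encrypt a small payload under an existing key (alias is hypothetical).
    ciphertext = kms.encrypt(
        KeyId="alias/app-key",
        Plaintext=b"sensitive data",
    )["CiphertextBlob"]

    # Decrypt -- KMS identifies the key from metadata in the ciphertext.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
    assert plaintext == b"sensitive data"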

You are developing a multi-tier application that includes loosely-coupled, distributed application components and need to determine a method of sending notifications instantaneously. Using SNS, which transport protocols are supported? (choose 2)

Options are :

  • HTTPS (Correct)
  • SWF
  • Email-JSON (Correct)
  • Lambda
  • FTP

Answer : HTTPS Email-JSON

Explanation Note that the question asks which transport protocols are supported, NOT which subscribers - therefore Lambda is not supported. SNS supports notifications over multiple transport protocols: HTTP/HTTPS – subscribers specify a URL as part of the subscription registration; Email/Email-JSON – messages are sent to registered addresses as email (text-based or JSON-object); SQS – users can specify an SQS standard queue as the endpoint; SMS – messages are sent to registered phone numbers as SMS text messages. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sns/
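
A minimal boto3 sketch of subscribing two of the supported transports to a topic; the topic ARN and endpoints are hypothetical placeholders:

    import boto3

    sns = boto3.client("sns", region_name="us-east-1")
    topic_arn = "arn:aws:sns:us-east-1:123456789012:app-events"  # placeholder

    # HTTPS transport: SNS will POST notifications to this URL.
    sns.subscribe(TopicArn=topic_arn, Protocol="https",
                  Endpoint="https://example.com/sns-handler")

    # Email-JSON transport: notifications arrive as JSON-formatted email.
    sns.subscribe(TopicArn=topic_arn, Protocol="email-json",
                  Endpoint="ops@example.com")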

You have deployed a number of AWS resources using CloudFormation. You need to make some changes to a couple of resources within the stack and are planning how to implement the updates. Due to recent bad experiences, you’re a little concerned about what effects implementing updates to the resources might have on other resources in the stack.

What is the easiest way to proceed cautiously?

Options are :

  • Deploy a new stack to test the changes
  • Create and execute a change set (Correct)
  • Use a direct update
  • Use OpsWorks to manage the configuration changes

Answer : Create and execute a change set

Explanation AWS CloudFormation provides two methods for updating stacks: direct update or creating and executing change sets. When you directly update a stack, you submit changes and AWS CloudFormation immediately deploys them. Use direct updates when you want to quickly deploy your updates. With change sets, you can preview the changes AWS CloudFormation will make to your stack, and then decide whether to apply those changes. Direct updates will not provide the safeguard of being able to preview the changes as change sets do. You do not need to go to the trouble and cost of deploying a new stack. You cannot use OpsWorks to manage the configuration changes; OpsWorks is used for implementing managed Chef and Puppet services. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/aws-cloudformation/ https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html
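
A minimal boto3 sketch of the change-set workflow, assuming an existing stack that exposes a hypothetical InstanceType parameter (stack and change-set names are placeholders):

    import boto3

    cfn = boto3.client("cloudformation")

    # Stage the proposed update as a change set (nothing is deployed yet).
    cfn.create_change_set(
        StackName="my-stack",
        ChangeSetName="resize-instances",
        UsePreviousTemplate=True,
        Parameters=[{"ParameterKey": "InstanceType",
                     "ParameterValue": "m5.large"}],
    )
    cfn.get_waiter("change_set_create_complete").wait(
        StackName="my-stack", ChangeSetName="resize-instances")

    # Preview exactly what would change before deciding to apply it.
    for change in cfn.describe_change_set(
            StackName="my-stack",
            ChangeSetName="resize-instances")["Changes"]:
        rc = change["ResourceChange"]
        print(rc["Action"], rc["LogicalResourceId"])

    # Only after review: apply the change set.
    cfn.execute_change_set(StackName="my-stack",
                           ChangeSetName="resize-instances")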

AWS Regions provide multiple, physically separated and isolated _____________ which are connected with low latency, high throughput, and highly redundant networking. Select the missing term from the options below.   

Options are :

  • Subnets
  • Edge locations
  • Facilities
  • Availability zones (Correct)

Answer : Availability zones

Explanation Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones and are connected with low latency, high throughput, and highly redundant networking. Subnets are created within Availability Zones (AZs); each subnet must reside entirely within one Availability Zone and cannot span zones. Each AZ is located in one or more data centers (facilities). An Edge Location is a CDN endpoint for CloudFront. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

You are a Solutions Architect at Digital Cloud Training. One of your customers runs an application on-premise that stores large media files. The data is mounted to different servers using either the SMB or NFS protocols. The customer is having issues with scaling the storage infrastructure on-premise and is looking for a way to offload the data set into the cloud whilst retaining a local cache for frequently accessed content.

Which of the following is the best solution?

Options are :

  • Use the AWS Storage Gateway File Gateway (Correct)
  • Create a script that migrates infrequently used data to S3 using multi-part upload
  • Use the AWS Storage Gateway Volume Gateway in cached volume mode
  • Establish a VPN and use the Elastic File System (EFS)

Answer : Use the AWS Storage Gateway File Gateway

Explanation File gateway provides a virtual on-premises file server, which enables you to store and retrieve files as objects in Amazon S3. It can be used for on-premises applications, and for Amazon EC2-resident applications that need file storage in S3 for object-based workloads. It is used for flat files only, stored directly on S3. File gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. The AWS Storage Gateway Volume Gateway in cached volume mode is a block-based (not file-based) solution, so you cannot mount the storage with the SMB or NFS protocols. With cached volume mode, the entire dataset is stored on S3 and a cache of the most frequently accessed data is kept on-site. You could mount EFS over a VPN but it would not provide you a local cache of the data. Creating a script that migrates infrequently used data to S3 is possible, but that data would then not be indexed on the primary filesystem so you wouldn't have a method of retrieving it without developing some code to pull it back from S3; this is not the best solution. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/aws-storage-gateway/

Your operations team would like to be notified if an RDS database exceeds certain metric thresholds. They have asked you how this could be automated.

Options are :

  • Create a CloudWatch alarm and associate an SNS topic with it that sends an email notification (Correct)
  • Create a CloudTrail alarm and configure a notification event to send an SMS
  • Setup an RDS alarm and associate an SNS topic with it that sends an email
  • Create a CloudWatch alarm and associate an SQS with it that delivers a message to SES

Answer : Create a CloudWatch alarm and associate an SNS topic with it that sends an email notification

Explanation You can create a CloudWatch alarm that watches a single CloudWatch metric or the result of a math expression based on CloudWatch metrics. The alarm performs one or more actions based on the value of the metric or expression relative to a threshold over a number of time periods. The action can be an Amazon EC2 action, an Amazon EC2 Auto Scaling action, or a notification sent to an Amazon SNS topic; SNS can be configured to send an email notification. CloudTrail is used for auditing API access, not for performance monitoring. CloudWatch performs performance monitoring, so you don't set up alarms in RDS itself. You cannot associate an SQS queue with a CloudWatch alarm. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/management-tools/amazon-cloudwatch/
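
A minimal boto3 sketch of wiring an RDS CPU alarm to an SNS topic that has an email subscription; the topic ARN and DB identifier are hypothetical:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="rds-cpu-high",
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
        Statistic="Average",
        Period=300,              # evaluate in 5-minute windows
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        # The SNS topic (with an email subscription) receives the alert.
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )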

A Solutions Architect is conducting an audit and needs to query several properties of EC2 instances in a VPC. What two methods are available for accessing and querying the properties of an EC2 instance such as instance ID, public keys and network interfaces? (choose 2)   

Options are :

  • Download and run the Instance Metadata Query Tool (Correct)
  • Run the command "curl http://169.254.169.254/latest/meta-data/" (Correct)
  • Run the command "curl http://169.254.169.254/latest/dynamic/instance-identity/"
  • Use the Batch command
  • Use the EC2 Config service

Answer : Download and run the Instance Metadata Query Tool; Run the command "curl http://169.254.169.254/latest/meta-data/"

Explanation This information is stored in the instance metadata on the instance. You can access the instance metadata through a URI or by using the Instance Metadata Query tool. The instance metadata is available at http://169.254.169.254/latest/meta-data. The Instance Metadata Query tool allows you to query the instance metadata without having to type out the full URI or category names. The EC2 Config service and the Batch command are not suitable for accessing this information. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ec2/
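
For illustration, a short Python sketch (standard library only) that queries the metadata service from on the instance itself. The paths shown are real metadata categories; the sketch assumes IMDSv1 is allowed (instances enforcing IMDSv2 additionally require a session token header):

    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data/"

    def metadata(path=""):
        # Works only from within an EC2 instance; the address is link-local.
        with urlopen(BASE + path, timeout=2) as resp:
            return resp.read().decode()

    print(metadata("instance-id"))            # e.g. i-0abcd1234ef567890
    print(metadata("public-keys/"))           # lists key index/name pairs
    print(metadata("network/interfaces/macs/"))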

A solutions architect is building a scalable and fault tolerant web architecture and is evaluating the benefits of the Elastic Load Balancing (ELB) service. Which statements are true regarding ELBs? (select 2)

Options are :

  • For public facing ELBs you must have one public subnet in each AZ where the ELB is defined (Correct)
  • Multiple subnets per AZ can be enabled for each ELB
  • Both types of ELB route traffic to the public IP addresses of EC2 instances
  • Internet facing ELB nodes have public IPs (Correct)
  • Internal-only load balancers require an Internet gateway

Answer : For public facing ELBs you must have one public subnet in each AZ where the ELB is defined; Internet facing ELB nodes have public IPs

Explanation Internet facing ELB nodes have public IPs. Both types of ELB route traffic to the private IP addresses of EC2 instances. For public facing ELBs you must have one public subnet in each AZ where the ELB is defined. Internal-only load balancers do not require an Internet gateway. Only 1 subnet per AZ can be enabled for each ELB. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/

A Solutions Architect is designing the messaging and streaming layers of a serverless application. The messaging layer will manage communications between components and the streaming layer will manage real-time analysis and processing of streaming data.

The Architect needs to select the most appropriate AWS services for these functions. Which services should be used for the messaging and streaming layers? (choose 2)

Options are :

  • Use Amazon SNS for providing a fully managed messaging service (Correct)
  • Use Amazon EMR for collecting, processing and analyzing real-time streaming data
  • Use Amazon CloudTrail for collecting, processing and analyzing real-time streaming data
  • Use Amazon SWF for providing a fully managed messaging service
  • Use Amazon Kinesis for collecting, processing and analyzing real-time streaming data (Correct)

Answer : Use Amazon SNS for providing a fully managed messaging service; Use Amazon Kinesis for collecting, processing and analyzing real-time streaming data

Explanation Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data. With Amazon Kinesis Analytics, you can run standard SQL or build entire streaming applications using SQL. Amazon Simple Notification Service (Amazon SNS) provides a fully managed messaging service for pub/sub patterns using asynchronous event notifications and mobile push notifications for microservices, distributed systems, and serverless applications. Amazon Elastic Map Reduce runs on EC2 instances so is not serverless. Amazon Simple Workflow Service is used for executing tasks, not sending messages. Amazon CloudTrail is used for recording API activity on your account. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/analytics/amazon-kinesis/ https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sns/
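
A minimal boto3 sketch of the two layers side by side: publishing an event to an SNS topic (messaging) and writing a record to a Kinesis stream (streaming). The topic ARN and stream name are hypothetical:

    import json
    import boto3

    sns = boto3.client("sns")
    kinesis = boto3.client("kinesis")

    # Messaging layer: pub/sub notification to all topic subscribers.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
        Message=json.dumps({"orderId": "1234", "status": "created"}),
    )

    # Streaming layer: the record lands on a shard for real-time consumers.
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps({"userId": "42", "page": "/home"}).encode(),
        PartitionKey="42",  # records with the same key map to the same shard
    )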

Using the VPC wizard, you have selected the option "VPC with Public and Private Subnets and Hardware VPN Access". Which of the statements below correctly describe the configuration that will be created? (choose 2)

Options are :

  • A NAT gateway will be created for the private subnet
  • A peering connection will be made between the public and private subnets
  • One subnet will be connected to your corporate data center using an IPSec VPN tunnel (Correct)
  • A virtual private gateway will be created (Correct)
  • A physical VPN device will be allocated to your VPC

Answer : One subnet will be connected to your corporate data center using an IPSec VPN tunnel; A virtual private gateway will be created

Explanation The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet, and a virtual private gateway to enable communication with your own network over an IPsec VPN tunnel. Review the scenario described in the AWS article below for more information. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/ https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario3.html

You are building a small web application running on EC2 that will be serving static content. The user base is spread out globally and speed is important. Which AWS service can deliver the best user experience cost-effectively and reduce the load on the web server?

Options are :

  • Amazon CloudFront (Correct)
  • Amazon S3
  • Amazon RedShift
  • Amazon EBS volume

Answer : Amazon CloudFront

Explanation This is a good use case for CloudFront as the user base is spread out globally and CloudFront can cache the content closer to users and also reduce the load on the web server running on EC2. Amazon S3 is very cost-effective, however a bucket is located in a single region, so performance would suffer for users located far from that region. EBS is not the most cost-effective storage solution, and the data would be located in a single region so latency could be an issue. Amazon RedShift is a data warehouse and is not suitable in this solution. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/

You are using CloudWatch to monitor the performance of AWS Lambda. Which metrics does Lambda track? (choose 2)   

Options are :

  • Total number of connections
  • Total number of transactions
  • Latency per request (Correct)
  • Number of users
  • Total number of requests (Correct)

Answer : Latency per request, Total number of requests

Explanation Lambda automatically monitors Lambda functions and reports metrics through CloudWatch. Lambda tracks the number of requests, the latency per request, and the number of requests resulting in an error. You can view the request rates and error rates using the AWS Lambda Console, the CloudWatch console, and other AWS resources. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/
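
A minimal boto3 sketch of pulling two of those Lambda metrics back out of CloudWatch; the function name my-function is a hypothetical placeholder:

    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def lambda_metric(metric, stat):
        # Invocations (request count) and Duration (per-request latency)
        # live in the AWS/Lambda namespace, keyed by FunctionName.
        return cloudwatch.get_metric_statistics(
            Namespace="AWS/Lambda",
            MetricName=metric,
            Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
            StartTime=datetime.utcnow() - timedelta(hours=1),
            EndTime=datetime.utcnow(),
            Period=300,
            Statistics=[stat],
        )["Datapoints"]

    print(lambda_metric("Invocations", "Sum"))   # total requests
    print(lambda_metric("Duration", "Average"))  # latency in milliseconds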

Which AWS service does API Gateway integrate with to enable users from around the world to achieve the lowest possible latency for API requests and responses?

Options are :

  • S3 Transfer Acceleration
  • CloudFront (Correct)
  • Lambda
  • Direct Connect

Answer : CloudFront

Explanation CloudFront is used as the public endpoint for API Gateway and provides reduced latency and distributed denial of service protection. Direct Connect provides a private network into AWS from your data center. S3 Transfer Acceleration is not used with API Gateway; it is used to accelerate uploads of S3 objects. Lambda is not used to reduce latency for API requests. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-api-gateway/

A Solutions Architect is creating a design for a multi-tiered web application. The application will use multiple AWS services and must be designed with elasticity and high-availability in mind.

Which architectural best practices should be followed to reduce interdependencies between systems? (choose 2)

Options are :

  • Enable automatic scaling for storage and databases
  • Implement asynchronous integration using Amazon SQS queues (Correct)
  • Implement service discovery using static IP addresses
  • Implement well-defined interfaces using a relational database
  • Enable graceful failure through AWS Auto Scaling (Correct)

Answer : Implement asynchronous integration using Amazon SQS queues; Enable graceful failure through AWS Auto Scaling

Explanation Asynchronous integration is another form of loose coupling where an interaction does not need an immediate response (think SQS queue or Kinesis). Graceful failure means building applications such that they handle failure in a graceful manner (reduce the impact of failure and implement retries); Auto Scaling helps to reduce the impact of failure by launching replacement instances. Well-defined interfaces reduce interdependencies in a system by enabling interaction only through specific, technology-agnostic interfaces (e.g. RESTful APIs); a relational database is not an example of a well-defined interface. Service discovery means disparate resources must have a way of discovering each other without prior knowledge of the network topology; usually DNS names and a method of resolution are preferred over static IP addresses, which need to be hardcoded somewhere. Though automatic scaling for storage and databases provides scalability (not necessarily elasticity), it does not reduce interdependencies between systems. References: https://aws.amazon.com/architecture/well-architected/
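
As a sketch of the asynchronous-integration pattern, a producer and a worker decoupled by an SQS queue; the queue URL is a hypothetical placeholder:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"

    # Producer: fire-and-forget; no immediate response is needed.
    sqs.send_message(QueueUrl=queue_url, MessageBody="process-order-1234")

    # Worker: polls at its own pace, so the tiers scale independently.
    messages = sqs.receive_message(QueueUrl=queue_url,
                                   WaitTimeSeconds=20).get("Messages", [])
    for msg in messages:
        print("handling", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])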

You are using encrypted Amazon Elastic Block Store (EBS) volumes with your instances in EC2. A security administrator has asked how encryption works with EBS. Which statements are correct? (choose 2)   

Options are :

  • Volumes created from encrypted snapshots are unencrypted
  • Encryption is supported on all Amazon EBS volume types (Correct)
  • Data in transit between an instance and an encrypted volume is also encrypted (Correct)
  • Data is only encrypted at rest
  • You cannot mix encrypted with unencrypted volumes on an instance

Answer : Encryption is supported on all Amazon EBS volume types; Data in transit between an instance and an encrypted volume is also encrypted

Explanation Encryption is supported on all Amazon EBS volume types; note however that not all EC2 instance types support encrypted volumes. Data in transit between an instance and an encrypted volume is also encrypted (data is encrypted in transit and at rest). You can have encrypted and unencrypted EBS volumes attached to an instance at the same time. Snapshots of encrypted volumes are encrypted automatically. EBS volumes restored from encrypted snapshots are encrypted automatically. EBS volumes created from encrypted snapshots are also encrypted. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
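
A minimal boto3 sketch of creating an encrypted volume, and of producing an encrypted snapshot from an unencrypted one via an encrypted copy; the AZ and snapshot ID are hypothetical:

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-2")

    # New volumes can be encrypted at creation time.
    ec2.create_volume(
        AvailabilityZone="ap-southeast-2a",
        Size=100,
        VolumeType="gp2",
        Encrypted=True,  # uses the default EBS KMS key unless KmsKeyId is set
    )

    # Copying a snapshot with Encrypted=True yields an encrypted snapshot;
    # volumes created from it are then encrypted automatically.
    ec2.copy_snapshot(
        SourceRegion="ap-southeast-2",
        SourceSnapshotId="snap-0123456789abcdef0",
        Encrypted=True,
    )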

A systems integration consultancy regularly deploys and manages multi-tiered web services for customers on AWS. The SysOps team are facing challenges in tracking changes that are made to the web services and rolling back when problems occur.

Which of the approaches below would BEST assist the SysOps team?

Options are :

  • Use Trusted Advisor to record updates made to the web services
  • Use AWS Systems Manager to manage all updates to the web services
  • Use CloudFormation templates to deploy and manage the web services (Correct)
  • Use CodeDeploy to manage version control for the web services

Answer : Use CloudFormation templates to deploy and manage the web services

Explanation When you provision your infrastructure with AWS CloudFormation, the AWS CloudFormation template describes exactly what resources are provisioned and their settings. Because these templates are text files, you simply track differences in your templates to track changes to your infrastructure, similar to the way developers control revisions to source code. For example, you can use a version control system with your templates so that you know exactly what changes were made, who made them, and when. If at any point you need to reverse changes to your infrastructure, you can use a previous version of your template. AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources; however, CloudFormation would be the preferred method of maintaining the state of the overall architecture. AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions. AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment; Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. References: https://aws.amazon.com/cloudformation/resources/

You are developing some code that uses a Lambda function and you would like to enable the function to connect to an ElastiCache cluster within a VPC that you own. What VPC-specific information must you include in your function to enable this configuration? (choose 2)   

Options are :

  • VPC Security Group IDs (Correct)
  • VPC Route Table IDs
  • VPC Peering IDs
  • VPC Logical IDs
  • VPC Subnet IDs (Correct)

Answer : VPC Security Group IDs, VPC Subnet IDs

Explanation To enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect to resources in the VPC. Please see the AWS article linked below for more details on the requirements. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-lambda/ https://docs.aws.amazon.com/lambda/latest/dg/vpc.html
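
A minimal boto3 sketch of passing the VpcConfig block at function creation; the role ARN, subnet/security group IDs, and deployment package are hypothetical:

    import boto3

    lambda_client = boto3.client("lambda")

    with open("function.zip", "rb") as f:
        package = f.read()

    lambda_client.create_function(
        FunctionName="cache-reader",
        Runtime="python3.9",
        Role="arn:aws:iam::123456789012:role/lambda-vpc-role",
        Handler="app.handler",
        Code={"ZipFile": package},
        # The two VPC-specific settings the question asks about:
        VpcConfig={
            "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
            "SecurityGroupIds": ["sg-0123abcd"],
        },
    )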

You just created a new subnet in your VPC and have launched an EC2 instance into it. You are trying to directly access the EC2 instance from the Internet and cannot connect. Which steps should you take to troubleshoot the issue? (choose 2)

Options are :

  • Check that the instance has a public IP address (Correct)
  • Check that you can ping the instance from another subnet
  • Check that there is a NAT Gateway configured for the subnet
  • Check that Security Group has a rule for outbound traffic
  • Check that the route table associated with the subnet has an entry for an Internet Gateway (Correct)

Answer : Check that the instance has a public IP address; Check that the route table associated with the subnet has an entry for an Internet Gateway

Explanation Public subnets are subnets that have "Auto-assign public IPv4 address" set to "Yes" and a route table with an attached Internet Gateway. A NAT Gateway is used for providing outbound Internet access for EC2 instances in private subnets. Checking you can ping from another subnet does not relate to being able to access the instance remotely, as it uses different protocols and a different network path. Security groups are stateful and do not need a rule for outbound traffic; for this solution you would only need to create an inbound rule that allows the relevant protocol. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

The AWS Acceptable Use Policy describes permitted and prohibited behavior on AWS and includes descriptions of prohibited security violations and network abuse. According to the policy, what is AWS's position on penetration testing?   

Options are :

  • AWS allows penetration testing for some resources with prior authorization (Correct)
  • AWS allows penetration testing by customers on their own VPC resources
  • AWS allows penetration testing for all resources
  • AWS does not allow any form of penetration testing

Answer : AWS allows penetration testing for some resources with prior authorization

Explanation Permission is required for all penetration tests. You must complete and submit the AWS Vulnerability / Penetration Testing Request Form to request authorization for penetration testing to or originating from any AWS resources. There is a limited set of resources on which penetration testing can be performed. References: https://aws.amazon.com/security/penetration-testing/

A company is moving some unstructured data into AWS and a Solutions Architect has created a bucket named "contosocustomerdata" in the ap-southeast-2 region.

Which of the following bucket URLs would be valid for accessing the bucket? (choose 2)

Options are :

  • https://amazonaws.s3-ap-southeast-2.com/contosocustomerdata
  • https://contosocustomerdata.s3.amazonaws.com (Correct)
  • https://s3-ap-southeast-2.amazonaws.com/contosocustomerdata (Correct)
  • https://s3-ap-southeast-2.amazonaws.com.contosocustomerdata
  • https://s3.amazonaws.com/contosocustomerdata

Answer : https://contosocustomerdata.s3.amazonaws.com, https://s3-ap-southeast-2.amazonaws.com/contosocustomerdata

Explanation AWS supports S3 URLs in the format of https://<bucketname>.s3.amazonaws.com/<key> (virtual host style addressing) and https://s3-<region>.amazonaws.com/<bucketname>/<key> (path style addressing). References: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html

The development team in a media organization is moving their SDLC processes into the AWS Cloud. Which AWS service is primarily used for software version control?

Options are :

  • Step Functions
  • CodeCommit (Correct)
  • CodeStar
  • CloudHSM

Answer : CodeCommit

Explanation AWS CodeCommit is a fully managed source control service that hosts secure Git-based repositories. AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. References: https://aws.amazon.com/codecommit/

You are a developer at Digital Cloud Training. An application stack you are building needs a message bus to decouple the application components from each other. The application will generate up to 300 messages per second without using batching. You need to ensure that a message is only delivered once and duplicates are not introduced into the queue. It is not necessary to maintain the order of the messages.

Which SQS queue type will you use?

Options are :

  • Long polling queues
  • FIFO queues (Correct)
  • Standard queues
  • Auto Scaling queues

Answer : FIFO queues

Explanation The key fact you need to consider here is that duplicate messages cannot be introduced into the queue. For this reason alone you must use a FIFO queue. The statement about it not being necessary to maintain the order of the messages is meant to confuse you, as that might lead you to think you can use a standard queue, but standard queues don't guarantee that duplicates are not introduced into the queue. FIFO (first-in-first-out) queues preserve the exact order in which messages are sent and received – note that this is not required in the question, but exactly-once processing is. FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it. Standard queues provide a loose-FIFO capability that attempts to preserve the order of messages, and at-least-once delivery, which means that each message is delivered at least once. Long polling is a configuration you can apply to a queue; it is not a queue type. There is no such thing as an Auto Scaling queue. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/application-integration/amazon-sqs/
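
A minimal boto3 sketch of creating and using a FIFO queue with deduplication; the queue name is hypothetical (FIFO queue names must end in .fifo):

    import boto3

    sqs = boto3.client("sqs")

    # Content-based deduplication drops messages whose body hashes to a
    # duplicate within the 5-minute deduplication interval.
    queue_url = sqs.create_queue(
        QueueName="transactions.fifo",
        Attributes={
            "FifoQueue": "true",
            "ContentBasedDeduplication": "true",
        },
    )["QueueUrl"]

    # FIFO sends require a MessageGroupId (the ordering scope).
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody="txn-0001",
        MessageGroupId="payments",
    )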

You have setup multi-factor authentication (MFA) for your root account according to AWS best practices and configured it to work with Google Authenticator on your smart phone. Unfortunately, your smart phone has been lost. What are the options available to access your account as the root user?

Options are :

  • You will need to contact AWS support to request that the MFA device is deactivated and have your password reset
  • Unfortunately, you will no longer be able to access this account as the root user
  • On the AWS sign-in with authentication device web page, choose to sign in using alternative factors of authentication and use the verification email and code to sign in (Correct)
  • Get a user with administrative privileges in your AWS account to deactivate the MFA device assigned to the root account

Answer : On the AWS sign-in with authentication device web page, choose to sign in using alternative factors of authentication and use the verification email and code to sign in

Explanation Multi-factor authentication (MFA) can be enabled/enforced for the AWS account and for individual users under the account. MFA uses an authentication device that continually generates random, six-digit, single-use authentication codes. If your AWS account root user multi-factor authentication (MFA) device is lost, damaged, or not working, you can sign in using alternative methods of authentication. This means that if you can't sign in with your MFA device, you can sign in by verifying your identity using the email and phone that are registered with your account. There is a resolution to this problem as described above, and you do not need to raise a support request with AWS to deactivate the device and reset your password. An administrator can deactivate the MFA device but this does not enable you to access the account as the root user; you must sign in using alternative factors of authentication. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-iam/ https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-broken.html

The financial institution you are working for stores large amounts of historical transaction records. There are over 25TB of records and your manager has decided to move them into the AWS Cloud. You are planning to use Snowball as copying the data would take too long. Which of the statements below are true regarding Snowball? (choose 2)   

Options are :

  • Uses a secure storage device for physical transportation (Correct)
  • Snowball can import to S3 but cannot export from S3
  • Can be used with multipart upload
  • Snowball can be used for migration on-premise to on-premise
  • Petabyte scale data transport solution for transferring data into or out of AWS (Correct)

Answer : Uses a secure storage device for physical transportation; Petabyte scale data transport solution for transferring data into or out of AWS

Explanation Snowball is a petabyte scale data transport solution for transferring data into or out of AWS. It uses a secure storage device for physical transportation. The AWS Snowball Client is software that is installed on a local computer and is used to identify, compress, encrypt, and transfer data. It uses 256-bit encryption (managed with the AWS KMS) and tamper-resistant enclosures with TPM. Snowball can import to S3 or export from S3. Snowball cannot be used with multipart upload. You cannot use Snowball for migration between on-premise data centers. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/migration/aws-snowball/

A Solutions Architect is creating a design for an online gambling application that will process thousands of records. Which AWS service makes it easy to collect, process, and analyze real-time, streaming data?   

Options are :

  • RedShift
  • Kinesis Data Streams (Correct)
  • EMR
  • S3

Answer : Kinesis Data Streams

Explanation Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. Kinesis Data Streams enables real-time processing of streaming big data and is used for rapidly moving data off data producers and then continuously processing the data. Amazon S3 is an object store and does not have any native functionality for collecting, processing or analyzing streaming data. RedShift is a data warehouse that can be used for storing data in a columnar structure for later analysis; it is not however used for streaming data. Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances; it does not collect streaming data. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/analytics/amazon-kinesis/

A Solutions Architect needs to migrate an Oracle database running on RDS onto Amazon RedShift to improve performance and reduce cost. Which combination of tasks using AWS services should be used to execute the migration? (choose 2)   

Options are :

  • Convert the schema using the AWS Schema Conversion Tool (Correct)
  • Configure API Gateway to extract, transform and load the data into RedShift
  • Migrate the database using the AWS Database Migration Service (DMS) (Correct)
  • Enable log shipping from the Oracle database to RedShift
  • Take a snapshot of the Oracle database and restore the snapshot onto RedShift

Answer : Convert the schema using the AWS Schema Conversion Tool; Migrate the database using the AWS Database Migration Service (DMS)

Explanation Convert the data warehouse schema and code from the Oracle database running on RDS using the AWS Schema Conversion Tool (AWS SCT), then migrate data from the Oracle database to Amazon Redshift using the AWS Database Migration Service (AWS DMS). API Gateway is not used for ETL functions. Log shipping and snapshots are not supported migration methods from RDS to RedShift. References: https://aws.amazon.com/getting-started/projects/migrate-oracle-to-amazon-redshift/

A client with 400 staff has started using AWS and wants to provide AWS Management Console access to some of their staff. The company currently uses Active Directory on-premise and would like to continue to configure Role Based Access Control (RBAC) using the current directory service. The client would prefer to avoid complex federation infrastructure and replicating security credentials into AWS.

What is the simplest and most cost-effective solution? (choose 2)

Options are :

  • Use the AWS Directory Service Simple AD
  • Use IAM Roles (Correct)
  • Use Active Directory Service for Microsoft Active Directory
  • Use the AWS Directory Service AD Connector (Correct)
  • Install an Active Directory Domain Controller on EC2 and add it to the on-premise domain

Answer : Use IAM Roles; Use the AWS Directory Service AD Connector

Explanation The key requirements here are that the existing AD is used to allow RBAC into AWS whilst avoiding a federation infrastructure and replicating credentials into AWS. The simplest and most cost-effective solution for an organization with 400 staff is to use a small AD Connector which redirects requests to the on-premise AD. This eliminates the need for directory synchronization and the cost and complexity of hosting a federation infrastructure. IAM roles are used for enabling RBAC to AWS services. Active Directory Service for Microsoft Active Directory does not support a replication mode where you replicate your AD between on-premise and AWS (the question requires that credentials are not replicated anyway). It does support trust relationships, however this is a more complex and expensive solution so is not the best choice. Installing an AD Domain Controller on EC2 and adding it to the on-premise domain would involve replicating security credentials into the AWS cloud, which the client does not want to happen. Simple AD is an inexpensive Active Directory-compatible service in the AWS cloud with common directory features; Simple AD does not support trust relationships with other domains. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-directory-service/

You are a Solutions Architect at Digital Cloud Training. A client has asked you for some advice about how they can capture detailed information about all HTTP requests that are processed by their Internet facing Application Load Balancer (ALB). The client requires information on the requester, IP address, and request type for analyzing traffic patterns to better understand their customer base.

What would you recommend to the client?

Options are :

  • Configure metrics in CloudWatch for the ALB
  • Enable Access Logs and store the data on S3 (Correct)
  • Enable EC2 detailed monitoring
  • Use CloudTrail to capture all API calls made to the ALB

Answer : Enable Access Logs and store the data on S3

Explanation You can enable access logs on the ALB and this will provide the information required, including requester, IP, and request type. Access logs are not enabled by default. You can optionally store and retain the log files on S3. CloudWatch is used for performance monitoring and CloudTrail is used for auditing API access. Enabling EC2 detailed monitoring will not capture the information requested. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
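
A minimal boto3 sketch of enabling ALB access logs to an S3 bucket; the load balancer ARN and bucket name are hypothetical, and the bucket needs a policy that allows ELB log delivery:

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn=("arn:aws:elasticloadbalancing:us-east-1:"
                         "123456789012:loadbalancer/app/my-alb/abc123"),
        Attributes=[
            {"Key": "access_logs.s3.enabled", "Value": "true"},
            {"Key": "access_logs.s3.bucket", "Value": "my-alb-logs"},
            {"Key": "access_logs.s3.prefix", "Value": "prod"},
        ],
    )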

A customer is deploying services in a hybrid cloud model. The customer has mandated that data is transferred directly between cloud data centers, bypassing ISPs.

Which AWS service can be used to enable hybrid cloud connectivity?

Options are :

  • Amazon Route 53
  • Amazon VPC
  • AWS Direct Connect (Correct)
  • IPSec VPN

Answer : AWS Direct Connect

Explanation With AWS Direct Connect, you can connect to all your AWS resources in an AWS Region, transfer your business-critical data directly from your datacenter, office, or colocation environment into and from AWS, bypassing your Internet service provider and removing network congestion. Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. An IPSec VPN can be used to connect to AWS, however it does not bypass the ISPs or Internet. Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you've defined. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/aws-direct-connect/

You are using the Elastic Container Service (ECS) to run a number of containers using the EC2 launch type. To gain more control over scheduling containers you have decided to utilize Blox to integrate a third-party scheduler. The third-party scheduler will use the StartTask API to place tasks on specific container instances. What type of ECS scheduler will you need to use to enable this configuration?   

Options are :

  • ECS Scheduler
  • Custom Scheduler (Correct)
  • Service Scheduler
  • Cron Scheduler

Answer : Custom Scheduler

Explanation Amazon ECS provides a service scheduler (for long-running tasks and applications) and the ability to run tasks manually (for batch jobs or single run tasks), with Amazon ECS placing tasks on your cluster for you. The service scheduler is ideally suited for long running stateless services and applications, but it is not the right choice for placing tasks on specific container instances. Amazon ECS allows you to create your own schedulers that meet the needs of your business, or to leverage third party schedulers. Blox is an open-source project that gives you more control over how your containerized applications run on Amazon ECS. Blox enables you to build schedulers and integrate third-party schedulers with Amazon ECS while leveraging Amazon ECS to fully manage and scale your clusters. Custom schedulers use the StartTask API operation to place tasks on specific container instances within your cluster. Custom schedulers are only compatible with tasks using the EC2 launch type; if you are using the Fargate launch type for your tasks, the StartTask API does not work. A cron scheduler is used in UNIX/Linux but is not a type of ECS scheduler. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ecs/ https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html
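
A minimal boto3 sketch of the StartTask call a custom scheduler would make; the cluster, task definition, and container instance ARN are hypothetical:

    import boto3

    ecs = boto3.client("ecs")

    # Unlike RunTask, StartTask places the task on the exact container
    # instances the scheduler chooses (EC2 launch type only).
    ecs.start_task(
        cluster="prod-cluster",
        taskDefinition="web-app:3",
        containerInstances=[
            "arn:aws:ecs:us-east-1:123456789012:container-instance/abc123",
        ],
    )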

One of the departments in your company has been generating a large amount of data on S3 and you are considering the increasing costs of hosting it. You have discussed the matter with the department head and he explained that data older than 90 days is rarely accessed but must be retained for several years. If this data does need to be accessed, at least 24 hours' notice will be provided.

How can you optimize the costs associated with storage of this data whilst ensuring it is accessible if required?

Options are :

  • Use S3 lifecycle policies to move data to GLACIER after 90 days (Correct)
  • Use S3 lifecycle policies to move data to the STANDARD_IA storage class
  • Select the older data and manually migrate it to GLACIER
  • Implement archival software that automatically moves the data to tape

Answer : Use S3 lifecycle policies to move data to GLACIER after 90 days

Explanation To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. Transition actions define when objects transition to another storage class. For example, you might choose to transition objects to the STANDARD_IA storage class 30 days after you created them, or archive objects to the GLACIER storage class one year after creating them. STANDARD_IA is good for infrequently accessed data and provides faster access times than GLACIER, but is more expensive so not the best option here. GLACIER retrieval times: standard retrieval is 3-5 hours, which is well within the requirements here; you can use Expedited retrievals to access data in 1-5 minutes; you can use Bulk retrievals to access up to petabytes of data in approximately 5-12 hours. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/storage/amazon-s3/ https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html https://aws.amazon.com/about-aws/whats-new/2016/11/access-your-amazon-glacier-data-in-minutes-with-new-retrieval-options/
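
A minimal boto3 sketch of the lifecycle rule described; the bucket name is hypothetical:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="dept-archive-data",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-after-90-days",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }],
        },
    )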

You created a new IAM user account for a temporary employee who recently joined the company. The user does not have permissions to perform any actions. Which statement is true about newly created users in IAM?

Options are :

  • They are created with limited permissions
  • They are created with user privileges
  • They are created with no permissions (Correct)
  • They are created with full permissions

Answer : They are created with no permissions

Explanation Every IAM user starts with no permissions. In other words, by default, users can do nothing, not even view their own access keys. To give a user permission to do something, you can add the permission to the user (that is, attach a policy to the user), or you can add the user to a group that has the intended permission. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_controlling.html https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-identity-compliance/aws-iam/

A Solutions Architect has created a VPC and is in the process of formulating the subnet design. The VPC will be used to host a two-tier application that will include Internet facing web servers, and internal-only DB servers. Zonal redundancy is required.

How many subnets are required to support this requirement?

Options are :

  • 2 subnets
  • 1 subnet
  • 4 subnets (Correct)
  • 6 subnets

Answer : 4 subnets

Explanation Zonal redundancy indicates that the architecture should be split across multiple Availability Zones. Subnets are mapped 1:1 to AZs. A public subnet should be used for the Internet-facing web servers and a separate private subnet should be used for the internal-only DB servers. Therefore you need 4 subnets: one public and one private subnet in each of two AZs (for redundancy). References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

Your manager has asked you to explain some of the security features available in the AWS cloud. How can you describe the function of Amazon CloudHSM?   

Options are :

  • It provides server-side encryption for S3 objects
  • It can be used to generate, use and manage encryption keys in the cloud (Correct)
  • It is a firewall for use with web applications
  • It is a Public Key Infrastructure (PKI)

Answer : It can be used to generate, use and manage encryption keys in the cloud

Explanation AWS CloudHSM is a cloud-based hardware security module (HSM) that allows you to easily add secure key storage and high-performance crypto operations to your AWS applications. CloudHSM has no upfront costs and provides the ability to start and stop HSMs on-demand, allowing you to provision capacity when and where it is needed quickly and cost-effectively. CloudHSM is a managed service that automates time-consuming administrative tasks, such as hardware provisioning, software patching, high availability, and backups. CloudHSM is a part of a PKI, but a PKI is a broader term that does not specifically describe its function. CloudHSM does not provide server-side encryption for S3 objects; it provides encryption keys that can be used to encrypt data. CloudHSM is not a firewall. References: https://aws.amazon.com/cloudhsm/details/

A financial services company regularly runs an analysis of the day’s transaction costs, execution reporting, and market performance. The company currently uses third-party commercial software for provisioning, managing, monitoring, and scaling the computing jobs which utilize a large fleet of EC2 instances.

The company is seeking to reduce costs and utilize AWS services. Which AWS service could be used in place of the third-party software?

Options are :

  • Amazon Athena
  • AWS Batch (Correct)
  • AWS Systems Manager
  • Amazon Lex

Answer : AWS Batch

Explanation AWS Batch eliminates the need to operate third-party commercial or open source batch processing solutions. There is no batch software or servers to install or manage. AWS Batch manages all the infrastructure for you, avoiding the complexities of provisioning, managing, monitoring, and scaling your batch computing jobs. AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Amazon Lex is a service for building conversational interfaces into any application using voice and text. References: https://aws.amazon.com/batch/

You have just created a new AWS account and selected the Asia Pacific (Sydney) region. Within the default VPC there is a default security group. What settings are configured within this security group by default? (choose 2)

Options are :

  • There is an inbound rule that allows all traffic from any address
  • There is an outbound rule that allows all traffic to all addresses (Correct)
  • There is an outbound rule that allows all traffic to the security group itself
  • There is an inbound rule that allows all traffic from the security group itself (Correct)
  • There is an outbound rule that allows traffic to the VPC router

Answer : There is an outbound rule that allows all traffic to all addresses; There is an inbound rule that allows all traffic from the security group itself

Explanation Default security groups have inbound allow rules (allowing traffic from within the group). Custom security groups do not have inbound allow rules (all inbound traffic is denied by default). All outbound traffic is allowed by default in custom and default security groups. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-vpc/

An application you manage runs a series of EC2 instances with a web application behind an Application Load Balancer (ALB). You are updating the configuration with a health check and need to select the protocol to use. What options are available to you? (choose 2)   

Options are :

  • HTTP (Correct)
  • ICMP
  • SSL
  • TCP
  • HTTPS (Correct)

Answer : HTTP, HTTPS

Explanation The Classic Load Balancer (CLB) supports health checks on HTTP, TCP, HTTPS and SSL. The Application Load Balancer (ALB) only supports health checks on HTTP and HTTPS. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/elastic-load-balancing/
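
With an ALB the health check is configured on the target group; a minimal boto3 sketch, where the target group ARN and path are hypothetical:

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_target_group(
        TargetGroupArn=("arn:aws:elasticloadbalancing:us-east-1:"
                        "123456789012:targetgroup/web/abc123"),
        HealthCheckProtocol="HTTPS",   # ALB allows only HTTP or HTTPS here
        HealthCheckPath="/health",
        HealthyThresholdCount=3,
        UnhealthyThresholdCount=2,
    )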

You have created an Auto Scaling Group (ASG) that has launched several EC2 instances running Linux. The ASG was created using the CLI. You want to ensure that you do not pay for monitoring. What needs to be done to ensure that monitoring is free of charge?   

Options are :

  • The launch configuration will have been created with detailed monitoring enabled which is chargeable. You will need to recreate the launch configuration with basic monitoring enabled (Correct)
  • The launch configuration will have been created with detailed monitoring enabled which is chargeable. You will need to modify the settings on the ASG
  • The launch configuration will have been created with basic monitoring enabled which is free of charge so you do not need to do anything
  • The launch configuration will have been created with detailed monitoring enabled which is chargeable. You will need to change the settings on the launch configuration

Answer : The launch configuration will have been created with detailed monitoring enabled which is chargeable. You will need to recreate the launch configuration with basic monitoring enabled

Explanation Basic monitoring sends EC2 metrics to CloudWatch about ASG instances every 5 minutes. Detailed monitoring can be enabled and sends metrics every 1 minute (chargeable). When the launch configuration is created from the CLI, detailed monitoring of EC2 instances is enabled by default. You cannot edit a launch configuration once defined; if you want to change your launch configuration you have to create a new one, make the required changes, and use that with your Auto Scaling groups. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/
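
A minimal boto3 sketch of recreating the launch configuration with basic monitoring and pointing the ASG at it; the names and AMI ID are hypothetical:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc-basic-monitoring",
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        # False = basic (free, 5-minute) monitoring; the CLI/API default
        # is detailed (chargeable, 1-minute) monitoring.
        InstanceMonitoring={"Enabled": False},
    )

    # Then switch the ASG over to the new launch configuration.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc-basic-monitoring",
    )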

You are planning on using AWS Auto Scaling to ensure that you have the correct number of Amazon EC2 instances available to handle the load for your applications. Which of the following statements are correct about Auto Scaling? (choose 2)

Options are :

  • Auto Scaling relies on Elastic Load Balancing
  • Auto Scaling can span multiple AZs within the same AWS region (Correct)
  • You create collections of EC2 instances, called Launch groups
  • Auto Scaling is charged by the hour when registered
  • Auto Scaling is a region-specific service (Correct)

Answer : Auto Scaling can span multiple AZs within the same AWS region; Auto Scaling is a region-specific service

Explanation Auto Scaling is a region-specific service. Auto Scaling can span multiple AZs within the same AWS region. You create collections of EC2 instances, called Auto Scaling groups. There is no additional cost for Auto Scaling; you just pay for the resources (EC2 instances) provisioned. Auto Scaling does not rely on ELB but can be used with ELB. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/aws-auto-scaling/

You need to launch a series of EC2 instances with multiple attached volumes by modifying the block device mapping. Which block devices can be specified in a block device mapping to be used with an EC2 instance? (choose 2)

Options are :

  • Instance store volume (Correct)
  • EFS volume
  • EBS volume (Correct)
  • S3 bucket
  • Snapshot

Answer : Instance store volume, EBS volume

Explanation Each instance that you launch has an associated root device volume, either an Amazon EBS volume or an instance store volume. You can use block device mapping to specify additional EBS volumes or instance store volumes to attach to an instance when it's launched. You can also attach additional EBS volumes to a running instance. You cannot use a block device mapping to specify a snapshot, EFS volume or S3 bucket. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html https://digitalcloud.training/certification-training/aws-solutions-architect-associate/compute/amazon-ebs/
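
A minimal boto3 sketch of a block device mapping attaching one extra EBS volume and one instance store volume at launch; the AMI ID and device names are hypothetical, and instance store availability depends on the instance type:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5d.large",
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=[
            # Additional EBS volume.
            {"DeviceName": "/dev/sdf",
             "Ebs": {"VolumeSize": 100, "VolumeType": "gp2",
                     "DeleteOnTermination": True}},
            # Instance store (ephemeral) volume.
            {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
        ],
    )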

You need to run a PowerShell script on a fleet of EC2 instances running Microsoft Windows. The instances have already been launched in your VPC. What tool can be run from the AWS Management Console that will run the script on all target EC2 instances?   

Options are :

  • AWS OpsWorks
  • AWS CodeDeploy
  • Run Command (Correct)
  • AWS Config

Answer : Run Command

Explanation Run Command is designed to support a wide range of enterprise scenarios including installing software, running ad hoc scripts or Microsoft PowerShell commands, configuring Windows Update settings, and more. Run Command can be used to implement configuration changes across Windows instances on a consistent yet ad hoc basis, and is accessible from the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the AWS SDKs. AWS OpsWorks provides instances of managed Puppet and Chef. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources; it is not used for ad-hoc script execution. AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. References: https://aws.amazon.com/blogs/aws/new-ec2-run-command-remote-instance-management-at-scale/
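
Besides the console, Run Command can be invoked through the API; a minimal boto3 sketch using the built-in AWS-RunPowerShellScript document, where the tag filter is a hypothetical way of targeting the fleet:

    import boto3

    ssm = boto3.client("ssm")

    response = ssm.send_command(
        # Target every Windows instance carrying a hypothetical fleet tag.
        Targets=[{"Key": "tag:Fleet", "Values": ["windows-web"]}],
        DocumentName="AWS-RunPowerShellScript",
        Parameters={"commands": ["Get-Service | Where-Object Status -eq 'Running'"]},
    )
    print(response["Command"]["CommandId"])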

You have launched an EC2 instance into a VPC. You need to ensure that instances have both a private and public DNS hostname. Assuming you did not change any settings during creation of the VPC, how will DNS hostnames be assigned by default? (choose 2)   

Options are :

  • In a default VPC instances will be assigned a private but not a public DNS hostname
  • In all VPCs instances no DNS hostnames will be assigned
  • In a default VPC instances will be assigned a public and private DNS hostname (Correct)
  • In a non-default VPC instances will be assigned a public and private DNS hostname
  • In a non-default VPC instances will be assigned a private but not a public DNS hostname (Correct)

Answer : In a default VPC instances will be assigned a public and private DNS hostname; In a non-default VPC instances will be assigned a private but not a public DNS hostname

Explanation When you launch an instance into a default VPC, we provide the instance with public and private DNS hostnames that correspond to the public IPv4 and private IPv4 addresses for the instance. When you launch an instance into a nondefault VPC, we provide the instance with a private DNS hostname and we might provide a public DNS hostname, depending on the DNS attributes you specify for the VPC and if your instance has a public IPv4 address. References: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html

A government agency is using CloudFront for a web application that receives personally identifiable information (PII) from citizens.

What feature of CloudFront applies an extra level of encryption at CloudFront edge locations to ensure the PII data is secured end-to-end?   

Options are :

  • Field-level encryption (Correct)
  • Origin access identity
  • Object invalidation
  • RTMP distribution

Answer : Field-level encryption

Explanation Field-level encryption adds an additional layer of security on top of HTTPS that lets you protect specific data so that it is only visible to specific applications. Origin access identity applies to S3 bucket origins, not web servers. Object invalidation is a method to remove objects from the cache. An RTMP distribution is a method of streaming media using Adobe Flash. References: https://digitalcloud.training/certification-training/aws-solutions-architect-associate/networking-and-content-delivery/amazon-cloudfront/ https://aws.amazon.com/about-aws/whats-new/2017/12/introducing-field-level-encryption-on-amazon-cloudfront/
