Questions : AWS Certified Security Specialty

A company hosts a critical web application on the AWS Cloud. This is a key revenue-generating application for the company. The IT Security team is worried about potential DDoS attacks against the website. Senior management has also specified that immediate action must be taken in the event of a potential DDoS attack. What should be done in this regard?

Options are :

  • Consider using the AWS Shield Service
  • Consider using VPC Flow Logs to monitor traffic for a DDoS attack and quickly take action when a potential attack is detected.
  • Consider using the AWS Shield Advanced Service (Correct)
  • Consider using CloudWatch Logs to monitor traffic for a DDoS attack and quickly take action when a potential attack is detected. (Incorrect)

Answer : Consider using the AWS Shield Advanced Service

Explanation: Answer – C. Option A is invalid because the standard AWS Shield service will not help with immediate action against a DDoS attack; that capability is provided by AWS Shield Advanced. Option B is invalid because VPC Flow Logs is a logging service for VPC traffic flow and cannot specifically protect against DDoS attacks. Option D is invalid because CloudWatch Logs is a logging service for AWS services and cannot specifically protect against DDoS attacks. The AWS documentation mentions the following: AWS Shield Advanced provides enhanced protections for your applications running on Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront and Route 53 against larger and more sophisticated attacks. AWS Shield Advanced is available to AWS Business Support and AWS Enterprise Support customers. AWS Shield Advanced protection provides always-on, flow-based monitoring of network traffic and active application monitoring to provide near real-time notifications of DDoS attacks. AWS Shield Advanced also gives customers highly flexible controls over attack mitigations to take actions instantly. Customers can also engage the DDoS Response Team (DRT) 24x7 to manage and mitigate their application layer DDoS attacks. For more information on AWS Shield, please visit the below URL: https://aws.amazon.com/shield/faqs/

You have set up a set of applications across two VPCs and have also set up VPC peering. The applications are still not able to communicate across the peering connection. Which network troubleshooting step should be taken to resolve the issue?

Options are :

  • Ensure the applications are hosted in a public subnet
  • Check to see if the VPC has an Internet gateway attached.
  • Check to see if the VPC has a NAT gateway attached.
  • Check the route tables for the VPCs (Correct)

Answer : Check the route tables for the VPCs

Explanation: Answer – D. After the VPC peering connection is established, you need to ensure that the route tables in both VPCs are updated so that traffic can flow between them. Options A, B and C are invalid because an Internet gateway and public subnets help with Internet access, not with VPC peering. For more information on VPC peering routing, please visit the below URL: https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-routing.html
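
As a rough sketch of the fix, the boto3 call below adds a route for the peer VPC's CIDR to one route table; the route table ID, CIDR block and peering connection ID are placeholders, and an equivalent route is needed on the other side of the peering connection as well.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs - replace with your own route table, peer VPC CIDR
# and peering connection. Repeat in the peer VPC's route table too.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.1.0.0/16",            # CIDR of the peer VPC
    VpcPeeringConnectionId="pcx-0123456789abcdef0",
)
```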

Practice Questions : AWS Certified Solutions Architect Associate

A company requires that data stored in AWS be encrypted at rest. Which of the following approaches achieve this requirement? Select 2 answers from the options given below.

Options are :

  • When storing data in Amazon EBS, use only EBS–optimized Amazon EC2 instances.
  • When storing data in EBS, encrypt the volume by using AWS KMS. (Correct)
  • When storing data in Amazon S3, use object versioning and MFA Delete.
  • When storing data in Amazon EC2 Instance Store, encrypt the volume by using KMS. (Incorrect)
  • When storing data in S3, enable server-side encryption. (Correct)

Answer : When storing data in EBS, encrypt the volume by using AWS KMS. When storing data in S3, enable server-side encryption.

Explanation: Answer – B and E. The AWS documentation mentions the following: To create an encrypted Amazon EBS volume, select the appropriate box in the Amazon EBS section of the Amazon EC2 console. You can use a custom customer master key (CMK) by choosing one from the list that appears below the encryption box. If you do not specify a custom CMK, Amazon EBS uses the AWS-managed CMK for Amazon EBS in your account. If there is no AWS-managed CMK for Amazon EBS in your account, Amazon EBS creates one. Data protection refers to protecting data while in transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon S3. • Use Server-Side Encryption – you request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects. • Use Client-Side Encryption – you encrypt data client-side and upload the encrypted data to Amazon S3; in this case, you manage the encryption process, the encryption keys, and related tools. Option A is invalid because using EBS-optimized EC2 instances alone does not encrypt data at rest. Option C is invalid because versioning and MFA Delete do not encrypt S3 objects at rest. Option D is invalid because instance store volumes are ephemeral and cannot be encrypted with KMS in the way EBS volumes can. For more information on EBS encryption, please visit the below URL: https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html For more information on S3 encryption, please visit the below URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
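
A minimal sketch of the two correct options using boto3 is shown below; the availability zone, key alias and bucket name are placeholders rather than values from the question.

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Option B: create an EBS volume encrypted with a KMS CMK.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    Encrypted=True,
    KmsKeyId="alias/ebs-data-key",   # placeholder CMK alias
)

# Option E: enforce server-side encryption by default on an S3 bucket.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```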

You need to ensure that objects in an S3 bucket are available in another region because of the criticality of the data hosted in the bucket. How can you achieve this in the easiest way possible?

Options are :

  • Enable cross region replication for the bucket (Correct)
  • Write a script to copy the objects to another bucket in the destination region
  • Create an S3 snapshot in the destination region
  • Enable versioning which will copy the objects to the destination region (Incorrect)

Answer : Enable cross region replication for the bucket

Explanation: Answer – A. Option B is partially correct but creates a large maintenance overhead of writing and maintaining a script when the functionality is already available in S3. Option C is invalid because snapshots are not available in S3. Option D is invalid because versioning does not replicate objects to another region. The AWS documentation mentions the following: Cross-region replication is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS Regions. For more information on cross-region replication in the Simple Storage Service, please visit the below URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
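
A rough boto3 sketch of enabling cross-region replication is shown below. The bucket names and IAM role ARN are placeholders, versioning must be enabled on both the source and destination buckets, and the role must already allow S3 to replicate on your behalf.

```python
import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite for replication (needed on the destination too).
s3.put_bucket_versioning(
    Bucket="example-source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate all objects to a bucket in another region.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [
            {
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {"Bucket": "arn:aws:s3:::example-destination-bucket"},
            }
        ],
    },
)
```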

You want to keep a check on the active EBS volumes, active snapshots and Elastic IP addresses you use so that you don’t go beyond the service limits. Which of the following services can help in this regard?

Options are :

  • AWS Cloudwatch
  • AWS EC2
  • AWS Trusted Advisor (Correct)
  • AWS SNS (Incorrect)

Answer : AWS Trusted Advisor

Explanation: Answer – C. Option A is invalid because CloudWatch is a monitoring service; although you can monitor resources with it, it does not check usage against service limits. Option B is invalid because EC2 is the Elastic Compute Cloud service itself. Option D is invalid because SNS is a notification service; it can send notifications but does not check service limits. The Trusted Advisor service limits check monitors usage of resources such as EBS volumes, snapshots and Elastic IP addresses against their limits. For more information on the Trusted Advisor monitoring, please visit the below URL: https://aws.amazon.com/premiumsupport/ta-faqs/

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 3

A company has a legacy application that outputs all logs to a local text file. Logs from all applications running on AWS must be continually monitored for security related messages.  What can be done to allow the company to deploy the legacy application on Amazon EC2 and still meet the monitoring requirement?

Options are :

  • Create a Lambda function that mounts the EBS volume with the logs and scans the logs for security incidents. Trigger the function every 5 minutes with a scheduled Cloudwatch event.
  • Send the local text log files to CloudWatch Logs and configure a CloudWatch metric filter. Trigger CloudWatch alarms based on the metrics. (Correct)
  • Install the Amazon Inspector agent on any EC2 instance running the legacy application. Generate CloudWatch alerts based on any Amazon Inspector findings. (Incorrect)
  • Export the local text log files to CloudTrail. Create a Lambda function that queries the CloudTrail logs for security incidents using Athena.

Answer : Send the local text log files to CloudWatch Logs and configure a CloudWatch metric filter. Trigger CloudWatch alarms based on the metrics.

Explanation: Answer – B. You can send the log files to CloudWatch Logs using the CloudWatch Logs agent; log files can also be sent from on-premises servers. You can then define metric filters that search the logs for specific values and create alarms based on those metrics. Option A is invalid because this would be an overly complex process for this requirement. Option C is invalid because Amazon Inspector cannot be used to monitor application logs for security-related messages. Option D is invalid because log files cannot be exported to AWS CloudTrail. For more information on the CloudWatch Logs agent, please visit the below URL: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
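
Assuming the instance is already shipping the text file to a log group via the CloudWatch Logs agent, the sketch below creates a metric filter and an alarm on it; the log group name, filter pattern, namespace and metric name are all placeholders.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count log events containing a security-related term.
logs.put_metric_filter(
    logGroupName="legacy-app-logs",
    filterName="security-messages",
    filterPattern='"UNAUTHORIZED"',
    metricTransformations=[
        {
            "metricName": "SecurityMessageCount",
            "metricNamespace": "LegacyApp",
            "metricValue": "1",
        }
    ],
)

# Alarm whenever at least one matching message appears in a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="legacy-app-security-messages",
    Namespace="LegacyApp",
    MetricName="SecurityMessageCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
)
```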

Every application in a company’s portfolio has a separate AWS account for development and production. The security team wants to prevent the root user and all IAM users in the production accounts from accessing a specific set of unneeded services. How can they control this functionality?

Options are :

  • Create a Service Control Policy that denies access to the services. Assemble all production accounts in an organizational unit. Apply the policy to that organizational unit. (Correct)
  • Create a Service Control Policy that denies access to the services. Apply the policy to the root account.
  • Create an IAM policy that denies access to the services. Associate the policy with an IAM group and enlist all users and the root users in this group. (Incorrect)
  • Create an IAM policy that denies access to the services. Create a Config Rule that checks that all users have the policy assigned. Trigger a Lambda function that adds the policy when found missing.

Answer : Create a Service Control Policy that denies access to the services. Assemble all production accounts in an organizational unit. Apply the policy to that organizational unit.

Explanation: Answer – A. As an administrator of the master account of an organization, you can restrict which AWS services and individual API actions the users and roles in each member account can access. This restriction even overrides the administrators of member accounts in the organization. When AWS Organizations blocks access to a service or API action for a member account, a user or role in that account can't access any prohibited service or API action, even if an administrator of a member account explicitly grants such permissions in an IAM policy. Organization permissions overrule account permissions. Option B is invalid because applying the policy only to the root (master) account does not restrict the production member accounts; the policy needs to be attached to the organizational unit that contains them. Options C and D are invalid because IAM policies cannot be applied to the root user of an account, so IAM policies alone cannot satisfy the requirement. For more information on attaching an IAM policy to a group, please visit the below URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_attach-policy.html
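
As a hedged illustration only, the sketch below creates a Service Control Policy and attaches it to a production organizational unit; the denied services and the OU ID are placeholders, not part of the question.

```python
import json
import boto3

org = boto3.client("organizations")

# Example SCP denying two placeholder services for every account in the OU.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": ["sagemaker:*", "gamelift:*"], "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="deny-unneeded-services",
    Description="Block unneeded services in production accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",   # placeholder production OU
)
```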

An application running on EC2 instances in a VPC must call an external web service via TLS (port 443). The instances run in public subnets. Which configurations below allow the application to function and minimize the exposure of the instances? Select 2 answers from the options given below

Options are :

  • A network ACL with a rule that allows outgoing traffic on port 443.
  • A network ACL with rules that allow outgoing traffic on port 443 and incoming traffic on ephemeral ports (Correct)
  • A network ACL with rules that allow outgoing traffic on port 443 and incoming traffic on port 443.
  • A security group with a rule that allows outgoing traffic on port 443 (Correct)
  • A security group with rules that allow outgoing traffic on port 443 and incoming traffic on ephemeral ports.
  • A security group with rules that allow outgoing traffic on port 443 and incoming traffic on port 443.

Answer : A network ACL with rules that allow outgoing traffic on port 443 and incoming traffic on ephemeral ports A security group with a rule that allows outgoing traffic on port 443

Explanation: Answer – B and D. Since the traffic flows outbound from the instance to a web service on port 443, the outbound rules on both the network ACL and the security group must allow outbound traffic on port 443. Because network ACLs are stateless, incoming traffic must also be allowed on ephemeral ports so that the return traffic for the connection can reach the instance. Option A is invalid because this rule alone is not enough; you also need to allow incoming traffic on ephemeral ports. Option C is invalid because the return traffic arrives on ephemeral ports, not only on port 443. Options E and F are invalid because they open additional inbound ports on the security group that are not required; security groups are stateful, so return traffic is allowed automatically. For more information on VPC security groups, please visit the below URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
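
The boto3 sketch below mirrors the two correct options; the security group and network ACL IDs are placeholders, and it assumes a dedicated security group with no other rules.

```python
import boto3

ec2 = boto3.client("ec2")

# Security group (stateful): a single outbound rule on port 443 is enough.
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
    ],
)

# Network ACL (stateless): allow outbound 443 and inbound ephemeral ports.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100, Protocol="6", RuleAction="allow", Egress=True,
    CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100, Protocol="6", RuleAction="allow", Egress=False,
    CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
)
```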

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 7

A company is deploying a new web application on AWS. Based on their other web applications, they anticipate being the target of frequent DDoS attacks. Which steps can the company use to protect their application? Select 2 answers from the options given below.

Options are :

  • Associate the EC2 instances with a security group that blocks traffic from blacklisted IP addresses.
  • Use an ELB Application Load Balancer and Auto Scaling group to scale to absorb application layer traffic. (Correct)
  • Use Amazon Inspector on the EC2 instances to examine incoming traffic and discard malicious traffic.
  • Use CloudFront and AWS WAF to prevent malicious traffic from reaching the application (Correct)
  • Enable GuardDuty to block malicious traffic from reaching the application (Incorrect)

Answer : Use an ELB Application Load Balancer and Auto Scaling group to scale to absorb application layer traffic. Use CloudFront and AWS WAF to prevent malicious traffic from reaching the application

Explanation: Answer – B and D. The AWS DDoS mitigation guidance recommends combining services such as Amazon CloudFront, AWS WAF, ELB and Auto Scaling to absorb and filter attack traffic. Option A is invalid because security groups only support allow rules, so they cannot block traffic from blacklisted IP addresses. Option C is invalid because Amazon Inspector assesses instances for vulnerabilities; it cannot be used to examine or discard traffic. Option E is invalid because GuardDuty detects threats but does not block malicious traffic from reaching the application. For more information on DDoS mitigation from AWS, please visit the below URL: https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/

You are working in the media industry and have created a web application where users can upload photos they create to your website. This web application must be able to call the S3 API in order to function. Where should you store your API credentials whilst maintaining the maximum level of security?

Options are :

  • Save the API credentials to your PHP files.
  • Don’t save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it. (Correct)
  • Save your API credentials in a public Github repository.
  • Pass API credentials to the instance using instance userdata. (Incorrect)

Answer : Don’t save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it.

Explanation: Answer – B. Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting your credentials from other users. However, it's challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Options A, C and D are invalid because embedding or passing long-term AWS credentials to an application is not a recommended practice for secure access. For more information on IAM roles, please visit the below URL: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
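
As a minimal illustration (the bucket and file names are placeholders), code running on an instance with an IAM role attached can simply create a boto3 client; the SDK retrieves temporary credentials from the instance metadata service, so no keys appear in code, configuration files or user data.

```python
import boto3

# No access keys anywhere: boto3 uses the instance profile's temporary
# credentials automatically when this runs on the EC2 instance.
s3 = boto3.client("s3")
s3.upload_file("photo.jpg", "example-photo-bucket", "uploads/photo.jpg")
```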

A company has a set of resources defined in AWS. It is mandated that all API calls to the resources be monitored. Also all API calls must be stored for lookup purposes. Any log data greater than 6 months must be archived. Which of the following meets these requirements? Choose 2 answers from the options given below. Each answer forms part of the solution.

Options are :

  • Enable CloudTrail logging in all accounts into S3 buckets (Correct)
  • Enable CloudTrail logging in all accounts into Amazon Glacier
  • Ensure a lifecycle policy is defined on the S3 bucket to move the data to EBS volumes after 6 months.
  • Ensure a lifecycle policy is defined on the S3 bucket to move the data to Amazon Glacier after 6 months. (Correct)

Answer : Enable CloudTrail logging in all accounts into S3 buckets Ensure a lifecycle policy is defined on the S3 bucket to move the data to Amazon Glacier after 6 months.

Explanation: Answer – A and D. CloudTrail publishes the trail of API logs to an S3 bucket. Option B is invalid because CloudTrail cannot deliver logs directly to Glacier. Option C is invalid because lifecycle policies cannot be used to move data to EBS volumes. For more information on CloudTrail logging, please visit the below URL: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-find-log-files.html You can then use lifecycle policies to transition the data to Amazon Glacier after 6 months. For more information on S3 lifecycle policies, please visit the below URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
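
A rough sketch of both pieces is shown below; the trail and bucket names are placeholders, and it assumes the bucket already has a bucket policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
s3 = boto3.client("s3")

# Deliver API activity logs to an S3 bucket.
cloudtrail.create_trail(
    Name="org-api-trail",
    S3BucketName="example-cloudtrail-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-api-trail")

# Archive log objects to Glacier roughly 6 months after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-cloudtrail-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-6-months",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```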

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 1

Your company has a set of 1000 EC2 Instances defined in an AWS Account. They want to effectively automate several administrative tasks on these instances. Which of the following would be an effective way to achieve this?

Options are :

  • Use the AWS Systems Manager Parameter Store
  • Use the AWS Systems Manager Run Command (Correct)
  • Use the AWS Inspector
  • Use AWS Config (Incorrect)

Answer : Use the AWS Systems Manager Run Command

Explanation: Answer – B. The AWS documentation mentions the following: AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost. Option A is invalid because Parameter Store is used to store configuration data and secrets, not to run tasks. Option C is invalid because Amazon Inspector is used to scan EC2 instances for vulnerabilities. Option D is invalid because AWS Config is used to track configuration changes. For more information on executing remote commands, please visit the below URL: https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html
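
As an example only, the sketch below runs a shell command against every managed instance that carries a given tag; the tag and command are placeholders, and the instances must already have the SSM agent and an appropriate instance profile.

```python
import boto3

ssm = boto3.client("ssm")

# Run an administrative command across all instances tagged as Production.
ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["Production"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["yum update -y"]},
)
```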

You want to launch an EC2 Instance with your own key pair in AWS. How can you achieve this? Choose 2 answers from the options given below. Each option forms part of the solution.

Options are :

  • Use a third party tool to create the Key pair (Correct)
  • Create a new key pair using the AWS CLI
  • Import the public key pair into EC2 (Correct)
  • Import the private key pair into EC2 (Incorrect)

Answer : Use a third party tool to create the Key pair Import the public key pair into EC2

Explanation: Answer – A and C. This is given in the AWS documentation. Option B is invalid because a key pair created with the AWS CLI is generated by AWS rather than being your own key pair. Option D is invalid because you import only the public key into EC2; the private key remains with you. For more information on EC2 key pairs, please visit the below URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
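
A minimal sketch of importing a key generated with a third-party tool (for example ssh-keygen) is shown below; the key name and file path are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Import only the PUBLIC half of the key pair; the private key never leaves
# your machine.
with open("my-key.pub", "rb") as f:
    ec2.import_key_pair(KeyName="my-imported-key", PublicKeyMaterial=f.read())
```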

You have a set of keys defined using the AWS KMS service. You want to stop using a couple of the keys, but are not sure which services are currently using them. Which of the following would be a safe way to stop the keys from being used further?

Options are :

  • Delete the keys since anyway there is a 7 day waiting period before deletion
  • Disable the keys (Correct)
  • Set an alias for the key
  • Change the key material for the key (Incorrect)

Answer : Disable the keys

Explanation: Answer – B. Option A is invalid because deletion is destructive and irreversible once completed; if a service is still using a key, the data encrypted under it becomes unrecoverable. Options C and D are invalid because neither aliases nor new key material address whether the keys are still being used. The AWS documentation mentions the following: Deleting a customer master key (CMK) in AWS Key Management Service (AWS KMS) is destructive and potentially dangerous. It deletes the key material and all metadata associated with the CMK, and is irreversible. After a CMK is deleted you can no longer decrypt the data that was encrypted under that CMK, which means that data becomes unrecoverable. You should delete a CMK only when you are sure that you don't need to use it anymore. If you are not sure, consider disabling the CMK instead of deleting it. You can re-enable a disabled CMK if you need to use it again later, but you cannot recover a deleted CMK. For more information on deleting keys from KMS, please visit the below URL: https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html
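
For illustration, disabling and re-enabling a key is a single call each way; the key ID below is a placeholder.

```python
import boto3

kms = boto3.client("kms")

# Disabling is reversible; scheduled deletion eventually is not.
kms.disable_key(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab")

# If a service turns out to still need the key, simply re-enable it.
kms.enable_key(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab")
```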

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 16

You are building a large-scale confidential documentation web server on AWS and all of the documentation for it will be stored on S3. One of the requirements is that it cannot be publicly accessible from S3 directly, and you will need to use CloudFront to accomplish this. Which of the methods listed below would satisfy the requirements as outlined? Choose an answer from the options below

Options are :

  • Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
  • Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI. (Correct)
  • Create individual policies for each bucket the documents are stored in and in that policy grant access to only CloudFront. (Incorrect)
  • Create an S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

Answer : Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

Explanation: Answer – B. If you want to use CloudFront signed URLs or signed cookies to provide access to objects in your Amazon S3 bucket, you probably also want to prevent users from accessing your Amazon S3 objects using Amazon S3 URLs. If users access your objects directly in Amazon S3, they bypass the controls provided by CloudFront signed URLs or signed cookies, for example, control over the date and time that a user can no longer access your content and control over which IP addresses can be used to access content. In addition, if users access objects both through CloudFront and directly by using Amazon S3 URLs, CloudFront access logs are less useful because they're incomplete. Option A is invalid because you need to create an Origin Access Identity for CloudFront, not an IAM user. Options C and D are invalid because a bucket policy alone, without an OAI as the principal, cannot restrict access to CloudFront. For more information on Origin Access Identity please see the below link: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

Your company makes use of S3 buckets for storing data. There is a company policy that all services should have logging enabled. How can you ensure that logging is always enabled for created S3 buckets in the AWS Account?

Options are :

  • Use AWS Inspector to inspect all S3 buckets and enable logging for those where it is not enabled
  • Use AWS Config Rules to check whether logging is enabled for buckets (Correct)
  • Use AWS Cloudwatch metrics to check whether logging is enabled for buckets
  • Use AWS Cloudwatch logs to check whether logging is enabled for buckets (Incorrect)

Answer : Use AWS Config Rules to check whether logging is enabled for buckets

Explanation: Answer – B. This is available as an AWS managed rule in AWS Config. Option A is invalid because Amazon Inspector cannot be used to scan S3 buckets. Options C and D are invalid because CloudWatch metrics and CloudWatch Logs cannot be used to check whether logging is enabled on buckets. For more information on Config rules please see the below link: https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html
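
As a sketch, the call below enables the AWS managed rule that flags buckets without server access logging; it assumes an AWS Config recorder is already set up in the account, and the rule name is a placeholder.

```python
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-logging-enabled",
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_LOGGING_ENABLED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)
```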

A security engineer must ensure that all infrastructure launched in the company AWS account be monitored for deviation from compliance rules, specifically that all EC2 instances are launched from one of a specified list of AMIs and that all attached EBS volumes are encrypted. Infrastructure not in compliance should be terminated. What combination of steps should the Engineer implement? Select 2 answers from the options given below.

Options are :

  • Set up a CloudWatch event based on Trusted Advisor metrics
  • Trigger a Lambda function from a scheduled CloudWatch event that terminates non-compliant infrastructure. (Correct)
  • Set up a CloudWatch event based on Amazon inspector findings
  • Monitor compliance with AWS Config Rules triggered by configuration changes (Correct)
  • Trigger a CLI command from a CloudWatch event that terminates the infrastructure

Answer : Trigger a Lambda function from a scheduled CloudWatch event that terminates non-compliant infrastructure. Monitor compliance with AWS Config Rules triggered by configuration changes

Explanation: Answer – B and D. You can use AWS Config Rules to monitor for such compliance deviations. Option A is invalid because you cannot create CloudWatch events based on Trusted Advisor checks for this purpose. Option C is invalid because Amazon Inspector cannot be used to check whether instances are launched from a specific AMI. Option E is invalid because triggering a CLI command is not the preferred option; Lambda functions should be used for automation. For more information on Config rules please see the below link: https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html These events can then trigger a Lambda function to terminate non-compliant instances. For more information on CloudWatch events please see the below link: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 8

A company has external vendors that must deliver files to the company. These vendors have cross-account access that gives them permission to upload objects to one of the company’s S3 buckets. 

What combination of steps must the vendor follow to successfully deliver a file to the company? Select 2 answers from the options given below

Options are :

  • Attach an IAM role to the bucket that grants the bucket owner full permissions to the object
  • Add a grant to the object’s ACL giving full permissions to bucket owner. (Correct)
  • Encrypt the object with a KMS key controlled by the company.
  • Add a bucket policy to the bucket that grants the bucket owner full permissions to the object (Incorrect)
  • Upload the file to the company’s S3 bucket as an object (Correct)

Answer : Add a grant to the object’s ACL giving full permissions to bucket owner. Upload the file to the company’s S3 bucket as an object

Explanation: Answer – B and E. This scenario is given in the AWS documentation. Because the uploaded object is owned by the vendor's account, the vendor must grant the bucket owner full control of the object through the object's ACL. Option A is invalid because IAM roles cannot be attached to S3 buckets. Option D is invalid because the bucket policy is managed by the bucket owner, not the vendor. Option C is not required since encryption is not part of the requirement. For more information on this scenario please see the below link: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example3.html
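
Run with the vendor account's credentials, the sketch below performs both steps in one call by using the bucket-owner-full-control canned ACL; the bucket name, key and file are placeholders.

```python
import boto3

s3 = boto3.client("s3")  # assumed to run under the vendor account's credentials

# Upload the file and, in the same request, grant the bucket owner full
# control of the resulting object.
with open("report.csv", "rb") as data:
    s3.put_object(
        Bucket="company-delivery-bucket",
        Key="deliveries/report.csv",
        Body=data,
        ACL="bucket-owner-full-control",
    )
```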

An application running on EC2 instances in a VPC must access sensitive data in the data center. The access must be encrypted in transit and have consistent low latency. Which hybrid architecture will meet these requirements?

Options are :

  • Expose the data with a public HTTPS endpoint.
  • A VPN between the VPC and the data center over a Direct Connect connection (Correct)
  • A VPN between the VPC and the data center.
  • A Direct Connect connection between the VPC and data center. (Incorrect)

Answer : A VPN between the VPC and the data center over a Direct Connect connection

Explanation: Answer – B. Since consistent low latency is required, you should use Direct Connect; for encryption in transit, you can run a VPN over the Direct Connect connection. Option A is invalid because exposing a public HTTPS endpoint does not provide a private, low-latency path between the VPC and the data center. Option C is invalid because a VPN over the internet cannot guarantee consistent low latency. Option D is invalid because Direct Connect alone does not encrypt the traffic. For more information on the connection options please see the below link: https://aws.amazon.com/answers/networking/aws-multiple-vpc-vpn-connection-sharing/

A company has several Customer Master Keys (CMK), some of which have imported key material. Each CMK must be rotated annually.  What two methods can the security team use to rotate each key? Select 2 answers from the options given below

Options are :

  • Enable automatic key rotation for a CMK (Correct)
  • Import new key material to an existing CMK
  • Use the CLI or console to explicitly rotate an existing CMK
  • Import new key material to a new CMK; Point the key alias to the new CMK. (Correct)
  • Delete an existing CMK and a new default CMK will be created. (Incorrect)

Answer : Enable automatic key rotation for a CMK Import new key material to a new CMK; Point the key alias to the new CMK.

Explanation: Answer – A and D. The AWS documentation mentions the following: When you enable automatic key rotation for a customer managed CMK, AWS KMS generates new cryptographic material for the CMK every year. AWS KMS also saves the CMK's older cryptographic material so it can be used to decrypt data that it encrypted. Because a new CMK is a different resource from the current CMK, it has a different key ID and ARN. When you change CMKs, you need to update references to the CMK ID or ARN in your applications. Aliases, which associate a friendly name with a CMK, make this process easier: use an alias to refer to a CMK in your applications, and when you want to change the CMK that the application uses, change the target CMK of the alias. Option B is invalid because you cannot import different key material into an existing CMK; rotating imported material requires a new CMK and an alias update. Option C is invalid because there is no CLI or console operation that explicitly rotates an existing CMK. Option E is invalid because deleting existing keys does not guarantee the creation of a new default CMK. For more information on key rotation please see the below link: https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
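
The sketch below illustrates both approaches: automatic rotation for a CMK with KMS-generated material, and creating a new external-origin CMK plus an alias update for imported material (the actual material import step is not shown). The key IDs and alias are placeholders.

```python
import boto3

kms = boto3.client("kms")

# Option A: CMKs with KMS-generated material can rotate automatically yearly.
kms.enable_key_rotation(KeyId="1111aaaa-11aa-11aa-11aa-111111111111")

# Option D: for imported material, create a new CMK, import fresh material
# (not shown), then repoint the alias your applications use.
new_key = kms.create_key(Origin="EXTERNAL", Description="Annual manual rotation")
kms.update_alias(
    AliasName="alias/imported-data-key",
    TargetKeyId=new_key["KeyMetadata"]["KeyId"],
)
```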

AWS SAP-C00 Certified Solution Architect Professional Exam Set 5

A new application will be deployed on EC2 instances in private subnets. The application will transfer sensitive data to and from an S3 bucket. Compliance requirements state that the data must not traverse the public internet. 

Which solution meets the compliance requirement?

Options are :

  • Access the S3 bucket through a proxy server
  • Access the S3 bucket through a NAT gateway.
  • Access the S3 bucket through a VPC endpoint for S3 (Correct)
  • Access the S3 bucket through the SSL protected S3 endpoint (Incorrect)

Answer : Access the S3 bucket through a VPC endpoint for S3

Explanation: Answer – C. The AWS documentation mentions the following: A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. Option A is invalid because a proxy server still sends the traffic out to the public S3 endpoint. Options B and D are invalid because traffic through a NAT gateway or the public SSL-protected S3 endpoint still traverses the public internet. For more information on VPC endpoints please see the below link: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
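
For illustration, creating a gateway endpoint for S3 is a single call; the VPC ID, route table ID and region in the service name below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: the private subnets' route table gets a route to
# S3 that never leaves the Amazon network.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```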

Your current setup in AWS consists of the following architecture: two public subnets, one hosting the web servers accessed by users across the internet and the other hosting the database server. Which of the following changes to the architecture would add a better security boundary to the resources hosted in your setup?

Options are :

  • Consider moving the web server to a private subnet
  • Consider moving the database server to a private subnet (Correct)
  • Consider moving both the web and database server to a private subnet
  • Consider creating a private subnet and adding a NAT instance to that subnet (Incorrect)

Answer : Consider moving the database server to a private subnet

Explanation: Answer – B. The ideal setup is to host the web server in the public subnet so that it can be accessed by users on the internet, and to host the database server in a private subnet. The AWS documentation describes this public/private subnet scenario. Options A and C are invalid because if you move the web server to a private subnet, it can no longer be accessed by users on the internet. Option D is invalid because NAT instances should be placed in the public subnet. For more information on public and private subnets in AWS, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

Your company has confidential documents stored in the Simple Storage Service. Due to compliance requirements, you have to ensure that the data in the S3 bucket is available in a different geographical location. As an architect, what change would you make to comply with this requirement?

Options are :

  • Apply Multi-AZ for the underlying S3 bucket
  • Copy the data to an EBS Volume in another Region
  • Create a snapshot of the S3 bucket and copy it to another region
  • Enable Cross region replication for the S3 bucket (Correct)

Answer : Enable Cross region replication for the S3 bucket

Explanation: Answer – D. This is mentioned clearly as a use case for S3 cross-region replication. You might configure cross-region replication on a bucket for various reasons, including the following: Compliance requirements – although, by default, Amazon S3 stores your data across multiple geographically distant Availability Zones, compliance requirements might dictate that you store data at even greater distances. Cross-region replication allows you to replicate data between distant AWS Regions to satisfy these compliance requirements. Option A is invalid because Multi-AZ is a database feature and does not apply to S3 buckets. Option B is invalid because copying the data to an EBS volume is not a recommended practice. Option C is invalid because snapshots cannot be created for S3 buckets. For more information on S3 cross-region replication, please visit the following URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 18

When managing permissions for the API gateway, what can be used to ensure that the right level of permissions are given to developers, IT admins and users? These permissions should be easily managed.

Options are :

  • Use the secure token service to manage the permissions for the different users
  • Use IAM Policies to create different policies for the different types of users. (Correct)
  • Use the AWS Config tool to manage the permissions for the different users
  • Use IAM Access Keys to create sets of keys for the different types of users. (Incorrect)

Answer : Use IAM Policies to create different policies for the different types of users.

Explanation: Answer – B. The AWS documentation mentions the following: You control access to Amazon API Gateway with IAM permissions by controlling access to the following two API Gateway component processes. To create, deploy, and manage an API in API Gateway, you must grant the API developer permissions to perform the required actions supported by the API management component of API Gateway. To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform required IAM actions supported by the API execution component of API Gateway. Options A, C and D are invalid because these cannot be used to manage this kind of access to API Gateway; it needs to be done via IAM policies. For more information on permissions with the API Gateway, please visit the following URL: https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html

A company hosts data in S3. There is a requirement to control access to the S3 buckets. Which are the 2 ways in which this can be achieved?

Options are :

  • Use Bucket policies (Correct)
  • Use the Secure Token service
  • Use IAM user policies (Correct)
  • Use AWS Access Keys (Incorrect)

Answer : Use Bucket policies Use IAM user policies

Explanation: Answer – A and C. The AWS documentation mentions the following: Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources. Options B and D are invalid because these cannot be used to control access to S3 buckets. For more information on S3 access control, please refer to the below link: https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html

You are responsible for deploying a critical application onto AWS. Part of the requirements for this application is to ensure that the controls set for this application meet PCI compliance. There is also a need to monitor web application logs to identify any malicious activity. Which of the following services can be used to fulfil this requirement? Choose 2 answers from the options given below.

Options are :

  • Amazon CloudWatch Logs (Correct)
  • Amazon VPC Flow Logs
  • AWS Config
  • Amazon CloudTrail (Correct)

Answer : Amazon CloudWatch Logs Amazon CloudTrail

Explanation: Answer – A and D. The AWS documentation mentions the following about these services: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Option B is incorrect because VPC Flow Logs only capture network flow to and from instances in a VPC. Option C is incorrect because AWS Config only checks for configuration changes. For more information on CloudTrail, please refer to the below URL: https://aws.amazon.com/cloudtrail/ You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Amazon Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs. For more information on CloudWatch Logs, please refer to the below URL: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 12

A company wishes to enable Single Sign On (SSO) so its employees can login to the management console using their corporate directory identity. Which steps below are required as part of the process? Select 2 answers from the options given below.

Options are :

  • Create a Direct Connect connection between the corporate network and the AWS region with the company’s infrastructure. (Correct)
  • Create IAM policies that can be mapped to group memberships in the corporate directory.
  • Create a Lambda function to assign IAM roles to the temporary security tokens provided to the users.
  • Create IAM users that can be mapped to the employees’ corporate identities (Incorrect)
  • Create an IAM role that establishes a trust relationship between IAM and the corporate directory identity provider (IdP) (Correct)

Answer : Create a Direct Connect connection between the corporate network and the AWS region with the company’s infrastructure. Create an IAM role that establishes a trust relationship between IAM and the corporate directory identity provider (IdP)

Explanation: Answer – A and E. Create a Direct Connect connection so that corporate users can access the AWS account, and create an IAM role that trusts the corporate directory identity provider. Option B is incorrect because it is IAM roles, not IAM policies, that are mapped to group memberships in the corporate directory. Option C is incorrect because Lambda functions are not used to assign roles to temporary security tokens. Option D is incorrect because with federation you do not create individual IAM users mapped to employees’ corporate identities. For more information on Direct Connect, please refer to the below URL: https://aws.amazon.com/directconnect/ From the AWS documentation, for federated access you also need to ensure that the right policy permissions are in place. For more information on SAML federation, please refer to the below URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html

A company continually generates sensitive records that it stores in an S3 bucket. All objects in the bucket are encrypted with SSE-KMS using one of the company’s CMKs. Company compliance policies require that no more than one month of data be encrypted using the same encryption key.

  What solution below will meet the company’s requirements?

Options are :

  • Trigger a Lambda function with a monthly CloudWatch event that creates a new CMK and updates the S3 bucket to use the new CMK. (Correct)
  • Configure the CMK to rotate the key material every month.
  • Trigger a Lambda function with a monthly CloudWatch event that creates a new CMK, updates the S3 bucket to use the new CMK, and deletes the old CMK. (Incorrect)
  • Trigger a Lambda function with a monthly CloudWatch event that rotates the key material in the CMK.

Answer : Trigger a Lambda function with a monthly CloudWatch event that creates a new CMK and updates the S3 bucket to use the new CMK.

Explanation: Answer – A. You can use a Lambda function to create a new CMK and then update the S3 bucket to use the new key. Remember not to delete the old keys, or you will no longer be able to decrypt the objects that were encrypted under them. Option B is incorrect because AWS KMS cannot automatically rotate keys on a monthly basis. Option C is incorrect because deleting the old key means you can no longer access the older objects. Option D is incorrect because rotating the key material of an existing CMK on demand is not possible. For more information on AWS KMS keys, please refer to the below URL: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
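
A rough sketch of the Lambda handler is shown below; the bucket name is a placeholder, and it assumes the function's execution role is allowed to create KMS keys and update the bucket's encryption configuration.

```python
import boto3


def lambda_handler(event, context):
    """Invoked monthly by a scheduled CloudWatch event."""
    kms = boto3.client("kms")
    s3 = boto3.client("s3")

    # Create a fresh CMK; keep the old ones so existing objects stay readable.
    key_id = kms.create_key(Description="Monthly S3 encryption key")["KeyMetadata"]["KeyId"]

    # Point the bucket's default encryption at the new CMK.
    s3.put_bucket_encryption(
        Bucket="sensitive-records-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": key_id,
                    }
                }
            ]
        },
    )
```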

Company policy requires that all insecure server protocols, such as FTP, Telnet and HTTP, be disabled on all servers. The security team would like to regularly check all servers to ensure compliance with this requirement by using a scheduled CloudWatch event to trigger a review of the current infrastructure. What process will check compliance of the company’s EC2 instances?

Options are :

  • Trigger an AWS Config Rules evaluation of the restricted-common-ports rule against every EC2 instance. (Correct)
  • Query the Trusted Advisor API for all best practice security checks and check for “action recommended” status.
  • Enable a GuardDuty threat detection analysis targeting the port configuration on every EC2 instance. (Incorrect)
  • Run an Amazon Inspector assessment using the Runtime Behavior Analysis rules package against every EC2 instance.

Answer : Trigger an AWS Config Rules evaluation of the restricted-common-ports rule against every EC2 instance.

Explanation: Answer – A. Option B is incorrect because Trusted Advisor checks do not report on the server protocols running on EC2 instances. Option C is incorrect because GuardDuty is used to detect threats, not to check compliance with security protocols. Option D is incorrect because Amazon Inspector is used to check for vulnerabilities only. One of the built-in AWS Config rules is designed specifically for this purpose. For more information on AWS Config managed rules, please refer to the below URL: https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 12

A web application runs in a VPC on EC2 instances behind an ELB Application Load Balancer. The application stores data in an RDS MySQL DB instance. A Linux bastion host is used to apply schema updates to the database – administrators connect to the host via SSH from a corporate workstation. The following security groups are applied to the infrastructure-

  • sgLB – associated with the ELB
  • sgWeb – associated with the EC2 instances.
  • sgDB – associated with the database
  • sgBastion – associated with the bastion host

Which security group configuration will allow the application to be secure and functional?

Options are :

  • sgLB: allow ports 80 and 443 from 0.0.0.0/0; sgWeb: allow ports 80 and 443 from 0.0.0.0/0; sgDB: allow port 3306 from sgWeb and sgBastion; sgBastion: allow port 22 from the corporate IP address range (Incorrect)
  • sgLB: allow ports 80 and 443 from 0.0.0.0/0; sgWeb: allow ports 80 and 443 from sgLB; sgDB: allow port 3306 from sgWeb and sgLB; sgBastion: allow port 22 from the VPC IP address range
  • sgLB: allow ports 80 and 443 from 0.0.0.0/0; sgWeb: allow ports 80 and 443 from sgLB; sgDB: allow port 3306 from sgWeb and sgBastion; sgBastion: allow port 22 from the VPC IP address range
  • sgLB: allow ports 80 and 443 from 0.0.0.0/0; sgWeb: allow ports 80 and 443 from sgLB; sgDB: allow port 3306 from sgWeb and sgBastion; sgBastion: allow port 22 from the corporate IP address range (Correct)

Answer : sgLB: allow ports 80 and 443 from 0.0.0.0/0; sgWeb: allow ports 80 and 443 from sgLB; sgDB: allow port 3306 from sgWeb and sgBastion; sgBastion: allow port 22 from the corporate IP address range

Explanation: Answer – D. The load balancer should accept traffic on ports 80 and 443 from 0.0.0.0/0, the backend EC2 instances should accept traffic only from the load balancer, the database should allow traffic only from the web servers and the bastion host, and the bastion host should only allow SSH traffic from the corporate IP address range. Option A is incorrect because the web security group should only allow traffic from the load balancer. Options B and C are incorrect because the bastion host should only allow traffic from the corporate IP address range, not from the whole VPC. For more information on AWS security groups, please refer to the below URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

A company developed an incident response plan 18 months ago. The plan is exercised on a regular basis, but no changes have been made to it since its creation. Which of the following is a correct statement with regard to the plan?

Options are :

  • It places too much emphasis on already implemented security controls.
  • The response plan is not implemented on a regular basis
  • The response plan does not cater to new services (Correct)
  • The response plan is complete in its entirety (Incorrect)

Answer : The response plan does not cater to new services

Explanation: Answer – C. The issue here is that the incident response plan does not cater to newly created services. AWS keeps changing and adding new services, so the response plan must be updated to cover them. Options A and B are invalid because we do not know either of these for a fact. Option D is invalid because the response plan is not complete in its entirety, since it does not cater to new AWS services. For more information on incident response plans please visit the following URL: https://aws.amazon.com/blogs/publicsector/building-a-cloud-specific-incident-response-plan/

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 10

Your application currently uses customer keys which are generated via AWS KMS in the US east region. You now want to use the same set of keys from the EU-Central region. How can this be accomplished?

Options are :

  • Export the key from the US east region and import them into the EU-Central region
  • Use key rotation and rotate the existing keys to the EU-Central region
  • Use the backing key from the US east region and use it in the EU-Central region
  • This is not possible since keys from KMS are region specific (Correct)

Answer : This is not possible since keys from KMS are region specific

Explanation: Answer – D. Option A is invalid because keys cannot be exported and imported across regions. Option B is invalid because key rotation cannot be used to move keys to another region. Option C is invalid because the backing key cannot be used outside the region in which it was created. This is mentioned in the AWS documentation. For more information on KMS please visit the following URL: https://aws.amazon.com/kms/faqs/

You have a requirement to conduct penetration testing on the AWS Cloud for a couple of EC2 Instances. How could you go about doing this? Choose 2 right answers from the options given below.

Options are :

  • Get prior approval from AWS for conducting the test (Correct)
  • Use a pre-approved penetration testing tool. (Correct)
  • Work with an AWS partner and no need for prior approval request from AWS
  • Choose the right AMI for the underlying instance type (Incorrect)

Answer : Get prior approval from AWS for conducting the test Use a pre-approved penetration testing tool.

Explanation: Answer – A and B. You can use a pre-approved solution from the AWS Marketplace, and to date the AWS documentation still states that you must get prior approval before conducting a penetration test against EC2 instances in the AWS Cloud. Options C and D are invalid because you must obtain prior approval first. For more information on penetration testing please visit the following URL: https://aws.amazon.com/kms/faqs/

You currently have an S3 bucket hosted in an AWS account. It holds information that needs to be accessed by a partner account. Which is the MOST secure way to allow the partner account to access the S3 bucket in your account? Choose 3 answers from the options given below.

Options are :

  • Ensure an IAM role is created which can be assumed by the partner account. (Correct)
  • Ensure an IAM user is created which can be assumed by the partner account.
  • Ensure the partner uses an external id when making the request (Correct)
  • Provide the ARN for the role to the partner account (Correct)
  • Provide the Account Id to the partner account
  • Provide access keys for your account to the partner account

Answer : Ensure an IAM role is created which can be assumed by the partner account. Ensure the partner uses an external id when making the request Provide the ARN for the role to the partner account

Explanation: Answer – A, C and D. Option B is invalid because roles, not IAM users, are assumed for cross-account access. Option E is invalid because providing the account ID alone does not grant the partner access. Option F is invalid because you should never give your account's access keys to a partner. The AWS documentation describes this scenario, in which an IAM role and an external ID are used to grant access to resources in another AWS account. For more information on creating roles with external IDs please visit the following URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
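
From the partner's side, the flow looks roughly like the sketch below; the role ARN, external ID and bucket name are placeholders supplied by the bucket owner.

```python
import boto3

# Runs with the partner account's credentials.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/partner-s3-access",
    RoleSessionName="partner-session",
    ExternalId="example-external-id",
)["Credentials"]

# Use the temporary credentials from the assumed role to reach the bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="shared-bucket").get("KeyCount"))
```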

Certification : Get AWS Certified Solutions Architect in 1 Day (2018 Update) Set 13

Your company has created a set of keys using the AWS KMS service. They need to ensure that each key is used only for certain services. For example, they want one key to be used only by the S3 service. How can this be achieved?

Options are :

  • Create an IAM policy that allows the key to be accessed by only the S3 service.
  • Create a bucket policy that allows the key to be accessed by only the S3 service.
  • Use the kms:ViaService condition in the Key policy (Correct)
  • Define an IAM user , allocate the key and then assign the permissions to the required service

Answer : Use the kms:ViaService condition in the Key policy

Explanation: Answer – C. Options A and B are invalid because mapping keys to services cannot be done via an IAM policy or a bucket policy. Option D is invalid because keys allocated to IAM users cannot be assigned to services. This is mentioned in the AWS documentation: The kms:ViaService condition key limits use of a customer-managed CMK to requests from particular AWS services. (AWS managed CMKs in your account, such as aws/s3, are always restricted to the AWS service that created them.) For example, you can use kms:ViaService to allow a user to use a customer managed CMK only for requests that Amazon S3 makes on their behalf. Or you can use it to deny the user permission to a CMK when a request on their behalf comes from AWS Lambda. For more information on key policies for KMS please visit the following URL: https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html
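
As an illustration, a key policy statement using kms:ViaService might look like the sketch below; the account ID, user name and region are placeholders, and the statement would be added to the key's existing key policy alongside its administrative statements.

```python
# One statement to merge into the CMK's key policy: the principal may use the
# key only when the request comes via Amazon S3 in the given region.
via_service_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:user/app-user"},
    "Action": ["kms:GenerateDataKey*", "kms:Decrypt"],
    "Resource": "*",
    "Condition": {
        "StringEquals": {"kms:ViaService": "s3.us-east-1.amazonaws.com"}
    },
}
```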

You have a set of customer keys created using the AWS KMS service. These keys have been in use for around 6 months. You are now trying to use new KMS features on the existing set of keys but are not able to do so. What could be the reason for this?

Options are :

  • You have not explicitly given access via the key policy (Correct)
  • You have not explicitly given access via the IAM policy
  • You have not given access via the IAM roles
  • You have not explicitly given access via IAM users

Answer : You have not explicitly given access via the key policy

Explanation: Answer – A. By default, keys created in KMS are created with the default key policy. When new features are added to KMS, you need to explicitly update the default key policy for these existing keys. Options B, C and D are invalid because the key policy is the main entity used to provide access to the keys. For more information on upgrading key policies please visit the following URL: https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-upgrading.html
