Mock Exam : AWS Certified Security Specialty

A company has a set of EC2 Instances hosted in AWS. The EC2 Instances have EBS volumes which are used to store critical information. There is a business continuity requirement to ensure high availability for the EBS volumes. How can you achieve this?

Options are :

  • Use lifecycle policies for the EBS volumes
  • Use EBS Snapshots (Correct)
  • Use EBS volume replication
  • Use EBS volume encryption

Answer : Use EBS Snapshots

Explanation Answer – B Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of the normal operation of the service, at no additional charge. However, this replication occurs within the same Availability Zone, not across multiple zones; it is therefore highly recommended that you take regular snapshots to Amazon S3 for long-term data durability. Option A is invalid because lifecycle policies do not provide high availability for EBS volumes. Option C is invalid because there is no EBS volume replication feature. Option D is invalid because EBS volume encryption will not ensure business continuity. For information on security for Compute Resources, please visit the below URL: https://d1.awsstatic.com/whitepapers/Security/Security_Compute_Services_Whitepaper.pdf

The CFO of a company wants to allow one of his employees to view only the AWS usage report page. Which of the below mentioned IAM policy statements allows the user to have access to the AWS usage report page?

Options are :

  • "Effect": "Allow", "Action": ["Describe"], "Resource": "Billing"
  • "Effect": "Allow", "Action": ["AccountUsage"], "Resource": "*"
  • "Effect": "Allow", "Action": ["aws-portal:ViewUsage", "aws-portal:ViewBilling"], "Resource": "*" (Correct)
  • "Effect": "Allow", "Action": ["aws-portal:ViewBilling"], "Resource": "*"

Answer : "Effect": "Allow", "Action": ["aws-portal:ViewUsage", "aws-portal:ViewBilling"], "Resource": "*"

Explanation Answer – C As per the AWS documentation, access to the Usage Reports page requires both the aws-portal:ViewUsage and aws-portal:ViewBilling actions, so Option C is the right answer. Options A, B and D are invalid because the actions should be aws-portal:ViewUsage and aws-portal:ViewBilling. For information on IAM policies, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
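A complete policy statement granting this access might look like the following (a sketch; the Sid and policy wrapper are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsageReportAccess",
      "Effect": "Allow",
      "Action": [
        "aws-portal:ViewUsage",
        "aws-portal:ViewBilling"
      ],
      "Resource": "*"
    }
  ]
}
```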

Your company has the following setup in AWS

a. A set of EC2 Instances hosting a web application

b. An application load balancer placed in front of the EC2 Instances 


There seems to be a set of malicious requests coming from a set of IP addresses. Which of the following can be used to protect against these requests?

 

Options are :

  • Use Security Groups to block the IP addresses
  • Use VPC Flow Logs to block the IP addresses
  • Use AWS Inspector to block the IP addresses
  • Use AWS WAF to block the IP addresses (Correct)

Answer : Use AWS WAF to block the IP addresses

Explanation Answer – D The AWS Documentation mentions the following on AWS WAF, which can be used to protect Application Load Balancers and CloudFront: A web access control list (web ACL) gives you fine-grained control over the web requests that your Amazon CloudFront distributions or Application Load Balancers respond to. You can allow or block requests that:

  • Originate from an IP address or a range of IP addresses
  • Originate from a specific country or countries
  • Contain a specified string or match a regular expression (regex) pattern in a particular part of requests
  • Exceed a specified length
  • Appear to contain malicious SQL code (known as SQL injection)
  • Appear to contain malicious scripts (known as cross-site scripting)

Option A is invalid because Security Groups can only allow traffic; they cannot explicitly deny requests from specific IP addresses. Options B and C are invalid because these services cannot be used to block IP addresses. For information on AWS WAF, please visit the below URL: https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html

An organization has set up multiple IAM users. The organization wants each IAM user to access the IAM console only from within the organization's network and not from outside. How can it achieve this?

Options are :

  • Create an IAM policy with the security group and use that security group for AWS console login
  • Create an IAM policy with a condition which denies access when the IP address range is not from the organization (Correct)
  • Configure the EC2 instance security group which allows traffic only from the organization’s IP range
  • Create an IAM policy with VPC and allow a secure gateway between the organization and AWS Console

Answer : Create an IAM policy with a condition which denies access when the IP address range is not from the organization

Explanation Answer – B You can use a Deny condition so that sign-in from outside the organization is not allowed: any request whose source address is not in the organization's IP range is denied access to resources in AWS. Option A is invalid because you don't reference a security group in an IAM policy. Option C is invalid because a security group controls traffic to EC2 instances, not console logins. Option D is invalid because an IAM policy has no such option. For more information on IAM policy conditions, please visit the URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_examples.html#iam-policy-example-ec2-two-conditions
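A sketch of such a Deny condition, assuming the organization's public IP range is 203.0.113.0/24 (a placeholder CIDR):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessFromOutsideOrg",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "203.0.113.0/24"
        }
      }
    }
  ]
}
```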

You are creating a Lambda function which will be triggered by a CloudWatch Event. The data from these events needs to be stored in a DynamoDB table. How should the Lambda function be given access to the DynamoDB table?

Options are :

  • Put the AWS Access keys in the Lambda function since the Lambda function by default is secure
  • Use an IAM role which has permissions to the DynamoDB table and attach it to the Lambda function. (Correct)
  • Use the AWS Access keys which has access to DynamoDB and then place it in an S3 bucket.
  • Create a VPC endpoint for the DynamoDB table. Access the VPC endpoint from the Lambda function.

Answer : Use an IAM role which has permissions to the DynamoDB table and attach it to the Lambda function.

Explanation Answer – B AWS Lambda functions use an execution role to interact with other AWS services. So use an IAM role which has permissions to the DynamoDB table and attach it to the Lambda function. Options A and C are invalid because you should never embed AWS access keys for access. Option D is invalid because a VPC endpoint controls the network path, not the function's permissions. For more information on the Lambda permission model, please visit the URL: https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
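A permissions policy attached to the Lambda execution role might be sketched as follows (the table name, region, and account number are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/Events"
    }
  ]
}
```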

There is a set of EC2 Instances in a private subnet. The application hosted on these EC2 Instances needs to access a DynamoDB table. It needs to be ensured that traffic does not flow out to the internet. How can this be achieved?

Options are :

  • Use a VPC endpoint to the DynamoDB table (Correct)
  • Use a VPN connection from the VPC
  • Use a VPC gateway from the VPC
  • Use a VPC Peering connection to the DynamoDB table

Answer : Use a VPC endpoint to the DynamoDB table

Explanation Answer – A You can access the DynamoDB service from within a VPC without going out to the Internet by using a VPC endpoint. Option B is invalid because a VPN is used for connections between an on-premises network and AWS. Option C is invalid because there is no such option. Option D is invalid because VPC peering is used to connect 2 VPCs. For more information on VPC endpoints for DynamoDB, please visit the URL: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
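A gateway endpoint for DynamoDB can additionally carry an endpoint policy restricting which tables may be reached through it; a sketch (the table ARN is a placeholder):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/Orders"
    }
  ]
}
```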

A company has a requirement to create a DynamoDB table. The company’s software architect has provided the following CLI command for the DynamoDB table

 

aws dynamodb create-table \
  --table-name Customers \
  --attribute-definitions \
      AttributeName=ID,AttributeType=S \
      AttributeName=Name,AttributeType=S \
  --key-schema \
      AttributeName=ID,KeyType=HASH \
      AttributeName=Name,KeyType=RANGE \
  --provisioned-throughput \
      ReadCapacityUnits=10,WriteCapacityUnits=5 \
  --sse-specification Enabled=true

Which of the following has been taken care of from a security perspective in the above command?

Options are :

  • Since the ID is hashed , it ensures security of the underlying table.
  • The above command ensures data encryption at rest for the Customer table (Correct)
  • The above command ensures data encryption in transit for the Customer table
  • The right throughput has been specified from a security perspective

Answer : The above command ensures data encryption at rest for the Customer table

Explanation Answer – B The "--sse-specification Enabled=true" parameter in the above command ensures that the data for the DynamoDB table is encrypted at rest. Options A, C and D are invalid because this parameter specifically enables encryption at rest, not hashing, encryption in transit, or throughput tuning. For more information on DynamoDB encryption, please visit the URL: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.tutorial.html

You need to establish a secure backup and archiving solution for your company, using AWS. Documents should be immediately accessible for three months and available for five years for compliance reasons. Which AWS service fulfills these requirements in the most cost-effective way? Choose the correct answer:

Options are :

  • Upload data to S3 and use lifecycle policies to move the data into Glacier for long-term archiving. (Correct)
  • Upload the data on EBS, use lifecycle policies to move EBS snapshots into S3 and later into Glacier for long-term archiving.
  • Use Direct Connect to upload data to S3 and use IAM policies to move the data into Glacier for long-term archiving.
  • Use Storage Gateway to store data to S3 and use lifecycle policies to move the data into Redshift for long-term archiving.

Answer : Upload data to S3 and use lifecycle policies to move the data into Glacier for long-term archiving.

Explanation Answer – A Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. With Amazon S3 lifecycle policies you can create transition actions in which you define when objects transition to another Amazon S3 storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Option B is invalid because lifecycle policies are not available for EBS volumes. Option C is invalid because IAM policies cannot be used to move data to Glacier. Option D is invalid because lifecycle policies are not used to move data to Redshift. For more information on S3 lifecycle policies, please visit the URL: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
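A lifecycle configuration matching the stated requirement might be sketched as follows (90 days of immediate access, then Glacier, with expiry after five years; the rule ID and prefix are illustrative):

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "documents/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 1825 }
    }
  ]
}
```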

What is the result of the following bucket policy?

{
  "Statement": [
    {
      "Sid": "Sid1",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Principal": {
        "AWS": ["arn:aws:iam::111111111:user/mark"]
      }
    },
    {
      "Sid": "Sid2",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Principal": {
        "AWS": ["*"]
      }
    }
  ]
}

Choose the correct answer:

Options are :

  • It will allow all access to the bucket mybucket
  • It will allow the user mark from AWS account number 111111111 all access to the bucket but deny everyone else all access to the bucket
  • It will deny all access to the bucket mybucket (Correct)
  • None of these

Answer : It will deny all access to the bucket mybucket

Explanation Answer – C The policy consists of 2 statements: one allows the user mark access to the bucket, and the next denies all users access to the bucket. The explicit deny overrides the allow, and hence no user, including mark, will have access to the bucket. Options A, B and D are invalid because the explicit deny applies to all principals. For examples of S3 bucket policies, please refer to the below Link: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

A company is planning on using AWS EC2 and Amazon CloudFront for their web application. For which one of the below attacks is the usage of CloudFront most suited?

Options are :

  • Cross-site scripting
  • SQL Injection
  • DDoS attacks (Correct)
  • Malware attacks

Answer : DDoS attacks

Explanation Answer – C Among the listed attacks, CloudFront is most suited to mitigating DDoS attacks, since its globally distributed edge network absorbs and disperses attack traffic. Options A, B and D are invalid because CloudFront itself does not inspect request content for cross-site scripting, SQL injection, or malware. For more information on security with CloudFront, please refer to the below Link: https://d1.awsstatic.com/whitepapers/Security/Secure_content_delivery_with_CloudFront_whitepaper.pdf

Your company is planning on using AWS EC2 and ELB for deploying their web applications. The security policy mandates that all traffic should be encrypted. Which of the following options will ensure that this requirement is met? Choose 2 answers from the options below.

Options are :

  • Ensure the load balancer listens on port 80
  • Ensure the load balancer listens on port 443 (Correct)
  • Ensure the HTTPS listener sends requests to the instances on port 443 (Correct)
  • Ensure the HTTPS listener sends requests to the instances on port 80

Answer : Ensure the load balancer listens on port 443 Ensure the HTTPS listener sends requests to the instances on port 443

Explanation Answer - B and C The AWS Documentation mentions the following: You can create a load balancer that listens on both the HTTP (80) and HTTPS (443) ports. If you specify that the HTTPS listener sends requests to the instances on port 80, the load balancer terminates the requests and communication from the load balancer to the instances is not encrypted. If the HTTPS listener sends requests to the instances on port 443, communication from the load balancer to the instances is encrypted. Option A is invalid because all traffic must be encrypted, so port 80 should not be used. Option D is invalid because sending requests to the instances on port 80 leaves the backend connection unencrypted. For more information on HTTPS with ELB, please refer to the below Link: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html

Your company is hosting a set of EC2 Instances in AWS. They want to have the ability to detect if any port scans occur on their AWS EC2 Instances. Which of the following can help in this regard?

Options are :

  • Use AWS Inspector to continuously inspect the instances for port scans
  • Use AWS Trusted Advisor to notify of any malicious port scans
  • Use AWS Config to notify of any malicious port scans
  • Use Amazon GuardDuty to monitor any malicious port scans (Correct)

Answer : Use Amazon GuardDuty to monitor any malicious port scans

Explanation Answer – D The AWS blogs mention the following in support of the use of Amazon GuardDuty: GuardDuty voraciously consumes multiple data streams, including several threat intelligence feeds, staying aware of malicious IP addresses, devious domains, and more importantly, learning to accurately identify malicious or unauthorized behavior in your AWS accounts. In combination with information gleaned from your VPC Flow Logs, AWS CloudTrail Event Logs, and DNS logs, this allows GuardDuty to detect many different types of dangerous and mischievous behavior including probes for known vulnerabilities, port scans and probes, and access from unusual locations. On the AWS side, it looks for suspicious AWS account activity such as unauthorized deployments, unusual CloudTrail activity, patterns of access to AWS API functions, and attempts to exceed multiple service limits. GuardDuty will also look for compromised EC2 instances talking to malicious entities or services, data exfiltration attempts, and instances that are mining cryptocurrency. Options A, B and C are invalid because these services cannot be used to detect port scans. For more information on Amazon GuardDuty, please refer to the below Link: https://aws.amazon.com/blogs/aws/amazon-guardduty-continuous-security-monitoring-threat-detection/

You have an Amazon VPC that has a private subnet and a public subnet in which you have a NAT instance. You have created a group of EC2 instances that configure themselves at startup by downloading a bootstrapping script from S3 that deploys an application via Git.

Which one of the following setups would give us the highest level of security? Choose the correct answer from the options given below.

Options are :

  • EC2 instances in our public subnet, no EIPs, route outgoing traffic via the IGW
  • EC2 instances in our public subnet, assigned EIPs, and route outgoing traffic via the NAT
  • EC2 instance in our private subnet, assigned EIPs, and route our outgoing traffic via our IGW
  • EC2 instances in our private subnet, no EIPs, route outgoing traffic via the NAT (Correct)

Answer : EC2 instances in our private subnet, no EIPs, route outgoing traffic via the NAT

Explanation Answer – D To make EC2 instances as secure as possible, they need to be in a private subnet, like a database server, with no EIP and all outgoing traffic routed via the NAT instance. Options A and B are invalid because the instances need to be in the private subnet. Option C is invalid because, since the instance needs to be in the private subnet, you should not attach an EIP to the instance. For more information on NAT instances, please refer to the below link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html

In your LAMP application, you have some developers that say they would like access to your logs. However, since you are using an AWS Auto Scaling group, your instances are constantly being re-created. What would you do to make sure that these developers can access these log files? Choose the correct answer from the options below

Options are :

  • Give only the necessary access to the Apache servers so that the developers can gain access to the log files.
  • Give root access to your Apache servers to the developers.
  • Give read-only access to your developers to the Apache servers.
  • Set up a central logging server that you can use to archive your logs; archive these logs to an S3 bucket for developer-access. (Correct)

Answer : Set up a central logging server that you can use to archive your logs; archive these logs to an S3 bucket for developer-access.

Explanation Answer – D An important security principle is to never give developers direct access to the actual servers. The best option is to set up a central logging server used to archive the logs, which can then be stored in S3 for developer access. Options A, B and C are all invalid because you should not give the developers access to the Apache servers. For more information on S3, please refer to the below link https://aws.amazon.com/documentation/s3/

Your company is planning on developing an application in AWS. This is a web-based application. The application users will use their Facebook or Google identities for authentication. Which of the following is a step you would include in your implementation for the web application?

Options are :

  • Ensure the Security Groups in the VPC only allow requests from the Google and Facebook Authentication servers.
  • Create an OIDC identity provider in AWS (Correct)
  • Create a SAML provider in AWS
  • Create an OIDC provider in both Google and Facebook

Answer : Create an OIDC identity provider in AWS

Explanation Answer – B The AWS Documentation mentions the following: OIDC identity providers are entities in IAM that describe an identity provider (IdP) service that supports the OpenID Connect (OIDC) standard. You use an OIDC identity provider when you want to establish trust between an OIDC-compatible IdP, such as Google, Salesforce, and many others, and your AWS account. This is useful if you are creating a mobile app or web application that requires access to AWS resources, but you don't want to create custom sign-in code or manage your own user identities. Option A is invalid because security groups control network traffic, not authentication. Option C is invalid because SAML is used for a different form of federated authentication. Option D is invalid because the OIDC identity provider is created in AWS, not in Google or Facebook. For more information on OIDC identity providers, please refer to the below Link: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html

Your company is planning on developing an application in AWS. This is a web-based application. The application users will use their Facebook or Google identities for authentication. You want the ability to manage user profiles without having to add extra code for this. Which of the below would assist in this?

Options are :

  • Create an OIDC identity provider in AWS
  • Create a SAML provider in AWS
  • Use AWS Cognito to manage the user profiles (Correct)
  • Use IAM users to manage the user profiles

Answer : Use AWS Cognito to manage the user profiles

Explanation Answer – C The AWS Documentation mentions the following: A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Your users can also sign in through social identity providers like Facebook or Amazon, and through SAML identity providers. Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK. User pools provide:

  • Sign-up and sign-in services.
  • A built-in, customizable web UI to sign in users.
  • Social sign-in with Facebook, Google, and Login with Amazon, as well as sign-in with SAML identity providers from your user pool.
  • User directory management and user profiles.
  • Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.
  • Customized workflows and user migration through AWS Lambda triggers.

Options A and B are invalid because identity providers enable federated sign-in but are not used to manage user profiles. Option D is invalid because managing profiles through IAM users would be a maintenance overhead. For more information on Cognito user pools, please refer to the below Link: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html

Your company has many AWS accounts defined and all are managed via AWS Organizations. One AWS account has an S3 bucket that has critical data. How can it be ensured that only users from that account access the bucket?

Options are :

  • Ensure the bucket policy has a condition which involves aws:PrincipalOrgID (Correct)
  • Ensure the bucket policy has a condition which involves aws:AccountNumber
  • Ensure the bucket policy has a condition which involves aws:PrincipalID
  • Ensure the bucket policy has a condition which involves aws:OrgID

Answer : Ensure the bucket policy has a condition which involves aws:PrincipalOrgID

Explanation Answer – A The AWS Documentation mentions the following: AWS Identity and Access Management (IAM) now makes it easier for you to control access to your AWS resources by using the AWS organization of IAM principals (users and roles). For some services, you grant permissions using resource-based policies to specify the accounts and principals that can access the resource and what actions they can perform on it. Now, you can use a new condition key, aws:PrincipalOrgID, in these policies to require all principals accessing the resource to be from an account in the organization. Options B, C and D are invalid because the condition in the bucket policy has to use aws:PrincipalOrgID. For more information on controlling access via Organizations, please refer to the below Link: https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/
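A bucket policy using this condition key might be sketched as follows (the bucket name and organization ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideOrg",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::critical-data-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalOrgID": "o-exampleorgid"
        }
      }
    }
  ]
}
```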

Your company has defined a set of S3 buckets in AWS. They need to monitor the S3 buckets and know the source IP address and identity of whoever makes requests to the S3 buckets. How can this be achieved?

Options are :

  • Enable VPC flow logs to know the source IP addresses
  • Monitor the S3 API calls by using Cloudtrail logging (Correct)
  • Monitor the S3 API calls by using Cloudwatch logging
  • Enable AWS Inspector for the S3 bucket

Answer : Monitor the S3 API calls by using Cloudtrail logging

Explanation Answer – B The AWS Documentation mentions the following: Amazon S3 is integrated with AWS CloudTrail. CloudTrail is a service that captures specific API calls made to Amazon S3 from your AWS account and delivers the log files to an Amazon S3 bucket that you specify. It captures API calls made from the Amazon S3 console or from the Amazon S3 API. Using the information collected by CloudTrail, you can determine what request was made to Amazon S3, the source IP address from which the request was made, who made the request, when it was made, and so on. Options A, C and D are invalid because these services cannot be used to get the source IP address of the calls to S3 buckets. For more information on CloudTrail logging, please refer to the below Link: https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html
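Object-level (data event) logging for a bucket is enabled through a CloudTrail event selector; a sketch of the JSON that could be passed to `aws cloudtrail put-event-selectors` (the bucket name is a placeholder):

```json
[
  {
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [
      {
        "Type": "AWS::S3::Object",
        "Values": ["arn:aws:s3:::my-monitored-bucket/"]
      }
    ]
  }
]
```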

Your organization is preparing for a security assessment of your use of AWS. In preparation for this assessment, which three IAM best practices should you consider implementing? 

Options are :

  • Create individual IAM users for everyone in your organization (Correct)
  • Configure MFA on the root account and for privileged IAM users (Correct)
  • Assign IAM users and groups configured with policies granting least privilege access (Correct)
  • Ensure all users have been assigned and are frequently rotating a password, access ID/secret key, and X.509 certificate

Answer : Create individual IAM users for everyone in your organization Configure MFA on the root account and for privileged IAM users Assign IAM users and groups configured with policies granting least privilege access

Explanation Answer – A, B and C When you go to the IAM security dashboard, the security status shows these best practices as the first level of security. Option D is invalid because, as per the dashboard, frequent rotation of passwords, access keys, and X.509 certificates is not part of the security recommendations. For more information on best security practices please visit the URL: https://aws.amazon.com/whitepapers/aws-security-best-practices/

Your team is experimenting with the API gateway service for an application. There is a need to implement a custom module which can be used for authentication/authorization for calls made to the API gateway. How can this be achieved?

Options are :

  • Use the request parameters for authorization
  • Use a Lambda authorizer (Correct)
  • Use the gateway authorizer
  • Use CORS on the API gateway

Answer : Use a Lambda authorizer

Explanation Answer – B The AWS Documentation mentions the following: An Amazon API Gateway Lambda authorizer (formerly known as a custom authorizer) is a Lambda function that you provide to control access to your API methods. A Lambda authorizer can use bearer token authentication strategies, such as OAuth or SAML, or it can use request parameters such as headers, paths, query strings, stage variables, or context variables. Options A, C and D are invalid because they cannot provide custom authentication/authorization for calls made to the API gateway. For more information on the API Gateway Lambda authorizer please visit the URL: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
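A minimal TOKEN-type Lambda authorizer can be sketched in Python; the hard-coded token comparison below is a stand-in for real OAuth/SAML validation, and the event fields follow the TOKEN authorizer shape:

```python
# Minimal API Gateway TOKEN authorizer sketch.
# The hard-coded token check stands in for real validation
# (e.g. verifying an OAuth bearer token or a SAML assertion).

def generate_policy(principal_id, effect, method_arn):
    """Build the IAM policy document API Gateway expects in the response."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": method_arn,
                }
            ],
        },
    }

def handler(event, context):
    # API Gateway passes the bearer token in authorizationToken
    # and the invoked method's ARN in methodArn.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "allow-me" else "Deny"
    return generate_policy("example-user", effect, event["methodArn"])
```

API Gateway caches the returned policy per token for the configured TTL, so the function is not invoked on every request.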

A company has set up EC2 instances on the AWS Cloud. There is a need to see all the IP addresses which are accessing the EC2 Instances. Which service can help achieve this?

Options are :

  • Use the AWS Inspector service
  • Use AWS VPC Flow Logs (Correct)
  • Use Network ACL’s
  • Use Security Groups

Answer : Use AWS VPC Flow Logs

Explanation Answer – B The AWS Documentation mentions the following: A flow log record represents a network flow in your flow log. Each record captures the network flow for a specific 5-tuple, for a specific capture window. A 5-tuple is a set of five different values that specify the source, destination, and protocol for an internet protocol (IP) flow. Options A, C and D are all invalid because these services/tools cannot be used to get the IP addresses which are accessing the EC2 Instances. For more information on VPC Flow Logs please visit the URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html

You have private video content in S3 that you want to serve to subscribed users on the Internet. User IDs, credentials, and subscriptions are stored in an Amazon RDS database. Which configuration will allow you to securely serve private content to your users?

Options are :

  • Generate pre-signed URLs for each user as they request access to protected S3 content (Correct)
  • Create an IAM user for each subscribed user and assign the GetObject permission to each IAM user
  • Create an S3 bucket policy that limits access to your private content to only your subscribed users' credentials
  • Create a CloudFront Origin Identity user for your subscribed users and assign the GetObject permission to this user

Answer : Generate pre-signed URLs for each user as they request access to protected S3 content

Explanation Answer – A All objects and buckets are private by default. Pre-signed URLs are useful if you want a user or customer to be able to access a specific object in your bucket without requiring them to have AWS security credentials or permissions. When you create a pre-signed URL, you must provide your security credentials and specify a bucket name, an object key, an HTTP method (GET for downloading objects), and an expiration date and time. The pre-signed URL is valid only for the specified duration. Option B is invalid because creating an IAM user per subscriber would be too difficult to implement and manage. Option C is invalid because a bucket policy cannot reference subscriber credentials held in an RDS database. Option D is invalid because an Origin Access Identity restricts S3 access to CloudFront, not to individual subscribed users. For more information on pre-signed URLs, please refer to the Link: http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html

A company is hosting sensitive data in an AWS S3 bucket. It needs to be ensured that the bucket always remains private. How can this be ensured continually? Choose 2 answers from the options given below

Options are :

  • Use AWS Config to monitor changes to the AWS Bucket (Correct)
  • Use AWS Lambda function to change the bucket policy (Correct)
  • Use AWS Trusted Advisor API to monitor the changes to the AWS Bucket
  • Use AWS Lambda function to change the bucket ACL

Answer : Use AWS Config to monitor changes to the AWS Bucket Use AWS Lambda function to change the bucket policy

Explanation Answer – A and B One of the AWS Blogs describes using AWS Config to detect changes to a bucket's access settings and a Lambda function to automatically revert the bucket policy. Option C is invalid because the Trusted Advisor API cannot be used to monitor changes to the bucket. Option D is invalid because the remediation should change the bucket policy and not the ACL. For more information on the implementation of this use case, please refer to the Link: https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-monitor-for-and-respond-to-amazon-s3-buckets-allowing-public-access/
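The evaluation step of such a remediation function can be sketched in Python. The helper below inspects a bucket policy document for a wildcard principal; in a real Lambda the policy would be fetched and corrected via the S3 API, which is omitted here:

```python
import json

def policy_allows_public_access(policy_json):
    """Return True if any Allow statement grants access to everyone.

    A wildcard principal ("*" or {"AWS": "*"}) on an Allow statement
    is treated as public. This is a simplified check; a production
    remediation would also consider conditions and ACL grants.
    """
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*":
            return True
        if isinstance(principal, dict):
            aws = principal.get("AWS")
            if aws == "*" or (isinstance(aws, list) and "*" in aws):
                return True
    return False
```

If the check returns True, the Lambda function would re-apply a private bucket policy to restore the desired state.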

You have a set of 100 EC2 Instances in an AWS account. You need to ensure that all of these instances are patched and kept up to date. All of the instances are in a private subnet. How can you achieve this? Choose 2 answers from the options given below

Options are :

  • Ensure a NAT gateway is present to download the updates (Correct)
  • Use the Systems Manager to patch the instances (Correct)
  • Ensure an Internet gateway is present to download the updates
  • Use the AWS Inspector to patch the updates

Answer : Ensure a NAT gateway is present to download the updates Use the Systems Manager to patch the instances

Explanation Answer – A and B Option C is invalid because the instances need to remain in a private subnet, and a private subnet cannot route through an Internet gateway. Option D is invalid because Amazon Inspector can only detect missing patches; it cannot apply them. One of the AWS blogs describes this architecture for patching Linux servers: the instances reach the patch repositories through a NAT gateway, and AWS Systems Manager orchestrates the patching. For more information on patching Linux workloads in AWS, please refer to the link: https://aws.amazon.com/blogs/security/how-to-patch-linux-workloads-on-aws/

You are building a large-scale confidential documentation web server on AWS and all of the documentation for it will be stored on S3. One of the requirements is that it cannot be publicly accessible from S3 directly, and you will need to use CloudFront to accomplish this. Which of the methods listed below would satisfy the requirements as outlined? Choose an answer from the options below

Options are :

  • Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
  • Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI. (Correct)
  • Create individual policies for each bucket the documents are stored in and in that policy grant access to only CloudFront.
  • Create an S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

Answer : Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

Explanation Answer – B If you want to use CloudFront signed URLs or signed cookies to provide access to objects in your Amazon S3 bucket, you probably also want to prevent users from accessing your Amazon S3 objects using Amazon S3 URLs. If users access your objects directly in Amazon S3, they bypass the controls provided by CloudFront signed URLs or signed cookies, for example control over the date and time after which a user can no longer access your content, and control over which IP addresses can be used to access it. In addition, if users access objects both through CloudFront and directly by using Amazon S3 URLs, CloudFront access logs are less useful because they're incomplete. Option A is invalid because you have to create a CloudFront origin access identity, not an IAM user. Options C and D are invalid because a bucket policy cannot name a CloudFront distribution as a principal; access must be restricted via the OAI. For more information on Origin Access Identity, please see the below link: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
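The bucket policy granting read access to the OAI has roughly this shape; the OAI ID and bucket name used below are placeholders, not values from AWS:

```python
import json

def oai_bucket_policy(bucket, oai_id):
    """Build an S3 bucket policy granting read access to a CloudFront
    origin access identity (bucket and OAI ID are hypothetical)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontOAIRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": ("arn:aws:iam::cloudfront:user/"
                        f"CloudFront Origin Access Identity {oai_id}")
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }

policy = oai_bucket_policy("confidential-docs", "E2EXAMPLE")
print(json.dumps(policy, indent=2))
```

Combined with blocking all other access, this means objects can only be fetched through the CloudFront distribution.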

You have an EC2 Instance with the following security configuration:

a) Security group inbound: ICMP allowed

b) Security group outbound: ICMP denied

c) Network ACL inbound: ICMP allowed

d) Network ACL outbound: ICMP denied

If Flow Logs are enabled for the instance, which of the following flow records will be recorded? Choose 3 answers from the options given below

Options are :

  • An ACCEPT record for the request based on the Security Group (Correct)
  • An ACCEPT record for the request based on the NACL (Correct)
  • A REJECT record for the response based on the Security Group
  • A REJECT record for the response based on the NACL (Correct)

Answer : An ACCEPT record for the request based on the Security Group An ACCEPT record for the request based on the NACL A REJECT record for the response based on the NACL

Explanation Answer – A, B and D This example is given in the AWS documentation as well. Suppose you use the ping command from your home computer (IP address 203.0.113.12) to your instance (the network interface's private IP address is 172.31.16.139). Your security group's inbound rules allow ICMP traffic and the outbound rules do not; however, because security groups are stateful, the response ping from your instance is allowed. Your network ACL permits inbound ICMP traffic but does not permit outbound ICMP traffic. Because network ACLs are stateless, the response ping is dropped and will not reach your home computer. In a flow log, this is displayed as two flow log records:

  • An ACCEPT record for the originating ping that was allowed by both the network ACL and the security group, and therefore was allowed to reach your instance.

  • A REJECT record for the response ping that the network ACL denied.

Option C is invalid because the stateful security group allows the response, so no security-group REJECT record is produced. For more information on Flow Logs, please refer to the below URL: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
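The records themselves are space-separated lines in the default (version 2) flow log format. A small parser makes the ACCEPT/REJECT pair easy to inspect; the account ID, interface ID and timestamps below are made up:

```python
# Field names of the default-format (version 2) VPC Flow Log record.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport "
          "dstport protocol packets bytes start end action log_status").split()

def parse_flow_record(line):
    """Split one default-format flow log line into a field dict."""
    return dict(zip(FIELDS, line.split()))

# The inbound ping, allowed by both the security group and the NACL
# (protocol 1 is ICMP; ports are 0 for ICMP traffic).
accept = parse_flow_record(
    "2 123456789010 eni-abc123de 203.0.113.12 172.31.16.139 "
    "0 0 1 4 336 1432917027 1432917142 ACCEPT OK")

# The response ping, rejected by the stateless NACL on the way out.
reject = parse_flow_record(
    "2 123456789010 eni-abc123de 172.31.16.139 203.0.113.12 "
    "0 0 1 4 336 1432917094 1432917142 REJECT OK")
```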

Your company operates in the gaming domain and hosts several EC2 Instances as game servers. The servers each experience user loads in the thousands. There is a concern about DDoS attacks on the EC2 Instances, which could cause a huge revenue loss for the company. Which of the following can help mitigate this security concern and also ensure minimum downtime for the servers?

Options are :

  • Use VPC Flow logs to monitor the VPC and then implement NACL’s to mitigate attacks
  • Use AWS Shield Advanced to protect the EC2 Instances (Correct)
  • Use AWS Inspector to protect the EC2 Instances
  • Use AWS Trusted Advisor to protect the EC2 Instances

Answer : Use AWS Shield Advanced to protect the EC2 Instances

Explanation Answer – B The AWS documentation lists protecting latency-sensitive applications such as game servers against large DDoS attacks as one of the core use cases for AWS Shield Advanced, which provides expanded DDoS protection for EC2, ELB, CloudFront and Route 53 resources. Option A is invalid because building NACL rules from flow log analysis would take time, and revenue loss is a big concern for the company. Options C and D are invalid because Amazon Inspector and Trusted Advisor cannot be used to protect the instances from DDoS attacks. For more information on AWS Shield use cases, please refer to the below URL: https://docs.aws.amazon.com/waf/latest/developerguide/aws-shield-use-case.html

You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you with developing a reliable and durable logging solution to track changes made to your EC2, IAM and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?

Options are :

  • Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles S3 bucket policies and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs. (Correct)
  • Create a new CloudTrail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
  • Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
  • Create three new CloudTrail trails with three new S3 buckets to store the logs one for the AWS Management console, one for AWS SDKs and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.

Answer : Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles S3 bucket policies and Multi Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.

Explanation Answer – A AWS Identity and Access Management (IAM) is integrated with AWS CloudTrail, a service that logs AWS events made by or on behalf of your AWS account. CloudTrail logs authenticated AWS API calls and AWS sign-in events, and collects this event information in files that are delivered to Amazon S3 buckets. You need to ensure that all services are included and that the log bucket itself is protected. Option B is invalid because the global services option must be selected to capture IAM events, and it omits MFA Delete on the log bucket. Option C is invalid because you should use IAM roles and bucket policies rather than S3 ACLs. Option D is invalid because you should ideally just create one trail with one S3 bucket. For more information on CloudTrail, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html

An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise’s account. The enterprise has internal security policies that require any outside access to their environment must conform to the principles of least privilege and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?

Options are :

  • From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
  • Create an IAM user within the enterprise account assign a user policy to the IAM user that allows only the actions required by the SaaS application. Create a new access and secret key for the user and provide these credentials to the SaaS provider.
  • Create an IAM role for cross-account access that allows the SaaS provider’s account to assume the role, and assign it a policy that allows only the actions required by the SaaS application. (Correct)
  • Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.

Answer : Create an IAM role for cross-account access that allows the SaaS provider’s account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.

Explanation Answer – C An AWS blog on SaaS subscriptions shows how access is given to external accounts for the services in your own account: the SaaS provider assumes a cross-account IAM role that is scoped to the minimum required actions. Options A and B are invalid because you should not hand out IAM users or long-lived access keys. Option D is invalid because an EC2 instance role in your account cannot be attached to instances in the provider's account; you need a role for cross-account access. For more information on allowing access to external accounts, please visit the below URL: https://aws.amazon.com/blogs/apn/how-to-best-architect-your-aws-marketplace-saas-subscription-across-multiple-aws-accounts/
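The requirement that the credentials cannot be used by any other third party maps to the sts:ExternalId condition on the role's trust policy, the standard guard against the confused-deputy problem. A sketch of that trust policy; the account ID and external ID below are placeholders:

```python
def saas_trust_policy(saas_account_id, external_id):
    """Trust policy allowing the SaaS provider's account to assume the
    role only when it presents the agreed ExternalId (both values
    here are hypothetical placeholders)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{saas_account_id}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
        }],
    }
```

The permissions policy attached to the same role would then list only the EC2 describe actions the SaaS application needs, satisfying least privilege.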

You have an S3 bucket defined in AWS. You want to ensure that you encrypt the data before sending it across the wire. What is the best way to achieve this?

Options are :

  • Enable server side encryption for the S3 bucket. This request will ensure that the data is encrypted first.
  • Use the AWS Encryption CLI to encrypt the data first (Correct)
  • Use a Lambda function to encrypt the data before sending it to the S3 bucket
  • Enable client side encryption for the S3 bucket

Answer : Use the AWS Encryption CLI to encrypt the data first

Explanation Answer – B One can use the AWS Encryption CLI to encrypt the data locally before sending it across to the S3 bucket, so it is already ciphertext while in transit. Options A and C are invalid because with those approaches the data would still travel to AWS in plain text before being encrypted. Option D is invalid because client-side encryption is not a bucket setting you can simply enable; the client itself must perform the encryption. For more information on encrypting and decrypting data, please visit the below URL: https://aws.amazon.com/blogs/security/how-to-encrypt-and-decrypt-your-data-with-the-aws-encryption-cli/

Your company has a set of EC2 Instances defined in AWS. These EC2 Instances have strict security groups attached to them. You need to ensure that changes to the security groups are noted and acted on accordingly. How can you achieve this?

Options are :

  • Use Cloudwatch logs to monitor the activity on the Security Groups. Use filters to search for the changes and use SNS for the notification.
  • Use Cloudwatch metrics to monitor the activity on the Security Groups. Use filters to search for the changes and use SNS for the notification.
  • Use AWS Inspector to monitor the activity on the Security Groups. Use filters to search for the changes and use SNS for the notification.
  • Use Cloudwatch events to be triggered for any changes to the Security Groups. Configure the Lambda function for email notification as well. (Correct)

Answer : Use Cloudwatch events to be triggered for any changes to the Security Groups. Configure the Lambda function for email notification as well.

Explanation Answer – D An AWS blog shows how security group changes can be monitored and automatically reverted: a CloudWatch Events rule fires on the change, triggers a Lambda function that reverts it, and sends an email notification. Options A and B are invalid because you need CloudWatch Events, not log or metric filters, to react to the changes. Option C is invalid because Amazon Inspector is not used to monitor activity on security groups. For more information on monitoring security groups, please visit the below URL: https://aws.amazon.com/blogs/security/how-to-automatically-revert-and-receive-notifications-about-changes-to-your-amazon-vpc-security-groups/

Your company has just set up a new central server in a VPC. There is a requirement for other teams, who have their servers located in different VPCs in the same region, to connect to the central server. Which of the below options is best suited to achieve this requirement?

Options are :

  • Set up VPC peering between the central server VPC and each of the teams VPCs. (Correct)
  • Set up AWS DirectConnect between the central server VPC and each of the teams VPCs.
  • Set up an IPSec Tunnel between the central server VPC and each of the teams VPCs.
  • None of the above options will work.

Answer : Set up VPC peering between the central server VPC and each of the teams VPCs.

Explanation Answer – A A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 or IPv6 addresses. Instances in either VPC can communicate with each other as if they were within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. Options B and C are invalid because Direct Connect links on-premises networks to AWS and IPSec tunnels add unnecessary complexity; VPC peering is the native solution for VPC-to-VPC connectivity. Option D is invalid because VPC peering is available. For more information on VPC peering, please see the below link: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html

There is a requirement for a company to transfer large amounts of data between AWS and an on-premise location. There is an additional requirement for low latency and high consistency traffic to AWS. Given these requirements how would you design a hybrid architecture? Choose the correct answer from the options below

Options are :

  • Provision a Direct Connect connection to an AWS region using a Direct Connect partner. (Correct)
  • Create a VPN tunnel for private connectivity, which increases network consistency and reduces latency.
  • Create an IPSec tunnel for private connectivity, which increases network consistency and reduces latency.
  • Create a VPC peering connection between AWS and the Customer gateway.

Answer : Provision a Direct Connect connection to an AWS region using a Direct Connect partner.

Explanation Answer – A AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. Options B and C are invalid because VPN and IPSec tunnels run over the public Internet and will not deliver the required low latency and consistency. Option D is invalid because VPC peering only connects two VPCs, not AWS and an on-premises location. For more information on AWS Direct Connect, just browse to the below URL: https://aws.amazon.com/directconnect/

Which of the following bucket policies will ensure that objects uploaded to a bucket called ‘demo’ are encrypted?

Options are :

  • { "Version":"2012-10-17", "Id":"PutObj", "Statement":[{ "Sid":"DenyUploads", "Effect":"Deny", "Principal":"*", "Action":"s3:PutObject", "Resource":"arn:aws:s3:::demo/*", "Condition":{ "StringNotEquals":{ "s3:x-amz-server-side-encryption":"aws:kms" } } } ] } (Correct)
  • { "Version":"2012-10-17", "Id":"PutObj", "Statement":[{ "Sid":"DenyUploads", "Effect":"Deny", "Principal":"*", "Action":"s3:PutObject", "Resource":"arn:aws:s3:::demo/*", "Condition":{ "StringEquals":{ "s3:x-amz-server-side-encryption":"aws:kms" } } } ] }
  • { "Version":"2012-10-17", "Id":"PutObj", "Statement":[{ "Sid":"DenyUploads", "Effect":"Deny", "Principal":"*", "Action":"s3:PutObject", "Resource":"arn:aws:s3:::demo/*" } ] }
  • { "Version":"2012-10-17", "Id":"PutObj", "Statement":[{ "Sid":"DenyUploads", "Effect":"Deny", "Principal":"*", "Action":"s3:PutObjectEncrypted", "Resource":"arn:aws:s3:::demo/*" } ] }

Answer : { "Version":"2012-10-17", "Id":"PutObj", "Statement":[{ "Sid":"DenyUploads", "Effect":"Deny", "Principal":"*", "Action":"s3:PutObject", "Resource":"arn:aws:s3:::demo/*", "Condition":{ "StringNotEquals":{ "s3:x-amz-server-side-encryption":"aws:kms" } } } ] }

Explanation Answer – A The Deny statement with the condition "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" } rejects any PutObject request that does not ask for SSE-KMS encryption, so only encrypted uploads succeed. Options B, C and D are invalid because they either deny the encrypted uploads instead (B) or lack the encryption condition entirely (C and D). For more information on AWS KMS best practices, just browse to the below URL: https://d1.awsstatic.com/whitepapers/aws-kms-best-practices.pdf
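The policy's logic can be mirrored in a few lines; note that an IAM StringNotEquals condition also matches when the header is absent entirely, which is exactly why uploads without the header are denied. The function below is an illustrative sketch, not an IAM evaluator:

```python
def upload_denied(request_headers):
    """Mirror the bucket policy's Deny statement: the StringNotEquals
    condition matches whenever the SSE header is missing or is any
    value other than 'aws:kms', so such PutObject requests are
    rejected."""
    return request_headers.get("x-amz-server-side-encryption") != "aws:kms"
```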

A company's AWS account consists of approximately 300 IAM users. Now there is a mandate that an access change is required for 100 IAM users to have unlimited privileges to S3. As a system administrator, how can you implement this effectively so that there is no need to apply the policy at the individual user level?

Options are :

  • Create a new role and add each user to the IAM role
  • Use the IAM groups and add users, based upon their role, to different groups and apply the policy to group (Correct)
  • Create a policy and apply it to multiple users using a JSON script
  • Create an S3 bucket policy with unlimited access which includes each user's AWS account ID

Answer : Use the IAM groups and add users, based upon their role, to different groups and apply the policy to group

Explanation Answer – B Option A is incorrect since you don’t add users to an IAM role. Option C is incorrect since you don’t assign multiple users to a single policy with a script. Option D is incorrect since a bucket policy listing 100 individual users is not an ideal approach. An IAM group is used to collectively manage users who need the same set of permissions. By having groups, it becomes easier to manage permissions: if you change the permissions at the group level, the change affects all the users in that group. For more information on IAM groups, just browse to the below URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

You need to create a policy and apply it for just an individual user. How could you accomplish this in the right way?

Options are :

  • Add an AWS managed policy for the user
  • Add a service policy for the user
  • Add an IAM role for the user
  • Add an inline policy for the user (Correct)

Answer : Add an inline policy for the user

Explanation Answer – D Options A and B are incorrect since the requirement is a policy that applies strictly to one user, which is what an inline policy provides. Option C is invalid because you don’t assign an IAM role to a user for this purpose. The AWS documentation mentions the following: an inline policy is a policy that's embedded in a principal entity (a user, group, or role); that is, the policy is an inherent part of the principal entity. You can create a policy and embed it in a principal entity, either when you create the principal entity or later. For more information on IAM access and inline policies, just browse to the below URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html

Your company is planning on using bastion hosts for administering the servers in AWS. Which of the following is the right way to set up the bastion host from a security perspective?

Options are :

  • A Bastion host should be on a private subnet and never a public subnet due to security concerns
  • A Bastion host sits on the outside of an internal network and is used as a gateway into the private network and is considered the critical strong point of the network
  • A Bastion host is used to SSH into the internal network to access private resources without a VPN (Correct)
  • A Bastion host should maintain extremely tight security and monitoring as it is available to the public

Answer : A Bastion host is used to SSH into the internal network to access private resources without a VPN

Explanation Answer – C A bastion host is a special-purpose computer on a network specifically designed and configured to withstand attacks. The computer generally hosts a single application, for example a proxy server, and all other services are removed or limited to reduce the threat to the computer. In AWS, a bastion host is kept on a public subnet. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets. Options A and B are invalid because the bastion host needs to sit on a public subnet, and it is a hardened entry point rather than the critical strong point of the network. Option D is invalid because bastion hosts are not used for monitoring. For more information on bastion hosts, just browse to the below URL: https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html

Your company uses AWS to host its resources. They have the following requirements

1) Record all API calls and Transitions

2) Help in understanding what resources are there in the account

3) Facility to allow auditing credentials and logins 

Which services would satisfy the above requirements?

Options are :

  • AWS Inspector, CloudTrail, IAM Credential Reports
  • CloudTrail, IAM Credential Reports, AWS SNS
  • CloudTrail, AWS Config, IAM Credential Reports (Correct)
  • AWS SQS, IAM Credential Reports, CloudTrail

Answer : CloudTrail, AWS Config, IAM Credential Reports

Explanation Answer – C You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This history includes calls made with the AWS Management Console, AWS Command Line Interface, AWS SDKs, and other AWS services. Options A, B and D are invalid because Inspector, SNS and SQS do not address these requirements; you need CloudTrail, AWS Config and IAM Credential Reports together. For more information on CloudTrail, please visit the below URL: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting. For more information on the Config service, please visit the below URL: https://aws.amazon.com/config/

You can generate and download a credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices. You can get a credential report from the AWS Management Console, the AWS SDKs and Command Line Tools, or the IAM API. For more information on credential reports, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html
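The credential report is delivered as a CSV. A sketch of auditing it for console users without MFA follows; the columns shown are a trimmed subset of the real report, and the sample rows are invented:

```python
import csv
import io

# A trimmed, hypothetical extract of an IAM credential report (the
# real report is a CSV with many more columns, e.g. access key and
# certificate details).
REPORT = """\
user,password_enabled,mfa_active
root_account,not_supported,true
alice,true,true
bob,true,false
"""

def users_without_mfa(report_csv):
    """Flag console users (password enabled) whose MFA is not active."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [r["user"] for r in rows
            if r["password_enabled"] == "true" and r["mfa_active"] == "false"]
```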

Your CTO is very worried about the security of your AWS account. How best can you prevent hackers from completely hijacking your account?

Options are :

  • Use short but complex password on the root account and any administrators.
  • Use AWS IAM Geo-Lock and disallow anyone from logging in except for in your city.
  • Use MFA on all users and accounts, especially on the root account. (Correct)
  • Don’t write down or remember the root account password after creating the AWS account.

Answer : Use MFA on all users and accounts, especially on the root account.

Explanation Answer – C Multi-factor authentication adds one more layer of security to your AWS account; the Security Credentials dashboard itself prompts you to enable MFA on your root account. Option A is invalid because a short password is weaker, not stronger; you need a good password policy. Option B is invalid because there is no IAM Geo-Lock feature. Option D is invalid because this is not a recommended practice. For more information on MFA, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html
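Virtual MFA devices generate time-based one-time passwords. A compact TOTP sketch per RFC 6238 (HMAC-SHA1) is shown here purely to illustrate how the code is derived from the shared secret and the clock; AWS itself handles this inside the MFA device and its verification service:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Generate a TOTP code (RFC 6238, HMAC-SHA1), as virtual MFA
    apps do; 'secret' is the raw shared key bytes."""
    counter = int(for_time if for_time is not None else time.time()) // step
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The assertions below use the published RFC 6238 test secret and timestamps, truncated to the usual 6 digits.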

Your CTO thinks your AWS account was hacked. What is the only way to know for certain if there was unauthorized access and what they did, assuming your hackers are very sophisticated AWS engineers and doing everything they can to cover their tracks? 

Options are :

  • Use CloudTrail Log File Integrity Validation. (Correct)
  • Use AWS Config SNS Subscriptions and process events in real time.
  • Use CloudTrail backed up to AWS S3 and Glacier.
  • Use AWS Config Timeline forensics.

Answer : Use CloudTrail Log File Integrity Validation.

Explanation Answer – A The AWS documentation mentions the following: to determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation. This feature is built using industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location where CloudTrail delivered them. Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. The process also lets you know if a log file has been deleted or changed, or assert positively that no log files were delivered to your account during a given period of time. Options B, C and D are invalid because only log file integrity validation can prove the logs themselves were not tampered with. For more information on CloudTrail log file validation, please visit the below URL: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
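The core of integrity validation is recomputing the SHA-256 hash of each delivered log file and comparing it to the digest CloudTrail recorded at delivery time. A sketch of just that step; the real process, run via `aws cloudtrail validate-logs`, additionally verifies the RSA signature chain over the digest files:

```python
import hashlib

def sha256_hex(data):
    """Hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def log_file_unchanged(log_bytes, recorded_digest):
    """Recompute the log file's SHA-256 and compare it to the digest
    recorded when CloudTrail delivered the file; any post-delivery
    edit changes the hash and is detected."""
    return sha256_hex(log_bytes) == recorded_digest

original = b'{"Records": []}'
digest = sha256_hex(original)   # what the digest file would record
```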

Your development team is using access keys to develop an application that has access to S3 and DynamoDB. A new security policy has outlined that the credentials should not be older than 2 months , and should be rotated. How can you achieve this?

Options are :

  • Use the application to rotate the keys in every 2 months via the SDK
  • Use a script which will query the date the keys are created. If older than 2 months , delete them and recreate new keys (Correct)
  • Delete the user associated with the keys after every 2 months. Then recreate the user again.
  • Delete the IAM Role associated with the keys after every 2 months. Then recreate the IAM Role again.

Answer : Use a script which will query the date the keys are created. If older than 2 months , delete them and recreate new keys

Explanation Answer – B One can use the CLI command list-access-keys to get the access keys for a user. This command also returns the "CreateDate" of each key; if the CreateDate is older than 2 months, the key can be deleted and a new one created. list-access-keys returns information about the access key IDs associated with the specified IAM user; if there are none, the action returns an empty list. Option A is incorrect because key rotation is a maintenance task better handled by an administrative script than by the application itself. Option C is incorrect because you rotate the keys, not the users themselves. Option D is incorrect because you don’t use IAM roles for such a purpose. For more information on the CLI command, please refer to the below link: http://docs.aws.amazon.com/cli/latest/reference/iam/list-access-keys.html

You work at a company that makes use of AWS resources. One of the key security policies is to ensure that all data is encrypted both at rest and in transit. Which of the following is one of the right ways to implement this?

Options are :

  • Using S3 Server Side Encryption (SSE) to store the information (Correct)
  • SSL termination on the ELB
  • Enabling Proxy Protocol
  • Enabling sticky sessions on your load balancer

Answer : Using S3 Server Side Encryption (SSE) to store the information

Explanation Answer – A Using S3 Server Side Encryption ensures the data is encrypted at rest, and S3 requests can be made over HTTPS for encryption in transit. Option B is incorrect because terminating SSL at the ELB leaves the connection from the ELB to the back-end instances unencrypted, so part of the data's transit is not protected. Options C and D are incorrect because Proxy Protocol and sticky sessions do not provide any encryption. For more information on SSL listeners for your load balancer, please visit the below URL: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-https-load-balancers.html

There are currently multiple applications hosted in a VPC. During monitoring it has been noticed that multiple port scans are coming in from a specific IP address block. The internal security team has requested that all offending IP addresses be denied for the next 24 hours. Which of the following is the best method to quickly and temporarily deny access from the specified IP addresses?

Options are :

  • Create an AD policy to modify the Windows Firewall settings on all hosts in the VPC to deny access from the IP Address block.
  • Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP Address block. (Correct)
  • Add a rule to all of the VPC Security Groups to deny access from the IP Address block.
  • Modify the Windows Firewall settings on all AMI's that your organization uses in that VPC to deny access from the IP address block.

Answer : Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP Address block.

Explanation Answer – B A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets, and unlike security groups it supports explicit deny rules. Options A, C and D are incorrect because security groups cannot deny traffic, and managing Windows Firewall settings per host or per AMI is neither quick nor centrally reversible; network ACLs are the right tool. For more information on network ACLs, please visit the below URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
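NACL rules are evaluated in rule-number order with the first match winning, which is why a low-numbered Deny entry blocks the offending range while the existing Allow rule keeps serving everyone else. A sketch of that evaluation; the rule numbers and CIDRs below are illustrative:

```python
import ipaddress

def nacl_action(rules, src_ip):
    """Evaluate inbound NACL rules in rule-number order; the first
    matching rule wins, and an unmatched packet hits the implicit
    deny (the '*' rule)."""
    for number, cidr, action in sorted(rules):
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr):
            return action
    return "DENY"

rules = [
    (90, "198.51.100.0/24", "DENY"),   # temporary block of the scanners
    (100, "0.0.0.0/0", "ALLOW"),       # existing allow-all rule
]
```

Removing the rule-90 entry after 24 hours restores normal access without touching any host.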

Your company has defined privileged users for their AWS Account. These users are administrators for key resources defined in the company. There is now a mandate to enhance the security authentication for these users. How can this be accomplished?

Options are :

  • Enable MFA for these user accounts (Correct)
  • Enable versioning for these user accounts
  • Enable accidental deletion for these user accounts
  • Disable root access for the users

Answer : Enable MFA for these user accounts

Explanation Answer – A The AWS documentation mentions the following as a best practice for IAM users: for extra security, enable multi-factor authentication (MFA) for privileged IAM users (users who are allowed access to sensitive resources or APIs). With MFA, users have a device that generates a unique authentication code (a one-time password, or OTP). Users must provide both their normal credentials (like their user name and password) and the OTP. The MFA device can either be a special piece of hardware, or it can be a virtual device (for example, it can run in an app on a smartphone). Options B and C are invalid because versioning and accidental-deletion protection are not features of user accounts. Option D is invalid because IAM users do not have root access to disable. For more information on IAM best practices, please visit the below URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
