AWS Certified Security Specialty

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet that was created with default ACL settings. The IT Security department suspects that a DDoS attack is coming from a suspect IP address. How can you protect the subnets from this attack?

Options are :

  • Change the Inbound Security Groups to deny access from the suspect IP
  • Change the Outbound Security Groups to deny access from the suspect IP
  • Change the Inbound NACL to deny access from the suspect IP (Correct)
  • Change the Outbound NACL to deny access from the suspect IP (Incorrect)

Answer : Change the Inbound NACL to deny access from the suspect IP

Explanation Answer – C Options A and B are invalid because Security Groups only support allow rules, so they cannot be used to explicitly deny traffic from a specific IP address. NACLs support deny rules and can be used as an additional security layer for the subnet to block the traffic. Option D is invalid because the attack traffic is inbound, so changing the Inbound rules is sufficient. The AWS Documentation mentions the following: A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. For more information on Network Access Control Lists, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
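As an illustration only, the boto3 sketch below adds an inbound deny entry to a subnet's NACL; the NACL ID and the suspect IP address are placeholders, not values from the question.

import boto3

ec2 = boto3.client("ec2")

# Deny all inbound traffic from the suspect IP. The low rule number ensures
# this entry is evaluated before the default allow rules.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # placeholder NACL ID
    RuleNumber=90,
    Protocol="-1",                          # all protocols
    RuleAction="deny",
    Egress=False,                           # inbound rule
    CidrBlock="198.51.100.23/32",           # placeholder suspect IP
)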

You are designing a custom IAM policy that would allow users to list buckets in S3 only if they are MFA authenticated. Which of the following would best match this requirement?

Options are :

  • { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": "Resource": "arn:aws:s3:::*", "Condition": { "Bool": {"aws:MultiFactorAuthPresent": true} } } } (Correct)
  • { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": "Resource": "arn:aws:s3:::*", "Condition": { "Bool": {"aws:MultiFactorAuthPresent":false} } } }
  • { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": "Resource": "arn:aws:s3:::*", "Condition": { "aws:MultiFactorAuthPresent":false } } }
  • { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": "Resource": "arn:aws:s3:::*", "Condition": { "aws:MultiFactorAuthPresent":true } } }

Answer : { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": "arn:aws:s3:::*", "Condition": { "Bool": {"aws:MultiFactorAuthPresent": true} } } }

Explanation Answer – A The Condition clause can be used to ensure users can only work with resources if they are MFA authenticated. Options B and C are wrong since the aws:MultiFactorAuthPresent key should be set to true; only when the user has authenticated with MFA should access be allowed. Option D is invalid because the "Bool" operator is missing from the Condition clause. For more information on an example of such a policy, please visit the following URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_mfa-dates.html


You are hosting a website via static website hosting on an S3 bucket, http://demo.s3-website-us-east-1.amazonaws.com. Some of the web pages use JavaScript to access resources in another bucket which also has website hosting enabled. But when users access the web pages, they get a blocked JavaScript error. How can you rectify this?

Options are :

  • Enable CORS for the bucket (Correct)
  • Enable versioning for the bucket
  • Enable MFA for the bucket
  • Enable CRR for the bucket (Incorrect)

Answer : Enable CORS for the bucket

Explanation Answer – A Such a scenario is also given in the AWS Documentation. Option B is invalid because versioning only creates multiple versions of an object and can help recover from accidental deletion of objects. Option C is invalid because MFA is used as an extra measure of caution for deletion of objects. Option D is invalid because CRR is used for cross-region replication of objects. For more information on Cross-Origin Resource Sharing, please visit the following URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
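A minimal boto3 sketch of the kind of CORS configuration that would unblock the JavaScript calls; the bucket name is a placeholder, and only the website origin from the question is assumed.

import boto3

s3 = boto3.client("s3")

# Allow GET requests from the static website origin to the second bucket.
s3.put_bucket_cors(
    Bucket="assets-bucket",  # placeholder name of the second bucket
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://demo.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)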

You have a vendor that needs access to an AWS resource. You create an AWS user account. You want to restrict access to the resource using a policy for just that user over a brief period. Which of the following would be an ideal policy to use?

Options are :

  • An AWS Managed Policy
  • An Inline Policy (Correct)
  • A Bucket Policy
  • A bucket ACL (Incorrect)

Answer : An Inline Policy

Explanation Answer – B The AWS Documentation gives an example of such a case. Inline policies are useful if you want to maintain a strict one-to-one relationship between a policy and the principal entity that it's applied to. For example, you want to be sure that the permissions in a policy are not inadvertently assigned to a principal entity other than the one they're intended for. When you use an inline policy, the permissions in the policy cannot be inadvertently attached to the wrong principal entity. In addition, when you use the AWS Management Console to delete that principal entity, the policies embedded in the principal entity are deleted as well. That's because they are part of the principal entity. Option A is invalid because AWS Managed Policies are fine for a group of users, but for an individual user an inline policy is the better fit. Options C and D are invalid because they are specifically meant for controlling access to S3 buckets. For more information on policies, please visit the following URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html
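A hypothetical sketch, not from the source: an inline policy attached to the vendor's IAM user that also uses the aws:CurrentTime condition so the access expires after the brief period. The user name, bucket and end date are illustrative assumptions.

import json
import boto3

iam = boto3.client("iam")

# Inline policy: the vendor may read one bucket, and only until the end date.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::vendor-shared-bucket/*",
            "Condition": {"DateLessThan": {"aws:CurrentTime": "2019-12-31T23:59:59Z"}},
        }
    ],
}

iam.put_user_policy(
    UserName="vendor-user",               # placeholder user name
    PolicyName="VendorTemporaryAccess",
    PolicyDocument=json.dumps(policy),
)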

Your company has a requirement to monitor all root user activity. How can this best be achieved? Choose 2 answers from the options given below. Each answer forms part of the solution

Options are :

  • Create a Cloudwatch Events Rule (Correct)
  • Create a Cloudwatch Logs Rule
  • Use a Lambda function (Correct)
  • Use Cloudtrail API call (Incorrect)

Answer : Create a Cloudwatch Events Rule; Use a Lambda function

Explanation Answer – A and C The AWS blog referenced below describes such a solution. Option B is invalid because you need to create a Cloudwatch Events Rule and there is no such thing as a Cloudwatch Logs Rule. Option D is invalid because CloudTrail records API calls but cannot by itself send out notifications. For more information on this blog article, please visit the following URL: https://aws.amazon.com/blogs/mt/monitor-and-notify-on-aws-account-root-user-activity/
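A rough boto3 sketch of the approach described in the blog: an Events rule that matches root-user activity recorded by CloudTrail and forwards it to a notification Lambda. The rule name, account ID and function ARN are placeholders, and the Lambda would still need an invoke permission for the rule.

import json
import boto3

events = boto3.client("events")

# Match API calls and console sign-ins made by the root user.
pattern = {
    "detail-type": ["AWS API Call via CloudTrail", "AWS Console Sign In via CloudTrail"],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

events.put_rule(Name="RootActivityRule", EventPattern=json.dumps(pattern))

# Send matching events to a Lambda function that raises the notification.
events.put_targets(
    Rule="RootActivityRule",
    Targets=[{
        "Id": "RootActivityLambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:NotifyOnRootActivity",  # placeholder ARN
    }],
)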

A company wants to have a secure way of generating, storing and managing cryptographic keys, but they want to have exclusive access to the keys. Which of the following can be used for this purpose?

Options are :

  • Use KMS and the normal KMS encryption keys
  • Use KMS and use an external key material
  • Use S3 Server Side encryption
  • Use Cloud HSM (Correct)

Answer : Use Cloud HSM

Explanation Answer – D The AWS Documentation mentions the following The AWS CloudHSM service helps you meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud. AWS and AWS Marketplace partners offer a variety of solutions for protecting sensitive data within the AWS platform, but for some applications and data subject to contractual or regulatory mandates for managing cryptographic keys, additional protection may be necessary. CloudHSM complements existing data protection solutions and allows you to protect your encryption keys within HSMs that are designed and validated to government standards for secure key management. CloudHSM allows you to securely generate, store and manage cryptographic keys used for data encryption in a way that keys are accessible only by you. Options A, B and C are invalid because in all of these cases the keys are managed within AWS KMS. The question specifically asks for exclusive access to the keys, and this can be achieved with CloudHSM. For more information on CloudHSM, please visit the following URL: https://aws.amazon.com/cloudhsm/faqs/

A company is hosting a website that must be accessible to users for HTTPS traffic. Also port 22 should be open for administrative purposes. Which of the following security group configurations are the MOST secure but still functional to support these requirements? Choose 2 answers from the options given below

Options are :

  • Port 443 coming from 0.0.0.0/0 (Correct)
  • Port 443 coming from 10.0.0.0/16
  • Port 22 coming from 0.0.0.0/0
  • Port 22 coming from 10.0.0.0/16 (Correct)

Answer : Port 443 coming from 0.0.0.0/0; Port 22 coming from 10.0.0.0/16

Explanation Answer – A and D Since HTTPS traffic is required for all users on the Internet, port 443 should be open to all IP addresses. For port 22, traffic should be restricted to an internal subnet. Option B is invalid because it only allows traffic from a particular CIDR block and not from the internet. Option C is invalid because allowing port 22 from the internet is a security risk. For more information on AWS Security Groups, please visit the following URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
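A boto3 sketch of these two ingress rules; the security group ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# HTTPS open to the internet, SSH restricted to the internal 10.0.0.0/16 range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "10.0.0.0/16"}]},
    ],
)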

Your company has an EC2 Instance that is hosted in an AWS VPC. There is a requirement to ensure that log files from the EC2 Instance are stored accordingly. Access to the destination of the log files should also be limited. How can this be accomplished? Choose 2 answers from the options given below. Each answer forms part of the solution

Options are :

  • Stream the log files to a separate Cloudtrail trail
  • Stream the log files to a separate Cloudwatch Log group (Correct)
  • Create an IAM policy that gives the desired level of access to the Cloudtrail trail
  • Create an IAM policy that gives the desired level of access to the Cloudwatch Log group (Correct)

Answer : Stream the log files to a separate Cloudwatch Log group; Create an IAM policy that gives the desired level of access to the Cloudwatch Log group

Explanation Answer – B and D You can create a Log group and send all logs from the EC2 Instance to that group. You can then limit access to the Log group via an IAM policy. Option A is invalid because Cloudtrail is used to record API activity and not for storing log files. Option C is invalid because Cloudtrail is the wrong service for this requirement. For more information on access to Cloudwatch Logs, please visit the following URL: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/auth-and-access-control-cwl.html
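A minimal sketch of both parts, assuming hypothetical names and a placeholder account ID: a dedicated log group plus a customer managed IAM policy that only covers that log group.

import json
import boto3

logs = boto3.client("logs")
iam = boto3.client("iam")

# Dedicated log group that the CloudWatch agent on the instance writes to.
logs.create_log_group(logGroupName="/app/ec2/prod")  # hypothetical name

# Policy that limits read access to just this log group.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["logs:GetLogEvents", "logs:DescribeLogStreams"],
        "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/app/ec2/prod:*",
    }],
}

iam.create_policy(PolicyName="ReadProdAppLogs", PolicyDocument=json.dumps(policy))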

You have an EC2 Instance in a private subnet which needs to access the KMS service. Which of the following methods can help fulfil this requirement, keeping security in mind?

Options are :

  • Use a VPC endpoint (Correct)
  • Attach an Internet gateway to the subnet
  • Attach a VPN connection to the VPC
  • Use VPC Peering (Incorrect)

Answer : Use a VPC endpoint

Explanation Answer – A The AWS Documentation mentions the following You can connect directly to AWS KMS through a private endpoint in your VPC instead of connecting over the internet. When you use a VPC endpoint, communication between your VPC and AWS KMS is conducted entirely within the AWS network. Option B is invalid because this could open up threats from the internet. Option C is invalid because this is normally used for communication between on-premises environments and AWS. Option D is invalid because this is normally used for communication between VPCs. For more information on accessing KMS via an endpoint, please visit the following URL: https://docs.aws.amazon.com/kms/latest/developerguide/kms-vpc-endpoint.html
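A boto3 sketch of creating such an endpoint; the VPC, subnet and security group IDs, and the Region in the service name, are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Interface endpoint so instances in the private subnet can reach KMS
# without traversing the internet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                # placeholder VPC ID
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.kms",
    SubnetIds=["subnet-0123456789abcdef0"],       # placeholder private subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder endpoint security group
    PrivateDnsEnabled=True,
)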

You have a website that is sitting behind AWS Cloudfront. You need to protect the website against threats such as SQL injection and cross-site scripting attacks. Which of the following services can help in such a scenario?

Options are :

  • AWS Trusted Advisor
  • AWS WAF (Correct)
  • AWS Inspector
  • AWS Config (Incorrect)

Answer : AWS WAF

Explanation Answer – B The AWS Documentation mentions the following AWS WAF is a web application firewall that helps detect and block malicious web requests targeted at your web applications. AWS WAF allows you to create rules that can help protect against common web exploits like SQL injection and cross-site scripting. With AWS WAF you first identify the resource (either an Amazon CloudFront distribution or an Application Load Balancer) that you need to protect. Option A is invalid because this only gives advice on how you can improve the security of your AWS account, but it does not protect against the threats mentioned in the question. Option C is invalid because this can be used to scan EC2 Instances for vulnerabilities but not protect against the threats mentioned in the question. Option D is invalid because this can be used to check configuration changes but not protect against the threats mentioned in the question. For more information on AWS WAF, please visit the following URL: https://aws.amazon.com/waf/details/

Your company has a set of resources defined in the AWS Cloud. Their IT audit department has requested a list of all resources that have been defined across the account. How can this be achieved in the easiest manner?

Options are :

  • Create a powershell script using the AWS CLI. Query for all resources with the tag of production.
  • Create a bash shell script with the AWS CLI. Query for all resources in all regions. Store the results in an S3 bucket.
  • Use Cloud Trail to get the list of all resources
  • Use AWS Config to get the list of all resources (Correct)

Answer : Use AWS Config to get the list of all resources

Explanation Answer – D The most feasible option is to use AWS Config. When you turn on AWS Config, you get an inventory of the resources defined in your AWS Account on the AWS Config resources dashboard. Option A is incorrect because this would only give the list of production-tagged resources, not all resources. Option B is partially correct, but it would just add more maintenance overhead. Option C is incorrect because this can be used to log API activity but not give an account of all resources. For more information on AWS Config, please visit the below URL https://docs.aws.amazon.com/config/latest/developerguide/how-does-config-work.html
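Once AWS Config is recording, the inventory can also be queried programmatically; a small boto3 sketch, using EC2 instances as an example resource type.

import boto3

config = boto3.client("config")

# List discovered resources of one type; repeat per resource type of interest.
response = config.list_discovered_resources(resourceType="AWS::EC2::Instance")
for item in response["resourceIdentifiers"]:
    print(item["resourceType"], item["resourceId"])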


A Lambda function reads metadata from an S3 object and stores the metadata in a DynamoDB table. The function is triggered whenever an object is stored within the S3 bucket. How should the Lambda function be given access to the DynamoDB table?

Options are :

  • Create a VPC endpoint for DynamoDB within a VPC. Configure the Lambda function to access resources in the VPC.
  • Create a resource policy that grants the Lambda function permissions to write to the DynamoDB table. Attach the policy to the DynamoDB table.
  • Create an IAM user with permissions to write to the DynamoDB table. Store an access key for that user in the Lambda environment variables.
  • Create an IAM service role with permissions to write to the DynamoDB table. Associate that role with the Lambda function. (Correct)

Answer : Create an IAM service role with permissions to write to the DynamoDB table. Associate that role with the Lambda function.

Explanation Answer – D The ideal way is to create an IAM role which has the required permissions and then associate it with the Lambda function. The AWS Documentation additionally mentions the following Each Lambda function has an IAM role (execution role) associated with it. You specify the IAM role when you create your Lambda function. Permissions you grant to this role determine what AWS Lambda can do when it assumes the role. There are two types of permissions that you grant to the IAM role: • If your Lambda function code accesses other AWS resources, such as to read an object from an S3 bucket or write logs to CloudWatch Logs, you need to grant permissions for relevant Amazon S3 and CloudWatch actions to the role. • If the event source is stream-based (Amazon Kinesis Data Streams and DynamoDB streams), AWS Lambda polls these streams on your behalf. AWS Lambda needs permissions to poll the stream and read new records on the stream so you need to grant the relevant permissions to this role. Option A is invalid because a VPC endpoint lets instances in a private subnet reach DynamoDB but does not grant the Lambda function any permissions. Option B is invalid because resource-based policies exist for services such as S3 and KMS, but not for DynamoDB tables. Option C is invalid because IAM roles should be used rather than IAM users with stored access keys. For more information on the Lambda permission model, please visit the below URL https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
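A minimal sketch of creating such an execution role; the role name, table ARN and account ID are illustrative assumptions. The resulting role ARN would then be supplied as the function's execution role.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="metadata-writer-role",  # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust),
)

# Permissions to write items to the target DynamoDB table.
permissions = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/ObjectMetadata",  # placeholder table ARN
    }],
}

iam.put_role_policy(
    RoleName="metadata-writer-role",
    PolicyName="WriteObjectMetadata",
    PolicyDocument=json.dumps(permissions),
)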

Your company has defined privileged users for their AWS Account. These users are administrators for key resources defined in the company. There is now a mandate to enhance the security authentication for these users. How can this be accomplished?

Options are :

  • Enable MFA for these user accounts (Correct)
  • Enable versioning for these user accounts
  • Enable accidental deletion for these user accounts
  • Disable root access for the users (Incorrect)

Answer : Enable MFA for these user accounts

Explanation Answer – A The AWS Documentation mentions the following as a best practice for IAM users For extra security, enable multi-factor authentication (MFA) for privileged IAM users (users who are allowed access to sensitive resources or APIs). With MFA, users have a device that generates a unique authentication code (a one-time password, or OTP). Users must provide both their normal credentials (like their user name and password) and the OTP. The MFA device can either be a special piece of hardware, or it can be a virtual device (for example, it can run in an app on a smartphone). Options B, C and D are invalid because no such security options are available in AWS. For more information on IAM best practices, please visit the below URL https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

An application running on EC2 instances must use a username and password to access a database. The developer has stored those secrets in the SSM Parameter Store with type SecureString using the default KMS CMK. 

Which combination of configuration steps will allow the application to access the secrets via the API? Select 2 answers from the options below

Options are :

  • Add the EC2 instance role as a trusted service to the SSM service role.
  • Add permission to use the KMS key to decrypt to the SSM service role.
  • Add permission to read the SSM parameter to the EC2 instance role. (Correct)
  • Add permission to use the KMS key to decrypt to the EC2 instance role (Correct)
  • Add the SSM service role as a trusted service to the EC2 instance role.

Answer : Add permission to read the SSM parameter to the EC2 instance role; Add permission to use the KMS key to decrypt to the EC2 instance role

Explanation Answer – C and D The example policy below from the AWS Documentation needs to be attached to the EC2 instance role in order to read a SecureString parameter: permissions are required for the GetParameter API and for the kms:Decrypt call that decrypts the secret. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ssm:GetParameter*" ], "Resource": "arn:aws:ssm:us-west-2:111122223333:/parameter/ReadableParameters/*" }, { "Effect": "Allow", "Action": [ "kms:Decrypt" ], "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" } ] } Option A is invalid because the EC2 instance role does not need to be trusted by the SSM service role; the instance role itself needs the permissions. Option B is invalid because the decrypt permission must be granted to the EC2 instance role, not to the SSM service role. Option E is invalid because the SSM service role does not need to be added as a trusted service to the EC2 instance role. For more information on the parameter store, please visit the below URL https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html
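With those permissions on the instance role, the application can read the secret at runtime; a short boto3 sketch with a hypothetical parameter name.

import boto3

ssm = boto3.client("ssm")

# Decrypt the SecureString transparently via the instance role's kms:Decrypt permission.
response = ssm.get_parameter(
    Name="/ReadableParameters/db-password",  # hypothetical parameter name
    WithDecryption=True,
)
db_password = response["Parameter"]["Value"]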

When you enable automatic key rotation for an existing customer managed CMK whose backing key material is managed by AWS KMS, after how long is the key rotated?

Options are :

  • After 30 days
  • After 128 days
  • After 365 days (Correct)
  • After 3 years (Incorrect)

Answer : After 365 days

Explanation Answer – C The AWS Documentation states the following Automatic key rotation is disabled by default on customer managed CMKs. When you enable (or re-enable) key rotation, AWS KMS automatically rotates the CMK 365 days after the enable date and every 365 days thereafter. Options A, B and D are invalid because the rotation period is 365 days. For more information on key rotation please visit the below URL https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
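Enabling and verifying rotation takes two calls; a boto3 sketch with a placeholder key ID.

import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Turn on annual rotation and confirm the setting.
kms.enable_key_rotation(KeyId=key_id)
status = kms.get_key_rotation_status(KeyId=key_id)
print(status["KeyRotationEnabled"])  # True once rotation is enabled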

You have a two-tier application hosted in AWS. It consists of a web server and a database server (SQL Server) hosted on separate EC2 Instances. You are devising the security groups for these EC2 Instances. The web tier needs to be accessed by users across the Internet. You have created a web security group (wg-123) and a database security group (db-345). Which combination of the following security group rules will allow the application to be secure and functional? Choose 2 answers from the options given below.

Options are :

  • wg-123 - Allow ports 80 and 443 from 0.0.0.0/0 (Correct)
  • db-345 - Allow port 1433 from wg-123 (Correct)
  • wg-123 - Allow port 1433 from wg-123
  • db-345 - Allow ports 1433 from 0.0.0.0/0 (Incorrect)

Answer : wg-123 - Allow ports 80 and 443 from 0.0.0.0/0; db-345 - Allow port 1433 from wg-123

Explanation Answer – A and B The web security group should allow access on ports 80 and 443 for HTTP and HTTPS traffic from all users on the internet. The database security group should only allow access on port 1433 from the web security group. Option C is invalid because this is not a valid configuration. Option D is invalid because the database should not be exposed to the internet. For more information on Security Groups please visit the below URL https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
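The database rule references the web security group rather than a CIDR range; a boto3 sketch, with placeholder security group IDs standing in for db-345 and wg-123.

import boto3

ec2 = boto3.client("ec2")

# db-345 accepts SQL Server traffic (TCP 1433) only from members of wg-123.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaaaaaaaaaaaaa1",  # placeholder ID for db-345
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 1433,
        "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": "sg-0bbbbbbbbbbbbbbb2"}],  # placeholder ID for wg-123
    }],
)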

You are devising a policy to allow users to access objects in a bucket called appbucket.

You define the below custom bucket policy

{ "ID": "Policy1502987489630",

"Version": "2012-10-17",

"Statement": [

{

"Sid": "Stmt1502987487640",

"Action": [

"s3:GetObject",

"s3:GetObjectVersion"

],

"Effect": "Allow",

"Resource": "arn:aws:s3:::appbucket",

"Principal": "*"

}

]

}

But when you try to apply the policy you get the following error:

"Action does not apply to any resource(s) in statement." What should be done to rectify the error?

Options are :

  • Change the IAM permissions by applying PutBucketPolicy permissions.
  • Verify that the policy has the same name as the bucket name. If not, make it the same.
  • Change the Resource section to "arn:aws:s3:::appbucket/*". (Correct)
  • Create the bucket "appbucket" and then apply the policy. (Incorrect)

Answer : Change the Resource section to "arn:aws:s3:::appbucket/*".

Explanation Answer – C When you define access to objects in a bucket, you need to specify which objects in the bucket the permission applies to. In this case, appending /* to the bucket ARN assigns the permission to all objects in the bucket. Option A is invalid because the error is not caused by missing IAM permissions; it is caused by the Resource not matching the object-level actions. Option B is invalid because it is not necessary that the policy has the same name as the bucket. Option D is invalid because the bucket must already exist for the policy to be applied to it in the first place. For more information on bucket policies please visit the below URL https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
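A boto3 sketch that applies the corrected statement from the question, with the Resource changed to cover the objects in the bucket.

import json
import boto3

s3 = boto3.client("s3")

# Same statement as in the question, with the Resource corrected to cover objects.
policy = {
    "ID": "Policy1502987489630",
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Stmt1502987487640",
        "Action": ["s3:GetObject", "s3:GetObjectVersion"],
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::appbucket/*",
        "Principal": "*",
    }],
}

s3.put_bucket_policy(Bucket="appbucket", Policy=json.dumps(policy))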

A company wants to have an Intrusion detection system available for their VPC in AWS. They want to have complete control over the system. Which of the following would be ideal to implement?

Options are :

  • Use AWS WAF to catch all intrusions occurring on the systems in the VPC
  • Use a custom solution available in the AWS Marketplace (Correct)
  • Use VPC Flow logs to detect the issues and flag them accordingly.
  • Use AWS Cloudwatch to monitor all traffic (Incorrect)

Answer : Use a custom solution available in the AWS Marketplace

Explanation Answer – B Sometimes companies want custom solutions in place for monitoring intrusions into their systems. In such a case, you can look at the AWS Marketplace for third-party IDS/IPS solutions over which you have complete control. Options A, C and D are all invalid because they cannot be used to conduct intrusion detection or prevention. For more information on using custom security solutions please visit the below URL https://d1.awsstatic.com/Marketplace/security/AWSMP_Security_Solution%20Overview.pdf

Your IT Security department has mandated that all data on EBS volumes created for underlying EC2 Instances needs to be encrypted. Which of the following can help achieve this?

Options are :

  • AWS KMS API (Correct)
  • AWS Certificate Manager
  • API Gateway with STS
  • IAM Access Key (Incorrect)

Answer : AWS KMS API

Explanation Answer – A The AWS Documentation mentions the following on AWS KMS AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others to make it simple to encrypt your data with encryption keys that you manage. Option B is incorrect because AWS Certificate Manager is used to provision SSL/TLS certificates that protect traffic in transit, not data at rest. Option C is incorrect because API Gateway with STS is used for issuing temporary credentials for traffic in transit, not for encrypting EBS volumes. Option D is incorrect because IAM access keys are used for programmatic access to AWS APIs, not for encryption. For more information on AWS KMS, please visit the following URL: https://docs.aws.amazon.com/kms/latest/developerguide/overview.html

You have an S3 bucket hosted in AWS which is used to host promotional videos that you upload. You need to provide users access for a limited duration of time. How can this be achieved?

Options are :

  • Use versioning and enable a timestamp for each version
  • Use Pre-signed URLs (Correct)
  • Use IAM Roles with a timestamp to limit the access
  • Use IAM policies with a timestamp to limit the access (Incorrect)

Answer : Use Pre-signed URLs

Explanation Answer – B The AWS Documentation mentions the following All objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects. Option A is invalid because versioning is used to keep multiple versions of an object and protect against accidental deletion. Option C is invalid because roles do not support timestamps for limiting access. Option D is invalid because IAM policies are not the right way to grant these users time-limited access. For more information on pre-signed URLs, please visit the URL https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
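Generating a pre-signed URL takes a single SDK call; a boto3 sketch with placeholder bucket and object names.

import boto3

s3 = boto3.client("s3")

# URL that grants download access to one video for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "promo-videos-bucket", "Key": "launch-video.mp4"},  # placeholder names
    ExpiresIn=3600,  # seconds
)
print(url)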

Your company has mandated that all calls to the AWS KMS service be recorded. How can this be achieved?

Options are :

  • Enable logging on the KMS service
  • Enable a trail in Cloudtrail (Correct)
  • Enable Cloudwatch logs
  • Use Cloudwatch metrics (Incorrect)

Answer : Enable a trail in Cloudtrail

Explanation Answer – B The AWS Documentation states the following AWS KMS is integrated with CloudTrail, a service that captures API calls made by or on behalf of AWS KMS in your AWS account and delivers the log files to an Amazon S3 bucket that you specify. CloudTrail captures API calls from the AWS KMS console or from the AWS KMS API. Using the information collected by CloudTrail, you can determine what request was made, the source IP address from which the request was made, who made the request, when it was made, and so on. Option A is invalid because logging cannot be enabled within the KMS service itself. Options C and D are invalid because Cloudwatch cannot be used to record API calls. For more information on logging using Cloudtrail please visit the below URL https://docs.aws.amazon.com/kms/latest/developerguide/logging-using-cloudtrail.html

You want to get a list of vulnerabilities for an EC2 Instance as per the guidelines set by the Center for Internet Security (CIS). How can you go about doing this?

Options are :

  • Enable AWS Guard Duty for the Instance
  • Use AWS Trusted Advisor
  • Use AWS Inspector (Correct)
  • Use AWS Macie (Incorrect)

Answer : Use AWS Inspector

Explanation Answer – C The AWS Inspector service can inspect EC2 Instances based on specific rules packages. One of the rules packages is based on the benchmarks set by the Center for Internet Security. Option A is invalid because GuardDuty can be used to detect threats against an instance but does not give a list of vulnerabilities. Options B and D are invalid because these services cannot give a list of vulnerabilities. For more information on the guidelines, please visit the below URL https://docs.aws.amazon.com/inspector/latest/userguide/inspector_cis.html

You have an instance set up in a test environment in AWS. You installed the required application and then promoted the server to a production environment. Your IT Security team has advised that there may be traffic flowing in from an unknown IP address to port 22. How can this be mitigated immediately?

Options are :

  • Shutdown the instance
  • Remove the rule for incoming traffic on port 22 for the Security Group (Correct)
  • Change the AMI for the instance
  • Change the Instance type for the Instance (Incorrect)

Answer : Remove the rule for incoming traffic on port 22 for the Security Group

Explanation Answer – B In the test environment, the security group might have been opened to all IP addresses for testing purposes. Always ensure that this rule is removed once testing is completed. Options A, C and D are all invalid because they would affect the application running on the server. The easiest way is simply to remove the rule for access on port 22. For more information on authorizing access to an instance, please visit the below URL https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
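Revoking the rule is a single call; a boto3 sketch assuming the leftover rule was port 22 open to 0.0.0.0/0 and a placeholder security group ID.

import boto3

ec2 = boto3.client("ec2")

# Remove the overly broad SSH rule that was left over from testing.
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder production security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)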

Your company has created a number of EC2 Instances over a period of 6 months. They want to know if any of the security groups allow unrestricted access to a resource. What is the best option to accomplish this requirement?

Options are :

  • Use AWS Inspector to inspect all the security Groups
  • Use the AWS Trusted Advisor to see which security groups have compromised access. (Correct)
  • Use AWS Config to see which security groups have compromised access.
  • Use the AWS CLI to query the security groups and then filter for the rules which have unrestricted access (Incorrect)

Answer : Use the AWS Trusted Advisor to see which security groups have compromised access.

Explanation Answer – B The AWS Trusted Advisor can check security groups for rules that allow unrestricted access to a resource. Unrestricted access increases opportunities for malicious activity (hacking, denial-of-service attacks, loss of data). If you go to AWS Trusted Advisor, you can see the details under its security checks. Option A is invalid because AWS Inspector is used to detect security vulnerabilities in instances and not in security groups. Option C is invalid because AWS Config can be used to detect changes in security groups but does not directly show you security groups that allow unrestricted access. Option D is partially valid but would just be a maintenance overhead. For more information on the AWS Trusted Advisor, please visit the below URL https://aws.amazon.com/premiumsupport/trustedadvisor/best-practices/

A company is using CloudTrail to log all AWS API activity for all regions in all of its accounts. The CISO has asked that additional steps be taken to protect the integrity of the log files. What combination of steps will protect the log files from intentional or unintentional alteration? Choose 2 answers from the options given below

Options are :

  • Create an S3 bucket in a dedicated log account and grant the other accounts write only access. Deliver all log files from every account to this S3 bucket. (Correct)
  • Write a Lambda function that queries the Trusted Advisor Cloud Trail checks. Run the function every 10 minutes.
  • Enable Cloud Trail log file integrity validation (Correct)
  • Use Systems Manager Configuration Compliance to continually monitor the access policies of S3 buckets containing Cloud Trail logs. (Incorrect)
  • Create a Security Group that blocks all traffic except calls from the CloudTrail service. Associate the security group with all the Cloud Trail destination S3 buckets.

Answer : Create an S3 bucket in a dedicated log account and grant the other accounts write only access. Deliver all log files from every account to this S3 bucket; Enable Cloud Trail log file integrity validation

Explanation Answer – A and C The AWS Documentation mentions the following To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation. This feature is built using industry standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete or forge CloudTrail log files without detection. Option B is invalid because querying Trusted Advisor checks does not protect the integrity of the log files. Option D is invalid because Systems Manager cannot be used for this purpose. Option E is invalid because Security Groups cannot be associated with S3 buckets. For more information on Cloudtrail log file validation, please visit the below URL https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html For more information on delivering Cloudtrail logs from multiple accounts, please visit the below URL https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html
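A boto3 sketch of the integrity validation part, creating a multi-Region trail with log file validation enabled; the trail and bucket names are placeholders, and the destination bucket in the dedicated log account would still need a bucket policy allowing CloudTrail to write to it.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Multi-region trail delivering to the central logging bucket,
# with log file integrity validation turned on.
cloudtrail.create_trail(
    Name="org-security-trail",               # placeholder trail name
    S3BucketName="central-cloudtrail-logs",  # bucket owned by the dedicated log account
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-security-trail")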

You have just received an email from AWS Support stating that your AWS account might have been compromised. Which of the following steps would you look to carry out immediately? Choose 3 answers from the options below.

Options are :

  • Change the root account password. (Correct)
  • Rotate all IAM access keys (Correct)
  • Keep all resources running to avoid disruption
  • Change the password for all IAM users. (Correct)

Answer : Change the root account password; Rotate all IAM access keys; Change the password for all IAM users.

Explanation Answer – A, B and D One of the articles from AWS mentions what should be done in such a scenario. If you suspect that your account has been compromised, or if you have received a notification from AWS that the account has been compromised, perform the following tasks:
• Change your AWS root account password and the passwords of any IAM users.
• Delete or rotate all root and AWS Identity and Access Management (IAM) access keys.
• Delete any resources on your account you didn’t create, especially running EC2 instances, EC2 spot bids, or IAM users.
• Respond to any notifications you received from AWS Support through the AWS Support Center.
Option C is invalid because there could be compromised instances or resources running in your environment; they should be shut down or stopped immediately. For more information on the article, please visit the below URL https://aws.amazon.com/premiumsupport/knowledge-center/potential-account-compromise/

Your IT Security team has advised carrying out a penetration test on the resources in the company’s AWS account as part of its capability to analyze the security of the infrastructure. What should be done first in this regard?

Options are :

  • Turn on Cloud trail and carry out the penetration test
  • Turn on VPC Flow Logs and carry out the penetration test
  • Submit a request to AWS Support (Correct)
  • Use a custom AWS Marketplace solution for conducting the penetration test (Incorrect)

Answer : Submit a request to AWS Support

Explanation Answer – C This requirement is covered in the AWS Documentation. Options A, B and D are all invalid because the first step is to get prior authorization from AWS for penetration tests. For more information on penetration testing, please visit the below URL https://aws.amazon.com/security/penetration-testing/

Your company is planning on hosting an internal network in AWS. They want machines in the VPC to authenticate using private certificates, and they want to minimize the work and maintenance involved in working with certificates. What is the ideal way to fulfill this requirement?

Options are :

  • Consider using Windows Server 2016 Certificate Manager
  • Consider using AWS Certificate Manager (Correct)
  • Consider using AWS Access keys to generate the certificates
  • Consider using AWS Trusted Advisor for managing the certificates (Incorrect)

Answer : Consider using AWS Certificate Manager

Explanation Answer – B The AWS Documentation mentions the following ACM is tightly linked with AWS Certificate Manager Private Certificate Authority. You can use ACM PCA to create a private certificate authority (CA) and then use ACM to issue private certificates. These are SSL/TLS X.509 certificates that identify users, computers, applications, services, servers, and other devices internally. Private certificates cannot be publicly trusted. Option A is partially correct: Windows Server 2016 Certificate Manager could be used, but since there is a requirement to minimize the work and maintenance, AWS Certificate Manager should be used. Options C and D are invalid because these cannot be used for managing certificates. For more information on ACM, please visit the below URL https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html

You have enabled Cloudtrail logs for your company’s AWS account. In addition, the IT Security department has mentioned that the logs need to be encrypted. How can this be achieved?

Options are :

  • Enable SSL certificates for the Cloudtrail logs
  • There is no need to do anything since the logs will already be encrypted (Correct)
  • Enable Server side encryption for the trail
  • Enable Server side encryption for the destination S3 bucket (Incorrect)

Answer : There is no need to do anything since the logs will already be encrypted

Explanation Answer – B The AWS Documentation mentions the following By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an AWS Key Management Service (AWS KMS) key. You can store your log files in your bucket for as long as you want. You can also define Amazon S3 lifecycle rules to archive or delete log files automatically. If you want notifications about log file delivery and validation, you can set up Amazon SNS notifications. Options A, C and D are not required since the log files are already encrypted by default. For more information on how Cloudtrail works, please visit the following URL https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html

You have just recently set up a web and database tier in a VPC and hosted the application. When testing the application, you are not able to reach the home page for the app. You have verified the security groups. What can help you diagnose the issue?

Options are :

  • Use the AWS Trusted Advisor to see what can be done.
  • Use VPC Flow logs to diagnose the traffic (Correct)
  • Use AWS WAF to analyze the traffic
  • Use AWS Guard Duty to analyze the traffic (Incorrect)

Answer : Use VPC Flow logs to diagnose the traffic

Explanation Answer – B Option A is invalid because this can be used to check for security issues in your account, but not to verify why you cannot reach the home page of your application. Option C is invalid because this is used to protect your app against application layer attacks, but not to verify why you cannot reach the home page of your application. Option D is invalid because this is used to protect your instance against attacks, but not to verify why you cannot reach the home page of your application. The AWS Documentation mentions the following VPC Flow Logs capture network flow information for a VPC, subnet, or network interface and stores it in Amazon CloudWatch Logs. Flow log data can help customers troubleshoot network issues; for example, to diagnose why specific traffic is not reaching an instance, which might be a result of overly restrictive security group rules. Customers can also use flow logs as a security tool to monitor the traffic that reaches their instances, to profile network traffic, and to look for abnormal traffic behaviours. For more information on AWS Security, please visit the following URL https://aws.amazon.com/answers/networking/vpc-security-capabilities/
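A boto3 sketch of enabling flow logs for the VPC and sending them to CloudWatch Logs; the VPC ID, log group name and delivery role ARN are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Capture all traffic for the VPC and deliver it to CloudWatch Logs.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],   # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",            # hypothetical log group
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",  # placeholder role
)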

A security team is creating a response plan in the event an employee executes unauthorized actions on AWS infrastructure. They want to include steps to determine if the employee’s IAM permissions changed as part of the incident. What steps should the team document in the plan?

Options are :

  • Use AWS Config to examine the employee’s IAM permissions prior to the incident and compare them to the employee’s current IAM permissions. (Correct)
  • Use Macie to examine the employee’s IAM permissions prior to the incident and compare them to the employee’s current IAM permissions.
  • Use CloudTrail to examine the employee’s IAM permissions prior to the incident and compare them to the employee’s current IAM permissions. (Incorrect)
  • Use Trusted Advisor to examine the employee’s IAM permissions prior to the incident and compare them to the employee’s current IAM permissions.

Answer : Use AWS Config to examine the employee’s IAM permissions prior to the incident and compare them to the employee’s current IAM permissions.

Explanation Answer – A You can use the AWS Config configuration history to see how a particular configuration item, such as an IAM user, changed over time. Options B, C and D are all invalid because these services cannot be used to see the history of a particular configuration item; this can only be accomplished by AWS Config. For more information on tracking changes in AWS Config, please visit the below URL https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/TrackingChanges.html

A security team must present a daily briefing to the CISO that includes a report of which of the company’s thousands of EC2 instances and on-premises servers are missing the latest security patches. All instances/servers must be brought into compliance within 24 hours so they do not show up on the next day’s report.  

How can the security team fulfill these requirements?

Options are :

  • Use Amazon QuickSight and Cloud Trail to generate the report of out of compliance instances/servers. Redeploy all out of compliance instances/servers using an AMI with the latest patches.
  • Use Systems Manager Patch Manager to generate the report of out of compliance instances/servers. Use Systems Manager Patch Manager to install the missing patches. (Correct)
  • Use Systems Manager Patch Manager to generate the report of out of compliance instances/servers. Redeploy all out of compliance instances/servers using an AMI with the latest patches. (Incorrect)
  • Use Trusted Advisor to generate the report of out of compliance instances/servers. Use Systems Manager Patch Manager to install the missing patches.

Answer : Use Systems Manager Patch Manager to generate the report of out of compliance instances/servers. Use Systems Manager Patch Manager to install the missing patches.

Explanation Answer – B Use Systems Manager Patch Manager to generate the report and also install the missing patches. The AWS Documentation mentions the following AWS Systems Manager Patch Manager automates the process of patching managed instances with security-related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Amazon Linux. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches. Option A is invalid because Amazon QuickSight and CloudTrail cannot be used to generate the list of servers that don’t meet compliance needs. Option C is wrong because redeploying instances from new AMIs would impact the applications hosted on these servers. Option D is invalid because Trusted Advisor cannot be used to generate the list of servers that don’t meet compliance needs. For more information on the AWS Patch Manager, please visit the below URL https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
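The daily report can be pulled programmatically from Patch Manager's compliance data; a boto3 sketch with placeholder instance IDs.

import boto3

ssm = boto3.client("ssm")

# Summarize missing patches per managed instance for the daily briefing.
response = ssm.describe_instance_patch_states(
    InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"],  # placeholder instance IDs
)
for state in response["InstancePatchStates"]:
    print(state["InstanceId"], "missing patches:", state["MissingCount"])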

Your development team has started using AWS resources for development purposes. The AWS account has just been created. Your IT Security team is worried about possible leakage of AWS access keys. What is the first measure that should be taken to protect the AWS account?

Options are :

  • Delete the AWS keys for the root account (Correct)
  • Create IAM Groups
  • Create IAM Roles
  • Restrict access using IAM policies (Incorrect)

Answer : Delete the AWS keys for the root account

Explanation Answer – A The first measure that should be taken is to delete the access keys for the root account. When you log in to your account and go to the security status section of the IAM dashboard, this is the first recommendation shown. Options B and C are wrong because creating IAM groups and roles will not reduce the impact of leaked root access keys. Option D is wrong because the first key step is to protect the access keys for the root account. For more information on best practices for access keys, please visit the below URL https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html

Which of the following is not a best practice for carrying out a security audit?

Options are :

  • Conduct an audit on a yearly basis (Correct)
  • Conduct an audit if application instances have been added to your account
  • Conduct an audit if you ever suspect that an unauthorized person might have accessed your account
  • Whenever there are changes in your organization (Incorrect)

Answer : Conduct an audit on a yearly basis

Explanation Answer – A A year is generally too long a gap between security audits. The AWS Documentation mentions the following You should audit your security configuration in the following situations:
• On a periodic basis.
• If there are changes in your organization, such as people leaving.
• If you have stopped using one or more individual AWS services. This is important for removing permissions that users in your account no longer need.
• If you've added or removed software in your accounts, such as applications on Amazon EC2 instances, AWS OpsWorks stacks, AWS CloudFormation templates, etc.
• If you ever suspect that an unauthorized person might have accessed your account.
Options B, C and D are all recommended occasions for conducting audits. For more information on the Security Audit guidelines, please visit the below URL https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html

Which of the following is used as a secure way to log into an EC2 Linux Instance?

Options are :

  • IAM User name and password
  • Key pairs (Correct)
  • AWS Access keys
  • AWS SDK keys (Incorrect)

Answer : Key pairs

Explanation Answer – B The AWS Documentation mentions the following Key pairs consist of a public key and a private key. You use the private key to create a digital signature, and then AWS uses the corresponding public key to validate the signature. Key pairs are used only for Amazon EC2 and Amazon CloudFront. Options A, C and D are all wrong because these are not used to log into EC2 Linux Instances. For more information on AWS Security credentials, please visit the below URL https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html
