Mock : AWS Certified Security Specialty

Your company hosts a large fleet of EC2 Instances in AWS. There are strict security rules governing the EC2 Instances. During a potential security breach, you need to ensure quick investigation of the underlying EC2 Instance. Which of the following services can help you quickly provision a test environment in which to investigate the breached instance?

Options are :

  • AWS CloudWatch
  • AWS CloudFormation (Correct)
  • AWS CloudTrail
  • AWS Config

Answer : AWS CloudFormation

Explanation Answer – B. The AWS security best practices mention the following: unique to AWS, security practitioners can use CloudFormation to quickly create a new, trusted environment in which to conduct deeper investigation. The CloudFormation template can pre-configure instances in an isolated environment that contains all the necessary tools forensic teams need to determine the cause of the incident. This cuts down on the time it takes to gather necessary tools, isolates systems under examination, and ensures that the team is operating in a clean room. Option A is incorrect since CloudWatch is a monitoring and logging service and cannot be used to provision a test environment. Option C is incorrect since CloudTrail is an API logging service and cannot be used to provision a test environment. Option D is incorrect since AWS Config is a configuration service and cannot be used to provision a test environment. For more information on AWS security best practices, please refer to the URL below: https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf
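
As a rough sketch of this approach, the forensic environment could be stood up on demand from a pre-built template via the CloudFormation API. The template URL, stack name, and parameter below are hypothetical placeholders:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Provision an isolated investigation environment from a stored template.
response = cloudformation.create_stack(
    StackName="forensics-cleanroom",
    TemplateURL="https://s3.amazonaws.com/my-security-templates/forensics-vpc.yaml",
    Parameters=[
        {"ParameterKey": "SuspectInstanceId", "ParameterValue": "i-0abcd1234efgh5678"},
    ],
    EnableTerminationProtection=True,
)
print(response["StackId"])
```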

Your company has a set of EBS volumes defined in AWS. The security mandate is that all EBS volumes must be encrypted. What can be done to notify the IT admin staff if there are any unencrypted volumes in the account?

Options are :

  • Use AWS Inspector to inspect all the EBS volumes
  • Use AWS Config to check for unencrypted EBS volumes (Correct)
  • Use AWS GuardDuty to check for the unencrypted EBS volumes
  • Use AWS Lambda to check for the unencrypted EBS volumes

Answer : Use AWS Config to check for unencrypted EBS volumes

Explanation Answer – B. The encrypted-volumes managed rule for AWS Config can be used to check for unencrypted volumes. Options A and C are incorrect since these services cannot be used to check for unencrypted EBS volumes. Option D is incorrect because, even though this is possible, implementing the solution with the Lambda service alone would be too difficult. For more information on AWS Config and encrypted volumes, please refer to the URL below: https://docs.aws.amazon.com/config/latest/developerguide/encrypted-volumes.html
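
As a minimal sketch, the managed rule could be enabled with the AWS Config API as follows; the rule name is our own choice, and notifying the admins would additionally require wiring the rule's compliance changes to an SNS topic (not shown):

```python
import boto3

config = boto3.client("config")

# Enable the AWS-managed "encrypted-volumes" rule, which flags any
# attached EBS volume that is not encrypted.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-must-be-encrypted",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```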

Your company uses AWS KMS for management of its customer keys. From time to time, there is a requirement to delete existing keys as part of housekeeping activities. What can be done during the deletion process to verify that a key is no longer being used?

Options are :

  • Use CloudTrail to see if any KMS API request has been issued against existing keys (Correct)
  • Use Key policies to see the access level for the keys
  • Rotate the keys once before deletion to see if other services are using the keys
  • Change the IAM policy for the keys to see if other services are using the keys

Answer : Use CloudTrail to see if any KMS API request has been issued against existing keys

Explanation Answer – A. The AWS documentation mentions the following: you can use a combination of AWS CloudTrail, Amazon CloudWatch Logs, and Amazon Simple Notification Service (Amazon SNS) to create an alarm that notifies you of AWS KMS API requests that attempt to use a customer master key (CMK) that is pending deletion. If you receive a notification from such an alarm, you might want to cancel deletion of the CMK to give yourself more time to determine whether you want to delete it. Options B and D are incorrect because neither key policies nor IAM policies can be used to check whether the keys are being used. Option C is incorrect since rotation will not help you check whether the keys are being used. For more information on deleting keys, please refer to the URL below: https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys-creating-cloudwatch-alarm.html
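
A hedged sketch of that pattern, assuming CloudTrail already delivers events to a CloudWatch Logs group; the log group, filter, metric, and SNS topic names are hypothetical placeholders:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count CloudTrail events where a KMS request failed because the key
# is pending deletion.
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="KMSKeyPendingDeletionErrors",
    filterPattern='{ ($.eventSource = "kms.amazonaws.com") && ($.errorMessage = "* is pending deletion.") }',
    metricTransformations=[{
        "metricName": "KMSKeyPendingDeletionErrorCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# Alarm (and notify via SNS) as soon as one such request is seen.
cloudwatch.put_metric_alarm(
    AlarmName="KMSKeyPendingDeletionInUse",
    MetricName="KMSKeyPendingDeletionErrorCount",
    Namespace="CloudTrailMetrics",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)
```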

Your company has just started using AWS and created an AWS account. They are aware of the potential issues when root access is enabled. How can they best safeguard the account when it comes to root access? Choose 2 answers from the options given below

Options are :

  • Delete the root access account
  • Create an Admin IAM user with the necessary permissions (Correct)
  • Change the password for the root account.
  • Delete the root access keys (Correct)

Answer : Create an Admin IAM user with the necessary permissions; Delete the root access keys

Explanation Answer – B and D. The AWS documentation mentions the following: all AWS accounts have root user credentials (that is, the credentials of the account owner). These credentials allow full access to all resources in the account. Because you can't restrict permissions for root user credentials, we recommend that you delete your root user access keys. Then create AWS Identity and Access Management (IAM) user credentials for everyday interaction with AWS. Option A is incorrect since you cannot delete the root account. Option C is good practice but is not by itself sufficient to safeguard the account. For more information on root access vs admin IAM users, please refer to the URL below: https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html
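
A minimal sketch of both steps with the IAM API; the user name is a hypothetical placeholder, and the key-deletion calls would be run under the root credentials themselves:

```python
import boto3

iam = boto3.client("iam")

# Create an everyday administrative IAM user instead of using root.
iam.create_user(UserName="admin")
iam.attach_user_policy(
    UserName="admin",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# When run as root (UserName omitted), this lists and deletes the
# root user's access keys.
for key in iam.list_access_keys()["AccessKeyMetadata"]:
    iam.delete_access_key(AccessKeyId=key["AccessKeyId"])
```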

You have a bucket and a VPC defined in AWS. You need to ensure that the bucket can only be accessed by the VPC endpoint. How can you accomplish this?

Options are :

  • Modify the security groups for the VPC to allow access to the S3 bucket
  • Modify the route tables to allow access for the VPC endpoint
  • Modify the IAM Policy for the bucket to allow access for the VPC endpoint
  • Modify the bucket Policy for the bucket to allow access for the VPC endpoint (Correct)

Answer : Modify the bucket Policy for the bucket to allow access for the VPC endpoint

Explanation Answer – D. This is mentioned in the AWS documentation. Options A and B are incorrect because neither security groups nor route tables will help to allow access specifically for that bucket via the VPC endpoint; here you specifically need to ensure the bucket policy is changed. Option C is incorrect because it is the bucket policy that needs to be changed and not the IAM policy. For more information on example bucket policies for VPC endpoints, please refer to the URL below: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html
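
A hedged sketch of such a bucket policy, applied via the S3 API; the bucket name and VPC endpoint ID are hypothetical placeholders:

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny all S3 actions on the bucket unless the request arrives through
# the specified VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccessViaVpcEndpointOnly",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::examplebucket",
            "arn:aws:s3:::examplebucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}},
    }],
}
s3.put_bucket_policy(Bucket="examplebucket", Policy=json.dumps(policy))
```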

In order to encrypt data in transit for a connection to an AWS RDS instance, which of the following would you implement?

Options are :

  • Transparent data encryption
  • SSL from your application (Correct)
  • Data keys from AWS KMS
  • Data Keys from CloudHSM

Answer : SSL from your application

Explanation Answer – B. This is mentioned in the AWS documentation: you can use SSL from your application to encrypt a connection to a DB instance running MySQL, MariaDB, Amazon Aurora, SQL Server, Oracle, or PostgreSQL. Option A is incorrect since Transparent Data Encryption is used for data at rest and not in transit. Options C and D are incorrect since those keys are used for encryption of data at rest. For more information on working with RDS and SSL, please refer to the URL below: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
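
As one possible illustration using the third-party PyMySQL driver, an application could validate the RDS server certificate against the downloaded RDS CA bundle; the host, credentials, and file path are hypothetical placeholders:

```python
import pymysql  # third-party MySQL driver, shown as one possible client

# Connect to an RDS MySQL instance over SSL; supplying the CA bundle
# makes the driver verify the server certificate.
connection = pymysql.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",
    user="appuser",
    password="app-password",
    database="appdb",
    ssl={"ca": "/opt/certs/rds-combined-ca-bundle.pem"},
)
```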

Which of the following is the responsibility of the customer? Choose 2 answers from the options given below.

Options are :

  • Management of the Edge locations
  • Encryption of data at rest (Correct)
  • Protection of data in transit (Correct)
  • Decommissioning of old storage devices

Answer : Encryption of data at rest; Protection of data in transit

Explanation Answer – B and C. Under the Shared Responsibility Model, encryption of data at rest and protection of data in transit are the customer's responsibility. Options A and D are incorrect since these are managed by AWS. For more information on AWS security best practices, please refer to the URL below: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

A DevOps team is currently looking at the security aspect of their CI/CD pipeline. They are making use of AWS resources for their infrastructure. They want to ensure that the EC2 Instances don’t have any high-severity security vulnerabilities, and they want a complete DevSecOps process. How can this be achieved?

Options are :

  • Use AWS Config to check the state of the EC2 instance for any sort of security issues.
  • Use AWS Inspector APIs in the pipeline for the EC2 Instances (Correct)
  • Use AWS Trusted Advisor APIs in the pipeline for the EC2 Instances
  • Use AWS Security Groups to ensure no vulnerabilities are present

Answer : Use AWS Inspector APIs in the pipeline for the EC2 Instances

Explanation Answer – B. Amazon Inspector offers a programmatic way to find security defects or misconfigurations in your operating systems and applications. Because you can use API calls to access both the processing of assessments and the results of your assessments, integration of the findings into workflow and notification systems is simple. DevOps teams can integrate Amazon Inspector into their CI/CD pipelines and use it to identify any pre-existing issues or when new issues are introduced. Options A, C and D are all incorrect since these services cannot check for security vulnerabilities; those can only be checked by the AWS Inspector service. For more information on AWS security best practices, please refer to the URL below: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
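
A hedged sketch of such a pipeline stage using the Inspector (classic) API; the assessment template ARN and run name are hypothetical placeholders created beforehand:

```python
import boto3

inspector = boto3.client("inspector")

# Start an assessment run as a security gate in the CI/CD pipeline.
run = inspector.start_assessment_run(
    assessmentTemplateArn=(
        "arn:aws:inspector:us-east-1:111122223333:"
        "target/0-abcdefgh/template/0-ijklmnop"
    ),
    assessmentRunName="pipeline-security-gate",
)
print(run["assessmentRunArn"])
```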

You want to track access requests for a particular S3 bucket. How can you achieve this in the easiest possible way?

Options are :

  • Enable server access logging for the bucket (Correct)
  • Enable CloudWatch metrics for the bucket
  • Enable CloudWatch logs for the bucket
  • Enable AWS Config for the S3 bucket

Answer : Enable server access logging for the bucket

Explanation Answer – A. The AWS documentation mentions the following: to track requests for access to your bucket, you can enable access logging. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. Options B and C are incorrect since CloudWatch is used for metrics and logging and cannot be used to track access requests. Option D is incorrect since AWS Config can be used for configuration management but not for tracking S3 bucket requests. For more information on S3 server logs, please refer to the URL below: https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
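
A minimal sketch of enabling server access logging via the S3 API; the bucket names and prefix are hypothetical placeholders, and the target bucket is assumed to already grant the S3 log delivery group write access:

```python
import boto3

s3 = boto3.client("s3")

# Deliver access log records for the source bucket into a separate
# log bucket under a chosen prefix.
s3.put_bucket_logging(
    Bucket="examplebucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "examplebucket-logs",
            "TargetPrefix": "access-logs/",
        }
    },
)
```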

You need to create a Linux EC2 instance in AWS. Which of the following steps are used to ensure secure authentication to the EC2 instance? Choose 2 answers from the options given below.

Options are :

  • Ensure to create a strong password for logging into the EC2 Instance
  • Create a key pair using PuTTY (Correct)
  • Use the private key to log into the instance (Correct)
  • Ensure the password is passed securely using SSL

Answer : Create a key pair using PuTTY; Use the private key to log into the instance

Explanation Answer – B and C. The AWS documentation mentions the following: you can use Amazon EC2 to create your key pair. Alternatively, you could use a third-party tool and then import the public key to Amazon EC2. Each key pair requires a name. Be sure to choose a name that is easy to remember. Amazon EC2 associates the public key with the name that you specify as the key name. Amazon EC2 stores the public key only, and you store the private key. Anyone who possesses your private key can decrypt your login information, so it's important that you store your private keys in a secure place. Options A and D are incorrect since you should use key pairs for secure access to EC2 Instances. For more information on EC2 key pairs, please refer to the URL below: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
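
For illustration, a key pair can also be created with the EC2 API and the private key stored locally with restrictive permissions; the key name and file path are hypothetical placeholders:

```python
import os

import boto3

ec2 = boto3.client("ec2")

# AWS keeps only the public key; the private key is returned once.
key_pair = ec2.create_key_pair(KeyName="my-linux-key")

with open("my-linux-key.pem", "w") as f:
    f.write(key_pair["KeyMaterial"])
os.chmod("my-linux-key.pem", 0o400)  # permissions expected by ssh clients
```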

You have just developed a new mobile application that handles analytics workloads on large scale datasets that are stored on Amazon Redshift. Consequently, the application needs to access Amazon Redshift tables. Which of the below methods would be the best, both practically and security-wise, to access the tables? Choose the correct answer from the options below

Options are :

  • Create an IAM user and generate encryption keys for that user. Create a policy for Redshift read-only access. Embed the keys in the application.
  • Create an HSM client certificate in Redshift and authenticate using this certificate.
  • Create a Redshift read-only access policy in IAM and embed those credentials in the application.
  • Use roles that allow a web identity federated user to assume a role that allows access to the Redshift table by providing temporary credentials. (Correct)

Answer : Use roles that allow a web identity federated user to assume a role that allows access to the Redshift table by providing temporary credentials.

Explanation Answer – D. Options A, B and C are all incorrect because you need to use IAM roles with temporary credentials for secure access to services; embedding long-term credentials or keys in an application is insecure. For more information on web identity federation, please refer to the link below: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
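
A hedged sketch of the web identity federation flow with the STS API; the role ARN and token value are hypothetical placeholders:

```python
import boto3

sts = boto3.client("sts")

# Exchange a token from an external identity provider for temporary
# AWS credentials scoped by the role's permissions.
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/RedshiftReadOnly",
    RoleSessionName="mobile-app-user",
    WebIdentityToken="<token-from-identity-provider>",
)["Credentials"]

# Use the temporary credentials to talk to Amazon Redshift.
redshift = boto3.client(
    "redshift",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```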

Your team is designing a web application. The users of this web application would need to sign in via an external ID provider such as Facebook or Google. Which of the following AWS services would you use for authentication?

Options are :

  • AWS Cognito (Correct)
  • AWS SAML
  • AWS IAM
  • AWS Config

Answer : AWS Cognito

Explanation Answer – A. The AWS documentation mentions the following: Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party such as Facebook, Amazon, or Google. Option B is incorrect since SAML is used for identity federation. Option C is incorrect since IAM is pure identity and access management. Option D is incorrect since AWS Config is a configuration service. For more information on AWS Cognito, please refer to the link below: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html

Your application currently uses AWS Cognito for authenticating users. Your application consists of different types of users. Some users are only allowed read access to the application and others are given contributor access. How would you manage the access effectively?

Options are :

  • Create different Cognito endpoints, one for the readers and the other for the contributors.
  • Create different Cognito groups, one for the readers and the other for the contributors. (Correct)
  • You need to manage this within the application itself
  • This needs to be managed via Web security tokens

Answer : Create different Cognito groups, one for the readers and the other for the contributors.

Explanation Answer – B. The AWS documentation mentions the following: you can use groups to create a collection of users in a user pool, which is often done to set the permissions for those users. For example, you can create separate groups for users who are readers, contributors, and editors of your website and app. Option A is incorrect since you need to create Cognito groups and not endpoints. Options C and D are incorrect since these would be overheads when you can use AWS Cognito. For more information on AWS Cognito user groups, please refer to the link below: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-user-groups.html
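
A minimal sketch with the Cognito Identity Provider API; the user pool ID, group names, and username are hypothetical placeholders:

```python
import boto3

cognito = boto3.client("cognito-idp")

# Create one group per access level in the user pool.
for group in ("readers", "contributors"):
    cognito.create_group(
        UserPoolId="us-east-1_EXAMPLE",
        GroupName=group,
        Description=f"Users with {group} access",
    )

# Place an existing user into the group matching their access level.
cognito.admin_add_user_to_group(
    UserPoolId="us-east-1_EXAMPLE",
    Username="jane.doe",
    GroupName="contributors",
)
```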

DDoS attacks that happen at the application layer commonly target web applications with lower volumes of traffic compared to infrastructure attacks. To mitigate these types of attacks, you will probably want to include a WAF (Web Application Firewall) as part of your infrastructure. To inspect all HTTP requests, WAFs sit in-line with your application traffic. Unfortunately, this creates a scenario where WAFs can become a point of failure or bottleneck. To mitigate this problem, you need the ability to run multiple WAFs on demand during traffic spikes. This type of scaling for WAF is done via a “WAF sandwich.” Which of the following statements best describes what a “WAF sandwich” is? Choose the correct answer from the options below

Options are :

  • The EC2 instance running your WAF software is placed between your private subnets and any NATed connections to the Internet.
  • The EC2 instance running your WAF software is placed between your public subnets and your Internet Gateway.
  • The EC2 instance running your WAF software is placed between your public subnets and your private subnets.
  • The EC2 instance running your WAF software is included in an Auto Scaling group and placed in between two Elastic load balancers. (Correct)

Answer : The EC2 instance running your WAF software is included in an Auto Scaling group and placed in between two Elastic load balancers.

Explanation Answer – D. A WAF sandwich is the concept of placing the EC2 instances that host the WAF software in between two Elastic Load Balancers, with the instances in an Auto Scaling group so they can scale with traffic. Options A, B and C are incorrect since the EC2 Instances with the WAF software need to be placed in an Auto Scaling group between the load balancers. For more information on a WAF sandwich, please refer to the link below: https://www.cloudaxis.com/2016/11/21/waf-sandwich/

An auditor needs access to logs that record all API events on AWS. The auditor only needs read-only access to the log files and does not need access to each AWS account. The company has multiple AWS accounts, and the auditor needs access to all the logs for all the accounts. What is the best way to configure access for the auditor to view event logs from all accounts? Choose the correct answer from the options below

Options are :

  • Configure the CloudTrail service in each AWS account, and have the logs delivered to an S3 bucket on each account, while granting the auditor permissions to the bucket via roles in the secondary accounts and a single primary IAM account that can assume a read-only role in the secondary AWS accounts.
  • Configure the CloudTrail service in the primary AWS account and configure consolidated billing for all the secondary accounts. Then grant the auditor access to the S3 bucket that receives the CloudTrail log files.
  • Configure the CloudTrail service in each AWS account and enable consolidated logging inside of CloudTrail.
  • Configure the CloudTrail service in each AWS account and have the logs delivered to a single S3 bucket in the primary account, and grant the auditor access to that single bucket in the primary account. (Correct)

Answer : Configure the CloudTrail service in each AWS account and have the logs delivered to a single S3 bucket in the primary account, and grant the auditor access to that single bucket in the primary account.

Explanation Answer – D. Given the current requirements, assume the method of "least privilege" security design and only allow the auditor access to the minimum amount of AWS resources possible. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting. Option A is incorrect since the auditor should only be granted access in one location. Option B is incorrect since consolidated billing is not relevant to the requirement in the question. Option C is incorrect since CloudTrail has no consolidated logging feature. For more information on CloudTrail, please refer to the URL below: https://aws.amazon.com/cloudtrail/
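
A hedged sketch of the bucket policy that lets CloudTrail in several accounts deliver to one central bucket; the bucket name and account IDs are hypothetical placeholders:

```python
import json

import boto3

s3 = boto3.client("s3")

# Allow CloudTrail to check the bucket ACL and to write log objects
# under each participating account's prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::central-trail-logs",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::central-trail-logs/AWSLogs/111111111111/*",
                "arn:aws:s3:::central-trail-logs/AWSLogs/222222222222/*",
            ],
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}
s3.put_bucket_policy(Bucket="central-trail-logs", Policy=json.dumps(policy))
```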

Your company has a hybrid environment, with on-premises servers and servers hosted in the AWS cloud. They are planning to use AWS Systems Manager for patching the servers. Which of the following is a prerequisite for this to work?

Options are :

  • Ensure that the on-premises servers are running on Hyper-V.
  • Ensure that an IAM service role is created (Correct)
  • Ensure that an IAM User is created
  • Ensure that an IAM Group is created for the on-premises servers

Answer : Ensure that an IAM service role is created

Explanation Answer – B. You need to ensure that an IAM service role is created to allow the on-premises servers to communicate with AWS Systems Manager. Option A is incorrect since it is not necessary that the servers run only Hyper-V. Options C and D are incorrect since it is not necessary that IAM users and groups are created. For more information on the Systems Manager role, please refer to the URL below: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-service-role.html

You have several S3 buckets defined in your AWS account. You need to give access to external AWS accounts to these S3 buckets. Which of the following can allow you to define the permissions for the external accounts? Choose 2 answers from the options given below

Options are :

  • IAM policies
  • Bucket ACLs (Correct)
  • IAM users
  • Bucket policies (Correct)

Answer : Bucket ACLs; Bucket policies

Explanation Answer – B and D. The AWS security whitepaper describes the types of access control and the level at which each control can be applied. Options A and C are incorrect since, for external access to buckets, you need to use either bucket policies or bucket ACLs. For more information on security for storage services, please refer to the URL below: https://d1.awsstatic.com/whitepapers/Security/Security_Storage_Services_Whitepaper.pdf

A large organization is planning on using AWS to host their resources. They have a number of autonomous departments that wish to use AWS. What strategy should be adopted for managing the accounts?

Options are :

  • Use multiple VPCs in the account, one VPC for each department
  • Use multiple IAM groups, one group for each department
  • Use multiple IAM roles, one role for each department
  • Use multiple AWS accounts, one account for each department (Correct)

Answer : Use multiple AWS accounts, one account for each department

Explanation Answer – D. A recommendation for this is given in the AWS security best practices whitepaper. Option A is incorrect since VPCs only separate resources at the network level within one account. Options B and C are incorrect since operationally they would be difficult to manage. For more information on AWS security best practices, please refer to the URL below: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

An employee keeps terminating EC2 instances on the production environment. You've determined the best way to ensure this doesn't happen is to add an extra layer of defence against terminating the instances. What is the best method to ensure the employee does not terminate the production instances? Choose the 2 correct answers from the options below

Options are :

  • Tag the instance with a production-identifying tag and add resource-level permissions to the employee user with an explicit deny on the terminate API call to instances with the production tag. (Correct)
  • Tag the instance with a production-identifying tag and modify the employee's group to allow only start, stop, and reboot API calls and not the terminate instance call. (Correct)
  • Modify the IAM policy on the user to require MFA before deleting EC2 instances and disable MFA access to the employee
  • Modify the IAM policy on the user to require MFA before deleting EC2 instances

Answer : Tag the instance with a production-identifying tag and add resource-level permissions to the employee user with an explicit deny on the terminate API call to instances with the production tag; Tag the instance with a production-identifying tag and modify the employee's group to allow only start, stop, and reboot API calls and not the terminate instance call.

Explanation Answer – A and B. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type; you can quickly identify a specific resource based on the tags you've assigned to it. Each tag consists of a key and an optional value, both of which you define. Options C and D are incorrect since modifying the IAM policy to require MFA will not by itself prevent the employee from terminating the production instances. For more information on tagging AWS resources, please refer to the URL below: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
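
A hedged sketch of the explicit-deny approach as an inline IAM policy; the user name, policy name, and tag key/value are hypothetical placeholders:

```python
import json

import boto3

iam = boto3.client("iam")

# Explicitly deny termination of any instance carrying the
# production-identifying tag.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:TerminateInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "production"}
        },
    }],
}
iam.put_user_policy(
    UserName="employee",
    PolicyName="DenyTerminateProduction",
    PolicyDocument=json.dumps(policy),
)
```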

You have been given a new brief from your supervisor for a client who needs a web application set up on AWS. The most important requirement is that MySQL must be used as the database, and this database must not be hosted in the public cloud, but rather at the client's data center due to security risks. Which of the following solutions would be the best to ensure that the client’s requirements are met? Choose the correct answer from the options below

Options are :

  • Build the application server on a public subnet and the database at the client’s data center. Connect them with a VPN connection which uses IPsec. (Correct)
  • Use the public subnet for the application server and use RDS with a storage gateway to access and synchronize the data securely from the local data center.
  • Build the application server on a public subnet and the database on a private subnet with a NAT instance between them.
  • Build the application server on a public subnet and build the database in a private subnet with a secure ssh connection to the private subnet from the client's data center.

Answer : Build the application server on a public subnet and the database at the client’s data center. Connect them with a VPN connection which uses IPsec.

Explanation Answer – A. Since the database should not be hosted on the cloud, all other options are invalid. The best option is to create a VPN connection for securing the traffic between the application server and the on-premises database. Option B is invalid because this is an incorrect use of the Storage Gateway. Option C is invalid since this is an incorrect use of a NAT instance. Option D is invalid since this is an incorrect configuration. For more information on VPN connections, please visit the URL below: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html

You are planning on using the AWS KMS service for managing keys for your application. For which of the following can KMS CMKs be used for encryption? Choose 2 answers from the options given below

Options are :

  • Image Objects
  • Large files
  • Password (Correct)
  • RSA Keys (Correct)

Answer : Password; RSA Keys

Explanation Answer – C and D. The CMKs themselves can only be used for encrypting data that is a maximum of 4 KB in size. Hence they can be used for encrypting information such as passwords and RSA keys. Options A and B are invalid because the actual CMK can only be used to encrypt small amounts of data, not large amounts; you have to generate a data key from the CMK in order to encrypt large amounts of data. For more information on the concepts for KMS, please visit the following URL: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
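
A minimal sketch of both cases with the KMS API; the key alias is a hypothetical placeholder:

```python
import boto3

kms = boto3.client("kms")

# Small secrets (up to 4 KB) can be encrypted directly under the CMK.
ciphertext = kms.encrypt(
    KeyId="alias/app-secrets",
    Plaintext=b"database-password",
)["CiphertextBlob"]

# Larger payloads use envelope encryption: generate a data key and
# encrypt the payload locally with it.
data_key = kms.generate_data_key(KeyId="alias/app-secrets", KeySpec="AES_256")
plaintext_key = data_key["Plaintext"]       # use locally, then discard
encrypted_key = data_key["CiphertextBlob"]  # store alongside the data
```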

A company has been using the AWS KMS service for managing its keys. They are planning on carrying out housekeeping activities and deleting keys which are no longer in use. Which approaches can be used to determine whether a key is in use? Choose 2 answers from the options given below

Options are :

  • Determine the age of the master key
  • See who is assigned permissions to the master key (Correct)
  • See CloudTrail for usage of the key (Correct)
  • Use AWS CloudWatch Events for events generated for the key

Answer : See who is assigned permissions to the master key; See CloudTrail for usage of the key

Explanation Answer – B and C. The direct ways to see how a key is being used are to review its current access permissions and to check CloudTrail for its usage. Option A is invalid because the age of the master key does not indicate whether it is being used. Option D is invalid because CloudTrail is better suited for seeing the events generated by the key. This is also mentioned in the AWS documentation. For more information on determining the usage of CMKs, please visit the following URL: https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys-determining-usage.html

Which of the following is the correct sequence of how KMS manages the keys when used along with the Redshift cluster service?

Options are :

  • The master key encrypts the cluster key. The cluster key encrypts the database key. The database key encrypts the data encryption keys. (Correct)
  • The master key encrypts the database key. The database key encrypts the data encryption keys.
  • The master key encrypts the data encryption keys. The data encryption keys encrypt the database key.
  • The master key encrypts the cluster key, database key and data encryption keys.

Answer : The master key encrypts the cluster key. The cluster key encrypts the database key. The database key encrypts the data encryption keys.

Explanation Answer – A. This is mentioned in the AWS documentation: Amazon Redshift uses a four-tier, key-based architecture for encryption. The architecture consists of data encryption keys, a database key, a cluster key, and a master key. Data encryption keys encrypt data blocks in the cluster. Each data block is assigned a randomly-generated AES-256 key. These keys are encrypted by using the database key for the cluster. The database key encrypts data encryption keys in the cluster. The database key is a randomly-generated AES-256 key. It is stored on disk in a separate network from the Amazon Redshift cluster and passed to the cluster across a secure channel. The cluster key encrypts the database key for the Amazon Redshift cluster. Option B is incorrect because the master key encrypts the cluster key and not the database key. Option C is incorrect because the master key encrypts the cluster key and not the data encryption keys. Option D is incorrect because the master key encrypts the cluster key only. For more information on how keys are used in Redshift, please visit the following URL: https://docs.aws.amazon.com/kms/latest/developerguide/services-redshift.html

A company wants to use CloudTrail for logging all API activity. They want to segregate the logging of data events and management events. How can this be achieved? Choose 2 answers from the options given below

Options are :

  • Create one CloudTrail log group for data events
  • Create one trail that logs data events to an S3 bucket (Correct)
  • Create another trail that logs management events to another S3 bucket (Correct)
  • Create another CloudTrail log group for management events

Answer : Create one trail that logs data events to an S3 bucket; Create another trail that logs management events to another S3 bucket

Explanation Answer – B and C. The AWS documentation mentions the following: you can configure multiple trails differently so that the trails process and log only the events that you specify. For example, one trail can log read-only data and management events, so that all read-only events are delivered to one S3 bucket. Another trail can log only write-only data and management events, so that all write-only events are delivered to a separate S3 bucket. Options A and D are invalid because you have to create a trail and not a log group. For more information on managing events with CloudTrail, please visit the following URL: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html
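
A hedged sketch with the CloudTrail API; the trail and bucket names are hypothetical placeholders, and both buckets are assumed to already carry CloudTrail delivery policies:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Trail 1: S3 data events only.
cloudtrail.create_trail(Name="data-events-trail", S3BucketName="data-events-logs")
cloudtrail.put_event_selectors(
    TrailName="data-events-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": False,
        "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}],
    }],
)

# Trail 2: management events only.
cloudtrail.create_trail(Name="mgmt-events-trail", S3BucketName="mgmt-events-logs")
cloudtrail.put_event_selectors(
    TrailName="mgmt-events-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [],
    }],
)
```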

Your company has been using AWS for the past 2 years. They have separate S3 buckets for logging the various AWS services that have been used. They have hired an external vendor, who has their own AWS account, for analyzing their log files. What is the best way to ensure that the vendor's account can access the log files in the company account for analysis? Choose 2 answers from the options given below

Options are :

  • Create an IAM user in the company account
  • Create an IAM Role in the company account (Correct)
  • Ensure the IAM user has access for read-only to the S3 buckets
  • Ensure the IAM Role has access for read-only to the S3 buckets (Correct)

Answer : Create an IAM Role in the company account; Ensure the IAM Role has access for read-only to the S3 buckets

Explanation Answer – B and D. The AWS documentation mentions the following. To share log files between multiple AWS accounts, you must perform the following general steps:
  • Create an IAM role for each account that you want to share log files with.
  • For each of these IAM roles, create an access policy that grants read-only access to the account you want to share the log files with.
  • Have an IAM user in each account programmatically assume the appropriate role and retrieve the log files.
Options A and C are invalid because creating an IAM user and then sharing the IAM user credentials with the vendor is a direct ‘NO’ from a security perspective. For more information on sharing CloudTrail log files, please visit the following URL: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-sharing-logs.html
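
A hedged sketch of the vendor side of this flow; the role ARN and bucket name are hypothetical placeholders:

```python
import boto3

# Run by the vendor: assume the read-only role created in the company
# account, then list the shared log files with the temporary credentials.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/LogAnalysisReadOnly",
    RoleSessionName="vendor-log-analysis",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for obj in s3.list_objects_v2(Bucket="company-cloudtrail-logs").get("Contents", []):
    print(obj["Key"])
```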

Your company has been using AWS for hosting EC2 Instances for their web and database applications. They want to have a compliance check to see the following:

  • Whether any ports are left open other than admin ones like SSH and RDP
  • Whether any database server ports are open to sources other than the web server security group

Which of the following can help achieve this in the easiest way possible, without carrying out any extra configuration changes?

Options are :

  • AWS Config
  • AWS Trusted Advisor (Correct)
  • AWS Inspector
  • AWS GuardDuty

Answer : AWS Trusted Advisor

Explanation Answer – B. Trusted Advisor checks for compliance with the following security recommendations:
  • Limited access to common administrative ports to only a small subset of addresses. This includes ports 22 (SSH), 23 (Telnet), 3389 (RDP), and 5500 (VNC).
  • Limited access to common database ports. This includes ports 1433 (MSSQL Server), 1434 (MSSQL Monitor), 3306 (MySQL), 1521 (Oracle) and 5432 (PostgreSQL).
Option A is partially correct, but you would need to write custom rules for this, whereas Trusted Advisor gives you all of these checks on its dashboard. Options C and D are invalid because these services don’t provide these details. For more information on Trusted Advisor, please visit the following URL: https://aws.amazon.com/premiumsupport/trustedadvisor/

A company is planning on using AWS for hosting their applications. They want complete separation and isolation of their production, testing and development environments. Which of the following is an ideal way to design such a setup?

Options are :

  • Use separate VPCs for each of the environments
  • Use separate IAM Roles for each of the environments
  • Use separate IAM Policies for each of the environments
  • Use separate AWS accounts for each of the environments (Correct)

Answer : Use separate AWS accounts for each of the environments

Explanation Answer – D. A recommendation from the AWS security best practices whitepaper highlights this as well. Option A is partially valid; you can segregate resources with separate VPCs, but the best practice for this setup is to have multiple accounts. Options B and C are invalid because from a maintenance perspective they could become very difficult. For more information on the security best practices, please visit the following URL: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

An application is designed to run on an EC2 Instance. The application needs to work with an S3 bucket. From a security perspective, what is the ideal way for the EC2 instance/application to be configured?

Options are :

  • Use the AWS access keys ensuring that they are frequently rotated.
  • Assign an IAM user to the application that has specific access to only that S3 bucket
  • Assign an IAM Role and assign it to the EC2 Instance (Correct)
  • Assign an IAM group and assign it to the EC2 Instance

Answer : Assign an IAM Role and assign it to the EC2 Instance

Explanation Answer – C. The AWS whitepaper recommends the security best practice of assigning a role that has access to the S3 bucket. Options A, B and D are invalid because using users, groups or access keys is an invalid security practice when giving access to resources from other AWS resources. For more information on the security best practices, please visit the following URL: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

Your company has an EC2 Instance hosted in AWS. This EC2 Instance hosts an application. Currently this application is experiencing a number of issues. You need to inspect the network packets to see what type of error is occurring. Which one of the below steps can help address this issue?

Options are :

  • Use the VPC Flow Logs.
  • Use a network monitoring tool provided by an AWS partner. (Correct)
  • Use another instance. Set up a port in “promiscuous mode” and sniff the traffic to analyze the packets.
  • Use CloudWatch metrics

Answer : Use a network monitoring tool provided by an AWS partner.

Explanation Answer – B. Since here you need to sniff the actual network packets, the ideal approach would be to use a network monitoring tool provided by an AWS partner. The AWS documentation mentions the following: multiple AWS Partner Network members offer virtual firewall appliances that can be deployed as an in-line gateway for inbound or outbound network traffic. Firewall appliances provide additional application-level filtering, deep packet inspection, IPS/IDS, and network threat protection features. Options A and D are invalid because these services cannot be used for packet inspection. Option C is invalid because “promiscuous mode” is not supported in AWS. For more information on the security capabilities, please visit the URL below: https://aws.amazon.com/answers/networking/vpc-security-capabilities/

Which of the below services can be integrated with the AWS Web Application Firewall (WAF) service? Choose 2 answers from the options given below

Options are :

  • AWS CloudFront (Correct)
  • AWS Lambda
  • AWS Application Load Balancer (Correct)
  • AWS Classic Load Balancer

Answer : AWS CloudFront; AWS Application Load Balancer

Explanation Answer – A and C. The AWS documentation mentions the following: AWS WAF can be deployed on Amazon CloudFront and the Application Load Balancer (ALB). As part of Amazon CloudFront it can be part of your Content Distribution Network (CDN), protecting your resources and content at the edge locations, and as part of the Application Load Balancer it can protect your origin web servers running behind the ALBs. Options B and D are invalid because only CloudFront and the Application Load Balancer are supported by AWS WAF. For more information on the Web Application Firewall, please refer to the URL below: https://aws.amazon.com/waf/faq/

A company hosts critical data in an S3 bucket. Even though they have assigned the appropriate permissions to the bucket, they are still worried about data deletion. What measures can be taken to reduce the risk of data deletion on the bucket? Choose 2 answers from the options given below

Options are :

  • Enable versioning on the S3 bucket (Correct)
  • Enable encryption at rest for the objects in the bucket
  • Enable MFA Delete in the bucket policy (Correct)
  • Enable encryption in transit for the objects in the bucket

Answer : Enable versioning on the S3 bucket; Enable MFA Delete in the bucket policy

Explanation Answer – A and C. One of the AWS security blogs mentions the following: versioning keeps multiple versions of an object in the same bucket. When you enable it on a bucket, Amazon S3 automatically adds a unique version ID to every object stored in the bucket. At that point, a simple DELETE action does not permanently delete an object version; it merely associates a delete marker with the object. If you want to permanently delete an object version, you must specify its version ID in your DELETE request. You can add another layer of protection by enabling MFA Delete on a versioned bucket. Once you do so, you must provide your AWS account’s access keys and a valid code from the account’s MFA device in order to permanently delete an object version or suspend or reactivate versioning on the bucket. Option B is invalid because enabling encryption does not reduce the risk of data deletion. Option D is invalid for the same reason. For more information on AWS S3 versioning and MFA Delete, please refer to the URL below: https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
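
A minimal sketch of enabling both protections in one call; the bucket name, MFA device ARN, and token code are hypothetical placeholders, and the request must be made with the root user's credentials:

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning and MFA Delete together. The MFA argument is the
# device serial number (or ARN) followed by a current token code.
s3.put_bucket_versioning(
    Bucket="examplebucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
)
```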

Your company has mandated that all data in AWS be encrypted at rest. How can you achieve this for EBS volumes? Choose 2 answers from the options given below

Options are :

  • Use Windows BitLocker for Windows-based instances (Correct)
  • Use TrueCrypt for Linux-based instances (Correct)
  • Enable encryption on existing EBS volumes
  • Use AWS KMS to encrypt the existing EBS volumes

Answer : Use Windows BitLocker for Windows-based instances; Use TrueCrypt for Linux-based instances

Explanation Answer – A and B. EBS encryption can only be enabled when a volume is created, not for existing volumes; for existing volumes one can use OS-level encryption tools. Options C and D are invalid because volumes cannot be encrypted by AWS after they have been created. For more information on the security best practices, please visit the following URL: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPSec tunnels over the internet, using VPN gateways and terminating the IPSec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPSec tunnel as outlined above? Choose 4 answers from the options below

Options are :

  • End-to-end protection of data in transit
  • End-to-end Identity authentication
  • Data encryption across the Internet (Correct)
  • Protection of data in transit over the Internet (Correct)
  • Peer identity authentication between VPN gateway and customer gateway (Correct)
  • Data integrity protection across the Internet (Correct)

Answer : Data encryption across the Internet; Protection of data in transit over the Internet; Peer identity authentication between VPN gateway and customer gateway; Data integrity protection across the Internet

Explanation Answer – C, D, E and F. IPSec is a widely adopted protocol that can be used to encrypt, integrity-protect, and authenticate traffic between the two gateways. Options A and B are invalid because the tunnel protects only the gateway-to-gateway segment, so there is no complete guarantee of end-to-end protection or authentication using IPSec. For more information on IPSec, please visit the following URL: https://en.wikipedia.org/wiki/IPsec

A user has created a VPC with public and private subnets using the VPC wizard. The VPC has CIDR 20.0.0.0/16. The public subnet uses CIDR 20.0.1.0/24. The user is planning to host a web server in the public subnet on port 80 and a database server in the private subnet on port 3306. The user is configuring a security group for the public subnet (WebSecGrp) and the private subnet (DBSecGrp). Which of the below mentioned entries is required in the private subnet database security group DBSecGrp?

Options are :

  • Allow Inbound on port 3306 for Source Web Server Security Group WebSecGrp. (Correct)
  • Allow Inbound on port 3306 from source 20.0.0.0/16
  • Allow Outbound on port 3306 for Destination Web Server Security Group WebSecGrp.
  • Allow Outbound on port 80 for Destination NAT Instance IP

Answer : Allow Inbound on port 3306 for Source Web Server Security Group WebSecGrp.

Explanation Answer – A. Since the web server needs to talk to the database server on port 3306, the database server's security group should allow incoming traffic on port 3306 from the web server's security group. Option B is invalid because you need to allow incoming access for the database server from the WebSecGrp security group specifically, not from the whole VPC CIDR. Options C and D are invalid because you need to allow inbound traffic to the database, not outbound traffic. For more information on security groups, please visit the link below: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
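
A minimal sketch of that rule with the EC2 API; both security group IDs are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic into DBSecGrp only from members of WebSecGrp.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db00000000000000",  # DBSecGrp
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0web0000000000000"}],  # WebSecGrp
    }],
)
```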

A user has enabled versioning on an S3 bucket. The user is using server-side encryption for data at rest. If the user is supplying his own keys for encryption (SSE-C), which of the below mentioned statements is true?

Options are :

  • The user should use the same encryption key for all versions of the same object
  • It is possible to have different encryption keys for different versions of the same object (Correct)
  • AWS S3 does not allow the user to upload his own keys for server side encryption
  • The SSE-C does not work when versioning is enabled

Answer : It is possible to have different encryption keys for different versions of the same object

Explanation Answer – B. With SSE-C you supply the encryption key with each request, and S3 uses it to encrypt the object before storing it; each version of an object can therefore be encrypted with a different key. Option A is invalid because you are not required to use the same key for all versions. Option C is invalid because S3 does allow you to supply your own keys for server-side encryption. Option D is invalid because SSE-C works even when versioning is enabled. For more information on client-side encryption, please visit the link below: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
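
A hedged sketch of uploading two versions of one object, each under a different customer-provided key; the bucket name is a hypothetical placeholder, the bucket is assumed to have versioning enabled, and boto3 handles the base64 encoding and MD5 digest of the key under the hood:

```python
import os

import boto3

s3 = boto3.client("s3")

# Each iteration uses a fresh 256-bit key the caller must retain;
# AWS stores only a salted hash of it, never the key itself.
for body in (b"version one", b"version two"):
    key = os.urandom(32)
    s3.put_object(
        Bucket="examplebucket",
        Key="report.txt",
        Body=body,
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=key,
    )
```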

You are planning to use AWS Config to check the configuration of the resources in your AWS account. You plan to use an existing IAM role for the AWS Config service. Which of the following is required to ensure the AWS Config service can work as required?

Options are :

  • Ensure that there is a trust policy in place for the AWS Config service within the role (Correct)
  • Ensure that there is a grant policy in place for the AWS Config service within the role
  • Ensure that there is a user policy in place for the AWS Config service within the role
  • Ensure that there is a group policy in place for the AWS Config service within the role

Answer : Ensure that there is a trust policy in place for the AWS Config service within the role

Explanation Answer – A. You need to ensure that there is a trust policy in place for the AWS Config service within the role, as shown below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "config.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Options B, C and D are invalid because you need to ensure a trust policy is in place, not a grant, user or group policy. For more information on the IAM role permissions, please visit the link below: https://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html

Your developer is using the KMS service and an assigned key in their Java program. They get the below error when running the code

 

arn:aws:iam::113745388712:user/UserB is not authorized to perform: kms:DescribeKey

Which of the following could help resolve the issue?

Options are :

  • Ensure that UserB is given the right IAM role to access the key
  • Ensure that UserB is given the right permissions in the IAM policy
  • Ensure that UserB is given the right permissions in the Key policy (Correct)
  • Ensure that UserB is given the right permissions in the Bucket policy

Answer : Ensure that UserB is given the right permissions in the Key policy

Explanation Answer – C. You need to ensure that UserB is given access via the key policy for the key. Option A is invalid because you don’t assign roles to IAM users in this way. Options B and D are invalid because the permissions in question are granted through the key policy, not an IAM or bucket policy. For more information on key policies, please visit the link below: https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html
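
A hedged sketch of granting UserB kms:DescribeKey by appending a statement to the key's default key policy; the key ID is a hypothetical placeholder:

```python
import json

import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"

# Key policies are replaced wholesale, so read, modify, and write back.
policy = json.loads(
    kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"]
)
policy["Statement"].append({
    "Sid": "AllowUserBDescribeKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::113745388712:user/UserB"},
    "Action": "kms:DescribeKey",
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))
```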

Your company has an external web site. This web site needs to access the objects in an S3 bucket. Which of the following would allow the web site to access the objects in the most secure manner?

Options are :

  • Grant public access for the bucket via the bucket policy
  • Use the aws:Referer key in the condition clause for the bucket policy (Correct)
  • Use the aws:sites key in the condition clause for the bucket policy
  • Grant a role that can be assumed by the web site

Answer : Use the aws:Referer key in the condition clause for the bucket policy

Explanation Answer – B. An example of this is given in the AWS documentation. Option A is invalid because giving public access is not a secure way to provide access. Option C is invalid because aws:sites is not a valid condition key. Option D is invalid because IAM roles cannot be assumed by external web sites. For more information on example bucket policies, please visit the link below: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
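
A hedged sketch of such a policy; the bucket name and site domain are hypothetical placeholders (note that the Referer header is easily spoofed, so this is a deterrent rather than strong authentication):

```python
import json

import boto3

s3 = boto3.client("s3")

# Allow object reads only when the HTTP Referer header names the site.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromCompanySite",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "StringLike": {"aws:Referer": ["https://www.example.com/*"]}
        },
    }],
}
s3.put_bucket_policy(Bucket="examplebucket", Policy=json.dumps(policy))
```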

Your IT Security team has identified a number of vulnerabilities across critical EC2 Instances in the company’s AWS Account. Which would be the easiest way to ensure these vulnerabilities are remediated?

Options are :

  • Create AWS Lambda functions to download the updates and patch the servers.
  • Use AWS CLI commands to download the updates and patch the servers.
  • Use AWS Inspector to patch the servers
  • Use AWS Systems Manager to patch the servers (Correct)

Answer : Use AWS Systems Manager to patch the servers

Explanation Answer – D. The AWS documentation mentions the following: you can quickly remediate patch and association compliance issues by using Systems Manager Run Command. You can target either instance IDs or Amazon EC2 tags and execute the AWS-RefreshAssociation document or the AWS-RunPatchBaseline document. If refreshing the association or re-running the patch baseline fails to resolve the compliance issue, then you need to investigate your associations, patch baselines, or instance configurations to understand why the Run Command executions did not resolve the problem. Options A and B are invalid because, even though they are possible, the Lambda functions or CLI scripts would be difficult to maintain. Option C is invalid because AWS Inspector cannot be used to patch servers. For more information on using Systems Manager for compliance remediation, please visit the link below: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-fixing.html
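
A minimal sketch of kicking off that remediation with Run Command; the tag key and value used for targeting are hypothetical placeholders:

```python
import boto3

ssm = boto3.client("ssm")

# Apply the patch baseline to every instance carrying the given tag.
response = ssm.send_command(
    Targets=[{"Key": "tag:Classification", "Values": ["critical"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
)
print(response["Command"]["CommandId"])
```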

An organization has launched 5 instances: 2 for production and 3 for testing. The organization wants one particular group of IAM users to access only the test instances and not the production ones. How can the organization set that as a part of the policy?

Options are :

  • Launch the test and production instances in separate regions and allow region wise access to the group
  • Define the IAM policy which allows access based on the instance ID
  • Create an IAM policy with a condition which allows access to only small instances
  • Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags (Correct)

Answer : Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags

Explanation Answer – D. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type; you can quickly identify a specific resource based on the tags you've assigned to it. Option A is invalid because this is not a recommended practice. Option B is invalid because maintaining instance IDs in policies is an overhead. Option C is invalid because the instance type will not resolve the requirement. For information on resource tagging, please visit the URL below: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html

Your company is planning on using AWS to host its resources. There is a company policy which mandates that all security keys are completely managed within the company itself. Which of the following measures complies with this policy?

Options are :

  • Using the AWS KMS service for creation of the keys and the company managing the key lifecycle thereafter.
  • Generating the key pairs for the EC2 Instances using PuTTYgen (Correct)
  • Use the EC2 Key pairs that come with AWS
  • Use S3 server-side encryption

Answer : Generating the key pairs for the EC2 Instances using PuTTYgen

Explanation Answer – B. By generating the key pairs for the EC2 Instances yourself, you retain complete control of the keys. Options A, C and D are invalid because all of these processes mean that AWS has ownership of the keys, and the question specifically states that the company must manage the keys itself. For information on security for compute resources, please visit the URL below: https://d1.awsstatic.com/whitepapers/Security/Security_Compute_Services_Whitepaper.pdf
