Questions and Answers: AWS Certified Security Specialty

You are planning on hosting a web application on AWS. You create an EC2 Instance in a public subnet. This instance needs to connect to an EC2 Instance that will host an Oracle database. Which of the following steps should be followed to ensure a secure setup is in place? Choose 2 answers from the options given below.

Options are :

  • Place the EC2 Instance with the Oracle database in the same public subnet as the Web server for faster communication.
  • Place the EC2 Instance with the Oracle database in a separate private subnet (Correct)
  • Create a database security group and allow incoming access only from the web server security group (Correct)
  • Ensure the database security group allows incoming traffic from 0.0.0.0/0

Answer : Place the EC2 Instance with the Oracle database in a separate private subnet; Create a database security group and allow incoming access only from the web server security group

Explanation: Answer – B and C. The most secure option is to place the database in a private subnet; the AWS documentation describes this setup. Also ensure that access is allowed only from the web servers, not from all sources. Option A is invalid because databases should not be placed in a public subnet. Option D is invalid because the database security group should not allow traffic from the internet. For more information on this type of setup, please refer to the URL below: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
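
As an illustration, below is a minimal boto3 sketch of such a database security group rule. The security group IDs and the Oracle listener port (1521) are assumptions for the example, not values taken from the question.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: sg-0db... is the database security group,
# sg-0web... is the web server security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 1521,  # default Oracle listener port
        "ToPort": 1521,
        # Allow inbound traffic only from the web security group,
        # never from 0.0.0.0/0.
        "UserIdGroupPairs": [{"GroupId": "sg-0web000000000000"}],
    }],
)
```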

An EC2 Instance hosts a Java-based application that accesses a DynamoDB table. This EC2 Instance is currently serving production users. Which of the following is a secure way of ensuring that the EC2 Instance can access the DynamoDB table?

Options are :

  • Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance (Correct)
  • Use KMS keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance
  • Use IAM Access Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance
  • Use IAM Access Groups with the right permissions to interact with DynamoDB and assign it to the EC2 Instance

Answer : Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance

Explanation: Answer – A. To ensure secure access to AWS resources from EC2 Instances, always assign a Role to the EC2 Instance. Option B is invalid because KMS keys are not a mechanism for granting EC2 Instances access to AWS services. Option C is invalid because access keys are not a safe mechanism for granting EC2 Instances access to AWS services. Option D is invalid because access groups cannot be assigned to EC2 Instances. For more information on IAM Roles, please refer to the URL below: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
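
Below is a minimal boto3 sketch, under the assumption that a new role and instance profile are being set up from scratch; the role name, policy choice, and instance ID are all hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy letting EC2 assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="app-dynamodb-role",
                AssumeRolePolicyDocument=json.dumps(trust))
# AWS managed policy used here for brevity; a scoped-down custom policy is better.
iam.attach_role_policy(
    RoleName="app-dynamodb-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess")

iam.create_instance_profile(InstanceProfileName="app-dynamodb-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-dynamodb-profile",
                                 RoleName="app-dynamodb-role")

# Attach to the running instance -- no restart required, which matters
# since the instance is serving production users.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-dynamodb-profile"},
    InstanceId="i-0abc000000000000")
```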

An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern? 

Options are :

  • Access the data through an Internet Gateway.
  • Access the data through a VPN connection.
  • Access the data through a NAT Gateway.
  • Access the data through a VPC endpoint for Amazon S3 (Correct)

Answer : Access the data through a VPC endpoint for Amazon S3

Explanation: Answer – D. The AWS Documentation mentions the following: A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. Options A, B, and C are all invalid because the question specifically mentions that access should not be provided via the Internet. For more information on VPC endpoints, please refer to the URL below: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
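
A minimal boto3 sketch of creating a gateway endpoint for S3 follows; the region, VPC ID, and route table ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoints for S3 add a route to the given route tables,
# so S3 traffic stays on the Amazon network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc000000000000",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abc000000000000"],
)
```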

Development teams in your organization use S3 buckets to store the log files for various applications hosted in development environments in AWS. The developers want to keep the logs for one month for troubleshooting purposes, and then purge the logs.

What feature will enable this requirement?

Options are :

  • Adding a bucket policy on the S3 bucket.
  • Configuring lifecycle configuration rules on the S3 bucket. (Correct)
  • Creating an IAM policy for the S3 bucket.
  • Enabling CORS on the S3 bucket.

Answer : Configuring lifecycle configuration rules on the S3 bucket.

Explanation: Answer – B. The AWS Documentation mentions the following on lifecycle policies: Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
  • Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
  • Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
Options A and C are invalid because neither bucket policies nor IAM policies can control the purging of logs. Option D is invalid because CORS is used for accessing objects across domains, not for purging logs. For more information on AWS S3 lifecycle policies, please visit the following URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
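
As an illustration, a minimal boto3 sketch of a 30-day expiration rule follows; the bucket name and log prefix are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Expire (purge) objects under the logs/ prefix 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="dev-app-logs-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-logs-after-one-month",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }]
    },
)
```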

A company is using a Redshift cluster to store their data warehouse. There is a requirement from the internal IT Security team to ensure that data is encrypted for the Redshift database. How can this be achieved?

Options are :

  • Encrypt the EBS volumes of the underlying EC2 Instances
  • Use the AWS KMS default customer master key (Correct)
  • Use SSL/TLS for encrypting the data
  • Use S3 Encryption

Answer : Use the AWS KMS default customer master key

Explanation: Answer – B. The AWS Documentation mentions the following: Amazon Redshift uses a hierarchy of encryption keys to encrypt the database. You can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level encryption keys in this hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage keys. Option A is invalid because it is the cluster that needs to be encrypted, not the underlying volumes. Option C is invalid because SSL/TLS encrypts data in transit, not data at rest. Option D is invalid because S3 encryption is used only for objects in S3 buckets. For more information on Redshift encryption, please visit the following URL: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html
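
A minimal boto3 sketch of launching an encrypted cluster follows; the cluster identifier, credentials, and KMS key ARN are placeholders only.

```python
import boto3

redshift = boto3.client("redshift")

# Encryption is chosen at the cluster level; KmsKeyId is optional and
# falls back to the default KMS key for Redshift when omitted.
redshift.create_cluster(
    ClusterIdentifier="secure-warehouse",         # hypothetical
    NodeType="dc2.large",
    MasterUsername="admin",
    MasterUserPassword="Example-Passw0rd",        # placeholder only
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
)
```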

A company has resources hosted in their AWS Account. There is a requirement to monitor all API activity for all regions, and the audit needs to apply to future regions as well. Which of the following can be used to fulfil this requirement?

Options are :

  • Enable CloudTrail for each region, then enable it for each future region.
  • Ensure one CloudTrail trail is enabled for all regions. (Correct)
  • Create a CloudTrail trail for each region. Use CloudFormation to enable the trail for all future regions.
  • Create a CloudTrail trail for each region. Use AWS Config to enable the trail for all future regions.

Answer : Ensure one CloudTrail trail is enabled for all regions.

Explanation: Answer – B. The AWS Documentation mentions the following: You can now turn on a trail across all regions for your AWS account. CloudTrail will deliver log files from all regions to the Amazon S3 bucket and an optional CloudWatch Logs log group you specified. Additionally, when AWS launches a new region, CloudTrail will create the same trail in the new region. As a result, you will receive log files containing API activity for the new region without taking any action. Options A and C are invalid because enabling a separate trail in every region would be a maintenance overhead. Option D is invalid because AWS Config cannot be used to enable trails. For more information on this feature, please visit the following URL: https://aws.amazon.com/about-aws/whats-new/2015/12/turn-on-cloudtrail-across-all-regions-and-support-for-multiple-trails/
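
A minimal boto3 sketch of such a multi-region trail follows; the trail and bucket names are hypothetical, and the bucket is assumed to already have a policy allowing CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One trail that applies to all current regions -- and to regions
# AWS launches in the future.
cloudtrail.create_trail(
    Name="org-wide-trail",              # hypothetical
    S3BucketName="my-cloudtrail-logs",  # bucket policy must allow CloudTrail
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-wide-trail")
```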

A customer has an instance hosted in the AWS Public Cloud. The VPC and subnet used to host the Instance have been created with the default settings for the Network Access Control Lists. They need to provide an IT Administrator secure access to the underlying instance. How can this be accomplished?

Options are :

  • Ensure the Network Access Control Lists allow Inbound SSH traffic from the IT Administrator’s Workstation
  • Ensure the Network Access Control Lists allow Outbound SSH traffic from the IT Administrator’s Workstation
  • Ensure that the security group allows Inbound SSH traffic from the IT Administrator’s Workstation (Correct)
  • Ensure that the security group allows Outbound SSH traffic from the IT Administrator’s Workstation

Answer : Ensure that the security group allows Inbound SSH traffic from the IT Administrator’s Workstation

Explanation: Answer – C. Options A and B are invalid because the default NACL rules allow all inbound and outbound traffic. The requirement is that the IT Administrator should be able to access this EC2 instance from his workstation. For that, the Security Group of the EC2 instance must allow traffic from the IT Administrator's workstation; hence choice C is correct. Option D is incorrect because we need to enable Inbound SSH traffic on the EC2 instance Security Group, since the traffic originates from the IT admin's workstation.
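
For illustration, a minimal boto3 sketch of the required ingress rule follows; the security group ID and workstation IP are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH only from the administrator's workstation IP (/32),
# not from 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId="sg-0abc000000000000",  # hypothetical instance security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32",
                      "Description": "IT admin workstation"}],
    }],
)
```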

A company is planning to run a number of admin-related scripts using the AWS Lambda service. There is a need to understand if any errors are encountered when the scripts run. How can this be accomplished in the most effective manner?

Options are :

  • Use Cloudwatch metrics and logs to watch for errors (Correct)
  • Use Cloudtrail to monitor for errors
  • Use the AWS Config service to monitor for errors
  • Use the AWS Inspector service to monitor for errors

Answer : Use Cloudwatch metrics and logs to watch for errors

Explanation: Answer – A. The AWS Documentation mentions the following: AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs. Options B, C, and D are all invalid because these services cannot be used to monitor for errors. For more information on monitoring Lambda functions, please visit the following URL: https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-logs.html
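
As an example, a minimal boto3 sketch that scans a function's CloudWatch Logs group for error entries follows; the log group name assumes a hypothetical function named admin-script.

```python
import time
import boto3

logs = boto3.client("logs")

# Lambda writes to /aws/lambda/<function-name> automatically.
resp = logs.filter_log_events(
    logGroupName="/aws/lambda/admin-script",
    filterPattern="ERROR",
    startTime=int((time.time() - 3600) * 1000),  # last hour, in ms
)
for event in resp["events"]:
    print(event["timestamp"], event["message"])
```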

A company hosts data in S3. There is now a mandate that going forward, all data in the S3 bucket needs to be encrypted at rest. How can this be achieved?

Options are :

  • Use AWS Access keys to encrypt the data
  • Use SSL certificates to encrypt the data
  • Enable server side encryption on the S3 bucket (Correct)
  • Enable MFA on the S3 bucket

Answer : Enable server side encryption on the S3 bucket

Explanation: Answer – C. The AWS Documentation mentions the following: Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. Options A and B are invalid because neither access keys nor SSL certificates can be used to encrypt data at rest. Option D is invalid because MFA is just an extra level of security for S3 buckets. For more information on S3 server-side encryption, please refer to the URL below: https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
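
A minimal boto3 sketch of enabling default server-side encryption on a bucket follows; the bucket name is hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Default encryption: every new object is encrypted at rest with SSE-S3.
s3.put_bucket_encryption(
    Bucket="my-data-bucket",  # hypothetical
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)
```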

You are responsible for deploying a critical application onto AWS. Part of the requirements for this application is to ensure that the controls set for this application meet PCI compliance. There is also a need to monitor web application logs to identify any malicious activity. Which of the following services can be used to fulfil this requirement? Choose 2 answers from the options given below.

Options are :

  • Amazon Cloudwatch Logs (Correct)
  • Amazon VPC Flow Logs
  • Amazon AWS Config
  • Amazon Cloudtrail (Correct)

Answer : Amazon Cloudwatch Logs; Amazon Cloudtrail

Explanation: Answer – A and D. The AWS Documentation mentions the following about these services: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Option B is invalid because this is only used for VPCs. Option C is invalid because this is a configuration service and cannot be used for logging purposes. For more information on CloudTrail, please refer to the URL below: https://aws.amazon.com/cloudtrail/

You need to have a cloud security device which would allow you to generate encryption keys based on FIPS 140-2 Level 3. Which of the following can be used for this purpose?

Options are :

  • AWS KMS
  • AWS Customer Keys
  • AWS managed keys
  • AWS CloudHSM (Correct)

Answer : AWS CloudHSM

Explanation: Answer – D. The AWS Documentation mentions the following: AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries. CloudHSM is also standards-compliant and enables you to export all of your keys to most other commercially available HSMs. It is a fully managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high availability, and backups. CloudHSM also enables you to scale quickly by adding and removing HSM capacity on demand, with no up-front costs. All other options are invalid since AWS CloudHSM is the prime service that offers FIPS 140-2 Level 3 compliance. For more information on CloudHSM, please visit the following URL: https://aws.amazon.com/cloudhsm/

Your company currently has a set of EC2 Instances hosted in a VPC. The IT Security department suspects a possible DDoS attack on the instances. What can you do to zero in on the IP addresses which are sending a flurry of requests?

Options are :

  • Use VPC Flow logs to get the IP addresses accessing the EC2 Instances (Correct)
  • Use AWS CloudTrail to get the IP addresses accessing the EC2 Instances
  • Use AWS Config to get the IP addresses accessing the EC2 Instances
  • Use AWS Trusted Advisor to get the IP addresses accessing the EC2 Instances

Answer : Use VPC Flow logs to get the IP addresses accessing the EC2 Instances

Explanation: Answer – A. With VPC Flow Logs you can get the list of IP addresses which are hitting the Instances in your VPC. You can then use the information in the logs to see which external IP addresses are sending a flurry of requests, which could be the potential source of a DDoS attack. Option B is invalid because this is an API monitoring service and will not be able to get the IP addresses. Option C is invalid because this is a configuration service and will not be able to get the IP addresses. Option D is invalid because this is a recommendation service and will not be able to get the IP addresses. For more information on VPC Flow Logs, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
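
A minimal boto3 sketch of enabling flow logs for a VPC follows; the VPC ID, log group, and IAM role ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture accepted and rejected traffic for the whole VPC into
# CloudWatch Logs; the role must allow vpc-flow-logs to publish.
ec2.create_flow_logs(
    ResourceIds=["vpc-0abc000000000000"],   # hypothetical VPC
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)
```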

A company has an existing AWS account and a set of critical resources hosted in that account. The employee who was in charge of the root account has left the company. What must now be done to secure the account? Choose 3 answers from the options given below.

Options are :

  • Change the access keys for all IAM users.
  • Delete all custom created IAM policies
  • Delete the access keys for the root account (Correct)
  • Configure MFA on a secure device (Correct)
  • Change the password for the root account (Correct)
  • Change the password for all IAM users

Answer : Delete the access keys for the root account; Configure MFA on a secure device; Change the password for the root account

Explanation: Answer – C, D, and E. If the root account may have been compromised, you have to carry out the steps below:
1. Delete the access keys for the root account.
2. Configure MFA on a secure device.
3. Change the password for the root account.
This will ensure the employee who has left has no chance to compromise the resources in AWS. Option A is invalid because this would hamper the working of the current IAM users. Option B is invalid because this could hamper the current working of services in your AWS account. Option F is invalid because this would hamper the working of the current IAM users. For more information on the IAM root user, please visit the following URL: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html

You have a set of application, database, and web servers hosted in AWS. The web servers are placed behind an ELB. There are separate security groups for the application, database, and web servers, and the network security groups have been defined accordingly. There is an issue with the communication between the application and database servers. In order to troubleshoot the issue between just the application and database servers, what is the ideal set of MINIMAL steps you would take?

Options are :

  • Check the Inbound security rules for the database security group and the Outbound security rules for the application security group (Correct)
  • Check the Outbound security rules for the database security group and the Inbound security rules for the application security group
  • Check both the Inbound and Outbound security rules for the database security group, and the Inbound security rules for the application security group
  • Check the Outbound security rules for the database security group, and both the Inbound and Outbound security rules for the application security group

Answer : Check the Inbound security rules for the database security group and the Outbound security rules for the application security group

Explanation: Answer – A. Since the communication is established inward to the database server and outward from the application server, you need to check just the Inbound rules for the database server security group and just the Outbound rules for the application server security group. Option B is invalid because it reverses this: Inbound traffic needs to be checked for the database security group and Outbound traffic for the application security group. Option C is invalid because you don't need to check the Outbound security rules for the database security group. Option D is invalid because you don't need to check the Inbound security rules for the application security group. For more information on Security Groups, please refer to the URL below: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
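
As an illustration, a minimal boto3 sketch for pulling up exactly those rule sets follows; the two security group IDs are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs for the application and database security groups.
resp = ec2.describe_security_groups(
    GroupIds=["sg-0app000000000000", "sg-0db0000000000000"])

for sg in resp["SecurityGroups"]:
    print(sg["GroupId"], "inbound:", sg["IpPermissions"])
    print(sg["GroupId"], "outbound:", sg["IpPermissionsEgress"])
```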

A company is planning on extending their on-premises infrastructure to the AWS Cloud. They need a solution that gives the core benefit of traffic encryption and ensures latency is kept to a minimum. Which of the following would help fulfil this requirement? Choose 2 answers from the options given below.

Options are :

  • AWS VPN (Correct)
  • AWS VPC Peering
  • AWS NAT gateways
  • AWS Direct Connect (Correct)

Answer : AWS VPN; AWS Direct Connect

Explanation: Answer – A and D. AWS VPN provides encrypted tunnels between the on-premises network and AWS, and AWS Direct Connect provides a dedicated private network connection that keeps latency to a minimum. Option B is invalid because VPC peering is only used for connections between VPCs and cannot be used to connect on-premises infrastructure to the AWS Cloud. Option C is invalid because NAT gateways are used to let instances in a private subnet reach the Internet. For more information on VPN connections, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html

How can you ensure that an instance in a VPC does not use the AWS-provided DNS for resolving DNS requests? You want to use your own managed DNS instance. How can this be achieved?

Options are :

  • Change the existing DHCP options set
  • Create a new DHCP options set and replace the existing one. (Correct)
  • Change the route table for the VPC
  • Change the subnet configuration to allow DNS requests from the new DNS Server

Answer : Create a new DHCP options set and replace the existing one.

Explanation: Answer – B. In order to use your own DNS server, you need to create a new custom DHCP options set with the IP of the custom DNS server. You cannot modify the existing set, so you need to create a new one. Option A is invalid because you cannot make changes to an existing DHCP options set. Option C is invalid because route tables work with routes, not with a custom DNS solution. Option D is invalid because this needs to be done at the VPC level and not at the subnet level. For more information on DHCP options sets, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html
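
A minimal boto3 sketch follows; the DNS server IP and VPC ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a new options set pointing at the custom DNS server --
# existing DHCP options sets cannot be modified.
opts = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": ["10.0.0.2"]},  # custom DNS
    ]
)
ec2.associate_dhcp_options(
    DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0abc000000000000",  # hypothetical
)
```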

A Windows machine in one VPC needs to join the AD domain in another VPC. VPC Peering has been established, but the domain join is not working. What is the other step that needs to be followed to ensure that the AD domain join can work as intended?

Options are :

  • Change the VPC peering connection to a VPN connection
  • Change the VPC peering connection to a Direct Connect connection
  • Ensure the security groups for the AD hosted subnet has the right rule for relevant subnets (Correct)
  • Ensure that the AD is placed in a public subnet

Answer : Ensure the security groups for the AD hosted subnet has the right rule for relevant subnets

Explanation: Answer – C. In addition to VPC peering and setting the right route tables, the security groups for the AD EC2 instance need to have the right rules in place to allow incoming traffic from the relevant subnets. Options A and B are invalid because changing the connection type will not help; this is a problem with the Security Groups. Option D is invalid since AD should not be placed in a public subnet. For more information on allowing ingress traffic for AD, please visit the following URL: https://docs.aws.amazon.com/quickstart/latest/active-directory-ds/ingress.html

You have a requirement to store objects in an S3 bucket with a key that is automatically managed and rotated. Which of the following can be used for this purpose?

Options are :

  • AWS KMS
  • AWS S3 Server side encryption (Correct)
  • AWS Customer Keys
  • AWS CloudHSM

Answer : AWS S3 Server side encryption

Explanation: Answer – B. The AWS Documentation mentions the following: Server-side encryption protects data at rest. Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) uses strong multi-factor encryption. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it rotates regularly. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data. All other options are invalid since with them you would need to rotate the keys manually, because you manage the entire key set. With AWS S3 server-side encryption, AWS manages the rotation of keys automatically. For more information on server-side encryption, please visit the following URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
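
For illustration, a minimal boto3 sketch of uploading an object with SSE-S3 follows; the bucket, key, and body are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon S3 creates, manages, and rotates the keys itself.
s3.put_object(
    Bucket="my-data-bucket",      # hypothetical
    Key="reports/2018-01.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",
)
```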

A company stores critical data in an S3 bucket. There is a requirement to ensure that an extra level of security is added to the S3 bucket. In addition, it should be ensured that objects are available in a secondary region if the primary one goes down. Which of the following can help fulfil these requirements? Choose 2 answers from the options given below.

Options are :

  • Enable bucket versioning and also enable CRR (Correct)
  • Enable bucket versioning and enable Master Pays
  • For the Bucket policy add a condition for { "Null": { "aws:MultiFactorAuthAge": true }} (Correct)
  • Enable the Bucket ACL and add a condition for { "Null": { "aws:MultiFactorAuthAge": true }}

Answer : Enable bucket versioning and also enable CRR; For the Bucket policy add a condition for { "Null": { "aws:MultiFactorAuthAge": true }}

Explanation: Answer – A and C. Versioning together with Cross Region Replication (CRR) ensures that objects are available in the destination region in case the primary region fails, and the bucket policy condition on aws:MultiFactorAuthAge adds an extra level of security by requiring MFA. Option B is invalid because just enabling bucket versioning will not guarantee replication of objects. Option D is invalid because the condition needs to be set in the bucket policy; it cannot be attached to a bucket ACL. For more information on example bucket policies, please visit the following URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html For more information on CRR, please visit the following URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
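
As an illustration, a minimal boto3 sketch of such a bucket policy follows; the bucket name is hypothetical, and the Deny-unless-MFA statement is one common way of expressing the condition from option C.

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny requests that were not authenticated with MFA; the Null condition
# matches requests where aws:MultiFactorAuthAge is absent.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessWithoutMFA",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::my-critical-bucket/*",  # hypothetical
        "Condition": {"Null": {"aws:MultiFactorAuthAge": "true"}},
    }],
}
s3.put_bucket_policy(Bucket="my-critical-bucket", Policy=json.dumps(policy))
```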

Your company manages thousands of EC2 Instances. There is a mandate to ensure that all servers don’t have any critical security flaws. Which of the following can be done to ensure this? Choose 2 answers from the options given below.

Options are :

  • Use AWS Config to ensure that the servers have no critical flaws.
  • Use AWS Inspector to ensure that the servers have no critical flaws. (Correct)
  • Use AWS Inspector to patch the servers
  • Use AWS SSM to patch the servers (Correct)

Answer : Use AWS Inspector to ensure that the servers have no critical flaws; Use AWS SSM to patch the servers

Explanation: Answer – B and D. The AWS Documentation mentions the following on AWS Inspector: Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API. Option A is invalid because the AWS Config service is not used to check for vulnerabilities on servers. Option C is invalid because the AWS Inspector service is not used to patch servers. For more information on AWS Inspector, please visit the following URL: https://aws.amazon.com/inspector/ Once you understand the list of servers which require critical updates, you can rectify them by installing the required patches via the SSM tool. For more information on Systems Manager, please visit the following URL: https://docs.aws.amazon.com/systems-manager/latest/APIReference/Welcome.html
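
A minimal boto3 sketch of triggering patching through Systems Manager follows; the instance IDs are placeholders, and the AWS-RunPatchBaseline document is assumed to have a patch baseline associated with the instances.

```python
import boto3

ssm = boto3.client("ssm")

# AWS-RunPatchBaseline scans or installs patches according to the
# patch baseline associated with the instances.
ssm.send_command(
    InstanceIds=["i-0abc000000000000", "i-0def000000000000"],  # hypothetical
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},  # use ["Scan"] to only report
)
```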

You need to inspect the running processes on an EC2 Instance that may have a security issue. How can you achieve this in the easiest way possible, while ensuring that the process does not interfere with the continuous running of the instance?

Options are :

  • Use AWS Cloudtrail to record the processes running on the server to an S3 bucket.
  • Use AWS Cloudwatch to record the processes running on the server
  • Use the SSM Run command to send the list of running processes information to an S3 bucket. (Correct)
  • Use AWS Config to see the changed process information on the server

Answer : Use the SSM Run command to send the list of running processes information to an S3 bucket.

Explanation: Answer – C. The SSM Run Command can be used to send OS-specific commands to an Instance. Here you can check the running processes on an instance and then send the output to an S3 bucket. Option A is invalid because CloudTrail records API activity and cannot be used to record running processes. Option B is invalid because CloudWatch is a logging and metric service and cannot be used to record running processes. Option D is invalid because AWS Config is a configuration service and cannot be used to record running processes. For more information on the Systems Manager Run Command, please visit the following URL: https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html
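
A minimal boto3 sketch of such a Run Command invocation follows; the instance ID and output bucket are hypothetical.

```python
import boto3

ssm = boto3.client("ssm")

# Run a read-only OS command on the live instance and ship the
# output to S3 -- the instance keeps running undisturbed.
ssm.send_command(
    InstanceIds=["i-0abc000000000000"],     # hypothetical
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["ps aux"]},
    OutputS3BucketName="forensics-output",  # hypothetical bucket
    OutputS3KeyPrefix="running-processes/",
)
```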

You are trying to use the Systems Manager to patch a set of EC2 systems. Some of the systems are not getting covered in the patching process. Which of the following can be used to troubleshoot the issue? Choose 3 answers from the options given below.

Options are :

  • Check to see if the right role has been assigned to the EC2 Instances (Correct)
  • Check to see if the IAM user has the right permissions for EC2
  • Ensure that the SSM agent is running on the Instances. (Correct)
  • Check the Instance status by using the Health API. (Correct)

Answer : Check to see if the right role has been assigned to the EC2 Instances; Ensure that the SSM agent is running on the Instances; Check the Instance status by using the Health API.

Explanation: Answer – A, C, and D. For ensuring that the instances are configured properly you need to ensure the following: 1) You installed the latest version of the SSM Agent on your instance. 2) Your instance is configured with an AWS Identity and Access Management (IAM) role that enables the instance to communicate with the Systems Manager API. 3) You can use the Amazon EC2 Health API to quickly determine the following information about Amazon EC2 instances:
  • The status of one or more instances
  • The last time the instance sent a heartbeat value
  • The version of the SSM Agent
  • The operating system
  • The version of the EC2Config service (Windows)
  • The status of the EC2Config service (Windows)
Option B is invalid because IAM users are not supposed to be directly granted permissions to EC2 Instances. For more information on troubleshooting AWS SSM, please visit the following URL: https://docs.aws.amazon.com/systems-manager/latest/userguide/troubleshooting-remote-commands.html

A company has a large set of keys defined in AWS KMS. Their developers frequently use the keys for the applications being developed. What is one way to reduce the cost of accessing the keys in the AWS KMS service?

Options are :

  • Enable rotation of the keys
  • Use Data key caching (Correct)
  • Create an alias of the key
  • Use the right key policy

Answer : Use Data key caching

Explanation: Answer – B. The AWS Documentation mentions the following: Data key caching stores data keys and related cryptographic material in a cache. When you encrypt or decrypt data, the AWS Encryption SDK looks for a matching data key in the cache. If it finds a match, it uses the cached data key rather than generating a new one. Data key caching can improve performance, reduce cost, and help you stay within service limits as your application scales. Options A, C, and D are all incorrect since these options will not reduce the number of calls made to the KMS service. For more information on data key caching, please refer to the URL below: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/data-key-caching.html
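
A minimal sketch using the AWS Encryption SDK for Python (the aws-encryption-sdk package, version 2 or later is assumed) follows; the KMS key ARN and the cache tuning values are illustrative.

```python
import aws_encryption_sdk
from aws_encryption_sdk import (
    CachingCryptoMaterialsManager,
    LocalCryptoMaterialsCache,
    StrictAwsKmsMasterKeyProvider,
)

client = aws_encryption_sdk.EncryptionSDKClient()
key_provider = StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"])

# Cached data keys are reused, so fewer GenerateDataKey calls hit KMS.
cache = LocalCryptoMaterialsCache(capacity=100)
caching_cmm = CachingCryptoMaterialsManager(
    master_key_provider=key_provider,
    cache=cache,
    max_age=300.0,              # seconds a cached key may be reused
    max_messages_encrypted=10,  # messages per cached key
)

ciphertext, header = client.encrypt(
    source=b"sensitive payload", materials_manager=caching_cmm)
```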

You are trying to use the AWS Systems Manager Run Command on a set of Instances, but the Run Command is not working on some of them. What can you do to diagnose the issue? Choose 2 answers from the options given below.

Options are :

  • Ensure that the SSM agent is running on the target machine (Correct)
  • Check the /var/log/amazon/ssm/errors.log file (Correct)
  • Ensure the right AMI is used for the Instance
  • Ensure the security groups allow outbound communication for the Instance

Answer : Ensure that the SSM agent is running on the target machine; Check the /var/log/amazon/ssm/errors.log file

Explanation: Answer – A and B. The AWS Documentation mentions the following: If you experience problems executing commands using Run Command, there might be a problem with the SSM Agent. Use the following information to help you troubleshoot the agent. View Agent Logs: the SSM Agent logs information in the following files, and the information in these files can help you troubleshoot problems. On Windows:
  • %PROGRAMDATA%\Amazon\SSM\Logs\amazon-ssm-agent.log
  • %PROGRAMDATA%\Amazon\SSM\Logs\error.log
Note: the default filename of the seelog is seelog.xml.template. If you modify a seelog, you must rename the file to seelog.xml. On Linux:
  • /var/log/amazon/ssm/amazon-ssm-agent.log
  • /var/log/amazon/ssm/errors.log
Option C is invalid because the right AMI has nothing to do with the issue; the agent which executes Run Commands can run on a variety of AMIs. Option D is invalid because security groups do not come into the picture for the communication between the agent and the SSM service. For more information on troubleshooting AWS SSM, please visit the following URL: https://docs.aws.amazon.com/systems-manager/latest/userguide/troubleshooting-remote-commands.html

You are working for a company and have been allocated the task of ensuring that there is a federated authentication mechanism set up between AWS and their on-premises Active Directory. Which of the following are important steps that need to be covered in this process? Choose 2 answers from the options given below.

Options are :

  • Ensure the right match is in place for On-premise AD Groups and IAM Roles. (Correct)
  • Ensure the right match is in place for On-premise AD Groups and IAM Groups.
  • Configure AWS as the relying party in Active Directory
  • Configure AWS as the relying party in Active Directory Federation services (Correct)

Answer : Ensure the right match is in place for On-premise AD Groups and IAM Roles; Configure AWS as the relying party in Active Directory Federation services

Explanation: Answer – A and D. The AWS Documentation mentions some key aspects with regards to the configuration of on-premises AD with AWS: one is the Groups configuration in AD, and the other is the configuration of the relying party, which is AWS. ADFS federation occurs with the participation of two parties: the identity or claims provider (in this case the owner of the identity repository – Active Directory) and the relying party, which is another application that wishes to outsource authentication to the identity provider; in this case Amazon Secure Token Service (STS). The relying party is a federation partner that is represented by a claims provider trust in the federation service. Option B is invalid because AD groups should be matched to IAM Roles, not IAM Groups. Option C is invalid because the relying party should be configured in Active Directory Federation Services. For more information on federated access, please visit the following URL: https://aws.amazon.com/blogs/security/aws-federated-authentication-with-active-directory-federation-services-ad-fs/

Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premise LDAP (Lightweight Directory Access Protocol) directory service?

Options are :

  • Use an IAM policy that references the LDAP account identifiers and the AWS credentials.
  • Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP. (Correct)
  • Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials.
  • Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated.

Answer : Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.

Explanation: Answer – B. On the AWS Blog site the following information is present to help in this context: The newly released whitepaper, Single Sign-On: Integrating AWS, OpenLDAP, and Shibboleth, will help you integrate your existing LDAP-based user directory with AWS. When you integrate your existing directory with AWS, your users can access AWS by using their existing credentials. This means that your users don't need to maintain yet another user name and password just to access AWS resources. Options A, C, and D are all invalid because in this sort of configuration you have to use SAML to enable single sign-on. For more information on integrating AWS with LDAP for single sign-on, please visit the following URL: https://aws.amazon.com/blogs/security/new-whitepaper-single-sign-on-integrating-aws-openldap-and-shibboleth/

You have an EBS volume attached to an EC2 Instance which uses KMS for encryption. Someone has now gone ahead and deleted the Customer Key which was used for the EBS encryption. What should be done to ensure the data can be decrypted?

Options are :

  • Create a new Customer Key using KMS and attach it to the existing volume
  • Copy the data from the EBS volume before detaching it from the Instance (Correct)
  • Request AWS Support to recover the key
  • Use AWS Config to recover the key

Answer : Copy the data from the EBS volume before detaching it from the Instance

Explanation: Answer – B. The AWS Documentation describes the series of steps that are followed when EBS uses KMS for encryption. The following explains how Amazon EBS uses your CMK:
1. When you create an encrypted EBS volume, Amazon EBS sends a GenerateDataKeyWithoutPlaintext request to AWS KMS, specifying the CMK that you chose for EBS volume encryption.
2. AWS KMS generates a new data key, encrypts it under the specified CMK, and then sends the encrypted data key to Amazon EBS to store with the volume metadata.
3. When you attach the encrypted volume to an EC2 instance, Amazon EC2 sends the encrypted data key to AWS KMS with a Decrypt request.
4. AWS KMS decrypts the encrypted data key and then sends the decrypted (plaintext) data key to Amazon EC2.
5. Amazon EC2 uses the plaintext data key in hypervisor memory to encrypt disk I/O to the EBS volume. The data key persists in memory as long as the EBS volume is attached to the EC2 instance.
Once the CMK is deleted, the encrypted data key can no longer be decrypted, so the data should be copied off the volume while it is still attached and the plaintext data key is still in memory. Option A is invalid because you cannot attach customer master keys after the volume is encrypted. For more information, please visit the following URL: https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html

You work as an administrator for a company. The company hosts a number of resources using AWS. There is an incident of suspicious API activity which occurred 11 days ago. The Security Admin has asked you to get the API activity from that point in time. How can this be achieved?

Options are :

  • Search the CloudWatch logs for the suspicious activity which occurred 11 days ago
  • Search the CloudTrail event history for the API events which occurred 11 days ago. (Correct)
  • Search the CloudWatch metrics for the suspicious activity which occurred 11 days ago
  • Use AWS Config to get the API calls which were made 11 days ago.

Answer : Search the CloudTrail event history for the API events which occurred 11 days ago.

Explanation: Answer – B. The CloudTrail event history allows you to view events recorded for the last 90 days, so one can search the event history for the API calls from 11 days ago. Options A and C are invalid because CloudWatch is used for logs and metrics, not for recording API activity. Option D is invalid because AWS Config is a configuration service and not for monitoring API activity. For more information on AWS CloudTrail, please visit the following URL: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html
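
A minimal boto3 sketch of querying the event history around that date follows; the time window and result size are illustrative.

```python
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail")

# Event history covers the last 90 days, so a window around
# 11 days ago is still available.
start = datetime.utcnow() - timedelta(days=12)
end = datetime.utcnow() - timedelta(days=10)

resp = cloudtrail.lookup_events(StartTime=start, EndTime=end, MaxResults=50)
for event in resp["Events"]:
    print(event["EventTime"], event["EventName"])
```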

You need to ensure that the CloudTrail logs which are being delivered in your AWS account are encrypted. How can this be achieved in the easiest way possible?

Options are :

  • Don’t do anything, since CloudTrail logs are automatically encrypted. (Correct)
  • Enable S3-SSE for the underlying bucket which receives the log files
  • Enable S3-KMS for the underlying bucket which receives the log files
  • Enable KMS encryption for the logs which are sent to Cloudwatch

Answer : Don’t do anything, since CloudTrail logs are automatically encrypted.

Explanation: Answer – A. The AWS Documentation mentions the following: By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). Options B, C, and D are all invalid because by default all logs are encrypted when they are sent by CloudTrail to S3 buckets. For more information on AWS CloudTrail log encryption, please visit the following URL: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html

You have a requirement to serve up private content using the keys available with CloudFront. How can this be achieved?

Options are :

  • Add the keys to the backend distribution.
  • Add the keys to the S3 bucket
  • Create pre-signed URLs (Correct)
  • Use AWS Access keys

Answer : Create pre-signed URLs

Explanation: Answer – C. You can use CloudFront key pairs to create trusted pre-signed URLs which can be distributed to users. Options A and B are invalid because you would not add keys to either the backend distribution or the S3 bucket. Option D is invalid because AWS access keys are used for programmatic access to AWS resources. For more information on serving private trusted content via CloudFront, please visit the following URL: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-trusted-signers.html
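
A minimal sketch using botocore's CloudFrontSigner together with the cryptography package follows; the key pair ID, private key file, distribution domain, and expiry date are all placeholders.

```python
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign with the private key matching the CloudFront key pair.
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("APKAEXAMPLE", rsa_signer)  # key pair ID placeholder
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/video.mp4",
    date_less_than=datetime.datetime(2019, 1, 1),  # URL expiry
)
print(url)
```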

You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly?

Options are :

  • Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI. (Correct)
  • Add the CloudFront account security group “amazon-cf/amazon-cf-sg” to the appropriate S3 bucket policy.
  • Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
  • Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

Answer : Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

Explanation: Answer – A. The AWS Documentation mentions the following: You can optionally secure the content in your Amazon S3 bucket so users can access it through CloudFront but cannot access it directly by using Amazon S3 URLs. This prevents anyone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. This step isn't required to use signed URLs, but we recommend it. To require that users access your content through CloudFront URLs, you perform the following tasks:
  • Create a special CloudFront user called an origin access identity.
  • Give the origin access identity permission to read the objects in your bucket.
  • Remove permission for anyone else to use Amazon S3 URLs to read the objects.
Options B, C, and D are all invalid because the right way is to create an Origin Access Identity (OAI) for CloudFront and grant access accordingly. For more information on serving private content via CloudFront, please visit the following URL: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
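
As an illustration, a minimal boto3 sketch of the bucket policy granting read access to an OAI follows; the bucket name and the OAI ID (E1EXAMPLE) are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Grant read access to the OAI only; the ID below stands in for the one
# returned when the origin access identity is created.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": ("arn:aws:iam::cloudfront:user/"
                              "CloudFront Origin Access Identity E1EXAMPLE")},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::training-videos/*",  # hypothetical bucket
    }],
}
s3.put_bucket_policy(Bucket="training-videos", Policy=json.dumps(policy))
```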

Your company has a requirement to work with a DynamoDB table. There is a security mandate that all data should be encrypted at rest. What is the easiest way to accomplish this for DynamoDB?

Options are :

  • Use the AWS SDK to encrypt the data before sending it to the DynamoDB table
  • Encrypt the table using AWS KMS before it is created (Correct)
  • Encrypt the table using AWS KMS after it is created
  • Use S3 buckets to encrypt the data before sending it to DynamoDB

Answer : Encrypt the table using AWS KMS before it is created

Explanation: Answer – B. The easiest option is to enable encryption when the DynamoDB table is created. The AWS Documentation mentions the following: Amazon DynamoDB offers fully managed encryption at rest. DynamoDB encryption at rest provides enhanced security by encrypting your data at rest using an AWS Key Management Service (AWS KMS) managed encryption key for DynamoDB. This functionality eliminates the operational burden and complexity involved in protecting sensitive data. Option A is partially correct: you can use the AWS SDK to encrypt the data, but the easier option is to encrypt the table beforehand. Option C is invalid because you cannot encrypt the table after it is created. Option D is invalid because encryption for S3 buckets applies only to objects in S3. For more information on securing data at rest for DynamoDB, please refer to the URL below: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html
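
A minimal boto3 sketch of creating a table with encryption at rest enabled follows; the table name and key schema are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Encryption at rest is requested at creation time via SSESpecification.
dynamodb.create_table(
    TableName="sensitive-data",  # hypothetical
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={"Enabled": True},  # KMS-managed key for DynamoDB
)
```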

Your company hosts critical data in an S3 bucket. There is a requirement to ensure that all data is encrypted. There is also metadata about the information stored in the bucket that needs to be encrypted as well. Which of the below measures would you take to ensure this requirement is fulfilled?

Options are :

  • Put the metadata as metadata for each object in the S3 bucket and then enable S3 Server side encryption.
  • Put the metadata as metadata for each object in the S3 bucket and then enable S3 Server KMS encryption.
  • Put the metadata in a DynamoDB table and ensure the table is encrypted during creation time. (Correct)
  • Put the metadata in the S3 bucket itself.

Answer : Put the metadata in a DynamoDB table and ensure the table is encrypted during creation time.

Explanation: Answer – C. Options A, B, and D are all invalid because the metadata will not be encrypted in any case, and this is a key requirement from the question. One key thing to note is that when S3 bucket objects are encrypted, the metadata is not encrypted. So the best option is to put the metadata in an encrypted DynamoDB table. For more information on using KMS encryption for S3, please refer to the URL below: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html

One of the EC2 Instances in your company has been compromised. What steps would you take to ensure that you can apply digital forensics on the Instance? Select 2 answers from the options given below.

Options are :

  • Remove the role applied to the EC2 Instance
  • Create a separate forensic instance (Correct)
  • Ensure that the security groups only allow communication to this forensic instance (Correct)
  • Terminate the instance

Answer : Create a separate forensic instance; Ensure that the security groups only allow communication to this forensic instance

Explanation: Answer – B and C. Option A is invalid because removing the role will not completely help in such a situation. Option D is invalid because terminating the instance means that you cannot conduct forensic analysis on it. One way to isolate an affected EC2 instance for investigation is to place it in a Security Group that only the forensic investigators can access: close all ports except those needed to receive inbound SSH or RDP traffic from the single IP address from which the investigators can safely examine the instance. For more information on security scenarios for your EC2 Instance, please refer to the URL below: https://d1.awsstatic.com/Marketplace/scenarios/security/SEC_11_TSB_Final.pdf

One of your company’s EC2 Instances has been compromised. The company has strict policies and needs a thorough investigation to find the culprit behind the security breach. What would you do in this case? Choose 3 answers from the options given below.

Options are :

  • Take a snapshot of the EBS volume (Correct)
  • Isolate the machine from the network (Correct)
  • Ensure logging and audit is enabled for all services (Correct)
  • Ensure all passwords for all IAM users are changed
  • Ensure that all access keys are rotated.

Answer : Take a snapshot of the EBS volume; Isolate the machine from the network; Ensure logging and audit is enabled for all services

Explanation: Answer – A, B, and C. Some of the important aspects in such a situation are:
1. First isolate the instance so that no further harm can occur to other AWS resources.
2. Take a snapshot of the EBS volume for further investigation; this is in case you need to shut down the initial instance and do a separate investigation on the data.
3. Ensure that logging is enabled for all services, so you can investigate any abnormal behavior which could have been caused by the security breach.
Options D and E are invalid because they could have adverse effects on the other IAM users. For more information on adopting a security framework, please refer to the URL below: https://d1.awsstatic.com/whitepapers/compliance/NIST_Cybersecurity_Framework_CSF.pdf
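
A minimal boto3 sketch of the snapshot step follows; the volume ID and description are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Preserve the disk state for offline analysis before any remediation.
ec2.create_snapshot(
    VolumeId="vol-0abc000000000000",  # hypothetical compromised volume
    Description="Forensic snapshot - incident under investigation",
)
```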

Your company has a set of EC2 Instances that are placed behind an ELB. Some of the applications hosted on these instances communicate via a legacy protocol. There is a security mandate that all traffic between the client and the EC2 Instances need to be secure. How would you accomplish this?

Options are :

  • Use an Application Load balancer and terminate the SSL connection at the ELB
  • Use a Classic Load balancer and terminate the SSL connection at the ELB
  • Use an Application Load balancer and terminate the SSL connection at the EC2 Instances
  • Use a Classic Load balancer and terminate the SSL connection at the EC2 Instances (Correct)

Answer : Use a Classic Load balancer and terminate the SSL connection at the EC2 Instances

Explanation: Answer – D. Since there are applications which work on legacy protocols, you need an ELB that can operate at the network (TCP) layer, and hence you should choose the Classic Load Balancer. Since the traffic needs to be secure all the way to the EC2 Instances, SSL termination should occur on the EC2 Instances. Options A and C are invalid because you need to use a Classic Load Balancer for this legacy application. Option B is incorrect since encryption is required all the way to the EC2 Instances. For more information on HTTPS listeners for Classic Load Balancers, please refer to the URL below: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-https-load-balancers.html
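
A minimal boto3 sketch of a Classic Load Balancer with a TCP passthrough listener follows; the load balancer name and subnet ID are placeholders.

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# TCP:443 passthrough -- the load balancer does not terminate SSL,
# so the encrypted session ends on the EC2 Instances.
elb.create_load_balancer(
    LoadBalancerName="legacy-app-elb",    # hypothetical
    Listeners=[{
        "Protocol": "TCP",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "TCP",
        "InstancePort": 443,
    }],
    Subnets=["subnet-0abc000000000000"],  # hypothetical
)
```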

A company has hired a third-party security auditor, and the auditor needs read-only access to all AWS resources and logs of all VPC records and events that have occurred on AWS. How can the company meet the auditor's requirements without compromising security in the AWS environment? Choose the correct answer from the options below.

Options are :

  • Create a role that has the required permissions for the auditor.
  • Create an SNS notification that sends the CloudTrail log files to the auditor's email when CloudTrail delivers the logs to S3, but do not allow the auditor access to the AWS environment.
  • The company should contact AWS as part of the shared responsibility model, and AWS will grant required access to the third-party auditor.
  • Enable CloudTrail logging and create an IAM user who has read-only permissions to the required AWS resources, including the bucket containing the CloudTrail logs. (Correct)

Answer : Enable CloudTrail logging and create an IAM user who has read-only permissions to the required AWS resources, including the bucket containing the CloudTrail logs.

Explanation: Answer – D. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting. Options A and C are incorrect since CloudTrail needs to be used as part of the solution. Option B is incorrect since the auditor needs to have access to CloudTrail itself, not just emailed log files. For more information on CloudTrail, please visit the URL below: https://aws.amazon.com/cloudtrail/

Your company has a set of EC2 Instances defined in AWS. They need to ensure that all traffic packets are monitored and inspected for any security threats. How can this be achieved? Choose 2 answers from the options given below

Options are :

  • Use a host based intrusion detection system (Correct)
  • Use a third party firewall installed on a central EC2 Instance (Correct)
  • Use VPC Flow logs
  • Use Network Access control lists logging

Answer : Use a host based intrusion detection system; Use a third party firewall installed on a central EC2 Instance

Explanation: Answer – A and B. If you want to inspect the packets themselves, you need to use custom software such as a host-based intrusion detection system or a third-party firewall; a diagram of this approach is given in the AWS Security Best Practices whitepaper. Option C is invalid because VPC Flow Logs cannot conduct packet inspection. Option D is invalid because logging is not available for Network Access Control Lists. For more information on AWS security best practices, please refer to the URL below: https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
