
AWS Security Overview - Part II - IAM


Overview

Before I get too far ahead in this series, I am going to pause quickly on the networking side and discuss Identity and Access Management (IAM). IAM is a critical component of AWS and will play into future blog posts.

I will not be discussing security architecture around IAM.

For all other posts in this series see: AWS Security Overview Table of Contents

Let's begin...


NOTE: For any definitions, I am going to use them more or less verbatim from the AWS documentation. Their documentation is robust and usually pretty current. So rather than citing everything, just assume I have taken it directly from the respective AWS documentation.

Identity and Access Management (IAM)

Let me start off by saying that if you really want to understand IAM, you should read the 689-page IAM User Guide. It's your friend. It's also filled with labs that you can do on your own.

If you don't want to read the entire User Guide at least take a quick look at the IAM Features so you know what features IAM provides.

At the end of the day, identity and access management is a very important requirement. It's included in every compliance standard I personally know of, and if you're using AWS, you will use IAM.

Definition

Amazon's definition is, "AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and how they can use resources (authorization)."

It's centralized and allows for fine-grained controls. Within IAM you have Users, Groups, and Roles.

IAM API

More technical, but the 425-page IAM API documentation (and the API docs for other AWS services) should also be required reading.

Every action within AWS is an API call. As a result, the API documentation seems like a logical place to begin when you want to start building alerting, monitoring and auditing around IAM. It's also an ideal starting point for creating IAM policies.

I would suggest first meeting with your governance team and figuring out what kinds of actions you must monitor at a minimum, then building correlation rules from there.

Consider this entry from the PCI DSS Effective Daily Log Monitoring document.

10.2.5 Use of and changes to identification and authentication mechanisms—including but not limited to creation of new accounts and elevation of privileges—and all changes, additions, or deletions to accounts with root or administrative privileges

As an example, let's say an admin wanted to gain access to billing information for some reason. They would need to attach a Billing IAM policy to their account.

This action would make an API call to AttachUserPolicy. This would be the eventName in the log.
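For reference, the same action from the CLI would be a single command along these lines; the user name and the managed Billing policy ARN here simply mirror the log snippet below.

aws iam attach-user-policy --user-name polsen --policy-arn arn:aws:iam::aws:policy/job-function/Billing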

An example log entry looks like this; however, I did remove a few things for privacy and brevity.

{
    "eventVersion": "1.02",
    "userIdentity": {
        "type": "IAMUser",
        "arn": "arn:aws:iam::3xxxxxxxxxx8:user/polsen",
        "userName": "polsen",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2017-11-07T21:18:20Z"
            }
        },
        "invokedBy": "signin.amazonaws.com"
    },
    "eventTime": "2017-11-07T21:18:41Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "AttachUserPolicy",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "76.26.x.x",
    "userAgent": "signin.amazonaws.com",
    "requestParameters": {
        "userName": "polsen",
        "policyArn": "arn:aws:iam::aws:policy/job-function/Billing"
    }
}

So now you could set up alerting on this eventName and a policyArn of "Billing", or prevent the action altogether. Or, if you don't want to prevent it, you could automatically remove the policy from said user (via code) when an alert triggers.
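As a rough sketch, if your CloudTrail logs are already being delivered to a CloudWatch Logs log group (the group name below is made up), you could create a metric filter on the eventName and hang an alarm off of it:

aws logs put-metric-filter \
    --log-group-name CloudTrail/DefaultLogGroup \
    --filter-name IAMAttachUserPolicy \
    --filter-pattern '{ ($.eventName = "AttachUserPolicy") }' \
    --metric-transformations metricName=IAMAttachUserPolicyCount,metricNamespace=CloudTrailMetrics,metricValue=1

From there you could wire the metric up to a CloudWatch alarm, or have a Lambda function call DetachUserPolicy automatically when the alert fires.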

In either case, there are a myriad of API calls for each of the AWS services, which is why your team needs to understand them in detail.

I will revisit APIs at a later time.


AWS Access

There are multiple ways to access your AWS account and/or its resources/services.

  • Email and password (root user creds - see below)
  • IAM user name and password via the AWS console
  • Access Keys - used with the command line interface and programmatically via code
  • Key Pairs - used with specific AWS services
  • SSH Keys with CodeCommit
  • Server certificates, which can be used to authenticate to some AWS services
  • You can also enable Multi-Factor Authentication (MFA)

You also have the Security Token Service (STS), which allows you to request temporary, limited-privilege credentials. It's useful in scenarios such as identity federation, delegation, cross-account access, and IAM roles.

Root User Credentials

When you create your Amazon AWS account you will have to set an email address and a password for your account. This combination is called your root user credentials. This account gives you unrestricted access to all resources, including billing.

NO ONE should be using this account for daily operations. If a person claims they need to use this account "to get their job done", and it's not one of the items listed here, they should really be fired.

You should also setup multi-factor authentication (MFA) on the root account (really all elevated and critical accounts at a minimum). The MFA token for the root account should then be locked away someplace so it can be used in the case of an emergency.

You could even keep the email and password in one location, and the MFA token in another, to enforce two-person procedures.

AWS gives you a checklist to follow here once you create an AWS account.

[Screenshot: IAM console security status checklist]

So if you're not supposed to use your root account, then what? Well, the best method is to create a new IAM Group, assign it the AdministratorAccess policy, create an IAM User, and assign that User to the new group. Log out of the root account, and manage your AWS environment via that new user.

An IAM user with administrator permissions is not the same thing as the AWS account root user.
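If you want to set that up from the CLI instead of the console, a minimal sketch looks something like this (the group name, user name, and password are just examples):

aws iam create-group --group-name Admins
aws iam attach-group-policy --group-name Admins --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-user --user-name padmin
aws iam create-login-profile --user-name padmin --password '<strong-password-here>'
aws iam add-user-to-group --group-name Admins --user-name padmin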

IAM Users

With AWS, a user does not have to represent an actual person; it's just an identity with credentials and permissions. An IAM user may be a service, for example.

If you use IAM service accounts, they are really no different from the service accounts you have been using with Active Directory.

An IAM user is associated with one and only one AWS account, and when you create a user within IAM, that user cannot access anything in the account by default.

For example, here is a new user, blogtest. As you can see, there are no permissions assigned to this user. He/she can't do anything.

[Screenshot: new IAM user blogtest with no permissions attached]

AWS identifies an IAM User by the following:

  • Friendly Name (UserName) - "meh"
  • Amazon Resource Name (Arn) - arn:aws:iam::3xxxxxxxxxx8:user/meh
  • You use the ARN when you want to uniquely identify the person across all of AWS. For example, if you were writing an IAM policy and wanted to specify this user as a Principal within the IAM policy.
  • Unique Identifier (UserId) - returned when you create a user via the API and/or CLI tools

Here is an example:

aws iam create-user --user-name meh

{
    "User": {
        "UserName": "meh",
        "Path": "/",
        "CreateDate": "2017-11-09T14:34:07.488Z",
        "UserId": "AIDAISW5HE5JJWFML3H6Y",
        "Arn": "arn:aws:iam::3xxxxxxxxxx8:user/meh"
    }
}

To change a user's name or path, you must use the AWS CLI, PowerShell tools, or the API; you can't do it from the console. IAM does not automatically update policies that refer to the user as a resource.

An example of this is below. You can compare it with the code snippet above: the UserId stayed the same, and the UserName and Arn were updated to mehmeh.

aws iam update-user --user-name meh --new-user-name mehmeh

aws iam get-user --user-name mehmeh

{
    "User": {
        "UserName": "mehmeh",
        "Path": "/",
        "CreateDate": "2017-11-09T14:34:07Z",
        "UserId": "AIDAISW5HE5JJWFML3H6Y",
        "Arn": "arn:aws:iam::3xxxxxxxxxx8:user/mehmeh"
    }
}

You can only have 5,000 users in an AWS account. Likewise, the MFA device limit is equal to the user quota for the account (5,000). A user can only be a member of 10 groups.

IAM Groups

An IAM Group is a collection of IAM Users.

Groups are a way to more easily manage users. It's recommended that IAM policies be attached to IAM Groups rather than to specific users. When you assign a user to a particular group, the user automatically receives the permissions assigned to that group.

A group is not an identity. You cannot assign a group as a Principal in an access policy. When creating policies, the Principal element is used to specify an IAM user, federated user, or assumed-role user.

A group can contain many users, and users can belong to multiple groups (the limit is 10). You cannot nest groups; groups cannot contain other groups in a parent/child hierarchy.

You can only have 300 groups in one AWS account.

You can list your groups via the CLI:

aws iam list-groups
{
    "Groups": [
        {
            "Path": "/",
            "CreateDate": "2017-11-06T18:16:07Z",
            "GroupId": "AGxxxxxxxxxxxxxxxxxZ2",
            "Arn": "arn:aws:iam::3xxxxxxxxxx8:group/Administrators",
            "GroupName": "Administrators"
        },
        {
            "Path": "/",
            "CreateDate": "2017-11-08T00:39:37Z",
            "GroupId": "AGxxxxxxxxxxxxxxxxxBU",
            "Arn": "arn:aws:iam::3xxxxxxxxxx8:group/Billing",
            "GroupName": "Billing"
        },
        {
            "Path": "/",
            "CreateDate": "2017-11-08T00:41:20Z",
            "GroupId": "AGxxxxxxxxxxxxxxxxxAU",
            "Arn": "arn:aws:iam::3xxxxxxxxxx8:group/BillingView",
            "GroupName": "BillingView"
        }
    ]
}

IAM Roles

A role is intended to be assumable by anyone who needs it. It's not associated with a particular person (as an IAM User is). Roles can be temporary, and they can be assigned to federated users who use something other than IAM as their identity provider.

You can also configure federated users using AWS Directory Service. This would be used if you're running Microsoft Active Directory within your current environment and want to allow those users to access AWS services and resources.

It also supports SAML 2.0 to provide SSO, as well as web identity federation using something like Facebook or Google authentication.

Federated users are not traditional IAM users. They are assigned roles and then permissions are assigned to those roles. Unlike a traditional user, a role is intended to be assumable by anyone who needs it. More on roles later.

Temporary Security Credentials

You can use the AWS Security Token Service (STS) to provide trusted users with temporary credentials that allow them to access AWS resources. These can be used to log into the AWS console or to make API requests.

Temporary credentials consist of a short-lived (as in they will expire) access key ID, secret access key, and session token. You can configure the expiration times. Once expired, they cannot be reused.

Amazon defines the expiration as, "Credentials that are created by IAM users are valid for the duration that you specify, from 900 seconds (15 minutes) up to a maximum of 129600 seconds (36 hours), with a default of 43200 seconds (12 hours); credentials that are created by using account credentials can range from 900 seconds (15 minutes) up to a maximum of 3600 seconds (1 hour), with a default of 1 hour."

There is no permanent AWS identity associated with temporary credentials. STS is a global service, so the credentials will work globally. You can log STS usage via CloudTrail logging.

It's used with Enterprise and Web (Ex. Facebook login) identity federation.
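As a simple sketch, an IAM user can request temporary credentials for themselves with get-session-token; the MFA serial number and token code below are placeholders.

aws sts get-session-token --duration-seconds 3600 --serial-number arn:aws:iam::3xxxxxxxxxx8:mfa/polsen --token-code 123456

The response contains an AccessKeyId, SecretAccessKey, SessionToken, and Expiration, which you would then export or drop into a named profile.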

Cross Account Access

Cross-account access allows an IAM User from one AWS account to perform actions in another AWS account. What they can do is, of course, determined by what permissions you allow when setting it up. Deny by default still holds true here.

A good example of this may be Production and Development. It could also be APAC and Americas, or Husband and Wife. It's possible to provide third parties cross-account access as well.

This saves you from having to create UserProd and UserDev * n users; it reduces the number of IAM Users. It's accomplished via Roles, with the IAM User assuming said role to accomplish their task(s).

This could make for interesting lateral movement cases, where one AWS account (or a third party) gets compromised and is then able to perform actions on the other AWS account via the cross-account access.

It's important to know what cross accounts exist and the level of access allowed against critical systems.

You could also monitor these cross accounts via the AssumeRole API call and then filter by the ARN of the IAM role you granted the IAM User(s). Anytime they assume that role (e.g., aws sts assume-role), it will be logged in CloudTrail. You could alert on this for auditing purposes.
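For illustration, assuming a role in another account is a single STS call; the account ID and role name here are hypothetical.

aws sts assume-role --role-arn arn:aws:iam::<other-account-id>:role/CrossAccountAudit --role-session-name audit-session

The roleArn and roleSessionName show up in the resulting CloudTrail AssumeRole event, which is what you would key your alerting on.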

Identity & Resource-based IAM Policies

With an identity-based IAM policy, you're creating (or attaching a built-in) policy for a user, group, or role, permitting that user, group, or role to perform a set of actions against a particular resource. For example, listing buckets within S3.

There are also resource-based policies. These are set on the resource itself, again for example an S3 bucket. You specify what actions are permitted against said resource, and they also let you specify who has access to the resource.

Another difference between identity-based and resource-based policies is the "who": in an identity-based policy, the "who" is the user, group, or role the policy is attached to. In a resource-based policy, the "who can perform actions against me" is detailed within the policy attached to that particular resource.

In a resource-based policy, the Principal element within the JSON specifies the user, account, service, or other entity that is allowed or denied access to said resource.
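To make that concrete, here is a minimal sketch of what a resource-based policy (in this case, an S3 bucket policy) might look like; the account ID and bucket name are made up. Notice the Principal element, which has no equivalent in the identity-based example further down.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:user/blogtest"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}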

IAM Policies

To assign permissions to a user, group, role, or resource, you create an IAM policy.

Policies are JSON documents. More than one statement can be included in a policy document.

Within the sample policy below we define Effect, Action, and Resource.

You can specify many other elements, for example Condition, Principal, etc. You can find them here.

In this sample policy:

  • Effect: We are allowing this to happen (it could be Deny)
  • By default, access to resources is denied. To allow access to a resource, you must set the Effect element
    to Allow.
  • Action: The actions that are allowed (effect) by this policy. In this example, listing all of my S3 buckets.
  • Resource: The resources on which the actions can occur. In my example, everything in s3, hence the *.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}

When evaluating a policy, Amazon uses the following IAM Policy Evaluation Logic:

[Diagram: IAM policy evaluation logic]

Let's say you created a user and they want to list S3 buckets, but they do not have an S3 policy assigned.

aws s3 --profile s3test ls

An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied

So I created a mock custom S3 policy that allows a user to ListAllMyBuckets and attached it to the user s3test.

NOTE: Amazon recommends that you use the AWS-defined policies to assign permissions whenever possible.

Here is the JSON showing what this simple policy looks like.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}

Now we can attach it to the user, s3test.

[Screenshot: attaching the custom S3 policy to user s3test]
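If you would rather do that step from the CLI than the console, a sketch of the equivalent commands (assuming the policy JSON above is saved to a local file) would be:

aws iam create-policy --policy-name S3BucketAccessByIAMUser --policy-document file://S3BucketAccessByIAMUser.json
aws iam attach-user-policy --user-name s3test --policy-arn arn:aws:iam::31xxxxxxxx48:policy/S3BucketAccessByIAMUser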

Now we can try to rerun the same command that was previously denied. Yay, it works. This is just a test policy to highlight a point; I am not endorsing it for your environment.

aws s3 --profile s3test ls

2017-11-06 17:08:03 polsen-1234
2017-11-06 17:08:33 sysforensics-blogtest

So why this rabbit hole? Well, policies are what give your users permissions.
This is what allows them to do bad things within your environment.

Let's say you want to see what permissions a particular user has attached to them.

Using the aws cli tool, you can do the following:

aws iam list-attached-user-policies --user-name s3test

{
    "AttachedPolicies": [
        {
            "PolicyName": "S3BucketAccessByIAMUser",
            "PolicyArn": "arn:aws:iam::31xxxxxxxx48:policy/S3BucketAccessByIAMUser"
        }
    ]
}

This command will only show the policies attached directly to an IAM User. You also need to see if this user is part of a group, and if so, what policies he/she is inheriting via that group.

aws iam list-groups-for-user --user-name s3test

{
    "Groups": [
        {
            "Path": "/",
            "CreateDate": "2017-11-06T18:16:07Z",
            "GroupId": "AGPAxxxxxxxxxxxxxT5Z2",
            "Arn": "arn:aws:iam::31xxxxxxxx48:group/Administrators",
            "GroupName": "Administrators"
        }
    ]
}

So now you know that s3test belongs to a group called Administrators. Next, we can list the policies attached to that particular group, which shows that s3test effectively has AdministratorAccess.

aws iam list-attached-group-policies --group-name Administrators

{
    "AttachedPolicies": [
        {
            "PolicyName": "AdministratorAccess",
            "PolicyArn": "arn:aws:iam::aws:policy/AdministratorAccess"
        }
    ]
}

There will be a lot more on IAM policies in future posts as well. It's the bedrock for AWS.

Summary

I will leave the IAM overview at that. Again, I really suggest you take a look at the IAM user-guide. It's 689 pages. It's very hard to summarize that much information in a single blog post.

Hopefully this gave you a bit of an overview.

I will touch on IAM event logging within AWS CloudTrail in Part III.

Lastly, I will leave you with Amazon's IAM best practices.

  • Lock away your AWS Account Root User Access Keys
  • Create individual IAM users
  • Use AWS Defined Policies to Assign Permissions Whenever Possible
  • Use Groups to Assign Permissions to IAM Users
  • Grant Least Privilege
  • Use Access Levels to Review IAM Permissions
  • Configure a Strong Password Policy for Your Users
  • Enable MFA for Privileged Users (at a minimum)
  • Use Roles for Applications That Run on Amazon EC2 Instances
  • Delegate by Using Roles Instead of by Sharing Credentials
  • Rotate Credentials Regularly
  • Remove Unnecessary Credentials
  • Use Policy Conditions for Extra Security
  • Monitor Activity in Your AWS Account

Enjoy!


AWS Security Overview - Part 0 - What is Cloud


Overview

This is just a quick overview of cloud in general. I wasn't going to write this one, but figured I would make my AWS Cloud Security Blog Series complete by backing up and creating a Part 0 to lay out the foundation of cloud.

I will use National Institute of Standards and Technology (NIST) Special Publication (SP) 800-145 and 800-146 as the standards for cloud computing definitions.

Cloud Computing

So, the idea of a cloud isn't new.

It turns out that in the 1960s there was a computing concept called time-sharing, which was used to share computing resources among many users. An example of this was the Compatible Time-Sharing System (CTSS), which was developed by MIT and put into use in 1961.

The concept of time-sharing compute resources goes back to the mid-1950s. During MIT's Summer Session 1954 - Digital Computers there was a discussion topic on the Effects of Automatic Coding Machine Design.

The session kicked off by discussing the IBM 704 versus the IBM 701, how there would be three index registers with the IBM 704, and how size would be a problem.

Dr. Grace Hopper, John Backus, Mr. W.A. Ramshaw, Mr. J. R. Belford and others went on to propose ideas for solving the computer sizing concerns. As in, there is no room physically and you reach a point of diminishing returns.

Dr. Hopper raised "the possibility of using several small computers in parallel."

John Backus stated that, "since increased speed costs little more, a large computer is cheaper to use than a small one."

He went on to say "that by time sharing, a big computer could be used as several small ones; there would need to be a reading station for each user."

If you put together both Dr. Hopper's comment on small(er) parallel computers and John's comments about scale ("large computer" and/or "big computer"), you start to have the makings of a cloud.

The irony of these comments is that in the book The Everything Store: Jeff Bezos and the Age of Amazon, an employee recalls Bezos, when discussing AWS, saying, "He had a vision of literally tens of thousands of cheap, two-hundred-dollar machines piling up in racks, exploding. And it had to expand forever".

Jeff Barr (Chief Evangelist for Amazon Web Services) posted the first entry on the Amazon Web Services blog on November 09, 2004.

Fast forward to 2006, "Amazon Web Services (AWS) began offering IT infrastructure services to businesses in the form of web services -- now commonly known as cloud computing." In comparison, "Azure was announced in October 2008 and released on February 1, 2010 as "Windows Azure" before being renamed "Microsoft Azure" on March 25, 2014."

So, what makes cloud so special today if it's existed conceptually and/or physically for decades?

Let's take a look.


Cloud Definition: NIST states it, "is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

These are some of the characteristics that make today's cloud different from previous attempts.

  • Massive Scale: Sure, data centers and large mainframes existed previously, but nothing on the massive scale of today. With this scale comes a more or less unlimited amount of computing power in a relatively small footprint compared with older computing systems. It's so large that humans (including myself) really can't grasp the magnitude of it all. Oh, and it's all accessible with literally a few mouse clicks.
  • Self-Service: A cloud consumer can, at anytime, for whatever reason, spin up/down compute resources. This can be accomplished, "without requiring human interaction with each service provider".
  • On-demand: This also includes pay-as-you-go on-demand services. Think of on-demand pricing models with Azure and/or AWS.
  • New Services: MongoDB, Artificial Intelligence (AI) and Machine Learning platforms, MapReduce/Hadoop, etc. that simply did not exist like they do today and if they did, not at the scale as they do today.
  • Resource Pooling: The providers computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to cloud consumer demand
  • Rapid Elasticity: Resources and services can be elastically provisioned and released. In some instances, a cloud consumer might leverage auto-scaling. A good example of this is the Black Friday shopping season. Cloud provides the elasticity to auto-scale during peak times and meet the expected/unexpected demand without pre-purchasing a bunch of hardware that will sit stale until the next peak demand. This elasticity also helps organizations engineer solutions for cost savings.
  • Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability. An example of this is how Amazon charges by the second for some computing resources, or the amount of data in a storage service like S3. Not unlike how you're charged for Gas and Water in your home.

Cloud Service Models

Clouds today can be deployed in different service models.

There are generally three or four cloud computing service models, starting from the lowest level of the stack.

Hardware as a Service (HaaS): You're essentially given access to physical hardware and then you pay a service to rent it. Think of this as using Uber vs. buying a car. You're paying a service fee to Uber to use their hardware (the Car/Person).

Infrastructure as a Service (IaaS): Install your own software/operating systems onto hardware without having to manage the hardware.

NIST states, "The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications."

This includes technologies like; virtual machines (EC2), storage (Amazon S3), networks (VPC), etc.

Platform as a Service (PaaS): NIST states it's the, "capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider."

Examples include, AWS Elastic Beanstalk, RDS, Lambda, etc.

Software as a Service (SaaS): This is when the cloud consumer (you) leverages the cloud service provider's infrastructure via applications (e.g., a web browser). You don't manage or control any of the cloud infrastructure.

Examples are; Gmail, Dropbox, Google Docs, etc.

Deployment Models

Private Cloud: Private to your organization, served up to multiple business units. It's not always cheaper to move to AWS, Azure, etc. In instances where this is true, an organization may decide to invest in its own cloud.

Public Cloud: The cloud infrastructure that is provisioned for and used by the public. This is AWS and Azure.

Hybrid Cloud: This could be a combination of one or all of the other clouds. You could have a public cloud that's integrated into your on-site private cloud, and/or data center.

That is all I will touch on what makes cloud, cloud.

From here on out, you should assume I am talking about Public Cloud. More specifically, Amazon Web Services (AWS).

Birth of AWS

In the book, The Everything Store: Jeff Bezos and the Age of Amazon, it says that AWS, as it's known today, started to gain traction around 2003.

At the time, Amazon's internal systems were broken down into more individual components. There was a single team that strictly controlled who could access Amazon's services. Teams within Amazon had to plead for resources and as a result, it slowed people down and leadership (and employees) felt like it stifled creativity.

Amazon realized they, "needed to break their infrastructure down into the smallest, simple atomic components, and allow developers to freely access them with as much flexibility as possible."

In 2004, Chris Pinkham moved back to South Africa so he could be closer to family. In doing so, he brought with him Chris Brown. They set up shop in Cape Town, where their efforts would become Elastic Compute Cloud, or EC2.

The rest is history. The book is a good read if you care to learn more about Bezos and the early years of Amazon. It's not specific to AWS, but it's discussed.

One part I find kind of funny is that South Africa isn't a region you can select within AWS (as of this writing), even though it's the birthplace of AWS. I guess it comes down to simple economics.

Cloud Shared Responsibility Model

The idea of the shared responsibility model is that you, the cloud consumer, as well as the cloud service provider (CSP), share the responsibility of securing the cloud.

Amazon depicts this model as the following; however, you should keep in mind that it will vary depending on the services you choose.

For example, with AWS Container Services (RDS, EMR, Beanstalk), AWS manages the infrastructure, operating systems, and application platforms, while the cloud consumer is responsible for data encryption, identity and access management (IAM), and firewall configuration.

[Diagram: AWS shared responsibility model]

Security in the Cloud: This is what the cloud consumer is responsible for. It will shift depending on the services you choose and the vendor you choose as well. Some fully managed services, for example, will reduce the responsibility on you (the cloud consumer), while services that are not fully managed by the CSP will increase your responsibilities.

So what does "in the cloud" mean, really? Well, let's take Amazon Elastic Compute Cloud (EC2), which is the virtual machine rental service that AWS provides. When you rent an EC2 instance, let's say Linux, you, the cloud consumer, are responsible for the management of that Linux machine, which includes software updates as well as security patches.

Security of the Cloud: The general rule of thumb is that this is the responsibility of the CSP. As in, the CSP secures the infrastructure where all of the services run, including the data centers, physical security, placement of data centers, etc.

Controls: With the shared responsibility you have different control types. Amazon classifies them as the following. I've included a couple examples for each categorization.

  • Inherited - Inherited from AWS
    • Physical security
  • Hybrid - AWS provides partial implementation, customer is responsible to fully implement
    • Account management (IAM)
  • Shared - AWS provides security around certain services/layers, the customer needs to provide it at other services/layers.
    • Patch management
    • Configuration management
  • Customer - Responsibility of the cloud consumer.
    • Creating protection zones around PCI-DSS data

I should mention there are far fewer inherited controls than customer (cloud consumer), hybrid, and shared controls, so if you're in AWS now and haven't done anything, and you think "you're good", you're not.

NIST 800-53 rev 4 Control Example:

Here is an example of a control using the Security and Privacy Controls for Federal Information Systems and Organizations - NIST 800-53 rev 4. Rev 5 is in draft at the time of writing.

NIST 800-53 Definition: "This publication provides a catalog of security and privacy controls for federal information systems and organizations and a process for selecting controls to protect organizational operations (including mission, functions, image, and reputation), organizational assets, individuals, other organizations, and the Nation from a diverse set of threats including hostile cyber attacks, natural disasters, structural failures, and human errors. The controls are customizable and implemented as part of an organization-wide process that manages information security and privacy risk."

Within NIST 800-53 you have the following IDs and Control Families.

Id - Family
AC - Access Control
AT - Awareness and Training
AU - Audit and Accountability
CA - Security Assessment and Authorization
CM - Configuration Management
CP - Contingency Planning
IA - Identification and Authentication
IR - Incident Response
MA - Maintenance
MP - Media Protection
PE - Physical and Environmental Protection
PL - Planning
PM - Program Management
PS - Personnel Security
RA - Risk Assessment
SA - System and Services Acquisition
SC - System and Communications Protection
SI - System and Information Integrity

Here is a simple control Access Control we can look at.

Account Management - AC-2(1) - The organization employs automated mechanisms to support the management of information system accounts.

This is an example of a shared responsibility control. AWS Identity and Access Management (IAM) is the automated mechanism that the cloud consumer will use to fulfill this control. The cloud consumer would then also use CloudTrail as the mechanism for logging the activity of said IAM users.

The cloud consumer would then own the responsibility of documenting their internal standard operating procedures (SOPs) on how they employ these automated mechanisms.
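As a quick, hedged example of the monitoring half of that control, CloudTrail's lookup-events call can pull recent IAM changes such as the AttachUserPolicy events discussed in Part II:

aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=AttachUserPolicy --max-results 10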

Compliance

AWS has received multiple compliance certifications for its infrastructure. So it's possible, when using AWS, to achieve PCI compliance, certify against something like ISO 27001, etc.

I recommend you take a look at all of the certifications they have received here

Summary

So now we all know what a cloud is.

Future blog posts for this series are maintained here.

Enjoy!

References:

NIST 800-145 - The NIST Definition of Cloud Computing
NIST 800-146 - Cloud Computing Synopsis and Recommendations
NIST 800-53 rev4 - Security and Privacy Controls for Federal Information Systems and Organizations
Amazon Compliance Page
MIT's Summer Session 1954 - Digital Computers

AWS Security Overview - S3 Data Exposure


Overview

With the recent news about the Pentagon exposing some of its data on an Amazon server, I figured I would write a quick post on S3 and recreate a similar misconfiguration.

NOTE: I have NO knowledge of this data exposure. I do not know if the below is what caused the data exposure.

Again, I do not know if this is what led to the data exposure. This is one of multiple ways it could have happened.

UpGuard Article

UpGuard, the organization that disclosed the exposure, goes on to say in the article, "On September 6th, 2017, UpGuard Director of Cyber Risk Research Chris Vickery discovered three Amazon Web Services S3 cloud storage buckets configured to allow any AWS global authenticated user to browse and download the contents".

AuthenticatedUsers

Based on the comment above, it's possible the S3 buckets were configured to allow access to the AuthenticatedUsers group via ACLs.

AuthenticatedUsers is a predefined Amazon S3 group, to which you can grant access via an S3 ACL by specifying the URI (http://acs.amazonaws.com/groups/global/AuthenticatedUsers) instead of a canonical user ID.

The Authenticated Users group represents all authenticated users within AWS. This group account allows ANY AWS account to access your resources, not just the authenticated IAM users within your AWS account.

There is a clear warning in the S3 Developer documentation. It says, "When you grant access to the Authenticated Users group any AWS authenticated user in the world can access your resource."

Maybe they didn't read the documentation carefully? It is also possible the admins saw the words "AWS authenticated user" and assumed it meant just their own AWS authenticated users, not every AWS authenticated user. Or maybe it was something else...

It's not something you can screw up via a typo, it's not a sophisticated attack, and no one circumvented AWS security protections (based on the information from the article). It appears to have been a misconfiguration on the part of the cloud admins (or someone in a similar role).

In either case, this stuff happens. That's life. I decided to try and recreate a similar misconfiguration as I wasn't aware of this group and level of access prior to the article.

I am not an AWS Cloud Architect or Cloud SysAdmin, so take the below with a grain of salt.

My Test Bucket

I created a test bucket called fasdfgsgdfagdaggb-test, using all of the default options.

When you create a bucket or an object, there is a default access control list (ACL) created that grants the resource owner full control over the resource.

ACLs enable you to manage access to both buckets and objects. They define which AWS accounts or groups are granted access and the type of access they have.

After creating the bucket I wanted to see the ACLs for the new bucket so we had a before and after.

To see what bucket ACLs are assigned to a bucket, use the following command:

aws s3api get-bucket-acl --bucket fasdfgsgdfagdaggb-test

You can see here the Grantee, po, has FULL_CONTROL over the bucket. As mentioned above, when you create a bucket, AWS grants the resource owner full control over the resource.

An ACL can have up to 100 grants.

{
    "Owner": {
        "DisplayName": "po",
        "ID": "0fb<snip>dad"
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "CanonicalUser",
                "DisplayName": "po",
                "ID": "0fb<snip>dad"
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}

AuthenticatedUsers Example

If we try and list the contents of that bucket using Account B (s3test) we are unable to do so.

aws s3 ls s3://fasdfgsgdfagdaggb-test --profile s3test

You can see here when we attempt to ListObjects it's denied.

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

Now we are going to assign an ACL and grant AuthenticatedUsers read access to our, fasdfgsgdfagdaggb-test bucket.

aws s3api put-bucket-acl --bucket fasdfgsgdfagdaggb-test --grant-read uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers

Now we want to get the bucket ACL again and compare it with the before get-bucket-acl.

You can see we now have two grants: our new one with Type Group and a URI pointing to AuthenticatedUsers.

aws s3api get-bucket-acl --bucket fasdfgsgdfagdaggb-test

A Grantee can be an AWS account or one of the predefined Amazon S3 groups. The three predefined groups are:

  • Authenticated Users group
  • All Users group
  • Log Delivery group
{
    "Owner": {
        "DisplayName": "po",
        "ID": "0fb<snip>dad"
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
            },
            "Permission": "READ"
        },
        {
            "Grantee": {
                "Type": "CanonicalUser",
                "DisplayName": "po",
                "ID": "0fb<snip>dad"
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}

You can also verify this within the AWS console. After we added those permissions, we can now see that a Group grantee, Any AWS user, was added and given List Objects of Yes.

[Screenshot: S3 console permissions showing the "Any AWS user" group with List Objects set to Yes]

AuthenticatedUsers - Read Only

To test the AWS AuthenticatedUsers access, I set up a profile using my second AWS account (Account B). The user for the second AWS account is s3test.

You can see here that s3test is now able to list the items within this S3 bucket from AWS Account A, whereas before this ACL was granted it could not.

aws s3 ls s3://fasdfgsgdfagdaggb-test --profile s3test

                           PRE NotEncrypted/
                           PRE SSE-S3Encryption/

We can also do a recursive list and see everything in this sample S3 bucket.

aws s3 ls s3://fasdfgsgdfagdaggb-test --recursive --profile s3test

2017-11-18 19:01:24          0 NotEncrypted/
2017-11-18 19:02:20         12 NotEncrypted/Test.txt
2017-11-18 19:01:37          0 SSE-S3Encryption/
2017-11-18 19:02:05         12 SSE-S3Encryption/Test.txt

Now let's try to download them.

As you can see, the GetObject operation is denied here. So we are unable to download these files even though we can list them.

The reason for this is we haven't set a bucket policy and/or an object ACL giving us permissions to do so.

aws s3 cp s3://fasdfgsgdfagdaggb-test . --recursive --profile s3test

download failed: s3://fasdfgsgdfagdaggb-test/SSE-S3Encryption/Test.txt to SSE-S3Encryption\Test.txt An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
download failed: s3://fasdfgsgdfagdaggb-test/NotEncrypted/Test.txt to NotEncrypted\Test.txt An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

Bucket Policies and Object Access

I'm obviously not sure what their bucket policy was set to, so I will just add a simple object ACL to show you how it's possible to download the files.

Initially I set the put-object-acl only for my avatar.png file inside of People.

aws s3api put-object-acl --bucket fasdfgsgdfagdaggb-test --key People/avatar.png --acl authenticated-read

Then I attempt to copy all of the files again.

aws s3 cp s3://fasdfgsgdfagdaggb-test . --recursive --profile s3test

You can see here that two of them were denied because I had not set up object ACLs on them, and then the avatar.png downloaded successfully.

download failed: s3://<snip>iam-ug.pdf An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

download failed: s3://<snip>iam-api.pdf An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

download: s3://fasdfgsgdfagdaggb-test/People/avatar.png to People\avatar.png

At the same time, if you were to navigate to this via the web you would get denied.

[Screenshot: access denied when browsing the object via the web]

Summary

I would suspect the misconfiguration was similar in nature to the above; however, I really have no idea.

Even if it isn't exactly the same, it highlights a similar situation where poor bucket and object ACLs can lead you down a dark path.

If you're using ACLs to manage access to S3, you should move away from them unless you have a snowflake case for why you need them. AWS recommends that bucket policies and/or IAM policies be used for S3 access control.
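For example, attaching a bucket policy (rather than an ACL) is a single call; a local bucket-policy.json file is assumed here, and its contents would look like the identity-based policy examples from earlier in this series plus a Principal element.

aws s3api put-bucket-policy --bucket fasdfgsgdfagdaggb-test --policy file://bucket-policy.json
aws s3api get-bucket-policy --bucket fasdfgsgdfagdaggb-test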

The article mentions that some of the web scraping occurred back in 2009, but the most recent was 2017. Back in 2009 (I wasn't an AWS user back then), from what I gather (looking at older screenshots and whatnot), it was easier to misconfigure permissions. It appears at one time you could configure ACL permissions in the GUI in ways you now cannot. Maybe this is what led to the misconfigurations? Who knows. If so, maybe it's time to go back and revisit the permissions on your S3 buckets.

I would also suggest not using company-identifiable information in your bucket names to the extent possible. There are scripts that will brute force company names in an attempt to find company-named S3 buckets.

Object access logging could have helped identify any successful brute-force downloading of objects within a particular bucket.
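S3 server access logging is one way to get that visibility, and it can be turned on per bucket; the target bucket and the local logging.json file here are assumptions.

aws s3api put-bucket-logging --bucket fasdfgsgdfagdaggb-test --bucket-logging-status file://logging.json

Where logging.json contains something like:

{
    "LoggingEnabled": {
        "TargetBucket": "my-access-logs-bucket",
        "TargetPrefix": "fasdfgsgdfagdaggb-test/"
    }
}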

I have been writing a series on AWS Security here

Enjoy!

References

S3 Developer Guide
S3 API Documentation

AWS Security Overview - Part III - VPC Security Groups


Overview

I discussed networking terminology in AWS Security Overview - Part I - Networking Terminology. Now that we have the terminology out of the way for the most part, I want to discuss Security Groups.

The next few AWS Security Overview Parts will be focused on security and logging.

Security Groups

Security Groups (SGs) are a virtual firewall for instances that are used to control ingress and egress traffic. SGs operate at the instance level, not the subnet level. It's also worth noting that each instance within a subnet can have a unique SG, and you can assign up to 5 SGs to an instance. SGs are associated with the elastic network interface (ENI) of the instance.

Security Groups are used to create allow rules, not deny rules. With SGs you're able to configure Type (port), Protocol, Port Range, Source (CIDR, SG), and a Description (255-character limit).

When you create an SG, there are no inbound rules by default. There is, however, a default outbound rule allowing All Traffic, All Protocols, and All Port Ranges to destination 0.0.0.0/0 (anywhere). If you create an SG via the launch wizard, it will allow SSH for Linux and RDP for Windows systems inbound, with a default source of 0.0.0.0/0.

Security Groups are Stateful. This means if you send a request from your instance, the response traffic is allowed to flow in regardless of inbound SG rules. It's also worth mentioning that you can only filter on the destination port when creating a rule.

For example, on a test instance I have the following inbound rules. As you can see I have, 80, 443, and 22.

[Screenshot: security group inbound rules for ports 80, 443, and 22]

If I kick off a ping against google.com from within my instance, you will see that even though I do not have ICMP allowed inbound, the reply still returns.

ping -c 3 google.com

And here is the reply.

PING google.com (172.217.7.206) 56(84) bytes of data.
64 bytes <snip>: icmp_seq=1 ttl=48 time=1.40 ms
64 bytes <snip>: icmp_seq=2 ttl=48 time=1.50 ms
64 bytes <snip>: icmp_seq=3 ttl=48 time=1.43 ms

Instances that are associated with SGs cannot talk to each other unless you add rules allowing it. There is one exception to this: the default SG, which has a rule that sets its own security group ID (e.g., sg-b9<snip>) as the source and allows All Traffic, All Protocols, and All Port Ranges.

When an SG is set as the source within an SG rule, instances associated with the source SG are allowed to access instances associated with the SG containing the rule.
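As a hedged example, allowing one SG to reach another (say, a web tier SG talking to a database SG on 3306) would look something like this; the group IDs and port are placeholders:

aws ec2 authorize-security-group-ingress --group-id sg-8c<snip> --protocol tcp --port 3306 --source-group sg-b9<snip>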

Discovery Security Group Information

The AWS command line is a great place to start. You don't want to be stuck using the AWS Console GUI. If you're a SysAdmin and you're still using the GUI after a few months, you are not using the cloud the way it was intended to be used.

In either case, you can view information about your SGs by running some of the following command(s).

This will give you a JSON blob of your security group settings/information.

aws ec2 describe-security-groups

If you want to take a look at a single security group, you can specify the security group ID and it will return just that SG.

aws ec2 describe-security-groups --group-id sg-8c<snip>

If you wanted to see which security groups allow a particular port, you could do something like this, which will return, in JSON format, the security groups that allow SSH.

aws ec2 describe-security-groups --filters "Name=ip-permission.to-port,Values=22"

These CLI options can be very useful. You should review the entire list of them here. When you start adding on filters, queries, etc. they can become long and "complex".

Once you start building "complex" CLI queries I would recommend building them programmatically. It's good coding practice as well.

Let's say you want to return the following information for your instances within your AWS account.

  • VPC Id
  • Instance Id
  • Security Groups
    • Group Name
    • Group Id

I wrote the Python below by leveraging the Boto3 AWS SDK to accomplish this. I'm not a good coder, so this is for demonstration purposes. Also, it's only looking at my default region, which is us-east-1.

import json
from datetime import date, datetime

import boto3


# JSON encoder that can handle the datetime objects boto3 returns
class DateTimeEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, (datetime, date)):
            return obj.isoformat()
        return json.JSONEncoder.default(self, obj)


client = boto3.client('ec2')


# Build a dictionary of the instances, keyed by instance id
def buildDict(response):
    instanceDict = {}

    for reservation in json.loads(response)['Reservations']:
        for item in reservation['Instances']:
            instance_id = item['InstanceId']
            vpc_id = item['VpcId']
            # There can be up to 5 SGs per network interface
            security_group = item['SecurityGroups']
            subnet_id = item['SubnetId']
            image_id = item['ImageId']

            instanceDict[instance_id] = security_group, vpc_id, subnet_id, image_id

    return instanceDict


# Flatten the dictionary into rows of instance id, SG name/id, VPC, subnet, and AMI
def parseSGs(instance_dicts):
    results = []
    for key, value in instance_dicts.items():
        for sg in value[0]:
            results.append([key, ' '.join('{0}'.format(v) for v in sg.values()),
                            value[1], value[2], value[3]])

    return results


# Serialize the describe_instances() response to a JSON blob
response = json.dumps(client.describe_instances(), cls=DateTimeEncoder, indent=4)

instance_dicts = buildDict(response)
results = parseSGs(instance_dicts)

for result in results:
    print(' '.join(result))

Which would return the following information as it relates to a particular EC2 instance.

i-0580369426b44468b default sg-b9ddc1cb vpc-61a61e19 subnet-8369eec8 ami-da05a4a0

i-09bf2137c5e446732 default sg-4493dd3a vpc-0cb37675 subnet-d592159d ami-772aa961

You could obviously build on this code and incorporate other use cases. It wouldn't be difficult to construct a plugin-based architecture. A buddy of mine also showed me an auditing tool by NCC Group called Scout2, which seems nice.

Security Group Limits

There are some limits with Security Groups.

  • 500 SGs per VPC
  • 50 rules per SG
  • 5 SGs per ENI

You can ask AWS for some exceptions to this; however, it's best not to rely on the kindness of an organization. If you reach these limits, it's likely that you have made some poor architecture choices along the way that should be fixed first.

Visual Network Diagram

So, building upon what we had already drafted in the AWS Security Overview - Part I - Networking Terminology post, I made some updates to reflect our security group. You will see our security group is inside our VPC, surrounding our EC2 instance. I will continue building upon this diagram as I discuss and add technologies.

[Diagram: updated network diagram with the security group around the EC2 instance]

Summary

So this should provide you with a good overview of what Security Groups are.

In Part IV, I discuss Network Access Control Lists (NACL).

Also, all of the posts in this series are kept here at the table of contents.

Enjoy!

References
Amazon Virtual Private Cloud User Guide
Amazon EC2 Security Groups for Linux Instances

AWS Security Overview - Part IV - VPC Network Access Control Lists


Overview

In Part III of my AWS Security Overview Blog Series I discussed Security Groups.

In this blog post I will be covering Network Access Control Lists.

Network Access Control Lists (NACL)

I started at the instance level with security groups; now, working outwards towards the VPC edge, I will be discussing NACLs.

There will be a diagram at the end depicting the NACLs and where they sit within an AWS network diagram.

NACL Basics

Network ACLs (NACLs) are a list of numbered rules that act as a virtual firewall. NACLs sit at the subnet layer. If you recall from the Security Groups post, security groups are virtual firewalls at the instance level, attached to the elastic network interface (ENI) of the instance.

Since NACLs are attached to a subnet, NACLs allow or block traffic to the instance before it ever reaches the security group for evaluation.

NACLs support both allow and deny rules, whereas security groups only support allow rules. The NACL rules are automatically applied to all instances in the subnet(s) the NACL is associated with.

A VPC automatically comes with a default NACL, which is configured out of the box to allow all inbound and all outbound traffic.

There is one exception to this: if you create your own NACL and attach it to a subnet of your choosing, its rules will be set to DENY all inbound and outbound traffic until you add rules.

NACL Association

Each subnet must be associated with a NACL, and a subnet can only be associated with one NACL, but a NACL can be associated with multiple subnets.

An example of this would be if you created a set of NACL rules that you always assign to web tier subnets; you can attach that NACL to each of the web tier subnets where your web servers reside.

Rules - Overview

With NACLs, there are Inbound Rules and Outbound Rules. You can set the following options:

  • Rule Number
  • Type (SSH, HTTP, All TCP, All UDP, etc.)
  • Protocol (TCP, UDP, etc.)
  • Port Range
  • Source IP - CIDR Notation (for Inbound Rules)
  • Destination IP - CIDR Notation (for Outbound Rules)
  • Allow/Deny

You can also tag your NACLs, as well as associate your NACLs with the subnets of your choosing.

Rules - Numbered Order

It is important to know that NACL rules are processed in ascending numbered order, from the lowest numbered rule to the highest.

Let's say you want to make a connection to an MS SQL database over port 1433 and your inbound rules looked like the rules here.

At first it's going to check:

  • 100 - No, that doesn't match. This is for port 22
  • 200 - No, that doesn't match. This is for port 80
  • 300 - No, that doesn't match. This is for port 443
  • 400 - Yes, this matches. OK, it says "ALLOW", so you're allowed.

If it had reached the * rule, it would have been denied. So if you request something and it's not matched by one of the existing rules, it will be denied once it runs through all of your rules.

[Screenshot: NACL inbound rules]

You will notice above I spread the rules out by 100. This is recommended by Amazon so you can go back and add rules in-between them.

Remember, rules are evaluated in numbered order, so if you create rules and number them 1, 2, 3, 4... and you need to go back and insert an ALLOW/DENY between two of them, you would have to reorder the entire list of rules. The highest rule number you can use is 32,766, so give yourself some space.

Stateless Firewall

Unlike Security Groups, which are stateful, NACLs are stateless, which means that return traffic must be explicitly allowed by the rules.

In the Security Groups post I gave an example of ping, and how, even though I didn't explicitly allow ICMP inbound on my security group rules, it was still allowed through and I received a reply.

Using that same example, I will attempt to ping google.com again; however, this time I will have a DENY NACL rule set inbound.

So here are the inbound rules. I created rule 99 that denies ICMP.

[Screenshot: NACL inbound rules with rule 99 denying ICMP]
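For reference, that deny rule can also be added from the CLI; this is just a sketch, and the NACL ID here is a placeholder:

aws ec2 create-network-acl-entry --network-acl-id acl-9dba20e4 --ingress --rule-number 99 --protocol icmp --icmp-type-code Type=-1,Code=-1 --cidr-block 0.0.0.0/0 --rule-action deny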

Now I will attempt to ping google.com

ping -c 4 google.com

As you can see, unlike with security groups where this would have been successful, I didn't receive a return once the DENY was set in my NACLs.

PING google.com (172.217.7.174) 56(84) bytes of data.

--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 2999ms

You can also see that all traffic outbound is allowed, so it was on the inbound side where the traffic was blocked.

[Screenshot: NACL outbound rules allowing all traffic]

Command Line Tools

The AWS CLI also supports command line options for NACLs. For information about your NACLs, the best CLI command to run is describe-network-acls.

Running describe-network-acls will do just that: it returns JSON output describing your network ACLs. It will tell you which subnets each NACL is associated with, and for the respective NACL ID it will provide a list of rules, protocols, whether each rule is egress (true or false), the rule action (allow/deny), and the CIDR block assigned to that particular rule.

aws ec2 describe-network-acls

Here is a quick snapshot of some of that output from running, describe-network-acls.

{
    "NetworkAcls": [
        {
            "Associations": [
                {
                    "SubnetId": "subnet-d592159d",
                    "NetworkAclId": "acl-9dba20e4",
                    "NetworkAclAssociationId": "aclassoc-b44413c4"
                }
            ],
            "NetworkAclId": "acl-9dba20e4",
            "VpcId": "vpc-0cb37675",
            "Tags": [],
            "Entries": [
                {
                    "RuleNumber": 100,
                    "Protocol": "-1",
                    "Egress": true,
                    "CidrBlock": "0.0.0.0/0",
                    "RuleAction": "allow"
                },
<removed output for brevity>

Now that you have some information describing your NACLs, let's say you wanted to add a rule. You can accomplish that by using the create-network-acl-entry command.

So looking at the network ACL ID above, we see that acl-9dba20e4 has the following rules assigned.

[Screenshot: inbound rules for acl-9dba20e4]

Now let's say we wanted to allow inbound database traffic over port 1433. We can accomplish this via create-network-acl-entry.

aws ec2 create-network-acl-entry --network-acl-id acl-9dba20e4 --ingress --rule-number 130 --protocol tcp --port-range From=1433,To=1433 --cidr-block 0.0.0.0/0 --rule-action allow

You can see here now we have the MS SQL port, 1433 allowed as rule 130.

[Screenshot: inbound rules showing MS SQL port 1433 allowed as rule 130]

You can also replace existing rules with different values, say for example, if you had a typo on a port number or something.

aws ec2 replace-network-acl-entry

You can also write code and paint a much more detailed picture of your NACLs and their associations, but for now I am just highlighting a point.

There are tools at your disposal if you know where to look. You have a friend and it's called help.

aws <insert_service_name> help

Visual

Here you can see we added NACLs (the blue lock) to our subnet edge.

Traffic would be evaluated here, prior to being evaluated by the security groups attached to the Web Server EC2 instance.

[Diagram: network diagram with NACLs at the subnet edge]

Summary

So there is an overview of Network Access Control Lists (NACLs).

In Part V of my AWS Security Overview blog series I will discuss VPC Flow Logs.

Enjoy!

References
Amazon Elastic Compute Cloud User Guide for Windows Instances
Amazon Virtual Private Cloud User Guide Network ACLs

AWS Security Overview - Part V - VPC Flow Logs


Overview

I covered VPC Security Groups in Part III and VPC Network Access Control Lists in Part IV. Now it's time to discuss VPC flow logs.

Flow Logs

In short, flow logs are a free feature within AWS that lets you capture network traffic metadata to and from the network interfaces within your VPCs.

You can create flow logs for a subnet, a VPC, or an Elastic Network Interface (ENI) itself. It is also possible to create flow logs for other services that use ENIs; some of them include ELB, RDS, and WorkSpaces.

If you decide to monitor a subnet, flow logs will be captured from each network interface within that subnet. Likewise, if you choose VPC, it will collect from each network interface within that VPC. If you choose to create flow logs for a single ENI, it will capture traffic from that ENI only.

If you later add more instances to the monitored subnet or VPC, flow logs will automatically be collected from the new network interfaces as well.

Each network interface will have its own flow log stream. Flow logs are not a real-time stream of traffic for your ENIs; traffic is aggregated over a capture window of roughly 10 - 15 minutes and then published to CloudWatch Logs.

If you decide to create flow logs on subnets, and let's say you have 10 subnets, multiple flow logs can publish to the same log group within CloudWatch. This would allow you to send all of your flow logs to a single log group if you wanted to.

After you create a flow log, you cannot modify it. You would need to delete it and apply the change when creating a new one.

To create flow logs you need a few pieces of information at a minimum. I created a video below that shows you how to configure it.

  • You need to decide where to attach the flow log (VPC, subnet, or network interface (ENI))
  • Create and name your CloudWatch log group. If you do not specify a log group, one will be created for you. You can have up to 5,000 log groups per account
  • Create an IAM role that grants permissions to publish flow logs to the CloudWatch log group
    • logs:CreateLogGroup
    • logs:CreateLogStream
    • logs:PutLogEvents
    • logs:DescribeLogGroups
    • logs:DescribeLogStreams
  • Modify the trust relationship to allow the vpc-flow-logs service to assume the role (see the example trust policy below)
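
As a sketch of what that trust relationship looks like, the role's trust policy typically allows the VPC Flow Logs service principal to assume the role. This is the standard pattern rather than anything specific to my environment:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "vpc-flow-logs.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}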

CloudWatch

Flow logs are stored in CloudWatch Logs. For those of you unfamiliar with CloudWatch, for now I will just copy/paste from the documentation: "Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications."

Once a flow log has been created, you can view it via CloudWatch. I mentioned that flow logs are free, but there is a cost associated with CloudWatch usage.

You can read more about CloudWatch pricing here, but in my us-east-1 region it's: $0.00 per request for the first 1,000,000 requests, the first 5 GB per month of log data ingested (PutLogEvents) is free, and the first 5 GB-month of log storage (TimedStorage-ByteHrs) is free.

Enabling Flow Logs

I went ahead and made a video showing you how to get flow logs configured. There would have been too many screen shots.

In the video I show you how to configure each of the items I mentioned above in the Flow Log section.
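
If you would rather use the CLI than the console workflow shown in the video, a roughly equivalent call is sketched below. The subnet ID, the FlowLogs log group, and the FlowLogs-Role role are the examples used elsewhere in this post, and the account ID is a placeholder:

aws ec2 create-flow-logs --resource-type Subnet --resource-ids subnet-d592159d --traffic-type ALL --log-group-name FlowLogs --deliver-logs-permission-arn arn:aws:iam::123456789012:role/FlowLogs-Role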

A quick point of clarification: although it's not mentioned in the video (I did the voice-over after recording it), you could have made the inline policy on the FlowLogs-Role more restrictive on the resource.

The Resource element identifies, via ARN, the object or objects that the statement covers.

In the video I used.

"Resource": "*"

If you were going to have all of your flow logs go into the FlowLogs log group in the us-east-1 region, you could have used a more restrictive resource like this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:logs:us-east-1:aws-account-id:log-group:FlowLogs:*"
        }
    ]
}

Flow Log Structure

So now that you have your flow logs going into CloudWatch, what are they saying?

The flow log record, which represents a network flow in your flow log, is made up of the following fields (an example record follows the list):

  • Version
  • Account Id
  • Interface Id
  • Source Address
  • Destination Address
  • Source Port
  • Destination Port
  • Protocol
  • Packets
  • Bytes
  • Start Time
  • End Time
  • Action
  • Log Status
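
Putting that together, each record in the default format is a single space-separated line in the field order above. A purely illustrative example of an accepted SSH flow (protocol 6 is TCP; none of these values come from my environment) would look like this:

2 123456789010 eni-abc123de 203.0.113.12 172.31.16.139 52398 22 6 20 4249 1418530010 1418530070 ACCEPT OK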

Flow Traffic Not Captured

There is some traffic that is not captured by flow logs that you should be aware of as well.

  • Traffic from instances going to Amazon DNS. It will capture DNS traffic if you are using your own DNS servers.
  • Amazon Windows licensing activation traffic
  • Traffic to/from 169.254.169.254 for instance metadata
  • DHCP traffic
  • Traffic to AWS reserved IP addresses.

Visual

And now we have our updated diagram, which depicts our subnet flow logs being sent to CloudWatch.

I've also gone ahead and added an RDS instance to our diagram because I moved my MySQL DB off the local web server instance.

I'll continue using this diagram of a mock environment in the future as we layer on more and more technologies.

Summary

Now you should have functional flow logging. In Part VI, I'll show you how to get these logs into Elasticsearch for processing and in Part VII I will show you how you can query exported flow logs in your S3 bucket using Athena.

If you find this post useful and educational you can donate via PayPal.Me here: $1, $2, $3, $4, $5 or Custom.

Enjoy!

References

Amazon Virtual Private Cloud User Guide

AWS Security Overview - GuardDuty

Overview

I've been writing an AWS Security Overview blog series over the past few weeks, and while writing my next post I happened to see a tweet by @jeffbarr mentioning that AWS released GuardDuty yesterday.

I went ahead and read the documentation so you don't have to, and here is a one/two pager on the product, along with some screenshots showing how to get it set up.

Amazon GuardDuty

According to Amazon, "GuardDuty is a continuous security monitoring service that analyzes and processes VPC Flow Logs, AWS CloudTrail event logs, and DNS logs. It uses threat intelligence feeds, such as lists of malicious IPs and domains, and machine learning to identify unexpected and potentially unauthorized and malicious activity within your AWS environment. This can include issues like escalations of privileges, uses of exposed credentials, or communication with malicious IPs, URLs, or domains. For example, GuardDuty can detect compromised EC2 instances serving malware or mining bitcoin. It also monitors AWS account access behavior for signs of compromise, such as unauthorized infrastructure deployments, like instances deployed in a region that has never been used, or unusual API calls, like a password policy change to reduce password strength."

GuardDuty partners (at the time of this writing) include: Accenture, Alert Logic, CrowdStrike, Deloitte, evident.io, IBM, Logicworks, Palo Alto, Proofpoint, Rapid7, RedLock, Splunk, Sumo Logic, Trend Micro, and Trustwave.

There is a cost associated with this product. More details on pricing can be found here.

You can have a maximum of 100 GuardDuty member accounts, and GuardDuty findings are retained for 90 days. You also have a limit of one detector.

I couldn't find the exact definition of a detector in the documentation, but from looking around it appears to be a unique identifier that's used (for example) in your resource ARN. It's also passed in various API calls as a unique identifier.

"Resource": "arn:aws:guardduty:us-east-1:012345678910:detector/1234567"

Supported Regions

It's supported in:

  • Asia Pacific: Mumbai, Seoul, Singapore, Sydney and Tokyo
  • Canada: Central
  • EU: Frankfurt, Ireland, and London
  • US East: N. Virginia and Ohio
  • US West: Oregon and N. California
  • South America: Sao Paulo

Enabling GuardDuty

First, navigate to the GuardDuty service within your AWS Console. Once there, you will need to select Get Started.

You will then be prompted with a Welcome to GuardDuty page, where you can review the service role permissions you're granting the service.

Next, you will simply need to select Enable GuardDuty.

Once enabled, you're presented with the GuardDuty dashboard.

You can invite other AWS accounts to enable GuardDuty and become associated with your AWS account in GuardDuty. This would effectively create a, master GuardDuty account, which you could then view and manage findings on their behalf.

Managing Lists

You can also configure a list of whitelisted IP addresses, as well as create a threat list. This can be accomplished by going to the left-hand side and selecting Lists.

Both types of lists can be pulled in from S3 and can be provided in any of the following formats:

  • Plain text
  • Structured Threat Information eXpression (STIX)
  • Open Threat Exchange (OTX) CSV
  • FireEye iSIGHT Threat Intelligence CSV
  • ProofPoint ET CSV
  • AlienVault Reputation Feed format

You have a hard limit of one trusted IP set and six threat intel sets per AWS account. Lists can contain single IPs or CIDR ranges.

GuardDuty doesn't generate findings for any non-routable or internal IP addresses in your threat lists. It also doesn't generate findings based on activity that involves domain names included in your threat lists.

According to the latest documentation, it ONLY generates findings based on activity that involves IP addresses and CIDR ranges in your threat lists. IMHO, this limits its usefulness, given the shelf life of IPs used by attackers.
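
If you want to manage these lists outside the console, the GuardDuty API exposes them as threat intel sets. Here is a hedged CLI sketch; the detector ID, bucket, and object key are placeholders, and you should confirm the accepted format identifiers in the API reference before relying on this:

aws guardduty create-threat-intel-set --detector-id 1234567 --name my-threat-list --format TXT --location https://s3.amazonaws.com/my-bucket/threat-list.txt --activate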

Findings

Generated Findings

My environment is quite small, so I generated some sample findings so you could see what the dashboard looks like. It appears I have one alert that's not a sample.

Severity Levels

The severity levels are broken out into High, Medium, and Low. GuardDuty uses a numeric value to grade each finding:

  • High finding: falls within the 7.0 - 8.9 range
  • Medium finding: falls within the 4.0 - 6.9 range
  • Low finding: falls within the 0.1 - 3.9 range

The documentation says a High finding "indicates that the resource in question (an EC2 instance or a set of IAM user credentials) is compromised and is actively being used for unauthorized purposes."

A Medium finding "indicates suspicious activity, for example, a large amount of traffic being returned to a remote host that is hiding behind the Tor network, or activity that deviates from normally observed behavior."

A Low finding "indicates suspicious or malicious activity that was blocked before it compromised your resource."

Finding Example - Unprotected Port on EC2

GuardDuty uses the following format for the finding types it generates.

ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact

In the finding below, the finding type is Recon:EC2/PortProbeUnprotectedPort. All the different types can be found in the documentation under "Complete List of GuardDuty Finding Types".

In this alert it's letting me know that it's a Low severity finding and that someone (121.18.238.119) was conducting a port probe against my instance. The IP originates in Hebei, China, and the threat list used to detect this activity appears to have come from ProofPoint (maybe this is how the partnerships mentioned above are leveraged).

The other sample alerts contain similar information. You can get a full list of the alert schema in the GuardDuty documentation.

CloudWatch

At the time of this writing, there is no UI support for CloudWatch. I'm sure it's only a matter of time before the team builds this functionality, but I figured I would mention it here.

You can create rules and targets via the aws cli. There are some samples in the documentation if you care to look.
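
As a rough sketch of what that looks like, you could route GuardDuty findings to an SNS topic with a CloudWatch Events rule. The rule name and topic ARN below are placeholders of my own; verify the event pattern against the current documentation:

aws events put-rule --name guardduty-findings --event-pattern '{"source":["aws.guardduty"],"detail-type":["GuardDuty Finding"]}'
aws events put-targets --rule guardduty-findings --targets Id=1,Arn=arn:aws:sns:us-east-1:123456789012:guardduty-alerts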

Access

AWS has gone ahead and created a couple of IAM policies that you can attach to a user, group, or role.

  • AmazonGuardDutyFullAccess
  • AmazonGuardDutyReadOnlyAccess

You could also deny access to GuardDuty findings, and limit access to GuardDuty resources. I suggest taking a look at the GuardDuty API Documentation so you can get a complete list and build your policies as you see fit.
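
For example, granting a hypothetical analyst read-only access might look like the following. The user name is made up, and the policy ARN assumes the standard AWS managed policy prefix:

aws iam attach-user-policy --user-name security-analyst --policy-arn arn:aws:iam::aws:policy/AmazonGuardDutyReadOnlyAccess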

Summary

That's really all I have to say about it after reading the documentation, the blog posts, and playing around with it for an hour or so.

I need to study the API a bit more, as this is something you would want to automate into the other security processes within your organization.

If you find this post useful and educational you can donate via PayPal.Me here: $1, $2, $3, $4, $5 or Custom.

I should also mention that it was really hard not to type GuardDog. I had to do a triple take to make sure I didn't mistype it.

Happy Threat Hunting!

References

Amazon GuardDuty User Guide
Amazon GuardDuty API Reference
Amazon GuardDuty - Continuous Security Monitoring & Threat Detection
