The GLBA Safeguards Rule has changed, and it isn't just banks that need to understand it.

Back in 1999, the Gramm-Leach-Bliley Act was passed in the United States. Its main purpose was to allow banks to offer services that had previously been forbidden by laws dating back to 1933. In doing so, it extended the scope of the rules surrounding those services beyond banks to any organization offering them.

A primary component of this Act, Section 501, requires the protection of non-public personal information. It states, "...each financial institution has an affirmative and continuing obligation to respect the privacy of its customers and to protect the security and confidentiality of those customers' nonpublic personal information."

Privacy, Security, Confidentiality. Could you identify a hot topic in information security today that doesn't involve one or all three of those areas? Couple that intense interest with the changes in technology over the past 20 years, and the pace at which technology continues to change, and you can understand why amendments to the GLBA were needed.

The main rule we will discuss here is the Standards for Safeguarding Customer Information, commonly called the Safeguards Rule. Originally published in 2001, this rule was just amended (January 10, 2022), and some of the most important provisions become effective on December 9, 2022. The overarching goal of this rule is to require "the administrative, technical, or physical safeguards you use to access, collect, distribute, process, protect, store, use, transmit, dispose of, or otherwise handle customer information."

Does this apply to me?

You can't duck the issue based on size. Nearly all of the rules apply regardless of size; the exceptions are some new elements that do not apply to entities maintaining fewer than 5,000 consumer records. The most important qualifiers are:

  1. You are considered to be a "financial institution" under the GLBA's definitions, or
  2. You receive information about customers of financial institutions.

If either of these is true, then the GLBA rules apply to you.

What is a financial institution according to the GLBA? The exact definition is, "any institution the business of which is engaging in an activity that is financial in nature or incidental to such financial activities as described in section 4(k) of the Bank Holding Company Act of 1956, 12 U.S.C. 1843(k)." In case you don't have the Bank Holding Company Act handy, here is a list of examples of financial institutions that the GLBA applies to, as noted in 16 CFR 314.2(h)(2)(iv):

These are just some examples, and this list is not all inclusive. Note that simply letting someone run a tab or accepting payments in the form of a credit card that was not issued by the seller does not make an entity a financial institution.

Ok, it applies to me. Now what?

At the heart of the Safeguards Rule are a number of key elements involving the development, maintenance, and enforcement of a written information security plan (ISP). The key aspects and notable amendments:

  1. A single qualified individual must be designated to oversee, implement, and enforce the ISP. This is a change from the original language, which allowed for one or more employees to coordinate the program. If your organization doesn't have a qualified individual on staff, a third-party company can be utilized for this function. This does, however, require the designation of a senior member of the organization to direct and oversee the third-party representative(s), and all compliance obligations remain with the hiring organization.
  2. A risk assessment process must be in place. This process must identify and assess risks to customer information in each relevant company area and evaluate the effectiveness of current controls implemented to mitigate those risks. This is not a new requirement; however, for companies maintaining information on 5,000 or more customers, the following elements must be part of the risk assessment documentation:
    1. The criteria used to evaluate and categorize risks and threats to information systems
    2. The criteria used to assess the confidentiality, integrity, and availability of information and systems used to process customer information and adequacy of the existing controls
    3. A description of how identified risks will be mitigated or accepted, and how the ISP will address those risks
  3. Design and implement a safeguards program, and regularly monitor and test it. This is not a new requirement; however, the amendments added eight specific types of safeguards that must be part of this program:
    1. Physical and technical access controls, including a review of authorized users
    2. Identification and evaluation of the data, personnel, devices, and systems used that interact with customer data
    3. Encryption of all customer information, both in transit and at rest (a brief illustrative sketch of encryption at rest follows this list)
    4. Secure development practices and security testing for applications used for transmitting, accessing, or storing customer information
    5. Implementation of multi-factor authentication for any information system that contains customer information accessed by any individual. This requirement can also be met if the qualified individual noted in item 1 has approved an equivalent or stronger control.
    6. Procedures for the secure disposal of customer information no later than two years after the last date the information is used unless retention is otherwise required or necessary for legitimate business purposes
    7. Implementation of change management policies
    8. Implementation of policies, procedures, and controls to monitor and log authorized user activity and detect unauthorized use
  4. Routine testing and monitoring of controls enforcing the safeguards program must be conducted to evaluate their effectiveness. This is not a new addition; however, two specific control tests are now required for companies maintaining information on 5,000 or more customers:
    1. Conduct vulnerability scanning at least every six months
    2. Undergo penetration testing at least annually
  5. Specific policy requirements for training of information systems personnel and general security awareness training. The amendments add specificity to the existing training requirements that were already in place and require formal documentation of the policies. These specific elements include:
    1. Security updates and training procedures to address new risks specific to systems that are running in the enterprise's environment
    2. Verification that key personnel are maintaining their knowledge of threats and available defenses against those threats
    3. General security awareness training requirements and procedures for all employees and engaged third parties utilizing the enterprise's information systems
  6. The requirement to oversee service providers that assist in the preparation, maintenance, and use of the environment handling consumer data was part of the original rule. This requires the selection of service providers capable of maintaining appropriate safeguards, and contract language that mandates these safeguards. The amendments add the additional requirement that service providers must be periodically assessed on the risks associated with their use and the adequacy of the safeguards they have implemented.
  7. A new requirement for entities handling 5,000 or more consumer records is the existence of a written incident response plan. There are seven requirements for this plan in the new amendments:
    1. Stated goals of the response plan
    2. A description of internal procedures for responding to a security event
    3. The definition of roles, responsibilities, and levels of decision-making authority for individuals involved in the incident response process
    4. Plans for handling internal and external communications, and details on the use of information sharing resources
    5. Procedures for the remediation of identified weaknesses in information systems and associated controls
    6. Requirements for documenting and reporting of security events, procedures classifying incidents, and the activation of the incident response plan
    7. A defined process for post-incident evaluation and revision of the incident response plan following a security event
  8. Another new requirement for entities handling 5,000 or more consumer records is a written report, presented to the enterprise's governing body or a senior/executive-level individual, at least annually. This report is to be created by the qualified individual responsible for oversight of the ISP, as noted in item number one. There are two elements required to be in the report:
    1. The overall status of the ISP, including its compliance with the updated Safeguards Rule
    2. Recommendations for changes or improvements, and any other material matters related to the ISP
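
One of the most concrete technical items in the list above is item 3.3, encryption of customer information in transit and at rest. The snippet below is only a minimal sketch of the at-rest half, assuming the third-party Python cryptography package and an invented account_number field; it is not a prescribed GLBA control, and in practice the key would be held in a key-management service or HSM rather than generated next to the data.

```python
# Minimal sketch of encrypting a customer data field at rest.
# Assumes the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key comes from a key-management service or HSM and is
# never generated and stored alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical field containing nonpublic personal information.
account_number = b"1234-5678-9012"

ciphertext = cipher.encrypt(account_number)  # value stored in the database
recovered = cipher.decrypt(ciphertext)       # readable only with the key

assert recovered == account_number
```

Encryption in transit, by contrast, is usually handled at the protocol layer (for example, TLS) rather than in application code.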

That's a lot of stuff! How long do I have to comply?

Covered financial institutions should be in compliance with the non-amended components of the Safeguards Rule already, since the formal effective date of the rule was January 10, 2022. The FTC has allowed for an effective date of December 9, 2022, for the amended provisions due to the length of time required to implement them.

Are there penalties for non-compliance?

Besides the potential costs associated with breaches, successful malware attacks, ransomware, and the like, there are penalties that can be assessed by the FTC for non-compliance. These penalties can apply to the enterprise and/or individuals responsible for compliance as follows:

So, if this does apply to you and your organization, hopefully you are already compliant and none of this was a surprise to you. If this doesn't apply to you, I commend you for reading on. And if it applies and you are completely surprised by the requirements and amendments, the clock is ticking!

As covered in our prior post, the current shared experience with COVID-19 presents an opportunity to improve an organization’s contingency planning and continuity of operations plan (COOP) using a “lessons learned” exercise. So, what about areas that are unique to a pandemic, like this COVID-19 event? Some people may be asking – doesn’t typical contingency planning just apply to the computers and technology equipment?

Well, yes and no. Although contingency planning has a healthy focus on technology, it still requires people to interface with that technology, to configure and program it so that it performs productive work, and to fill a number of other roles. In truth, due to the ubiquity of technology within any business, contingency planning is a company-wide effort. Not only the planning, but the execution of the plan at any level will require the cooperation of business managers and technology managers. What needs to be understood is that contingency planning, from a business perspective, is a vital part of COOP, and it is within COOP and information security contingency planning that the procedures for addressing a pandemic belong. Information system contingency plans, as well as COOP, cannot be created in a vacuum, as their scope impacts the entire organization. This is a primary driver for the need to ensure these plans are officially recognized and distributed to all parts of the company. A good source of information on how to address contingency planning is the National Institute of Standards and Technology (NIST) publications, which is where much of the following guidance can be found.

Pandemic Contingency Plan

Pandemic contingency actions, as may appear obvious now, focus on protecting the workforce while still conducting some form of business operations. When an incident occurs that impacts an organization’s personnel, it will likely impact information system operations as well. A prime example of this, seen with COVID-19, was the sudden, immediate need for staff to work remotely. This step is clearly linked to proper considerations for the safety, security, and well-being of personnel during a disruptive event, which is a goal of contingency planning. Organizations should also have in place methods and standards for sending out responsive messages to personnel, as well as considerations for responding to media inquiries on the topic of staff safety and ongoing operations. Considering the heightened awareness of these issues due to COVID-19 and generally increased security concerns throughout our society, personnel considerations for staff warrant discussion in all contingency planning related areas.

To help define the planning scope, we need to understand that a pandemic (like COVID-19) is a global outbreak of disease that occurs when a new virus emerges in human populations and causes serious illness. Because there is little natural immunity, the disease can spread easily from person to person, rapidly moving across the country and around the world. The organization’s COOP and contingency plan should contain the steps and details to address how the organization will:

  1. Protect employees' well-being during a pandemic
  2. Sustain essential business functions during significant times of absenteeism
  3. Support the overall national and global response during a pandemic
  4. Communicate guidance and support to stakeholders during a pandemic

Pandemic Unique Considerations

As we have seen with the COVID-19 response, common strategies to protect personnel health during a pandemic outbreak include more strict hygiene precautions and a reduction in the number of personnel working in close contact with one another through the implementation of “social distancing.” To address this challenge, organizations need to have in place approved telework arrangements to facilitate social distancing through working at home while sustaining productivity.

In some situations, organizations may need to use personnel from associated organizations or contract with vendors or consultants if staff are unavailable or unable to fulfill responsibilities. Preparations should be made during contingency planning development for this possibility to ensure that the vendors or consultants can achieve the same access as staff in the event of a pandemic. Once personnel are ready to return to work, if the facility is unsafe or unavailable for use, arrangements should be made for them to work at an alternate site or at home. This should be an alternate space in addition to the alternate site for information system recovery. Personnel with home computers or laptops should be given instruction, if appropriate, on how to access the organization’s network from home.

Significant events like COVID-19 take a heavy psychological toll on personnel, especially if there has been loss of life or extensive daily disruption. Organizations should be prepared to provide grief counseling and other mental health support. Employee Assistance Programs (EAP) should be considered as a useful and confidential resource to address these issues. Nonprofit organizations, such as the American Red Cross, also provide referrals for counseling services as well as food, clothing, and other assistance programs. Personnel will generally be most interested in the status of their health benefits and payroll, so it is very important that the organization communicate this status.

The Key – Prior Planning

In addition to the above, the best way to prepare for a possible pandemic health crisis really comes down to planning carefully. Once a plan has been assembled, not only do you want to be sure that it is stored in a secure location, but also have copies appropriately distributed. A crucial component of these contingency plans is that they are reviewed on an annual basis to address changes that occur over time. Be sure that your contingency plan includes:

  1. Reviewing relevant policies and practices from authoritative sources, such as government agencies. In the case of COVID, reviewing information from the Centers for Disease Control and Prevention (CDC), would be pertinent
  2. Developing human resources management strategies to deal with circumstances that may arise during a pandemic health crisis
  3. Testing plans of action and telecommunication systems to ensure readiness
  4. Communicating with employees, managers, and other stakeholders prior to, during, and after the pandemic health crisis

When planning, one of the first elements, and one that can be difficult to get your arms around, is deciding who will be responsible for what. Generally speaking, organizations should rely on their business unit structure to help identify where specific tasks should fall. This straightforward approach should be a first step and will likely show that most operations remain within the same unit; it will be critical to review those operations to ensure that inter-departmental support from other areas is not required. There are additional overarching principles for roles and responsibilities that will need to be clearly defined for this plan. When planning for overall roles and responsibilities, areas to consider are:

Organization Roles and Responsibilities

  1. Provide resources for training and testing
  2. Ensure communication systems work
  3. Develop guidance on protecting sensitive information and providing for contingency hiring

Supervisory Roles and Responsibilities

  1. Plan for short and long-term disruptions
  2. Stay in constant touch with employees and leadership
  3. Develop guidance on protecting sensitive information and providing for contingency hiring

Employee Roles and Responsibilities

  1. Be ready for alternative work arrangements
  2. Protect sensitive information
  3. Stay in constant touch with management

If these considerations are not part of your overall contingency plan for pandemic response, review and see where they might fit best in the existing framework. If you were one of the many organizations that were caught off-guard by the actions needed to address COVID-19, this should help as a starting point for structuring future plans. What cannot be overstated is that the time to act and produce a relevant contingency plan and COOP is now.

Contact SynerComm to find out how our consultants can assist with not only the pandemic contingency planning, but with technical support and guidance in the areas of hardware, software and networking.

From a quick assessment of what has been published thus far on the CMMC regulation and its overall goal, it appears that contractors’ lack of information security will no longer be tolerated by the DoD. Beginning with the introduction of the new regulation to the public in January of 2020, it is expected that new contractual requirements will include CMMC starting in June of 2020, and enforcement for current contractors starting in September of 2020. The current proposed structure for achieving the CMMC level of security is somewhat advanced, but not unprecedented. One of the more significant moves for this effort is the requirement that entities be audited by an independent 3rd party prior to any certification being awarded. The audit will likely require evidence to be presented to show that the correct level of security controls is present and functioning as required. Despite this regulation being new, it will likely be composed of current NIST controls, as chosen by the DoD.

Given the nature of the Federal Information Security Modernization Act (FISMA), which is to protect all federal data, by means of the NIST controls, it is hard to conceive of any other security framework being used to meet the goals of CMMC. Even here, at the assurance level for the security controls, we find an interesting item for auditors, as they will be required to attest to the accuracy of their findings.  This step is likely in place to link auditors directly to an organization in the event of control failure or data breach. As such, it appears that the audit process will be evidence intensive, with audit artifacts and audit trails being required to demonstrate compliance with the selected controls.

So, how did we get here? After a review by the DoD, it was determined that only 1% of contractors actually have some form of proper data protection in place, which naturally gives rise to concerns over whether the military’s highly sensitive data is secured against other nation-states that wish to obtain it. These nation-states and their activities are collectively known as the ‘advanced persistent threat’ (APT), as they are looking to obtain the targeted data at almost any cost, including working to infiltrate systems for years. Additionally, there is the threat from criminal actors who pursue this data so that it can be sold on the black market to the highest bidder. Either of these attackers represents a significant threat to military contractors, mainly due to the lack of appropriate information security controls in place.

Recently, the Department of Defense (DoD) announced a new initiative for the information security component of defense contractors, sub-contractors, and the supply chain for DoD projects. This regulation is coming forward with the goal of securing the complete supply chain for the DoD, which has had historical issues with keeping sensitive data secure. Currently, DoD contractors and subcontractors are obligated to protect the data they are entrusted with by having an information security program in place that deploys the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-171 security controls. Despite those obligations, contractors have consistently had issues with protecting the military data entrusted to them, resulting in data exposure and breaches.

The concerns over data security materialized in stark reality when a civilian contractor was breached early in 2018, resulting in the exposure of more than 600 gigabytes of highly sensitive information to China through its cyberattack efforts. This breach significantly impacted the US Navy’s Sea Dragon project for the submarine fleet and the overall capability for conducting subsurface warfare operations. The exposure also included the breach of the electronic warfare library for that project, which contains a notable amount of highly classified data, as the name implies. What cannot be overstated is the value of that data loss, as it represents untold years of the United States’ accumulated, hard-won knowledge and expertise in several areas of science, research, and the advancements from associated discoveries. It appears that, due to this breach and others like it, and the assessment of the poor computer security posture of DoD contractors, the DoD has been forced to take a stance of “no tolerance” for gaps within information security programs.

This breach and other incidents like it demonstrate that civilian contractors have not taken appropriate actions to properly deploy information security controls to protect DoD data. This is not a defense sector or DoD-only issue, as the loss of intellectual property (IP) across the nation has been ongoing for a number of years, with the public only recently gaining a small insight into this major issue. What needs to be understood is the impact of the loss of the country’s IP to the rest of the globe, due to the apparent lack of concern regarding securing company-owned systems and data. For some, the idea of IP loss is difficult to grasp or to put in easy-to-understand terms; however, we can put some measurement to it over the past several years. From reports, the loss of IP has a measurable financial impact, with estimates placing the cost of stolen IP at $600 billion in lost revenue for the United States. That includes several billions of dollars lost to counterfeit goods that compete not only on the domestic market, but on the international market as well.

As we move forward in the digital age, the critical nature of having secured IT systems is becoming more and more glaring.  It seems clear that the information security factor will continue to have a large impact on all business sectors, with the military industry being the first to be called on to fully secure their systems. It is very likely this trend will expand outward, as people continue to express overwhelming concern over their personal data and how systems and applications are collecting and monitoring actions and activities. Companies that decide to get ahead of this significant problem are showing a commitment to long-term investment that should have positive impact on not only profit, but also revenue in the years to come.

Once full details on CMMC are made available, we will look to post a blog that gives a clearer definition as to what the CMMC requirements entail.

Medical community challenge:

In a business environment where resources are limited, compliance requirements abound, and budgets are constantly challenged to meet cost containment targets, the complexity of the regulations your business is obligated to comply with can present a challenge. This challenge becomes even more difficult within the dynamic environment of hospitals, doctors’ offices, and all supporting elements of the medical profession. One of the key elements of facing this challenge is understanding what defines Protected Health Information (PHI) and what qualifies an organization as a HIPAA Covered Entity.

In broad terms, PHI is information that deals, or is associated in any way, with medical details or medical records of an individual. For the term “Electronic Protected Health Information” (ePHI), the definition doesn’t change much, as it simply encompasses the information or data being maintained in an electronic format, as on a computer or any other digital device. To clarify PHI more precisely, the Privacy Rule states it is “any information held by a covered entity which concerns health status, the provision of healthcare, or payment for healthcare that can be linked to an individual”. Most people respond with “wow, that sounds like it covers a lot” – which it does. Not only is the health-centric data covered by HIPAA, but so is data that directly identifies a person, or a “personal identifier”. To help get our arms around this topic, we can gain an understanding of what HIPAA considers a personal identifier by reviewing a section of the regulation (Sections 164.514(b) and (c)) for the Privacy Rule. What we can see is that HIPAA considers the following 18 data points as personal identifiers:

Keep in mind the above is not an exhaustive list, as it is the definition by HIPAA that drives what can be considered a personal identifier.  What should be understood is that this is a starting point for the listing of what needs to be considered when looking to secure and keep private the PHI and ePHI within your organization. These are the data sets that need to be located and tagged so that they can be properly secured.  A good methodology is to review the official definition and decide if a particular data element qualifies as protected under HIPAA. It is advisable to err on the side of caution and include data that “could be” viewed as sensitive, because making the wrong determination can easily lead a company to having to pay HIPAA fines and penalties. Despite the small possibility that some data could have an extra layer of protection with this broader approach, it likely is a small price to pay when considering the potential fines and penalties – as was seen with Anthem Inc, reported to have paid $115 million to settle lawsuits over its HIPAA information breach.
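
As a starting point for that locate-and-tag exercise, even a very small pattern scan can surface a few of the more recognizable identifiers. The patterns and sample text below are illustrative assumptions only, not an authoritative HIPAA identifier detector; most of the 18 identifiers (names, medical record numbers, and so on) require context and data-classification tooling that simple patterns cannot provide.

```python
import re

# Illustrative patterns only; real PHI discovery needs classification
# tooling and human review, not just regular expressions.
PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email address":             re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US phone number":           re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_possible_identifiers(text: str) -> list[tuple[str, str]]:
    """Return (identifier type, matched value) pairs found in the text."""
    hits = []
    for label, pattern in PATTERNS.items():
        hits.extend((label, match) for match in pattern.findall(text))
    return hits

sample = "Patient reached at 414-555-0199, jane.doe@example.com, SSN 123-45-6789."
for label, value in flag_possible_identifiers(sample):
    print(f"{label}: {value}")
```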

This brings us to the next key element for HIPAA – which organizations are obligated to adhere to HIPAA, and am I one?

Here again, we see that HIPAA protections apply to a wide array of organizations and businesses – obviously, these entities are linked to, or perform some activity with, health information. It is the connection with that data that brings in the HIPAA regulation and its requirements, as described below. The organizations that deal with medical data are officially termed “covered entities”. Any contractors, vendors, or 3rd parties whose relationship with a covered entity involves PHI or ePHI fall under the official term of “business associates”. The requirements of HIPAA extend to business associates, through the covered entity, and are required to be clearly defined within the Business Associate Agreement (BAA). The BAA is to be a component of the contractual agreement between the two organizations.

For clarity on what qualifies as a covered entity:

Covered entities are the individuals, institutions, or organizations that maintain patient healthcare or payment information or would reasonably be expected to come into contact with PHI in the course of their daily duties – mostly, healthcare providers, health plans, and healthcare clearinghouses. Examples of covered entities include:

What about 3rd party vendors? If a 3rd party is engaged by a covered entity, then a Business Associate Agreement (BAA) is required, per HIPAA. A BAA is a focused document that addresses the requirements of HIPAA and acknowledges that the business relationship between the two parties will involve PHI or ePHI. To help define where these components apply, here is a more detailed explanation of a Business Associate:

A Business Associate is a person or entity, other than a workforce member, who performs certain contractual functions or activities for a covered entity, or provides certain services to a covered entity, when those functions involve the access to, or the use or disclosure of, PHI. Per HIPAA, Business Associate functions or activities include (but are not limited to) creating, receiving, maintaining, or transmitting protected health information for functions such as claims processing or administration, data analysis, processing or administration, utilization review, quality assurance, patient safety activities, billing, benefit management, practice management, and repricing.

It should be clear that the protections for HIPAA-defined medical information and data follow that data, no matter where it resides or who handles it. If your organization has any dealings or contact with medical companies or entities, and you do not have HIPAA protections in place, it would be worthwhile to perform a thorough review to be certain. That review should be fully documented and put forth to proper legal counsel to consider and make a definitive conclusion as to the obligations your company has under the HIPAA regulation.

Too often organizations seem to not have a good understanding of what data they have within their systems, and this leads to a lack of knowledge as to what legal obligations a company has committed itself to. Don’t let this happen to you – leverage the knowledge presented here, along with the information that is publicly available to make a clear determination as to what information security protections your company needs.

Medical community challenge:

In a business environment where resources are limited, compliance requirements abound, and budgets are constantly struggling to meet cost containment targets, the complexity of the regulations your business is required to comply with can present a challenge. This challenge becomes even more difficult within the dynamic environment of hospitals, doctors’ offices, and all of the supporting elements of the medical profession. Of course, these efforts support the critical, life-saving procedures performed for the focal point of the medical community - the patient. However, the digital age that we have moved into over the past 20 years, despite the convenience it offers, comes with risks. Patients have suffered the compromise of personal information, resulting in the patient population expressing considerable concerns regarding how their medical data is handled.

These concerns are not without due cause, given the sensitive business of life support that medical organizations have chosen to engage in, and the information involved with any medical procedure or activity. Those concerns are partly expressed in the Health Insurance Portability and Accountability Act (HIPAA), which compels medical businesses to treat the data they possess with certain protections. We will break down the predominant components of the HIPAA regulation as a basis for gaining a clear understanding of the drivers behind this law. In later postings on this topic, we will explore a strategy to align your organization to the information security requirements defined within HIPAA, HITECH, and the Omnibus Rule.

The Health Insurance Portability and Accountability Act of 1996 establishes requirements for healthcare organizations with respect to ensuring the security and privacy of protected health information (PHI) and electronic protected health information (ePHI). Broadly speaking, the overarching HIPAA principle for this type of data is that it is to remain private. Only people who have a definitive need for that data should be able to access it. Of course, it should go without saying that the only way to provide any kind of privacy is through the effective deployment of security measures to restrict access to and exposure of the data. The principles of privacy and security are irrefutably linked, as you cannot have one without the other, which explains the logic behind the two better-known rules of HIPAA that we will cover below.

There are a number of rules recognized within HIPAA, or what most people have come to call HIPAA, which usually encompasses other healthcare data regulations (e.g., HITECH and the Omnibus Final Rule). Some of the rules are better known than others. Because they were the first established with HIPAA, the best known are probably the Privacy Rule and the Security Rule. However, that’s not where the rules stop. There have been regulatory updates to HIPAA as the issues around the handling of medical data have become better understood. It can be a challenge to keep track of all of these rules:

Now that you have a base-line understanding of what HIPAA is comprised of, we can move on to another primary component of HIPAA, which is understanding the criteria for PHI and ePHI, as well as understanding if you and your organization fall under the HIPAA regulation.
NEXT UP: What is PHI or ePHI and who has to abide by HIPAA?

Microsoft Secure Score. If you’re an IT administrator or security professional in an organization that uses Office 365, then you’ve no doubt used the tool or at least heard the term. It started as Office 365 Secure Score, but it was renamed in April 2018 to reflect a wider range of elements being scored.

What does it do? The tool looks at configurable settings and actions primarily within your Office 365 and Azure AD environment, and awards points for selections that meet best practices. In their words, “From a centralized dashboard you can monitor and improve the security for your Microsoft 365 identities, data, apps, devices, and infrastructure.”

But what doesn’t Microsoft Secure Score do? Microsoft is very good at telling you the great things its products can do, so I won’t repeat them here. The concept is sound, and I applaud them for giving users a tool that prioritizes secure configurations. They have come a long way from having auditing turned off by default in their products, e.g., Server 2000. I will point out why Microsoft Secure Score isn’t enough when it comes to understanding and testing the security of your Microsoft 365 environment.

Reason number 1:  The fox shouldn’t guard the hen house.

I am a Certified Public Accountant (CPA), and as such, I’ve spent a good portion of my life performing audits and assessments. A key independence rule CPAs abide by is: an auditor must not audit his or her own work. Microsoft isn’t exactly independent when scoring its own product’s settings and capabilities. The financial motivation exists for Microsoft to set up a scoring system that makes users feel good about using Microsoft products. Interoperability and performance will always be a higher priority than security.

This fact is furthered by the scoring system setup, which unlocks higher point opportunities with higher priced subscriptions. For example, Microsoft Cloud App Security and Azure Advanced Threat Protection are unlocked with E5 licenses, or as a $5.50 per user per month add-on to an existing E3 license. This can be as much as a 70% price increase. If you want more chances to raise your overall score and have a higher score ceiling, spend more money…a very beneficial side-effect for Microsoft.

Also, remember that Secure Score is reflective of a Microsoft opinion and their subjective value for security controls they believe are important. This differs from widely accepted standards from organizations like NIST (National Institute of Standards and Technology) or CIS (Center for Internet Security) which are vendor neutral and have been refined, improved, and evolved over time.

Reason number 2:  No two environments are alike.

First let me say that Secure Score can be dented and bent to fit different environments. Scoring for certain areas can be manually entered if you have a third-party solution for a control. It will be incumbent on the person checking those controls to match what Secure Score is asking for. This is an all-or-nothing proposition as indicated within Secure Score, “Marking as resolved through third-party indicates that you have completed this action in a non-Microsoft app, and will give you the full point value of this action.”

This is a key area where the Secure Score blanket fails to keep all areas of the entity covered and warm. There are bound to be components and configuration requirements that don’t quite fit what Secure Score evaluates or how it is scored. Think of the myriad of application combinations to handle Customer Relationship Management (CRM), Mobile Device Management (MDM), Security Information and Event Management (SIEM), Data Loss Prevention (DLP), and Multifactor Authentication (MFA) just to name a few.  An independent assessment of the environment that references best practice hardening guides for specific products comprising the solution is the only way to complete a proper evaluation.

Reason number 3:  Security is a journey, and a scorecard makes it a destination.

Don’t get me wrong, I like scores and grades. CPAs generally like to measure and quantify things. Secure Score quantifies security, gives you trends over time on your score, and even allows you to measure your score against others based on a global average, industry average, and similar seat count average.

What I don’t like is how the scores can be manipulated, or how they can be construed. If the O365 administrator wants to improve their percentage of points achieved, the simplest way is to select “ignore” for the scoring areas in which they have earned 0 points. Per Secure Score documentation, “Once you ignore an improvement action, it will no longer count toward the total Secure score points you have available.” Lower the denominator, keep the numerator, and poof! We are more secure. Or are we?

Executives looking at a scorecard may also be satisfied once it has reached a certain percentage of the total available. A project which will move the Secure Score from 650 out of 807 points to 710 out of 807 points appears to make the company about 8% more secure to a non-security decision maker handling the company budget. That project may not make the cut. In reality, any scoring shortage could represent a critical configuration issue that puts information assets at risk. That point may get lost if the focus is score.
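
The arithmetic behind both concerns is easy to demonstrate. The numbers below are the hypothetical figures from the paragraphs above, plus an invented 40 points of ignored actions; they are not values from any real tenant:

```python
def percent(earned: int, available: int) -> float:
    """Secure Score expressed as a percentage of available points."""
    return round(100 * earned / available, 1)

earned, available = 650, 807
print(percent(earned, available))               # 80.5

# "Ignore" improvement actions worth 40 points that earned 0 points:
# the numerator is unchanged, but the denominator shrinks.
print(percent(earned, available - 40))          # 84.7 -- looks better, nothing changed

# The proposed project: 650 -> 710 out of 807 points.
print(round(100 * (710 - 650) / available, 1))  # 7.4 percentage points "more secure"
```

Neither number says anything about whether the remaining gaps are the critical ones.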

Reason number 4:  A by-product of automated security is a false sense of it.

We hear stories all the time about breach activities that were being reported by automated logging systems, except no one was looking at the logs. IT management puts a tool in place and checks a box that implies the organization is secure in that area. Secure Score is ripe for this. Several improvement actions that will increase your score involve reviewing reports. When a link for a report is clicked, Secure Score assumes the report was reviewed and awards points. To keep the points, the link must be clicked within specific time intervals from within the Secure Score user interface, but this process does not record what was reviewed, or any notes or actions resulting from the review. There is no substitute for the actual review process and confirming that the review is happening.


Also consider an environment made up of multiple applications from different vendors where automated security evaluations, like Secure Score, are put in place. Each application that makes up the system interacts with other applications, potentially creating security control blind spots. For example, consider an email system that hands off outbound email to a 3rd party DLP solution. Are there security holes in the process that transfers data in and out of the DLP application? Identifying those weaknesses requires a holistic view, measured against current accepted best practices, that just isn’t offered by Secure Score or any other automated solution.

In conclusion, I think Secure Score has a place in monitoring and evaluating an organization’s information security posture. Microsoft is taking recommendations from its user base and is working to improve Secure Score’s results and widen its coverage. It is a barometer of an information security environment that could produce important information when properly utilized.  

The bottom line, though, is that it is just one tool. It cannot replace a diligent information security program or, at a higher level, an information security management system. Independent assessment and review of controls, policies, procedures, and the people managing the environment work in tandem to assure the confidentiality, integrity, and availability of an organization’s information assets. Consider the diversity of an organization’s landscape:

These areas are all interdependent, yet all have their own unique traits and ways to be assessed and secured.  No one measurement tool is enough.

By Jeffrey T. Lemmermann, CPA, CISA, CITP, CEH - Information Assurance Consultant

GDPR has been in place since May 25th, 2018 and has already been used in legal actions against companies, with over 200,000 cases reported within this first year. The law is expected to make a notable impact on companies, as it has considerable fines and penalties. Even when compared to HIPAA and FISMA, GDPR has the most threatening teeth of any law to date. Even without GDPR being in full force, information security infractions have been getting more attention from multiple angles.  There have been some examples of how expensive this can get, as seen with Alphabet and its $9.4bn in fines, over the past 3 years. It would appear by these recent historical events that information security is rising to a point of serious contemplation for businesses world-wide.

However, this should not be a news flash by any means. The implementation of a serious data protection law by the European Union has been in development for some time now (starting in 1995). Most notably, the now infamous “Right to be forgotten” was generating news and conversation on this very topic. Even still, as noted above, companies seem to have been caught flat-footed and have had to pay dearly for infractions.

GDPR drives the idea, at least in part, that information is a business asset, and as such, businesses are obligated to manage that asset in a manner that will not bring harm to its customers and employees. The public has voiced its concerns numerous times, indicating that loss of privacy has a legitimate ability to cause harm to an individual. GDPR gives those voices traction to hold organizations accountable for lack of proper management, security, and ultimately privacy of their Personally Identifiable Information (PII).

So, how can a company successfully meet the requirements of GDPR? Let’s explore the best viable answer to that question.

As a general principle of information security, evidence is the best method to prove how an organization deploys security controls. GDPR is no exception, as it repeatedly calls out the requirement to be able to “demonstrate compliance”, as seen in Chapter 2, Article 5 of the regulation, where the principles of processing personal data are addressed. To be clear, evidence is what other compliance frameworks, and the audit community in general, refer to as ‘audit artifacts’ or ‘audit trails’. Not surprisingly, within the United States, the requirement for audit artifacts is also seen in regulation, namely HIPAA and FISMA, both of which use the NIST standards to achieve security. The HIPAA-focused security controls are seen in NIST SP 800-66, with FISMA using NIST SP 800-53, tying in the NIST Cybersecurity Framework to round out an information security program. Both regulations then use the NIST security control base, which in turn supports privacy for IT systems and data.

Which brings us to the next important question: “What about privacy, isn’t that part of the GDPR?” Excellent point. Here again, NIST shows strength as a framework, as SP 800-53, rev 4, includes privacy controls in Appendix J. When held up against the extensive GDPR requirements, it is clear that these privacy controls can easily be leveraged to support the goals of GDPR. Some examples from NIST:

Naturally, this leads our conversation to “where do I need to apply these controls?” The data identified for protection by GDPR and NIST is broadly understood as Personally Identifiable Information (PII), and both frameworks have similar descriptions; GDPR simply calls it “Personal Data”. GDPR appears to have the broader of the two definitions, as seen below:

GDPR PII:  ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

GDPR (article 4, Definitions, paragraph 1)

NIST PII: (Personally Identifiable Information): Information which can be used to distinguish or trace the identity of an individual (e.g., name, social security number, biometric records, etc.) alone, or when combined with other personal or identifying information which is linked or linkable to a specific individual (e.g., date and place of birth, mother’s maiden name, etc.).

For any company system, these are the data sets that you want to ‘tag’ or search for to ensure that the proper protections are in place. Once that footprint is well understood, you have the starting point not only for deploying your security controls, but also for checking that the privacy controls are in place. In the case of GDPR, the privacy controls repeat the requirement that signed consent be obtained from the data subject (much like HIPAA), with a number of notable exceptions – so be certain to review them for a full understanding. When considering how to tackle the requirements of not only GDPR, but FISMA, HIPAA, or any other information security law or concern, the best place to look is NIST, in my opinion. NIST not only offers the most complete, thorough, and well-researched controls, it is also the framework recognized by the US government and federal courts. Putting NIST controls in place puts any company in an advantageous position, not only for meeting the requirements of a potential government contract, but also for demonstrating the positive actions the company takes regarding information security if it is ever questioned in court.
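
One lightweight way to begin that tagging exercise is a simple inventory that records, for each place personal data lives, the lawful basis relied upon, whether consent is documented when consent is the basis, and which controls are believed to cover it. The structure and field names below are assumptions for illustration; real programs typically rely on data-classification and records-of-processing tooling.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataRecord:
    """One entry in a hypothetical inventory of personal data (GDPR 'personal data' / NIST PII)."""
    data_element: str            # e.g., "customer email address"
    system: str                  # where it is stored or processed
    lawful_basis: str            # e.g., "consent", "contract", "legitimate interest"
    consent_documented: bool     # recorded consent on file, where consent is the basis
    controls: list[str] = field(default_factory=list)  # NIST SP 800-53 controls believed to apply

inventory = [
    PersonalDataRecord("customer email address", "CRM database", "consent", True,
                       ["AC-3", "SC-8", "SC-28"]),
    PersonalDataRecord("location data", "mobile app analytics", "consent", False,
                       ["AC-3"]),
]

# Flag entries where consent is the claimed basis but is not documented.
for record in inventory:
    if record.lawful_basis == "consent" and not record.consent_documented:
        print(f"Review needed: {record.data_element} in {record.system}")
```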

GDPR can offer some insight on how the overall public is viewing information security, and how that scope is more expansive than one might initially think. Interestingly, GDPR addresses an area that came as a surprise to me, which is centered around the use of ‘junk mail’ and spam.  Both are addressed within the regulation, which in turn, will reduce the amount of unwanted traffic across your inbox, as well as your mailbox (if you reside in the EU).

Overall, from not only review of the regulation and associated writings on the subject, but from knowledge of the federal level protections, GDPR is very much in line with the principles of FISMA, if not directly in line with some of its stated requirements.  To date, there is no officially identified framework to address the GDPR requirements, and based on my assessment, it makes the most sense to look to the NIST framework to address this shark-toothed law. Not to mention, if you have any federally sourced data on your system, FISMA is in play within your organization already, which requires NIST protections be in place. As an added bonus, if you have no other data privacy or security concerns past GDPR, and you are based within the U.S., deploying NIST puts you in alignment for the only law within the country (currently).  As several people have already stated, the introduction of GDPR will most likely result in some sort of similar, if not more robust, new regulation within the United States.  So, if you’re based in the U.S., buckle up, the ride is most likely not over.

In the end, the ability to address GDPR is not insurmountable – it simply is an area that requires a well-thought-out, managed approach and plan, as is true for many areas in business. Consider these items to start that process:

  1. Review the GDPR regulation and/or gain knowledge on where it applies to your company, possibly accomplished via a mapping exercise
  2. Review the security and privacy controls from NIST and determine where significant gaps exist in your current security and privacy posture
  3. Begin remediation of the gaps, tracking your progress to understand (and start to limit) your company’s exposure to GDPR infractions

SynerComm can assist you with assessing your security or privacy controls status to address any framework, including PCI-DSS, FISMA or HIPAA. Contact us today for assistance on your information security needs!

Are you using a framework to establish your information security program? If not, I get it; it’s complicated. On second thought, have you lost your mind?

I’ve been there. A number of years ago, while taking up a resolution to better document and organize a network that was developing rapidly, I began researching frameworks, mainly ISO and NIST.  How many pages? What??? That is just the description book; there is an implementation novel as well?

If you are starting from scratch, there is a knowledge barrier that appears to be very steep. Once you see it, you undoubtedly ask yourself, “is it worth the climb?”  Then, the next time you get on an airplane, ask yourself, “are pre-flight checklists worth the effort?”

A pre-flight checklist exists to ensure that all the requirements for a safe flight are in place before the plane leaves the ground. A pretty sound idea, since after it leaves the ground, it’s kind of too late.

I am squarely on the side that it is worth it and can make the case that all corporate IT breaches could have been avoided, or at the very least minimized, with a properly selected and implemented framework. Why? Because mature frameworks will contain controls, situations, and steps that you cannot think of on your own. They are designed to help prepare for the obvious and the unforeseen.

Consider the Target breach from 2013. This was a breach that started with a 3rd party contractor and ultimately led to the compromise of Personally Identifiable Information (PII) of 70 million customers along with data for 40 million credit and debit cards. There are many accounts of what happened during this event, but I’m going to draw a basic chain of events from the most widely accepted descriptions for our scenario:

  1. 3rd Party fell victim to malware attack and had their vendor credentials compromised.
  2. Credentials used to access Target’s hosted vendor site and find web application vulnerability.
  3. Exploit allowed attackers to upload tools to key systems on Target’s network.
  4. New credentials with administrator level access were created within the network.
  5. Databases identified that contained PII. Data copied to extraction point.
  6. Install malware on key systems to scan memory and capture credit card information.
  7. Credit card information copied to extraction point. Data extracted via FTP.

For this breach, let’s look at NIST 800-53, an extremely deep and complete framework, consisting of 18 control families. It is divided into Low, Moderate, and High implementations based on the system impact level. We will assume “Low” for this analysis, which contains 115 controls to be considered (see https://nvd.nist.gov/800-53/Rev4/impact/low). Here are a few of the controls that are directly applicable to each of the steps in the breach:

  1. PS-7: THIRD-PARTY PERSONNEL SECURITY; RA-3: RISK ASSESSMENT
  2. AC-17: REMOTE ACCESS; RA-5: VULNERABILITY SCANNING
  3. AC-3: ACCESS ENFORCEMENT; CM-7: LEAST FUNCTIONALITY; SI-4: INFORMATION SYSTEM MONITORING
  4. AC-2: ACCOUNT MANAGEMENT; IA-2: IDENTIFICATION AND AUTHENTICATION (ORGANIZATIONAL USERS)
  5. AU-6: AUDIT REVIEW, ANALYSIS, AND REPORTING; SE-1: INVENTORY OF PERSONALLY IDENTIFIABLE INFORMATION
    6. CM-5: ACCESS RESTRICTIONS FOR CHANGE; SI-16: MEMORY PROTECTION
  7. SC-7: BOUNDARY PROTECTION; SC-8: TRANSMISSION CONFIDENTIALITY AND INTEGRITY

Note that I said “a few of the controls…” The above is just a quick sampling of controls that would have prevented, or at least minimized, the damage done in the breach. Other controls would also come into play: some address documentation, some address enterprise-level concerns, and some address application-level controls. The key is that they work together and rely on each other.
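
To make that point concrete, the mapping above can be kept as a simple checklist and used to spot attack-chain steps with no implemented control. The sketch below mirrors the two lists above; the "implemented" set is invented purely for illustration.

```python
# Attack-chain steps from the Target scenario mapped to applicable NIST SP 800-53 controls.
# The "implemented" set is hypothetical -- the point is finding uncovered steps.
breach_chain = [
    ("Third-party credential compromise", ["PS-7", "RA-3"]),
    ("Access to hosted vendor site / web app vulnerability", ["AC-17", "RA-5"]),
    ("Tool upload to key internal systems", ["AC-3", "CM-7", "SI-4"]),
    ("Creation of new administrator accounts", ["AC-2", "IA-2"]),
    ("PII database discovery and staging", ["AU-6", "SE-1"]),
    ("Memory-scraping malware installation", ["CM-5", "SI-16"]),
    ("Data exfiltration over FTP", ["SC-7", "SC-8"]),
]

implemented = {"PS-7", "RA-5", "AC-2", "SC-7"}  # hypothetical current state

for step, controls in breach_chain:
    missing = [c for c in controls if c not in implemented]
    status = "covered" if not missing else f"gaps: {', '.join(missing)}"
    print(f"{step:55s} {status}")
```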

Here is an example:

SC-7 is documented this way on the nist.gov website –

SC-7 BOUNDARY PROTECTION

Control Description

The information system:
a. Monitors and controls communications at the external boundary of the system and at key internal boundaries within the system;
b. Implements subnetworks for publicly accessible system components that are [Selection: physically; logically] separated from internal organizational networks; and
c. Connects to external networks or information systems only through managed interfaces consisting of boundary protection devices arranged in accordance with an organizational security architecture.

Related to: AC-4, AC-17, CA-3, CM-7, CP-8, IR-4, RA-3, SC-5, SC-13

CM-7 is in the “Related to:” section, which shows controls that are reliant in either one direction or both directions. Here is CM-7 -

CM-7 LEAST FUNCTIONALITY

Control Description

The organization:
a. Configures the information system to provide only essential capabilities; and
b. Prohibits or restricts the use of the following functions, ports, protocols, and/or services:

Related to: AC-6, CM-2, RA-5, SA-5, SC-7

Each control has related controls, which is why proper implementation of the entire framework is essential to maximizing the benefits.
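
Those dependencies are easy to trace programmatically. The sketch below walks the "Related to" references quoted above (and only those, so the graph is deliberately incomplete) to show how implementing one boundary-protection control quickly pulls in access, configuration, and assessment controls as well.

```python
from collections import deque

# "Related to" references, limited to the two controls quoted above.
related = {
    "SC-7": ["AC-4", "AC-17", "CA-3", "CM-7", "CP-8", "IR-4", "RA-3", "SC-5", "SC-13"],
    "CM-7": ["AC-6", "CM-2", "RA-5", "SA-5", "SC-7"],
}

def dependency_closure(start: str) -> set[str]:
    """Breadth-first walk of the related-control references reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        control = queue.popleft()
        for neighbor in related.get(control, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(dependency_closure("SC-7")))
# SC-7 alone already reaches more than a dozen controls once CM-7's references are included.
```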

So how do you start? Pick your idiom: It’s like writing a novel, eating an elephant, mailing a jeep home, drinking a half barrel of beer. You do it one page, one bite, a few parts, or one glass at a time.

Which framework should you select? Statistically, according to Tenable’s Trends in Security Framework Adoption Survey (https://www.tenable.com/whitepapers/trends-in-security-framework-adoption) released in 2018, 84% of organizations in the US leverage a security framework, with the top four being:

  1. PCI DSS (47%)
  2. ISO 27001/27002 (35%)
  3. CIS Critical Security Controls (32%)
  4. NIST Framework for Improving Critical Infrastructure Cybersecurity (29%)

Look first to your organization and/or your customers. If you are in manufacturing, and have adopted ISO for your manufacturing standards, then the ISO 27000 series (specifically ISO/IEC 27001:2013) probably makes sense. If your organization will be relying on credit card processing, then the PCI DSS framework may be mandatory. If your client base includes governmental entities, then NIST will be a requirement.

So, consider this your crash warning indicator light. It is blinking, and you should probably do something about it!

“The first step towards getting somewhere is to decide that you are not going to stay where you are.”

-Chauncey Depew

The Challenge

You budget for, enable, and staff your organization’s information security program with people, technology, and visionary prowess. As you step back and observe, do you find yourself wondering: Does the business consider the program relevant? Is my security program effective? In a business environment where resources are limited, compliance requirements abound, and budgets are constantly challenged to meet cost-containment targets, this article explores a strategy to align information technology (IT), information security (IS) (note: one is not necessarily inclusive of the other, a topic for another article), system and data owners (SDO, aka your business units), and leadership.

The Opportunity

Aligning IT, IS, SDO, and leadership will strengthen information systems’ value and inherent information security situational awareness, an awareness that, I would argue, is too often shouldered by IT alone. When it comes to managing information assets to assure the confidentiality, integrity, and availability (CIA) of an organization’s systems and data, what roles are in play? Good question; here are the primary ones found in any organization, with roles defined:

How can you effectively secure what you do not fully understand? Effectively securing an organization’s systems and data requires a clear understanding, outside of IT, of information systems’ value and risk. Components of a total information systems picture may include:

An effective communications strategy will strengthen information systems’ alignment between IT, IS, and the business. When an organization raises the level of awareness with the “total information systems picture”, a business process will take hold that facilitates system discussions leading to meaningful system decisions. While there can be many types of system decisions organizations must consider, a few examples may include:

The Plan

A strategy for enabling effective communications will look different from one organization to another. A communications strategy should consider an organization’s unique characteristics, culture, and climate. Activities that can contribute to enabling an effective communications strategy should include:

Planning, execution, and effective communications can produce meaningful results and aid in your information security program being experienced as relevant.

Background

While experts have agreed for decades that passwords are a weak method of authentication, their convenience and low cost have kept them around. Until we stop using passwords or start using multi-factor authentication (for everything), a need for stronger passwords exists. And as long as people create their own passwords that must be memorized, those passwords will remain weak and guessable. This blog/article/rant will cover a brief background of password cracking as well as the justification for SynerComm’s 14-character password recommendation.

First things first: What is a password?

Authentication is the process of verifying the identity of a user or process, and a password is often the only secret “factor” used in that process. For the authentication process to be trusted, it must positively identify the account owner and thwart all other attempts. This is critical, because access and privileges are granted based on the user’s role. Considering how easily passwords can be shared, most have already concluded that passwords are an insufficient means of authenticating people. We must also consider that people must memorize their passwords and that they often need passwords for dozens if not hundreds of systems. Because of this, humans create weak, easily guessed, and often reused passwords.

Password Controls

Over the years, several password controls have emerged to help strengthen password security. These include minimum password length, complexity requirements, reuse prevention, and a recurring requirement to create new passwords. While it is a mathematical fact that longer passwords and a larger key space (more possible characters) do indeed create stronger passwords, we now know that regularly changing one’s password provides no additional security. In fact, forcing users to regularly create new and complex passwords weakens security: it pushes users toward guessable patterns or simply writing passwords down. OK, I will stop here; we’ll save the ridiculousness of password aging for a future blog.
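
To illustrate the length and composition controls described above, here is a minimal sketch of a policy check in Python. The 14-character minimum anticipates the recommendation below, and the banned base words and word+digits pattern are illustrative assumptions rather than a complete policy.

    # Minimal sketch of a length/composition check. The 14-character minimum
    # anticipates the recommendation below; the banned base words and the
    # word+digits pattern are illustrative, not an exhaustive policy.
    import re

    MIN_LENGTH = 14
    BANNED_BASE_WORDS = {"password", "summer", "winter", "welcome"}  # illustrative

    def check_password(candidate):
        """Return a list of policy violations (an empty list means it passes)."""
        problems = []
        if len(candidate) < MIN_LENGTH:
            problems.append(f"shorter than {MIN_LENGTH} characters")
        lowered = candidate.lower()
        if any(word in lowered for word in BANNED_BASE_WORDS):
            problems.append("contains a common base word")
        if re.fullmatch(r"[A-Za-z]+\d{1,4}[!@#$%^&*]?", candidate):
            problems.append("matches a common word+digits pattern")
        return problems

    print(check_password("Summer2018!"))              # fails on all three checks
    print(check_password("coffee mug tuesday rain"))  # passes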

So Why 14 Characters?

So why is 14 characters the ideal or best recommended password length? It is not. It is merely a minimum length; we still prefer to see people using even longer passwords (or doing better than passwords in the first place). SynerComm recommends a 14-character minimum for several reasons. First, 14-character passwords are very difficult to crack. Most passwords containing 9 characters or fewer can be brute-force guessed in under 1 day with a modern password cracking machine. Passwords with 10-12 characters, and even 13-14 characters, can still be easily guessed if they are based on a word and a 4-digit number. (Consider Summer2018! or your child’s name and birthday.) Next, and perhaps more importantly, a 14-character minimum helps prevent bad password habits and promotes good ones. When paired with security awareness training, users can be taught to create and use passphrases instead of passwords. Passphrases can be sentences, combinations of words, etc. that are meaningful and easy to remember. Finally, 14 characters is the largest “Minimum Password Length” currently allowed by Microsoft Windows. While Windows supports very long passwords, it is not simple to enforce a minimum greater than 14 characters (fine-grained password policies/PSOs can be used to raise this on Windows Server 2008 and later, and registry hacks on anything older, but it can be a tedious process and introduces variables into the management and troubleshooting of your environment).
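
Some rough math puts that in perspective: even a lowercase-only 14-character passphrase has 26^14 ≈ 6.5 × 10^19 possible combinations. Assuming roughly 4 × 10^11 NTLM guesses per second (an assumed rate consistent with the cracking times quoted later in this article), exhausting that space would take on the order of five years, versus hours for a full 8-character search.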

The remainder of this article provides facts and evidence to support our recommendations.

Analysis of Password Length

SynerComm collected over 180,000 NTLM password hashes from various breached domain controllers and attempted to crack them using dictionary, brute-force, and cryptanalysis attacks. The chart below shows the password lengths of the more than 93,000 passwords cracked. It is interesting to find passwords that fall drastically below the usual minimum length of eight characters. Although few, it is also worth noting that 20-, 21-, and 22-character passwords (along with one 26-character and one 27-character password) were cracked in these analyses.

Passwords Cracked = 93,706. Total unique entries of those passwords cracked = 68,161

Based on the distribution below, passwords of 9 or fewer characters account for roughly 60% of those cracked; 12 or fewer, about 95%.

Password Length - Number of Cracked Passwords
1 = 3 (0.0%)
2 = 2 (0.0%)
3 = 137 (0.15%)
4 = 27 (0.03%)
5 = 405 (0.43%)
6 = 1527 (1.63%)
7 = 3827 (4.08%)
8 = 26191 (27.95%)
9 = 23677 (25.27%)
10 = 17564 (18.74%)
11 = 9098 (9.71%)
12 = 6267 (6.69%)
13 = 2915 (3.11%)
14 = 1063 (1.13%)
15 = 577 (0.62%)
16 = 276 (0.29%)
17 = 81 (0.09%)
18 = 39 (0.04%)
19 = 13 (0.01%)
20 = 10 (0.01%)
21 = 1 (0.0%)
22 = 4 (0.0%)
23 = 0 (0.0%)
24 = 0 (0.0%)
25 = 0 (0.0%)
26 = 1 (0.0%)
27 = 1 (0.0%)
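
For readers who want to reproduce this kind of analysis on their own (authorized) data, a length distribution like the one above can be computed with a few lines of Python. The sketch below assumes cracked results in a hashcat-style “hash:plaintext” potfile; the file name is hypothetical.

    # Minimal sketch of the length analysis above. Assumes cracked results in a
    # hashcat-style potfile ("hash:plaintext" per line); the file name is
    # hypothetical, and $HEX[...] entries are skipped for simplicity.
    from collections import Counter

    def length_distribution(potfile_path):
        lengths = Counter()
        with open(potfile_path, encoding="utf-8", errors="replace") as f:
            for line in f:
                line = line.rstrip("\n")
                if ":" not in line:
                    continue
                plain = line.split(":", 1)[1]
                if plain.startswith("$HEX["):
                    continue
                lengths[len(plain)] += 1
        return lengths

    if __name__ == "__main__":
        dist = length_distribution("cracked.potfile")  # hypothetical file
        total = sum(dist.values())
        for length in sorted(dist):
            print(f"{length} = {dist[length]} ({dist[length] / total:.2%})")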

Analysis of Password Composition

*Note: The password "acme" was used to replace specific company names. For example, if the password "synercomm123$" had been found in a SynerComm password dump, it would have been replaced with "acme123$". This change applies only to the top 10 password and base word tables. Analyses of length and masks were performed without this change.

Top 10 passwords
Password1 = 543 (0.58%)
Summer2018 = 424 (0.45%)
Summer18 = 395 (0.42%)
acme80 = 368 (0.39%)
Fall2018 = 362 (0.39%)
Good2go = 350 (0.37%)
yoxvq = 345 (0.37%)
Gr8team = 338 (0.36%)
Today#08 = 308 (0.33%)
Spring2018 = 219 (0.23%)

Top 10 base words
password = 1993 (2.13%)
summer = 1663 (1.77%)
acme = 1619 (1.73%)
spring = 734 (0.78%)
fall = 706 (0.75%)
welcome = 652 (0.7%)
winter = 577 (0.62%)
w0rdpass = 562 (0.6%)
good2go = 351 (0.37%)
yoxvq = 345 (0.37%)

Last 4 digits (Top 10)
2018 = 3037 (3.24%)
2017 = 821 (0.88%)
1234 = 733 (0.78%)
2016 = 659 (0.7%)
2015 = 588 (0.63%)
2014 = 561 (0.6%)
2013 = 435 (0.46%)
2012 = 358 (0.38%)
2010 = 296 (0.32%)
2019 = 286 (0.31%)

Masks (Top 10)
?u?l?l?l?l?l?d?d (6315) (8 char)
?u?l?l?l?l?l?d?d?d?d (4473) (10 char)
?u?l?l?l?l?l?l?d?d (4021) (9 char)
?u?l?l?l?d?d?d?d (3328) (8 char)
?u?l?l?l?l?d?d?d?d (2985) (9 char)
?u?l?l?l?l?l?l?l?d?d (2742) (10 char)
?u?l?l?l?l?l?l?d (2601) (8 char)
?u?l?l?l?l?l?l?l?d (2371) (9 char)
?u?l?l?l?l?l?l?d?d?d?d (1794) (11 char)
?u?d?d?d?d?d?d?d?d (1756) (9 char)
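
The masks above use hashcat-style placeholders: ?u is an uppercase letter (26 possibilities), ?l a lowercase letter (26), and ?d a digit (10), so a mask’s keyspace is simply the product of its placeholders’ sizes. Here is a minimal sketch of that calculation; the guess rate used for the time estimate is an assumption, roughly in line with the timings discussed in the next section.

    # Minimal sketch: compute a hashcat-style mask's keyspace and estimate the
    # time to exhaust it. The 4e11 guesses-per-second rate is an assumption,
    # roughly in line with the NTLM cracking times discussed in the next section.
    CHARSET_SIZES = {"u": 26, "l": 26, "d": 10, "s": 33, "a": 95}
    ASSUMED_GUESSES_PER_SECOND = 4e11

    def mask_keyspace(mask):
        keyspace = 1
        for token in mask.split("?")[1:]:  # "?u?l?d" -> ["u", "l", "d"]
            keyspace *= CHARSET_SIZES[token]
        return keyspace

    for mask in ("?u?l?l?l?l?l?d?d", "?u?l?l?l?l?l?d?d?d?d"):
        space = mask_keyspace(mask)
        seconds = space / ASSUMED_GUESSES_PER_SECOND
        print(f"{mask}: {space:,} candidates (~{seconds:,.1f} s at the assumed rate)")

Run against the top two masks above, this shows why mask attacks succeed so quickly: tens of billions to a few trillion candidates amount to seconds of work for a GPU cracking rig.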

Password Hash Cracking Speeds

When performing our own password cracking, SynerComm uses a modern password cracker built with 8 powerful GPUs (https://www.synercomm.com/blog/how-to-build-a-2nd-8-gpu-password-cracker/). Typically used by gamers to create realistic three-dimensional worlds, these graphics cards are remarkably efficient at performing the mathematical calculations required to defeat password hashing algorithms. The first screenshot below shows a brute-force guess of an 8-character password. It shows that most 8-character passwords will crack in 4.5 hours or less. While the same attack against a 9-character password could take up to 18 days to complete, we can reduce the key space (possible characters used in passwords) and complete 10-11 character attacks in just 1-2 days or less. The second screenshot shows an optimized character set mask attack against 11-character passwords. This attack completes in less than 8 hours and returns many poorly selected 11-character passwords.
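
To show where those numbers come from: a full search of 8-character passwords over all 95 printable ASCII characters is 95^8 ≈ 6.6 × 10^15 guesses, which at an assumed rate of roughly 4 × 10^11 NTLM guesses per second finishes in about 4.6 hours. Adding one character multiplies the work by 95, giving 95^9 ≈ 6.3 × 10^17 guesses, or roughly 18 days; the rate is an assumption chosen to be consistent with the timings above.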

Below is an optimized crack attempt for 11-character passwords using only common characters and format (e.g., beginning with an upper case letter or number):

Password Best Practices

  1. Do Not Share Your Password with Anyone!
  2. Do Not Store Passwords in Spreadsheets, Documents, or Email! Also avoid storing passwords in your browser (IE, Firefox, Chrome).
  3. Create passphrases instead of passwords. Long passwords are always stronger than short passwords. Passwords shorter than 10 characters can be easily and quickly cracked if their hashes become available to the attacker. SynerComm recommends enforcing at least a 12-character minimum for standard user accounts but suggests using a 14-character minimum to promote good password creation methods. Privileged accounts such as domain administrators should have even longer passwords.
  4. While password complexity is less critical with long (>=14 char) passwords, it still helps ensure a larger key space. Encourage users to use less common characters such as spaces, commas, and any other special character found on the keyboard. (Spaces can make an enormous difference!)
  5. Never reuse the same password on multiple accounts. While it is easier to remember 1 password than 100, our next best practice provides a solution to that problem too. Password dumps from past breaches are a great starting place for guessing a user’s password.
  6. Use a password safe. Modern password managers can sync stored passwords between computers and mobile devices. By using a safe, most users only need to remember 2-3 passwords and the rest can be stored securely in a safe.
    1. When using a safe, it is best practice to allow the application to generate most passwords. This way you can create 15-20 character, completely random passwords that you never need to know or memorize (see the short example below).
  7. Implement multi-factor authentication whenever possible. Passwords will always be a weak and vulnerable form of authentication. Using multi-factor greatly reduces the chances of a successful authentication attack. Multi-factor authentication should be used for ALL (no exceptions) remote access and should increasingly be considered for ALL privileged account access.

*For shared accounts (root, admin, etc.), restrict the number of people who have access to the password. Change these passwords anytime someone who could know the password leaves the organization.
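
Expanding on item 6 above, here is a minimal sketch of letting software generate a long random password using Python’s standard secrets module. The 20-character length and character set are illustrative choices, not a SynerComm standard.

    # Minimal sketch of letting software generate a long random password, in the
    # spirit of item 6 above. The 20-character length and character set are
    # illustrative choices.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

    def generate_password(length=20):
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())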


~Brian Judd (@njoyzrd) with password analysis by Chad Finkenbiner
