In penetration testing, it’s important to have an accurate scope and even more important to stick to it. This can be simple when the scope is limited to a company’s internet service provider (ISP) or ARIN-provided IP ranges. But in many cases, our client’s public systems have grown to include multiple cloud-hosted servers, applications, and services. It may seem obvious to say that anything owned or managed by the company should be in-scope for testing, but how do we know what is “owned or managed”? Ideally, we’d test everything that creates risk to an organization, but that isn’t always possible… read on.

I led this article by stating that an accurate scope is critical to penetration testing. If the scope only includes the IP blocks provided by your ISP, you’re probably missing systems that should be tested. Alternatively, pentesting a system that you don’t have permission to test could land you in hot water. The good news is that hosting providers like Amazon Web Services (AWS) and Azure allow penetration testing of systems within your own account. In other words, because you manage them, you have the right to pentest them. In these environments, pentesting your individual servers (or services) does not affect “neighboring” systems or the cloud host’s infrastructure.

In addition to the many compute and storage providers, you may also have websites and applications that are hosted and managed by a third party. These still create risk to your company, but the hosting provider has complete control over who has permission to perform testing. When there is custom code or sensitive data at play, you should seek (written) permission to pentest/assess these systems and applications. If the host is unable or unwilling to allow testing, they should provide evidence of their own independent testing.

There are also going to be cloud systems that, despite creating risk to your organization, can’t be tested at all. This includes software as a service (SaaS) applications like Salesforce, SAP, and DocuSign.

And you guessed it… there are also systems like Azure AD, Microsoft 365, and CloudFlare that are not explicitly in-scope, but their controls may not be avoidable during external pentests. MS 365 uses Azure AD, which is basically a public extension of your on-premises (internal) Active Directory, complete with extremely high-performance authentication services. Most authentication attacks today take place directly against Azure AD due to its performance and public accessibility. In other words, an attacker could have your passwords before they ever touch a system on your network. Likewise, if your company uses CloudFlare to protect your websites and web applications, it inherently becomes part of the scope because testing of those apps should force you through their proxy/control.

Hopefully this information will help you plan for your next pentest or assessment. If your company maintains an accurate inventory of external systems that includes all of your data center and cloud systems, you’re already off to a great start. Still, there is always value in doing regular searches and discoveries for systems you may be missing. One method involves reviewing your external DNS to obtain a list of A and CNAME records for your domains. (For ALL of your domains…) By resolving all of your domains and subdomains, you can easily come up with a pretty large list of IP addresses that are in some way tied to your company. Now all you need to do is look up each IP to see what it’s hosting and who owns it. Easy, right?
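As a sketch of that discovery step, the loop below resolves a list of names and collects the unique addresses. It uses only the Python standard library; the hostnames shown are placeholders — feed it the A and CNAME records exported from ALL of your domains.

```python
import socket

def resolve_hosts(hostnames):
    """Map each hostname to the IPv4 addresses it resolves to (empty if none)."""
    inventory = {}
    for name in hostnames:
        try:
            _, _, addrs = socket.gethostbyname_ex(name)
            inventory[name] = sorted(set(addrs))
        except socket.gaierror:
            inventory[name] = []  # a stale record -- worth investigating too
    return inventory

# Placeholder names; replace with your exported DNS records.
inventory = resolve_hosts(["www.example.com", "vpn.example.com"])
unique_ips = sorted({ip for addrs in inventory.values() for ip in addrs})
```

The resulting `unique_ips` list is exactly the input you would hand to a bulk whois/geoip tool in the next step.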

If you don’t already have a tool for looking up bulk lists of IP addresses or you prefer not to paste a list of your company’s IP addresses into someone else’s website, we’ve got a solution. Whodat.py was written to take very large lists of IP addresses and perform a series of whois and geoip lookups. If the IP address is owned by Amazon or Microsoft, additional details on the service or data center get added based the host’s online documentation. This tool was designed for regular use by our penetration testers, but its concepts and capabilities are a core functionality of our CASM Engine™ and our suite of Continuous Attack Surface Management and Continuous Penetration Testing subscriptions.

SynerComm partnered with ChannelBytes to present a 60-minute session discussing what it means to do quality, modern penetration testing in 2020.

Penetration testing is a core part of the networking security toolset, but few people outside of industry specialists understand what penetration testing is, when to make use of it, and most importantly, what to do with the information it provides. This 60-minute session will answer those questions, dispel pentesting myths, and outline clear use cases.

We will be chatting live, fielding your questions, and doing our best to jam as much pen testing value into an hour as possible.


Video created by channelbytes.com

While participating in Black Hat USA 2020, we sat down with Dark Reading, where our own Brian Judd, VP of Information Assurance, discussed how we are innovating and evolving penetration testing.

See more at www.darkreading.com

What is a Pwnagotchi?

From the Website:

Pwnagotchi is an A2C-based “AI” powered by bettercap and running on a Raspberry Pi Zero W that learns from its surrounding WiFi environment in order to maximize the crackable WPA key material it captures (either through passive sniffing or by performing deauthentication and association attacks). This material is collected on disk as PCAP files containing any form of handshake supported by hashcat, including full and half WPA handshakes as well as PMKIDs.

Sound Familiar?

In case you're curious about the name: Pwnagotchi (ポーナゴッチ) is a portmanteau of pwn and -gotchi. It is a nostalgic reference made in homage to a very popular children's toy from the 1990s called the Tamagotchi. The Tamagotchi (たまごっち, derived from tamago (たまご) "egg" + uotchi (ウオッチ) "watch") is a cultural touchstone for many Millennial hackers as a formative electronic toy from our collective childhoods.












Flashing an Image (https://pwnagotchi.ai/installation)


The easiest way to create a new Pwnagotchi is to download the latest stable image from the project’s release page and write it to your SD card.

Download the latest Pwnagotchi release

Once you have downloaded the latest Pwnagotchi image, you will need to use an image writing tool to install that image on your SD card. We recommend using balenaEtcher, a graphical SD card writing tool that works on macOS, Linux, and Windows; it is the easiest option for most users. (balenaEtcher also supports writing images directly from the ZIP file, without any unzipping required!)

To write your Pwnagotchi image with balenaEtcher:

- Download the latest Pwnagotchi .img file.

- Verify the SHA-256 checksum of the .img file.

- Download balenaEtcher and install it.

- Connect an SD card reader with the SD card inside.

- Open balenaEtcher and select from your hard drive the Raspberry Pi .img or .zip file you wish to write to the SD card.

- Select the SD card you wish to write your image to.

- Review your selections, then click Flash! to begin writing data to the SD card.

Connect Your Micro-USB Cable to the Data Port and Wait for the Pwnagotchi to Boot




Configure Your Newly Found Ethernet Adapter




Connect to the Terminal via Putty




Words of Caution

Example Config

Edit the config located at /etc/pwnagotchi/config.yml, restart, and you should be good to go.

# Add your configuration overrides on this file any configuration changes done to default.yml will be lost!
# Example:
# ui:
#   display:
#     type: 'inkyphat'
#     color: 'black'
main:
  plugins:
    grid:
      enabled: false
      report: false
      exclude:
        - '<YOURNETWORK>'
ui:
  display:
    enabled: true
    type: 'waveshare_2'
    color: 'black'
  web:
    username: pi
    password: <YOURPASSWORD>

Anatomy of a Pwnagotchi Screen (https://pwnagotchi.ai/usage)




Completed Build











- @TheL0singEdge

Bridging the Gap Between Point-in-Time Penetration Tests 

“So, let’s say we fix all of the vulnerabilities that the pentest discovers… How do we know tomorrow that we’re not vulnerable to something new?”


Having been part of the penetration testing industry for over 15 years, I’ve been challenged by many clients with this very question. The fact is, they’re right: a penetration test is a point-in-time assessment, and new vulnerabilities are discovered every day. We hope that our patch and vulnerability management processes, along with our defensive controls (firewalls, etc.), keep our systems secure. Over the past 5 years, we’ve seen a rise in the number of clients moving toward quarterly penetration testing and recognizing the value of rotating through different penetration testers.

In 2017, SynerComm’s penetration testers decided to put their heads together to develop an even better solution. (Honestly, one of our top guys had been nudging me for two years with an idea already…) We agreed that nothing replaces the need for regular human-led penetration testing. As of today, no amount of automation or AI can come close to replicating the intuition and capabilities of an actual penetration tester. That said, if we can be confident that nothing (ok, very little) has changed since the last penetration test, we can be significantly more confident that new vulnerabilities are not present. Building on this idea, the continuous pentest was born.

Continuous pentesting combines the best of both worlds by using automation to continually monitor for changes, and human pentesters to react to those changes quickly. Computers are great at monitoring IP addresses, services, websites, and DNS. They can also monitor breaches and data dumps for names, email addresses, and passwords. What makes continuous pentesting successful is taking action based on changes and using orchestration to determine if additional scans should be run and if a pentester should be alerted.
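The change-detection core of that orchestration is simple set arithmetic. Here is a minimal sketch (the data shapes are illustrative assumptions, not our CASM Engine): each snapshot maps a host to the set of exposed (port, service) pairs it showed on a given day, and the diff between snapshots becomes the alert queue.

```python
def diff_snapshots(previous, current):
    """Return (event, host, detail) alerts for exposures that appeared or vanished."""
    alerts = []
    for host in sorted(set(previous) | set(current)):
        added = current.get(host, set()) - previous.get(host, set())
        removed = previous.get(host, set()) - current.get(host, set())
        alerts += [("new-exposure", host, item) for item in sorted(added)]
        alerts += [("removed", host, item) for item in sorted(removed)]
    return alerts

yesterday = {"vpn.example.com": {(443, "https")}}
today = {"vpn.example.com": {(443, "https"), (22, "ssh")}}
# A new SSH service appearing on the VPN host is exactly the kind of change
# that should trigger follow-up scans and page a pentester.
```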

There is no replacement for the validation provided by a thorough, skilled, and human-led penetration test. External and internal pentests with social engineering demonstrate precisely how a determined and skilled intruder could breach your company’s systems and data. Continuous Penetration Testing focuses on public systems and online exposures and should always follow a full, human-led, external penetration test. Partner with SynerComm and we’ll keep an eye on your perimeter security year-round.

Palo Alto Networks firewalls have the ability to create security policies and generate logs based on users and groups, and not just IP addresses.  This functionality is called User-ID.

User-ID™ enables you to map IP addresses to users on your network using a variety of techniques. The methods include using agents, monitoring domain controller event logs, monitoring terminal servers, monitoring non-AD authentication servers and syslog servers, and even through captive portals (that prompt the user for login). In addition to its use in policies, logging access and threats by user can be invaluable in incident response and forensics. To take full advantage of this feature, it is ideal to map as many IP addresses to users as possible.
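Conceptually, every one of those collection methods feeds the same structure: a table of the freshest user seen at each IP, with stale entries aged out. A toy model of that fold (field names and the timeout are illustrative, not PAN-OS internals):

```python
from datetime import datetime, timedelta

def fold_mappings(events, now, ttl=timedelta(minutes=45)):
    """Reduce (timestamp, ip, user) logon events to the latest fresh user per IP."""
    mapping = {}
    for ts, ip, user in sorted(events):
        if now - ts <= ttl:
            mapping[ip] = user  # later events overwrite earlier ones
    return mapping
```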

With all these great methods to map users to IP addresses, we often miss many systems. They include non-domain joined systems, Linux/Unix systems that don’t centrally authenticate, and potentially many other devices (phones, cameras, etc.). Palo Alto has yet another feature for mapping users, but one that comes with great risk.

To identify mappings for IP addresses that the agent didn’t map, the firewall can probe and interrogate devices. The intention is to only probe systems connected to trusted internal zones, but a misconfigured zone could even allow sending probes out to the internet. Setting that misconfiguration aside, client probing is still a significant security risk. By default, Palo Alto agents send out a request every 20 minutes to all IP addresses that were recently logged but not mapped to a user. It does this assuming that the IP belongs to a Windows system, and it uses a WMI probe to log into the unmapped system.

SynerComm believes that a large number of PAN customers have enabled WMI and/or NetBIOS Client Probing within the User-ID settings.  Our AssureIT penetration testing team is regularly detecting this on internal pentests. SynerComm recommends disabling Client Probing in the User-ID Agent setup due to the risk. 

The Vulnerability:

Many networking and network security devices use Microsoft WMI probing to interrogate Windows hosts for things like collecting user information. For authentication purposes, a WMI probe contains the username and hashed password of the service account being used. When a domain account is used, an NTLMv1 or NTLMv2 authentication process takes place. It has come to our attention that our pentesters are finding Palo Alto firewalls that are using insecure User-ID methods; specifically, those that are using WMI and NetBIOS probes to attempt user identification. This allows an attacker to obtain the service account’s username, domain name, and password hash (more precisely, the hashed challenge/nonce). Because the service account requires privileges, this becomes a serious security exposure that could be easily abused.

An October 30, 2019 Palo Alto advisory, “Best Practices for Securing User-ID Deployments,” recommends ensuring that User-ID is only enabled on internal/trust zones and applying the principle of least privilege for the service account. Again though, SynerComm recommends also disabling WMI probing completely.

The Attack….

(By: Brian Judd, VP Information Assurance)

In a perfect world, we could trust that every device on our internal network is owned, managed, and monitored by our company and our staff. That includes having full trust that no systems are already compromised, and that no intruder or insider could place a rogue device on our network. Because this is rarely, if ever the case, it’s a stretch to think that it’s safe to share valid domain credentials with any device connected to an internal network.

Using well-known penetration testing tools like responder.py, it is trivial to set up an SMB server that can listen for and respond to NTLM authentication requests. When good OPSEC isn’t a factor, responder.py also includes the ability to respond to LLMNR and NetBIOS broadcasts to poison other local systems into authenticating to its listening SMB server. It then stores the username and the hashed challenge (nonce) from the authentication messages. Depending on the strength of the password, these captured “hashes” could be cracked and the account could be used to log into other systems.

While all of that sounds scary, it isn’t the concern of this article. If configured to use “Client Probing”, Palo Alto firewalls and their User-ID agents make WMI and NetBIOS connections to map unknown IP addresses to their logged-in user. Also, because WMI is IP-based, it’s possible to probe any reachable (routable) network/system. To be effective, User-ID almost always uses a domain service account so that it can access any domain member system. An attacker with the ability to run responder.py on an internal network is likely to receive authentication requests from Palo Alto User-ID agents without any need for noisy poisoning attacks. By default, the agent probes every 20 minutes and anytime a new log is written to the firewall without user identification.

OK, let’s make this a bit worse… What if we didn’t need to crack the service account’s password? What if we could just relay the agent’s authentication request to another system and trick it into authenticating the attacker instead? Again, this is trivial and easy using well-known tools like ntlmrelayx.py or MultiRelay.py. Even worse, these tools are not exploits, this is how NTLM authentication was designed to work. If the relayed account’s privilege is sufficient, ntlmrelayx.py will even dump the system’s stored hashes from the SAM database, or execute shell code.

Oh, remember earlier when I mentioned that Palo Alto’s agent probes anytime a new log is written by an unmapped IP address? Using this “feature”, we can script something as simple as a DNS lookup or wget request to generate access logs on the firewall and trigger a User-ID authentication request. With a little time, these logins could be relayed to log into every other system on the network. Considering that older Palo Alto documentation was vague with regards to the necessary service account privileges, it is common to find them as members of highly privileged groups including Domain Administrators. To an attacker, this could be game, set, match in just a few minutes.

SynerComm Recommends:

  1. Disable (Do NOT Enable) Client Probing within Palo Alto’s User-ID Agent configuration
    1. Didn’t you read the title of this article, “Stop Sharing Your Password with Everyone”
  2. Configure the User-ID service account with the minimum required privileges, NO domain admin!
    1. Palo Alto Manual: Create a Dedicated Service Account for the User-ID Agent
  3. Ensure that User-ID is only enabled on trusted internal zones and further restrict it to the specific source IP addresses of its agents
  4. Set a very strong random password for the User-ID agent’s service account
    1. 20 characters is usually sufficient, but it’s always ok to make it longer!
  5. Enable SMB Signing on Windows workstations and servers
    1. While not specific to Palo Alto, this control prevents most NTLM relaying attacks
  6. Disable LLMNR and NetBIOS in the local security settings of Windows operating systems
    1. Again, nothing to do with Palo Alto, but this prevents LLMNR and NetBIOS poisoning attacks
  7. Disable NTLMv1 and NTLMv2 authentication
    1. Kerberos can replace NTLM, but it’s not backwards compatible with older operating systems.  Be sure to research and test first, and always have a backout plan.
  8. Perform annual internal and external penetration tests to uncover hidden weaknesses that leave your networks and systems vulnerable to attack
    1. https://www.synercomm.com/cybersecurity/penetration-testing/

Palo Alto User-ID Security Best Practices

Specify included and excluded networks when configuring User-ID

The include and exclude lists available on the User-ID Agent, as well as agentless User-ID, can be used to limit the scope of User-ID. Typically, administrators are only concerned with the portion of IP address space used in their organization. By explicitly specifying networks, or preferably host addresses (/32), to be included with or excluded from User-ID, we can help ensure that only trusted and company-owned assets are probed, and that no unwanted mappings will be created unexpectedly.
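The include/exclude evaluation amounts to the check below — a sketch using Python’s ipaddress module, not Palo Alto’s actual implementation: an address is eligible for mapping only if it falls inside an included network and outside every excluded one.

```python
import ipaddress

def userid_in_scope(ip, include, exclude):
    """Return True if `ip` is in an included network and in no excluded network."""
    addr = ipaddress.ip_address(ip)
    if any(addr in ipaddress.ip_network(net) for net in exclude):
        return False
    return any(addr in ipaddress.ip_network(net) for net in include)

# Only the corporate /8 is in scope; a guest /16 is carved out of it.
assert userid_in_scope("10.1.2.3", ["10.0.0.0/8"], ["10.99.0.0/16"])
assert not userid_in_scope("10.99.1.1", ["10.0.0.0/8"], ["10.99.0.0/16"])
assert not userid_in_scope("8.8.8.8", ["10.0.0.0/8"], ["10.99.0.0/16"])
```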

Disable WMI and NetBIOS Client Probing

WMI, or Windows Management Instrumentation, is a mechanism that can be used to actively probe managed Windows systems to learn IP-user mappings.  Because WMI probing trusts data reported back from the endpoint, it is not a recommended method of obtaining User-ID information in a high security network.  In environments containing relatively static IP-user mappings, such as those found in common office environments with fixed workstations, active WMI probing is not needed.  Roaming and other mobile clients can be easily identified even when moving between addresses by integrating User-ID using Syslog or the XML API and can capture IP-user mappings from platforms other than Windows as well.  

On sensitive and high security networks, WMI probing increases the overall attack surface, and administrators are recommended to disable WMI probing and instead rely upon User-ID mappings obtained from more isolated and trusted sources, such as domain controllers.

If you are using the User-ID Agent to parse AD security event logs, syslog messages, or the XML API to obtain User-ID mappings, then WMI probing should be disabled.  Captive portal can be used as a fallback mechanism to re-authenticate users where security event log data may be stale.

Use a dedicated service account for User-ID services with the minimal permissions necessary

User-ID deployments can be hardened by only including the minimum set of permissions necessary for the service to function properly.  This includes DCOM Users, Event Log Readers, and Server Operators.  If the User-ID service account were to be compromised by an attacker, having administrative and other unnecessary privileges creates significant risk.  Domain Admin and Enterprise Admin privileges are not required to read security event logs and consequently should not be granted.

Detailed process to create dedicated secure Windows service account

When you use a non-domain-admin account for the User-ID Agent, additional steps are needed on the server.

Deny interactive logon for the User-ID service account

While the User-ID service account does require certain permissions in order to read and parse Active Directory security event logs, it does not require the ability to log on to servers or domain systems interactively.  This privilege can be restricted using Group Policies, or by using a Managed Service Account with User-ID (See Microsoft TechNet for more information on configuring Group Policies and Managed Service Accounts.)  If the User-ID service account were to be compromised by a malicious user, the impact could be reduced by denying interactive logons.

Deny remote access for the User-ID service account

Typically, service accounts should not be members of any security groups that are used to grant remote access. If the User-ID service account credentials were to be compromised, this would prevent the attacker from using the account to gain access to your network from the outside using a VPN. 

Configure egress filtering

Prevent any unwanted traffic (including potentially unwanted User-ID Agent traffic) from leaving your protected networks out to the internet by implementing egress filtering on perimeter firewalls.

Additional Information

For more information on setting up and configuring User-ID see the following:

User-ID, PAN-OS Administrator's Guide
https://docs.paloaltonetworks.com/pan-os/9-0/pan-os-admin/user-id/user-id-concepts/user-mapping/client-probing.html
Getting Started: User-ID
Create User Groups for Access to Whitelist Applications, Internet Gateway Best Practice Security Policy
User-ID Resource List on Configuring and Troubleshooting

One of the greatest, yet seemingly unknown, dangers that face any cloud-based application is the deadly combination of an SSRF vulnerability and the AWS Metadata endpoint. As this write-up from Brian Krebs explains, the breach at Capital One was caused by an SSRF vulnerability that was able to reach the AWS Metadata endpoint and extract the temporary security credentials associated with the EC2 instance's IAM role. These credentials enabled the attacker to access other Capital One assets in the cloud, and the result was that over 100 million credit card applications were compromised.

The purpose of this blog post is to explain the technical details of such a vulnerability and give some helpful suggestions for avoiding a similar situation in any organization.

The Vulnerabilities

In order to fully understand the impact of this cloud one-two punch, it is necessary to break the attack chain down into its two components: SSRF and the AWS Metadata endpoint. First, Server-Side Request Forgery (SSRF) is a vulnerability that allows an attacker to control the destination address of an HTTP request sent from the vulnerable server. While this is not always the case (see Blind SSRF), the attacker can often see the response from the request as well. This allows the attacker to use the vulnerable server as a proxy for HTTP requests, which can result in the exposure of sensitive subnets and services.

Consider the following PHP code:

echo file_get_contents("http://".$_GET['hostname']."/configureIntegration.php");

The code above sends an HTTP request to the hostname specified by the attacker in the "hostname" GET parameter. Logic like this is commonly found in the "Integrations" section of applications, and it is vulnerable to SSRF. Consider the following scenario: a sensitive service is running on the loopback interface of the vulnerable server.

The PHP code above is hosted on the web server that faces the internet. When an attacker discovers this endpoint, they might use the following to grab the data from the internal application:

curl "http://vulnerableserver.com/ssrf.php?hostname=localhost:8081/secret.html?"

Which would result in a hit on the internal HTTP server:

┬─[user@host:/t/secret]─[02:29:48 PM]
╰─>$ python3 -m http.server 8081 --bind 127.0.0.1
Serving HTTP on 127.0.0.1 port 8081 (http://127.0.0.1:8081/) ...
127.0.0.1 - - [15/Aug/2019 14:30:56] "GET /secret.html?/configureIntegration.php HTTP/1.0" 200 -

and return the following to the attacker:

This is only available on a loopback interface

Now that the danger of SSRF is clear, let's look at how this vulnerability may be exploited in the context of the cloud (AWS in particular).

Due to the dynamic nature of the cloud, it became necessary that server instances (EC2, for example) have some way to get basic information about their configuration for the purpose of orienting themselves to the environment in which they were spun up. Out of this need the AWS Metadata endpoint was born. This endpoint (http://169.254.169.254/), when hit from any EC2 instance, will reveal information about the configuration of the particular EC2 instance. There is quite a lot of information available via this endpoint, including: hostname, external IP address, metrics, LAN information, security groups, and last but not least, the IAM (Identity and Access Management) credentials associated with the EC2 instance. It is possible to retrieve these security credentials by hitting the following URL, where [ROLE] is the IAM role name:

user@host:~$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/[ROLE]
{
  "Code" : "Success",
  "LastUpdated" : "2019-08-15T18:13:44Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAN0P3n0W4y1nv4L1d",
  "SecretAccessKey" : "A5tGuw2QXjmqu8cTEu1zs0Dw8yt905HDCzrF0AdE",
  "Token" : "AgoJb3JpZ2luX2VjEJv//////////wEaCXVzLWVhc3QtMSJHMEUCIEX46oh4kz6AtBiTfvoHGqfVuHJI29ryAZy/wXyR51SAiEA04Pyw9HSwSIRNx6vmYpqm7sD+DkLQiFzajuwI2aLEp4q8gMIMxABGgwzNjY4OTY1NTU5NDkiDOBEJDdUKxKUkgkhGyrPA7u8oSds5hcIM0EeoHvgxvCX/ChiDsuCEFO1ctMpOgaQuunsvKLzuaTp/86V96iZzuoPLnpHHsmIUTrCcwwGqFzyaqvJpsFWdv89YIhARAMlcQ1Cc9Cs4pTBSYc/BvbEFb1z0xWqPlBNVKLMzm2K5409f/KCK/eJsxp530Zt7a1MEBp/rvceyiA5gg+6hOu65Um+4BNT+CjlEk3gwL6JUUWr9a2LKYxmyR4fc7XtLD2zB0jwdnG+EPv7aDPj7EoWMUoR/dOQav/oSHi7bl6+kT+koKzwhU/Q286qsk0kXMfG/U95TdUr70I3b/L/dhyaudpLENSU7uvPFi8qGVGpnCuZCvGL2JVSnzf8327jyuiTF7GvXlvUTh8bjxnZ8pAhqyyuxEW1tosL2NuqRHmlCCfnE3wLXJ0yBUr7uxMHTfL1gueEWghymIGhAxiYIKA9PPiHCDrn4gl5AGmLyzqxenZgcNnuwMjeTnhQ0mVf7L8PR4ZWRo9h3C1wMbYnYNi5rdfQcByMIN/XoR2J74sBPor/aObMMHVnmpNjbtRgKh0Vpi48VgXhXfuCAHka3rbYeOBYC8z8nUWYJKuxv3Nj0cQxXDnYT6LPPXmtHgZaBSUwxMHW6gU6tAHi8OEjskLZG81wLq1DiLbdPJilNrv5RPn3bBF+QkkB+URAQ8NBZA/z8mNnDfvESS44fMGFsfTIvIdANcihZQLo6VYvECV8Vw/QaLP/GbljKPwztRC5HSPe6WrC06LZS9yeTpVGZ6jFIn1O/01hJOgEwsK7+DDwcXtE5qtOynmOJiY/iUjcz79LWh184My58ueCNxJuzIM9Tbn0sH3l1eBxECTihDNbL13v5g+8ENaih+f3rNU=",
  "Expiration" : "2019-08-16T00:33:31Z"
}
The response contains several things: the AccessKeyId, SecretAccessKey, and the Token for this account. Using these credentials, an attacker can log in to AWS and compromise the server and potentially many other assets. In the case of the Capital One breach, these credentials were used to access an S3 bucket which contained millions of records of user information.

In summary, the poor implementation of the metadata service in AWS allows an attacker to easily escalate an SSRF vulnerability into control of many different cloud assets. Other cloud providers like Google Cloud and Microsoft Azure also provide a metadata service endpoint, but requests to those endpoints require a special header. This prevents most SSRF vulnerabilities from accessing the sensitive data there.

How to prevent such a vulnerability

In order to prevent this type of vulnerability from occurring firewall rules will need to be put in place to block off the metadata endpoint. This can be done using the following iptables rule:

sudo iptables -A OUTPUT -d 169.254.169.254 -j DROP

This will prevent any access to this IP address. However, if access to the metadata endpoint is required, it is also possible to exclude certain users from this rule. For example, the iptables rule below would allow only the root user to access the metadata endpoint:

sudo iptables -A OUTPUT -m owner ! --uid-owner root -d 169.254.169.254 -j DROP

These blocks MUST be done at the network level - not the application level. There are too many ways to access this IP address. For example, all of these addresses below refer to the metadata service:

http://169.254.169.254/ Standard dotted decimal
http://425.510.425.510/ Dotted decimal with overflow
http://2852039166/ Dotless decimal
http://7147006462/ Dotless decimal with overflow
http://0xA9.0xFE.0xA9.0xFE/ Dotted hexadecimal
http://0xA9FEA9FE/ Dotless hexadecimal
http://0x41414141A9FEA9FE/ Dotless hexadecimal with overflow
http://0251.0376.0251.0376/ Dotted octal
http://0251.00376.000251.0000376/ Dotted octal with padding
A DNS name that resolves, or rebinds after a first lookup, to 169.254.169.254 (DNS rebinding)

And there are many more. The only reliable way to address this issue is through a network level block of this IP.
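You can verify that the dotless forms all decode to the metadata address with a few lines of Python (standard library only); the 32-bit mask models how the overflowed values wrap:

```python
import socket
import struct

AWS_METADATA = "169.254.169.254"

def from_dotless(value):
    """Decode a dotless host (decimal or hex integer), masking 32-bit overflow."""
    return socket.inet_ntoa(struct.pack("!I", value & 0xFFFFFFFF))

assert from_dotless(2852039166) == AWS_METADATA          # dotless decimal
assert from_dotless(7147006462) == AWS_METADATA          # decimal with overflow
assert from_dotless(0xA9FEA9FE) == AWS_METADATA          # dotless hexadecimal
assert from_dotless(0x41414141A9FEA9FE) == AWS_METADATA  # hex with overflow
# Per-octet overflow works the same way: 425 % 256 == 169, 510 % 256 == 254.
assert [425 % 256, 510 % 256] == [169, 254]
```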

The easiest way to check the IAM roles associated with each EC2 instance is to navigate to the EC2 Dashboard in AWS and add the column "IAM Instance Profile Name" by clicking the gear in the top right hand corner. Once the IAM role for each EC2 instance is easily visible, it is possible to check these roles to see if they are overly permissive for what is required of that EC2 instance.

It is also imperative to understand the pivoting potential of these IAM Roles. If it is possible that an SSRF, XXE, or RCE vulnerability was exploited on any cloud system, the logs for the IAM Role associated with this instance must be thoroughly audited for malicious intent.

Coming from someone who can officially say that information security has given me a few gray hairs, I'm writing this article from the perspective of someone who's been around the block. With over 15 years in information security, I feel like I've seen it all. And while I can't claim to be a great penetration tester myself, I can say that I work with (and have worked with) some truly talented pentesters. I can also feel confident stating that I've read more pentest reports than most.

So, having this background… I get asked by businesses and defenders all the time, "What advice would you give?" and, "What lessons can be learned?"

Well, thanks for asking…. (insert deep breath here)

1. P@ssw0rds are still w3$k!

In fact, we've known that passwords are a weak form of authentication since the moment the first password-based authentication system was created. Passwords can be weak for several compounding reasons. Whether it be due to their limited length and complexity (keyspace) or the fact that they can be shared, guessed, written down, or reused, let's face it, they provide almost no security. Until we stop using passwords or ensure that every last account has a strong and unique password that can't be guessed or cracked, we accept significant risk.
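The keyspace point is easy to quantify. The back-of-the-envelope arithmetic below (assuming a uniformly random password) shows why length matters far more than complexity rules:

```python
def keyspace(charset_size, length):
    """Number of candidate passwords an attacker must consider."""
    return charset_size ** length

# 8 lowercase letters vs. a 12-character passphrase over the same alphabet:
assert keyspace(26, 8) == 208_827_064_576   # ~2.1e11, feasible offline cracking
assert keyspace(26, 12) > 9.5e16            # roughly 450,000 times larger
```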

2. Multifactor authentication (MFA) is not enabled or required for all remote access

While it is almost commonplace now to find MFA on VPNs, we still find roles, groups, and even URLs allowing MFA to be bypassed. Further, other types of remote access like Citrix, Remote Desktop, Outlook Web Access, and SSH are more often overlooked. Remember that when passwords are weak (and they probably are), attackers will be quick to take advantage when MFA is not enforced.

3. Two wrongs don't make a right

Your mom said it, and now I will too. In SynerComm's pentest reports, we consider both #1 and #2 to be high-severity findings. When combined, they result in a critical weakness. Password spraying allows an attacker to easily guess common passwords (think Summer19) and gain immediate access to email and internal networks.
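To see why spraying is so effective, consider how few guesses the classic season-plus-year patterns require. A minimal sketch (the exact variants an attacker would try are our assumptions):

```python
# Enumerate the classic "Season + year" candidates an attacker sprays first.
# The season names, year formats, and suffixes below are illustrative.
seasons = ["Spring", "Summer", "Fall", "Winter"]
years = ["2018", "2019", "18", "19"]
suffixes = ["", "!"]

candidates = [s + y + x for s in seasons for y in years for x in suffixes]
print(len(candidates))   # 4 seasons * 4 year formats * 2 suffixes = 32 guesses
print(candidates[:3])
```

Thirty-two guesses, tried once or twice per account to stay under lockout thresholds, is routinely enough to compromise at least one account in a mid-sized organization.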

4. Vulnerability scanners provide a false sense of security

Don't get me wrong, get your EternalBlue and Heartbleed patched, but don't think just because you're well patched that you are secure. Vulnerability scanning is important, but at its best, it discovers live systems, missing patches, default credentials, weak services, and other well-known vulnerabilities. What it doesn't tell you is that your systems may already include a roadmap to access anything and everything on your network.

Pentesters, just like modern attackers, typically don't rely on missing patches to traverse networks, gather privileges, and access protected data. No vulnerability scanner will warn you that all laptops share the same local administrator password or that a domain admin RDP'd into one of them to troubleshoot an issue (and left their cleartext password cached in memory).

5. Your next-generation firewall and endpoint solution could also provide a false sense of security

Again, don't get me wrong, I am a big fan of solutions like Palo Alto and CrowdStrike.  BUT, simply purchasing and deploying these solutions doesn't make your networks and systems more secure. Like any control, all security solutions must be configured, tuned, and VALIDATED.

Lesson #5: It isn't uncommon to find best-of-breed security controls running in a "monitor only" or "log only" state.  After all, the easiest way to start is to convert that old layer 3 ASA config and turn on the security features later. And let's not forget that ALL IT EMPLOYEES should always be whitelisted in these controls because we don't need that stuff in our way.

6. Maybe this should be #1, but I hope we've all got this figured out… Compliance does not result in security

Contractual, industry, and especially regulatory compliance are all important, but don't let compliance get in the way of being secure. Information security programs should be designed to protect the confidentiality, integrity, availability, and usefulness of information; compliance should just be a benefit of good security.

7. Last, but not least…  If you develop your own apps, contract development of apps, or acquire custom developed applications, assess them!

Secure coding isn't a new concept, but it is (unfortunately) still new to many developers. Widely used and commercial off the shelf (COTS) applications are heavily scrutinized, but your applications may just be waiting for the right attacker to come along. A lesson worth sharing is that a breach can be far more costly than validating and fixing issues before the attack.

If you've made it to this point, thank you for reading through. This often isn't what people expect to hear or even want to hear, but sometimes honesty can be blunt and surprising. My advice is to always start with a solid foundation and then build on it. Use frameworks like the CIS Top 20 to provide a prioritized roadmap, and don't get caught skipping ahead. Good security can be as simple as keeping to the basics.


While experts have agreed for decades that passwords are a weak method of authentication, their convenience and low cost have kept them around. Until we stop using passwords or start using multi-factor authentication (for everything), a need for stronger passwords exists. And as long as people create their own passwords that must be memorized, those passwords will remain weak and guessable. This blog/article/rant will cover a brief background of password cracking as well as the justification for SynerComm’s 14-character password recommendation.

First things first: What is a password?

Authentication is the process of verifying the identity of a user or process, and a password is often the only secret “factor” used in authentication. For the authentication process to be trusted, it must positively identify the account owner and thwart all other attempts. This is critical, because access and privileges are granted based on the user’s role. Considering how easily passwords can be shared, most have already concluded that passwords are an insufficient means of authenticating people. We must also consider that people must memorize their passwords and that they often need passwords on dozens if not hundreds of systems. Because of this, humans create weak, easily guessed, and often reused passwords.

Password Controls

Over the years, several password controls have emerged to help strengthen password security. These include minimum password length, complexity, preventing reuse, and a recurring requirement to create new passwords. While it is a mathematical fact that longer passwords and a larger key space (more possible characters) do indeed create stronger passwords, we now know that regularly changing one’s password provides no additional security control. In fact, forcing users to regularly create new and complex passwords weakens security. It forces users to create guessable patterns or simply write them down. OK, I'll stop here; we'll save the ridiculousness of password aging for a future blog.

So Why 14 Characters?

So why is 14 characters the ideal or best recommended password length? It is not. It is merely a minimum length; we still prefer to see people using even longer passwords (or doing better than passwords in the first place). SynerComm recommends a 14-character minimum for several reasons. First, 14-character passwords are very difficult to crack. Most passwords containing 9 characters or less can be brute-force guessed in under 1 day with a modern password cracking machine. Passwords with 10-12 characters and even 13-14 characters can still be easily guessed if they are based on a word and a 4-digit number. (Consider Summer2018! or your child’s name and birthday.) Next, and perhaps more importantly, a 14-character minimum will prevent bad password habits and promote good ones. When paired with security awareness training, users can be taught to create and use passphrases instead of passwords. Passphrases can be sentences, combinations of words, etc. that are meaningful and easy to remember. Finally, 14 characters is the largest “Minimum Password Length” currently allowed by Microsoft Windows. While Windows supports very long passwords, it is not simple to enforce a minimum greater than 14 characters (PSOs can be used to increase this in Windows 2008 and above, and registry hacks for anything older, but it can be a tedious process and introduces variables into the management and troubleshooting of your environment).

The remainder of this article provides facts and evidence to support our recommendations.

Analysis of Password Length

SynerComm collected over 180,000 NTLM password hashes from various breached domain controllers and attempted to crack them using dictionary, brute-force, and cryptanalysis attacks. The chart below shows the password lengths of the over 93,000 passwords cracked. It is interesting to find passwords that fall drastically below the usual minimum length of eight characters. Although few, it is also worth noting that 20, 21 and 22-character passwords (along with one 27-character password) were cracked in these analyses.

Passwords Cracked = 93,706. Total unique entries of those passwords cracked = 68,161

Passwords of 9 or fewer characters account for nearly 60% of those cracked; 12 or fewer, almost 95%

Password Length - Number of Cracked Passwords
1 = 3 (0.0%)
2 = 2 (0.0%)
3 = 137 (0.15%)
4 = 27 (0.03%)
5 = 405 (0.43%)
6 = 1527 (1.63%)
7 = 3827 (4.08%)
8 = 26191 (27.95%)
9 = 23677 (25.27%)
10 = 17564 (18.74%)
11 = 9098 (9.71%)
12 = 6267 (6.69%)
13 = 2915 (3.11%)
14 = 1063 (1.13%)
15 = 577 (0.62%)
16 = 276 (0.29%)
17 = 81 (0.09%)
18 = 39 (0.04%)
19 = 13 (0.01%)
20 = 10 (0.01%)
21 = 1 (0.0%)
22 = 4 (0.0%)
23 = 0 (0.0%)
24 = 0 (0.0%)
25 = 0 (0.0%)
26 = 1 (0.0%)
27 = 1 (0.0%)
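The cumulative picture is easy to reproduce from the counts above (a quick sketch using the table's own numbers):

```python
# Cracked-password counts by length, copied from the table above.
counts = {1: 3, 2: 2, 3: 137, 4: 27, 5: 405, 6: 1527, 7: 3827,
          8: 26191, 9: 23677, 10: 17564, 11: 9098, 12: 6267,
          13: 2915, 14: 1063, 15: 577, 16: 276, 17: 81, 18: 39,
          19: 13, 20: 10, 21: 1, 22: 4, 26: 1, 27: 1}

total = sum(counts.values())  # 93,706 cracked passwords
running = 0
for length in sorted(counts):
    running += counts[length]
    print(f"<= {length:2d} chars: {running / total:6.1%}")
```

Running this shows the steep drop-off past 9 characters, which is exactly why a longer minimum pays off.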

Analysis of Password Composition

*Note: The password "acme" was used to replace specific company names. For example, if the password "synercomm123$" had been found in a SynerComm password dump, it would have been replaced with "acme123$". This change applies only to the top 10 password and base word tables. Analyses of length and masks were performed without this change.

Top 10 passwords
Password1 = 543 (0.58%)
Summer2018 = 424 (0.45%)
Summer18 = 395 (0.42%)
acme80 = 368 (0.39%)
Fall2018 = 362 (0.39%)
Good2go = 350 (0.37%)
yoxvq = 345 (0.37%)
Gr8team = 338 (0.36%)
Today#08 = 308 (0.33%)
Spring2018 = 219 (0.23%)

Top 10 base words
password = 1993 (2.13%)
summer = 1663 (1.77%)
acme = 1619 (1.73%)
spring = 734 (0.78%)
fall = 706 (0.75%)
welcome = 652 (0.7%)
winter = 577 (0.62%)
w0rdpass = 562 (0.6%)
good2go = 351 (0.37%)
yoxvq = 345 (0.37%)

Last 4 digits (Top 10)
2018 = 3037 (3.24%)
2017 = 821 (0.88%)
1234 = 733 (0.78%)
2016 = 659 (0.7%)
2015 = 588 (0.63%)
2014 = 561 (0.6%)
2013 = 435 (0.46%)
2012 = 358 (0.38%)
2010 = 296 (0.32%)
2019 = 286 (0.31%)

Masks (Top 10)
?u?l?l?l?l?l?d?d (6315) (8 char)
?u?l?l?l?l?l?d?d?d?d (4473) (10 char)
?u?l?l?l?l?l?l?d?d (4021) (9 char)
?u?l?l?l?d?d?d?d (3328) (8 char)
?u?l?l?l?l?d?d?d?d (2985) (9 char)
?u?l?l?l?l?l?l?l?d?d (2742) (10 char)
?u?l?l?l?l?l?l?d (2601) (8 char)
?u?l?l?l?l?l?l?l?d (2371) (9 char)
?u?l?l?l?l?l?l?d?d?d?d (1794) (11 char)
?u?d?d?d?d?d?d?d?d (1756) (9 char)
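Each of these masks defines a tiny keyspace relative to full brute force, which is why mask attacks finish so quickly. A sketch of the math for the top mask (?u = 26 uppercase, ?l = 26 lowercase, ?d = 10 digits):

```python
# Character-class sizes for common hashcat mask symbols.
CLASS_SIZES = {"?u": 26, "?l": 26, "?d": 10, "?s": 33, "?a": 95}

def mask_keyspace(mask: str) -> int:
    """Multiply the class sizes for each ?x token in the mask."""
    tokens = [mask[i:i + 2] for i in range(0, len(mask), 2)]
    space = 1
    for t in tokens:
        space *= CLASS_SIZES[t]
    return space

top_mask = "?u?l?l?l?l?l?d?d"   # matches passwords shaped like "Summer18"
print(f"{mask_keyspace(top_mask):,}")       # 30,891,577,600
print(f"vs full 8-char space: {95**8:,}")   # 6,634,204,312,890,625
```

The most common 8-character mask covers roughly 1/200,000th of the full 8-character keyspace, yet it accounted for over 6,300 cracked passwords in this dataset.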

Password Hash Cracking Speeds

When performing our own password cracking, SynerComm uses a modern password cracker built with 8 powerful GPUs (https://www.synercomm.com/blog/how-to-build-a-2nd-8-gpu-password-cracker/). Typically used by gamers to create realistic three-dimensional worlds, these graphics cards are remarkably efficient at performing the mathematical calculations required to defeat password hashing algorithms. The first screenshot below shows a brute-force guess of an 8-character password. It shows that most 8-character passwords will crack in 4.5 hours or less. While the same attack against a 9-character password could take up to 18 days to complete, we can reduce the key space (possible characters used in passwords) and complete 10-11 character attacks in just 1-2 days or less. The second screenshot shows an optimized character set mask attack against 11-character passwords. This attack completes in less than 8 hours and returns many poorly selected 11-character passwords.
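The time estimates above can be sanity-checked with simple arithmetic. A sketch, assuming an NTLM guess rate of roughly 400 GH/s for an 8 GPU rig (our ballpark figure, chosen to be consistent with the timings quoted):

```python
HASH_RATE = 400e9  # NTLM guesses/second; an assumed ballpark for 8 modern GPUs

def crack_time_days(charset_size: int, length: int, rate: float = HASH_RATE) -> float:
    """Worst-case days to exhaust the full keyspace at the given rate."""
    return charset_size ** length / rate / 86400

print(f"8 chars:  {crack_time_days(95, 8):.2f} days")   # ~0.19 (about 4.6 hours)
print(f"9 chars:  {crack_time_days(95, 9):.1f} days")   # ~18 days
print(f"14 chars: {crack_time_days(95, 14):.2e} days")  # astronomically long
```

Each added character multiplies the work by the charset size, which is why the jump from 8 to 9 characters moves the attack from hours to weeks, and 14 characters pushes full brute force out of reach entirely.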

Below is an optimized crack attempt for 11-character passwords using only common characters and format (e.g., beginning with an upper case letter or number):

Password Best Practices

  1. Do Not Share Your Password with Anyone!
  2. Do Not Store Passwords in Spreadsheets, Documents, or Email! Also avoid storing passwords in your browser (IE, Firefox, Chrome).
  3. Create passphrases instead of passwords. Long passwords are always stronger than short passwords. Passwords shorter than 10 characters can be easily and quickly cracked if their hashes become available to the attacker. SynerComm recommends enforcing at least a 12-character minimum for standard user accounts but suggests using a 14-character minimum to promote good password creation methods. Privileged accounts such as domain administrators should have even longer passwords.
  4. While password complexity is less critical with long (>=14 char) passwords, it still helps ensure a larger key space. Encourage users to use less common characters such as spaces, commas, and any other special character found on the keyboard. (Spaces can make an enormous difference!)
  5. Never reuse the same password on multiple accounts. While it is easier to remember 1 password than 100, our next best practice provides a solution to that problem too. Password dumps from past breaches are a great starting place for guessing a user’s password.
  6. Use a password safe. Modern password managers can sync stored passwords between computers and mobile devices. By using a safe, most users only need to remember 2-3 passwords and the rest can be stored securely in a safe.
    1. When using a safe, it is best practice to allow the application to generate most passwords. This way you can create 15-20 character completely random passwords that you never need to know or memorize.
  7. Implement multi-factor authentication whenever possible. Passwords will always be a weak and vulnerable form of authentication. Using multi-factor greatly reduces the chances of a successful authentication attack. Multi-factor authentication should be used for ALL (no exceptions) remote access and should increasingly be considered for ALL privileged account access.
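For practice #6.1, letting the safe generate passwords is straightforward; the same idea takes only a few lines of Python (a sketch using the standard `secrets` module, which is CSPRNG-backed):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from the full printable set."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a 20-char random password you never need to memorize
```

A 20-character password drawn uniformly from ~94 printable characters is far beyond brute-force or mask attacks; since the safe stores it, its unmemorability costs nothing.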

*For shared accounts (root, admin, etc.), restrict the number of people who have access to the password. Change these passwords anytime someone who could know the password leaves the organization.


~Brian Judd (@njoyzrd) with password analysis by Chad Finkenbiner

Why? … Stop asking questions!


In February 2017, we took our first shot at upgrading our old open-frame 6 GPU cracker (NVIDIA 970).  It served us well, but we needed to crack 8 and 9-character NTLM hashes within hours and not days. The 970s were not cutting it and cooling was always a challenge. Our original 8 GPU rig was designed to put our cooling issues to rest.

Speaking of cooling issues, we enjoyed reading all of the comments on our 2017 build. Everyone seemed convinced that we were about to melt down our data center. We thank everyone for their concern (and entertainment).

"the graphics cards are too close!"

"nonsense. GTX? LOL. No riser card? LOL good luck."

To address cooling, we specifically selected (at the time) NVIDIA 1080 Founders Edition cards due to their 'in the front and out the rear' centrifugal fan design.  A couple months after our initial blog, we upgraded from NVIDIA 1080 to NVIDIA 1080 Ti cards.  And admittedly, we later found that the extra memory was useful when cracking with large (>10GB) wordlists.

OK, But Why?

Shortly after building our original 8 GPU cracker, we took it to RSA and used it as part of a narrated live hacking demo. Our booth was a play on the Warlock’s command center where we hacked Evil Corp from the comfort of Ma’s Basement. (yeah, a bit unique for RSA…)

Kracken 3 - RSA Debut

Our 1st 8 GPU rig built in February 2017

Shopping List

You have a little flexibility here, but we’d strongly suggest the Tyan chassis and Founders Edition NVIDIA cards. The Tyan comes with the motherboard and power supplies (3x), and arrives all cabled up and ready to build. We went with a 4TB SSD to hold some very large wordlists but did not set up RAID with a 2nd drive (yet). Higher CPU speeds and more memory mostly help with dictionary attacks; therefore, a different build may be better suited for non-GPU cracking.




The Build

Despite being a hash munching monster and weighing nearly 100 lbs. when assembled, this build is easy enough for a novice.

Tyan B7079F77CV10HR-N

Hardware Build Notes

  1. Normally I like to install the CPU(s) first, but I ordered the wrong ones and had to install them 3 days later. Be sure to get V3 or V4 XEON E5 processors, V2 is cheaper but ‘it don’t fit’.

    1. When installing the (included) Tyan heat-sinks, we added a little extra thermal paste even though the heat-sinks already have some on the bottom.

  2. Install memory starting in Banks A and E (see diagram above). CPU 0 and CPU 1 each require matching memory. Memory Banks A-D are for CPU 0 and Memory Banks E-H are for CPU 1. We added 2x 32GB in Bank A and 2x 32GB in Bank E for a total of 128GB RAM.

  3. Install hard drive for (Linux) operating system. We chose a 4TB SSD drive to ensure plenty of storage for large wordlists and optimum read/write performance. The chassis has 10 slots so feel free to go crazy with RAID and storage if you wish.

  4. Prep all 8 GPU cards by installing the included Tyan GPU mounting brackets. They are probably not required, but they ensure a good seat.

  5. Install GPU cards. Each NVIDIA 1080 Ti requires 2 power connections per card. The regular 1080 cards only require 1 if you decide not to go the ‘Ti’ route. Again, Tyan includes all necessary power cables with the chassis.

  6. Connect or insert OS installation media. I hate dealing with issues related to booting and burning ISOs written to USB flash; so we went with a DVD install (USB attached drive).

  7. Connect all 3 power cords to the chassis and connect the other end of each cord to a dedicated 15A or 20A circuit. While cracking, the first 2 power supplies draw 700-900W each, with less on the 3rd. They do need dedicated circuits though; it is easy to trip breakers if anything else is sharing the circuit.

Software Build Notes

Everyone has their own preferred operating system and configuration, so we’ve decided not to go telling you how to do your thing. If you are new to installing and using a Linux operating system, we did include a complete walk-through in our February 2017 post: How to build a 8 GPU password cracker.

The basic software build steps are as follows:

  1. Install your preferred Linux OS. We chose Ubuntu 18.04 LTS (64 bit - server). Fully update and upgrade.

  2. Prepare for updated NVIDIA drivers:

2a. Blacklist the generic NVIDIA Nouveau driver

sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
sudo bash -c "echo options nouveau modeset=0 >> /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
sudo update-initramfs -u
sudo reboot

2b. Add 32-bit headers

sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install build-essential libc6:i386

2c. Download, unzip and install the latest NVIDIA driver from http://www.nvidia.com/Download/index.aspx

sudo ./NVIDIA*.run
sudo reboot

3. Download and install hashcat from https://hashcat.net/hashcat/

4. (Optional) Download and install hashview from http://www.hashview.io/

The Outcome

Go ahead, run a benchmark with hashcat to make sure everything works!

./hashcat-5.0.0/hashcat64.bin -m 1000 -b

Going to be at RSA 2019? Stop by and see us! https://events.synercomm.com/events/138/

