Warning: This blog contains purposeful marketing and gratuitous plugs for SynerComm’s CASM™ Subscription services. Seriously though, the following article presents the need for better external visibility and vulnerability management.
Whether you are vulnerability scanning to meet compliance requirements or as part of good security practice, the underlying need is the same: accurate, reliable detection of vulnerabilities. At the time of this article, there are essentially three equally capable and qualified scanning solutions: products from Tenable, Rapid7, and Qualys. My point is that each of these scanning solutions, if configured correctly, should produce accurate and similar results. Therefore, as long as your scanning provider is using one of these three solutions, it should be able to detect vulnerabilities. SynerComm starts with a top scanner and then addresses the gaps that your MSSP is missing.
Vulnerability scanning and analysis is a critical process within all information security programs. Scanners should find missing patches, dangerous configurations, default passwords, and hundreds of other weaknesses. Their technology is based on probing systems over networks and trying to determine if the system exhibits specific vulnerabilities. While the process itself isn’t complicated, many organizations choose to outsource it to a managed service provider. If you need a provider or already have one, it’s time to upgrade to Continuous Attack Surface Management (CASM™).
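At its core, the "probing systems over networks" that scanners perform starts with something as simple as a TCP connect probe. The following is a minimal sketch of that idea (not how any particular commercial scanner is implemented); the hostnames and ports are illustrative:

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connect() to host:port succeeds (port is open)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def probe(host, ports):
    """Probe a handful of ports and report which accepted a connection."""
    return {port: tcp_port_open(host, port) for port in ports}
```

Real scanners layer service fingerprinting and vulnerability checks on top of this discovery step, which is exactly the part that benefits from a mature product rather than a homegrown script.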
Vulnerability scanning MSSPs served their role well for many years but failed to keep up. They failed to keep up with cloud migrations, failed to keep up with the rate of IT change, and failed to provide tools that simplify and enable security for their subscribers.
If you’ve ever wondered what your systems and exposures look like to a cyber-criminal, just ask a pentester. SynerComm’s CASM® Engine was originally designed to provide accurate and timely reconnaissance information to our penetration testers. Access to this data and our ‘Findings-Based Reporting’ is available to all CASM® and Continuous Penetration Test subscribers.
Learn more about our Continuous Attack Surface Management, SynerComm’s CASM® Engine, and our industry-leading Continuous Penetration Test subscriptions.
|Capability|Scanning MSSP|CASM™ Subscription|
|---|---|---|
|Scheduled Scanning of Known Assets|✔️|✔️|
|Ad-Hoc (Manual) Scanning|✔️|✔️|
|24/7 Online Dashboard Reporting|✔️|✔️|
|Discovery of New Assets||✔️|
|Elimination of False-Positives||✔️|
|Risk-Based Customizable Alerts||✔️|
|Access to Penetration Testers||✔️|
Palo Alto Networks firewalls have the ability to create security policies and generate logs based on users and groups, and not just IP addresses. This functionality is called User-ID.
User-ID™ enables you to map IP addresses to users on your network using a variety of techniques. The methods include using agents, monitoring domain controller event logs, monitoring terminal servers, monitoring non-AD authentication servers and syslog servers, and even through captive portals (that prompt the user for login). In addition to its use in policies, logging access and threats by user can be invaluable in incident response and forensics. To take full advantage of this feature, it is ideal to map as many IP addresses to users as possible.
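To make the syslog-based mapping concrete, here is a minimal sketch of what a collector does with an authentication log: match a login message and record the IP-to-user mapping. The log format and field names (`user=`, `ip=`) are hypothetical; real deployments define a match pattern per log source:

```python
import re

# Hypothetical syslog format; each vendor/source needs its own match pattern.
LOGIN_RE = re.compile(
    r"authentication success.*user=(?P<user>\S+).*ip=(?P<ip>\d+\.\d+\.\d+\.\d+)"
)

def update_mappings(mappings, syslog_line):
    """Update the IP -> user table from one syslog message, if it matches."""
    m = LOGIN_RE.search(syslog_line)
    if m:
        mappings[m.group("ip")] = m.group("user")
    return mappings
```

The firewall then resolves a security policy or log entry for `10.1.2.3` to the user who last authenticated from that address.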
With all these great methods to map users to IP addresses, we often miss many systems. They include non-domain joined systems, Linux/Unix systems that don’t centrally authenticate, and potentially many other devices (phones, cameras, etc.). Palo Alto has yet another feature for mapping users, but one that comes with great risk.
To identify mappings for IP addresses that the agent didn’t map, the firewall can probe and interrogate devices. The intention is to probe only systems connected to trusted internal zones, but a misconfigured zone could even allow probes to be sent out to the internet. Setting that misconfiguration aside, client probing is still a significant security risk. By default, Palo Alto agents send a request every 20 minutes to every IP address that was recently logged but not mapped to a user. The agent assumes the IP belongs to a Windows system and uses a WMI probe to log into the unmapped system.
SynerComm believes that a large number of PAN customers have enabled WMI and/or NetBIOS Client Probing within the User-ID settings. Our AssureIT penetration testing team regularly detects this on internal pentests. Due to the risk, SynerComm recommends disabling Client Probing in the User-ID Agent setup.
Many networking and network security devices use Microsoft WMI probing to interrogate Windows hosts, for example to collect user information. For authentication purposes, a WMI probe carries the username of the service account being used, and when a domain account is used, an NTLMv1 or NTLMv2 authentication exchange takes place. It has come to our attention that our pentesters are finding Palo Alto firewalls using insecure User-ID methods, specifically WMI and NetBIOS probes, to attempt user identification. This allows an attacker to obtain the service account’s username, domain name, and password hash (more precisely, the hashed challenge/nonce). Because the service account requires privileges, this becomes a serious security exposure that could easily be abused.
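To see why capturing that challenge/response is valuable, it helps to look at how the NTLMv2 proof is built (per Microsoft's published MS-NLMP construction): two HMAC-MD5 operations keyed first by the NT hash, then over the server challenge and client blob. Everything except the password is visible on the wire, so an attacker can recompute the proof for candidate passwords offline. This sketch takes the NT hash as an input (MD4 isn't in `hashlib` on modern OpenSSL builds); the account and domain names are made up:

```python
import hmac
import hashlib

def ntlmv2_proof(nt_hash, user, domain, server_challenge, blob):
    """Recompute the 16-byte NTLMv2 proof an eavesdropper sees on the wire.
    nt_hash is MD4(UTF-16LE(password)), supplied here to keep the sketch stdlib-only."""
    # Key derivation: HMAC-MD5 over uppercased username + domain, UTF-16LE encoded.
    ntlmv2_key = hmac.new(
        nt_hash, (user.upper() + domain).encode("utf-16-le"), hashlib.md5
    ).digest()
    # The proof: HMAC-MD5 over the server challenge concatenated with the client blob.
    return hmac.new(ntlmv2_key, server_challenge + blob, hashlib.md5).digest()
```

An offline cracker simply loops candidate passwords, derives each NT hash, recomputes the proof, and compares it to the captured one; no further interaction with the network is needed.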
An October 30, 2019 Palo Alto advisory, “Best Practices for Securing User-ID Deployments,” recommends ensuring that User-ID is only enabled on internal/trusted zones and applying the principle of least privilege to the service account. Again though, SynerComm recommends also disabling WMI probing completely.
(By: Brian Judd, VP Information Assurance)
In a perfect world, we could trust that every device on our internal network is owned, managed, and monitored by our company and our staff. That includes having full trust that no systems are already compromised, and that no intruder or insider could place a rogue device on our network. Because this is rarely, if ever, the case, it’s a stretch to think that it’s safe to share valid domain credentials with any device connected to an internal network.
Using well-known penetration testing tools like responder.py, it is trivial to set up an SMB server that listens for and responds to NTLM authentication requests. When good OPSEC isn’t a factor, responder.py can also respond to LLMNR and NetBIOS broadcasts, poisoning other local systems into authenticating to its listening SMB server. It then stores the username and the hashed challenge (nonce) from the authentication messages. Depending on the strength of the password, these captured “hashes” can be cracked and the account used to log into other systems.
While all of that sounds scary, it isn’t the concern of this article. If configured to use “Client Probing”, Palo Alto firewalls and their User-ID agents make WMI and NetBIOS connections to map unknown IP addresses to their logged-in users. Also, because WMI is IP based, it’s possible to probe any reachable (routable) network/system. To be effective, User-ID almost always uses a domain service account so that it can access any domain member system. An attacker with the ability to run responder.py on an internal network is likely to receive authentication requests from Palo Alto User-ID agents without any need for noisy poisoning attacks. By default, the agent probes every 20 minutes and anytime a new log is written to the firewall without user identification.
OK, let’s make this a bit worse… What if we didn’t need to crack the service account’s password? What if we could just relay the agent’s authentication request to another system and trick it into authenticating the attacker instead? Again, this is trivial using well-known tools like ntlmrelayx.py or MultiRelay.py. Even worse, these tools are not exploits; this is how NTLM authentication was designed to work. If the relayed account’s privilege is sufficient, ntlmrelayx.py will even dump the system’s stored hashes from the SAM database, or execute shellcode.
Oh, remember earlier when I mentioned that Palo Alto’s agent probes anytime a new log is written by an unmapped IP address? Using this “feature”, we can script something as simple as a DNS lookup or wget request to generate access logs on the firewall and trigger a User-ID authentication request. With a little time, these logins could be relayed to log into every other system on the network. Considering that older Palo Alto documentation was vague with regards to the necessary service account privileges, it is common to find them as members of highly privileged groups including Domain Administrators. To an attacker, this could be game, set, match in just a few minutes.
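The trigger side of that attack really is as simple as generating traffic. A sketch of the idea: resolve a handful of hostnames so traffic crosses the firewall from the attacker's (unmapped) IP, prompting the agent to probe back. The hostnames and delay are arbitrary; any loggable traffic would do:

```python
import socket
import time

def generate_lookups(names, delay=1.0):
    """Resolve each name so traffic crosses the firewall and is logged against
    our unmapped IP, which by default triggers a User-ID probe back at us."""
    attempted = 0
    for name in names:
        try:
            socket.getaddrinfo(name, 80)
        except socket.gaierror:
            pass  # even a failed lookup produces loggable DNS traffic
        attempted += 1
        time.sleep(delay)
    return attempted
```

Each probe the agent sends back is another authentication attempt an attacker can capture or relay, which is why the only safe fix is disabling client probing rather than rate-limiting it.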
Specify included and excluded networks when configuring User-ID
The include and exclude lists available on the User-ID Agent, as well as agentless User-ID, can be used to limit the scope of User-ID. Typically, administrators are only concerned with the portion of IP address space used in their organization. By explicitly specifying the networks, or preferably host addresses (/32), to be included in or excluded from User-ID, we can help ensure that only trusted and company-owned assets are probed, and that no unwanted mappings are created unexpectedly. See above image.
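The evaluation logic can be sketched with the stdlib `ipaddress` module. The networks below are made up, and this assumes exclude entries take precedence over includes (hedged: verify the precedence rules for your PAN-OS version before relying on them):

```python
import ipaddress

# Hypothetical scoping: map users only on the corporate /16,
# but never on the guest VLAN inside it.
INCLUDE = [ipaddress.ip_network("10.10.0.0/16")]
EXCLUDE = [ipaddress.ip_network("10.10.99.0/24")]

def should_map(ip):
    """Return True only if ip is inside an include network and no exclude network."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in EXCLUDE):
        return False
    return any(addr in net for net in INCLUDE)
```

With this scoping, an internet address or guest-VLAN host never receives a probe, so a rogue device parked there cannot harvest the service account's authentication.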
Disable WMI and NetBIOS Client Probing
WMI, or Windows Management Instrumentation, is a mechanism that can be used to actively probe managed Windows systems to learn IP-user mappings. Because WMI probing trusts data reported back from the endpoint, it is not a recommended method of obtaining User-ID information in a high-security network. In environments with relatively static IP-user mappings, such as common office environments with fixed workstations, active WMI probing is not needed. Roaming and other mobile clients can be identified even as they move between addresses by integrating User-ID with Syslog or the XML API, which can also capture IP-user mappings from platforms other than Windows.
On sensitive and high-security networks, WMI probing increases the overall attack surface, and administrators should disable WMI probing and instead rely upon User-ID mappings obtained from more isolated and trusted sources, such as domain controllers.
If you are using the User-ID Agent to parse AD security event logs, syslog messages, or the XML API to obtain User-ID mappings, then WMI probing should be disabled. Captive portal can be used as a fallback mechanism to re-authenticate users where security event log data may be stale.
Use a dedicated service account for User-ID services with the minimal permissions necessary
User-ID deployments can be hardened by only including the minimum set of permissions necessary for the service to function properly. This includes DCOM Users, Event Log Readers, and Server Operators. If the User-ID service account were to be compromised by an attacker, having administrative and other unnecessary privileges creates significant risk. Domain Admin and Enterprise Admin privileges are not required to read security event logs and consequently should not be granted.
Deny interactive logon for the User-ID service account
While the User-ID service account does require certain permissions in order to read and parse Active Directory security event logs, it does not require the ability to log on to servers or domain systems interactively. This privilege can be restricted using Group Policies, or by using a Managed Service Account with User-ID (see Microsoft TechNet for more information on configuring Group Policies and Managed Service Accounts). If the User-ID service account were to be compromised by a malicious user, the impact could be reduced by denying interactive logons.
Deny remote access for the User-ID service account
Typically, service accounts should not be members of any security groups that are used to grant remote access. If the User-ID service account credentials were to be compromised, this would prevent the attacker from using the account to gain access to your network from the outside using a VPN.
Configure egress filtering
Prevent any unwanted traffic (including potentially unwanted User-ID Agent traffic) from leaving your protected networks out to the Internet by implementing egress filtering on perimeter firewalls.
For more information on setting up and configuring User-ID see the following:
- User-ID, PAN-OS Administrator's Guide
- Getting Started: User-ID
- Create User Groups for Access to Whitelist Applications, Internet Gateway Best Practice Security Policy
- User-ID Resource List on Configuring and Troubleshooting
Information security has officially given me a few gray hairs, so I'm writing this article from the perspective of someone who's been around the block. With over 15 years in information security, I feel like I've seen it all. And while I can't claim to be a great penetration tester myself, I do work with (and have worked with) some truly talented pentesters. I can also confidently say that I've read more pentest reports than most.
So, having this background… I get asked by businesses and defenders all the time, "What advice would you give?" and, "What lessons can be learned?"
Well, thanks for asking…. (insert deep breath here)
We've known that passwords are a weak form of authentication since the moment the first password-based authentication system was created. Passwords are weak for several compounding reasons: their limited length and complexity (keyspace), and the fact that they can be shared, guessed, written down, or reused. Let's face it, they provide almost no security on their own. Until we stop using passwords, or ensure that every last account has a strong and unique password that can't be guessed or cracked, we accept significant risk.
Multi-factor authentication (MFA) is not enabled or required for all remote access. While it is almost commonplace now to find MFA on VPNs, we still find roles, groups, and even URLs that allow MFA to be bypassed. Further, other types of remote access like Citrix and Remote Desktop, Outlook Web Access, and SSH are more often overlooked. Remember that when passwords are weak (and they probably are), attackers will be quick to take advantage when MFA is not enforced.
Your mom said it, and now I will too. In SynerComm's reporting, we consider both #1 and #2 to be high-severity findings in our pentest reports. When combined, these result in a critical weakness. Password spraying allows an attacker to easily guess common passwords (think Summer19) and gain immediate access to email and internal networks.
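Why does spraying work so well? Because the first guesses practically write themselves. A sketch of the kind of seasonal wordlist a sprayer starts with (the variants are illustrative; real lists add company names, sports teams, and keyboard walks):

```python
def seasonal_guesses(year):
    """Build the obvious season+year guesses a password sprayer tries first."""
    yy = str(year)[-2:]
    guesses = []
    for season in ("Spring", "Summer", "Fall", "Winter"):
        # e.g. Summer19, Summer2019, Summer2019!
        guesses += [f"{season}{yy}", f"{season}{year}", f"{season}{year}!"]
    return guesses
```

A sprayer tries one such guess against every account, waits out the lockout window, then tries the next; with thousands of accounts, a dozen guesses is usually plenty.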
Don't get me wrong, get your EternalBlue and Heartbleed patched, but don't think just because you're well patched that you are secure. Vulnerability scanning is important, but at its best, it discovers live systems, missing patches, default credentials, weak services, and other well-known vulnerabilities. What it doesn't tell you is that your systems may already include a roadmap to access anything and everything on your network.
Pentesters, just like modern attackers, typically don't rely on missing patches to traverse networks, gather privileges, and access protected data. No vulnerability scanner will warn you that all laptops share the same local administrator password or that a domain admin RDP'd into one of them to troubleshoot an issue (and left their cleartext password cached in memory).
Again, don't get me wrong, I am a big fan of solutions like Palo Alto and CrowdStrike. BUT, simply purchasing and deploying these solutions doesn't make your networks and systems more secure. Like any control, all security solutions must be configured, tuned, and VALIDATED.
Lesson #5: It isn't uncommon to find best-of-breed security controls running in a "monitor only" or "log only" state. After all, the easiest way to start is to convert that old layer 3 ASA config and turn on the security features later. And let's not forget that ALL IT EMPLOYEES should always be whitelisted in these controls because we don't need that stuff in our way.
Contractual, industry, and especially regulatory compliance are all important, but don't let compliance get in the way of being secure. Information security programs should be designed to protect the confidentiality, integrity, availability, and usefulness of information; compliance should just be a benefit of good security.
Secure coding isn't a new concept, but it is (unfortunately) still new to many developers. Widely used and commercial off-the-shelf (COTS) applications are heavily scrutinized, but your in-house applications may just be waiting for the right attacker to come along. A lesson worth sharing: a breach can be far more costly than validating, and potentially fixing, your applications before the attack.
If you've made it to this point, thank you for reading through. This often isn't what people expect to hear or even want to hear, but sometimes honesty is blunt and surprising. My advice: start with a solid foundation and then build on it. Use frameworks like the CIS Top 20 to provide a prioritized roadmap, and don't get caught skipping ahead. Good security can be as simple as keeping to the basics.
So you would like to automate your vulnerability management lifecycle? Good luck. But if you're motivated, hopefully this little bit of PowerShell will help. Here are the prereqs:
- Must have PowerShell v3
- Must use Qualys for vulnerability scanning.
- For added functionality, an additional cmdlet has been included to generate tickets with an external ticketing system via email. (This format requires Computer Associates Service Desk, but if yours takes an email as input, it should just be a formatting change. Also, if you do set this up, submit a pull request to the GitHub project and I will include another cmdlet for your ticketing system.)
Once these are met, it is quite simple to get up and rolling with this module. Clone the repo from GitHub and then import the module into your PowerShell environment. Once done, you will have the following cmdlets at your disposal:
These are relatively self-explanatory, and they are pipeable as well. Before you are able to use all the functions, you will have to set up a couple of variables to match your environment. These are listed in the header of the module under defaults or explicitly mentioned in the comments. There are examples in the help for each cmdlet. If you have any feedback, feel free to leave a comment, or issue a pull request if you would like to make an improvement.