
Monday, February 2, 2015

Physical Home Security with Wireless IP Cameras and Monitoring Software


I see a lot of people who want a cheap and custom security system for their apartment, home, or business. I recently accomplished this and want to share how. Hopefully this will inspire people to be proactive in protecting their property.

My security solution consists of:

  • Foscam wireless IP cameras
  • Blue Iris monitoring software (running on a laptop)
  • FreeDNS dynamic DNS
  • Google Drive online storage

Most of these components can be swapped for alternatives, such as the cameras, software, DNS host, and online storage. This was simply the setup I found to work best for me.

So for about $240 total I have the following features:
  • Cameras for pictures and video recording capable of good quality night vision.
  • Motion detection to determine when to record video.
  • E-mail and text message alerts based on motion detection.
  • Remotely monitor the cameras from anywhere.
  • Online backup of all pictures and video in real time.

Step 1 - Configure Wireless IP Cameras

  • Plug in each camera to an outlet for power and connect it to your home network (router) with the provided ethernet cable.
  • Use the IP Camera Tool to find the IP address of each camera. This tool can be found on the included CD or downloaded from Foscam's website.
  • Type the IP address into your browser and connect over port 88; the address will look something like http://192.168.1.100:88 (your camera's address will differ).
  • Log in with the default 'admin' user and no password. You should then create a strong password for this account (the cameras allow a maximum of 12 characters). I suggest using a password manager or similar to create a complex password. Once logged in, go to Basic Settings > User Accounts to delete the 'admin' username and create your own.
  • Log in to each camera, disable DHCP, and assign each one its own static IP address. This makes it easy to unplug a camera when you don't need it and power it back on when you do.
  • Enter and save your network information in Network>Wireless Settings. This is assuming the cameras will be used wirelessly and not always connected via an ethernet cable.
That was easy! After you do this for each of your cameras you are pretty much done setting them up. Go through each settings page and configure the cameras how you like, but remember that all motion detection and alarms will be handled by the Blue Iris software (which is not a necessity, just my preference because of its many options). You could set up detection, alarms, and remote monitoring with just the Foscam cameras and no Blue Iris software; however, there are far fewer configurable options that way.
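Once each camera has its static IP, it can be handy to script a quick reachability check before relying on the system. Below is a minimal sketch; the addresses are hypothetical stand-ins for your own static assignments, and port 88 matches the camera web interface port used above.

```python
import socket

# Hypothetical static IPs assigned in the step above -- replace with yours.
CAMERAS = ["192.168.1.101:88", "192.168.1.102:88"]

def parse_endpoint(endpoint):
    """Split an 'ip:port' string into a (host, port) tuple."""
    host, _, port = endpoint.partition(":")
    return host, int(port)

def camera_is_up(endpoint, timeout=2.0):
    """Return True if a TCP connection to the camera's web port succeeds."""
    host, port = parse_endpoint(endpoint)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for cam in CAMERAS:
        print(cam, "up" if camera_is_up(cam) else "down")
```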

Step 2 - Configure Blue Iris Software

At the time of writing, the newest version of Blue Iris is still fairly new, with some people reporting bugs, which is why I am waiting to update; my current setup works just fine. A very helpful resource is the Blue Iris help page, which has pretty good documentation and search for the different features. If you ever get stuck, use it.
  • Add a camera by clicking the '+' button in the top right corner of the Cameras window.
  • Add the settings for the type of camera you own. There is a large drop-down list covering many different camera models. The user and password are the credentials you set up in step 1.
  • Now that the camera is added and connected to Blue Iris you can configure various motion detection and alarm options. Some of these options may take some trial and error to find which settings fit your desired security solution best.
  • These are my settings for motion detection and what actions are taken once motion is detected. I have it recording video and taking snapshots once any detection happens.
  • These are my alerts settings. I immediately get a text message and e-mail with snapshot if the motion detection is triggered.
At this point you have your cameras set up with the correct detection, recording, and alert settings. Recorded media is currently being saved to the local hard drive of the laptop (or whatever device) running Blue Iris. The only things left are online backup/storage, in case an intruder were to steal your laptop and with it all of the evidence, and remote access to the cameras (optional). I'm going to go over setting up Google Drive to automatically sync all of the recorded video and pictures, since it's free and includes 15GB of storage. Any good security solution should have some type of online backup.

Step 3 - Create Online Backup

That's pretty much it! Test it out by dragging files into the synced folders to make sure they upload immediately. I like to have Google Drive launch on system startup so it's one less thing I have to do before I start Blue Iris.
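If you would rather not point Blue Iris directly at the synced folder, a small script can mirror new recordings into it. This is only a sketch, and both paths are assumptions; substitute your actual Blue Iris recording folder and local Google Drive folder.

```python
import shutil
from pathlib import Path

def sync_new_files(src_dir, dst_dir):
    """Copy every file in src_dir that does not yet exist in dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.iterdir():
        if f.is_file() and not (dst / f.name).exists():
            shutil.copy2(f, dst / f.name)  # copy2 preserves timestamps
            copied.append(f.name)
    return copied

# Example (hypothetical paths):
# sync_new_files(r"C:\BlueIris\New", r"C:\Users\me\Google Drive\BlueIris")
```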

Step 4 (Optional) - Configure Remote Access

Remote access consists of two things: the Blue Iris web server and a DNS host. Be aware that this increases the attack surface of your network, so always use strong passwords and understand that nothing is ever completely secure. If you aren't actively using your security system or planning to access your cameras remotely, don't run the Blue Iris web server unless you absolutely have to.
  • Go to Blue Iris options and select the Web Server tab. Use the 'Find address now' button or Google to find your public IP address. Choose a port number which is not being used on your laptop. I like to choose something other than the obvious ports (80, 443, etc.); however, this is just an obfuscation technique, not true security. Add your local and external IP addresses as well.
  • Create a Blue Iris user and strong password which you can use to login remotely:
  • Log in to your router and forward the port number you just configured for the web server. You should now be able to reach the Blue Iris login page from the external IP and port you configured previously. What if your public IP changes? Use dynamic DNS. There are many good free DNS options out there; I chose FreeDNS.
  • Create an account at FreeDNS, add a subdomain, and download a dynamic DNS client such as FreeDNS Update. Add your FreeDNS account information in the settings tab. You will want this running whenever you want to remotely access your cameras.

    You can now reach the Blue Iris login page remotely from your own domain on the port you configured in the Web Server tab.

Done! Before I leave the house I arm the system by setting the 'traffic signal' in Blue Iris to green. There is an option in Blue Iris to define a delay before motion detection and recording begin; I have it set to around 5 minutes so the system doesn't record me leaving the house and send unnecessary SMS and e-mail alerts. When I leave, the laptop is running Blue Iris, Google Drive, and FreeDNS Update. I usually put the laptop somewhere that isn't obvious so there is less chance of it being stolen if someone does break in. My laptop has a Core i5-2410M CPU at 2.3GHz and usually sits at 10-30% load with everything running. Because of this, make sure your laptop is always plugged in and its power settings are configured to never sleep, hibernate, or shut down the hard drive.

I now have a custom security solution that was a fairly cheap investment. Please let me know if you have any questions or comments, or if this was helpful.

Thursday, October 30, 2014

Detecting and Exploiting the HTTP PUT Method

I recently found a web server which allowed the HTTP PUT method. This was detected and proven vulnerable by a Nessus vulnerability scan, which actually uploaded its own page at /savpgr1.html with the text “A quick brown fox jumps over the lazy dog.” My first thought was to see if I could upload a shell (PHP, ASP, JSP), which you can generate in Metasploit or find online. Unfortunately this didn't work, as none of them were interpreted by the server. Another possible attack scenario would have been phishing, by creating our own page within the web application.

During this test I didn't have much time and there wasn't a lot of information online about the HTTP PUT method from a penetration testing perspective. This blog post will be going over various ways to detect if a web server accepts the PUT method, how to successfully complete a PUT request, and how to set up a test web server which accepts PUT.

Detecting the HTTP PUT Method

  • OPTIONS method via Netcat, Burp, etc.:
    nc <target> 80
    OPTIONS / HTTP/1.1
    Host: <target>
  • Nmap: The http-methods.nse script checks each HTTP method and outputs the response. This can be really nice for quickly checking multiple servers/ports at a time. Example usage: nmap --script=http-methods.nse -p80,443 <target>.
  • Nessus: One of the ways Nessus reports on detected HTTP methods is through plugin 43111 "HTTP Methods Allowed (per directory)". The plugin file used is "web_directory_options.nasl" which can usually be found in /opt/nessus/lib/nessus/plugins.
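When there are many hosts to check, the OPTIONS request above is also easy to script with Python's standard library. A minimal sketch (the host and port are whatever target you are testing):

```python
import http.client

def allowed_methods(host, port=80, path="/"):
    """Send an OPTIONS request and return the Allow header as a list."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("OPTIONS", path)
        allow = conn.getresponse().getheader("Allow") or ""
    finally:
        conn.close()
    return [m.strip() for m in allow.split(",") if m.strip()]

# Example: "PUT" in allowed_methods("target.example.com") suggests the
# server may accept uploads -- confirm with an actual PUT request.
```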

Make a PUT Request / Upload Data

  • Request with Netcat, Burp, etc.:
    nc <target> 80
    PUT /hello.htm HTTP/1.1
    Host: <target>
    User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT)
    Accept-Language: en-us
    Connection: Keep-Alive
    Content-Type: text/html
    Content-Length: 22

    <h1>Hello, World!</h1>
  • cURL: It is worth noting these commands have varying levels of success.
    curl -i -X PUT -H "Content-Type: application/xml; charset=utf-8" -d @"/tmp/some-file.xml" http://<target>/some-file.xml
    curl -X PUT -d "text or data to put" http://<target>/hello.htm
    curl -i -H "Accept: application/json" -X PUT -d "text or data to put" http://<target>/hello.htm
  • I found an old python script which PUTs a local file onto a target web server. The script takes two arguments: a local file and the destination URL. There are optional arguments for authentication. The Content-Length of the local file is automatically calculated and set in the PUT request. I had pretty good success using this script.
  • Nmap: The http-put.nse script uploads a local file to a web server via PUT request. I have not personally used it but it might be a good option. Example usage would be: nmap -p 80 --script http-put --script-args http-put.url='/uploads/rootme.php',http-put.file='/tmp/rootme.php'.
  • Nessus: Nessus has an interesting plugin which actually makes the PUT request with its own data. This is done by plugin 10498 "Web Server HTTP Dangerous Method Detection." I have not tried this but am fairly certain it would work: edit "http_methods.nasl" with your own data and run the Nessus scan with that plugin enabled. A quick edit to the data and Content-Length should do it.
  • Metasploit: Metasploit also gives you the ability to PUT a file with auxiliary/scanner/http/http_put. I haven't tried this but it seems straightforward.
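The tools above all do roughly the same thing, which is small enough to write yourself. Here is a sketch in standard-library Python (not the script mentioned earlier) that reads a local file and PUTs it with the correct Content-Length; the URL is a placeholder:

```python
import urllib.request

def put_file(url, local_path, content_type="text/html"):
    """Upload local_path to url via HTTP PUT; return the response status."""
    with open(local_path, "rb") as f:
        data = f.read()
    req = urllib.request.Request(
        url, data=data, method="PUT",
        headers={"Content-Type": content_type},
    )  # urllib sets Content-Length from len(data) automatically
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example: put_file("http://<target>/hello.htm", "/tmp/hello.htm")
```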

Example PUT Python Web Server

To test some of the techniques discussed, I originally tried to configure an Apache server but had issues getting it to accept my PUT requests, so I found and modified a couple of python scripts to set up a quick test web server. Note that the server may be sensitive to how requests are formatted and may not accept requests made with all of the methods mentioned above.
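As a rough stand-in for those scripts (an assumption-laden sketch, not the originals), the standard library alone can serve PUT requests: the handler below writes each uploaded body to a file in its working directory, with no validation, so use it only in a lab.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import os

class PutHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Keep only the file name so uploads land in the working directory.
        name = os.path.basename(self.path) or "index.html"
        with open(name, "wb") as f:
            f.write(body)
        self.send_response(201)
        self.send_header("Content-Length", "0")
        self.end_headers()

# To run: HTTPServer(("127.0.0.1", 8000), PutHandler).serve_forever()
```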

Hopefully this is enough to get started with making PUT requests. It's a really interesting attack vector and should not be allowed by application developers and owners in most circumstances. If you have any other ideas or suggestions based on this post, please comment!


Tuesday, June 3, 2014

HTTP Security Headers Nmap Parser


Recently there have been some reports on how often major sites, such as the Alexa top sites, use security-related HTTP headers. Surprisingly (or maybe not), most are NOT taking full advantage of them. Among many system owners there seems to be a lack of awareness of HTTP headers, especially those related to security. In many architectures these headers can be configured without changing the application, so why not take a look at them for your own sites? The reward for implementing (or removing) some of these headers can be substantial. It is worth noting that some headers are supported only by specific browsers and offer only a certain level of protection, so they should not be solely relied on from a security perspective.

What’s one of the first things we do when we start testing the security posture of a network? Discovery with Nmap. Nmap has a built-in NSE script, ‘http-headers’, which returns the headers via a HEAD request to a web server. Manually looking through a large Nmap output file to see which headers are in use can be really tedious, so I wrote a small parser in python which takes the Nmap .xml output file and generates an .html report with only security-related header information.


  1. Run Nmap with http-headers script and xml output:
    nmap --script=http-headers <target> -oX output_file.xml
  2. Run the parser with the .xml Nmap output file:
    python <parser> -f output_file.xml

Usage: <parser> { -f file } [-o output_filename]
There is one required argument, the .xml Nmap output file. The user can also specify the output filename (default: Security-Headers-Report.html).

After running the script we have a nicely formatted table which contains every asset (ip:port) from the Nmap scan. Each asset displays information about nine different security-related headers: Access-Control-Allow-Origin, Content-Security-Policy, Server, Strict-Transport-Security, X-Content-Type-Options, X-Frame-Options, X-Permitted-Cross-Domain-Policies, X-Powered-By, and X-XSS-Protection. This table can be copied into software such as Microsoft Excel and modified or sorted as necessary.
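The parsing itself is straightforward with the standard library. A simplified sketch of the approach (assuming Nmap's usual XML layout, with the http-headers script output stored in the script element's output attribute, and the headers in their X-prefixed wire form):

```python
import xml.etree.ElementTree as ET

# Security-related headers the report keeps, in lowercase wire form.
SECURITY_HEADERS = {
    "access-control-allow-origin", "content-security-policy", "server",
    "strict-transport-security", "x-content-type-options", "x-frame-options",
    "x-permitted-cross-domain-policies", "x-powered-by", "x-xss-protection",
}

def parse_headers(xml_path):
    """Return {'ip:port': {header: value}} for security-related headers."""
    results = {}
    for host in ET.parse(xml_path).getroot().iter("host"):
        ip = host.find("address").get("addr")
        for port in host.iter("port"):
            script = port.find("script")
            if script is None or script.get("id") != "http-headers":
                continue
            found = {}
            for line in (script.get("output") or "").splitlines():
                name, _, value = line.partition(":")
                if name.strip().lower() in SECURITY_HEADERS:
                    found[name.strip()] = value.strip()
            results["%s:%s" % (ip, port.get("portid"))] = found
    return results
```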

The reason behind creating this table is to get a clear view of the headers used in a large environment. With this report we can search for individual IPs and report on them or get a general feeling for the security posture of many servers.


Monday, January 27, 2014

SmeegeScrape: Text Scraper and Custom Word List Generator


Customize your security testing with SmeegeScrape! It's a simple python script that scrapes text from various sources, including local files and web pages, and turns the text into a custom word list. A custom word list has many uses, from web application testing to password cracking; having a specific set of words to use against a target can increase efficiency and effectiveness during a penetration test. I realize there are other text scrapers publicly available, however I feel this script is simple, efficient, and specific enough to warrant its own release. It can read almost any file python can open as cleartext, and I have also included support for file formats such as pdf, html, docx, and pptx.

Usage: SmeegeScrape.py {-f file | -d directory | -u web_url | -l url_list_file} [-o output_filename] [-s] [-i] [-min #] [-max #]

One of the following input types is required: (-f filename), (-d directory), (-u web_url), (-l url_list_file)

-h, --help show this help message and exit
-f LOCALFILE, --localFile LOCALFILE Specify a local file to scrape
-d DIRECTORY, --fileDirectory DIRECTORY Specify a directory to scrape the inside files
-u URL, --webUrl URL Specify a url to scrape page content (correct format: http(s)://www.example.com)
-l URL_LIST_FILE, --webList URL_LIST_FILE Specify a text file with a list of URLs to scrape (separated by newline)
-o FILENAME, --outputFile FILENAME Specify output filename (default: smeegescrape_out.txt)
-i, --integers Remove integers [0-9] from all output
-s, --specials Remove special characters from all output
-min # Specify the minimum length for all words in output
-max # Specify the maximum length for all words in output

Scraping a local file: SmeegeScrape.py -f Test-File.txt

This is a sample text file with different text.

This file could be many different filetypes including html, pdf, powerpoint, docx, etc.  Anything which can be read in as cleartext can be scraped.

I hope you enjoy SmeegeScrape, feel free to comment if you like it!

Each word is separated by a newline. The options -i and -s can be used to remove any integers or special characters found. Also, the -min and -max arguments can be used to specify desired word length.
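For illustration, the core filtering logic described above might look something like this (a reimplementation sketch, not the released script):

```python
import re

def scrape_words(text, no_integers=False, no_specials=False,
                 min_len=None, max_len=None):
    """Return a sorted list of unique words from text, filtered like the
    -i, -s, -min, and -max options described above."""
    words = set()
    for word in text.split():
        if no_specials:
            word = re.sub(r"[^a-zA-Z0-9]", "", word)  # strip specials (-s)
        if no_integers:
            word = re.sub(r"[0-9]", "", word)  # strip digits (-i)
        if not word:
            continue
        if min_len is not None and len(word) < min_len:
            continue
        if max_len is not None and len(word) > max_len:
            continue
        words.add(word)
    return sorted(words)
```

For example, running this over the sample file's text with -s, -i, and a minimum length would yield one filtered unique word per line once written out.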

Scraping a web page: SmeegeScrape.py -u <url> -si

To scrape web pages, the script uses the python urllib2 module. The format of the URL is checked via regex, and it must be in the correct format (e.g. http(s)://www.example.com).

Scraping multiple files from a directory: SmeegeScrape.py -d test\ -si -min 5 -max 12

The screen output shows each file which was scraped, the total number of unique words found based on the user’s desired options, and the output filename.

Scraping multiple URLs: SmeegeScrape.py -l weblist.txt -si -min 6 -max 10

The -l option takes in a list of web urls from a text file and scrapes each url. Each scraped URL is displayed on the screen as well as a total number of words scraped.

This weblist option is excellent to use with Burp Suite to scrape an entire site. To do this, proxy your web traffic through Burp and discover as much content on the target site as you can (spidering, manual discovery, dictionary attack on directories/files, etc.). After the discovery phase, right-click the target in the site map and select “Copy URLs in this host” from the drop-down list. In this instance, even for a small blog like mine, over 300 URLs were copied. Depending on the size of the site, the scraping could take a little while, so be patient!

Now just paste the URLs into a text file and run it as input with the -l option: SmeegeScrape.py -l SmeegeScrape-Burp-URLs.txt -si -min 6 -max 12

So, very easily, we just scraped an entire site for words with the specific attributes (length and character set) that we want.

As you can see, there are many different possibilities with this script. I tried to make it as accurate as possible; however, the script depends on modules such as nltk, docx, etc., which may not always work correctly. In situations where the script is unable to read a certain file format, I suggest converting the file to a more readable type or copying/pasting the text into a plain text file, which can always be scraped.

The custom word list dictionaries you create are up to your imagination, so have fun with it! This script could also be easily modified to extract phrases or sentences, which could be used for cracking passphrases. Here are a couple of examples I made:

Holy Bible, King James Version of 1611: SmeegeScrape.py -f HolyBibleDocx.docx -si -min 6 -max 12 -o HolyBible_scraped.txt
Shakespeare’s Romeo and Juliet: SmeegeScrape.py -u <url> -si -min 6 -max 12 -o romeo_juliet_scraped.txt
Feel free to share your scraped lists or ideas on useful content to scrape. Comments and suggestions welcome, enjoy!