Thursday, October 30, 2014

Detecting and Exploiting the HTTP PUT Method

I recently found a web server which allowed the HTTP PUT method. This was detected and proven vulnerable by a Nessus vulnerability scan, which actually uploaded its own page at /savpgr1.html with the text “A quick brown fox jumps over the lazy dog.” My first thought was to see if I could upload a shell (php, asp, jsp), which you can make in Metasploit or find online. Unfortunately this didn't work, as none of them were interpreted by the server. Another possible attack scenario would have been a phishing attack where we create our own page within the web application.

During this test I didn't have much time, and there wasn't a lot of information online about the HTTP PUT method from a penetration testing perspective. This blog post goes over various ways to detect whether a web server accepts the PUT method, how to successfully complete a PUT request, and how to set up a test web server which accepts PUT.

Detecting the HTTP PUT Method

  • OPTIONS method via Netcat, Burp, etc. (end the request with a blank line; the allowed methods are returned in the Allow response header):
    nc www.victim.com 80
    OPTIONS / HTTP/1.1
    Host: www.victim.com
      
  • Nmap: The http-methods.nse script checks each HTTP method and outputs the response. This can be really nice for quickly checking multiple servers/ports at a time. Example usage would be: nmap --script=http-methods.nse -p80,443 <target>.
  • Nessus: One of the ways Nessus reports on detected HTTP methods is through plugin 43111 "HTTP Methods Allowed (per directory)". The plugin file used is "web_directory_options.nasl" which can usually be found in /opt/nessus/lib/nessus/plugins.
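If you want to script this check yourself, below is a minimal sketch using Python 2's httplib (matching the urllib2-era scripts mentioned later in this post); www.victim.com is just the placeholder target from the examples above:

    #!/usr/bin/env python
    # Minimal sketch: read the Allow header returned by an OPTIONS request.
    import httplib

    def allowed_methods(host, port=80, path='/'):
        conn = httplib.HTTPConnection(host, port, timeout=10)
        conn.request('OPTIONS', path)
        resp = conn.getresponse()
        allow = resp.getheader('allow')  # e.g. 'GET, HEAD, POST, PUT, OPTIONS'
        conn.close()
        return allow

    if __name__ == '__main__':
        print allowed_methods('www.victim.com')

If PUT shows up in the Allow header it is worth testing, but keep in mind some servers advertise methods they don't actually honor (and vice versa), so always verify with a real request.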

Make a PUT Request / Upload Data

  • Request with Netcat, Burp, etc. (the Content-Length header must match the exact byte count of the body; the 52 below assumes LF line endings and no trailing newline):
    nc www.victim.com 80
    PUT /hello.htm HTTP/1.1
    User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
    Host: www.victim.com
    Accept-Language: en-us
    Connection: Keep-Alive
    Content-type: text/html
    Content-Length: 52
    
    <html>
    <body>
    <h1>Hello, World!</h1>
    </body>
    </html>
      
  • cURL: It is worth noting these commands have varying levels of success depending on the server:
    curl -i -X PUT -H "Content-Type: application/xml; charset=utf-8" -d @"/tmp/some-file.xml" http://www.victim.com/newpage
    
    curl -X PUT -d "text or data to put" http://www.victim.com/destination_page
    
    curl -i -H "Accept: application/json" -X PUT -d "text or data to put" http://victim.com/new_page
      
  • Quickput.py: I found this old Python script which PUTs a local file onto a target web server. The script takes two arguments: a local file and the destination URL. There are optional arguments for authentication. The Content-Length of the local file is automatically calculated and set in the PUT request. I had pretty good success with this script, which can be downloaded here. (A simplified sketch of the same idea appears after this list.)
  • Nmap: The http-put.nse script uploads a local file to a web server via PUT request. I have not personally used it but it might be a good option. Example usage would be: nmap -p 80 --script http-put --script-args http-put.url='/uploads/rootme.php',http-put.file='/tmp/rootme.php'.
  • Nessus: Nessus has an interesting plugin which actually makes the PUT request with its own data: plugin 10498 "Web Server HTTP Dangerous Method Detection." I have not tried this, but am fairly certain it would work: edit "http_methods.nasl" with your own data and run the Nessus scan with that plugin enabled. A quick update to the data and content length should be all that's needed.
  • Metasploit: Metasploit also gives you the ability to PUT a file with auxiliary/scanner/http/http_put. I haven't tried this, but it seems straightforward.
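For reference, here is a minimal Python 2 sketch of the same idea as Quickput.py (my simplified illustration, not the script itself); the host, remote path, and local file are placeholders:

    #!/usr/bin/env python
    # Minimal sketch of a scripted PUT upload.
    import httplib

    def put_file(host, remote_path, local_file, port=80):
        body = open(local_file, 'rb').read()
        conn = httplib.HTTPConnection(host, port, timeout=10)
        # httplib sets Content-Length from the body automatically
        conn.request('PUT', remote_path, body, {'Content-Type': 'text/html'})
        resp = conn.getresponse()
        print resp.status, resp.reason  # expect 201 Created (or 200/204)
        conn.close()

    if __name__ == '__main__':
        put_file('www.victim.com', '/hello.htm', '/tmp/hello.html')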

Example PUT Python Web Server

To test some of the techniques discussed, I originally tried to configure an Apache server but had issues getting it to accept my PUT requests, so I found and modified a couple of Python scripts to set up a quick web server for testing. It works with the Quickput.py script I mentioned earlier. The Python PUT server script can be downloaded here, but it may be sensitive to how requests are formatted and may not accept requests made with all of the methods mentioned above.
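Below is a minimal sketch of what such a server can look like using Python 2's BaseHTTPServer, the same general approach as the gist linked in the resources (my simplified version, not the exact script):

    #!/usr/bin/env python
    # Minimal PUT-accepting test server (Python 2).
    from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

    class PutHandler(BaseHTTPRequestHandler):
        def do_PUT(self):
            length = int(self.headers['Content-Length'])
            data = self.rfile.read(length)
            # Save the body to a local file named after the request path
            name = self.path.lstrip('/') or 'index.html'
            with open(name, 'wb') as f:
                f.write(data)
            self.send_response(201)
            self.end_headers()

    if __name__ == '__main__':
        HTTPServer(('0.0.0.0', 8080), PutHandler).serve_forever()

Note this writes whatever filename the client supplies, so only run it in a throwaway test environment.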

Hopefully this is enough to get started with making PUT requests. It's a really interesting attack vector, and application developers and owners should not allow it in most circumstances. If you have any other ideas or suggestions based on this post, please comment!

Resources:

https://www.owasp.org/index.php/Testing_for_HTTP_Methods_and_XST
http://www.tutorialspoint.com/http/http_methods.htm
https://gist.githubusercontent.com/codification/1393204/raw/3fd4a4...429f/server.py
http://www.acmesystems.it/python_httpserver

Tuesday, June 3, 2014

HTTP Security Headers Nmap Parser

Click here to download source code

Recently there have been some reports on how often major sites, such as the Alexa top sites, use security-related HTTP headers. Surprisingly (or maybe not), most are NOT taking full advantage of them. Among many system owners there seems to be a lack of awareness of HTTP headers, especially those related to security. In many architectures these headers can be configured without changing the application, so why not take a look at them for your own sites? Implementing (or removing) some of these headers can be extremely beneficial. It is worth noting that some headers are only supported by specific browsers and only offer a certain level of protection, so they should not be relied on as a sole security control.

What’s one of the first things we do when we start testing the security posture of a network? Discovery with Nmap. Nmap has a built in NSE script ‘http-headers’ which will return the headers via a HEAD request of a web server. Manually looking through a large Nmap output file to see which headers are being used can be really difficult, so I wrote a small parser in python which takes in the Nmap .xml output file and generates an .html report with only security-related header information.

Steps:

  1. Run Nmap with http-headers script and xml output:
    nmap --script=http-headers <target> -oX output_file.xml
  2. Run Security-Headers-Nmap-Parser.py with the .xml Nmap output file:
    python Security-Headers-Nmap-Parser.py -f output_file.xml

Usage: Security-Headers-Nmap-Parser.py { -f file } [-o output_filename]
There is one required argument which is the .xml Nmap output file. The user can also specify the output filename (default: Security-Headers-Report.html)

After running the script we have a nicely formatted table which contains every asset (ip:port) from the Nmap scan. Each asset displays information about nine different security-related headers: Access-Control-Allow-Origin, Content-Security-Policy, Server, Strict-Transport-Security, X-Content-Type-Options, X-Frame-Options, X-Permitted-Cross-Domain-Policies, X-Powered-By, and X-XSS-Protection. This table can be copied into software such as Microsoft Excel and modified or sorted as necessary.
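For anyone curious how the parsing works, below is a minimal sketch of the approach (not the full report generator), assuming Nmap's standard XML layout where each <port> element carries a <script id="http-headers"> element with the raw header lines in its output attribute:

    #!/usr/bin/env python
    # Minimal sketch: list which security-related headers each asset returns.
    import sys
    import xml.etree.ElementTree as ET

    SECURITY_HEADERS = [
        'access-control-allow-origin', 'content-security-policy', 'server',
        'strict-transport-security', 'x-content-type-options',
        'x-frame-options', 'x-permitted-cross-domain-policies',
        'x-powered-by', 'x-xss-protection',
    ]

    def parse(xml_file):
        root = ET.parse(xml_file).getroot()
        for host in root.findall('host'):
            addr = host.find('address').get('addr')
            for port in host.findall('.//port'):
                script = port.find("script[@id='http-headers']")
                if script is None:
                    continue
                raw = script.get('output', '').lower()
                found = [h for h in SECURITY_HEADERS if h + ':' in raw]
                print '%s:%s -> %s' % (addr, port.get('portid'),
                                       ', '.join(found) or 'none')

    if __name__ == '__main__':
        parse(sys.argv[1])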

The reason behind creating this table is to get a clear view of the headers used in a large environment. With this report we can search for individual IPs and report on them or get a general feeling for the security posture of many servers.

Resources:
https://securityheaders.com
https://www.owasp.org

Monday, January 27, 2014

SmeegeScrape: Text Scraper and Custom Word List Generator

Click Here to Download Source Code

Customize your security testing with SmeegeScrape.py! It's a simple Python script to scrape text from various sources, including local files and web pages, and turn the text into a custom word list. A customized word list has many uses, from web application testing to password cracking; having a specific set of words to use against a target can increase efficiency and effectiveness during a penetration test. I realize there are other text scrapers publicly available; however, I feel this script is simple, efficient, and specific enough to warrant its own release. It can read almost any file which Python can open as cleartext, and I have also included support for file formats such as pdf, html, docx, and pptx.

Usage: SmeegeScrape.py {-f file | -d directory | -u web_url | -l url_list_file} [-o output_filename] [-s] [-i] [-min #] [-max #]

One of the following input types is required:(-f filename), (-d directory), (-u web_url), (-l url_list_file)

-h, --help show this help message and exit
-f LOCALFILE, --localFile LOCALFILE Specify a local file to scrape
-d DIRECTORY, --fileDirectory DIRECTORY Specify a directory to scrape the inside files
-u URL, --webUrl URL Specify a url to scrape page content (correct format: http(s)://smeegesec.com)
-l URL_LIST_FILE, --webList URL_LIST_FILE Specify a text file with a list of URLs to scrape (separated by newline)
-o FILENAME, --outputFile FILENAME Specify output filename (default: smeegescrape_out.txt)
-i, --integers Remove integers [0-9] from all output
-s, --specials Remove special characters from all output
-min # Specify the minimum length for all words in output
-max # Specify the maximum length for all words in output

Scraping a local file: SmeegeScrape.py -f Test-File.txt

Test-File.txt
This is a sample text file with different text.

This file could be many different filetypes including html, pdf, powerpoint, docx, etc.  Anything which can be read in as cleartext can be scraped.

I hope you enjoy SmeegeScrape, feel free to comment if you like it!
  
Output:
enjoy
comment
powerpoint,
feel
text
is
sample
as
including
file
in
if
different
pdf,
to
read
which
you
SmeegeScrape,
hope
be
Anything
This
html,
cleartext
text.
free
it!
with
a
I
like
filetypes
could
scraped.
can
many
docx,
etc.
  

Each word is separated by a newline. The options -i and -s can be used to remove any integers or special characters found. Also, the -min and -max arguments can be used to specify desired word length.
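For the curious, below is a minimal sketch of the filtering logic behind these options (a simplified illustration, not the script's exact code):

    #!/usr/bin/env python
    # Minimal sketch of the -i, -s, -min, and -max filters.
    import re

    def filter_words(text, no_ints=False, no_specials=False,
                     min_len=1, max_len=30):
        words = set(text.split())
        if no_ints:       # -i: strip digits
            words = set(re.sub(r'[0-9]', '', w) for w in words)
        if no_specials:   # -s: strip anything non-alphanumeric
            words = set(re.sub(r'[^a-zA-Z0-9]', '', w) for w in words)
        return sorted(w for w in words if min_len <= len(w) <= max_len)

    sample = 'This is a sample text file with different text.'
    print '\n'.join(filter_words(sample, no_specials=True, min_len=2))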

Scraping a web page: SmeegeScrape.py -u http://www.smeegesec.com -si

To scrape web pages we use the Python urllib2 module. The URL format is checked via regex and must be correct (e.g. http(s)://smeegesec.com).
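A minimal sketch of that check and fetch (the real script's regex and handling may differ):

    #!/usr/bin/env python
    # Minimal sketch: validate the URL, fetch the page, strip HTML tags.
    import re
    import urllib2

    URL_RE = re.compile(r'^https?://\S+$')

    def fetch_text(url):
        if not URL_RE.match(url):
            raise ValueError('expected format: http(s)://smeegesec.com')
        html = urllib2.urlopen(url, timeout=10).read()
        return re.sub(r'<[^>]+>', ' ', html)  # crude tag stripping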

Scraping multiple files from a directory: SmeegeScrape.py -d test\ -si -min 5 -max 12

The screen output shows each file which was scraped, the total number of unique words found based on the user’s desired options, and the output filename.

Scraping multiple URLs: SmeegeScrape.py -l weblist.txt -si -min 6 -max 10

The -l option takes in a list of web urls from a text file and scrapes each url. Each scraped URL is displayed on the screen as well as a total number of words scraped.

This weblist option is excellent to use with Burp Suite to scrape an entire site. To do this, proxy your web traffic through Burp and discover as much content on the target site as you can (spidering, manual discovery, dictionary attack on directories/files, etc.). After the discovery phase, right-click on the target in the site map and select “Copy URLs in this host” from the drop-down list. In this instance, even for a small blog like mine, over 300 URLs were copied. Depending on the size of the site, the scraping could take a little while, so be patient!

Now just paste the URLs into a text file and run that as input with the -l option.

SmeegeScrape.py -l SmeegeScrape-Burp-URLs.txt -si -min 6 -max 12:

So very easily we just scraped an entire site for words with specific attributes (length and character set) that we want.

As you can see, there are many different possibilities with this script. I tried to make it as accurate as possible; however, the script sometimes depends on modules such as nltk and docx, which may not always work correctly. In situations where the script is unable to read a certain file format, I would suggest converting it to a more readable file type, or copying and pasting the text into a text file, which can always be scraped.

The custom word list dictionaries you create are up to your imagination, so have fun with it! This script could also easily be modified to extract phrases or sentences, which could be useful when cracking passphrases. Here are a couple of examples I made:

Holy Bible King James Version of 1611: SmeegeScrape.py -f HolyBibleDocx.docx -si -min 6 -max 12 -o HolyBible_scraped.txt
HolyBible_scraped.txt sample:
Testament
Esther
Obadiah
parents
strife
fearful
passage
deathmark
continuance
children
nought
remove
traffic
Malachi
Haggai
  
Shakespeare’s Romeo and Juliet: SmeegeScrape.py -u http://shakespeare.mit.edu/romeo_juliet/full.html -si -min 6 -max 12 -o romeo_juliet_scraped.txt
romeo_juliet_scraped.txt sample:
Juliet
Entire
Shakespeare
heartless
Benvolio
manage
coward
several
houses
Citizen
partisans
Capulets
CAPULET
crutch
households
  
Feel free to share your scraped lists or ideas on useful content to scrape. Comments and suggestions welcome, enjoy!

Wednesday, November 6, 2013

HashTag: Password Hash Identification

Click here to download source code or access it online at OnlineHashCrack

Interested in password cracking or cryptography? Check this out. HashTag.py is a tool written in python which parses and identifies various password hashes based on their type. HashTag was inspired by attending PasswordsCon 13 in Las Vegas, KoreLogic’s ‘Crack Me If You Can’ competition at Defcon, and the research of iphelix and his toolkit PACK (password analysis and cracking kit). HashTag supports the identification of over 250 hash types along with matching them to over 110 hashcat modes. HashTag is able to identify a single hash, parse a single file and identify the hashes within it, or traverse a root directory and all subdirectories for potential hash files and identify any hashes found.

One of the biggest aspects of this tool is the identification of password hashes. The main attributes I used to distinguish between hash types are character set (hexadecimal, alphanumeric, etc.), hash length, hash format (e.g. a 32-character hash followed by a colon and a salt), and any specific substrings (e.g. '$1$'). A lot of password hash strings can't be identified as one specific hash type based on these attributes alone. For example, MD5 and NTLM hashes are both 32-character hexadecimal strings. In these cases I make an exhaustive list of possible types and have the tool output reflect that. During development I created an Excel spreadsheet which contains much of the hash information, which can be found here or here.
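To illustrate the approach, below is a minimal sketch with just a few example rules (the real tool covers over 250 types and maps them to more than 110 hashcat modes):

    #!/usr/bin/env python
    # Minimal sketch of attribute-based hash identification.
    import re

    def identify(h):
        candidates = []
        if h.startswith('$1$'):                      # substring marker
            candidates.append('MD5 Crypt')
        if re.match(r'^[a-fA-F0-9]{32}$', h):        # 32-char hex: ambiguous
            candidates += ['MD5', 'NTLM']
        if re.match(r'^[a-fA-F0-9]{40}$', h):        # 40-char hex
            candidates.append('SHA-1')
        if re.match(r'^[a-fA-F0-9]{32}:.+$', h):     # hash:salt format
            candidates.append('MD5 with salt')
        return candidates or ['unknown']

    print identify('3b1015ccf38fc2a32c18674c166fa447')  # ['MD5', 'NTLM']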

Usage: HashTag.py {-sh hash |-f file |-d directory} [-o output_filename] [-hc] [-n]

Note: When identifying a single hash on *nix operating systems remember to use single quotes to prevent interpolation. (e.g. python HashTag.py -sh '$1$abc$12345')

-h, --help show this help message and exit
-sh SINGLEHASH, --singleHash SINGLEHASH Identify a single hash
-f FILE, --file FILE Parse a single file for hashes and identify them
-d DIRECTORY, --directory DIRECTORY Parse, identify, and categorize hashes within a directory and all subdirectories
-o OUTPUT, --output OUTPUT Filename to output full list of all identified hashes
    --file default filename: HashTag/HashTag_Output_File.txt
    --directory default filename: HashTag/HashTag_Hash_File.txt
-hc, --hashcatOutput --file: Output a file per hash type found, if a corresponding hashcat mode exists
    --directory: Appends the hashcat mode to the end of each separate file
-n, --notFound --file: Include unidentifiable hashes in the output file. Good for tool debugging (is it identifying properly?)

Identify a single hash (-sh):

HashTag.py -sh $1$MtCReiOj$zvOdxVzPtrQ.PXNW3hTHI0


HashTag.py -sh 7026360f1826f8bc


HashTag.py -sh 3b1015ccf38fc2a32c18674c166fa447


Parsing and identifying hashes from a file (-f):

HashTag.py -f testdir\street-hashes.10.txt -hc

Here is the output file. Each identified hash outputs the hash, character length, hashcat modes (if found), and possible hash types:
Using the -hc/--hashcat argument we get a file for each hash type when a corresponding hashcat mode is found. This makes the process of cracking hashes with hashcat much easier, as you immediately have the mode and an input file of hashes:
Output from a file with many different hash types (the filenames are hashcat modes and inside are all hashes of that type):

Traversing Directories and Identifying Hashes (-d):

HashTag.py -d ./testdir -hc

The output consists of three main things:

  • Folders containing copies of potentially password protected files. This makes it easy to group files based on extension and attempt to crack them.
  • HashTag default files - A listing of all hashes, password protected files the tool doesn’t recognize, and hashes the tool can’t identify (good for tool debugging).
  • Files for each identified hash type - each file contains a list of hashes. The -hc/--hashcat argument will append the hashcat mode (if found) to the filename.

Resources: Quite a bit of research went into the differences between password hash types. During this research I found a script called Hash Identifier, which was actually included in one of the BackTrack versions. After looking it over, I feel my tool has a lot more functionality, efficiency, and accuracy. My other research ranged from finding different hash examples to generating my own hashes via the passlib module. I would like to give credit to the following resources, which all had some impact on the creation of this tool.

http://wiki.insidepro.com/index.php/Main_Page
https://hashcat.net/wiki/
http://openwall.info/wiki/john/
http://pythonhosted.org/passlib/index.html

As always, if you see any coding errors, false assumptions (hash identification), or have constructive criticism please contact me. Hope you like it!