Finding the Gap: How Curiosity and Creativity Drive Threat Detection
Introduction
In my last post I shared a straightforward process that I use to sharpen my detection engineering skills using threat intel reports as an inspiration and starting point for researching and detecting real-world attacker techniques. Since then, I’ve heard from several people within and outside of my organization who are interested in getting into detection engineering, and one who even landed his first detection engineer role (shout out to @Micahs0day)! Inspired by their interest and success, I wanted to delve a little deeper into how security practitioners can approach detection engineering while giving back to the open-source community.
In this article I’ll demonstrate another entry point into this discipline, Atomic Red Team, while hopefully keeping the tone light and accessible for everyone. I hope to emphasize that detection engineering is a living discipline, with opportunities for creativity and collaboration bounded only by your imagination. And it doesn’t require years of experience, a fancy title, or thousands of Twitter followers to jump in, get your hands dirty, and even make a difference. Anyone with some base knowledge, curiosity, and creativity can make meaningful contributions to the threat detection hive mind.
(The following is inspired by this Tweet from @TheEis4Extra aka Blue Team Thomas):
Ok, here are our objectives:
- Research an attacker technique and test an associated procedure in the lab using Atomic.
- Discover available Sigma rules designed to detect this activity and test them in the lab.
- Find the gap by modifying the procedure to circumvent the rule(s).
- Strengthen the existing rules or build new ones to catch the modified behavior.
Ready? Let’s jump in! 🪂
The Technique
“What’s in a name? That which we call a rose by any other name would smell just as sweet.”
That’s what Shakespeare wrote in Romeo and Juliet to demonstrate that Montagues and Capulets could get together, fall in love, and…whatever else happens at the end of that play. Right? Something like that? Shout out to my high school English teacher Chris Twombley for trying to get me to read and care about Shakespeare. 🤷♂️
But the point is, we put a lot of value in the names of things in cybersecurity. Names of commands, names of files: there are literally hundreds of detection rules that will trigger off the name of an executable, registry key, filename, or other artifact if it lands in a SIEM in the right combination.
Attacker Behavior Scenario
Consider the following scenario, which may help to explain the technique we’ll explore: suppose there is a well-known attacker procedure in which cmd.exe is used to execute an evil command called evilcommand:
C:\Temp> cmd.exe /c evilcommand
Yikes, amirite? 😱 Bone-chilling stuff. Thankfully the threat intel gunslingers in our community have zeroed in on this behavior, so now every SIEM vendor and their brother knows how to spot it. There’s even a Sigma rule out there that’ll detect it, which you can convert into whatever platform or query language you choose. But what happens if an attacker renames the binary, cmd.exe, and uses that renamed file to execute the same attack while evading detection?
This hypothetical scenario uses a made-up command, but it is a real technique with documented use by multiple threat actors, including Lazarus Group, a cybercrime organization with ties to the North Korean government. The technique is called Masquerading: Rename System Utilities, and it is a real way that attackers evade defenses and execute their malicious commands.
Running the Test
To test this technique, we’ll refer to an Atomic test (see the link below):
This test copies cmd.exe from its well-known home on the Windows file system to a temp folder, renaming it to lsass.exe in the process. Then it launches the renamed copy, giving a would-be attacker the ability to run commands with cmd that will appear in logs to come from lsass.exe. (Side note: apparently the use of the copy command is not logged by Sysmon or the Windows security event log???)
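If you just want the gist of what the test runs before we get to the lab, the commands boil down to something like the two lines below. This is my paraphrase of the test described above rather than a copy of the Atomic YAML, and I’m substituting whoami, the stand-in I use later on, for whatever command the test itself runs:
REM Copy cmd.exe to a temp folder under the name of a well-known system process
copy %SystemRoot%\System32\cmd.exe %SystemRoot%\Temp\lsass.exe
REM Launch the renamed copy; whoami stands in for the attacker's real command
%SystemRoot%\Temp\lsass.exe /c whoami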
To run the test, I will fire up my handy virtual detection lab, which I built using the Vagrant (VirtualBox) deployment option from https://detectionlab.network/. I highly recommend this project as an efficient way to build a detection engineering sandbox!
I’ll launch the VMs using the vagrant up command on my host machine and then wait for them to resume.
In no time at all I have a virtual Active Directory environment running, including a domain controller, an end-user workstation, a Windows Event Forwarding server, and an Ubuntu server running Splunk. I’ll run a quick test command to verify the logging is working:
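(If you’re following along, any harmless command will do; I just run something like whoami on the workstation and confirm the resulting events landed in Splunk. The search below is only a sanity check, and the index, source, and field names will depend on how your own lab is configured.)
index=* "whoami" earliest=-15m
| stats count by host, source, sourcetype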
Now I simply copy and paste the Atomic test commands and run them:
If you are following along, congratulations! 🥳 You may have just executed your first Atomic Red Team test. I love how simple, easy-to-digest, and flexible these tests are. And sure enough, in the screenshot above, we can see that, according to the logs for this system, the whoami command (which I use as a stand-in for the made-up “evilcommand” from our earlier scenario) appears to have been executed as a child process of lsass.exe, not CMD.
Detection Time!
As detection engineers, our job is to find attacker behavior before it lands our (or our customer’s) CEO in the headlines. To detect this behavior, I will scan the process creation rules in the Sigma rules repository to see what helpful rules are out in the community today. I see plenty of rules with “renamed” in the name:
The first one, which you can check out here, looks really promising!
It checks for processes created where there is a mismatch between the OriginalFileName property and the image (aka executable) file name. OriginalFileName is one of the strings in the VERSIONINFO resource embedded in a Windows binary; you can read more about it here if you’re inclined. For now, I’ll convert that Sigma rule to Splunk using sigma-cli, the Sigma command line interface. From my local clone of the rules repository, I run this command:
sigma convert -t splunk -p sysmon proc_creation_win_renamed_binary.yml -o %TEMP%/temp
Parsing Sigma rules [####################################] 100%
…which results in this beauty:
EventID=1 OriginalFileName IN ("Cmd.Exe", "CONHOST.EXE", "PowerShell.EXE", "pwsh.dll", "powershell_ise.EXE", "psexec.exe", "psexec.c", "cscript.exe", "wscript.exe", "MSHTA.EXE", "REGSVR32.EXE", "wmic.exe", "CertUtil.exe", "RUNDLL32.EXE", "CMSTP.EXE", "msiexec.exe", "7z.exe", "WinRAR.exe", "wevtutil.exe", "net.exe", "net1.exe", "netsh.exe", "InstallUtil.exe") NOT (Image IN ("*\\cmd.exe", "*\\conhost.exe", "*\\powershell.exe", "*\\pwsh.exe", "*\\powershell_ise.exe", "*\\psexec.exe", "*\\psexec64.exe", "*\\cscript.exe", "*\\wscript.exe", "*\\mshta.exe", "*\\regsvr32.exe", "*\\WMIC.exe", "*\\certutil.exe", "*\\rundll32.exe", "*\\cmstp.exe", "*\\msiexec.exe", "*\\7z.exe", "*\\WinRAR.exe", "*\\wevtutil.exe", "*\\net.exe", "*\\net1.exe", "*\\netsh.exe", "*\\InstallUtil.exe"))
Ok… so not the prettiest-looking query I’ve ever seen, but does it work? I’ll make some minor modifications (along the lines of the sketch below), drop it into Splunk and…
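For the record, the kind of modification I mean doesn’t touch the logic at all: just scope the generated search to wherever your Sysmon data lives and tack a table on the end so the results are easy to read. The index name below is hypothetical and will differ by environment, and I’ve trimmed the lists to two entries apiece for readability:
index=sysmon EventID=1 OriginalFileName IN ("Cmd.Exe", "PowerShell.EXE") NOT (Image IN ("*\\cmd.exe", "*\\powershell.exe"))
| table _time, host, Image, OriginalFileName, ParentImage, CommandLine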
Success! The search query generated from the Sigma rule, which could easily be turned into a saved search/alert, detected the activity from the Atomic test.
So far, we’ve:
- Researched a real attacker technique.
- Tested it in our lab using Atomic Red Team.
- Performed some cursory research on open-source threat detection resources to detect the behavior.
- Tested the open-source, off-the-shelf detection logic in our lab, and found that our Atomic test triggered an alert.
What more is there to do? Attackers don’t quit easily, so neither do we. We need to find the gaps in our detection capability.
Finding the Gap
No rule is ever perfect, and attackers will always try to find ways to get around our detections. That’s why we need more creative, curious people in threat detection. Our next step is to think about how we might modify the procedure in the Atomic test to be more sneaky while still achieving CMD-driven command execution that won’t show up in a SOC analyst’s queue.
Let’s look carefully at that Sigma rule again. I notice it uses two Sysmon Event ID 1 fields, “OriginalFileName” and “Image.” The rule matches on a list of a couple dozen well-known OriginalFileName values, then filters out any event whose Image (the executable path) ends with one of the corresponding expected file names.
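To make that logic easier to reason about, here’s the shape of the detection in stripped-down Sigma form. This is my own trimmed paraphrase of the rule’s structure, not the rule itself; the real rule carries the full lists of values you saw in the converted query above:
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        # a couple dozen well-known OriginalFileName values (trimmed to two here)
        OriginalFileName:
            - 'Cmd.Exe'
            - 'PowerShell.EXE'
    filter:
        # the expected on-disk names for that same set of binaries
        Image|endswith:
            - '\cmd.exe'
            - '\powershell.exe'
    condition: selection and not filter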
Think about some ways to circumvent this logic. 🤔 First, we could rename CMD to a somewhat benign value that’s also on the list, like conhost.exe. Second, we could research a way to modify the OriginalFileName value of the renamed binary to match the new executable name, or something else entirely. I’m interested in this second possibility because OriginalFileName is a common way to detect renamed binaries. It turns out there’s a tool called rcedit to do exactly that! It popped up as the second search result after Googling “change original file name windows executable.”
Back in the lab, I downloaded rcedit and reviewed the documentation on GitHub. I took a look at the file details of my renamed executable, noting the Original filename attribute.
I then executed the following command from the Downloads folder on my workstation, trying to modify this value:
rcedit-x64.exe C:\Windows\Temp\lsass.exe --set-version-string OriginalFilename "NothingToSeeHere.exe"
I check the file details again and it worked! You can see I’ve also modified the ProductName property for good measure:
I re-run the Atomic test using this modified binary and check whether the detection logic from the Sigma rule yields a match. No match!
And there it is: a gap in an existing detection, which we found using only minimal time and effort. While this is a somewhat simplistic example, it illustrates how relatively easy (and interesting) it can be to dive into a real-world attacker technique, isolate a specific procedure, and test that procedure from the standpoint of a defender, inspired by thinking like an attacker. The fact that an easy Google search quickly yields an effective way to neutralize an established detection just shows how much opportunity there is for community involvement, testing, and improvement in this field. Speaking of which…
Strengthening the Detection
So, how can we protect our organization and customers from this technique? I’d like to either improve the rule we found (which is already really good), or create some new rules to detect our evasion. Let’s think of the general types of evidence that this behavior leaves:
- Downloading rcedit-x64.exe from the web. Windows/Microsoft Defender does not identify this tool as a virus, probably because it is a legitimate tool with plenty of benign uses.
- Running rcedit-x64.exe.
- Renaming an important file (cmd.exe) to something else.
- Others? (let me know in the comments!)
Of these, I like option two the best. We’ll cover that next and then discuss option three a bit.
Process Creation
A detection for the use of rcedit seems pretty straightforward, so let’s start with that. I’ll look for rcedit being used to alter resource strings for any of the internal executable metadata properties that we might rely on to judge a binary’s legitimacy.
title: Suspicious Use of Rcedit Utility to Alter Executable Metadata
id: 0c92f2e6-f08f-4b73-9216-ecb0ca634689
status: experimental
description: Detects the suspicious use of rcedit to potentially alter executable PE metadata properties, which could conceal efforts to rename system utilities for defense evasion.
references:
    - https://security.stackexchange.com/questions/210843/is-it-possible-to-change-original-filename-of-an-exe
    - https://www.virustotal.com/gui/file/02e8e8c5d430d8b768980f517b62d7792d690982b9ba0f7e04163cbc1a6e7915
    - https://github.com/electron/rcedit
author: Micah Babinski
date: 2022/12/11
tags:
    - attack.defense_evasion
    - attack.t1036.003
    - attack.t1036
    - attack.t1027.005
    - attack.t1027
logsource:
    category: process_creation
    product: windows
detection:
    selection1:
        Image|endswith:
            - '\rcedit-x64.exe'
            - '\rcedit-x86.exe'
        CommandLine|contains:
            - '--set-version-string'
            - '--set-resource-string'
    selection2:
        CommandLine|contains:
            - 'OriginalFileName'
            - 'CompanyName'
            - 'FileDescription'
            - 'ProductName'
            - 'ProductVersion'
            - 'LegalCopyright'
    condition: selection1 and selection2
falsepositives:
    - Unknown
level: medium
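To try the new rule out in the lab, I can push it through the same sigma-cli conversion we used earlier. The filename here is simply whatever I saved the rule as locally, so adjust to taste:
sigma convert -t splunk -p sysmon proc_creation_win_susp_rcedit_metadata_change.yml -o %TEMP%/rcedit_rule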
I think this is an effective rule, and after searching the repository I don’t see any existing rules that cover this particular technique. I’ve also read about attackers using the same masquerading technique to rename an executable to a name without “.exe” as a file extension. I was curious whether this would work as well, and it did:
C:\Users\vagrant>copy %SystemRoot%\System32\cmd.exe %SystemRoot%\Temp\test.txt
1 file(s) copied.
C:\Users\vagrant>%SystemRoot%\Temp\test.txt /c whoami
win10\vagrant
Here is a very simple version of a rule that would detect this behavior (a far more comprehensive version is available here):
title: Process Creation without .exe File Extension
id: 02dc3892-2fd0-4dd5-b2d7-62052a837abe
status: experimental
description: Detects process creations where the Image does not have a .exe file extension.
references:
    - https://vblocalhost.com/uploads/VB2021-Kayal-etal.pdf
author: Micah Babinski
date: 2022/12/11
tags:
    - attack.defense_evasion
    - attack.t1036.003
    - attack.t1036
    - attack.s1020
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '.exe'
    condition: not selection
falsepositives:
    - Unknown
level: high
File Renaming
In my opinion, detecting renamed files is tricky. The off-the-shelf logging options for file renames leave a lot to be desired. And since what we are essentially doing here is creating a file (not renaming one), we’ll need to look at file creation events (like Sysmon Event ID 11 or Windows security event 4663). This gets complicated because, while these events can show us that a file called lsass.exe was created, nothing in the logs shows us that the file was copied from a legitimate version of cmd.exe.
Fortunately, there’s a rule for that already, called Files With System Process Name In Unsuspected Locations. This rule detects creation of Windows system executables in unexpected folders. Since our Atomic test used lsass.exe in an unusual folder (C:\Windows\Temp), running this rule in the Splunk instance does match the activity:
To build on these concepts, we could consider additional rules that match on related activity. Two possibilities might be:
- Use of command or scripting interpreters to create EXE files.
- Creation of EXE files in temp directories (temp or tmp). A rough sketch of this one follows below.
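As an illustration of that second idea, a rule built on file-creation telemetry (the file_event log source, which maps to Sysmon Event ID 11) might look something like the sketch below. To be clear, this is a starting point I’m sketching here rather than a tested or tuned rule, and the title, paths, and severity are just placeholders; legitimate installers and updaters write plenty of EXEs to temp folders, so it would need exclusions before going anywhere near production:
title: EXE File Created in Temp Directory
status: experimental
description: Detects creation of executable files in common temp directories, which may indicate staging of renamed or dropped binaries.
logsource:
    category: file_event
    product: windows
detection:
    selection:
        TargetFilename|contains:
            - '\Windows\Temp\'
            - '\AppData\Local\Temp\'
        TargetFilename|endswith: '.exe'
    condition: selection
falsepositives:
    - Software installers and updaters writing executables to temp locations
level: low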
Conclusion
I hope that this article has demonstrated that detection engineering is for anyone curious enough to research, test, and think creatively about how to circumvent existing detection methods. It’s a messy process — the workflow I illustrated above yielded some valuable detection rules and useful concepts, but it didn’t close any major doors to a determined attacker. We can create rules to detect documented behavior, but if an attacker frequently switches up their techniques in crafty ways, they will evade detection.
However, threat actors (and criminals more generally) often act in patterned, predictable ways. While writing this, I realized (gratefully) that I usually could not actually improve upon the existing detection rules in the Sigma rules repository, at least at my current skill level. That doesn’t mean that I, and my fellow newcomers to the threat detection discipline, shouldn’t try.
As a final thought, I will share that for me, this process is really fun. It’s very exciting to see how many interesting detection challenges are out there, and interrelated projects like Sigma, MITRE ATT&CK, DetectionLab, and Atomic Red Team make those challenges so easy to discover, test, and explore.
Thanks for sticking with me through a lengthy post, and as always, happy analyzing! 🧐