CLI Forensics: What You Can Do With Terminal Tools

Digital forensics is typically associated with specialized GUI applications like Autopsy, FTK Imager, or X-Ways Forensics. These tools have their place, but much of the underlying work can be reproduced in the terminal. Command-line tools offer portability, scriptability, and the ability to work on remote systems or headless servers where GUIs aren't available.

This guide covers what you can accomplish with CLI forensics tools, organized by investigation type. If you already work in the terminal for development or security work, you likely have some of these tools installed.

Platform note: Examples assume a Unix-like environment. Some commands are platform-specific. fs_usage is macOS-only, interface names like en0 and lo0 are macOS conventions (Linux uses eth0, lo), and md5 vs md5sum differs by OS. Platform-specific commands are noted inline where relevant.

Minimal Toolkit Setup

# macOS (Homebrew)
brew install sleuthkit exiftool binwalk wireshark

# Linux (Debian/Ubuntu)
sudo apt install sleuthkit exiftool binwalk tshark

Why CLI for Forensics?

Portability: Terminal tools work over SSH, don't require display servers, and behave consistently across systems. Your workflow transfers directly from local investigation to remote incident response.

Automation: Every CLI command can be scripted. Repetitive analysis tasks (log parsing, timeline generation, hash verification) become one-liners or shell scripts.

Resource efficiency: No GUI overhead means tools run faster and work on resource-constrained systems. Live triage on a machine under investigation also carries a lighter footprint than GUI tooling, though any on-box activity still alters system state.

Integration: CLI tools compose naturally. Pipe tshark output to grep, feed Sleuth Kit results to awk for processing, chain exiftool with hash verification in a single pipeline.
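As a concrete sketch of that composition (paths and file names here are illustrative), a single find pipeline yields a sorted, diffable evidence manifest:

```shell
# Build a sorted hash manifest of an evidence directory in one pipeline.
# /tmp/evidence_demo stands in for a real evidence mount.
mkdir -p /tmp/evidence_demo
printf 'alpha\n' > /tmp/evidence_demo/a.txt
printf 'beta\n'  > /tmp/evidence_demo/b.txt

# Hash every file (sha256sum on Linux; shasum -a 256 on macOS),
# then sort by path so manifests diff cleanly between sessions
find /tmp/evidence_demo -type f -exec sha256sum {} \; \
  | sort -k2 > /tmp/manifest.txt
cat /tmp/manifest.txt
```

Re-running the same pipeline later and diffing the two manifests flags any file that changed between sessions.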

The GUI tools you see in forensics courses are often wrappers around these same CLI utilities, designed for examiner workflow convenience rather than technical capability.

Forensic Rigor: Before You Start

Forensic correctness matters whether you're responding to an incident or preparing evidence for legal proceedings. A few fundamentals:

Work on copies, not originals. Always image the source drive and analyze the copy. Mounting a live filesystem for analysis can alter timestamps and other metadata.

Use write blockers. Hardware or software write blockers prevent accidental modification of evidence during acquisition. If you must mount a filesystem, be aware that even read-only mounts can modify metadata on some filesystems due to journal replay. Prefer working from disk images with forensic tools rather than mounting the original device.

# Read-only loopback mount of a disk image (safer than mounting the raw device)
sudo mount -o ro,noexec,loop disk.dd /mnt/evidence

Loopback mounts are safer than mounting raw devices, but may still trigger filesystem-level changes on some kernels and filesystem types. Prefer analysis with forensic tools like Sleuth Kit when possible.

Hash on acquisition, verify before analysis. Generate a hash immediately after imaging and verify it before every analysis session. Any mismatch means the evidence has changed.

# Hash immediately after acquisition
shasum -a 256 disk.dd > disk.dd.sha256

# Verify before each analysis session
shasum -a 256 -c disk.dd.sha256

Chain of custody. Document who handled evidence, when, and what was done to it. This matters for legal admissibility and for reconstructing your own investigation later.
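One lightweight way to keep that record is to log each action from the shell as you take it. This is a minimal sketch; the log_custody helper and log path are invented for illustration, not a standard tool:

```shell
# Append a timestamped, attributed note to a custody log for each action.
: > /tmp/custody.log   # start a fresh log for this session
log_custody() {
  printf '%s\t%s\t%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(whoami)" "$1" \
    >> /tmp/custody.log
}

log_custody "Imaged /dev/sda to disk.dd with dcfldd"
log_custody "Verified disk.dd.sha256 before analysis session"
cat /tmp/custody.log
```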

Consider dcfldd or ddrescue over plain dd. Both offer built-in hashing and better error handling for damaged media:

# dcfldd: hashes while imaging (its hash log format differs from shasum -c input, so compare the hash values directly)
dcfldd if=/dev/sda of=disk.dd hash=sha256 sha256log=disk.dd.sha256

# ddrescue: better recovery from failing drives (hash separately after)
ddrescue /dev/sda disk.dd disk.log
shasum -a 256 disk.dd > disk.dd.sha256

Network Forensics

Network traffic analysis reveals communication patterns, data exfiltration, command and control traffic, and lateral movement within compromised networks.

Tools: tshark, tcpdump, lsof, ss

Capture live traffic:

# Capture all traffic on interface en0 (macOS) / eth0 (Linux)
# -nn disables name and port resolution, keeping captures cleaner and faster
sudo tcpdump -i en0 -nn -w capture.pcap

# Capture only HTTP traffic
sudo tcpdump -i en0 -nn 'tcp port 80' -w http_traffic.pcap

Analyze existing PCAPs:

# Display packet summaries
tshark -r capture.pcap

# Extract HTTP requests
tshark -r capture.pcap -Y 'http.request' -T fields -e http.host -e http.request.uri

# List TCP conversations to find stream indices worth following
tshark -r capture.pcap -q -z conv,tcp

# Follow a specific TCP stream by index (0 is the first; use conv output to pick)
tshark -r capture.pcap -z follow,tcp,ascii,0

# Extract DNS queries (useful for detecting C2 and data exfiltration via DNS)
tshark -r capture.pcap -Y "dns" -T fields -e dns.qry.name

# Identify top talkers by IP conversation volume
tshark -r capture.pcap -q -z conv,ip

# Export HTTP objects (files transferred over HTTP)
tshark -r capture.pcap --export-objects http,./http_objects

Encrypted traffic: TLS/HTTPS traffic analysis is limited without session keys. You can identify patterns (destination IPs, connection timing, data volumes) but not content. Certificate inspection via tshark -Y "tls.handshake.type == 11" -T fields -e tls.handshake.certificate can still yield useful information (handshake type 11 is the Certificate message; note that TLS 1.3 encrypts certificates, so this works mainly for TLS 1.2 and earlier).

Identify active connections:

# Show listening ports
sudo lsof -i -P -n | grep LISTEN

# List network connections by process
sudo lsof -i -a -p <PID>

# Modern alternative to netstat on Linux
ss -tulnp

# Monitor real-time network activity (macOS only)
sudo fs_usage -w -f network

Common investigation patterns:

Detect port scanning (many connection attempts to sequential ports), identify C2 beaconing (regular outbound connections to same external IP), extract credentials from cleartext protocols (HTTP Basic Auth, FTP, Telnet), reconstruct file transfers from packet captures.
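Beaconing in particular is detectable from timing alone. The sketch below computes inter-arrival deltas from connection timestamps; the input lines are synthesized, but with a real capture they would come from something like tshark -r capture.pcap -Y 'ip.dst == <C2_IP>' -T fields -e frame.time_epoch:

```shell
# Near-constant deltas between outbound connections suggest beaconing.
# Synthetic epoch timestamps stand in for tshark output here.
printf '100\n160\n220\n280\n' \
  | awk 'NR > 1 { print $1 - prev } { prev = $1 }'
# prints 60 three times -- a perfectly regular interval
```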

Filesystem Forensics

Filesystem analysis involves examining disk images, recovering deleted files, and reconstructing activity timelines without modifying evidence.

Tools: Sleuth Kit (fls, icat, istat, mmls, mactime)

The Sleuth Kit provides CLI utilities for analyzing filesystems without mounting them, which preserves evidence integrity.

Install on macOS:

brew install sleuthkit

Examine disk structure:

# List partition table and get partition offsets
mmls disk.dd

# List files in filesystem -- offset is required when partitions exist
# Use the offset value from mmls output (e.g. 2048 sectors)
fls -r -o 2048 disk.dd

# List deleted files only
fls -rd -o 2048 disk.dd

Extract files:

# Extract file by inode number
icat -o 2048 disk.dd 512 > recovered_file.jpg

# Inspect inode metadata in detail
istat -o 2048 disk.dd 512

Timeline analysis:

# Generate bodyfile with MAC times
fls -m / -r -o 2048 disk.dd > bodyfile

# Convert to human-readable timeline
mactime -b bodyfile -d > timeline.csv

When reading a timeline, look for bursts of file activity in a narrow window -- these often correspond to attacker tooling being dropped or data being staged for exfiltration. Cross-reference activity timestamps against your known compromise window to separate attacker actions from normal system noise. Before drawing conclusions, verify that timestamps are normalized to a common timezone (UTC is safest) and account for potential clock skew on the source system.
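Once the window is known, a plain grep narrows the timeline to it. The CSV here is synthesized in a mactime -d style layout for illustration; dates and file names are invented:

```shell
# Narrow a timeline to the suspected compromise window.
cat > /tmp/timeline.csv <<'EOF'
Date,Size,Type,Mode,UID,GID,Meta,File Name
Tue Mar 05 2024 09:14:02,4096,.a..,d/drwxr-xr-x,0,0,128,/etc
Tue Mar 12 2024 03:22:41,18921,m...,r/rrwxr-xr-x,0,0,3411,/tmp/.hidden/payload
EOF

# Pull only the compromise day; pipe on through sort/awk as needed
grep 'Mar 12 2024' /tmp/timeline.csv
```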

Practical workflow:

Image the suspect drive with dcfldd or ddrescue, use mmls to identify partition offsets, run fls to list all files including deleted ones, extract suspicious files with icat for further analysis, generate a timeline to understand the sequence of events.
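That workflow scripts cleanly because the partition offset can be parsed straight out of mmls. A sketch, with one synthesized line standing in for real mmls output:

```shell
# Parse the start sector of the first Linux partition from mmls output.
# The sample line mirrors mmls's columns: slot, table:entry, start, end, length, description.
mmls_line='002:  000:000   0000002048   0000204799   0000202752   Linux (0x83)'
offset=$(printf '%s\n' "$mmls_line" | awk '/Linux/ { print $3 + 0; exit }')
echo "$offset"   # prints 2048

# The offset then drives the rest of the chain, e.g.:
#   fls -r -o "$offset" disk.dd > file_listing.txt
#   fls -m / -r -o "$offset" disk.dd | mactime -d > timeline.csv
```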

Quick Triage Tools

Before reaching for Sleuth Kit or tshark, a few standard Unix utilities are invaluable for fast triage:

# Identify file type regardless of extension
file suspicious_binary

# Extract printable strings (quick way to find hardcoded IPs, paths, credentials)
strings suspicious.bin | grep -i password
strings suspicious.bin | grep -E '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b'

# Hex dump for raw inspection
xxd suspicious.bin | head -40
hexdump -C suspicious.bin | head -40

For any investigation, consider logging the entire terminal session from the start. script session.log records all input and output, which supports auditability and makes it easier to reconstruct exactly what commands were run and when.

These compose naturally into pipelines. For example, pairing strings with grep and sort can surface indicators of compromise in seconds without touching heavier tools.
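For instance, ranking extracted IP indicators by frequency takes one pipeline. The input lines below are synthesized stand-ins for strings output:

```shell
# Rank IP-looking strings by occurrence count
# (stand-in for: strings suspicious.bin | ...)
printf '203.0.113.7\nGET /beacon\n203.0.113.7\nlibssl.so.1.1\n198.51.100.9\n' \
  | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$' \
  | sort | uniq -c | sort -rn
```

The repeated 203.0.113.7 floats to the top, which is exactly the kind of indicator worth investigating first.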

Metadata Analysis

File metadata contains information beyond file content: creation timestamps, author names, GPS coordinates, software versions, edit history. This data often reveals more than the file contents themselves.

Tool: exiftool

ExifTool reads and writes metadata for hundreds of file formats.

Install on macOS:

brew install exiftool

Basic usage:

# Display all metadata
exiftool photo.jpg

# Extract specific fields (quote wildcard tags so the shell doesn't expand them)
exiftool "-GPS*" -DateTimeOriginal photo.jpg

# Check document author and edit history
exiftool -Author -ModifyDate -CreateDate document.docx

# Scan directory recursively
exiftool -r /path/to/evidence/

Investigation scenarios:

Photos contain GPS coordinates showing where they were taken, timestamps revealing when events occurred. Documents expose author identity, organization names from templates, revision history showing when content changed. Executables contain compilation timestamps, build paths with usernames, embedded resources.

Strip metadata for privacy:

# Remove all metadata in place (exiftool keeps a backup as photo.jpg_original)
exiftool -all= photo.jpg

# Create cleaned copy
exiftool -all= -o cleaned.jpg original.jpg

Firmware and Embedded Analysis

Firmware images from routers, IoT devices, and embedded systems often contain filesystems, configuration files, and hardcoded credentials. Extracting and analyzing these reveals security vulnerabilities and potential backdoors.

Tool: binwalk

Binwalk scans binary files for embedded filesystems, compressed data, and executable code.

Install on macOS:

brew install binwalk

Basic analysis:

# Scan for signatures
binwalk firmware.bin

# Extract all found filesystems/files
binwalk -e firmware.bin

# Entropy analysis -- high-entropy regions suggest encryption or compression
binwalk -E firmware.bin

Common findings:

Squashfs or cramfs filesystems containing Linux root filesystem, compressed kernel images, bootloader code, configuration files with hardcoded passwords, encryption keys, SSL certificates.
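Two quick sweeps cover most of those findings: grep the extracted tree for credential material and look for setuid binaries. The directory below is synthesized for illustration; with real firmware it would be something like _firmware.bin.extracted/squashfs-root:

```shell
# Synthesize a tiny extracted-root stand-in
root=/tmp/fw_root
mkdir -p "$root/etc" "$root/bin"
printf 'admin:x:0:0:root:/root:/bin/sh\n' > "$root/etc/passwd"
: > "$root/bin/su" && chmod 4755 "$root/bin/su"

# Sweep for hardcoded accounts, keys, and password material
grep -ri 'admin' "$root/etc"
# Flag setuid binaries worth closer inspection
find "$root" -perm -4000 -type f
```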

Pitfalls to be aware of:

Binwalk extraction can fail or produce false positives, particularly with obfuscated or custom-packed firmware. Extracted filesystems sometimes require manual mounting. For dynamic analysis, the extracted filesystem may need to be emulated (QEMU is the standard approach) since the binaries are built for non-x86 architectures like MIPS or ARM.

Investigation workflow:

Extract firmware from device (JTAG, SPI flash dump, or manufacturer update file), run binwalk to identify embedded components, extract filesystem, examine contents for credentials and backdoors, analyze network configuration and services, check for known vulnerable binaries.

This directly overlaps with embedded security work and IoT penetration testing.

Process and System Inspection

Live system analysis identifies running malware, suspicious processes, and active network connections before evidence is collected.

Tools: lsof, ps, fs_usage

Identify suspicious processes:

# List all processes with full details
ps aux

# Show what files a process has open
lsof -p <PID>

# Find processes using specific files
lsof /var/log/system.log

# Monitor filesystem activity in real-time (macOS only, requires root, output is noisy)
sudo fs_usage -w -f filesys

Network connections by process:

# Show all network connections
sudo lsof -i -P -n

# Filter by specific port
sudo lsof -i :443

# Track network activity by process
sudo lsof -i -a -p <PID>

Investigation patterns:

Processes listening on unusual ports, unexpected outbound connections to external IPs, processes accessing sensitive files without legitimate reason, filesystem activity inconsistent with claimed process purpose.
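The unusual-ports check in particular scripts well against an allowlist. The ss output lines are synthesized here; live input would come from ss -tln on Linux (or lsof on macOS):

```shell
# Flag listeners on ports outside an expected allowlist.
# Two synthesized lines in ss -tln column layout stand in for live output.
printf 'LISTEN 0 128 0.0.0.0:22 0.0.0.0:*\nLISTEN 0 128 0.0.0.0:4444 0.0.0.0:*\n' \
  | awk '{ n = split($4, a, ":"); port = a[n];
           if (port != 22 && port != 443)
             print "unexpected listener on port " port }'
```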

File Integrity and Hashing

Cryptographic hashes verify file integrity and identify known malware. Hash databases (NSRL, VirusTotal) allow rapid classification of files as known-good, known-bad, or unknown.

Tools: shasum, md5sum

Generate hashes:

# SHA-256 hash (recommended)
shasum -a 256 suspicious_file.exe

# MD5 hash (legacy, still used in some databases)
# macOS:
md5 suspicious_file.exe
# Linux:
md5sum suspicious_file.exe

# Hash entire directory
find /evidence -type f -exec shasum -a 256 {} \; > hashes.txt

Verify integrity:

# Check against known hash (shasum -c expects two spaces between hash and filename)
echo "a1b2c3d4...  file.bin" | shasum -a 256 -c

# Compare hash files
diff known_good_hashes.txt current_hashes.txt
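Comparison against a known-bad hash set is similarly a one-liner once both sides are on disk. The hash values below are shortened placeholders for real SHA-256 digests:

```shell
# Evidence hashes (as produced by shasum -a 256) and a known-bad set
printf 'aaa111  dropper.exe\nbbb222  notes.txt\n' > /tmp/hashes.txt
printf 'aaa111\n' > /tmp/known_bad.txt

# -F: fixed strings, -f: patterns from file -- prints lines matching known-bad hashes
grep -Ff /tmp/known_bad.txt /tmp/hashes.txt
```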

Investigation workflow:

Hash all evidence files immediately upon collection, verify hashes before analysis to ensure integrity, compare against known malware databases, document hash values in investigation reports for court admissibility.

Practical Investigation Scenarios

Incident Response on Linux Server

  1. Start session logging: script session.log
  2. Capture environment: env > environment.txt && uname -a >> environment.txt
  3. Capture network connections: sudo lsof -i -P -n > active_connections.txt
  4. List running processes: ps aux > running_processes.txt
  5. Capture live traffic: sudo tcpdump -i eth0 -nn -w capture.pcap &
  6. Image suspicious filesystem: dcfldd if=/dev/sda1 of=evidence.dd hash=sha256 sha256log=evidence.dd.sha256
  7. Analyze image with Sleuth Kit: fls -r evidence.dd > file_listing.txt (no -o offset needed -- the image is a single partition, not a whole disk)
  8. Generate timeline: fls -m / -r evidence.dd | mactime -d > timeline.csv
  9. Verify evidence integrity: shasum -a 256 -c evidence.dd.sha256

IoT Device Security Audit

  1. Extract firmware from device (hardware tools or manufacturer update)
  2. Analyze with binwalk: binwalk -e device_firmware.bin
  3. Examine extracted filesystem for credentials
  4. Check file permissions and setuid binaries
  5. Emulate with QEMU if dynamic analysis is needed
  6. Analyze network behavior with packet capture
  7. Document findings with file hashes and timestamps

Malware Triage

  1. Hash suspected binary: shasum -a 256 malware.exe
  2. Identify file type: file malware.exe
  3. Extract strings: strings malware.exe | grep -iE 'http|password|cmd|exec'
  4. Extract metadata: exiftool malware.exe
  5. Monitor filesystem activity: sudo fs_usage -w -f filesys <PID> (macOS; fs_usage takes the PID as a positional argument) / strace -p <PID> for syscall-level tracing or inotifywait -r -m /tmp for directory-level filesystem events (Linux; note neither is a direct per-process equivalent of fs_usage)
  6. Capture network activity: sudo tcpdump -i lo0 -nn -w malware_traffic.pcap (macOS; use lo on Linux)
  7. Extract embedded files: binwalk -e malware.exe
  8. Document indicators of compromise (IPs, domains, file hashes)

Next Steps

This toolkit covers most common DFIR scenarios, but several areas warrant dedicated tools and study:

Memory forensics: RAM analysis requires Volatility. A starting point, in Volatility 3 syntax (Volatility 2's vol.py additionally requires a --profile flag):

vol -f memory.dump windows.pslist

This surfaces running processes at the time of capture, including those that have since been killed or hidden. Memory is where you find encryption keys, injected shellcode, and other artifacts that never touch disk.

Windows artifact analysis: Registry examination, Windows Event Log (EVTX) parsing, and NTFS-specific features (MFT, alternate data streams, USN journal) require Windows-focused tooling. Most Linux-based tools have limited support here.

Advanced file carving: When filesystem metadata is destroyed, tools like Foremost or Scalpel reconstruct files from raw byte patterns. Sleuth Kit handles most deletion recovery; carving is a niche requirement for heavily damaged or wiped media.

Malware reverse engineering: Disassembly and decompilation with Ghidra or radare2 is a separate skill tree, though it complements forensics work naturally when triage surfaces suspicious binaries.

Conclusion

CLI forensics tools provide substantial investigative capability without GUI dependencies. The workflow integrates naturally with terminal-based development and security work.

The core toolkit (Sleuth Kit, binwalk, tshark, exiftool, standard Unix utilities) covers network analysis, filesystem investigation, metadata extraction, firmware analysis, and system inspection. Installation takes minutes. Competency requires focused practice but builds on existing CLI familiarity.

For security practitioners already working in the terminal, adding these tools extends capability into forensics and incident response with minimal friction. The investment compounds: each investigation builds tool fluency and investigative intuition applicable to future work.