
LAMEHUG: First Malware to Integrate Live LLM for Dynamic C2 Command Generation

Date: 2026-03-28
TLP: TLP:CLEAR
Tags: Malware, APT28, LLM-Integrated Malware

Executive Summary

LAMEHUG is the first publicly documented malware sample that integrates a live LLM API call into its command-and-control loop. Attributed to APT28 (also tracked as UAC-0001, Fancy Bear, and Forest Blizzard), the malware uses an OpenAI-compatible API endpoint to dynamically generate C2 commands based on reconnaissance data collected from the infected host. This represents a significant evolution in how threat actors use generative AI operationally.

Detailed Findings

Traditional malware uses static or templated C2 command structures. LAMEHUG breaks this pattern by sending host context data, including OS version, installed software, network configuration, and user privilege level, to an LLM API. The LLM returns tailored commands appropriate for the target environment. This means the malware effectively adapts its post-exploitation behavior without the operator needing to manually craft commands for each compromised host.
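To illustrate the mechanism described above, the sketch below shows how collected host context might be folded into an OpenAI-compatible chat-completion payload rather than a hardcoded command template. This is an illustrative reconstruction, not LAMEHUG source code; the model name and system prompt are placeholders.

```python
# Illustrative reconstruction (not actual LAMEHUG code): host context is
# serialized into a chat-completion request instead of selecting from a
# fixed, templated command list.
import json
import platform
import socket

def build_recon_prompt():
    """Collect basic host context and format it as a chat-completion payload."""
    context = {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "python": platform.python_version(),
    }
    # A traditional implant would pick from a hardcoded command list here;
    # an LLM-driven one instead ships the context off for tailored commands.
    return {
        "model": "any-openai-compatible-model",  # placeholder model name
        "messages": [
            {"role": "system",
             "content": "Return shell commands suitable for this host."},
            {"role": "user", "content": json.dumps(context)},
        ],
    }

payload = build_recon_prompt()
```

The detection-relevant point is the payload shape: a JSON body with `model` and `messages` keys, carrying enumeration output in the user message.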

Operational Flow

  1. Initial access is achieved via spearphishing with a weaponized document
  2. The first-stage dropper executes a Python-based implant
  3. The implant performs local reconnaissance, collecting hostname, OS, domain membership, installed software, running processes, and network interfaces
  4. Reconnaissance output is formatted into a structured prompt and sent to an LLM API endpoint
  5. The LLM response contains shell commands appropriate for the target environment
  6. Commands are parsed, subjected to basic safety checks by the implant, and executed
  7. Output from executed commands is fed back to the LLM for the next iteration
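Steps 4 through 7 above can be sketched as an iterative loop. The sketch below is hypothetical: the LLM query and command execution are stubbed out so that only the structure of the iteration, which is the detection-relevant part, is shown.

```python
# Hypothetical sketch of the recon -> LLM -> execute -> feedback loop
# (steps 4-7). Network calls and execution are deliberately stubbed.
import json

def query_llm(history):
    # Stub for an HTTPS POST of `history` as JSON to an OpenAI-compatible
    # chat-completions endpoint; returns the model's suggested command.
    return "echo placeholder-command"

def run_safely(command):
    # Stub for the implant's basic safety checks plus execution;
    # here it only echoes the command back as fake output.
    return f"output of: {command}"

def c2_loop(initial_context, iterations=3):
    history = [{"role": "user", "content": json.dumps(initial_context)}]
    for _ in range(iterations):
        command = query_llm(history)                         # steps 4-5
        result = run_safely(command)                         # step 6
        history.append({"role": "assistant", "content": command})
        history.append({"role": "user", "content": result})  # step 7 feedback
    return history

transcript = c2_loop({"hostname": "WIN-EXAMPLE", "os": "Windows 10"})
```

Each iteration appends the previous command's output to the conversation, which is why the traffic pattern shows repeated, growing JSON POST bodies from the same process.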

The LLM acts as a decision engine, replacing the human operator in the initial triage and lateral movement phases. The API endpoint used is compatible with the OpenAI chat completions format, though it does not connect to OpenAI infrastructure directly. The operators appear to use a self-hosted or third-party LLM service.

Implications

The adaptive nature of this approach complicates signature-based detection. Since the C2 commands are generated dynamically, traditional IOC matching against known command strings becomes less effective. Behavioral detection, particularly monitoring for the pattern of host enumeration followed by API calls followed by command execution, becomes the primary detection surface.

MITRE ATT&CK Mapping

Technique | ID | Context
Phishing: Spearphishing Attachment | T1566.001 | Initial delivery via weaponized document
Command and Scripting Interpreter: Python | T1059.006 | Python-based implant execution
System Information Discovery | T1082 | Host reconnaissance for LLM context
Application Layer Protocol: Web Protocols | T1071.001 | LLM API calls over HTTPS
Data Encoding: Standard Encoding | T1132.001 | JSON-formatted C2 communication
Exfiltration Over C2 Channel | T1041 | Data sent to LLM API as prompt context

IOCs

IOCs for LAMEHUG are restricted per source reporting agreements. Known indicators include Python implant hashes, C2 domain infrastructure, and LLM API proxy endpoints. Consult the source reporting for specific values.

Detection Recommendations

Focus detection on the behavioral chain rather than static indicators. Key signals include Python processes making repeated HTTPS requests to API endpoints with JSON payloads matching chat completion schemas, particularly when preceded by system enumeration commands. Monitor for unusual patterns of host reconnaissance followed by outbound API calls with large JSON request bodies.
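The enumeration-then-API-call chain described above can be expressed as a simple correlation rule. The sketch below is a rough illustration with assumed event field names (`time`, `host`, `type`, `detail`); it is not tied to any specific product's schema.

```python
# Rough behavioral-detection sketch: flag hosts where system-enumeration
# commands are followed within a short window by a POST to a
# chat-completions-style URI. Event field names are assumed for illustration.
from datetime import datetime, timedelta

ENUM_MARKERS = ("systeminfo", "ipconfig", "whoami", "tasklist")
API_MARKERS = ("chat/completions", "v1/messages")

def suspicious_chains(events, window_minutes=5):
    """events: dicts with 'time' (datetime), 'host', 'type', 'detail'."""
    alerts = []
    enum_seen = {}  # host -> time of most recent enumeration command
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["type"] == "process" and any(m in ev["detail"] for m in ENUM_MARKERS):
            enum_seen[ev["host"]] = ev["time"]
        elif ev["type"] == "http_post" and any(m in ev["detail"] for m in API_MARKERS):
            last = enum_seen.get(ev["host"])
            if last and ev["time"] - last <= timedelta(minutes=window_minutes):
                alerts.append(ev["host"])
    return alerts

events = [
    {"time": datetime(2026, 3, 28, 9, 0), "host": "ws1", "type": "process",
     "detail": "systeminfo"},
    {"time": datetime(2026, 3, 28, 9, 2), "host": "ws1", "type": "http_post",
     "detail": "POST /v1/chat/completions"},
]
print(suspicious_chains(events))  # -> ['ws1']
```

Tuning the window and marker lists to the environment is left to the defender; the point is correlating the two event types per host rather than matching either in isolation.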

For Splunk environments, correlate process creation events showing Python execution with web proxy logs showing HTTPS POST requests to non-standard API endpoints containing "chat/completions" or "v1/messages" in the URI path.
