πŸ” DeepQ β€” Web Attack Classifier

A fine-tuned Qwen3.5-0.8B model that analyzes raw HTTP request payloads to detect web attacks. Given an HTTP request, the model reasons through the payload step by step and returns a structured JSON result identifying the attack type and the specific malicious syntax.


How It Works

The model uses chain-of-thought (CoT) reasoning: before producing a final answer, it works through the request structure, anomaly signals, and attack patterns inside a <think> block. When the model is served behind an OpenAI-compatible endpoint, this reasoning is exposed via the reasoning field on the response message (some server versions expose it as reasoning_content instead).

<think>
1. [Structure Analysis]   GET request with query parameter 'id' containing user input
2. [Anomaly Detection]    Single quote (') detected β€” attempting to break SQL string context
3. [Pattern Mapping]      OR 1=1 is a tautology used to bypass authentication
4. [Evasion Technique]    Double dash (--) comments out the rest of the original query
5. [Attack Classification] SQL Injection via GET parameter manipulation
</think>
{"attack_type": "SQL Injection", "attack_syntax": "' OR 1=1--"}

Supported Attack Types

| Label | Description |
|-------|-------------|
| Normal | Benign HTTP traffic |
| SQL Injection | SQL syntax injected into parameters |
| Cross Site Scripting (XSS) | Script injection via input fields or URLs |
| Command Injection | OS command injection via HTTP parameters |
| Path Traversal | Directory traversal using ../ patterns |
| Forced Browsing | Direct access to hidden or restricted paths |
| Brute Force | Repeated authentication attempts |
| Cookie Manipulation | Tampering with cookie values |
| File Upload | Malicious file upload attempts |
| File Download | Unauthorized file download attempts |
| Host Discovery | Network/host reconnaissance via HTTP |
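Downstream code can validate the model's JSON answer against this label set before acting on it. A minimal sketch (the SUPPORTED_LABELS constant and parse_result helper are illustrative, not part of the model API):

```python
# Illustrative only: SUPPORTED_LABELS and parse_result are not part of the
# model API; they validate the model's JSON answer against the table above.
import json

SUPPORTED_LABELS = {
    "Normal",
    "SQL Injection",
    "Cross Site Scripting (XSS)",
    "Command Injection",
    "Path Traversal",
    "Forced Browsing",
    "Brute Force",
    "Cookie Manipulation",
    "File Upload",
    "File Download",
    "Host Discovery",
}

def parse_result(content: str) -> dict:
    """Parse the model's JSON answer and reject unknown attack labels."""
    result = json.loads(content)
    if result.get("attack_type") not in SUPPORTED_LABELS:
        raise ValueError(f"unexpected attack_type: {result.get('attack_type')!r}")
    return result
```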

Usage

The model is served with vLLM behind an OpenAI-compatible endpoint and accessed through the OpenAI SDK. Enable thinking mode via chat_template_kwargs to receive the full CoT reasoning.

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="http://your-server:8000/v1",
    api_key="EMPTY"
)

http_request = """GET /index.php?id=1' OR 1=1-- HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0"""

async def analyze(payload: str):
    response = await client.chat.completions.create(
        model="Qwen3.5-0.8B",
        messages=[
            {
                "role": "system",
                "content": "You are a cybersecurity analysis AI. Analyze the given HTTP payload and determine whether it contains an attack."
            },
            {
                "role": "user",
                "content": payload
            }
        ],
        max_tokens=2048,
        temperature=0.0,
        top_p=0.95,
        presence_penalty=1.5,
        extra_body={
            "chat_template_kwargs": {"enable_thinking": True},
            "top_k": 20,
            "min_p": 0.0,
            "repetition_penalty": 1.0,
        },
    )

    content = response.choices[0].message.content        # final JSON result
    # Some server versions name this field `reasoning_content` instead.
    reasoning = response.choices[0].message.reasoning    # CoT process from the <think> block

    return content, reasoning

content, reasoning = asyncio.run(analyze(http_request))
print("Reasoning:\n", reasoning)
print("Result:\n", content)
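Some OpenAI-compatible servers return the <think> block inline in content rather than in a separate reasoning field. A defensive sketch for that case (split_think is a hypothetical helper, not part of the model API):

```python
# Defensive fallback, assuming no separate reasoning field is available:
# split an inline <think>...</think> block from the final answer ourselves.
import re

def split_think(content: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is "" when no <think> block exists."""
    m = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
    if m is None:
        return "", content.strip()
    return m.group(1).strip(), content[m.end():].strip()
```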

Output

# reasoning  β†’  the full <think>...</think> process
# content    β†’  final JSON
{"attack_type": "SQL Injection", "attack_syntax": "' OR 1=1--"}
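To scan many requests at once, the single-request call above can be fanned out concurrently with asyncio.gather. A minimal sketch, with a stub standing in for the real analyze() coroutine:

```python
# Concurrency sketch: analyze_stub stands in for the analyze() coroutine above,
# which would perform the real API call.
import asyncio

async def analyze_stub(payload: str) -> str:
    await asyncio.sleep(0)  # placeholder for network I/O
    return '{"attack_type": "Normal", "attack_syntax": ""}'

async def analyze_batch(payloads: list[str]) -> list[str]:
    # gather preserves input order in its results
    return await asyncio.gather(*(analyze_stub(p) for p in payloads))

results = asyncio.run(analyze_batch(["GET / HTTP/1.1", "GET /health HTTP/1.1"]))
```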