LumynaX Infused Qwen3 Text GGUF release overview

LumynaX Infused Qwen3 Text GGUF

“Sovereign intelligence, held in the light.”
Ko te mārama te tūāpapa — the light is the foundation.

A LumynaX release from AbteeX AI Labs — Aotearoa New Zealand.

Quickstart · Architecture · Profile · Capability · Provenance · Validation · Companions

LumynaX release · Family: qwen · Runtime: llama_cpp · Modes: text · Params: see manifest · Quant: see manifest · Context: 32768 tok · License: apache-2.0 · Sovereignty: tier 3 · Audit: pass · Access: public & non-gated · Card: v6

Quality: 3/5 · Lightweight: 0/5 · Sovereignty: 3/5 · Tools: no · JSON: yes · Context: 32768 tok


📦 Executive Summary

AbteeXAILab/lumynax-infused-qwen3-text-gguf is a complete LumynaX release package: model artifact, quickstart.py, requirements.txt, release_export_manifest.json, checksums.sha256, license notice, and optional Ollama / Space scaffolds shipped as one downloadable contract. Clone whole, verify by checksum, and run close to the data it serves.

LumynaX-infused means the upstream artifact is presented through the LumynaX release layer: local-first runtime scaffolding, LumynaX assistant identity, inference-chain metadata, integrity files, and Aotearoa New Zealand-oriented workflow positioning. The release manifest records this as a LumynaX packaging and inference-chain layer around the listed upstream artifact — it does not claim a private LumynaX weight merge.

🧭 Runtime Architecture

LumynaX runtime flow

Mermaid graph (interactive on Hugging Face & GitHub):

flowchart LR
    R["⮕ Request"] --> C["🛡 Data Capsule<br/>policy envelope"]
    C -->|allow| MR["🧭 MaramaRoute<br/>sovereign router"]
    MR -->|score & select| LLM[(LumynaX Model)]
    LLM --> O["📤 Response"]
    O --> A["📓 Audit Ledger<br/>hash-chained"]
    classDef paper fill:#fffefa,stroke:#0a0a0b,color:#0a0a0b,stroke-width:1.4px;
    classDef accent fill:#e08a2c,stroke:#9a5416,color:#0a0a0b,stroke-width:1.4px;
    classDef ink fill:#0a0a0b,stroke:#0a0a0b,color:#fffefa,stroke-width:1.4px;
    class R,O paper
    class C,MR accent
    class LLM,A ink

Each step is observable:

| Step | What happens | Why |
|---|---|---|
| Request | A client sends a prompt plus declared purpose, jurisdiction, and sensitivity. | Intent must be declared, not inferred. |
| Data Capsule | A policy envelope describes what can and cannot happen to the data. | Sovereignty is enforced at the data, not the wire. |
| MaramaRoute | The sovereign router scores candidates by jurisdiction, runtime, modality, and task fit. | Right model for the work, not the loudest. |
| LumynaX Model | This package serves the inference, local-first by default. | Sensitive context never leaves the operator’s environment. |
| Audit Ledger | A hash-chained record persists the capsule, decision, request hash, and obligations. | Tamper-evident provenance for the whole trace. |
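The hash chaining behind the Audit Ledger step can be sketched in a few lines. The entry fields (`capsule`, `decision`, `request_hash`) mirror the table above, but the schema here is illustrative, not the ledger's actual format:

```python
import hashlib
import json

def append_entry(ledger, entry):
    """Append an entry whose hash covers the previous entry's hash,
    so any later tampering breaks the chain (illustrative schema)."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    record = {"prev_hash": prev_hash, **entry}
    payload = json.dumps(record, sort_keys=True).encode()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(record)
    return record

def verify_chain(ledger):
    """Recompute every hash in order; return False if anything was altered."""
    prev_hash = "0" * 64
    for record in ledger:
        body = {k: v for k, v in record.items() if k != "entry_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["entry_hash"]:
            return False
        prev_hash = record["entry_hash"]
    return True

ledger = []
append_entry(ledger, {"capsule": "policy-envelope-id", "decision": "allow",
                      "request_hash": hashlib.sha256(b"prompt-1").hexdigest()})
append_entry(ledger, {"capsule": "policy-envelope-id", "decision": "allow",
                      "request_hash": hashlib.sha256(b"prompt-2").hexdigest()})
assert verify_chain(ledger)
ledger[0]["decision"] = "deny"   # tamper with an earlier entry
assert not verify_chain(ledger)  # the chain no longer verifies
```

Sorting the JSON keys before hashing keeps the digest stable regardless of insertion order; any canonical serialization would do.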

⚡ Quickstart

Clone the whole release — every file matters; the package is a contract:

hf download AbteeXAILab/lumynax-infused-qwen3-text-gguf --local-dir lumynax-infused-qwen3-text-gguf
cd lumynax-infused-qwen3-text-gguf
pip install -r requirements.txt
python quickstart.py --interactive

Python:

from llama_cpp import Llama

# Load the GGUF locally; tune n_threads (and n_gpu_layers, if built
# with GPU support) for your hardware.
llm = Llama(
    model_path="lumynax-infused-qwen3-text-gguf-f16.gguf",
    n_ctx=32768,
    n_threads=8,
    verbose=False,
)
out = llm("Who are you? Answer as LumynaX in two sentences.", max_tokens=160)
print(out["choices"][0]["text"].strip())

CLI smoke test:

llama-cli -m "lumynax-infused-qwen3-text-gguf-f16.gguf" -p "Who are you? Answer as LumynaX in two sentences." -n 160

Ollama path:

ollama create lumynax-infused-qwen3-text-gguf -f ollama/Modelfile
ollama run lumynax-infused-qwen3-text-gguf

Verify integrity before launch:

# Linux / macOS
sha256sum "lumynax-infused-qwen3-text-gguf-f16.gguf"
cat checksums.sha256

# Windows PowerShell
Get-FileHash -Algorithm SHA256 "lumynax-infused-qwen3-text-gguf-f16.gguf"
Get-Content checksums.sha256
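The same verification can be scripted portably. This sketch assumes `checksums.sha256` uses the standard `sha256sum` format (`<hex digest>  <filename>` per line) and streams each file so multi-gigabyte GGUF weights never sit in memory:

```python
import hashlib
from pathlib import Path

def load_checksums(path):
    """Parse sha256sum-style lines: '<hex digest>  <filename>'."""
    expected = {}
    for line in Path(path).read_text().splitlines():
        if line.strip():
            digest, name = line.split(maxsplit=1)
            # lstrip("*") drops sha256sum's optional binary-mode marker
            expected[name.strip().lstrip("*")] = digest.lower()
    return expected

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks to keep memory use flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_release(checksums_path):
    """Return a {filename: matches_on_disk} map for every listed entry."""
    return {name: sha256_of(name) == digest
            for name, digest in load_checksums(checksums_path).items()}
```

Run it from the release directory, e.g. `verify_release("checksums.sha256")`, and refuse to launch if any value is `False`.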

📐 Model Profile

Release identity

| Field | Value |
|---|---|
| Release | LumynaX Infused Qwen3 Text GGUF |
| Repository | AbteeXAILab/lumynax-infused-qwen3-text-gguf |
| Family | qwen |
| Mode | Local-first text generation package |
| Card schema | lumynax-public-release-card:v6 |

Runtime profile

| Field | Value |
|---|---|
| Runtime | llama_cpp |
| Prompt format | huggingface_chat_template |
| Modalities | text |
| Context window | 32768 tokens |
| Quantization | See manifest |

Artifact

| Field | Value |
|---|---|
| Primary | lumynax-infused-qwen3-text-gguf-f16.gguf |
| Weight size | 35.20 GB |
| Parameters | see manifest |
| Quality rank | 3 (1 = best) |
| Cost rank | 10 (1 = cheapest) |

Provenance

| Field | Value |
|---|---|
| Upstream / base | Qwen/Qwen3-8B |
| Source | not applicable |
| License | apache-2.0 |
| Sovereignty | tier 3 of 5 |
| Audit | pass |

📊 Capability Profile

Capability profile bars

Primary fit. Conversational assistance near governed data, with provenance visible and human review on high-impact tasks.

| Signal | Reading |
|---|---|
| Quality rank | 3 (1 = strongest in family) |
| Cost rank | 10 (1 = lightest weight) |
| Sovereignty tier | 3 of 5 |
| Tool calling | ❌ not supported |
| JSON mode | ✅ supported |
| Identity behaviour | Identifies as LumynaX while keeping upstream provenance visible. |
| Operational style | Local-first package with explicit files, checksums, and reproducible quickstarts. |
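Because JSON mode is supported but tool calling is not, downstream code should still validate whatever comes back. A minimal guard, where `generate` is a stand-in for your llama_cpp call (the stubbed replies below are illustrative, not real model output):

```python
import json

def ask_for_json(generate, prompt, retries=2):
    """Call `generate` (any prompt -> text function), insist on valid
    JSON, and retry with an explicit correction hint if parsing fails."""
    attempt_prompt = prompt
    for _ in range(retries + 1):
        raw = generate(attempt_prompt).strip()
        # Models sometimes wrap JSON in markdown fences; strip them.
        if raw.startswith("```"):
            raw = raw.strip("`").lstrip("json").strip()
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            attempt_prompt = prompt + "\nRespond with valid JSON only."
    raise ValueError("model never returned valid JSON")

# Stubbed generator: fails once, then returns valid JSON.
replies = iter(['not json', '{"model": "LumynaX", "context": 32768}'])
result = ask_for_json(lambda p: next(replies), "Describe yourself as JSON.")
assert result["context"] == 32768
```

In production, `generate` would wrap `llm(...)` or `llm.create_chat_completion(...)` from the quickstart.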

🛡️ Sovereignty Contract

Sovereignty is a design property, not a deployment option.

| Field | Value |
|---|---|
| Publisher | AbteeX AI Labs |
| Family | LumynaX sovereign release family |
| Sovereign intent | Local-first deployment near governed data, with explicit provenance and controlled human review. |
| Sovereignty tier | 3 of 5 |
| Runtime residency | llama_cpp can be deployed inside an operator-approved environment. |
| Primary artifact | lumynax-infused-qwen3-text-gguf-f16.gguf, shipped alongside manifest, checksums, quickstart, requirements, and license files. |
| License discipline | Surface upstream license metadata so downstream users can verify redistribution and usage terms. |
| Audit expectation | Record repo id, artifact checksum, runtime command, prompt template, operator, and deployment environment. |
| Router readiness | First-class with LumynaX MaramaRoute. |
| Policy readiness | First-class with AbteeX SovereignCode. |
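The audit expectation above can be turned into a small helper. The record layout below is an illustrative sketch, not a mandated schema; the checksum is computed from the artifact on disk:

```python
import hashlib
import json
import os
import platform
from datetime import datetime, timezone

def audit_record(repo_id, artifact_path, runtime_command, prompt_template):
    """Collect the fields named under 'Audit expectation' into one record
    suitable for appending to an operator-governed log."""
    h = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "repo_id": repo_id,
        "artifact": artifact_path,
        "artifact_sha256": h.hexdigest(),
        "runtime_command": runtime_command,
        "prompt_template": prompt_template,
        "operator": os.environ.get("USER") or os.environ.get("USERNAME") or "unknown",
        "environment": platform.platform(),
    }
```

One `json.dumps(record)` line per inference run, kept under your own governance policy, satisfies the row above.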

📁 Runtime Files

lumynax-infused-qwen3-text-gguf/
├── README.md                       # this card
├── quickstart.py                   # smoke runner
├── requirements.txt                # pinned deps
├── release_export_manifest.json    # full release metadata
├── checksums.sha256                # integrity verification
├── LICENSE.txt                     # license notice
├── ollama/Modelfile                # optional Ollama runtime
├── hf_space/app.py                 # optional Space scaffold
├── docs/lumynax-overview.svg       # release banner
├── docs/lumynax-runtime-flow.svg   # runtime architecture
├── docs/lumynax-capability.svg     # capability profile
└── lumynax-infused-qwen3-text-gguf-f16.gguf  # primary artifact

⚠️ Keep the full set together. Removing the manifest, checksums, or license file breaks the release contract.

💬 Prompting Contract

Preferred opening prompt:

Who are you? What files do I need to keep together to run this package locally?

Expected behaviour. The assistant identifies as LumynaX, explains that this is a LumynaX model-infusion release, and keeps upstream provenance visible.

Default system prompt:

You are LumynaX operating from the LumynaX Infused Qwen3 Text GGUF package identity. Be helpful, clear, and honest about provenance. Identify upstream models when asked. Do not invent biographical claims about named people without verified context.

✅ Validation

| Check | Result |
|---|---|
| Runtime audit | pass |
| Public access | public and non-gated |
| Anonymous metadata access | true |
| Anonymous file listing | true |
| Quickstart syntax | pass |
| Manifest references | pass |
| Checksum references | pass |

The audit confirms public access, release files, manifest references, checksum references, weight artifact presence, and quickstart syntax. It does not guarantee that every laptop has enough RAM, VRAM, disk, or recent runtime build for the largest packages.

🔗 Provenance & License

| Field | Value |
|---|---|
| Publisher | AbteeX AI Labs |
| Family | LumynaX model and inference-chain release family |
| Upstream / base | Qwen/Qwen3-8B |
| Source | not applicable |
| License metadata | apache-2.0 |

Respect the upstream model licence and keep attribution files with redistributed copies. Do not present this package as privately trained or weight-merged unless the release manifest explicitly says weight adaptation was applied.

⚠️ Limitations & Responsible Use

  • Outputs can be incorrect, incomplete, or biased; validate important answers before use.
  • Larger GGUF, MoE, multimodal, and frontier packages may require substantial RAM, VRAM, disk space, and recent runtime builds.
  • For high-impact decisions, use human review and domain-specific evaluation.
  • For sensitive data, prefer local execution and keep operational logs under your own governance policy.
  • This card documents package readiness and access — it is not a benchmark claim.
  • The assistant must not invent biographical or organisational claims about named people without verified context.

🌿 Aotearoa Kaupapa

LumynaX is built in and for Aotearoa New Zealand. Sovereignty is treated as a design property rather than a deployment option: the package documents where the model came from, what it can do, how to run it close to your data, and what it should not claim.

Ko te mārama te tūāpapa — the light is the foundation.

🤝 Companion Products

🛡️ AbteeX SovereignCode
Local-first coding agent with Data Capsule policy controls, audit ledger, and human-review gates.

🧭 LumynaX MaramaRoute
Sovereign model router across the LumynaX family. Filters by jurisdiction, residency, license, runtime, and modality.

💡 LumynaX Live Demo
Public browser demo. Try identity, provenance, governance, and deployment prompts in one session.

  • SovereignCode Live: interactive policy evaluator.
  • MaramaRoute Live: interactive sovereign router.
  • AbteeXAILab on HF: the full LumynaX release family, 50 models and counting.

🤖 Automation Notes

Automation should read these files before launching:

  • release_export_manifest.json
  • checksums.sha256
  • quickstart.py
  • requirements.txt
  • ollama/Modelfile when present
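A preflight along these lines keeps automation honest. The required-file list follows the Runtime Files section; nothing is assumed about the manifest's keys beyond it being valid JSON (the weight file name would normally be read from the manifest itself):

```python
import json
from pathlib import Path

# Core contract files from the Runtime Files listing; the license notice
# and weight artifact should also travel with the package.
REQUIRED = ["release_export_manifest.json", "checksums.sha256",
            "quickstart.py", "requirements.txt"]

def preflight(release_dir="."):
    """Fail fast if the release contract is incomplete; return the parsed
    manifest so callers can read artifact and runtime metadata from it."""
    root = Path(release_dir)
    missing = [name for name in REQUIRED if not (root / name).exists()]
    if missing:
        raise SystemExit(f"incomplete release, missing: {missing}")
    manifest = json.loads((root / "release_export_manifest.json").read_text())
    if (root / "ollama" / "Modelfile").exists():
        print("optional Ollama runtime available")
    return manifest
```

Call `preflight()` from the release directory before any launch step; pair it with a checksum verification pass for the full contract.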

Local roots, global work. · Sovereignty is a design property, not a deployment option.

abteex.com · lumynax.com · huggingface.co/AbteeXAILab

AbteeX AI Labs · Aotearoa New Zealand · LumynaX release card v6
