# HumanFlow-Llama3-8B

**Humanize AI Text with Natural Structure, Flow & Tone**

🤗 Hugging Face Model • 💻 GitHub Repository • Apache-2.0 License
## Overview

HumanFlow-Llama3-8B is a fine-tuned Llama 3 model designed to transform robotic, AI-generated writing into content that feels natural, human, readable, and authentic.

Rather than simply swapping words, HumanFlow improves:
- sentence rhythm
- structure
- tone
- flow
- readability
- realism
## Why HumanFlow?

Most AI-generated text feels:
- repetitive
- over-polished
- generic
- predictable
- emotionally flat
HumanFlow rewrites outputs to feel more organic and naturally written.
## Performance Snapshot

*Scores below are self-reported, from internal evaluation.*
| Metric | Base Model | HumanFlow |
|---|---|---|
| Human-Like Score | 18% | 99% |
| Natural Tone | Low | High |
| Rewrite Quality | Basic | Advanced |
| Readability | Generic | Strong |
## Internal Evaluation
| Metric | Score |
|---|---|
| BERTScore F1 | 0.8424 |
| ROUGE-L | 0.0908 |
| Perplexity | 1.5242 |
| Text Overlap | 0.0528 |
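"Text Overlap" is not a standard metric name, and the card does not define it. One plausible reading (an assumption, not taken from the card) is a unigram Jaccard overlap between the input and the rewrite; a low score then means the rewrite reuses few of the original words:

```python
def text_overlap(source: str, rewrite: str) -> float:
    """Unigram Jaccard overlap between two texts (hypothetical definition,
    not necessarily the one used for the table above)."""
    a = set(source.lower().split())
    b = set(rewrite.lower().split())
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# The Before/After example from this card scores very low, as expected
# for an aggressive rewrite:
before = ("In today's rapidly evolving digital landscape, it is imperative "
          "for organizations to leverage strategic methodologies in order "
          "to maximize engagement.")
after = ("Online markets move fast. If a company wants attention, it needs "
         "smart strategy, clear messaging, and content people actually care about.")

print(round(text_overlap(before, after), 4))
```

Identical texts score 1.0, disjoint texts 0.0, so the 0.0528 in the table is consistent with a heavy rewrite under this kind of definition.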
## Best Use Cases
- SEO rewriting
- Blog enhancement
- Student writing cleanup
- Email personalization
- AI content polishing
- SaaS integrations
- Human-style generation pipelines
## Before vs After

**Input**

> In today’s rapidly evolving digital landscape, it is imperative for organizations to leverage strategic methodologies in order to maximize engagement.

**HumanFlow Output**

> Online markets move fast. If a company wants attention, it needs smart strategy, clear messaging, and content people actually care about.
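To produce rewrites like the one above programmatically, the instruction can be wrapped in a small helper. The exact prompt wording here is an assumption; the card does not document a required prompt template:

```python
def build_rewrite_prompt(text: str) -> str:
    """Wrap raw text in a rewrite instruction for HumanFlow.

    The phrasing is assumed, not a documented template.
    """
    return f"Rewrite this in a more human tone:\n{text.strip()}\n"

prompt = build_rewrite_prompt(
    "In today's rapidly evolving digital landscape, it is imperative "
    "for organizations to leverage strategic methodologies."
)
print(prompt)
```

The resulting string can be passed straight to the tokenizer in the Quickstart below in place of the hard-coded prompt.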
## Quickstart

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "randhir302/HumanFlow"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = """
Rewrite this in a more human tone:
Artificial intelligence is transforming industries worldwide.
"""

# Place inputs on the same device the model was dispatched to,
# rather than assuming "cuda" is available.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=220,
    temperature=0.75,
    top_p=0.90,
    repetition_penalty=1.10,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Recommended Settings

```python
temperature = 0.75
top_p = 0.90
repetition_penalty = 1.10
max_new_tokens = 700
```
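The settings above can be collected into a single kwargs dict and reused across calls (a minimal sketch; parameter names follow the standard `transformers` `generate()` API, and `do_sample=True` is added because `temperature` and `top_p` only take effect when sampling is enabled):

```python
# Recommended sampling settings from this card, as generate() kwargs.
GENERATION_KWARGS = {
    "do_sample": True,          # temperature/top_p are ignored in greedy mode
    "temperature": 0.75,
    "top_p": 0.90,
    "repetition_penalty": 1.10,
    "max_new_tokens": 700,
}

# Usage, with the model and inputs from the Quickstart:
# outputs = model.generate(**inputs, **GENERATION_KWARGS)
print(GENERATION_KWARGS)
```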
## Roadmap
- [x] Public Launch
- [x] Hugging Face Release
- [x] Fine-Tuned Base Model
- [ ] GGUF Quantized Release
- [ ] HumanFlow Pro API
- [ ] Browser Editor
- [ ] Multilingual Version
---
## Community
If HumanFlow helps you:
⭐ Like the model
⭐ Share outputs
⭐ Benchmark it
⭐ Build products with it