Training Notes (9B)
Summary
- Base model: FLUX.2-klein-base-9B
- LoRA rank: 16
- Optimizer: AdamW8bit
- Learning rate: 3e-5
- Training steps (tentative best): 600
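The settings above can be collected into a single config object. This is a minimal sketch; the key names are illustrative and do not follow any specific trainer's schema.

```python
# Hypothetical config mirroring the summary; key names are assumptions,
# not the schema of an actual training framework.
train_config = {
    "base_model": "FLUX.2-klein-base-9B",
    "lora_rank": 16,
    "optimizer": "AdamW8bit",
    "learning_rate": 3e-5,
    "max_steps": 600,  # tentative best checkpoint
}
```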
Dataset
- Source images were collected from open-license panorama/HDRI archives on the web.
- Licensing was centered on permissive sources (CC0 / Public Domain), with part of the set consisting of CC BY assets.
- After preprocessing and filtering, training was run on a curated ERP pair set of about 1000 samples.
Preprocess / Pair Generation
- Target format: ERP (equirectangular), 2:1 aspect ratio.
- Source panoramas were normalized to ERP and cleaned (invalid aspect/failed reads removed).
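The aspect/validity filter described above can be sketched as a simple predicate on image dimensions. This is an assumption-laden illustration: the function name, the tolerance, and the treatment of failed reads as zero-sized images are all hypothetical, not the actual pipeline code.

```python
def is_valid_erp(width: int, height: int, tol: float = 0.01) -> bool:
    """Check that an image is usable as an ERP panorama (2:1 aspect).

    Failed reads are represented here as non-positive dimensions and
    rejected outright; real code would also catch decode exceptions.
    """
    if width <= 0 or height <= 0:
        return False
    return abs(width / height - 2.0) <= tol

# Keep only panoramas that pass the aspect check.
sizes = [(4096, 2048), (4000, 2000), (1920, 1080), (0, 0)]
kept = [s for s in sizes if is_valid_erp(*s)]
# kept -> [(4096, 2048), (4000, 2000)]
```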
- Control images were generated by:
- building a green ERP canvas,
  - sampling 1-3 reference patches,
- projecting patches with a pinhole-based spherical mapping,
- handling seam continuity with wrap-aware placement.
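The steps above can be sketched in simplified form: a green canvas plus a paste routine whose column index wraps at the right edge, which is what makes placement seam-aware in ERP space. The full pinhole-based spherical mapping is omitted; function names and the plain-list image representation are illustrative assumptions.

```python
GREEN = (0, 255, 0)

def make_canvas(w: int, h: int):
    """Green ERP canvas (list of rows of RGB tuples) to paste patches onto."""
    return [[GREEN] * w for _ in range(h)]

def paste_wrapped(canvas, patch, top: int, left: int):
    """Paste a patch with horizontal wrap-around for ERP seam continuity.

    Columns past the right edge wrap to column 0. The pinhole-to-sphere
    projection from the notes is skipped; this only shows the wrapping.
    """
    w = len(canvas[0])
    for dy, row in enumerate(patch):
        for dx, px in enumerate(row):
            canvas[top + dy][(left + dx) % w] = px
    return canvas

canvas = make_canvas(8, 4)
patch = [[(255, 0, 0)] * 3 for _ in range(2)]  # 3x2 red patch
paste_wrapped(canvas, patch, top=1, left=7)    # crosses the ERP seam
# canvas[1][7], canvas[1][0], canvas[1][1] are now red
```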
- The model is trained to fill green regions while preserving visible reference context.
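The fill target implied above can be derived by masking the green placeholder pixels. A minimal sketch, assuming an exact-match pure-green key; a real pipeline would likely use a tolerance to survive compression artifacts.

```python
GREEN = (0, 255, 0)

def green_mask(image):
    """Return 1 where a pixel is the green placeholder (to be filled), else 0."""
    return [[1 if px == GREEN else 0 for px in row] for row in image]

img = [[GREEN, (10, 20, 30)],
       [(0, 255, 0), GREEN]]
mask = green_mask(img)  # [[1, 0], [1, 1]]
```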