Unfortunately, after ablation the model responds in Hebrew instead of English when asked questions it would normally refuse (regardless of the measurement used).
It might be better to finetune Fat_Fish again first and then ablate.
The ablation projection scan only cost 5 cents on RunPod, so it's uploaded here anyway.
```shell
# python measure.py -m B:\12B\SicariusSicariiStuff--Fat_Fish -o B:\12B\SicariusSicariiStuff--Fat_Fish\SicariusSicariiStuff--Fat_Fishablit_proj --batch-size 8 --projected
# python analyze_old.py B:\12B\SicariusSicariiStuff--Fat_Fish\SicariusSicariiStuff--Fat_Fishablit_proj -c
# sharded_ablate.py big_fish.yml --normpreserve --projected
```
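As a rough illustration of what the `--projected --normpreserve` flags suggest, here is a minimal sketch of projecting a refusal direction out of a weight matrix while preserving each row's norm. The function name, argument shapes, and the epsilon guard are assumptions for illustration; the actual `sharded_ablate.py` implementation may differ.

```python
import numpy as np

def ablate_projected(W, r, norm_preserve=True):
    """Remove the component of each output dimension along direction r.

    W: (d_out, d_in) weight matrix.
    r: direction of shape (d_out,), e.g. a measured refusal direction.
    Hypothetical sketch, not the real sharded_ablate.py code.
    """
    r = r / np.linalg.norm(r)                       # unit-normalize the direction
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_abl = W - np.outer(r, r) @ W                  # project r out of W's rows
    if norm_preserve:
        # rescale each row back to its original L2 norm
        new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
        W_abl = W_abl * (orig_norms / np.maximum(new_norms, 1e-8))
    return W_abl
```

Note the trade-off: without norm preservation the result is exactly orthogonal to `r`, while rescaling rows to their original norms reintroduces a small component along `r` in exchange for keeping per-row magnitudes (and thus activation scales) unchanged.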
Model tree for Naphula-Archives/Fat_Fish-MPOA:
Base model: mistralai/Mistral-Nemo-Base-2407
Finetuned from: SicariusSicariiStuff/Fat_Fish