Request: GPT-4o Distill (Llama 3.1)
GPT-4o Distill (Llama 3.1) is a finetune of Llama 3.1 8B trained on the new GPT-4o Distill dataset. It is designed to replicate the style of the now-retired OpenAI chatbot. Please consider Hereticising this model for displaced GPT-4o users.
That's an adapter.
Unfortunately I hadn't noticed! The adapter can be merged with the base model, though. Once the combined weights are frozen, full Hereticisation should be possible.
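For what merging amounts to: the adapter's low-rank update is folded directly into the base weights, so the result is a single plain weight matrix per layer. In practice this is one call on a PEFT-wrapped model (`merge_and_unload()` in the `peft` library); below is a minimal NumPy sketch of the underlying arithmetic, with made-up dimensions, just to show that the merged matrix behaves identically to base-plus-adapter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: one base weight matrix plus a rank-4 LoRA adapter.
d_out, d_in, r, alpha = 8, 8, 4, 8

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # LoRA down-projection
B = rng.standard_normal((d_out, r))     # LoRA up-projection

# Merging folds the scaled low-rank update into the base weight:
#   W_merged = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * B @ A

# A forward pass through the merged matrix matches applying the
# base weight and the adapter separately.
x = rng.standard_normal(d_in)
y_separate = W @ x + (alpha / r) * B @ (A @ x)
y_merged = W_merged @ x
print(np.allclose(y_separate, y_merged))  # True
```

After the merge the adapter is gone as a separate object, which is why downstream tooling that expects ordinary full weights can then operate on the model.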
If my pact with the chaos lord LoRA falters, resulting in a rapture that leads to utter chaos inside the model, you may want to proceed with your own practice. As a side note, we should eventually see to the ascension of PaperWitch Chapter under a proper release banner. However, I feel the need to experiment properly with larger 24B+ models first, which is going to take time given the stretched logistics.
Thanks @MuXodious !
