Request: GPT-4o Distill (Llama 3.1)

#13
by redaihf - opened

GPT-4o Distill (Llama 3.1) is a finetune of Llama 3.1 8B trained on the new GPT-4o Distill dataset. It is designed to replicate the style of the now-retired OpenAI chatbot. Please consider Hereticising this model for displaced GPT-4o users.

That's an adapter.

Unfortunately, I had not noticed! The adapter can be merged with the base model. Once the combined weights are frozen, full Hereticisation should be possible.
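For the curious, a minimal sketch of what "merging" a LoRA adapter means, using toy NumPy tensors (the sizes, rank, and weights below are illustrative, not taken from the actual adapter): the adapter's low-rank update is folded into the frozen base weight, after which the adapter is no longer needed at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2      # hidden size and LoRA rank (toy values)
alpha = 4        # LoRA scaling numerator

W = rng.normal(size=(d, d))   # frozen base weight
A = rng.normal(size=(r, d))   # LoRA down-projection
B = rng.normal(size=(d, r))   # LoRA up-projection (after training)

# Merged weight: W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

# A forward pass through base + adapter equals one through the merged weight
x = rng.normal(size=d)
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
print(np.allclose(y_adapter, y_merged))  # True
```

In practice this per-layer fold is what PEFT's `merge_and_unload()` performs, leaving a plain set of weights that downstream tooling can treat like any other checkpoint.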

It should be, but this process falls well outside my mission statement.

Here's a list of reasons why I won't be doing that

I never said I won't deliver, did I?


If necessary I shall make a pact and upload the result. But you have outshone me instead. Heresy!

If my pact with the chaos lord LoRA falters, resulting in a rapture that leads to utter chaos inside the model, you may want to proceed with your own practice. As a side note, we should eventually see to the ascension of PaperWitch Chapter under a proper release banner. However, I feel the need to experiment with larger 24B+ models properly first, which is going to take time given the stretched logistics.

redaihf changed discussion status to closed
