LoRA: Aim Control Style for Mapperatorinator

This is a LoRA (Low-Rank Adaptation) fine-tune of the OliBomby/Mapperatorinator-v30 model, trained to generate beatmaps with an aim control style.

AB Test Note

If you came from mjoink/Mapperatorinator-v30-LoRA-AB-test, this is the model I consider "better". You can verify by checking the hash:

SHA256 of adapter_model.safetensors:

7c3f73af005dd2b20eb491bd227f4933550f2b7602b5d4e66f4601ee377f9c45
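
To check the hash locally, a minimal Python sketch (the file path is a placeholder for wherever you downloaded adapter_model.safetensors):

```python
import hashlib

EXPECTED = "7c3f73af005dd2b20eb491bd227f4933550f2b7602b5d4e66f4601ee377f9c45"

def sha256_of(path: str) -> str:
    """Hash the file in 1 MiB chunks so large adapters don't load fully into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Point this at your local copy of the adapter:
# assert sha256_of("adapter_model.safetensors") == EXPECTED
```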

Model Details

  • Base Model: OliBomby/Mapperatorinator-v30
  • Model Type: LoRA

How to Use

You can load the LoRA directly from this repository without downloading the files manually; the inference script fetches them automatically.

  1. Run Inference: Use the lora_path argument and set it to the name of this repository.
    python inference.py \
      audio_path='/path/to/your/audio.mp3' \
      output_path='/path/to/your/output_folder' \
      lora_path='mjoink/Mapperatorinator-v30-LoRA-aim-control' \
      ... # other arguments
    

Training Parameters

This LoRA was trained with the following parameters. In testing, these settings reached above 99% on all test/*_acc metrics, which is why they are documented here. The configuration inheritance chain is: default.yaml -> v30.yaml -> lora.yaml -> this.yaml.

defaults:
  - lora
  - _self_

compile: false

data:
  dataset_type: "mmrs"
  train_dataset_path: ""
  test_dataset_path: ""
  train_dataset_start: 0
  train_dataset_end: 40
  test_dataset_start: 40
  test_dataset_end: 45

optim:
  name: muon
  base_lr: 0.003
  base_lr_2: 0.00075
  batch_size: 8
  grad_acc: 1
  total_steps: 7000
  warmup_steps: 100

eval:
  every_steps: 1000
  steps: 500

Based on this configuration, you can extrapolate the training parameters needed to reach the same 99% accuracy on a dataset of around 200 beatmaps.

Please note that only 40 of the 45 beatmaps were used for training in this run; the remaining 5 were held out for evaluation. When dealing with such an extremely small dataset, it is generally recommended to use all available data for training instead of splitting it.
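
As a rough sketch, one way to extrapolate is to scale total_steps linearly with dataset size from this run's 40-map / 7000-step setup. Linear scaling is an assumption here, not a tested recipe:

```python
# Hypothetical extrapolation: scale total_steps linearly with dataset size.
# The 40-map / 7000-step reference values come from the config above;
# linear scaling itself is an assumption, not something verified in this run.

REFERENCE_MAPS = 40
REFERENCE_STEPS = 7000

def extrapolate_total_steps(n_maps: int) -> int:
    """Suggest a total_steps value for a dataset of n_maps beatmaps."""
    return round(REFERENCE_STEPS * n_maps / REFERENCE_MAPS)

print(extrapolate_total_steps(200))  # 35000
```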
