I fine-tuned openai/whisper-tiny on processed data from multiple public data sources and pushed it to the Hugging Face Hub (latest verified commit dc8c9d0; the repo holds the usual "Model save", "Training in progress, step 500", and "Upload tokenizer.json" commits covering the model weights, tokenizer, and config files). On the repo's Files page, one file triggers a pickle-scan warning.
The Hub flags training_args.bin with "Detected Pickle imports (10)":
- "transformers.training_args.OptimizerNames",
- "torch.device",
- "accelerate.utils.dataclasses.DistributedType",
- "transformers.trainer_pt_utils.AcceleratorConfig",
- "transformers.trainer_utils.HubStrategy",
- "accelerate.state.PartialState",
- "transformers.trainer_utils.SchedulerType",
- "transformers.trainer_utils.IntervalStrategy",
- "transformers.training_args_seq2seq.Seq2SeqTrainingArguments",
- "transformers.trainer_utils.SaveStrategy"
How can I fix this warning?