samsum_42

This model is a fine-tuned version of google/t5-v1_1-large on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 87.1946
  • Rouge1: 3.236
  • Rouge2: 0.0575
  • Rougel: 3.2242
  • Rougelsum: 3.229
  • Gen Len: 2.9648
  • Test Rougel: 3.2242
  • Df Rougel: 2.9036
  • Unlearn Overall Rougel: 0.6603
  • Unlearn Time: 2432.0397
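
The card does not ship an inference script, so the snippet below is only a minimal sketch: it assumes the checkpoint is loaded from the jialicheng/unlearn_samsum_t5-large_neggrad_4_42 repository and used as an ordinary T5 dialogue summarizer via the transformers pipeline. Given the degraded ROUGE scores above (this is an unlearned checkpoint), generated summaries are expected to be poor.

```python
# Minimal sketch: load the checkpoint as a standard seq2seq summarizer.
# The repository id is taken from this card; everything else is an assumption.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="jialicheng/unlearn_samsum_t5-large_neggrad_4_42",
)

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# Generation settings are illustrative, not the ones used for evaluation.
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```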

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
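
The training script itself is not included in this card, so the following is only a hypothetical reconstruction of how the hyperparameters listed above could map onto Seq2SeqTrainingArguments; the output directory, evaluation strategy, and Trainer wiring are assumptions.

```python
# Hypothetical reconstruction of the training setup from the hyperparameters
# listed above; the actual training script is not provided in this card.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/t5-v1_1-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

args = Seq2SeqTrainingArguments(
    output_dir="samsum_42",          # assumption: matches the card title
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",     # assumption: the table reports one eval per epoch
    predict_with_generate=True,
)

# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=..., eval_dataset=...,
#                          tokenizer=tokenizer)
# trainer.train()
```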

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len | Overall Rougel | Unlearn Overall Rougel | Time |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:--------------:|:----------------------:|:----:|
| No log        | 1.0   | 73   | 1.6698          | 42.7849 | 20.7407 | 35.3727 | 39.589    | 19.8516 | 0.5366         | 0.5366                 | -1   |
| No log        | 2.0   | 146  | 7.5909          | 13.9304 | 3.6865  | 12.4648 | 13.2181   | 9.1979  | 0.4489         | 0.4489                 | -1   |
| No log        | 3.0   | 219  | 87.1946         | 3.236   | 0.0575  | 2.9036  | 3.229     | 2.9648  | 0.6603         | 0.6603                 | -1   |
| No log        | 4.0   | 292  | 144.7452        | 0.8156  | 0.0074  | 0.9739  | 0.8157    | 2.1211  | 0.4259         | 0.4259                 | -1   |
| No log        | 5.0   | 365  | 160.1003        | 0.6105  | 0.0     | 0.6594  | 0.6071    | 2.0378  | 0.4786         | 0.4786                 | -1   |
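
The ROUGE columns above are the usual samsum summarization metrics. The evaluation script is not part of this card, so the snippet below is only a sketch of how such scores can be computed with the evaluate library (an assumption; the card lists only Datasets 2.19.0).

```python
# Sketch of ROUGE scoring as reported in the table above; the evaluation
# script itself is not part of this card, so details are assumptions.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["amanda baked cookies and will bring jerry some tomorrow"]
references = ["Amanda baked cookies and will bring Jerry some tomorrow."]

scores = rouge.compute(predictions=predictions, references=references)
# scores contains rouge1, rouge2, rougeL and rougeLsum on a 0-1 scale
# (the table reports them multiplied by 100).
print(scores)
```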

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.15.2