How to use google/t5-efficient-base-ff12000 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-ff12000")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-base-ff12000")
```
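Once loaded, the model can be run end-to-end with `generate`. A minimal inference sketch follows; note that the T5-efficient checkpoints were only pretrained (span-denoising), not fine-tuned on a downstream task, so raw generations are illustrative and the prompt style shown here is an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-ff12000")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-base-ff12000")

# Tokenize an input; T5 uses text-to-text prompts (task prefix is illustrative,
# since this checkpoint has not been fine-tuned on any supervised task).
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)

# Greedy decoding; cap the output length so generation terminates quickly.
output_ids = model.generate(**inputs, max_new_tokens=32)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

For meaningful outputs, fine-tune the checkpoint on a downstream task first (e.g. with the standard `Seq2SeqTrainer` workflow).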
tags:
- t5-new-success
Parity check between the Hugging Face (HF) port and the original Mesh TensorFlow (MTF) checkpoint — the scores agree to within ~1e-4:

HF T5:  -62.53145694732666
MTF T5: -62.53152084350586