SahilCarterr committed
Commit ca71505 · verified · 1 Parent(s): 6629ec1

Upload 19 files

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ 32-photo.jpg filter=lfs diff=lfs merge=lfs -text
32-photo.jpg ADDED

Git LFS Details

  • SHA256: e38a82d597c8c78c2c203cd49a545b87b11088658683c5fed743e91e6967bd46
  • Pointer size: 131 Bytes
  • Size of remote file: 172 kB
README.md CHANGED
@@ -1,5 +1,152 @@
1
- ---
2
- license: other
3
- license_name: bria-3.2
4
- license_link: LICENSE
5
- ---
1
+ ---
2
+ library_name: diffusers
3
+ inference: false
4
+ tags:
5
+ - text-to-image
6
+ - legal liability
7
+ - Non Commercial Use
8
+ extra_gated_description: >-
9
+ Bria AI Model weights are open source for non commercial use only, per the
10
+ provided [license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
11
+ extra_gated_heading: Fill in this form to immediately access the model for non-commercial use
12
+ extra_gated_fields:
13
+ Name: text
14
+ Email: text
15
+ Company/Org name: text
16
+ Company Website URL: text
17
+ Discord user: text
18
+ I agree to BRIA’s Privacy policy, Terms & conditions, and acknowledge Non commercial use to be Personal use / Academy / Non profit (direct or indirect): checkbox
19
+ license: other
20
+ license_name: bria-3.2
21
+ license_link: https://creativecommons.org/licenses/by-nc/4.0/deed.en
22
+ ---
23
+
24
+
25
+
26
+ # TL;DR
27
+
28
+ Bria 3.2 is the next-generation commercial-ready text-to-image model.
29
+ **With just 4 billion parameters**, it provides exceptional aesthetics and text rendering, and has been evaluated to deliver **results on par with leading open-source models while outperforming other licensed models**.
30
+
31
+ In addition to being **built entirely on licensed data**, 3.2 provides several advantages for enterprise and commercial use:
32
+ * Efficient Compute: the model is 3x smaller than equivalent models on the market (4B parameters vs. 12B parameters for other open-source models).
33
+ * Architecture Consistency: Same architecture as 3.1—ideal for users looking to upgrade without disruption.
34
+ * Fine-tuning Speedup: 2x faster fine-tuning on L40S and A100.
35
+
36
+
37
+ [CLICK HERE FOR A DEMO](https://huggingface.co/spaces/briaai/BRIA-3.2-API)
38
+
39
+
40
+
41
+ ![](32-photo.jpg)
42
+
43
+
44
+
45
+ # BRIA 3.2: Training data and Commercial Licensing
46
+
47
+ BRIA 3.2 is our latest text-to-image model explicitly designed for commercial applications.
48
+ This model combines technological innovation with ethical responsibility and legal security, setting a new standard in the AI industry.
49
+ Bria AI licenses the foundation model with full legal liability coverage.
50
+ Our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
51
+
52
+ For more information, please visit our [website](https://bria.ai/).
53
+
54
+ Join our [Discord community](https://discord.gg/Nxe9YW9zHS) for more information, tutorials, tools, and to connect with other users!
55
+
56
+ ### For a Commercial License, click [here](https://bria.ai/contact-us?hsCtaAttrib=114250296256).
57
+
58
+
59
+ # What's New vs. Previous Models
60
+
61
+ - **Improved Aesthetics**:
62
+
63
+ - 65% user preference for BRIA 3.2 over BRIA 3.1.
64
+ - 76% user preference for BRIA 3.2 over BRIA 2.3.
65
+
66
+ - **Superior Text Rendering**: The model is optimized to generate short text consisting of 1-6 words, with the OCR score improving from 5% (3.1) to 70% (3.2).
67
+
68
+ - **Consistent Prompt Alignment**: Maintains strong adherence to the textual description in the prompt.
69
+
70
+
71
+
72
+ ### Get Access
73
+ Bria 3.2 is available everywhere you build: as source code and weights, ComfyUI nodes, or API endpoints.
74
+
75
+ - **API Endpoint**: [Bria.ai](https://docs.bria.ai/image-generation/endpoints/text-to-image-base), [Fal.ai](https://fal.ai/models/bria/text-to-image/3.2), [Replicate](https://replicate.com/bria/image-3.2)
76
+ - **ComfyUI**: [Use it in workflows](https://github.com/Bria-AI/ComfyUI-BRIA-API)
77
+ - **GitHub**: [github.com/Bria-AI/BRIA-3.2](https://github.com/Bria-AI/BRIA-3.2)
78
+ - **Interested in BRIA 3.2 source code and weights for commercial use?** Purchase is required to license BRIA 3.2 for commercial use, ensuring royalty management with our data partners and full liability coverage.
79
+ - Are you a startup or a student? We encourage you to apply for our [Startup Program](https://pages.bria.ai/the-visual-generative-ai-platform-for-builders-startups-plan?_gl=1*cqrl81*_ga*MTIxMDI2NzI5OC4xNjk5NTQ3MDAz*_ga_WRN60H46X4*MTcwOTM5OTMzNC4yNzguMC4xNzA5Mzk5MzM0LjYwLjAuMA..) to request access. This program is designed to support emerging businesses and academic pursuits with our cutting-edge technology.
80
+ - By submitting the form above, you agree to BRIA’s [Privacy policy](https://bria.ai/privacy-policy/) and [Terms & conditions](https://bria.ai/terms-and-conditions/).
81
+
82
+
83
+
84
+ # Key Features
85
+
86
+ - **Architecture**: 4B-parameter rectified-flow transformer model with a T5 text encoder.
87
+
88
+ - **Legally Compliant**: Offers full legal liability coverage for copyright and privacy infringements. Thanks to training on 100% licensed data from leading data partners, we ensure the ethical use of content.
89
+
90
+ - **Patented Attribution Engine**: Our attribution engine is our way to compensate our data partners, powered by our proprietary and patented algorithms.
91
+
92
+ - **Enterprise-Ready**: Specifically designed for business applications, Bria AI 3.2 delivers high-quality, compliant imagery for a variety of commercial needs.
93
+
94
+ - **Customizable Technology**: Provides access to source code and weights for extensive customization, catering to specific business requirements.
95
+
96
+ ### Model Description
97
+
98
+ - **Developed by:** BRIA AI
99
+ - **Model type:** Latent diffusion text-to-image model
100
+ - **Resources for more information:** [BRIA AI](https://bria.ai/)
101
+
102
+
103
+
104
+
105
+ ### Code example using Diffusers
106
+
107
+
108
+ ```bash
109
+ pip install diffusers transformers huggingface_hub
110
+ ```
111
+
112
+
113
+
114
+ ```python
115
+ from huggingface_hub import hf_hub_download
116
+ import os
117
+
118
+ try:
119
+     local_dir = os.path.dirname(__file__)
120
+ except NameError:  # __file__ is undefined in notebooks and interactive sessions
121
+     local_dir = '.'
122
+
123
+ hf_hub_download(repo_id="briaai/BRIA-3.2", filename='pipeline_bria.py', local_dir=local_dir)
124
+ hf_hub_download(repo_id="briaai/BRIA-3.2", filename='transformer_bria.py', local_dir=local_dir)
125
+ hf_hub_download(repo_id="briaai/BRIA-3.2", filename='bria_utils.py', local_dir=local_dir)
126
+
127
+ import torch
128
+ from pipeline_bria import BriaPipeline, BriaTransformer2DModel
129
+
130
+ # trust_remote_code=True allows loading the custom transformer class that is not part of the diffusers library (defined in transformer_bria.py)
131
+ pipe = BriaPipeline.from_pretrained("briaai/BRIA-3.2", torch_dtype=torch.bfloat16, trust_remote_code=True)
132
+ pipe.to(device="cuda")
133
+
134
+ prompt = "A portrait of a Beautiful and playful ethereal singer, golden designs, highly detailed, blurry background"
135
+ negative_prompt = "Logo,Watermark,Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate,Mutilated,Mutilated hands,Poorly drawn face,Deformed,Bad anatomy,Cloned face,Malformed limbs,Missing legs,Too many fingers"
136
+
137
+ image = pipe(prompt=prompt, negative_prompt=negative_prompt, height=1024, width=1024).images[0]
138
+ ```
139
+
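+ The full pipeline is large in bfloat16 (the T5 text encoder shards alone total roughly 11.5 GB), so on smaller GPUs you may prefer CPU offloading instead of `pipe.to(device="cuda")`. The snippet below is a minimal sketch that assumes `BriaPipeline` inherits the standard Diffusers offloading hooks and that `accelerate` is installed:
+ 
+ ```python
+ # Alternative to pipe.to(device="cuda"): keep sub-models on the CPU and move
+ # each one to the GPU only while it runs. Assumes BriaPipeline exposes the
+ # standard DiffusionPipeline offloading API and that `accelerate` is installed.
+ pipe.enable_model_cpu_offload()
+ 
+ image = pipe(prompt=prompt, negative_prompt=negative_prompt, height=1024, width=1024).images[0]
+ image.save("bria_output.png")
+ ```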
140
+
141
+ ### Some tips for using our text-to-image model at inference:
142
+
143
+
144
+ 1. Using a negative prompt is recommended.
145
+ 2. For fine-tuning, use zeros instead of the null text embedding.
146
+ 3. Multiple aspect ratios are supported, but the overall resolution should be approximately `1024*1024=1M` pixels, for example:
147
+ `(1024, 1024), (1280, 768), (1344, 768), (832, 1216), (1152, 832), (1216, 832), (960, 1088)`
148
+ 
149
+ 4. Use 30-50 steps (higher is better).
150
+ 5. Use a `guidance_scale` of 5.0. A short sketch applying these settings follows below.
151
+
152
+
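+ Putting the tips together, here is a minimal sketch. It assumes `pipe` was loaded as in the Diffusers example above; the prompt text and seed are illustrative, and the parameter names follow standard Diffusers conventions, which `BriaPipeline` is assumed to support:
+ 
+ ```python
+ import torch
+ 
+ # Short text of 1-6 words renders best (see Superior Text Rendering above).
+ prompt = "A poster that says 'Open Studio' in bold letters"
+ negative_prompt = "Logo,Watermark,Blurry,Bad anatomy,Gross proportions"
+ 
+ generator = torch.Generator(device="cuda").manual_seed(0)  # optional, for reproducibility
+ image = pipe(
+     prompt=prompt,
+     negative_prompt=negative_prompt,  # tip 1: use a negative prompt
+     height=768,
+     width=1280,                       # tip 3: ~1M pixels total, one of the listed ratios
+     num_inference_steps=40,           # tip 4: 30-50 steps
+     guidance_scale=5.0,               # tip 5
+     generator=generator,
+ ).images[0]
+ image.save("bria_sample.png")
+ ```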
model_index.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "_class_name": "BriaPipeline",
3
+ "_diffusers_version": "0.33.1",
4
+ "_name_or_path": "briaai/BRIA-3.1",
5
+ "scheduler": [
6
+ "diffusers",
7
+ "FlowMatchEulerDiscreteScheduler"
8
+ ],
9
+ "text_encoder": [
10
+ "transformers",
11
+ "T5EncoderModel"
12
+ ],
13
+ "tokenizer": [
14
+ "transformers",
15
+ "T5TokenizerFast"
16
+ ],
17
+ "transformer": [
18
+ "transformer_bria",
19
+ "BriaTransformer2DModel"
20
+ ],
21
+ "vae": [
22
+ "diffusers",
23
+ "AutoencoderKL"
24
+ ]
25
+ }
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "_class_name": "FlowMatchEulerDiscreteScheduler",
3
+ "_diffusers_version": "0.33.1",
4
+ "base_image_seq_len": 256,
5
+ "base_shift": 0.5,
6
+ "invert_sigmas": false,
7
+ "max_image_seq_len": 4096,
8
+ "max_shift": 1.15,
9
+ "num_train_timesteps": 1000,
10
+ "shift": 3.0,
11
+ "shift_terminal": null,
12
+ "time_shift_type": "exponential",
13
+ "use_beta_sigmas": false,
14
+ "use_dynamic_shifting": true,
15
+ "use_exponential_sigmas": false,
16
+ "use_karras_sigmas": false
17
+ }
text_encoder/config.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "architectures": [
3
+ "T5EncoderModel"
4
+ ],
5
+ "classifier_dropout": 0.0,
6
+ "d_ff": 10240,
7
+ "d_kv": 64,
8
+ "d_model": 4096,
9
+ "decoder_start_token_id": 0,
10
+ "dense_act_fn": "gelu_new",
11
+ "dropout_rate": 0.1,
12
+ "eos_token_id": 1,
13
+ "feed_forward_proj": "gated-gelu",
14
+ "initializer_factor": 1.0,
15
+ "is_encoder_decoder": true,
16
+ "is_gated_act": true,
17
+ "layer_norm_epsilon": 1e-06,
18
+ "model_type": "t5",
19
+ "num_decoder_layers": 24,
20
+ "num_heads": 64,
21
+ "num_layers": 24,
22
+ "output_past": true,
23
+ "pad_token_id": 0,
24
+ "relative_attention_max_distance": 128,
25
+ "relative_attention_num_buckets": 32,
26
+ "tie_word_embeddings": false,
27
+ "torch_dtype": "float16",
28
+ "transformers_version": "4.52.0",
29
+ "use_cache": true,
30
+ "vocab_size": 32128
31
+ }
text_encoder/model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2806b1cf07fc6eac6c5059811aea4e069d69df34b782a0a85cd6a2b57de48404
3
+ size 4994546896
text_encoder/model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51aa7ace7b240403ef440b7387445aee7dd585cd0240d7773567cad5a0f1ed61
3
+ size 4966239920
text_encoder/model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:366835891170b04927afee017584da714dfdb74aac0022d742943a46c612cae7
3
+ size 1577127552
text_encoder/model.safetensors.index.json ADDED
@@ -0,0 +1,226 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 11537887232
4
+ },
5
+ "weight_map": {
6
+ "encoder.block.0.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
7
+ "encoder.block.0.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
8
+ "encoder.block.0.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
9
+ "encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight": "model-00001-of-00003.safetensors",
10
+ "encoder.block.0.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
11
+ "encoder.block.0.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
12
+ "encoder.block.0.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
13
+ "encoder.block.0.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
14
+ "encoder.block.0.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
15
+ "encoder.block.0.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
16
+ "encoder.block.1.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
17
+ "encoder.block.1.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
18
+ "encoder.block.1.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
19
+ "encoder.block.1.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
20
+ "encoder.block.1.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
21
+ "encoder.block.1.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
22
+ "encoder.block.1.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
23
+ "encoder.block.1.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
24
+ "encoder.block.1.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
25
+ "encoder.block.10.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
26
+ "encoder.block.10.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
27
+ "encoder.block.10.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
28
+ "encoder.block.10.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
29
+ "encoder.block.10.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
30
+ "encoder.block.10.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
31
+ "encoder.block.10.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
32
+ "encoder.block.10.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
33
+ "encoder.block.10.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
34
+ "encoder.block.11.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
35
+ "encoder.block.11.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
36
+ "encoder.block.11.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
37
+ "encoder.block.11.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
38
+ "encoder.block.11.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
39
+ "encoder.block.11.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
40
+ "encoder.block.11.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
41
+ "encoder.block.11.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
42
+ "encoder.block.11.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
43
+ "encoder.block.12.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
44
+ "encoder.block.12.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
45
+ "encoder.block.12.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
46
+ "encoder.block.12.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
47
+ "encoder.block.12.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
48
+ "encoder.block.12.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
49
+ "encoder.block.12.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
50
+ "encoder.block.12.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
51
+ "encoder.block.12.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
52
+ "encoder.block.13.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
53
+ "encoder.block.13.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
54
+ "encoder.block.13.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
55
+ "encoder.block.13.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
56
+ "encoder.block.13.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
57
+ "encoder.block.13.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
58
+ "encoder.block.13.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
59
+ "encoder.block.13.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
60
+ "encoder.block.13.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
61
+ "encoder.block.14.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
62
+ "encoder.block.14.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
63
+ "encoder.block.14.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
64
+ "encoder.block.14.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
65
+ "encoder.block.14.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
66
+ "encoder.block.14.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
67
+ "encoder.block.14.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
68
+ "encoder.block.14.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
69
+ "encoder.block.14.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
70
+ "encoder.block.15.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
71
+ "encoder.block.15.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
72
+ "encoder.block.15.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
73
+ "encoder.block.15.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
74
+ "encoder.block.15.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
75
+ "encoder.block.15.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
76
+ "encoder.block.15.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
77
+ "encoder.block.15.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
78
+ "encoder.block.15.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
79
+ "encoder.block.16.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
80
+ "encoder.block.16.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
81
+ "encoder.block.16.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
82
+ "encoder.block.16.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
83
+ "encoder.block.16.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
84
+ "encoder.block.16.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
85
+ "encoder.block.16.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
86
+ "encoder.block.16.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
87
+ "encoder.block.16.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
88
+ "encoder.block.17.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
89
+ "encoder.block.17.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
90
+ "encoder.block.17.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
91
+ "encoder.block.17.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
92
+ "encoder.block.17.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
93
+ "encoder.block.17.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
94
+ "encoder.block.17.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
95
+ "encoder.block.17.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
96
+ "encoder.block.17.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
97
+ "encoder.block.18.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
98
+ "encoder.block.18.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
99
+ "encoder.block.18.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
100
+ "encoder.block.18.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
101
+ "encoder.block.18.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
102
+ "encoder.block.18.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
103
+ "encoder.block.18.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
104
+ "encoder.block.18.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
105
+ "encoder.block.18.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
106
+ "encoder.block.19.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
107
+ "encoder.block.19.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
108
+ "encoder.block.19.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
109
+ "encoder.block.19.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
110
+ "encoder.block.19.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
111
+ "encoder.block.19.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
112
+ "encoder.block.19.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
113
+ "encoder.block.19.layer.1.DenseReluDense.wo.weight": "model-00002-of-00003.safetensors",
114
+ "encoder.block.19.layer.1.layer_norm.weight": "model-00002-of-00003.safetensors",
115
+ "encoder.block.2.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
116
+ "encoder.block.2.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
117
+ "encoder.block.2.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
118
+ "encoder.block.2.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
119
+ "encoder.block.2.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
120
+ "encoder.block.2.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
121
+ "encoder.block.2.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
122
+ "encoder.block.2.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
123
+ "encoder.block.2.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
124
+ "encoder.block.20.layer.0.SelfAttention.k.weight": "model-00002-of-00003.safetensors",
125
+ "encoder.block.20.layer.0.SelfAttention.o.weight": "model-00002-of-00003.safetensors",
126
+ "encoder.block.20.layer.0.SelfAttention.q.weight": "model-00002-of-00003.safetensors",
127
+ "encoder.block.20.layer.0.SelfAttention.v.weight": "model-00002-of-00003.safetensors",
128
+ "encoder.block.20.layer.0.layer_norm.weight": "model-00002-of-00003.safetensors",
129
+ "encoder.block.20.layer.1.DenseReluDense.wi_0.weight": "model-00002-of-00003.safetensors",
130
+ "encoder.block.20.layer.1.DenseReluDense.wi_1.weight": "model-00002-of-00003.safetensors",
131
+ "encoder.block.20.layer.1.DenseReluDense.wo.weight": "model-00003-of-00003.safetensors",
132
+ "encoder.block.20.layer.1.layer_norm.weight": "model-00003-of-00003.safetensors",
133
+ "encoder.block.21.layer.0.SelfAttention.k.weight": "model-00003-of-00003.safetensors",
134
+ "encoder.block.21.layer.0.SelfAttention.o.weight": "model-00003-of-00003.safetensors",
135
+ "encoder.block.21.layer.0.SelfAttention.q.weight": "model-00003-of-00003.safetensors",
136
+ "encoder.block.21.layer.0.SelfAttention.v.weight": "model-00003-of-00003.safetensors",
137
+ "encoder.block.21.layer.0.layer_norm.weight": "model-00003-of-00003.safetensors",
138
+ "encoder.block.21.layer.1.DenseReluDense.wi_0.weight": "model-00003-of-00003.safetensors",
139
+ "encoder.block.21.layer.1.DenseReluDense.wi_1.weight": "model-00003-of-00003.safetensors",
140
+ "encoder.block.21.layer.1.DenseReluDense.wo.weight": "model-00003-of-00003.safetensors",
141
+ "encoder.block.21.layer.1.layer_norm.weight": "model-00003-of-00003.safetensors",
142
+ "encoder.block.22.layer.0.SelfAttention.k.weight": "model-00003-of-00003.safetensors",
143
+ "encoder.block.22.layer.0.SelfAttention.o.weight": "model-00003-of-00003.safetensors",
144
+ "encoder.block.22.layer.0.SelfAttention.q.weight": "model-00003-of-00003.safetensors",
145
+ "encoder.block.22.layer.0.SelfAttention.v.weight": "model-00003-of-00003.safetensors",
146
+ "encoder.block.22.layer.0.layer_norm.weight": "model-00003-of-00003.safetensors",
147
+ "encoder.block.22.layer.1.DenseReluDense.wi_0.weight": "model-00003-of-00003.safetensors",
148
+ "encoder.block.22.layer.1.DenseReluDense.wi_1.weight": "model-00003-of-00003.safetensors",
149
+ "encoder.block.22.layer.1.DenseReluDense.wo.weight": "model-00003-of-00003.safetensors",
150
+ "encoder.block.22.layer.1.layer_norm.weight": "model-00003-of-00003.safetensors",
151
+ "encoder.block.23.layer.0.SelfAttention.k.weight": "model-00003-of-00003.safetensors",
152
+ "encoder.block.23.layer.0.SelfAttention.o.weight": "model-00003-of-00003.safetensors",
153
+ "encoder.block.23.layer.0.SelfAttention.q.weight": "model-00003-of-00003.safetensors",
154
+ "encoder.block.23.layer.0.SelfAttention.v.weight": "model-00003-of-00003.safetensors",
155
+ "encoder.block.23.layer.0.layer_norm.weight": "model-00003-of-00003.safetensors",
156
+ "encoder.block.23.layer.1.DenseReluDense.wi_0.weight": "model-00003-of-00003.safetensors",
157
+ "encoder.block.23.layer.1.DenseReluDense.wi_1.weight": "model-00003-of-00003.safetensors",
158
+ "encoder.block.23.layer.1.DenseReluDense.wo.weight": "model-00003-of-00003.safetensors",
159
+ "encoder.block.23.layer.1.layer_norm.weight": "model-00003-of-00003.safetensors",
160
+ "encoder.block.3.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
161
+ "encoder.block.3.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
162
+ "encoder.block.3.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
163
+ "encoder.block.3.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
164
+ "encoder.block.3.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
165
+ "encoder.block.3.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
166
+ "encoder.block.3.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
167
+ "encoder.block.3.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
168
+ "encoder.block.3.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
169
+ "encoder.block.4.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
170
+ "encoder.block.4.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
171
+ "encoder.block.4.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
172
+ "encoder.block.4.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
173
+ "encoder.block.4.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
174
+ "encoder.block.4.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
175
+ "encoder.block.4.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
176
+ "encoder.block.4.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
177
+ "encoder.block.4.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
178
+ "encoder.block.5.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
179
+ "encoder.block.5.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
180
+ "encoder.block.5.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
181
+ "encoder.block.5.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
182
+ "encoder.block.5.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
183
+ "encoder.block.5.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
184
+ "encoder.block.5.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
185
+ "encoder.block.5.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
186
+ "encoder.block.5.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
187
+ "encoder.block.6.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
188
+ "encoder.block.6.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
189
+ "encoder.block.6.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
190
+ "encoder.block.6.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
191
+ "encoder.block.6.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
192
+ "encoder.block.6.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
193
+ "encoder.block.6.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
194
+ "encoder.block.6.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
195
+ "encoder.block.6.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
196
+ "encoder.block.7.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
197
+ "encoder.block.7.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
198
+ "encoder.block.7.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
199
+ "encoder.block.7.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
200
+ "encoder.block.7.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
201
+ "encoder.block.7.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
202
+ "encoder.block.7.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
203
+ "encoder.block.7.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
204
+ "encoder.block.7.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
205
+ "encoder.block.8.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
206
+ "encoder.block.8.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
207
+ "encoder.block.8.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
208
+ "encoder.block.8.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
209
+ "encoder.block.8.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
210
+ "encoder.block.8.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
211
+ "encoder.block.8.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
212
+ "encoder.block.8.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
213
+ "encoder.block.8.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
214
+ "encoder.block.9.layer.0.SelfAttention.k.weight": "model-00001-of-00003.safetensors",
215
+ "encoder.block.9.layer.0.SelfAttention.o.weight": "model-00001-of-00003.safetensors",
216
+ "encoder.block.9.layer.0.SelfAttention.q.weight": "model-00001-of-00003.safetensors",
217
+ "encoder.block.9.layer.0.SelfAttention.v.weight": "model-00001-of-00003.safetensors",
218
+ "encoder.block.9.layer.0.layer_norm.weight": "model-00001-of-00003.safetensors",
219
+ "encoder.block.9.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00003.safetensors",
220
+ "encoder.block.9.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00003.safetensors",
221
+ "encoder.block.9.layer.1.DenseReluDense.wo.weight": "model-00001-of-00003.safetensors",
222
+ "encoder.block.9.layer.1.layer_norm.weight": "model-00001-of-00003.safetensors",
223
+ "encoder.final_layer_norm.weight": "model-00003-of-00003.safetensors",
224
+ "shared.weight": "model-00001-of-00003.safetensors"
225
+ }
226
+ }
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,125 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<extra_id_0>",
4
+ "<extra_id_1>",
5
+ "<extra_id_2>",
6
+ "<extra_id_3>",
7
+ "<extra_id_4>",
8
+ "<extra_id_5>",
9
+ "<extra_id_6>",
10
+ "<extra_id_7>",
11
+ "<extra_id_8>",
12
+ "<extra_id_9>",
13
+ "<extra_id_10>",
14
+ "<extra_id_11>",
15
+ "<extra_id_12>",
16
+ "<extra_id_13>",
17
+ "<extra_id_14>",
18
+ "<extra_id_15>",
19
+ "<extra_id_16>",
20
+ "<extra_id_17>",
21
+ "<extra_id_18>",
22
+ "<extra_id_19>",
23
+ "<extra_id_20>",
24
+ "<extra_id_21>",
25
+ "<extra_id_22>",
26
+ "<extra_id_23>",
27
+ "<extra_id_24>",
28
+ "<extra_id_25>",
29
+ "<extra_id_26>",
30
+ "<extra_id_27>",
31
+ "<extra_id_28>",
32
+ "<extra_id_29>",
33
+ "<extra_id_30>",
34
+ "<extra_id_31>",
35
+ "<extra_id_32>",
36
+ "<extra_id_33>",
37
+ "<extra_id_34>",
38
+ "<extra_id_35>",
39
+ "<extra_id_36>",
40
+ "<extra_id_37>",
41
+ "<extra_id_38>",
42
+ "<extra_id_39>",
43
+ "<extra_id_40>",
44
+ "<extra_id_41>",
45
+ "<extra_id_42>",
46
+ "<extra_id_43>",
47
+ "<extra_id_44>",
48
+ "<extra_id_45>",
49
+ "<extra_id_46>",
50
+ "<extra_id_47>",
51
+ "<extra_id_48>",
52
+ "<extra_id_49>",
53
+ "<extra_id_50>",
54
+ "<extra_id_51>",
55
+ "<extra_id_52>",
56
+ "<extra_id_53>",
57
+ "<extra_id_54>",
58
+ "<extra_id_55>",
59
+ "<extra_id_56>",
60
+ "<extra_id_57>",
61
+ "<extra_id_58>",
62
+ "<extra_id_59>",
63
+ "<extra_id_60>",
64
+ "<extra_id_61>",
65
+ "<extra_id_62>",
66
+ "<extra_id_63>",
67
+ "<extra_id_64>",
68
+ "<extra_id_65>",
69
+ "<extra_id_66>",
70
+ "<extra_id_67>",
71
+ "<extra_id_68>",
72
+ "<extra_id_69>",
73
+ "<extra_id_70>",
74
+ "<extra_id_71>",
75
+ "<extra_id_72>",
76
+ "<extra_id_73>",
77
+ "<extra_id_74>",
78
+ "<extra_id_75>",
79
+ "<extra_id_76>",
80
+ "<extra_id_77>",
81
+ "<extra_id_78>",
82
+ "<extra_id_79>",
83
+ "<extra_id_80>",
84
+ "<extra_id_81>",
85
+ "<extra_id_82>",
86
+ "<extra_id_83>",
87
+ "<extra_id_84>",
88
+ "<extra_id_85>",
89
+ "<extra_id_86>",
90
+ "<extra_id_87>",
91
+ "<extra_id_88>",
92
+ "<extra_id_89>",
93
+ "<extra_id_90>",
94
+ "<extra_id_91>",
95
+ "<extra_id_92>",
96
+ "<extra_id_93>",
97
+ "<extra_id_94>",
98
+ "<extra_id_95>",
99
+ "<extra_id_96>",
100
+ "<extra_id_97>",
101
+ "<extra_id_98>",
102
+ "<extra_id_99>"
103
+ ],
104
+ "eos_token": {
105
+ "content": "</s>",
106
+ "lstrip": false,
107
+ "normalized": false,
108
+ "rstrip": false,
109
+ "single_word": false
110
+ },
111
+ "pad_token": {
112
+ "content": "<pad>",
113
+ "lstrip": false,
114
+ "normalized": false,
115
+ "rstrip": false,
116
+ "single_word": false
117
+ },
118
+ "unk_token": {
119
+ "content": "<unk>",
120
+ "lstrip": false,
121
+ "normalized": false,
122
+ "rstrip": false,
123
+ "single_word": false
124
+ }
125
+ }
tokenizer/spiece.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
3
+ size 791656
tokenizer/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,941 @@
1
+ {
2
+ "add_prefix_space": true,
3
+ "added_tokens_decoder": {
4
+ "0": {
5
+ "content": "<pad>",
6
+ "lstrip": false,
7
+ "normalized": false,
8
+ "rstrip": false,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "1": {
13
+ "content": "</s>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false,
18
+ "special": true
19
+ },
20
+ "2": {
21
+ "content": "<unk>",
22
+ "lstrip": false,
23
+ "normalized": false,
24
+ "rstrip": false,
25
+ "single_word": false,
26
+ "special": true
27
+ },
28
+ "32000": {
29
+ "content": "<extra_id_99>",
30
+ "lstrip": true,
31
+ "normalized": false,
32
+ "rstrip": true,
33
+ "single_word": false,
34
+ "special": true
35
+ },
36
+ "32001": {
37
+ "content": "<extra_id_98>",
38
+ "lstrip": true,
39
+ "normalized": false,
40
+ "rstrip": true,
41
+ "single_word": false,
42
+ "special": true
43
+ },
44
+ "32002": {
45
+ "content": "<extra_id_97>",
46
+ "lstrip": true,
47
+ "normalized": false,
48
+ "rstrip": true,
49
+ "single_word": false,
50
+ "special": true
51
+ },
52
+ "32003": {
53
+ "content": "<extra_id_96>",
54
+ "lstrip": true,
55
+ "normalized": false,
56
+ "rstrip": true,
57
+ "single_word": false,
58
+ "special": true
59
+ },
60
+ "32004": {
61
+ "content": "<extra_id_95>",
62
+ "lstrip": true,
63
+ "normalized": false,
64
+ "rstrip": true,
65
+ "single_word": false,
66
+ "special": true
67
+ },
68
+ "32005": {
69
+ "content": "<extra_id_94>",
70
+ "lstrip": true,
71
+ "normalized": false,
72
+ "rstrip": true,
73
+ "single_word": false,
74
+ "special": true
75
+ },
76
+ "32006": {
77
+ "content": "<extra_id_93>",
78
+ "lstrip": true,
79
+ "normalized": false,
80
+ "rstrip": true,
81
+ "single_word": false,
82
+ "special": true
83
+ },
84
+ "32007": {
85
+ "content": "<extra_id_92>",
86
+ "lstrip": true,
87
+ "normalized": false,
88
+ "rstrip": true,
89
+ "single_word": false,
90
+ "special": true
91
+ },
92
+ "32008": {
93
+ "content": "<extra_id_91>",
94
+ "lstrip": true,
95
+ "normalized": false,
96
+ "rstrip": true,
97
+ "single_word": false,
98
+ "special": true
99
+ },
100
+ "32009": {
101
+ "content": "<extra_id_90>",
102
+ "lstrip": true,
103
+ "normalized": false,
104
+ "rstrip": true,
105
+ "single_word": false,
106
+ "special": true
107
+ },
108
+ "32010": {
109
+ "content": "<extra_id_89>",
110
+ "lstrip": true,
111
+ "normalized": false,
112
+ "rstrip": true,
113
+ "single_word": false,
114
+ "special": true
115
+ },
116
+ "32011": {
117
+ "content": "<extra_id_88>",
118
+ "lstrip": true,
119
+ "normalized": false,
120
+ "rstrip": true,
121
+ "single_word": false,
122
+ "special": true
123
+ },
124
+ "32012": {
125
+ "content": "<extra_id_87>",
126
+ "lstrip": true,
127
+ "normalized": false,
128
+ "rstrip": true,
129
+ "single_word": false,
130
+ "special": true
131
+ },
132
+ "32013": {
133
+ "content": "<extra_id_86>",
134
+ "lstrip": true,
135
+ "normalized": false,
136
+ "rstrip": true,
137
+ "single_word": false,
138
+ "special": true
139
+ },
140
+ "32014": {
141
+ "content": "<extra_id_85>",
142
+ "lstrip": true,
143
+ "normalized": false,
144
+ "rstrip": true,
145
+ "single_word": false,
146
+ "special": true
147
+ },
148
+ "32015": {
149
+ "content": "<extra_id_84>",
150
+ "lstrip": true,
151
+ "normalized": false,
152
+ "rstrip": true,
153
+ "single_word": false,
154
+ "special": true
155
+ },
156
+ "32016": {
157
+ "content": "<extra_id_83>",
158
+ "lstrip": true,
159
+ "normalized": false,
160
+ "rstrip": true,
161
+ "single_word": false,
162
+ "special": true
163
+ },
164
+ "32017": {
165
+ "content": "<extra_id_82>",
166
+ "lstrip": true,
167
+ "normalized": false,
168
+ "rstrip": true,
169
+ "single_word": false,
170
+ "special": true
171
+ },
172
+ "32018": {
173
+ "content": "<extra_id_81>",
174
+ "lstrip": true,
175
+ "normalized": false,
176
+ "rstrip": true,
177
+ "single_word": false,
178
+ "special": true
179
+ },
180
+ "32019": {
181
+ "content": "<extra_id_80>",
182
+ "lstrip": true,
183
+ "normalized": false,
184
+ "rstrip": true,
185
+ "single_word": false,
186
+ "special": true
187
+ },
188
+ "32020": {
189
+ "content": "<extra_id_79>",
190
+ "lstrip": true,
191
+ "normalized": false,
192
+ "rstrip": true,
193
+ "single_word": false,
194
+ "special": true
195
+ },
196
+ "32021": {
197
+ "content": "<extra_id_78>",
198
+ "lstrip": true,
199
+ "normalized": false,
200
+ "rstrip": true,
201
+ "single_word": false,
202
+ "special": true
203
+ },
204
+ "32022": {
205
+ "content": "<extra_id_77>",
206
+ "lstrip": true,
207
+ "normalized": false,
208
+ "rstrip": true,
209
+ "single_word": false,
210
+ "special": true
211
+ },
212
+ "32023": {
213
+ "content": "<extra_id_76>",
214
+ "lstrip": true,
215
+ "normalized": false,
216
+ "rstrip": true,
217
+ "single_word": false,
218
+ "special": true
219
+ },
220
+ "32024": {
221
+ "content": "<extra_id_75>",
222
+ "lstrip": true,
223
+ "normalized": false,
224
+ "rstrip": true,
225
+ "single_word": false,
226
+ "special": true
227
+ },
228
+ "32025": {
229
+ "content": "<extra_id_74>",
230
+ "lstrip": true,
231
+ "normalized": false,
232
+ "rstrip": true,
233
+ "single_word": false,
234
+ "special": true
235
+ },
236
+ "32026": {
237
+ "content": "<extra_id_73>",
238
+ "lstrip": true,
239
+ "normalized": false,
240
+ "rstrip": true,
241
+ "single_word": false,
242
+ "special": true
243
+ },
244
+ "32027": {
245
+ "content": "<extra_id_72>",
246
+ "lstrip": true,
247
+ "normalized": false,
248
+ "rstrip": true,
249
+ "single_word": false,
250
+ "special": true
251
+ },
252
+ "32028": {
253
+ "content": "<extra_id_71>",
254
+ "lstrip": true,
255
+ "normalized": false,
256
+ "rstrip": true,
257
+ "single_word": false,
258
+ "special": true
259
+ },
260
+ "32029": {
261
+ "content": "<extra_id_70>",
262
+ "lstrip": true,
263
+ "normalized": false,
264
+ "rstrip": true,
265
+ "single_word": false,
266
+ "special": true
267
+ },
268
+ "32030": {
269
+ "content": "<extra_id_69>",
270
+ "lstrip": true,
271
+ "normalized": false,
272
+ "rstrip": true,
273
+ "single_word": false,
274
+ "special": true
275
+ },
276
+ "32031": {
277
+ "content": "<extra_id_68>",
278
+ "lstrip": true,
279
+ "normalized": false,
280
+ "rstrip": true,
281
+ "single_word": false,
282
+ "special": true
283
+ },
284
+ "32032": {
285
+ "content": "<extra_id_67>",
286
+ "lstrip": true,
287
+ "normalized": false,
288
+ "rstrip": true,
289
+ "single_word": false,
290
+ "special": true
291
+ },
292
+ "32033": {
293
+ "content": "<extra_id_66>",
294
+ "lstrip": true,
295
+ "normalized": false,
296
+ "rstrip": true,
297
+ "single_word": false,
298
+ "special": true
299
+ },
300
+ "32034": {
301
+ "content": "<extra_id_65>",
302
+ "lstrip": true,
303
+ "normalized": false,
304
+ "rstrip": true,
305
+ "single_word": false,
306
+ "special": true
307
+ },
308
+ "32035": {
309
+ "content": "<extra_id_64>",
310
+ "lstrip": true,
311
+ "normalized": false,
312
+ "rstrip": true,
313
+ "single_word": false,
314
+ "special": true
315
+ },
316
+ "32036": {
317
+ "content": "<extra_id_63>",
318
+ "lstrip": true,
319
+ "normalized": false,
320
+ "rstrip": true,
321
+ "single_word": false,
322
+ "special": true
323
+ },
324
+ "32037": {
325
+ "content": "<extra_id_62>",
326
+ "lstrip": true,
327
+ "normalized": false,
328
+ "rstrip": true,
329
+ "single_word": false,
330
+ "special": true
331
+ },
332
+ "32038": {
333
+ "content": "<extra_id_61>",
334
+ "lstrip": true,
335
+ "normalized": false,
336
+ "rstrip": true,
337
+ "single_word": false,
338
+ "special": true
339
+ },
340
+ "32039": {
341
+ "content": "<extra_id_60>",
342
+ "lstrip": true,
343
+ "normalized": false,
344
+ "rstrip": true,
345
+ "single_word": false,
346
+ "special": true
347
+ },
348
+ "32040": {
349
+ "content": "<extra_id_59>",
350
+ "lstrip": true,
351
+ "normalized": false,
352
+ "rstrip": true,
353
+ "single_word": false,
354
+ "special": true
355
+ },
356
+ "32041": {
357
+ "content": "<extra_id_58>",
358
+ "lstrip": true,
359
+ "normalized": false,
360
+ "rstrip": true,
361
+ "single_word": false,
362
+ "special": true
363
+ },
364
+ "32042": {
365
+ "content": "<extra_id_57>",
366
+ "lstrip": true,
367
+ "normalized": false,
368
+ "rstrip": true,
369
+ "single_word": false,
370
+ "special": true
371
+ },
372
+ "32043": {
373
+ "content": "<extra_id_56>",
374
+ "lstrip": true,
375
+ "normalized": false,
376
+ "rstrip": true,
377
+ "single_word": false,
378
+ "special": true
379
+ },
380
+ "32044": {
381
+ "content": "<extra_id_55>",
382
+ "lstrip": true,
383
+ "normalized": false,
384
+ "rstrip": true,
385
+ "single_word": false,
386
+ "special": true
387
+ },
388
+ "32045": {
389
+ "content": "<extra_id_54>",
390
+ "lstrip": true,
391
+ "normalized": false,
392
+ "rstrip": true,
393
+ "single_word": false,
394
+ "special": true
395
+ },
396
+ "32046": {
397
+ "content": "<extra_id_53>",
398
+ "lstrip": true,
399
+ "normalized": false,
400
+ "rstrip": true,
401
+ "single_word": false,
402
+ "special": true
403
+ },
404
+ "32047": {
405
+ "content": "<extra_id_52>",
406
+ "lstrip": true,
407
+ "normalized": false,
408
+ "rstrip": true,
409
+ "single_word": false,
410
+ "special": true
411
+ },
412
+ "32048": {
413
+ "content": "<extra_id_51>",
414
+ "lstrip": true,
415
+ "normalized": false,
416
+ "rstrip": true,
417
+ "single_word": false,
418
+ "special": true
419
+ },
420
+ "32049": {
421
+ "content": "<extra_id_50>",
422
+ "lstrip": true,
423
+ "normalized": false,
424
+ "rstrip": true,
425
+ "single_word": false,
426
+ "special": true
427
+ },
428
+ "32050": {
429
+ "content": "<extra_id_49>",
430
+ "lstrip": true,
431
+ "normalized": false,
432
+ "rstrip": true,
433
+ "single_word": false,
434
+ "special": true
435
+ },
436
+ "32051": {
437
+ "content": "<extra_id_48>",
438
+ "lstrip": true,
439
+ "normalized": false,
440
+ "rstrip": true,
441
+ "single_word": false,
442
+ "special": true
443
+ },
444
+ "32052": {
445
+ "content": "<extra_id_47>",
446
+ "lstrip": true,
447
+ "normalized": false,
448
+ "rstrip": true,
449
+ "single_word": false,
450
+ "special": true
451
+ },
452
+ "32053": {
453
+ "content": "<extra_id_46>",
454
+ "lstrip": true,
455
+ "normalized": false,
456
+ "rstrip": true,
457
+ "single_word": false,
458
+ "special": true
459
+ },
460
+ "32054": {
461
+ "content": "<extra_id_45>",
462
+ "lstrip": true,
463
+ "normalized": false,
464
+ "rstrip": true,
465
+ "single_word": false,
466
+ "special": true
467
+ },
468
+ "32055": {
469
+ "content": "<extra_id_44>",
470
+ "lstrip": true,
471
+ "normalized": false,
472
+ "rstrip": true,
473
+ "single_word": false,
474
+ "special": true
475
+ },
476
+ "32056": {
477
+ "content": "<extra_id_43>",
478
+ "lstrip": true,
479
+ "normalized": false,
480
+ "rstrip": true,
481
+ "single_word": false,
482
+ "special": true
483
+ },
484
+ "32057": {
485
+ "content": "<extra_id_42>",
486
+ "lstrip": true,
487
+ "normalized": false,
488
+ "rstrip": true,
489
+ "single_word": false,
490
+ "special": true
491
+ },
492
+ "32058": {
493
+ "content": "<extra_id_41>",
494
+ "lstrip": true,
495
+ "normalized": false,
496
+ "rstrip": true,
497
+ "single_word": false,
498
+ "special": true
499
+ },
500
+ "32059": {
501
+ "content": "<extra_id_40>",
502
+ "lstrip": true,
503
+ "normalized": false,
504
+ "rstrip": true,
505
+ "single_word": false,
506
+ "special": true
507
+ },
508
+ "32060": {
509
+ "content": "<extra_id_39>",
510
+ "lstrip": true,
511
+ "normalized": false,
512
+ "rstrip": true,
513
+ "single_word": false,
514
+ "special": true
515
+ },
516
+ "32061": {
517
+ "content": "<extra_id_38>",
518
+ "lstrip": true,
519
+ "normalized": false,
520
+ "rstrip": true,
521
+ "single_word": false,
522
+ "special": true
523
+ },
524
+ "32062": {
525
+ "content": "<extra_id_37>",
526
+ "lstrip": true,
527
+ "normalized": false,
528
+ "rstrip": true,
529
+ "single_word": false,
530
+ "special": true
531
+ },
532
+ "32063": {
533
+ "content": "<extra_id_36>",
534
+ "lstrip": true,
535
+ "normalized": false,
536
+ "rstrip": true,
537
+ "single_word": false,
538
+ "special": true
539
+ },
540
+ "32064": {
541
+ "content": "<extra_id_35>",
542
+ "lstrip": true,
543
+ "normalized": false,
544
+ "rstrip": true,
545
+ "single_word": false,
546
+ "special": true
547
+ },
548
+ "32065": {
549
+ "content": "<extra_id_34>",
550
+ "lstrip": true,
551
+ "normalized": false,
552
+ "rstrip": true,
553
+ "single_word": false,
554
+ "special": true
555
+ },
556
+ "32066": {
557
+ "content": "<extra_id_33>",
558
+ "lstrip": true,
559
+ "normalized": false,
560
+ "rstrip": true,
561
+ "single_word": false,
562
+ "special": true
563
+ },
564
+ "32067": {
565
+ "content": "<extra_id_32>",
566
+ "lstrip": true,
567
+ "normalized": false,
568
+ "rstrip": true,
569
+ "single_word": false,
570
+ "special": true
571
+ },
572
+ "32068": {
573
+ "content": "<extra_id_31>",
574
+ "lstrip": true,
575
+ "normalized": false,
576
+ "rstrip": true,
577
+ "single_word": false,
578
+ "special": true
579
+ },
580
+ "32069": {
581
+ "content": "<extra_id_30>",
582
+ "lstrip": true,
583
+ "normalized": false,
584
+ "rstrip": true,
585
+ "single_word": false,
586
+ "special": true
587
+ },
588
+ "32070": {
589
+ "content": "<extra_id_29>",
590
+ "lstrip": true,
591
+ "normalized": false,
592
+ "rstrip": true,
593
+ "single_word": false,
594
+ "special": true
595
+ },
596
+ "32071": {
597
+ "content": "<extra_id_28>",
598
+ "lstrip": true,
599
+ "normalized": false,
600
+ "rstrip": true,
601
+ "single_word": false,
602
+ "special": true
603
+ },
604
+ "32072": {
605
+ "content": "<extra_id_27>",
606
+ "lstrip": true,
607
+ "normalized": false,
608
+ "rstrip": true,
609
+ "single_word": false,
610
+ "special": true
611
+ },
612
+ "32073": {
613
+ "content": "<extra_id_26>",
614
+ "lstrip": true,
615
+ "normalized": false,
616
+ "rstrip": true,
617
+ "single_word": false,
618
+ "special": true
619
+ },
620
+ "32074": {
621
+ "content": "<extra_id_25>",
622
+ "lstrip": true,
623
+ "normalized": false,
624
+ "rstrip": true,
625
+ "single_word": false,
626
+ "special": true
627
+ },
628
+ "32075": {
629
+ "content": "<extra_id_24>",
630
+ "lstrip": true,
631
+ "normalized": false,
632
+ "rstrip": true,
633
+ "single_word": false,
634
+ "special": true
635
+ },
636
+ "32076": {
637
+ "content": "<extra_id_23>",
638
+ "lstrip": true,
639
+ "normalized": false,
640
+ "rstrip": true,
641
+ "single_word": false,
642
+ "special": true
643
+ },
644
+ "32077": {
645
+ "content": "<extra_id_22>",
646
+ "lstrip": true,
647
+ "normalized": false,
648
+ "rstrip": true,
649
+ "single_word": false,
650
+ "special": true
651
+ },
652
+ "32078": {
653
+ "content": "<extra_id_21>",
654
+ "lstrip": true,
655
+ "normalized": false,
656
+ "rstrip": true,
657
+ "single_word": false,
658
+ "special": true
659
+ },
660
+ "32079": {
661
+ "content": "<extra_id_20>",
662
+ "lstrip": true,
663
+ "normalized": false,
664
+ "rstrip": true,
665
+ "single_word": false,
666
+ "special": true
667
+ },
668
+ "32080": {
669
+ "content": "<extra_id_19>",
670
+ "lstrip": true,
671
+ "normalized": false,
672
+ "rstrip": true,
673
+ "single_word": false,
674
+ "special": true
675
+ },
676
+ "32081": {
677
+ "content": "<extra_id_18>",
678
+ "lstrip": true,
679
+ "normalized": false,
680
+ "rstrip": true,
681
+ "single_word": false,
682
+ "special": true
683
+ },
684
+ "32082": {
685
+ "content": "<extra_id_17>",
686
+ "lstrip": true,
687
+ "normalized": false,
688
+ "rstrip": true,
689
+ "single_word": false,
690
+ "special": true
691
+ },
692
+ "32083": {
693
+ "content": "<extra_id_16>",
694
+ "lstrip": true,
695
+ "normalized": false,
696
+ "rstrip": true,
697
+ "single_word": false,
698
+ "special": true
699
+ },
700
+ "32084": {
701
+ "content": "<extra_id_15>",
702
+ "lstrip": true,
703
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32085": {
+       "content": "<extra_id_14>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32086": {
+       "content": "<extra_id_13>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32087": {
+       "content": "<extra_id_12>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32088": {
+       "content": "<extra_id_11>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32089": {
+       "content": "<extra_id_10>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32090": {
+       "content": "<extra_id_9>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32091": {
+       "content": "<extra_id_8>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32092": {
+       "content": "<extra_id_7>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32093": {
+       "content": "<extra_id_6>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32094": {
+       "content": "<extra_id_5>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32095": {
+       "content": "<extra_id_4>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32096": {
+       "content": "<extra_id_3>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32097": {
+       "content": "<extra_id_2>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32098": {
+       "content": "<extra_id_1>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     },
+     "32099": {
+       "content": "<extra_id_0>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": true,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<extra_id_0>",
+     "<extra_id_1>",
+     "<extra_id_2>",
+     "<extra_id_3>",
+     "<extra_id_4>",
+     "<extra_id_5>",
+     "<extra_id_6>",
+     "<extra_id_7>",
+     "<extra_id_8>",
+     "<extra_id_9>",
+     "<extra_id_10>",
+     "<extra_id_11>",
+     "<extra_id_12>",
+     "<extra_id_13>",
+     "<extra_id_14>",
+     "<extra_id_15>",
+     "<extra_id_16>",
+     "<extra_id_17>",
+     "<extra_id_18>",
+     "<extra_id_19>",
+     "<extra_id_20>",
+     "<extra_id_21>",
+     "<extra_id_22>",
+     "<extra_id_23>",
+     "<extra_id_24>",
+     "<extra_id_25>",
+     "<extra_id_26>",
+     "<extra_id_27>",
+     "<extra_id_28>",
+     "<extra_id_29>",
+     "<extra_id_30>",
+     "<extra_id_31>",
+     "<extra_id_32>",
+     "<extra_id_33>",
+     "<extra_id_34>",
+     "<extra_id_35>",
+     "<extra_id_36>",
+     "<extra_id_37>",
+     "<extra_id_38>",
+     "<extra_id_39>",
+     "<extra_id_40>",
+     "<extra_id_41>",
+     "<extra_id_42>",
+     "<extra_id_43>",
+     "<extra_id_44>",
+     "<extra_id_45>",
+     "<extra_id_46>",
+     "<extra_id_47>",
+     "<extra_id_48>",
+     "<extra_id_49>",
+     "<extra_id_50>",
+     "<extra_id_51>",
+     "<extra_id_52>",
+     "<extra_id_53>",
+     "<extra_id_54>",
+     "<extra_id_55>",
+     "<extra_id_56>",
+     "<extra_id_57>",
+     "<extra_id_58>",
+     "<extra_id_59>",
+     "<extra_id_60>",
+     "<extra_id_61>",
+     "<extra_id_62>",
+     "<extra_id_63>",
+     "<extra_id_64>",
+     "<extra_id_65>",
+     "<extra_id_66>",
+     "<extra_id_67>",
+     "<extra_id_68>",
+     "<extra_id_69>",
+     "<extra_id_70>",
+     "<extra_id_71>",
+     "<extra_id_72>",
+     "<extra_id_73>",
+     "<extra_id_74>",
+     "<extra_id_75>",
+     "<extra_id_76>",
+     "<extra_id_77>",
+     "<extra_id_78>",
+     "<extra_id_79>",
+     "<extra_id_80>",
+     "<extra_id_81>",
+     "<extra_id_82>",
+     "<extra_id_83>",
+     "<extra_id_84>",
+     "<extra_id_85>",
+     "<extra_id_86>",
+     "<extra_id_87>",
+     "<extra_id_88>",
+     "<extra_id_89>",
+     "<extra_id_90>",
+     "<extra_id_91>",
+     "<extra_id_92>",
+     "<extra_id_93>",
+     "<extra_id_94>",
+     "<extra_id_95>",
+     "<extra_id_96>",
+     "<extra_id_97>",
+     "<extra_id_98>",
+     "<extra_id_99>"
+   ],
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "</s>",
+   "extra_ids": 100,
+   "extra_special_tokens": {},
+   "legacy": true,
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sp_model_kwargs": {},
+   "tokenizer_class": "T5Tokenizer",
+   "unk_token": "<unk>"
+ }
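The tokenizer shipped with this upload is a plain T5 SentencePiece tokenizer (`T5Tokenizer`, `model_max_length` of 512, 100 `<extra_id_*>` sentinel tokens), consistent with a T5-family text encoder whose hidden size matches the transformer's `joint_attention_dim` of 4096. A minimal sketch of loading it on its own follows; the repo id `briaai/BRIA-3.2` and the `tokenizer` subfolder are assumptions based on this upload, not something stated in the files themselves:

```python
# Minimal sketch: load just the T5 tokenizer from this repo and tokenize a prompt.
# The repo id "briaai/BRIA-3.2" and the "tokenizer" subfolder are assumptions.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("briaai/BRIA-3.2", subfolder="tokenizer")

inputs = tokenizer(
    "A photo of a red bicycle leaning against a brick wall",
    max_length=tokenizer.model_max_length,  # 512, per tokenizer_config.json
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)
print(inputs.input_ids.shape)  # torch.Size([1, 512])
```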
transformer/config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "_class_name": "BriaTransformer2DModel",
+   "_diffusers_version": "0.33.1",
+   "attention_head_dim": 96,
+   "axes_dims_rope": [
+     0,
+     48,
+     48
+   ],
+   "guidance_embeds": false,
+   "in_channels": 16,
+   "joint_attention_dim": 4096,
+   "num_attention_heads": 24,
+   "num_layers": 8,
+   "num_single_layers": 28,
+   "patch_size": 1,
+   "pooled_projection_dim": null,
+   "rope_theta": 10000,
+   "time_theta": 10000
+ }
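For orientation, this config describes a compact Flux-style MMDiT: 8 double-stream blocks plus 28 single-stream blocks, 24 attention heads of 96 dimensions each (an inner width of 2304), a 4096-dimensional text-conditioning space, and no guidance embedding. `in_channels` is 16 even though the VAE below emits 4 latent channels, which presumably reflects Flux-style 2×2 latent packing (4 × 2 × 2 = 16). A hedged sketch of instantiating the architecture directly from these values (random weights; assumes `transformer_bria.py` and its `bria_utils` dependency are importable):

```python
# Sketch only: build the transformer with the values from transformer/config.json.
# Assumes transformer/transformer_bria.py (and bria_utils) are on the Python path;
# loading pretrained weights would instead go through from_pretrained(..., subfolder="transformer").
from transformer_bria import BriaTransformer2DModel

model = BriaTransformer2DModel(
    patch_size=1,
    in_channels=16,
    num_layers=8,
    num_single_layers=28,
    attention_head_dim=96,
    num_attention_heads=24,
    joint_attention_dim=4096,
    pooled_projection_dim=None,
    guidance_embeds=False,
    axes_dims_rope=[0, 48, 48],
    rope_theta=10000,
    time_theta=10000,
)

print(model.inner_dim)  # 24 * 96 = 2304
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
```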
transformer/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d68fcfb59b377192ab53d3e9ac409e6bb84f1a728878f2518b5a238bddbb6519
+ size 7571416768
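As a rough sanity check, the weight file size is consistent with a model of this scale stored in 16-bit precision:

```python
# Back-of-the-envelope check (sketch): parameter count implied by the safetensors file size,
# assuming 2 bytes per parameter (bf16/fp16) and ignoring header/metadata overhead.
size_bytes = 7_571_416_768        # "size" field of the LFS pointer above
approx_params = size_bytes / 2
print(f"~{approx_params / 1e9:.2f}B parameters")  # ~3.79B
```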
transformer/transformer_bria.py ADDED
@@ -0,0 +1,315 @@
+ from typing import Any, Dict, List, Optional, Union
+ import numpy as np
+ import torch
+ import torch.nn as nn
+ from diffusers.configuration_utils import ConfigMixin, register_to_config
+ from diffusers.loaders import PeftAdapterMixin, FromOriginalModelMixin
+ from diffusers.models.modeling_utils import ModelMixin
+ from diffusers.models.normalization import AdaLayerNormContinuous
+ from diffusers.utils import USE_PEFT_BACKEND, is_torch_version, logging, scale_lora_layers, unscale_lora_layers
+ from diffusers.models.modeling_outputs import Transformer2DModelOutput
+ from diffusers.models.embeddings import TimestepEmbedding, get_timestep_embedding
+ from diffusers.models.transformers.transformer_flux import FluxSingleTransformerBlock, FluxTransformerBlock
+ from bria_utils import FluxPosEmbed as EmbedND
+
+ logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
+
+ class Timesteps(nn.Module):
+     def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale_freq_shift: float, scale: int = 1, time_theta=10000):
+         super().__init__()
+         self.num_channels = num_channels
+         self.flip_sin_to_cos = flip_sin_to_cos
+         self.downscale_freq_shift = downscale_freq_shift
+         self.scale = scale
+         self.time_theta = time_theta
+
+     def forward(self, timesteps):
+         # Sinusoidal timestep features with a configurable maximum period.
+         t_emb = get_timestep_embedding(
+             timesteps,
+             self.num_channels,
+             flip_sin_to_cos=self.flip_sin_to_cos,
+             downscale_freq_shift=self.downscale_freq_shift,
+             scale=self.scale,
+             max_period=self.time_theta,
+         )
+         return t_emb
+
+ class TimestepProjEmbeddings(nn.Module):
+     def __init__(self, embedding_dim, time_theta):
+         super().__init__()
+
+         self.time_proj = Timesteps(num_channels=256, flip_sin_to_cos=True, downscale_freq_shift=0, time_theta=time_theta)
+         self.timestep_embedder = TimestepEmbedding(in_channels=256, time_embed_dim=embedding_dim)
+
+     def forward(self, timestep, dtype):
+         timesteps_proj = self.time_proj(timestep)
+         timesteps_emb = self.timestep_embedder(timesteps_proj.to(dtype=dtype))  # (N, D)
+         return timesteps_emb
+
+ """
+ Based on FluxPipeline with several changes:
+ - no pooled embeddings
+ - We use zero padding for prompts
+ - No guidance embedding since this is not a distilled version
+ """
+ class BriaTransformer2DModel(ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin):
+     """
+     The Transformer model introduced in Flux.
+
+     Reference: https://blackforestlabs.ai/announcing-black-forest-labs/
+
+     Parameters:
+         patch_size (`int`): Patch size to turn the input data into small patches.
+         in_channels (`int`, *optional*, defaults to 64): The number of channels in the input.
+         num_layers (`int`, *optional*, defaults to 19): The number of layers of MMDiT blocks to use.
+         num_single_layers (`int`, *optional*, defaults to 38): The number of layers of single DiT blocks to use.
+         attention_head_dim (`int`, *optional*, defaults to 128): The number of channels in each head.
+         num_attention_heads (`int`, *optional*, defaults to 24): The number of heads to use for multi-head attention.
+         joint_attention_dim (`int`, *optional*): The number of `encoder_hidden_states` dimensions to use.
+         pooled_projection_dim (`int`): Number of dimensions to use when projecting the `pooled_projections`.
+         guidance_embeds (`bool`, defaults to False): Whether to use guidance embeddings.
+     """
+
+     _supports_gradient_checkpointing = True
+
+     @register_to_config
+     def __init__(
+         self,
+         patch_size: int = 1,
+         in_channels: int = 64,
+         num_layers: int = 19,
+         num_single_layers: int = 38,
+         attention_head_dim: int = 128,
+         num_attention_heads: int = 24,
+         joint_attention_dim: int = 4096,
+         pooled_projection_dim: int = None,
+         guidance_embeds: bool = False,
+         axes_dims_rope: List[int] = [16, 56, 56],
+         rope_theta=10000,
+         time_theta=10000,
+     ):
+         super().__init__()
+         self.out_channels = in_channels
+         self.inner_dim = self.config.num_attention_heads * self.config.attention_head_dim
+
+         self.pos_embed = EmbedND(theta=rope_theta, axes_dim=axes_dims_rope)
+
+         self.time_embed = TimestepProjEmbeddings(
+             embedding_dim=self.inner_dim, time_theta=time_theta
+         )
+
+         # if pooled_projection_dim:
+         #     self.pooled_text_embed = PixArtAlphaTextProjection(pooled_projection_dim, embedding_dim=self.inner_dim, act_fn="silu")
+
+         if guidance_embeds:
+             # time_theta must be forwarded here as well; TimestepProjEmbeddings has no default for it.
+             self.guidance_embed = TimestepProjEmbeddings(embedding_dim=self.inner_dim, time_theta=time_theta)
+
+         self.context_embedder = nn.Linear(self.config.joint_attention_dim, self.inner_dim)
+         self.x_embedder = torch.nn.Linear(self.config.in_channels, self.inner_dim)
+
+         self.transformer_blocks = nn.ModuleList(
+             [
+                 FluxTransformerBlock(
+                     dim=self.inner_dim,
+                     num_attention_heads=self.config.num_attention_heads,
+                     attention_head_dim=self.config.attention_head_dim,
+                 )
+                 for i in range(self.config.num_layers)
+             ]
+         )
+
+         self.single_transformer_blocks = nn.ModuleList(
+             [
+                 FluxSingleTransformerBlock(
+                     dim=self.inner_dim,
+                     num_attention_heads=self.config.num_attention_heads,
+                     attention_head_dim=self.config.attention_head_dim,
+                 )
+                 for i in range(self.config.num_single_layers)
+             ]
+         )
+
+         self.norm_out = AdaLayerNormContinuous(self.inner_dim, self.inner_dim, elementwise_affine=False, eps=1e-6)
+         self.proj_out = nn.Linear(self.inner_dim, patch_size * patch_size * self.out_channels, bias=True)
+
+         self.gradient_checkpointing = False
+
+     def _set_gradient_checkpointing(self, module, value=False):
+         if hasattr(module, "gradient_checkpointing"):
+             module.gradient_checkpointing = value
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         encoder_hidden_states: torch.Tensor = None,
+         pooled_projections: torch.Tensor = None,
+         timestep: torch.LongTensor = None,
+         img_ids: torch.Tensor = None,
+         txt_ids: torch.Tensor = None,
+         guidance: torch.Tensor = None,
+         joint_attention_kwargs: Optional[Dict[str, Any]] = None,
+         return_dict: bool = True,
+         controlnet_block_samples=None,
+         controlnet_single_block_samples=None,
+     ) -> Union[torch.FloatTensor, Transformer2DModelOutput]:
+         """
+         The [`BriaTransformer2DModel`] forward method.
+
+         Args:
+             hidden_states (`torch.FloatTensor` of shape `(batch size, channel, height, width)`):
+                 Input `hidden_states`.
+             encoder_hidden_states (`torch.FloatTensor` of shape `(batch size, sequence_len, embed_dims)`):
+                 Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
+             pooled_projections (`torch.FloatTensor` of shape `(batch_size, projection_dim)`): Embeddings projected
+                 from the embeddings of input conditions.
+             timestep (`torch.LongTensor`):
+                 Used to indicate denoising step.
+             block_controlnet_hidden_states: (`list` of `torch.Tensor`):
+                 A list of tensors that if specified are added to the residuals of transformer blocks.
+             joint_attention_kwargs (`dict`, *optional*):
+                 A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
+                 `self.processor` in
+                 [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
+             return_dict (`bool`, *optional*, defaults to `True`):
+                 Whether or not to return a [`~models.transformer_2d.Transformer2DModelOutput`] instead of a plain
+                 tuple.
+
+         Returns:
+             If `return_dict` is True, an [`~models.transformer_2d.Transformer2DModelOutput`] is returned, otherwise a
+             `tuple` where the first element is the sample tensor.
+         """
+         if joint_attention_kwargs is not None:
+             joint_attention_kwargs = joint_attention_kwargs.copy()
+             lora_scale = joint_attention_kwargs.pop("scale", 1.0)
+         else:
+             lora_scale = 1.0
+
+         if USE_PEFT_BACKEND:
+             # weight the lora layers by setting `lora_scale` for each PEFT layer
+             scale_lora_layers(self, lora_scale)
+         else:
+             if joint_attention_kwargs is not None and joint_attention_kwargs.get("scale", None) is not None:
+                 logger.warning(
+                     "Passing `scale` via `joint_attention_kwargs` when not using the PEFT backend is ineffective."
+                 )
+         hidden_states = self.x_embedder(hidden_states)
+
+         timestep = timestep.to(hidden_states.dtype)
+         if guidance is not None:
+             guidance = guidance.to(hidden_states.dtype)
+         else:
+             guidance = None
+
+         # temb = (
+         #     self.time_text_embed(timestep, pooled_projections)
+         #     if guidance is None
+         #     else self.time_text_embed(timestep, guidance, pooled_projections)
+         # )
+
+         temb = self.time_embed(timestep, dtype=hidden_states.dtype)
+
+         # if pooled_projections:
+         #     temb += self.pooled_text_embed(pooled_projections)
+
+         if guidance is not None:
+             # `if guidance:` is ambiguous for multi-element tensors, so check for None explicitly.
+             temb += self.guidance_embed(guidance, dtype=hidden_states.dtype)
+
+         encoder_hidden_states = self.context_embedder(encoder_hidden_states)
+
+         if len(txt_ids.shape) == 2:
+             ids = torch.cat((txt_ids, img_ids), dim=0)
+         else:
+             ids = torch.cat((txt_ids, img_ids), dim=1)
+         image_rotary_emb = self.pos_embed(ids)
+
+         for index_block, block in enumerate(self.transformer_blocks):
+             if self.training and self.gradient_checkpointing:
+
+                 def create_custom_forward(module, return_dict=None):
+                     def custom_forward(*inputs):
+                         if return_dict is not None:
+                             return module(*inputs, return_dict=return_dict)
+                         else:
+                             return module(*inputs)
+
+                     return custom_forward
+
+                 ckpt_kwargs: Dict[str, Any] = {"use_reentrant": False} if is_torch_version(">=", "1.11.0") else {}
+                 encoder_hidden_states, hidden_states = torch.utils.checkpoint.checkpoint(
+                     create_custom_forward(block),
+                     hidden_states,
+                     encoder_hidden_states,
+                     temb,
+                     image_rotary_emb,
+                     **ckpt_kwargs,
+                 )
+
+             else:
+                 encoder_hidden_states, hidden_states = block(
+                     hidden_states=hidden_states,
+                     encoder_hidden_states=encoder_hidden_states,
+                     temb=temb,
+                     image_rotary_emb=image_rotary_emb,
+                 )
+
+             # controlnet residual
+             if controlnet_block_samples is not None:
+                 interval_control = len(self.transformer_blocks) / len(controlnet_block_samples)
+                 interval_control = int(np.ceil(interval_control))
+                 hidden_states = hidden_states + controlnet_block_samples[index_block // interval_control]
+
+         hidden_states = torch.cat([encoder_hidden_states, hidden_states], dim=1)
+
+         for index_block, block in enumerate(self.single_transformer_blocks):
+             if self.training and self.gradient_checkpointing:
+
+                 def create_custom_forward(module, return_dict=None):
+                     def custom_forward(*inputs):
+                         if return_dict is not None:
+                             return module(*inputs, return_dict=return_dict)
+                         else:
+                             return module(*inputs)
+
+                     return custom_forward
+
+                 ckpt_kwargs: Dict[str, Any] = {"use_reentrant": False} if is_torch_version(">=", "1.11.0") else {}
+                 hidden_states = torch.utils.checkpoint.checkpoint(
+                     create_custom_forward(block),
+                     hidden_states,
+                     temb,
+                     image_rotary_emb,
+                     **ckpt_kwargs,
+                 )
+
+             else:
+                 hidden_states = block(
+                     hidden_states=hidden_states,
+                     temb=temb,
+                     image_rotary_emb=image_rotary_emb,
+                 )
+
+             # controlnet residual
+             if controlnet_single_block_samples is not None:
+                 interval_control = len(self.single_transformer_blocks) / len(controlnet_single_block_samples)
+                 interval_control = int(np.ceil(interval_control))
+                 hidden_states[:, encoder_hidden_states.shape[1] :, ...] = (
+                     hidden_states[:, encoder_hidden_states.shape[1] :, ...]
+                     + controlnet_single_block_samples[index_block // interval_control]
+                 )
+
+         hidden_states = hidden_states[:, encoder_hidden_states.shape[1] :, ...]
+
+         hidden_states = self.norm_out(hidden_states, temb)
+         output = self.proj_out(hidden_states)
+
+         if USE_PEFT_BACKEND:
+             # remove `lora_scale` from each PEFT layer
+             unscale_lora_layers(self, lora_scale)
+
+         if not return_dict:
+             return (output,)
+
+         return Transformer2DModelOutput(sample=output)
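A smoke-test sketch of the module above: run a dummy forward pass with the shapes implied by the Flux-style sequence layout (latents flattened to `(batch, image_tokens, in_channels)`, T5 features as `(batch, text_tokens, joint_attention_dim)`, and 3-axis rotary position ids). The concrete token counts and position ids here are illustrative assumptions, not something the files prescribe:

```python
# Sketch: dummy forward pass through BriaTransformer2DModel (random weights, CPU).
# Token counts and position ids are illustrative assumptions.
import torch
from transformer_bria import BriaTransformer2DModel

model = BriaTransformer2DModel(
    in_channels=16, num_layers=8, num_single_layers=28,
    attention_head_dim=96, num_attention_heads=24,
    joint_attention_dim=4096, axes_dims_rope=[0, 48, 48],
).eval()

batch, image_tokens, text_tokens = 1, 32 * 32, 128
hidden_states = torch.randn(batch, image_tokens, 16)            # flattened latent tokens
encoder_hidden_states = torch.randn(batch, text_tokens, 4096)   # T5 encoder output
img_ids = torch.zeros(image_tokens, 3)                          # rotary position ids (3 axes)
txt_ids = torch.zeros(text_tokens, 3)
timestep = torch.tensor([1.0])

with torch.no_grad():
    out = model(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        timestep=timestep,
        img_ids=img_ids,
        txt_ids=txt_ids,
    ).sample

print(out.shape)  # torch.Size([1, 1024, 16]) -> one prediction per latent token
```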
vae/config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "_class_name": "AutoencoderKL",
+   "_diffusers_version": "0.33.1",
+   "_name_or_path": "/home/ubuntu/.cache/huggingface/hub/models--briaai--BRIA-3.1/snapshots/82e82697e2a3a67726e099f5d2ba15e4520ff93c/vae",
+   "act_fn": "silu",
+   "block_out_channels": [
+     128,
+     256,
+     512,
+     512
+   ],
+   "down_block_types": [
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D"
+   ],
+   "force_upcast": false,
+   "in_channels": 3,
+   "latent_channels": 4,
+   "latents_mean": null,
+   "latents_std": null,
+   "layers_per_block": 2,
+   "mid_block_add_attention": true,
+   "norm_num_groups": 32,
+   "out_channels": 3,
+   "sample_size": 512,
+   "scaling_factor": 0.13025,
+   "shift_factor": null,
+   "up_block_types": [
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D"
+   ],
+   "use_post_quant_conv": true,
+   "use_quant_conv": true
+ }
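The autoencoder is a standard 4-latent-channel `AutoencoderKL` with four encoder stages (an 8× spatial reduction) and a `scaling_factor` of 0.13025; per `_name_or_path` it is carried over unchanged from BRIA-3.1. A round-trip sketch, again assuming the repo id `briaai/BRIA-3.2` and the `vae` subfolder:

```python
# Sketch: encode/decode a dummy image with the VAE described by vae/config.json.
# Repo id and subfolder are assumptions based on this upload.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("briaai/BRIA-3.2", subfolder="vae", torch_dtype=torch.float32)

image = torch.rand(1, 3, 1024, 1024) * 2 - 1          # fake image scaled to [-1, 1]
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()
    latents = latents * vae.config.scaling_factor      # 0.13025
    decoded = vae.decode(latents / vae.config.scaling_factor).sample

print(latents.shape)  # torch.Size([1, 4, 128, 128]) -> 4 channels, 8x downsampling
print(decoded.shape)  # torch.Size([1, 3, 1024, 1024])
```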
vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba4c83179928f4f890c402a297d86a435ea020c844e323d350ca786c8a75c6c1
+ size 334643268