benjamin committed
Commit 23010f5 · verified · 1 Parent(s): c7bb739

Upload FlaxTPULlamaForCausalLM

README.md ADDED
@@ -0,0 +1,199 @@
1
+ ---
2
+ library_name: transformers
3
+ tags: []
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Funded by [optional]:** [More Information Needed]
22
+ - **Shared by [optional]:** [More Information Needed]
23
+ - **Model type:** [More Information Needed]
24
+ - **Language(s) (NLP):** [More Information Needed]
25
+ - **License:** [More Information Needed]
26
+ - **Finetuned from model [optional]:** [More Information Needed]
27
+
28
+ ### Model Sources [optional]
29
+
30
+ <!-- Provide the basic links for the model. -->
31
+
32
+ - **Repository:** [More Information Needed]
33
+ - **Paper [optional]:** [More Information Needed]
34
+ - **Demo [optional]:** [More Information Needed]
35
+
36
+ ## Uses
37
+
38
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
+
40
+ ### Direct Use
41
+
42
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
43
+
44
+ [More Information Needed]
45
+
46
+ ### Downstream Use [optional]
47
+
48
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
+
50
+ [More Information Needed]
51
+
52
+ ### Out-of-Scope Use
53
+
54
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
+
56
+ [More Information Needed]
57
+
58
+ ## Bias, Risks, and Limitations
59
+
60
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
+
62
+ [More Information Needed]
63
+
64
+ ### Recommendations
65
+
66
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
+
68
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
+
70
+ ## How to Get Started with the Model
71
+
72
+ Use the code below to get started with the model.
73
+
74
+ [More Information Needed]
75
+
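+ Until the details above are filled in, the following is a minimal, untested sketch for loading the checkpoint with the custom Flax code shipped in this repository. The repository id is a placeholder, the tokenizer is assumed to be the matching Llama 3 tokenizer (tokenizer files are not part of this commit), and `trust_remote_code=True` is needed because `config.json` maps `FlaxAutoModelForCausalLM` to `modelling_flax_tpu_llama.FlaxTPULlamaForCausalLM`.
+
+ ```python
+ from transformers import AutoTokenizer, FlaxAutoModelForCausalLM
+
+ repo_id = "<user>/<this-repo>"  # placeholder: replace with the actual Hub path
+
+ # Assumption: the matching (Llama 3) tokenizer is available; it is not uploaded in this commit.
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+
+ # trust_remote_code loads FlaxTPULlamaForCausalLM from modelling_flax_tpu_llama.py in this repo.
+ model = FlaxAutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
+
+ inputs = tokenizer("Hello", return_tensors="np")
+ outputs = model(**inputs)
+ print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size=128256)
+ ```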
76
+ ## Training Details
77
+
78
+ ### Training Data
79
+
80
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
+
82
+ [More Information Needed]
83
+
84
+ ### Training Procedure
85
+
86
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
+
88
+ #### Preprocessing [optional]
89
+
90
+ [More Information Needed]
91
+
92
+
93
+ #### Training Hyperparameters
94
+
95
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
+
97
+ #### Speeds, Sizes, Times [optional]
98
+
99
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
+
101
+ [More Information Needed]
102
+
103
+ ## Evaluation
104
+
105
+ <!-- This section describes the evaluation protocols and provides the results. -->
106
+
107
+ ### Testing Data, Factors & Metrics
108
+
109
+ #### Testing Data
110
+
111
+ <!-- This should link to a Dataset Card if possible. -->
112
+
113
+ [More Information Needed]
114
+
115
+ #### Factors
116
+
117
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
+
119
+ [More Information Needed]
120
+
121
+ #### Metrics
122
+
123
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
+
125
+ [More Information Needed]
126
+
127
+ ### Results
128
+
129
+ [More Information Needed]
130
+
131
+ #### Summary
132
+
133
+
134
+
135
+ ## Model Examination [optional]
136
+
137
+ <!-- Relevant interpretability work for the model goes here -->
138
+
139
+ [More Information Needed]
140
+
141
+ ## Environmental Impact
142
+
143
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
+
145
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
+
147
+ - **Hardware Type:** [More Information Needed]
148
+ - **Hours used:** [More Information Needed]
149
+ - **Cloud Provider:** [More Information Needed]
150
+ - **Compute Region:** [More Information Needed]
151
+ - **Carbon Emitted:** [More Information Needed]
152
+
153
+ ## Technical Specifications [optional]
154
+
155
+ ### Model Architecture and Objective
156
+
157
+ [More Information Needed]
158
+
159
+ ### Compute Infrastructure
160
+
161
+ [More Information Needed]
162
+
163
+ #### Hardware
164
+
165
+ [More Information Needed]
166
+
167
+ #### Software
168
+
169
+ [More Information Needed]
170
+
171
+ ## Citation [optional]
172
+
173
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
+
175
+ **BibTeX:**
176
+
177
+ [More Information Needed]
178
+
179
+ **APA:**
180
+
181
+ [More Information Needed]
182
+
183
+ ## Glossary [optional]
184
+
185
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
+
187
+ [More Information Needed]
188
+
189
+ ## More Information [optional]
190
+
191
+ [More Information Needed]
192
+
193
+ ## Model Card Authors [optional]
194
+
195
+ [More Information Needed]
196
+
197
+ ## Model Card Contact
198
+
199
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,47 @@
1
+ {
2
+ "add_qk_norm": false,
3
+ "architectures": [
4
+ "TPULlamaForCausalLM"
5
+ ],
6
+ "attention_bias": false,
7
+ "attention_dropout": 0.0,
8
+ "auto_map": {
9
+ "FlaxAutoModelForCausalLM": "modelling_flax_tpu_llama.FlaxTPULlamaForCausalLM"
10
+ },
11
+ "bos_token_id": 128000,
12
+ "dtype": "float32",
13
+ "eos_token_id": 128001,
14
+ "expand_input_ids": false,
15
+ "expand_input_ids_dict": null,
16
+ "expand_input_ids_maxlen": null,
17
+ "expand_input_ids_vocab_size": null,
18
+ "head_dim": 128,
19
+ "hidden_act": "silu",
20
+ "hidden_size": 4096,
21
+ "initializer_range": 0.02,
22
+ "intermediate_size": 14336,
23
+ "max_length": 8192,
24
+ "max_position_embeddings": 131072,
25
+ "mlp_bias": false,
26
+ "model_type": "tpu_llama",
27
+ "norm_position": "pre",
28
+ "num_attention_heads": 32,
29
+ "num_hidden_layers": 32,
30
+ "num_key_value_heads": 8,
31
+ "pretraining_tp": 1,
32
+ "qk_norm_position": "post_split",
33
+ "rms_norm_eps": 1e-05,
34
+ "rope_scaling": {
35
+ "factor": 8.0,
36
+ "high_freq_factor": 4.0,
37
+ "low_freq_factor": 1.0,
38
+ "original_max_position_embeddings": 8192,
39
+ "rope_type": "llama3"
40
+ },
41
+ "rope_theta": 500000.0,
42
+ "skip_out_norm": false,
43
+ "tie_word_embeddings": false,
44
+ "transformers_version": "4.57.1",
45
+ "use_cache": true,
46
+ "vocab_size": 128256
47
+ }
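A quick sanity check of a few quantities implied by the configuration above (illustrative only; not part of the uploaded files): the checkpoint uses grouped-query attention with 4 query heads per key/value head, and `head_dim` equals `hidden_size // num_attention_heads`.

```python
# Illustrative check of values taken from config.json above (not part of the upload).
hidden_size = 4096
num_attention_heads = 32
num_key_value_heads = 8
head_dim = 128

assert head_dim == hidden_size // num_attention_heads   # 4096 // 32 == 128
print(num_attention_heads // num_key_value_heads)       # 4 query heads share each KV head (GQA)
print(num_attention_heads * head_dim == hidden_size)    # True
```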
configuration_tpu_llama.py ADDED
@@ -0,0 +1,206 @@
1
+ """TPU LLaMA model configuration"""
2
+
3
+ from transformers.configuration_utils import PretrainedConfig
4
+ from transformers.modeling_rope_utils import rope_config_validation
5
+
6
+
7
+ class TPULlamaConfig(PretrainedConfig):
8
+ r"""
9
+ This is the configuration class to store the configuration of a [`TPULlamaModel`]. It is used to instantiate a TPU LLaMA
10
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
11
+ defaults will yield a similar configuration to that of the LLaMA-7B.
12
+
13
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
14
+ documentation from [`PretrainedConfig`] for more information.
15
+
16
+
17
+ Args:
18
+ vocab_size (`int`, *optional*, defaults to 32000):
19
+ Vocabulary size of the LLaMA model. Defines the number of different tokens that can be represented by the
20
+ `input_ids` passed when calling [`TPULlamaModel`]
21
+ hidden_size (`int`, *optional*, defaults to 4096):
22
+ Dimension of the hidden representations.
23
+ intermediate_size (`int`, *optional*, defaults to 11008):
24
+ Dimension of the MLP representations.
25
+ num_hidden_layers (`int`, *optional*, defaults to 32):
26
+ Number of hidden layers in the Transformer decoder.
27
+ num_attention_heads (`int`, *optional*, defaults to 32):
28
+ Number of attention heads for each attention layer in the Transformer decoder.
29
+ num_key_value_heads (`int`, *optional*):
30
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
31
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
32
+ `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
33
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
34
+ by meanpooling all the original heads within that group. For more details checkout [this
35
+ paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
36
+ `num_attention_heads`.
37
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
38
+ The non-linear activation function (function or string) in the decoder.
39
+ max_position_embeddings (`int`, *optional*, defaults to 2048):
40
+ The maximum sequence length that this model might ever be used with. Llama 1 supports up to 2048 tokens,
41
+ Llama 2 up to 4096, CodeLlama up to 16384.
42
+ initializer_range (`float`, *optional*, defaults to 0.02):
43
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
44
+ rms_norm_eps (`float`, *optional*, defaults to 1e-06):
45
+ The epsilon used by the rms normalization layers.
46
+ use_cache (`bool`, *optional*, defaults to `True`):
47
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
48
+ relevant if `config.is_decoder=True`.
49
+ pad_token_id (`int`, *optional*):
50
+ Padding token id.
51
+ bos_token_id (`int`, *optional*, defaults to 1):
52
+ Beginning of stream token id.
53
+ eos_token_id (`int`, *optional*, defaults to 2):
54
+ End of stream token id.
55
+ pretraining_tp (`int`, *optional*, defaults to 1):
56
+ Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
57
+ document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism) to
58
+ understand more about it. This value is necessary to ensure exact reproducibility of the pretraining
59
+ results. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/76232).
60
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
61
+ Whether to tie weight embeddings
62
+ rope_theta (`float`, *optional*, defaults to 10000.0):
63
+ The base period of the RoPE embeddings.
64
+ rope_scaling (`Dict`, *optional*):
65
+ Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type
66
+ and you expect the model to work on longer `max_position_embeddings`, we recommend updating this value
67
+ accordingly.
68
+ Expected contents:
69
+ `rope_type` (`str`):
70
+ The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
71
+ 'llama3'], with 'default' being the original RoPE implementation.
72
+ `factor` (`float`, *optional*):
73
+ Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
74
+ most scaling types, a `factor` of x will enable the model to handle sequences of length x *
75
+ original maximum pre-trained length.
76
+ `original_max_position_embeddings` (`int`, *optional*):
77
+ Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
78
+ pretraining.
79
+ `attention_factor` (`float`, *optional*):
80
+ Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
81
+ computation. If unspecified, it defaults to value recommended by the implementation, using the
82
+ `factor` field to infer the suggested value.
83
+ `beta_fast` (`float`, *optional*):
84
+ Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
85
+ ramp function. If unspecified, it defaults to 32.
86
+ `beta_slow` (`float`, *optional*):
87
+ Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
88
+ ramp function. If unspecified, it defaults to 1.
89
+ `short_factor` (`List[float]`, *optional*):
90
+ Only used with 'longrope'. The scaling factor to be applied to short contexts (<
91
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
92
+ size divided by the number of attention heads divided by 2
93
+ `long_factor` (`List[float]`, *optional*):
94
+ Only used with 'longrope'. The scaling factor to be applied to long contexts (>
95
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
96
+ size divided by the number of attention heads divided by 2
97
+ `low_freq_factor` (`float`, *optional*):
98
+ Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
99
+ `high_freq_factor` (`float`, *optional*):
100
+ Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
101
+ attention_bias (`bool`, *optional*, defaults to `False`):
102
+ Whether to use a bias in the query, key, value and output projection layers during self-attention.
103
+ attention_dropout (`float`, *optional*, defaults to 0.0):
104
+ The dropout ratio for the attention probabilities.
105
+ mlp_bias (`bool`, *optional*, defaults to `False`):
106
+ Whether to use a bias in up_proj, down_proj and gate_proj layers in the MLP layers.
107
+ head_dim (`int`, *optional*):
108
+ The attention head dimension. If None, it will default to hidden_size // num_attention_heads
109
+
110
+ ```python
111
+ >>> from transformers import LlamaModel, LlamaConfig
112
+
113
+ >>> # Initializing a LLaMA llama-7b style configuration
114
+ >>> configuration = LlamaConfig()
115
+
116
+ >>> # Initializing a model from the llama-7b style configuration
117
+ >>> model = LlamaModel(configuration)
118
+
119
+ >>> # Accessing the model configuration
120
+ >>> configuration = model.config
121
+ ```"""
122
+
123
+ model_type = "tpu_llama"
124
+ keys_to_ignore_at_inference = ["past_key_values"]
125
+
126
+ def __init__(
127
+ self,
128
+ vocab_size=32000,
129
+ hidden_size=4096,
130
+ intermediate_size=11008,
131
+ num_hidden_layers=32,
132
+ num_attention_heads=32,
133
+ num_key_value_heads=None,
134
+ hidden_act="silu",
135
+ max_position_embeddings=2048,
136
+ initializer_range=0.02,
137
+ rms_norm_eps=1e-6,
138
+ use_cache=True,
139
+ pad_token_id=None,
140
+ bos_token_id=1,
141
+ eos_token_id=2,
142
+ pretraining_tp=1,
143
+ tie_word_embeddings=False,
144
+ rope_theta=10000.0,
145
+ rope_scaling=None,
146
+ attention_bias=False,
147
+ attention_dropout=0.0,
148
+ mlp_bias=False,
149
+ head_dim=None,
150
+ add_qk_norm=False, # Qwen3 compatibility
151
+ expand_input_ids=False, # Transformers-native PyTorch generation support
152
+ expand_input_ids_maxlen=None,
153
+ expand_input_ids_vocab_size=None,
154
+ expand_input_ids_dict=None,
155
+ skip_out_norm=False,
156
+ norm_position: str = "pre", # to support OLMo2
157
+ qk_norm_position: str = "post_split", # to support OLMo2
158
+ **kwargs,
159
+ ):
160
+ self.vocab_size = vocab_size
161
+ self.max_position_embeddings = max_position_embeddings
162
+ self.hidden_size = hidden_size
163
+ self.intermediate_size = intermediate_size
164
+ self.num_hidden_layers = num_hidden_layers
165
+ self.num_attention_heads = num_attention_heads
166
+
167
+ # for backward compatibility
168
+ if num_key_value_heads is None:
169
+ num_key_value_heads = num_attention_heads
170
+
171
+ self.num_key_value_heads = num_key_value_heads
172
+ self.hidden_act = hidden_act
173
+ self.initializer_range = initializer_range
174
+ self.rms_norm_eps = rms_norm_eps
175
+ self.pretraining_tp = pretraining_tp
176
+ self.use_cache = use_cache
177
+ self.rope_theta = rope_theta
178
+ self.rope_scaling = rope_scaling
179
+ self.attention_bias = attention_bias
180
+ self.attention_dropout = attention_dropout
181
+ self.mlp_bias = mlp_bias
182
+ self.head_dim = head_dim if head_dim is not None else self.hidden_size // self.num_attention_heads
183
+ # Validate the correctness of rotary position embeddings parameters
184
+ # BC: if there is a 'type' field, copy it to 'rope_type'.
185
+ if self.rope_scaling is not None and "type" in self.rope_scaling:
186
+ self.rope_scaling["rope_type"] = self.rope_scaling["type"]
187
+ rope_config_validation(self)
188
+
189
+ self.add_qk_norm = add_qk_norm # Qwen3 compatibility
190
+
191
+ self.expand_input_ids = expand_input_ids
192
+ self.expand_input_ids_maxlen = expand_input_ids_maxlen
193
+ self.expand_input_ids_vocab_size = expand_input_ids_vocab_size
194
+ self.expand_input_ids_dict = expand_input_ids_dict
195
+
196
+ self.skip_out_norm = skip_out_norm
197
+ self.norm_position = norm_position
198
+ self.qk_norm_position = qk_norm_position
199
+
200
+ super().__init__(
201
+ pad_token_id=pad_token_id,
202
+ bos_token_id=bos_token_id,
203
+ eos_token_id=eos_token_id,
204
+ tie_word_embeddings=tie_word_embeddings,
205
+ **kwargs,
206
+ )
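A minimal sketch of instantiating this configuration with the values from `config.json` above; it assumes `configuration_tpu_llama.py` is importable from the working directory rather than loaded through the Hub's `trust_remote_code` machinery.

```python
# Sketch only: assumes configuration_tpu_llama.py (above) is on the Python path.
from configuration_tpu_llama import TPULlamaConfig

config = TPULlamaConfig(
    vocab_size=128256,
    hidden_size=4096,
    intermediate_size=14336,
    num_hidden_layers=32,
    num_attention_heads=32,
    num_key_value_heads=8,
    max_position_embeddings=131072,
    rope_theta=500000.0,
    rope_scaling={
        "rope_type": "llama3",
        "factor": 8.0,
        "low_freq_factor": 1.0,
        "high_freq_factor": 4.0,
        "original_max_position_embeddings": 8192,
    },
    bos_token_id=128000,
    eos_token_id=128001,
)
print(config.head_dim)             # 128 (= hidden_size // num_attention_heads)
print(config.num_key_value_heads)  # 8 -> grouped-query attention
```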
flax_model-00001-of-00007.msgpack ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ba3168a3c9d16a2f9e3978a77610a24990095b6dab6fed05492131ae374f769e
3
+ size 4991255139
flax_model-00002-of-00007.msgpack ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:596fd6bb52205ebc6ac6b65290b5db6cca0b8b46727f2d3759a229bd21c25d2e
3
+ size 4915906858
flax_model-00003-of-00007.msgpack ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4fd07e012d88046dae74e022db3a146b3e6b0d969aa948995b718fba1e288908
3
+ size 4999809463
flax_model-00004-of-00007.msgpack ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:276c05698db87248b3651da908804af2ed4917fef51201faeb99c997f93f9ed8
3
+ size 4999809458
flax_model-00005-of-00007.msgpack ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e73135900961917c8303fa2b597b441cd9feb043d3a9c204a3bdbdba4e2dbdc5
3
+ size 4832004252
flax_model-00006-of-00007.msgpack ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf9abb5b96b2e7c1639a65a8b1d2a1701f1301b5da6bbbc44b6e8b82dec186cb
3
+ size 4999809459
flax_model-00007-of-00007.msgpack ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:55cd2269edb8f436fe508be7702a6414aeb53b461cd725ff955357d631e9af0b
3
+ size 2382464187
flax_model.msgpack.index.json ADDED
@@ -0,0 +1,298 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 32121044992
4
+ },
5
+ "weight_map": {
6
+ "lm_head/kernel": "flax_model-00001-of-00007.msgpack",
7
+ "model/embed_tokens/embedding": "flax_model-00001-of-00007.msgpack",
8
+ "model/layers/0/input_layernorm/weight": "flax_model-00001-of-00007.msgpack",
9
+ "model/layers/0/mlp/down_proj/kernel": "flax_model-00001-of-00007.msgpack",
10
+ "model/layers/0/mlp/gate_proj/kernel": "flax_model-00001-of-00007.msgpack",
11
+ "model/layers/0/mlp/up_proj/kernel": "flax_model-00001-of-00007.msgpack",
12
+ "model/layers/0/post_attention_layernorm/weight": "flax_model-00001-of-00007.msgpack",
13
+ "model/layers/0/self_attn/k_proj/kernel": "flax_model-00001-of-00007.msgpack",
14
+ "model/layers/0/self_attn/o_proj/kernel": "flax_model-00001-of-00007.msgpack",
15
+ "model/layers/0/self_attn/q_proj/kernel": "flax_model-00002-of-00007.msgpack",
16
+ "model/layers/0/self_attn/v_proj/kernel": "flax_model-00002-of-00007.msgpack",
17
+ "model/layers/1/input_layernorm/weight": "flax_model-00002-of-00007.msgpack",
18
+ "model/layers/1/mlp/down_proj/kernel": "flax_model-00002-of-00007.msgpack",
19
+ "model/layers/1/mlp/gate_proj/kernel": "flax_model-00002-of-00007.msgpack",
20
+ "model/layers/1/mlp/up_proj/kernel": "flax_model-00002-of-00007.msgpack",
21
+ "model/layers/1/post_attention_layernorm/weight": "flax_model-00002-of-00007.msgpack",
22
+ "model/layers/1/self_attn/k_proj/kernel": "flax_model-00002-of-00007.msgpack",
23
+ "model/layers/1/self_attn/o_proj/kernel": "flax_model-00002-of-00007.msgpack",
24
+ "model/layers/1/self_attn/q_proj/kernel": "flax_model-00002-of-00007.msgpack",
25
+ "model/layers/1/self_attn/v_proj/kernel": "flax_model-00002-of-00007.msgpack",
26
+ "model/layers/10/input_layernorm/weight": "flax_model-00002-of-00007.msgpack",
27
+ "model/layers/10/mlp/down_proj/kernel": "flax_model-00002-of-00007.msgpack",
28
+ "model/layers/10/mlp/gate_proj/kernel": "flax_model-00002-of-00007.msgpack",
29
+ "model/layers/10/mlp/up_proj/kernel": "flax_model-00002-of-00007.msgpack",
30
+ "model/layers/10/post_attention_layernorm/weight": "flax_model-00002-of-00007.msgpack",
31
+ "model/layers/10/self_attn/k_proj/kernel": "flax_model-00002-of-00007.msgpack",
32
+ "model/layers/10/self_attn/o_proj/kernel": "flax_model-00002-of-00007.msgpack",
33
+ "model/layers/10/self_attn/q_proj/kernel": "flax_model-00002-of-00007.msgpack",
34
+ "model/layers/10/self_attn/v_proj/kernel": "flax_model-00002-of-00007.msgpack",
35
+ "model/layers/11/input_layernorm/weight": "flax_model-00002-of-00007.msgpack",
36
+ "model/layers/11/mlp/down_proj/kernel": "flax_model-00002-of-00007.msgpack",
37
+ "model/layers/11/mlp/gate_proj/kernel": "flax_model-00002-of-00007.msgpack",
38
+ "model/layers/11/mlp/up_proj/kernel": "flax_model-00002-of-00007.msgpack",
39
+ "model/layers/11/post_attention_layernorm/weight": "flax_model-00002-of-00007.msgpack",
40
+ "model/layers/11/self_attn/k_proj/kernel": "flax_model-00002-of-00007.msgpack",
41
+ "model/layers/11/self_attn/o_proj/kernel": "flax_model-00002-of-00007.msgpack",
42
+ "model/layers/11/self_attn/q_proj/kernel": "flax_model-00002-of-00007.msgpack",
43
+ "model/layers/11/self_attn/v_proj/kernel": "flax_model-00002-of-00007.msgpack",
44
+ "model/layers/12/input_layernorm/weight": "flax_model-00002-of-00007.msgpack",
45
+ "model/layers/12/mlp/down_proj/kernel": "flax_model-00002-of-00007.msgpack",
46
+ "model/layers/12/mlp/gate_proj/kernel": "flax_model-00002-of-00007.msgpack",
47
+ "model/layers/12/mlp/up_proj/kernel": "flax_model-00002-of-00007.msgpack",
48
+ "model/layers/12/post_attention_layernorm/weight": "flax_model-00002-of-00007.msgpack",
49
+ "model/layers/12/self_attn/k_proj/kernel": "flax_model-00002-of-00007.msgpack",
50
+ "model/layers/12/self_attn/o_proj/kernel": "flax_model-00002-of-00007.msgpack",
51
+ "model/layers/12/self_attn/q_proj/kernel": "flax_model-00002-of-00007.msgpack",
52
+ "model/layers/12/self_attn/v_proj/kernel": "flax_model-00002-of-00007.msgpack",
53
+ "model/layers/13/input_layernorm/weight": "flax_model-00002-of-00007.msgpack",
54
+ "model/layers/13/mlp/down_proj/kernel": "flax_model-00002-of-00007.msgpack",
55
+ "model/layers/13/mlp/gate_proj/kernel": "flax_model-00002-of-00007.msgpack",
56
+ "model/layers/13/mlp/up_proj/kernel": "flax_model-00002-of-00007.msgpack",
57
+ "model/layers/13/post_attention_layernorm/weight": "flax_model-00002-of-00007.msgpack",
58
+ "model/layers/13/self_attn/k_proj/kernel": "flax_model-00002-of-00007.msgpack",
59
+ "model/layers/13/self_attn/o_proj/kernel": "flax_model-00002-of-00007.msgpack",
60
+ "model/layers/13/self_attn/q_proj/kernel": "flax_model-00002-of-00007.msgpack",
61
+ "model/layers/13/self_attn/v_proj/kernel": "flax_model-00002-of-00007.msgpack",
62
+ "model/layers/14/input_layernorm/weight": "flax_model-00002-of-00007.msgpack",
63
+ "model/layers/14/mlp/down_proj/kernel": "flax_model-00002-of-00007.msgpack",
64
+ "model/layers/14/mlp/gate_proj/kernel": "flax_model-00002-of-00007.msgpack",
65
+ "model/layers/14/mlp/up_proj/kernel": "flax_model-00003-of-00007.msgpack",
66
+ "model/layers/14/post_attention_layernorm/weight": "flax_model-00003-of-00007.msgpack",
67
+ "model/layers/14/self_attn/k_proj/kernel": "flax_model-00003-of-00007.msgpack",
68
+ "model/layers/14/self_attn/o_proj/kernel": "flax_model-00003-of-00007.msgpack",
69
+ "model/layers/14/self_attn/q_proj/kernel": "flax_model-00003-of-00007.msgpack",
70
+ "model/layers/14/self_attn/v_proj/kernel": "flax_model-00003-of-00007.msgpack",
71
+ "model/layers/15/input_layernorm/weight": "flax_model-00003-of-00007.msgpack",
72
+ "model/layers/15/mlp/down_proj/kernel": "flax_model-00003-of-00007.msgpack",
73
+ "model/layers/15/mlp/gate_proj/kernel": "flax_model-00003-of-00007.msgpack",
74
+ "model/layers/15/mlp/up_proj/kernel": "flax_model-00003-of-00007.msgpack",
75
+ "model/layers/15/post_attention_layernorm/weight": "flax_model-00003-of-00007.msgpack",
76
+ "model/layers/15/self_attn/k_proj/kernel": "flax_model-00003-of-00007.msgpack",
77
+ "model/layers/15/self_attn/o_proj/kernel": "flax_model-00003-of-00007.msgpack",
78
+ "model/layers/15/self_attn/q_proj/kernel": "flax_model-00003-of-00007.msgpack",
79
+ "model/layers/15/self_attn/v_proj/kernel": "flax_model-00003-of-00007.msgpack",
80
+ "model/layers/16/input_layernorm/weight": "flax_model-00003-of-00007.msgpack",
81
+ "model/layers/16/mlp/down_proj/kernel": "flax_model-00003-of-00007.msgpack",
82
+ "model/layers/16/mlp/gate_proj/kernel": "flax_model-00003-of-00007.msgpack",
83
+ "model/layers/16/mlp/up_proj/kernel": "flax_model-00003-of-00007.msgpack",
84
+ "model/layers/16/post_attention_layernorm/weight": "flax_model-00003-of-00007.msgpack",
85
+ "model/layers/16/self_attn/k_proj/kernel": "flax_model-00003-of-00007.msgpack",
86
+ "model/layers/16/self_attn/o_proj/kernel": "flax_model-00003-of-00007.msgpack",
87
+ "model/layers/16/self_attn/q_proj/kernel": "flax_model-00003-of-00007.msgpack",
88
+ "model/layers/16/self_attn/v_proj/kernel": "flax_model-00003-of-00007.msgpack",
89
+ "model/layers/17/input_layernorm/weight": "flax_model-00003-of-00007.msgpack",
90
+ "model/layers/17/mlp/down_proj/kernel": "flax_model-00003-of-00007.msgpack",
91
+ "model/layers/17/mlp/gate_proj/kernel": "flax_model-00003-of-00007.msgpack",
92
+ "model/layers/17/mlp/up_proj/kernel": "flax_model-00003-of-00007.msgpack",
93
+ "model/layers/17/post_attention_layernorm/weight": "flax_model-00003-of-00007.msgpack",
94
+ "model/layers/17/self_attn/k_proj/kernel": "flax_model-00003-of-00007.msgpack",
95
+ "model/layers/17/self_attn/o_proj/kernel": "flax_model-00003-of-00007.msgpack",
96
+ "model/layers/17/self_attn/q_proj/kernel": "flax_model-00003-of-00007.msgpack",
97
+ "model/layers/17/self_attn/v_proj/kernel": "flax_model-00003-of-00007.msgpack",
98
+ "model/layers/18/input_layernorm/weight": "flax_model-00003-of-00007.msgpack",
99
+ "model/layers/18/mlp/down_proj/kernel": "flax_model-00003-of-00007.msgpack",
100
+ "model/layers/18/mlp/gate_proj/kernel": "flax_model-00003-of-00007.msgpack",
101
+ "model/layers/18/mlp/up_proj/kernel": "flax_model-00003-of-00007.msgpack",
102
+ "model/layers/18/post_attention_layernorm/weight": "flax_model-00003-of-00007.msgpack",
103
+ "model/layers/18/self_attn/k_proj/kernel": "flax_model-00003-of-00007.msgpack",
104
+ "model/layers/18/self_attn/o_proj/kernel": "flax_model-00003-of-00007.msgpack",
105
+ "model/layers/18/self_attn/q_proj/kernel": "flax_model-00003-of-00007.msgpack",
106
+ "model/layers/18/self_attn/v_proj/kernel": "flax_model-00003-of-00007.msgpack",
107
+ "model/layers/19/input_layernorm/weight": "flax_model-00003-of-00007.msgpack",
108
+ "model/layers/19/mlp/down_proj/kernel": "flax_model-00003-of-00007.msgpack",
109
+ "model/layers/19/mlp/gate_proj/kernel": "flax_model-00003-of-00007.msgpack",
110
+ "model/layers/19/mlp/up_proj/kernel": "flax_model-00003-of-00007.msgpack",
111
+ "model/layers/19/post_attention_layernorm/weight": "flax_model-00003-of-00007.msgpack",
112
+ "model/layers/19/self_attn/k_proj/kernel": "flax_model-00003-of-00007.msgpack",
113
+ "model/layers/19/self_attn/o_proj/kernel": "flax_model-00003-of-00007.msgpack",
114
+ "model/layers/19/self_attn/q_proj/kernel": "flax_model-00003-of-00007.msgpack",
115
+ "model/layers/19/self_attn/v_proj/kernel": "flax_model-00003-of-00007.msgpack",
116
+ "model/layers/2/input_layernorm/weight": "flax_model-00003-of-00007.msgpack",
117
+ "model/layers/2/mlp/down_proj/kernel": "flax_model-00003-of-00007.msgpack",
118
+ "model/layers/2/mlp/gate_proj/kernel": "flax_model-00004-of-00007.msgpack",
119
+ "model/layers/2/mlp/up_proj/kernel": "flax_model-00004-of-00007.msgpack",
120
+ "model/layers/2/post_attention_layernorm/weight": "flax_model-00004-of-00007.msgpack",
121
+ "model/layers/2/self_attn/k_proj/kernel": "flax_model-00004-of-00007.msgpack",
122
+ "model/layers/2/self_attn/o_proj/kernel": "flax_model-00004-of-00007.msgpack",
123
+ "model/layers/2/self_attn/q_proj/kernel": "flax_model-00004-of-00007.msgpack",
124
+ "model/layers/2/self_attn/v_proj/kernel": "flax_model-00004-of-00007.msgpack",
125
+ "model/layers/20/input_layernorm/weight": "flax_model-00004-of-00007.msgpack",
126
+ "model/layers/20/mlp/down_proj/kernel": "flax_model-00004-of-00007.msgpack",
127
+ "model/layers/20/mlp/gate_proj/kernel": "flax_model-00004-of-00007.msgpack",
128
+ "model/layers/20/mlp/up_proj/kernel": "flax_model-00004-of-00007.msgpack",
129
+ "model/layers/20/post_attention_layernorm/weight": "flax_model-00004-of-00007.msgpack",
130
+ "model/layers/20/self_attn/k_proj/kernel": "flax_model-00004-of-00007.msgpack",
131
+ "model/layers/20/self_attn/o_proj/kernel": "flax_model-00004-of-00007.msgpack",
132
+ "model/layers/20/self_attn/q_proj/kernel": "flax_model-00004-of-00007.msgpack",
133
+ "model/layers/20/self_attn/v_proj/kernel": "flax_model-00004-of-00007.msgpack",
134
+ "model/layers/21/input_layernorm/weight": "flax_model-00004-of-00007.msgpack",
135
+ "model/layers/21/mlp/down_proj/kernel": "flax_model-00004-of-00007.msgpack",
136
+ "model/layers/21/mlp/gate_proj/kernel": "flax_model-00004-of-00007.msgpack",
137
+ "model/layers/21/mlp/up_proj/kernel": "flax_model-00004-of-00007.msgpack",
138
+ "model/layers/21/post_attention_layernorm/weight": "flax_model-00004-of-00007.msgpack",
139
+ "model/layers/21/self_attn/k_proj/kernel": "flax_model-00004-of-00007.msgpack",
140
+ "model/layers/21/self_attn/o_proj/kernel": "flax_model-00004-of-00007.msgpack",
141
+ "model/layers/21/self_attn/q_proj/kernel": "flax_model-00004-of-00007.msgpack",
142
+ "model/layers/21/self_attn/v_proj/kernel": "flax_model-00004-of-00007.msgpack",
143
+ "model/layers/22/input_layernorm/weight": "flax_model-00004-of-00007.msgpack",
144
+ "model/layers/22/mlp/down_proj/kernel": "flax_model-00004-of-00007.msgpack",
145
+ "model/layers/22/mlp/gate_proj/kernel": "flax_model-00004-of-00007.msgpack",
146
+ "model/layers/22/mlp/up_proj/kernel": "flax_model-00004-of-00007.msgpack",
147
+ "model/layers/22/post_attention_layernorm/weight": "flax_model-00004-of-00007.msgpack",
148
+ "model/layers/22/self_attn/k_proj/kernel": "flax_model-00004-of-00007.msgpack",
149
+ "model/layers/22/self_attn/o_proj/kernel": "flax_model-00004-of-00007.msgpack",
150
+ "model/layers/22/self_attn/q_proj/kernel": "flax_model-00004-of-00007.msgpack",
151
+ "model/layers/22/self_attn/v_proj/kernel": "flax_model-00004-of-00007.msgpack",
152
+ "model/layers/23/input_layernorm/weight": "flax_model-00004-of-00007.msgpack",
153
+ "model/layers/23/mlp/down_proj/kernel": "flax_model-00004-of-00007.msgpack",
154
+ "model/layers/23/mlp/gate_proj/kernel": "flax_model-00004-of-00007.msgpack",
155
+ "model/layers/23/mlp/up_proj/kernel": "flax_model-00004-of-00007.msgpack",
156
+ "model/layers/23/post_attention_layernorm/weight": "flax_model-00004-of-00007.msgpack",
157
+ "model/layers/23/self_attn/k_proj/kernel": "flax_model-00004-of-00007.msgpack",
158
+ "model/layers/23/self_attn/o_proj/kernel": "flax_model-00004-of-00007.msgpack",
159
+ "model/layers/23/self_attn/q_proj/kernel": "flax_model-00004-of-00007.msgpack",
160
+ "model/layers/23/self_attn/v_proj/kernel": "flax_model-00004-of-00007.msgpack",
161
+ "model/layers/24/input_layernorm/weight": "flax_model-00004-of-00007.msgpack",
162
+ "model/layers/24/mlp/down_proj/kernel": "flax_model-00004-of-00007.msgpack",
163
+ "model/layers/24/mlp/gate_proj/kernel": "flax_model-00004-of-00007.msgpack",
164
+ "model/layers/24/mlp/up_proj/kernel": "flax_model-00004-of-00007.msgpack",
165
+ "model/layers/24/post_attention_layernorm/weight": "flax_model-00004-of-00007.msgpack",
166
+ "model/layers/24/self_attn/k_proj/kernel": "flax_model-00004-of-00007.msgpack",
167
+ "model/layers/24/self_attn/o_proj/kernel": "flax_model-00004-of-00007.msgpack",
168
+ "model/layers/24/self_attn/q_proj/kernel": "flax_model-00004-of-00007.msgpack",
169
+ "model/layers/24/self_attn/v_proj/kernel": "flax_model-00004-of-00007.msgpack",
170
+ "model/layers/25/input_layernorm/weight": "flax_model-00004-of-00007.msgpack",
171
+ "model/layers/25/mlp/down_proj/kernel": "flax_model-00005-of-00007.msgpack",
172
+ "model/layers/25/mlp/gate_proj/kernel": "flax_model-00005-of-00007.msgpack",
173
+ "model/layers/25/mlp/up_proj/kernel": "flax_model-00005-of-00007.msgpack",
174
+ "model/layers/25/post_attention_layernorm/weight": "flax_model-00005-of-00007.msgpack",
175
+ "model/layers/25/self_attn/k_proj/kernel": "flax_model-00005-of-00007.msgpack",
176
+ "model/layers/25/self_attn/o_proj/kernel": "flax_model-00005-of-00007.msgpack",
177
+ "model/layers/25/self_attn/q_proj/kernel": "flax_model-00005-of-00007.msgpack",
178
+ "model/layers/25/self_attn/v_proj/kernel": "flax_model-00005-of-00007.msgpack",
179
+ "model/layers/26/input_layernorm/weight": "flax_model-00005-of-00007.msgpack",
180
+ "model/layers/26/mlp/down_proj/kernel": "flax_model-00005-of-00007.msgpack",
181
+ "model/layers/26/mlp/gate_proj/kernel": "flax_model-00005-of-00007.msgpack",
182
+ "model/layers/26/mlp/up_proj/kernel": "flax_model-00005-of-00007.msgpack",
183
+ "model/layers/26/post_attention_layernorm/weight": "flax_model-00005-of-00007.msgpack",
184
+ "model/layers/26/self_attn/k_proj/kernel": "flax_model-00005-of-00007.msgpack",
185
+ "model/layers/26/self_attn/o_proj/kernel": "flax_model-00005-of-00007.msgpack",
186
+ "model/layers/26/self_attn/q_proj/kernel": "flax_model-00005-of-00007.msgpack",
187
+ "model/layers/26/self_attn/v_proj/kernel": "flax_model-00005-of-00007.msgpack",
188
+ "model/layers/27/input_layernorm/weight": "flax_model-00005-of-00007.msgpack",
189
+ "model/layers/27/mlp/down_proj/kernel": "flax_model-00005-of-00007.msgpack",
190
+ "model/layers/27/mlp/gate_proj/kernel": "flax_model-00005-of-00007.msgpack",
191
+ "model/layers/27/mlp/up_proj/kernel": "flax_model-00005-of-00007.msgpack",
192
+ "model/layers/27/post_attention_layernorm/weight": "flax_model-00005-of-00007.msgpack",
193
+ "model/layers/27/self_attn/k_proj/kernel": "flax_model-00005-of-00007.msgpack",
194
+ "model/layers/27/self_attn/o_proj/kernel": "flax_model-00005-of-00007.msgpack",
195
+ "model/layers/27/self_attn/q_proj/kernel": "flax_model-00005-of-00007.msgpack",
196
+ "model/layers/27/self_attn/v_proj/kernel": "flax_model-00005-of-00007.msgpack",
197
+ "model/layers/28/input_layernorm/weight": "flax_model-00005-of-00007.msgpack",
198
+ "model/layers/28/mlp/down_proj/kernel": "flax_model-00005-of-00007.msgpack",
199
+ "model/layers/28/mlp/gate_proj/kernel": "flax_model-00005-of-00007.msgpack",
200
+ "model/layers/28/mlp/up_proj/kernel": "flax_model-00005-of-00007.msgpack",
201
+ "model/layers/28/post_attention_layernorm/weight": "flax_model-00005-of-00007.msgpack",
202
+ "model/layers/28/self_attn/k_proj/kernel": "flax_model-00005-of-00007.msgpack",
203
+ "model/layers/28/self_attn/o_proj/kernel": "flax_model-00005-of-00007.msgpack",
204
+ "model/layers/28/self_attn/q_proj/kernel": "flax_model-00005-of-00007.msgpack",
205
+ "model/layers/28/self_attn/v_proj/kernel": "flax_model-00005-of-00007.msgpack",
206
+ "model/layers/29/input_layernorm/weight": "flax_model-00005-of-00007.msgpack",
207
+ "model/layers/29/mlp/down_proj/kernel": "flax_model-00005-of-00007.msgpack",
208
+ "model/layers/29/mlp/gate_proj/kernel": "flax_model-00005-of-00007.msgpack",
209
+ "model/layers/29/mlp/up_proj/kernel": "flax_model-00005-of-00007.msgpack",
210
+ "model/layers/29/post_attention_layernorm/weight": "flax_model-00005-of-00007.msgpack",
211
+ "model/layers/29/self_attn/k_proj/kernel": "flax_model-00005-of-00007.msgpack",
212
+ "model/layers/29/self_attn/o_proj/kernel": "flax_model-00005-of-00007.msgpack",
213
+ "model/layers/29/self_attn/q_proj/kernel": "flax_model-00005-of-00007.msgpack",
214
+ "model/layers/29/self_attn/v_proj/kernel": "flax_model-00005-of-00007.msgpack",
215
+ "model/layers/3/input_layernorm/weight": "flax_model-00005-of-00007.msgpack",
216
+ "model/layers/3/mlp/down_proj/kernel": "flax_model-00005-of-00007.msgpack",
217
+ "model/layers/3/mlp/gate_proj/kernel": "flax_model-00005-of-00007.msgpack",
218
+ "model/layers/3/mlp/up_proj/kernel": "flax_model-00006-of-00007.msgpack",
219
+ "model/layers/3/post_attention_layernorm/weight": "flax_model-00006-of-00007.msgpack",
220
+ "model/layers/3/self_attn/k_proj/kernel": "flax_model-00006-of-00007.msgpack",
221
+ "model/layers/3/self_attn/o_proj/kernel": "flax_model-00006-of-00007.msgpack",
222
+ "model/layers/3/self_attn/q_proj/kernel": "flax_model-00006-of-00007.msgpack",
223
+ "model/layers/3/self_attn/v_proj/kernel": "flax_model-00006-of-00007.msgpack",
224
+ "model/layers/30/input_layernorm/weight": "flax_model-00006-of-00007.msgpack",
225
+ "model/layers/30/mlp/down_proj/kernel": "flax_model-00006-of-00007.msgpack",
226
+ "model/layers/30/mlp/gate_proj/kernel": "flax_model-00006-of-00007.msgpack",
227
+ "model/layers/30/mlp/up_proj/kernel": "flax_model-00006-of-00007.msgpack",
228
+ "model/layers/30/post_attention_layernorm/weight": "flax_model-00006-of-00007.msgpack",
229
+ "model/layers/30/self_attn/k_proj/kernel": "flax_model-00006-of-00007.msgpack",
230
+ "model/layers/30/self_attn/o_proj/kernel": "flax_model-00006-of-00007.msgpack",
231
+ "model/layers/30/self_attn/q_proj/kernel": "flax_model-00006-of-00007.msgpack",
232
+ "model/layers/30/self_attn/v_proj/kernel": "flax_model-00006-of-00007.msgpack",
233
+ "model/layers/31/input_layernorm/weight": "flax_model-00006-of-00007.msgpack",
234
+ "model/layers/31/mlp/down_proj/kernel": "flax_model-00006-of-00007.msgpack",
235
+ "model/layers/31/mlp/gate_proj/kernel": "flax_model-00006-of-00007.msgpack",
236
+ "model/layers/31/mlp/up_proj/kernel": "flax_model-00006-of-00007.msgpack",
237
+ "model/layers/31/post_attention_layernorm/weight": "flax_model-00006-of-00007.msgpack",
238
+ "model/layers/31/self_attn/k_proj/kernel": "flax_model-00006-of-00007.msgpack",
239
+ "model/layers/31/self_attn/o_proj/kernel": "flax_model-00006-of-00007.msgpack",
240
+ "model/layers/31/self_attn/q_proj/kernel": "flax_model-00006-of-00007.msgpack",
241
+ "model/layers/31/self_attn/v_proj/kernel": "flax_model-00006-of-00007.msgpack",
242
+ "model/layers/4/input_layernorm/weight": "flax_model-00006-of-00007.msgpack",
243
+ "model/layers/4/mlp/down_proj/kernel": "flax_model-00006-of-00007.msgpack",
244
+ "model/layers/4/mlp/gate_proj/kernel": "flax_model-00006-of-00007.msgpack",
245
+ "model/layers/4/mlp/up_proj/kernel": "flax_model-00006-of-00007.msgpack",
246
+ "model/layers/4/post_attention_layernorm/weight": "flax_model-00006-of-00007.msgpack",
247
+ "model/layers/4/self_attn/k_proj/kernel": "flax_model-00006-of-00007.msgpack",
248
+ "model/layers/4/self_attn/o_proj/kernel": "flax_model-00006-of-00007.msgpack",
249
+ "model/layers/4/self_attn/q_proj/kernel": "flax_model-00006-of-00007.msgpack",
250
+ "model/layers/4/self_attn/v_proj/kernel": "flax_model-00006-of-00007.msgpack",
251
+ "model/layers/5/input_layernorm/weight": "flax_model-00006-of-00007.msgpack",
252
+ "model/layers/5/mlp/down_proj/kernel": "flax_model-00006-of-00007.msgpack",
253
+ "model/layers/5/mlp/gate_proj/kernel": "flax_model-00006-of-00007.msgpack",
254
+ "model/layers/5/mlp/up_proj/kernel": "flax_model-00006-of-00007.msgpack",
255
+ "model/layers/5/post_attention_layernorm/weight": "flax_model-00006-of-00007.msgpack",
256
+ "model/layers/5/self_attn/k_proj/kernel": "flax_model-00006-of-00007.msgpack",
257
+ "model/layers/5/self_attn/o_proj/kernel": "flax_model-00006-of-00007.msgpack",
258
+ "model/layers/5/self_attn/q_proj/kernel": "flax_model-00006-of-00007.msgpack",
259
+ "model/layers/5/self_attn/v_proj/kernel": "flax_model-00006-of-00007.msgpack",
260
+ "model/layers/6/input_layernorm/weight": "flax_model-00006-of-00007.msgpack",
261
+ "model/layers/6/mlp/down_proj/kernel": "flax_model-00006-of-00007.msgpack",
262
+ "model/layers/6/mlp/gate_proj/kernel": "flax_model-00006-of-00007.msgpack",
263
+ "model/layers/6/mlp/up_proj/kernel": "flax_model-00006-of-00007.msgpack",
264
+ "model/layers/6/post_attention_layernorm/weight": "flax_model-00006-of-00007.msgpack",
265
+ "model/layers/6/self_attn/k_proj/kernel": "flax_model-00006-of-00007.msgpack",
266
+ "model/layers/6/self_attn/o_proj/kernel": "flax_model-00006-of-00007.msgpack",
267
+ "model/layers/6/self_attn/q_proj/kernel": "flax_model-00006-of-00007.msgpack",
268
+ "model/layers/6/self_attn/v_proj/kernel": "flax_model-00006-of-00007.msgpack",
269
+ "model/layers/7/input_layernorm/weight": "flax_model-00006-of-00007.msgpack",
270
+ "model/layers/7/mlp/down_proj/kernel": "flax_model-00006-of-00007.msgpack",
271
+ "model/layers/7/mlp/gate_proj/kernel": "flax_model-00007-of-00007.msgpack",
272
+ "model/layers/7/mlp/up_proj/kernel": "flax_model-00007-of-00007.msgpack",
273
+ "model/layers/7/post_attention_layernorm/weight": "flax_model-00007-of-00007.msgpack",
274
+ "model/layers/7/self_attn/k_proj/kernel": "flax_model-00007-of-00007.msgpack",
275
+ "model/layers/7/self_attn/o_proj/kernel": "flax_model-00007-of-00007.msgpack",
276
+ "model/layers/7/self_attn/q_proj/kernel": "flax_model-00007-of-00007.msgpack",
277
+ "model/layers/7/self_attn/v_proj/kernel": "flax_model-00007-of-00007.msgpack",
278
+ "model/layers/8/input_layernorm/weight": "flax_model-00007-of-00007.msgpack",
279
+ "model/layers/8/mlp/down_proj/kernel": "flax_model-00007-of-00007.msgpack",
280
+ "model/layers/8/mlp/gate_proj/kernel": "flax_model-00007-of-00007.msgpack",
281
+ "model/layers/8/mlp/up_proj/kernel": "flax_model-00007-of-00007.msgpack",
282
+ "model/layers/8/post_attention_layernorm/weight": "flax_model-00007-of-00007.msgpack",
283
+ "model/layers/8/self_attn/k_proj/kernel": "flax_model-00007-of-00007.msgpack",
284
+ "model/layers/8/self_attn/o_proj/kernel": "flax_model-00007-of-00007.msgpack",
285
+ "model/layers/8/self_attn/q_proj/kernel": "flax_model-00007-of-00007.msgpack",
286
+ "model/layers/8/self_attn/v_proj/kernel": "flax_model-00007-of-00007.msgpack",
287
+ "model/layers/9/input_layernorm/weight": "flax_model-00007-of-00007.msgpack",
288
+ "model/layers/9/mlp/down_proj/kernel": "flax_model-00007-of-00007.msgpack",
289
+ "model/layers/9/mlp/gate_proj/kernel": "flax_model-00007-of-00007.msgpack",
290
+ "model/layers/9/mlp/up_proj/kernel": "flax_model-00007-of-00007.msgpack",
291
+ "model/layers/9/post_attention_layernorm/weight": "flax_model-00007-of-00007.msgpack",
292
+ "model/layers/9/self_attn/k_proj/kernel": "flax_model-00007-of-00007.msgpack",
293
+ "model/layers/9/self_attn/o_proj/kernel": "flax_model-00007-of-00007.msgpack",
294
+ "model/layers/9/self_attn/q_proj/kernel": "flax_model-00007-of-00007.msgpack",
295
+ "model/layers/9/self_attn/v_proj/kernel": "flax_model-00007-of-00007.msgpack",
296
+ "model/norm/weight": "flax_model-00007-of-00007.msgpack"
297
+ }
298
+ }
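For reference, a short sketch of how the shard index above can be queried to find which `.msgpack` file holds a given parameter (it assumes the index has been downloaded locally as `flax_model.msgpack.index.json`):

```python
import json
from collections import Counter

# Illustrative only: inspect the shard index shown above.
with open("flax_model.msgpack.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])           # 32121044992 bytes (~32 GB of float32 weights)
print(index["weight_map"]["model/norm/weight"])  # flax_model-00007-of-00007.msgpack

# How many parameters each of the seven shards holds:
print(Counter(index["weight_map"].values()))
```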
modelling_flax_tpu_llama.py ADDED
@@ -0,0 +1,1156 @@
1
+ """Flax TPU LLaMA model."""
2
+
3
+ import math
4
+ from functools import partial
5
+ from typing import Optional, Tuple
6
+
7
+ import flax.linen as nn
8
+ import jax
9
+ import jax.numpy as jnp
10
+ import numpy as np
11
+ from flax.core.frozen_dict import FrozenDict, freeze, unfreeze
12
+ from flax.linen import combine_masks, make_causal_mask
13
+ from flax.linen.attention import dot_product_attention_weights
14
+ from flax.linen import partitioning as nn_partitioning
15
+ from flax.traverse_util import flatten_dict, unflatten_dict
16
+ from jax import lax
17
+ from jax.experimental.pallas.ops.tpu.flash_attention import (
18
+ flash_attention as pallas_flash_attention,
19
+ )
20
+ from jax.experimental.shard_map import shard_map
21
+ from jax.sharding import PartitionSpec as P
22
+
23
+ from transformers.modeling_flax_outputs import FlaxBaseModelOutput, FlaxCausalLMOutput
24
+ from transformers.modeling_flax_utils import (
25
+ ACT2FN,
26
+ FlaxPreTrainedModel,
27
+ append_call_sample_docstring,
28
+ )
29
+ from transformers.utils import (
30
+ add_start_docstrings,
31
+ add_start_docstrings_to_model_forward,
32
+ logging,
33
+ )
34
+ from .configuration_tpu_llama import TPULlamaConfig
35
+
36
+ logger = logging.get_logger(__name__)
37
+
38
+ _CONFIG_FOR_DOC = "TPULlamaConfig"
39
+ _CHECKPOINT_FOR_DOC = "afmck/testing-llama-tiny"
40
+ _REAL_CHECKPOINT_FOR_DOC = "openlm-research/open_llama_3b_v2"
41
+
42
+ LLAMA_START_DOCSTRING = r"""
43
+
44
+ This model inherits from [`FlaxPreTrainedModel`]. Check the superclass documentation for the generic methods the
45
+ library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
46
+ etc.)
47
+
48
+ This model is also a Flax Linen
49
+ [flax.nn.Module](https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html) subclass. Use it as a
50
+ regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
51
+
52
+ Finally, this model supports inherent JAX features such as:
53
+
54
+ - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit)
55
+ - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation)
56
+ - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap)
57
+ - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap)
58
+
59
+ Parameters:
60
+ config ([`LlamaConfig`]): Model configuration class with all the parameters of the model.
61
+ Initializing with a config file does not load the weights associated with the model, only the
62
+ configuration. Check out the [`~FlaxPreTrainedModel.from_pretrained`] method to load the model weights.
63
+ dtype (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`):
64
+ The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16`, or
65
+ `jax.numpy.bfloat16`.
66
+
67
+ This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
68
+ specified, all the computation will be performed with the given `dtype`.
69
+
70
+ **Note that this only specifies the dtype of the computation and does not influence the dtype of model
71
+ parameters.**
72
+
73
+ If you wish to change the dtype of the model parameters, see [`~FlaxPreTrainedModel.to_fp16`] and
74
+ [`~FlaxPreTrainedModel.to_bf16`].
75
+ """
76
+
77
+ LLAMA_INPUTS_DOCSTRING = r"""
78
+ Args:
79
+ input_ids (`numpy.ndarray` of shape `(batch_size, input_ids_length)`):
80
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
81
+ it.
82
+
83
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
84
+ [`PreTrainedTokenizer.__call__`] for details.
85
+
86
+ [What are input IDs?](../glossary#input-ids)
87
+ attention_mask (`numpy.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
88
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
89
+
90
+ - 1 for tokens that are **not masked**,
91
+ - 0 for tokens that are **masked**.
92
+
93
+ [What are attention masks?](../glossary#attention-mask)
94
+
95
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
96
+ [`PreTrainedTokenizer.__call__`] for details.
97
+
98
+ If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
99
+ `past_key_values`).
100
+
101
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
102
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
103
+ information on the default strategy.
104
+
105
+ - 1 indicates the head is **not masked**,
106
+ - 0 indicates the head is **masked**.
107
+ position_ids (`numpy.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
108
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
109
+ config.n_positions - 1]`.
110
+
111
+ [What are position IDs?](../glossary#position-ids)
112
+ past_key_values (`Dict[str, np.ndarray]`, *optional*, returned by `init_cache` or when passing previous `past_key_values`):
113
+ Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
114
+ auto-regressive decoding. Pre-computed key and value hidden-states are of shape *[batch_size, max_length]*.
115
+ output_attentions (`bool`, *optional*):
116
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
117
+ tensors for more detail.
118
+ output_hidden_states (`bool`, *optional*):
119
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
120
+ more detail.
121
+ return_dict (`bool`, *optional*):
122
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
123
+ """
124
+
125
+ remat = nn_partitioning.remat
126
+
127
+ # adapted from modeling_rope_utils
128
+ def _compute_default_rope_parameters(
129
+ config=None,
130
+ seq_len: Optional[int] = None,
131
+ **rope_kwargs,
132
+ ):
133
+ if config is not None and len(rope_kwargs) > 0:
134
+ raise ValueError(
135
+ "Unexpected arguments: `**rope_kwargs` and `config` are mutually exclusive in "
136
+ f"`_compute_default_rope_parameters`, got `rope_kwargs`={rope_kwargs} and `config`={config}"
137
+ )
138
+ if len(rope_kwargs) > 0:
139
+ base = rope_kwargs["base"]
140
+ dim = rope_kwargs["dim"]
141
+ elif config is not None:
142
+ base = config.rope_theta
143
+ partial_rotary_factor = config.partial_rotary_factor if hasattr(config, "partial_rotary_factor") else 1.0
144
+ head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
145
+ dim = int(head_dim * partial_rotary_factor)
146
+
147
+ attention_factor = 1.0 # Unused in this type of RoPE
148
+
149
+ # Compute the inverse frequencies
150
+ inv_freq = 1.0 / (base ** (jnp.arange(0, dim, 2, dtype=jnp.int32).astype(jnp.float32) / dim))
151
+ return inv_freq, attention_factor
152
+
153
+
154
+ def _compute_longrope_parameters(
155
+ config, seq_len: Optional[int] = None, **rope_kwargs
156
+ ):
157
+ # TODO (joao): use the new `original_max_position_embeddings` from rope_scaling
158
+ # No need to keep BC with longrope, unreleased when this new pattern was created.
159
+ if len(rope_kwargs) > 0:
160
+ raise ValueError(
161
+ "Unexpected arguments: `**rope_kwargs` should be unset in `_compute_longrope_parameters`, got "
162
+ f"{rope_kwargs}"
163
+ )
164
+
165
+ base = config.rope_theta
166
+ partial_rotary_factor = config.partial_rotary_factor if hasattr(config, "partial_rotary_factor") else 1.0
167
+ head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
168
+ dim = int(head_dim * partial_rotary_factor)
169
+ long_factor = config.rope_scaling["long_factor"]
170
+ short_factor = config.rope_scaling["short_factor"]
171
+ factor = config.rope_scaling.get("factor")
172
+ attention_factor = config.rope_scaling.get("attention_factor")
173
+
174
+ # NOTE: Phi3 (and potentially other models) modify `max_position_embeddings` and have a
175
+ # `original_max_position_embeddings` field containing the pretrained value. They use the ratio between these two
176
+ # values to compute the default attention scaling factor, instead of using `factor`.
177
+ if hasattr(config, "original_max_position_embeddings"):
178
+ if seq_len and seq_len < config.original_max_position_embeddings:
179
+ expanded_max_position_embeddings = config.original_max_position_embeddings
180
+ else:
181
+ expanded_max_position_embeddings = config.max_position_embeddings
182
+ max_position_embeddings = config.original_max_position_embeddings
183
+ factor = expanded_max_position_embeddings / max_position_embeddings
184
+ else:
185
+ max_position_embeddings = config.max_position_embeddings
186
+ expanded_max_position_embeddings = max_position_embeddings * factor
187
+
188
+ # Sets the attention factor as suggested in the paper
189
+ if attention_factor is None:
190
+ if factor <= 1.0:
191
+ attention_factor = 1.0
192
+ else:
193
+ attention_factor = math.sqrt(1 + math.log(factor) / math.log(max_position_embeddings))
194
+
195
+ # Compute the inverse frequencies -- scaled based on the target sequence length
196
+ if expanded_max_position_embeddings > max_position_embeddings:
197
+ ext_factors = jnp.array(long_factor, dtype=jnp.float32)
198
+ else:
199
+ ext_factors = jnp.array(short_factor, dtype=jnp.float32)
200
+ inv_freq_shape = jnp.arange(0, dim, 2, dtype=jnp.int64).astype(jnp.float32) / dim
201
+ inv_freq = 1.0 / (ext_factors * base**inv_freq_shape)
202
+
203
+ return inv_freq, attention_factor
204
+
205
+
206
+ def _compute_llama3_parameters(config, seq_len: Optional[int] = None, **rope_kwargs):
207
+ # Gets the default RoPE parameters
208
+ inv_freq, attention_factor = _compute_default_rope_parameters(config, seq_len, **rope_kwargs)
209
+
210
+ factor = config.rope_scaling["factor"] # `8` in the original implementation
211
+ low_freq_factor = config.rope_scaling["low_freq_factor"] # `1` in the original implementation
212
+ high_freq_factor = config.rope_scaling["high_freq_factor"] # `4` in the original implementation
213
+ old_context_len = config.rope_scaling["original_max_position_embeddings"] # `8192` in the original implementation
214
+
215
+ low_freq_wavelen = old_context_len / low_freq_factor
216
+ high_freq_wavelen = old_context_len / high_freq_factor
217
+
218
+ wavelen = 2 * math.pi / inv_freq
219
+ # wavelen < high_freq_wavelen: do nothing
220
+ # wavelen > low_freq_wavelen: divide by factor
221
+ inv_freq_llama = jnp.where(wavelen > low_freq_wavelen, inv_freq / factor, inv_freq)
222
+ # otherwise: interpolate between the two, using a smooth factor
223
+ smooth_factor = (old_context_len / wavelen - low_freq_factor) / (high_freq_factor - low_freq_factor)
224
+ smoothed_inv_freq = (1 - smooth_factor) * inv_freq_llama / factor + smooth_factor * inv_freq_llama
225
+ is_medium_freq = ~(wavelen < high_freq_wavelen) * ~(wavelen > low_freq_wavelen)
226
+ inv_freq_llama = jnp.where(is_medium_freq, smoothed_inv_freq, inv_freq_llama)
227
+
228
+ return inv_freq_llama, attention_factor
229
+
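+ # Worked example for _compute_llama3_parameters (assumed llama3-style values: factor=8,
+ # low_freq_factor=1, high_freq_factor=4, original_max_position_embeddings=8192):
+ #   low_freq_wavelen  = 8192 / 1 = 8192
+ #   high_freq_wavelen = 8192 / 4 = 2048
+ # frequencies with wavelength < 2048 are left unchanged, those with wavelength > 8192 are divided
+ # by 8, and wavelengths in between are interpolated via `smooth_factor`.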
230
+
231
+ ROPE_INIT_FUNCTIONS = {
232
+ "default": _compute_default_rope_parameters,
233
+ "llama3": _compute_llama3_parameters,
234
+ "longrope": _compute_longrope_parameters,
235
+ }
236
+
237
+
238
+ def create_sinusoidal_positions(num_pos, dim):
239
+ inv_freq = 1.0 / (10000 ** (np.arange(0, dim, 2) / dim))
240
+ freqs = np.einsum("i , j -> i j", np.arange(num_pos), inv_freq).astype("float32")
241
+
242
+ emb = np.concatenate((freqs, freqs), axis=-1)
243
+ out = np.concatenate((np.sin(emb)[:, None, :], np.cos(emb)[:, None, :]), axis=-1)
244
+ return jnp.array(out[:, :, :num_pos])
245
+
246
+
247
+ # Copied from transformers.models.llama.modeling_flax_llama.rotate_half
248
+ def rotate_half(tensor):
249
+ """Rotates half the hidden dims of the input."""
250
+ rotate_half_tensor = jnp.concatenate(
251
+ (-tensor[..., tensor.shape[-1] // 2 :], tensor[..., : tensor.shape[-1] // 2]), axis=-1
252
+ )
253
+ return rotate_half_tensor
254
+
255
+
256
+ # Adapted from transformers.models.llama.modeling_flax_llama.apply_rotary_pos_emb
257
+ def apply_rotary_pos_emb(tensor, sin_pos, cos_pos):
258
+ return (tensor * cos_pos[:, :, None, :]) + (rotate_half(tensor) * sin_pos[:, :, None, :])
259
+
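+ # Illustration (toy values): rotate_half([x1, x2, x3, x4]) == [-x3, -x4, x1, x2], so
+ # apply_rotary_pos_emb computes tensor * cos + rotate_half(tensor) * sin per position --
+ # the "rotate-half" RoPE formulation, with cos/sin broadcast over the head axis.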
260
+
261
+ class FlaxTPULlamaRMSNorm(nn.Module):
262
+ config: TPULlamaConfig
263
+ dtype: jnp.dtype = jnp.float32
264
+ override_dim: Optional[int] = None
265
+
266
+ def setup(self):
267
+ if self.override_dim is not None:
268
+ dim = self.override_dim
269
+ else:
270
+ dim = self.config.hidden_size
271
+
272
+ self.epsilon = self.config.rms_norm_eps
273
+ self.weight = self.param("weight", lambda _, shape: jnp.ones(shape), dim)
274
+
275
+ def __call__(self, hidden_states):
276
+ variance = jnp.asarray(hidden_states, dtype=jnp.float32)
277
+ variance = jnp.power(variance, 2)
278
+ variance = variance.mean(-1, keepdims=True)
279
+ # use `jax.numpy.sqrt` as `jax.lax.rsqrt` does not match `torch.rsqrt`
280
+ hidden_states = hidden_states / jnp.sqrt(variance + self.epsilon)
281
+
282
+ return self.weight * jnp.asarray(hidden_states, dtype=self.dtype)
283
+
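+ # Reference formula: FlaxTPULlamaRMSNorm(x) = weight * x / sqrt(mean(x**2, axis=-1) + eps),
+ # with the variance computed in float32 and the result cast back to the module dtype.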
284
+
285
+ class FlaxTPULlamaRotaryEmbedding(nn.Module):
286
+ config: TPULlamaConfig
287
+ dtype: jnp.dtype = jnp.float32
288
+
289
+ def setup(self):
290
+ self.rope_kwargs = {}
291
+
292
+ if self.config.rope_scaling is not None:
293
+ self.rope_type = self.config.rope_scaling.get("rope_type", self.config.rope_scaling.get("type"))
294
+ else:
295
+ self.rope_type = "default"
296
+ self.max_seq_len_cached = self.config.max_position_embeddings
297
+ self.original_max_seq_len = self.config.max_position_embeddings
298
+
299
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
300
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, **self.rope_kwargs)
301
+ self.inv_freq = self.original_inv_freq = inv_freq
302
+
303
+ def __call__(self, x, position_ids):
304
+ inv_freq_expanded = jnp.tile(
305
+ self.inv_freq[None, :, None].astype(jnp.float32),
306
+ (position_ids.shape[0], 1, 1),
307
+ )
308
+ position_ids_expanded = position_ids[:, None, :].astype(jnp.float32)
309
+
310
+ freqs = jnp.swapaxes(jnp.matmul(inv_freq_expanded, position_ids_expanded), 1, 2)
311
+ emb = jnp.concatenate([freqs, freqs], axis=-1)
312
+ cos = jnp.cos(emb)
313
+ sin = jnp.sin(emb)
314
+
315
+ cos = cos * self.attention_scaling
316
+ sin = sin * self.attention_scaling
317
+
318
+ return cos.astype(x.dtype), sin.astype(x.dtype)
319
+
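+ # Shape note (illustrative): given position_ids of shape (batch, seq_len), the returned cos/sin
+ # each have shape (batch, seq_len, head_dim); `x` is used only for its dtype, and the head axis
+ # is added by broadcasting inside apply_rotary_pos_emb.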
320
+
321
+ class FlaxTPULlamaAttention(nn.Module):
322
+ config: TPULlamaConfig
323
+ dtype: jnp.dtype = jnp.float32
324
+ causal: bool = True
325
+ is_cross_attention: bool = False
326
+
327
+ def setup(self):
328
+ config = self.config
329
+ self.embed_dim = config.hidden_size
330
+ self.num_heads = config.num_attention_heads
331
+ self.head_dim = getattr(config, "head_dim", self.embed_dim // self.num_heads)
332
+ self.num_key_value_heads = config.num_key_value_heads
333
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
334
+ self.attention_softmax_in_fp32 = self.dtype is not jnp.float32
335
+
336
+ dense = partial(
337
+ nn.Dense,
338
+ use_bias=config.attention_bias,
339
+ dtype=self.dtype,
340
+ kernel_init=jax.nn.initializers.normal(self.config.initializer_range),
341
+ )
342
+
343
+ self.q_proj = dense(self.num_heads * self.head_dim)
344
+ self.k_proj = dense(self.num_key_value_heads * self.head_dim)
345
+ self.v_proj = dense(self.num_key_value_heads * self.head_dim)
346
+ self.o_proj = dense(self.embed_dim)
347
+
348
+ if self.config.add_qk_norm:
349
+ if self.config.qk_norm_position == "post_split":
350
+ self.q_norm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype, override_dim=self.head_dim)
351
+ self.k_norm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype, override_dim=self.head_dim)
352
+ elif self.config.qk_norm_position == "pre_split":
353
+ self.q_norm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype)
354
+ self.k_norm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype)
355
+
356
+ self.causal_mask = make_causal_mask(
357
+ jnp.ones(
358
+ (1, getattr(config, "max_length", config.max_position_embeddings)),
359
+ dtype="bool",
360
+ ),
361
+ dtype="bool",
362
+ )
363
+
364
+ def _split_heads(self, hidden_states, num_heads):
365
+ return hidden_states.reshape(hidden_states.shape[:2] + (num_heads, self.head_dim))
366
+
367
+ def _merge_heads(self, hidden_states, num_heads):
368
+ return hidden_states.reshape(hidden_states.shape[:2] + (num_heads * self.head_dim,))
369
+
370
+ @nn.compact
371
+ # Copied from transformers.models.gpt_neo.modeling_flax_gpt_neo.FlaxGPTNeoSelfAttention._concatenate_to_cache
372
+ def _concatenate_to_cache(self, key, value, query, attention_mask):
373
+ """
374
+ This function takes projected key, value states from a single input token and concatenates the states to cached
375
+ states from previous steps. This function is slightly adapted from the official Flax repository:
376
+ https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/linen/attention.py#L252
377
+ """
378
+ # detect if we're initializing by absence of existing cache data.
379
+ is_initialized = self.has_variable("cache", "cached_key")
380
+ cached_key = self.variable("cache", "cached_key", jnp.zeros, key.shape, key.dtype)
381
+ cached_value = self.variable("cache", "cached_value", jnp.zeros, value.shape, value.dtype)
382
+ cache_index = self.variable("cache", "cache_index", lambda: jnp.array(0, dtype=jnp.int32))
383
+
384
+ if is_initialized:
385
+ *batch_dims, max_length, num_heads, depth_per_head = cached_key.value.shape
386
+ # update key, value caches with our new 1d spatial slices
387
+ cur_index = cache_index.value
388
+ indices = (0,) * len(batch_dims) + (cur_index, 0, 0)
389
+ key = lax.dynamic_update_slice(cached_key.value, key, indices)
390
+ value = lax.dynamic_update_slice(cached_value.value, value, indices)
391
+ cached_key.value = key
392
+ cached_value.value = value
393
+ num_updated_cache_vectors = query.shape[1]
394
+ cache_index.value = cache_index.value + num_updated_cache_vectors
395
+ # causal mask for cached decoder self-attention: our single query position should only attend to those key positions that have already been generated and cached, not the remaining zero elements.
396
+ pad_mask = jnp.broadcast_to(
397
+ jnp.arange(max_length) < cur_index + num_updated_cache_vectors,
398
+ tuple(batch_dims) + (1, num_updated_cache_vectors, max_length),
399
+ )
400
+ attention_mask = combine_masks(pad_mask, attention_mask)
401
+ return key, value, attention_mask
402
+
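+ # Cache layout note (for orientation): `cached_key` / `cached_value` are allocated by `init_cache`
+ # with shape (batch, max_length, num_key_value_heads, head_dim); each decoding step writes its new
+ # slice at `cache_index` and advances the index by the query length.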
403
+ def __call__(
404
+ self,
405
+ hidden_states,
406
+ position_embeddings,
407
+ attention_mask,
408
+ position_ids,
409
+ deterministic: bool = True,
410
+ init_cache: bool = False,
411
+ output_attentions: bool = False,
412
+ ):
413
+ raw_query = self.q_proj(hidden_states)
414
+ raw_key = self.k_proj(hidden_states)
415
+ raw_value = self.v_proj(hidden_states)
416
+
417
+ if self.config.add_qk_norm and self.config.qk_norm_position == "pre_split":
418
+ raw_query = self.q_norm(raw_query)
419
+ raw_key = self.k_norm(raw_key)
420
+
421
+ query = self._split_heads(raw_query, self.num_heads)
422
+ key = self._split_heads(raw_key, self.num_key_value_heads)
423
+ value = self._split_heads(raw_value, self.num_key_value_heads)
424
+
425
+ if self.config.add_qk_norm and self.config.qk_norm_position == "post_split":
426
+ query = self.q_norm(query)
427
+ key = self.k_norm(key)
428
+
429
+ cos, sin = position_embeddings
430
+ query = apply_rotary_pos_emb(query, sin, cos)
431
+ key = apply_rotary_pos_emb(key, sin, cos)
432
+
433
+ query_length, key_length = query.shape[1], key.shape[1]
434
+
435
+ if self.has_variable("cache", "cached_key"):
436
+ mask_shift = self.variables["cache"]["cache_index"]
437
+ max_decoder_length = self.variables["cache"]["cached_key"].shape[1]
438
+ causal_mask = lax.dynamic_slice(
439
+ self.causal_mask,
440
+ (0, 0, mask_shift, 0),
441
+ (1, 1, query_length, max_decoder_length),
442
+ )
443
+ else:
444
+ causal_mask = self.causal_mask[:, :, :query_length, :key_length]
445
+
446
+ batch_size = hidden_states.shape[0]
447
+ causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:])
448
+
449
+ if attention_mask.ndim == 2:
450
+ attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2))
451
+ else:
452
+ assert attention_mask.ndim == 4
453
+
454
+ attention_mask = jnp.broadcast_to(attention_mask, causal_mask.shape)
455
+ attention_mask = combine_masks(attention_mask, causal_mask)
456
+
457
+ dropout_rng = None
458
+ if not deterministic and self.config.attention_dropout > 0.0:
459
+ dropout_rng = self.make_rng("dropout")
460
+
461
+ # During fast autoregressive decoding, we feed one position at a time,
462
+ # and cache the keys and values step by step.
463
+ if self.has_variable("cache", "cached_key") or init_cache:
464
+ key, value, attention_mask = self._concatenate_to_cache(key, value, query, attention_mask)
465
+
466
+ key = jnp.repeat(key, self.num_key_value_groups, axis=2)
467
+ value = jnp.repeat(value, self.num_key_value_groups, axis=2)
468
+
469
+ # transform boolean mask into float mask
470
+ attention_bias = lax.select(
471
+ attention_mask > 0,
472
+ jnp.full(attention_mask.shape, 0.0).astype(self.dtype),
473
+ jnp.full(attention_mask.shape, jnp.finfo(self.dtype).min).astype(self.dtype),
474
+ )
475
+
476
+ # usual dot product attention
477
+ attention_dtype = jnp.float32 if self.attention_softmax_in_fp32 else self.dtype
478
+ attn_weights = dot_product_attention_weights(
479
+ query,
480
+ key,
481
+ bias=attention_bias,
482
+ dropout_rng=dropout_rng,
483
+ dropout_rate=self.config.attention_dropout,
484
+ deterministic=deterministic,
485
+ dtype=attention_dtype,
486
+ )
487
+
488
+ if self.attention_softmax_in_fp32:
489
+ attn_weights = attn_weights.astype(self.dtype)
490
+
491
+ attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value)
492
+ attn_output = self._merge_heads(attn_output, self.num_heads)
493
+ attn_output = self.o_proj(attn_output)
494
+
495
+ outputs = (attn_output, (raw_query, raw_key, raw_value)) if output_attentions else (attn_output,)
496
+ return outputs
497
+
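+ # GQA note (illustrative numbers): with e.g. 32 query heads and 8 key/value heads,
+ # num_key_value_groups is 4 and the jnp.repeat calls in __call__ duplicate each K/V head 4 times,
+ # so the attention einsum sees matching head counts.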
498
+
499
+ class FlaxTPULlamaFlashAttention(FlaxTPULlamaAttention):
500
+ def setup(self):
501
+ super().setup()
502
+
503
+ if self.num_heads % len(jax.devices()) != 0:
504
+ # TODO: decide whether to warn, pad the attention heads, both, or neither when the heads do not divide evenly across devices
505
+ shard_across_model = False
506
+ else:
507
+ shard_across_model = True
508
+
509
+ model_partition = "model" if shard_across_model else None
510
+ data_partition = "data"
511
+
512
+ self.flash_attn_fn = shard_map(
513
+ partial(
514
+ pallas_flash_attention,
515
+ sm_scale=1 / math.sqrt(self.head_dim),
516
+ causal=True,
517
+ ),
518
+ mesh=getattr(self.config, "mesh"),
519
+ in_specs=(
520
+ # bnlh
521
+ P(data_partition, model_partition, None, None),
522
+ P(data_partition, model_partition, None, None),
523
+ P(data_partition, model_partition, None, None),
524
+ # P(),
525
+ ),
526
+ # bnlh
527
+ out_specs=P(data_partition, model_partition, None, None),
528
+ check_rep=False,
529
+ )
530
+
531
+ def __call__(
532
+ self,
533
+ hidden_states,
534
+ position_embeddings,
535
+ attention_mask,
536
+ position_ids,
537
+ deterministic: bool = True,
538
+ init_cache: bool = False,
539
+ output_attentions: bool = False,
540
+ ):
541
+ raw_query = self.q_proj(hidden_states)
542
+ raw_key = self.k_proj(hidden_states)
543
+ raw_value = self.v_proj(hidden_states)
544
+
545
+ query = self._split_heads(raw_query, self.num_heads)
546
+ key = self._split_heads(raw_key, self.num_key_value_heads)
547
+ value = self._split_heads(raw_value, self.num_key_value_heads)
548
+
549
+ cos, sin = position_embeddings
550
+ query = apply_rotary_pos_emb(query, sin, cos)
551
+ key = apply_rotary_pos_emb(key, sin, cos)
552
+
553
+ query_length, key_length = query.shape[1], key.shape[1]
554
+
555
+ if self.has_variable("cache", "cached_key"):
556
+ mask_shift = self.variables["cache"]["cache_index"]
557
+ max_decoder_length = self.variables["cache"]["cached_key"].shape[1]
558
+ causal_mask = lax.dynamic_slice(
559
+ self.causal_mask,
560
+ (0, 0, mask_shift, 0),
561
+ (1, 1, query_length, max_decoder_length),
562
+ )
563
+ else:
564
+ causal_mask = self.causal_mask[:, :, :query_length, :key_length]
565
+
566
+ batch_size = hidden_states.shape[0]
567
+ causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:])
568
+
569
+ if attention_mask.ndim == 2:
570
+ attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2))
571
+ else:
572
+ assert attention_mask.ndim == 4
573
+
574
+ attention_mask = jnp.broadcast_to(attention_mask, causal_mask.shape)
575
+ attention_mask = combine_masks(attention_mask, causal_mask)
576
+
577
+ # During fast autoregressive decoding, we feed one position at a time,
578
+ # and cache the keys and values step by step.
579
+ if self.has_variable("cache", "cached_key") or init_cache:
580
+ key, value, attention_mask = self._concatenate_to_cache(key, value, query, attention_mask)
581
+
582
+ key = jnp.repeat(key, self.num_key_value_groups, axis=2)
583
+ value = jnp.repeat(value, self.num_key_value_groups, axis=2)
584
+
585
+ # transform boolean mask into float mask (currently unused here; kept for reference and for future packing support)
586
+ # attention_bias = lax.select(
587
+ # attention_mask > 0,
588
+ # jnp.full(attention_mask.shape, 0.0).astype(self.dtype),
589
+ # jnp.full(attention_mask.shape, jnp.finfo(self.dtype).min).astype(
590
+ # self.dtype
591
+ # ),
592
+ # )
593
+
594
+ query = jnp.swapaxes(query, 1, 2)
595
+ key = jnp.swapaxes(key, 1, 2)
596
+ value = jnp.swapaxes(value, 1, 2)
597
+
598
+ # TODO: revisit attention_bias when implementing packing
599
+ # attention_bias = jnp.broadcast_to(
600
+ # attention_bias, (batch_size, self.num_heads, query_length, key_length)
601
+ # )
602
+
603
+ # flash attn needs fp32
604
+ query = query.astype(jnp.float32)
605
+ key = key.astype(jnp.float32)
606
+ value = value.astype(jnp.float32)
607
+
608
+ # flash attention kernel in place of the usual dot-product attention
609
+ attn_output = self.flash_attn_fn(
610
+ query,
611
+ key,
612
+ value,
613
+ ).astype(hidden_states.dtype)
614
+ attn_output = jnp.swapaxes(attn_output, 1, 2)
615
+ attn_output = self._merge_heads(attn_output, self.num_heads)
616
+ attn_output = self.o_proj(attn_output)
617
+
618
+ outputs = (attn_output, (raw_query, raw_key, raw_value)) if output_attentions else (attn_output,)
619
+ return outputs
620
+
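+ # Sharding sketch (assuming a ("data", "model") mesh stored on config.mesh): the shard_map in
+ # setup() runs the Pallas flash-attention kernel per device with the batch split along "data" and
+ # heads along "model" (heads are only sharded when the device count divides num_heads), operating
+ # in (batch, heads, seq_len, head_dim) layout.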
621
+
622
+ class FlaxTPULlamaMLP(nn.Module):
623
+ config: TPULlamaConfig
624
+ dtype: jnp.dtype = jnp.float32
625
+
626
+ def setup(self):
627
+ embed_dim = self.config.hidden_size
628
+ inner_dim = self.config.intermediate_size if self.config.intermediate_size is not None else 4 * embed_dim
629
+
630
+ kernel_init = jax.nn.initializers.normal(self.config.initializer_range)
631
+ self.act = ACT2FN[self.config.hidden_act]
632
+
633
+ self.gate_proj = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, kernel_init=kernel_init)
634
+ self.down_proj = nn.Dense(embed_dim, use_bias=False, dtype=self.dtype, kernel_init=kernel_init)
635
+ self.up_proj = nn.Dense(inner_dim, use_bias=False, dtype=self.dtype, kernel_init=kernel_init)
636
+
637
+ def __call__(self, hidden_states):
638
+ up_proj_states = self.up_proj(hidden_states)
639
+ gate_states = self.act(self.gate_proj(hidden_states))
640
+
641
+ hidden_states = self.down_proj(up_proj_states * gate_states)
642
+ return hidden_states
643
+
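+ # Reference formula: FlaxTPULlamaMLP(x) = down_proj(act(gate_proj(x)) * up_proj(x)),
+ # the gated (SwiGLU-style) feed-forward used by Llama-family models.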
644
+
645
+ LLAMA_ATTENTION_CLASSES = {
646
+ "eager": FlaxTPULlamaAttention,
647
+ "pallas_flash_attention": FlaxTPULlamaFlashAttention,
648
+ }
649
+
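+ # Selection note: FlaxTPULlamaDecoderLayer picks the attention class via
+ # config._attn_implementation, e.g. "pallas_flash_attention" for the shard_map/Pallas path above
+ # and "eager" for the plain dot-product attention.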
650
+
651
+ class FlaxTPULlamaDecoderLayer(nn.Module):
652
+ config: TPULlamaConfig
653
+ dtype: jnp.dtype = jnp.float32
654
+
655
+ def setup(self):
656
+ self.self_attn = LLAMA_ATTENTION_CLASSES[self.config._attn_implementation](self.config, dtype=self.dtype)
657
+ self.mlp = FlaxTPULlamaMLP(self.config, dtype=self.dtype)
658
+
659
+ if self.config.norm_position == "pre":
660
+ self.input_layernorm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype)
661
+ self.post_attention_layernorm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype)
662
+ elif self.config.norm_position == "post":
663
+ self.post_attention_layernorm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype)
664
+ self.post_feedforward_layernorm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype)
665
+
666
+ def __call__(
667
+ self,
668
+ hidden_states,
669
+ position_embeddings,
670
+ attention_mask=None,
671
+ position_ids=None,
672
+ deterministic: bool = True,
673
+ init_cache: bool = False,
674
+ output_attentions: bool = False,
675
+ ):
676
+ hidden_states = jax.lax.with_sharding_constraint(
677
+ hidden_states, jax.sharding.NamedSharding(getattr(self.config, "mesh"), P("data", None, "model"))
678
+ )
679
+ residual = hidden_states
680
+
681
+ if self.config.norm_position == "pre":
682
+ hidden_states = self.input_layernorm(hidden_states)
683
+
684
+ outputs = self.self_attn(
685
+ hidden_states,
686
+ position_embeddings,
687
+ attention_mask=attention_mask,
688
+ position_ids=position_ids,
689
+ deterministic=deterministic,
690
+ init_cache=init_cache,
691
+ output_attentions=output_attentions,
692
+ )
693
+ # residual connection
694
+ attn_output = outputs[0]
695
+
696
+ if self.config.norm_position == "post":
697
+ attn_output = self.post_attention_layernorm(attn_output)
698
+
699
+ hidden_states = residual + attn_output
700
+
701
+ residual = hidden_states
702
+
703
+ if self.config.norm_position == "pre":
704
+ hidden_states = self.post_attention_layernorm(hidden_states)
705
+
706
+ hidden_states = jax.lax.with_sharding_constraint(
707
+ hidden_states, jax.sharding.NamedSharding(getattr(self.config, "mesh"), P("data", None, "model"))
708
+ )
709
+
710
+ mlp_output = self.mlp(hidden_states)
711
+
712
+ if self.config.norm_position == "post":
713
+ mlp_output = self.post_feedforward_layernorm(mlp_output)
714
+
715
+ # residual connection
716
+ hidden_states = residual + mlp_output
717
+
718
+ return (hidden_states, attn_output, mlp_output)
719
+
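+ # Norm placement note: with norm_position == "pre" the layer normalizes the input of each
+ # sub-block (standard Llama pre-norm); with "post" it normalizes the attention and MLP outputs
+ # before the residual additions, matching the two branches above.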
720
+
721
+ # Copied from transformers.models.gpt_neo.modeling_flax_gpt_neo.FlaxGPTNeoPreTrainedModel with GPTNeo->Llama, GPT_NEO->LLAMA, transformer->model
722
+ class FlaxTPULlamaPreTrainedModel(FlaxPreTrainedModel):
723
+ """
724
+ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
725
+ models.
726
+ """
727
+
728
+ config_class = TPULlamaConfig
729
+ base_model_prefix = "model"
730
+ module_class: nn.Module = None
731
+
732
+ def __init__(
733
+ self,
734
+ config: TPULlamaConfig,
735
+ input_shape: Tuple = (1, 1),
736
+ seed: int = 0,
737
+ dtype: jnp.dtype = jnp.float32,
738
+ _do_init: bool = True,
739
+ gradient_checkpointing: bool = False,
740
+ **kwargs,
741
+ ):
742
+ module = self.module_class(
743
+ config=config,
744
+ dtype=dtype,
745
+ gradient_checkpointing=gradient_checkpointing,
746
+ **kwargs
747
+ )
748
+ super().__init__(
749
+ config,
750
+ module,
751
+ input_shape=input_shape,
752
+ seed=seed,
753
+ dtype=dtype,
754
+ _do_init=_do_init,
755
+ )
756
+
757
+ def enable_gradient_checkpointing(self):
758
+ self._module = self.module_class(
759
+ config=self.config,
760
+ dtype=self.dtype,
761
+ gradient_checkpointing=True,
762
+ )
763
+
764
+ @classmethod
765
+ def can_generate(cls) -> bool:
766
+ # disable generation, handled separately
767
+ # this is convenient since GenerationConfig.from_model_config(config) needs a pickleable config
768
+ return False
769
+
770
+ def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple, params: FrozenDict = None) -> FrozenDict:
771
+ # init input tensors
772
+ input_ids = jnp.zeros(input_shape, dtype="i4")
773
+ attention_mask = jnp.ones_like(input_ids)
774
+ position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape)
775
+ params_rng, dropout_rng = jax.random.split(rng)
776
+ rngs = {"params": params_rng, "dropout": dropout_rng}
777
+
778
+ random_params = self.module.init(rngs, input_ids, None, attention_mask, position_ids, return_dict=False)[
779
+ "params"
780
+ ]
781
+
782
+ if params is not None:
783
+ random_params = flatten_dict(unfreeze(random_params))
784
+ params = flatten_dict(unfreeze(params))
785
+ for missing_key in self._missing_keys:
786
+ params[missing_key] = random_params[missing_key]
787
+ self._missing_keys = set()
788
+ return freeze(unflatten_dict(params))
789
+ else:
790
+ return random_params
791
+
792
+ def init_cache(self, batch_size, max_length):
793
+ r"""
794
+ Args:
795
+ batch_size (`int`):
796
+ batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
797
+ max_length (`int`):
798
+ maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
799
+ cache.
800
+ """
801
+ # init input variables to retrieve cache
802
+ input_ids = jnp.ones((batch_size, max_length))
803
+ attention_mask = jnp.ones_like(input_ids)
804
+ position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape)
805
+
806
+ init_variables = self.module.init(
807
+ jax.random.PRNGKey(0),
808
+ input_ids,
809
+ None,
810
+ attention_mask,
811
+ position_ids,
812
+ return_dict=False,
813
+ init_cache=True,
814
+ )
815
+ return unfreeze(init_variables["cache"])
816
+
817
+ @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
818
+ def __call__(
819
+ self,
820
+ input_ids,
821
+ inputs_embeds=None,
822
+ attention_mask=None,
823
+ position_ids=None,
824
+ params: dict = None,
825
+ past_key_values: dict = None,
826
+ dropout_rng: jax.random.PRNGKey = None,
827
+ train: bool = False,
828
+ output_attentions: Optional[bool] = None,
829
+ output_hidden_states: Optional[bool] = None,
830
+ return_dict: Optional[bool] = None,
831
+ ):
832
+ if (input_ids is None) == (inputs_embeds is None):
833
+ raise ValueError("Need to provide either input_ids or inputs_embeds (and not both)")
834
+
835
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
836
+ output_hidden_states = (
837
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
838
+ )
839
+ return_dict = return_dict if return_dict is not None else self.config.return_dict
840
+
841
+ if input_ids is not None:
842
+ batch_size, sequence_length = input_ids.shape
843
+ else:
844
+ batch_size, sequence_length, _ = inputs_embeds.shape
845
+
846
+ if position_ids is None:
847
+ if past_key_values is not None:
848
+ raise ValueError("Make sure to provide `position_ids` when passing `past_key_values`.")
849
+
850
+ position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length))
851
+
852
+ if attention_mask is None:
853
+ attention_mask = jnp.ones((batch_size, sequence_length))
854
+
855
+ # Handle any PRNG if needed
856
+ rngs = {}
857
+ if dropout_rng is not None:
858
+ rngs["dropout"] = dropout_rng
859
+
860
+ inputs = {"params": params or self.params}
861
+
862
+ # If past_key_values are passed, the cache is already initialized and a private flag init_cache has to be passed down to ensure the cache is used. The cache also has to be marked as mutable so that it can be changed by the FlaxTPULlamaAttention module
863
+ if past_key_values:
864
+ inputs["cache"] = past_key_values
865
+ mutable = ["cache"]
866
+ else:
867
+ mutable = False
868
+
869
+ outputs = self.module.apply(
870
+ inputs,
871
+ jnp.array(input_ids, dtype="i4") if input_ids is not None else None,
872
+ inputs_embeds if inputs_embeds is not None else None,
873
+ jnp.array(attention_mask, dtype="i4"),
874
+ jnp.array(position_ids, dtype="i4"),
875
+ not train,
876
+ False,
877
+ output_attentions,
878
+ output_hidden_states,
879
+ return_dict,
880
+ rngs=rngs,
881
+ mutable=mutable,
882
+ )
883
+
884
+ # add updated cache to model output
885
+ if past_key_values is not None and return_dict:
886
+ outputs, past_key_values = outputs
887
+ outputs["past_key_values"] = unfreeze(past_key_values["cache"])
888
+ return outputs
889
+ elif past_key_values is not None and not return_dict:
890
+ outputs, past_key_values = outputs
891
+ outputs = outputs[:1] + (unfreeze(past_key_values["cache"]),) + outputs[1:]
892
+
893
+ return outputs
894
+
895
+
896
+ class FlaxTPULlamaLayerCollection(nn.Module):
897
+ config: TPULlamaConfig
898
+ dtype: jnp.dtype = jnp.float32
899
+ gradient_checkpointing: bool = False
900
+
901
+ def setup(self):
902
+ self.rotary_emb = FlaxTPULlamaRotaryEmbedding(self.config, dtype=self.dtype)
903
+
904
+ if self.gradient_checkpointing:
905
+ FlaxTPULlamaDecoderCheckpointLayer = remat(FlaxTPULlamaDecoderLayer, static_argnums=(4, 5, 6))
906
+ self.blocks = [
907
+ FlaxTPULlamaDecoderCheckpointLayer(self.config, dtype=self.dtype, name=str(i))
908
+ for i in range(self.config.num_hidden_layers)
909
+ ]
910
+ else:
911
+ self.blocks = [
912
+ FlaxTPULlamaDecoderLayer(self.config, dtype=self.dtype, name=str(i))
913
+ for i in range(self.config.num_hidden_layers)
914
+ ]
915
+
916
+ def __call__(
917
+ self,
918
+ hidden_states,
919
+ attention_mask=None,
920
+ position_ids=None,
921
+ deterministic: bool = True,
922
+ init_cache: bool = False,
923
+ output_attentions: bool = False,
924
+ output_hidden_states: bool = False,
925
+ return_dict: bool = False,
926
+ ):
927
+ all_attentions = () if output_attentions else None
928
+ all_hidden_states = [(), ()] if output_hidden_states else None
929
+
930
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
931
+
932
+ if output_hidden_states:
933
+ all_hidden_states[0] += (hidden_states,)
934
+ all_hidden_states[1] += (hidden_states,)
935
+
936
+ for block_idx, block in enumerate(self.blocks):
937
+ layer_outputs = block(
938
+ hidden_states,
939
+ position_embeddings,
940
+ attention_mask,
941
+ position_ids,
942
+ deterministic,
943
+ init_cache,
944
+ output_attentions,
945
+ )
946
+ hidden_states = layer_outputs[0]
947
+
948
+ if output_hidden_states:
949
+ if block_idx != len(self.blocks) - 1:
950
+ all_hidden_states[0] += (hidden_states,)
951
+ all_hidden_states[1] += layer_outputs[1:]
952
+
953
+ if output_attentions:
954
+ raise NotImplementedError("Attention outputs are not implemented for TPULLama (with projections).")
955
+
956
+ # this contains possible `None` values - `FlaxTPULlamaModule` will filter them out
957
+ outputs = (hidden_states, all_hidden_states, all_attentions)
958
+
959
+ return outputs
960
+
961
+
962
+ class FlaxTPULlamaModule(nn.Module):
963
+ config: TPULlamaConfig
964
+ dtype: jnp.dtype = jnp.float32
965
+ gradient_checkpointing: bool = False
966
+
967
+ def setup(self):
968
+ self.hidden_size = self.config.hidden_size
969
+ embedding_init = jax.nn.initializers.normal(stddev=self.config.initializer_range)
970
+ self.embed_tokens = nn.Embed(
971
+ self.config.vocab_size,
972
+ self.hidden_size,
973
+ embedding_init=embedding_init,
974
+ dtype=self.dtype,
975
+ )
976
+ self.layers = FlaxTPULlamaLayerCollection(self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing)
977
+ self.norm = FlaxTPULlamaRMSNorm(self.config, dtype=self.dtype)
978
+
979
+ def embed(
980
+ self,
981
+ input_ids,
982
+ ):
983
+ return self.embed_tokens(input_ids.astype("i4"))
984
+
985
+ def __call__(
986
+ self,
987
+ input_ids,
988
+ inputs_embeds=None,
989
+ attention_mask=None,
990
+ position_ids=None,
991
+ deterministic=True,
992
+ init_cache: bool = False,
993
+ output_attentions: bool = False,
994
+ output_hidden_states: bool = False,
995
+ return_dict: bool = True,
996
+ ):
997
+ if inputs_embeds is None:
998
+ inputs_embeds = self.embed(input_ids)
999
+
1000
+ outputs = self.layers(
1001
+ inputs_embeds,
1002
+ position_ids=position_ids,
1003
+ attention_mask=attention_mask,
1004
+ deterministic=deterministic,
1005
+ init_cache=init_cache,
1006
+ output_attentions=output_attentions,
1007
+ output_hidden_states=output_hidden_states,
1008
+ return_dict=return_dict,
1009
+ )
1010
+
1011
+ hidden_states = outputs[0]
1012
+
1013
+ if not self.config.skip_out_norm:
1014
+ hidden_states = self.norm(hidden_states)
1015
+
1016
+ if output_hidden_states:
1017
+ all_hidden_states = outputs[1]
1018
+ all_hidden_states[0] += (hidden_states,)
1019
+ outputs = (hidden_states, all_hidden_states) + outputs[2:]
1020
+ else:
1021
+ outputs = (hidden_states,) + outputs[1:]
1022
+
1023
+ if not return_dict:
1024
+ return tuple(v for v in outputs if v is not None)
1025
+
1026
+ return FlaxBaseModelOutput(
1027
+ last_hidden_state=hidden_states,
1028
+ hidden_states=outputs[1],
1029
+ attentions=outputs[-1],
1030
+ )
1031
+
1032
+
1033
+ @add_start_docstrings(
1034
+ "The bare Llama Model transformer outputting raw hidden-states without any specific head on top.",
1035
+ LLAMA_START_DOCSTRING,
1036
+ )
1037
+ class FlaxTPULlamaModel(FlaxTPULlamaPreTrainedModel):
1038
+ module_class = FlaxTPULlamaModule
1039
+
1040
+
1041
+ append_call_sample_docstring(
1042
+ FlaxTPULlamaModel,
1043
+ _CHECKPOINT_FOR_DOC,
1044
+ FlaxBaseModelOutput,
1045
+ _CONFIG_FOR_DOC,
1046
+ real_checkpoint=_REAL_CHECKPOINT_FOR_DOC,
1047
+ )
1048
+
1049
+
1050
+ class FlaxTPULlamaForCausalLMModule(nn.Module):
1051
+ config: TPULlamaConfig
1052
+ dtype: jnp.dtype = jnp.float32
1053
+ gradient_checkpointing: bool = False
1054
+
1055
+ def setup(self):
1056
+ self.model = FlaxTPULlamaModule(self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing)
1057
+ self.lm_head = nn.Dense(
1058
+ self.config.vocab_size,
1059
+ use_bias=False,
1060
+ dtype=self.dtype,
1061
+ kernel_init=jax.nn.initializers.normal(stddev=self.config.initializer_range),
1062
+ )
1063
+
1064
+ def embed(self, input_ids):
1065
+ return self.model.embed(input_ids)
1066
+
1067
+ def __call__(
1068
+ self,
1069
+ input_ids,
1070
+ inputs_embeds=None,
1071
+ attention_mask=None,
1072
+ position_ids=None,
1073
+ deterministic: bool = True,
1074
+ init_cache: bool = False,
1075
+ output_attentions: bool = False,
1076
+ output_hidden_states: bool = False,
1077
+ return_dict: bool = True,
1078
+ ):
1079
+ outputs = self.model(
1080
+ input_ids,
1081
+ inputs_embeds=inputs_embeds,
1082
+ position_ids=position_ids,
1083
+ attention_mask=attention_mask,
1084
+ deterministic=deterministic,
1085
+ init_cache=init_cache,
1086
+ output_attentions=output_attentions,
1087
+ output_hidden_states=output_hidden_states,
1088
+ return_dict=return_dict,
1089
+ )
1090
+
1091
+ hidden_states = outputs[0]
1092
+ if self.config.tie_word_embeddings:
1093
+ shared_kernel = self.model.variables["params"]["embed_tokens"]["embedding"].T
1094
+ lm_logits = self.lm_head.apply({"params": {"kernel": shared_kernel}}, hidden_states)
1095
+ else:
1096
+ lm_logits = self.lm_head(hidden_states)
1097
+
1098
+ lm_logits = jax.lax.with_sharding_constraint(
1099
+ lm_logits,
1100
+ jax.sharding.NamedSharding(getattr(self.config, "mesh"), P("data", None, "model")),
1101
+ )
1102
+
1103
+ if not return_dict:
1104
+ return (lm_logits,) + outputs[1:]
1105
+
1106
+ return FlaxCausalLMOutput(
1107
+ logits=lm_logits,
1108
+ hidden_states=outputs.hidden_states,
1109
+ attentions=outputs.attentions,
1110
+ )
1111
+
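+ # Weight-tying note: when config.tie_word_embeddings is set, lm_head does not use its own kernel
+ # at call time; the transposed token-embedding matrix is passed to lm_head.apply instead, similar
+ # to other Flax causal-LM implementations in transformers.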
1112
+
1113
+ @add_start_docstrings(
1114
+ """
1115
+ The Llama Model transformer with a language modeling head (linear layer) on top.
1116
+ """,
1117
+ LLAMA_START_DOCSTRING,
1118
+ )
1119
+ # Copied from transformers.models.gptj.modeling_flax_gptj.FlaxGPTJForCausalLM with GPTJ->Llama
1120
+ class FlaxTPULlamaForCausalLM(FlaxTPULlamaPreTrainedModel):
1121
+ module_class = FlaxTPULlamaForCausalLMModule
1122
+
1123
+ def prepare_inputs_for_generation(self, input_ids, max_length, attention_mask: Optional[jax.Array] = None):
1124
+ # initializing the cache
1125
+ batch_size, seq_length = input_ids.shape
1126
+
1127
+ past_key_values = self.init_cache(batch_size, max_length)
1128
+ # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length.
1129
+ # But since Llama uses a causal mask, those positions are masked anyway.
1130
+ # Thus we can create a single static attention_mask here, which is more efficient for compilation
1131
+ extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4")
1132
+ if attention_mask is not None:
1133
+ position_ids = attention_mask.cumsum(axis=-1) - 1
1134
+ extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, attention_mask, (0, 0))
1135
+ else:
1136
+ position_ids = jnp.broadcast_to(jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length))
1137
+
1138
+ return {
1139
+ "past_key_values": past_key_values,
1140
+ "attention_mask": extended_attention_mask,
1141
+ "position_ids": position_ids,
1142
+ }
1143
+
1144
+ def update_inputs_for_generation(self, model_outputs, model_kwargs):
1145
+ model_kwargs["past_key_values"] = model_outputs.past_key_values
1146
+ model_kwargs["position_ids"] = model_kwargs["position_ids"][:, -1:] + 1
1147
+ return model_kwargs
1148
+
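+ # Decoding sketch (illustrative only; can_generate() returns False, so generation is driven
+ # externally). A minimal greedy loop, assuming `model`, `params` and a tokenized `input_ids`
+ # batch already exist:
+ #
+ #   kwargs = model.prepare_inputs_for_generation(input_ids, max_length=64)
+ #   outputs = model(input_ids, params=params, **kwargs)          # prefill the prompt
+ #   for _ in range(64 - input_ids.shape[-1]):
+ #       next_token = outputs.logits[:, -1].argmax(-1)[:, None]
+ #       input_ids = jnp.concatenate([input_ids, next_token], axis=-1)
+ #       kwargs = model.update_inputs_for_generation(outputs, kwargs)
+ #       outputs = model(next_token, params=params, **kwargs)     # decode one token per step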
1149
+
1150
+ append_call_sample_docstring(
1151
+ FlaxTPULlamaForCausalLM,
1152
+ _CHECKPOINT_FOR_DOC,
1153
+ FlaxCausalLMOutput,
1154
+ _CONFIG_FOR_DOC,
1155
+ real_checkpoint=_REAL_CHECKPOINT_FOR_DOC,
1156
+ )