JongYeop committed
Commit 73131f5 · verified · 1 Parent(s): 8f2971b

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,148 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ library_name: transformers
+ tags:
+ - quantization
+ - fp4
+ - nvfp4
+ - compressed-tensors
+ - mistral
+ - text-generation
+ - 4bit
+ base_model: mistralai/Mistral-7B-Instruct-v0.2
+ pipeline_tag: text-generation
+ model-index:
+ - name: Mistral-7B-Instruct-v0.2-FP4-W4A4
+   results: []
+ quantization:
+   quant_method: compressed-tensors
+   bits: 4
+   type: float
+   format: nvfp4-pack-quantized
+   strategy: tensor_group
+   group_size: 16
+   symmetric: true
+ ---
+
+ # Mistral-7B-Instruct-v0.2-FP4-W4A4
+
+ ## Model Description
+
+ This is an NVFP4 (NVIDIA FP4) quantized version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) using the compressed-tensors quantization method.
+
+ - **Base Model**: mistralai/Mistral-7B-Instruct-v0.2
+ - **Quantization Method**: compressed-tensors
+ - **Quantization Type**: NVFP4 W4A4 (4-bit weights and 4-bit activations)
+ - **Model Size**: ~4.2 GB (compared to ~14 GB for BF16)
+ - **Compression Ratio**: ~3.3x
+
+ ## Quantization Configuration
+
+ This model uses **NVFP4 (NVIDIA FP4) quantization** with grouped quantization for both weights and activations (a minimal numerical sketch follows the lists below):
+
+ ### Weights
+ - **Precision**: NVFP4 (4-bit floating point)
+ - **Strategy**: Tensor-group (grouped quantization)
+ - **Group Size**: 16
+ - **Symmetric**: Yes
+ - **Dynamic**: No (static quantization)
+ - **Observer**: MinMax
+
+ ### Activations
+ - **Precision**: NVFP4 (4-bit floating point)
+ - **Strategy**: Tensor-group (grouped quantization)
+ - **Group Size**: 16
+ - **Symmetric**: Yes
+ - **Dynamic**: Local (per-group scales are computed at runtime; the per-tensor global scale is calibrated statically)
+ - **Observer**: MinMax
+
+ ### Other Details
+ - **Format**: nvfp4-pack-quantized (packed 4-bit format)
+ - **KV Cache**: Not quantized
+ - **Ignored Layers**: lm_head
+ - **Target Layers**: Linear layers
+ - **Quantization Version**: 0.11.0
+
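+ To make the scheme concrete, here is a minimal NumPy sketch of FP4 (E2M1) group quantization with group size 16. It is illustrative only: the real nvfp4-pack-quantized format additionally packs two 4-bit codes per byte and stores each group's scale in FP8 alongside a per-tensor global scale, all handled internally by compressed-tensors.
+
+ ```python
+ # Minimal sketch: fake-quantize one group of 16 values to FP4 (E2M1).
+ # Packing and FP8 scale storage are omitted (handled by compressed-tensors).
+ import numpy as np
+
+ E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # representable FP4 magnitudes
+
+ def fake_quant_group(x: np.ndarray) -> np.ndarray:
+     scale = np.abs(x).max() / E2M1[-1]  # map the group max onto the largest FP4 value
+     if scale == 0.0:
+         return x.copy()
+     mags = np.abs(x) / scale
+     idx = np.abs(mags[:, None] - E2M1[None, :]).argmin(axis=1)  # nearest FP4 magnitude
+     return np.sign(x) * E2M1[idx] * scale
+
+ x = np.random.randn(16).astype(np.float32)
+ print(np.abs(x - fake_quant_group(x)).max())  # per-group quantization error
+ ```
+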
+ ## Usage
+
+ Loading compressed-tensors checkpoints with Transformers requires the `compressed-tensors` package (`pip install compressed-tensors`).
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "JongYeop/Mistral-7B-Instruct-v0.2-FP4-W4A4"
+
+ # Load tokenizer
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # Load quantized model
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map="auto",
+     torch_dtype="auto",
+ )
+
+ # Build the chat prompt
+ messages = [
+     {"role": "user", "content": "What is machine learning?"}
+ ]
+
+ input_ids = tokenizer.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     return_tensors="pt",
+ ).to(model.device)
+
+ outputs = model.generate(
+     input_ids,
+     max_new_tokens=256,
+     do_sample=True,
+     temperature=0.7,
+     top_p=0.9,
+ )
+
+ # Decode only the newly generated tokens
+ response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
+ print(response)
+ ```
+
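+ For high-throughput serving, recent vLLM releases can also load compressed-tensors checkpoints; treat the following as a sketch, since NVFP4 execution (native kernels vs. fallback) depends on your GPU and vLLM version:
+
+ ```python
+ # Sketch: serving with vLLM (assumes a vLLM build that supports
+ # compressed-tensors NVFP4 checkpoints and a compatible GPU)
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(model="JongYeop/Mistral-7B-Instruct-v0.2-FP4-W4A4")
+ params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)
+ outputs = llm.generate(["[INST] What is machine learning? [/INST]"], params)
+ print(outputs[0].outputs[0].text)
+ ```
+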
+ ## Model Architecture
+
+ - **Architecture**: MistralForCausalLM
+ - **Hidden Size**: 4096
+ - **Intermediate Size**: 14336
+ - **Number of Layers**: 32
+ - **Number of Attention Heads**: 32
+ - **Number of KV Heads**: 8
+ - **Vocabulary Size**: 32000
+ - **Max Position Embeddings**: 32768
+
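+ As a sanity check, these dimensions roughly predict the checkpoint size: 4-bit packed weights plus one 1-byte scale per 16-element group for the quantized Linear layers, with embeddings and lm_head kept in BF16 (parameter counts below are approximate):
+
+ ```python
+ # Back-of-the-envelope size estimate (approximate, for intuition only)
+ hidden, inter, layers, vocab = 4096, 14336, 32, 32000
+ heads, kv_heads = 32, 8
+ head_dim = hidden // heads
+
+ attn = 2 * hidden * hidden + 2 * hidden * (kv_heads * head_dim)  # q, o + k, v projections
+ mlp = 3 * hidden * inter                                         # gate, up, down
+ linear_params = layers * (attn + mlp)                            # quantized to FP4
+ embed_params = 2 * vocab * hidden                                 # embed_tokens + lm_head (BF16)
+
+ size = linear_params * 0.5      # 4-bit weights: 0.5 byte per parameter
+ size += linear_params / 16      # one 1-byte scale per group of 16
+ size += embed_params * 2        # BF16: 2 bytes per parameter
+ print(f"~{size / 1e9:.2f} GB")  # ~4.45 GB, close to the safetensors file size
+ ```
+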
+ ## Intended Use
+
+ This quantized model is intended for efficient inference with a significantly reduced memory footprint while maintaining reasonable output quality. It is suitable for:
+
+ - Resource-constrained environments
+ - Edge deployment
+ - Applications requiring minimal memory usage
+ - High-throughput scenarios
+ - GPU inference with FP4 support
+
+ ## Limitations
+
+ - FP4 quantization typically incurs more accuracy loss than FP8 or INT8 quantization
+ - Best performance requires hardware with native FP4 support (NVIDIA Blackwell-generation GPUs); earlier architectures such as Hopper (H100) and Ada Lovelace provide FP8 but not FP4 tensor cores, so FP4 execution there relies on emulation or dequantized kernels
+ - Dynamic activation quantization may introduce additional runtime overhead
+ - Grouped quantization requires an inference engine that understands the compressed-tensors format
+
+ ## Performance Notes
+
+ - **Memory Usage**: ~3.3x reduction compared to BF16
+ - **Speed**: Requires FP4 tensor-core hardware for the full speed benefit; on other GPUs the gain is mainly in memory
+ - **Accuracy**: May degrade somewhat compared to higher-precision formats
+
+ ## Citation
+
+ If you use this model, please cite the original Mistral paper and the compressed-tensors library.
+
+ ## License
+
+ Same as the base model: [Apache 2.0](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
chat_template.jinja ADDED
@@ -0,0 +1,24 @@
+ {%- if messages[0]['role'] == 'system' %}
+     {%- set system_message = messages[0]['content'] %}
+     {%- set loop_messages = messages[1:] %}
+ {%- else %}
+     {%- set loop_messages = messages %}
+ {%- endif %}
+
+ {{- bos_token }}
+ {%- for message in loop_messages %}
+     {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
+         {{- raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}
+     {%- endif %}
+     {%- if message['role'] == 'user' %}
+         {%- if loop.first and system_message is defined %}
+             {{- ' [INST] ' + system_message + '\n\n' + message['content'] + ' [/INST]' }}
+         {%- else %}
+             {{- ' [INST] ' + message['content'] + ' [/INST]' }}
+         {%- endif %}
+     {%- elif message['role'] == 'assistant' %}
+         {{- ' ' + message['content'] + eos_token }}
+     {%- else %}
+         {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}
+     {%- endif %}
+ {%- endfor %}
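The template above merges an optional system message into the first user turn and enforces strict user/assistant alternation. A quick way to see what it renders (a sketch; the exact whitespace is determined by the template):

```python
# Sketch: render the chat template without tokenizing
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("JongYeop/Mistral-7B-Instruct-v0.2-FP4-W4A4")
messages = [
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Hi"},
]
print(tok.apply_chat_template(messages, tokenize=False))
# Expected shape: "<s> [INST] You are terse.\n\nHi [/INST]"
```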
config.json ADDED
@@ -0,0 +1,72 @@
+ {
+   "architectures": [
+     "MistralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "head_dim": null,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mistral",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "quantization_config": {
+     "config_groups": {
+       "group_0": {
+         "format": "nvfp4-pack-quantized",
+         "input_activations": {
+           "actorder": null,
+           "block_structure": null,
+           "dynamic": "local",
+           "group_size": 16,
+           "num_bits": 4,
+           "observer": "minmax",
+           "observer_kwargs": {},
+           "strategy": "tensor_group",
+           "symmetric": true,
+           "type": "float"
+         },
+         "output_activations": null,
+         "targets": [
+           "Linear"
+         ],
+         "weights": {
+           "actorder": null,
+           "block_structure": null,
+           "dynamic": false,
+           "group_size": 16,
+           "num_bits": 4,
+           "observer": "minmax",
+           "observer_kwargs": {},
+           "strategy": "tensor_group",
+           "symmetric": true,
+           "type": "float"
+         }
+       }
+     },
+     "format": "nvfp4-pack-quantized",
+     "global_compression_ratio": null,
+     "ignore": [
+       "lm_head"
+     ],
+     "kv_cache_scheme": null,
+     "quant_method": "compressed-tensors",
+     "quantization_status": "compressed",
+     "sparsity_config": {},
+     "transform_config": {},
+     "version": "0.11.0"
+   },
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.55.2",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
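The `quantization_config` block above is what routes loading through compressed-tensors. It can be inspected without downloading the weights (a small sketch):

```python
# Sketch: inspect the quantization scheme from the config alone
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("JongYeop/Mistral-7B-Instruct-v0.2-FP4-W4A4")
group = cfg.quantization_config["config_groups"]["group_0"]
print(group["weights"]["num_bits"], group["weights"]["group_size"])  # 4 16
print(group["input_activations"]["dynamic"])                         # local
```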
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.55.2"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:698bb20f2aec96050ef7a9d780ee64cf767dfc532034ecf69d3f12bee534b983
+ size 4450800712
recipe.yaml ADDED
@@ -0,0 +1,6 @@
+ quant_stage:
+   quant_modifiers:
+     QuantizationModifier:
+       targets: [Linear]
+       ignore: [lm_head]
+       scheme: NVFP4
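This recipe follows llmcompressor's schema. Below is a sketch of how such a recipe is typically applied; the import path and the calibration dataset are assumptions based on recent llmcompressor releases, not taken from this repo:

```python
# Sketch: applying a recipe like this with llmcompressor
# (import path and dataset are assumptions; check your llmcompressor version)
from llmcompressor import oneshot

oneshot(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    recipe="recipe.yaml",
    dataset="open_platypus",          # hypothetical calibration set
    num_calibration_samples=512,
    output_dir="Mistral-7B-Instruct-v0.2-FP4-W4A4",
)
```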
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "add_prefix_space": null,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "legacy": false,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }