This model was converted to GGUF format from [`DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B`](https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B) for more details on the model.
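
As a sketch, one way to fetch a quantized file from a GGUF repo like this one is the `huggingface_hub` CLI (`pip install -U huggingface_hub`); the repo id and filename below are placeholders, not the actual artifact names:

```shell
# Download a GGUF file from the Hub into the current directory.
# <your-username>/... and the .gguf filename are hypothetical examples.
huggingface-cli download <your-username>/OneLLM-Doey-V1-Llama-3.2-3B-GGUF \
  onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf --local-dir .
```

Check the repo's "Files and versions" tab for the exact quantization filenames available.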

# **How to Use DoeyLLM / OneLLM-Doey-V1-Llama-3.2-3B-Instruct**

This guide explains how to use the **DoeyLLM** model on both app (iOS) and PC platforms.

---

## **App (iOS): Use with OneLLM**

OneLLM brings versatile large language models (LLMs) to your device: Llama, Gemma, Qwen, Mistral, and more. Enjoy private, offline GPT and AI tools tailored to your needs.

With OneLLM, experience the capabilities of leading-edge language models directly on your device, all without an internet connection. Get fast, reliable, and intelligent responses, while keeping your data secure with local processing.

### **Quick Start for iOS**

Follow these steps to integrate the **DoeyLLM** model using the OneLLM app:

1. **Download OneLLM**
   Get the app from the [App Store](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910) and install it on your iOS device.

2. **Load the DoeyLLM Model**
   Use the OneLLM interface to load the DoeyLLM model directly into the app:
   - Navigate to the **Model Library**.
   - Search for `DoeyLLM`.
   - Select the model and tap **Download** to store it locally on your device.

3. **Start Conversing**
   Once the model is loaded, you can begin interacting with it through the app's chat interface. For example:
   - Tap the **Chat** tab.
   - Type your question or prompt, such as:
     > "Explain the significance of AI in education."
   - Receive real-time, intelligent responses generated locally.

### **Key Features of OneLLM**

- **Versatile Models**: Supports various LLMs, including Llama, Gemma, and Qwen.
- **Private & Secure**: All processing occurs locally on your device, ensuring data privacy.
- **Offline Capability**: Use the app without requiring an internet connection.
- **Fast Performance**: Optimized for mobile devices, delivering low-latency responses.

For more details or support, visit the [OneLLM App Store page](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910).

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux).
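
A minimal sketch of the install-and-run flow, assuming llama.cpp's Homebrew formula and its standard `--hf-repo`/`--hf-file` flags; the GGUF repo id and quant filename are placeholders, not this repo's actual names:

```shell
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# One-shot CLI inference, pulling the model straight from the Hub.
# <your-username>/... and the .gguf filename are hypothetical examples.
llama-cli --hf-repo <your-username>/OneLLM-Doey-V1-Llama-3.2-3B-GGUF \
  --hf-file onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf \
  -p "Explain the significance of AI in education."

# Or serve an OpenAI-compatible HTTP endpoint instead:
llama-server --hf-repo <your-username>/OneLLM-Doey-V1-Llama-3.2-3B-GGUF \
  --hf-file onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf -c 2048
```

Substitute the real repo id and quantization filename from this repo's file listing before running.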