---
license: apache-2.0
pipeline_tag: text-generation
library_name: node-llama-cpp
tags:
- node-llama-cpp
- llama.cpp
- conversational
base_model: ByteDance-Seed/Seed-OSS-36B-Instruct
quantized_by: giladgd
---

# Seed-OSS-36B-Instruct-GGUF
Static quants of [`ByteDance-Seed/Seed-OSS-36B-Instruct`](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct).

## Quants
| Link | [URI](https://node-llama-cpp.withcat.ai/cli/pull) | Quant | Size |
|:-----|:--------------------------------------------------|:------|-----:|
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q2_K.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q2_K` | Q2_K | 13.6GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q3_K_S.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q3_K_S` | Q3_K_S | 15.9GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q3_K_M.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q3_K_M` | Q3_K_M | 17.6GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q3_K_L.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q3_K_L` | Q3_K_L | 19.1GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q4_0.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_0` | Q4_0 | 20.6GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q4_K_S.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_S` | Q4_K_S | 20.7GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q4_K_M.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M` | Q4_K_M | 21.8GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q5_0.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q5_0` | Q5_0 | 25.0GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q5_K_S.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q5_K_S` | Q5_K_S | 25.0GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q5_K_M.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q5_K_M` | Q5_K_M | 25.6GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q6_K.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q6_K` | Q6_K | 29.7GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.Q8_0.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q8_0` | Q8_0 | 38.4GB |
| [GGUF](https://huggingface.co/giladgd/Seed-OSS-36B-Instruct-GGUF/resolve/main/Seed-OSS-36B-Instruct.F16.gguf) | `hf:giladgd/Seed-OSS-36B-Instruct-GGUF:F16` | F16 | 72.3GB |

> [!TIP]
> Download a quant using `node-llama-cpp` ([more info](https://node-llama-cpp.withcat.ai/cli/pull)):
> ```bash
> npx -y node-llama-cpp pull hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M
> ```

# Usage
## Use with [`node-llama-cpp`](https://node-llama-cpp.withcat.ai) (recommended)
Ensure you have [Node.js](https://nodejs.org) installed; on macOS and Linux you can install it via Homebrew:
```bash
brew install nodejs
```

### CLI
Chat with the model:
```bash
npx -y node-llama-cpp chat hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M
```

### Code
Use it in your project:
```bash
npm install node-llama-cpp
```
```typescript
import {getLlama, resolveModelFile, LlamaChatSession} from "node-llama-cpp";

const modelUri = "hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M";

// resolveModelFile() downloads the model on first use and returns its local path
const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: await resolveModelFile(modelUri)
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);
```
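For long responses you may want to print text as it is generated instead of waiting for the full answer. The sketch below reuses the same setup as above and assumes the `onTextChunk` option of `session.prompt()` from node-llama-cpp v3; check the docs of your installed version for the exact streaming API:
```typescript
import {getLlama, resolveModelFile, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: await resolveModelFile("hf:giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M")
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// onTextChunk is called with each piece of generated text as it arrives,
// so the response is streamed to stdout while the model is still generating
const answer = await session.prompt("Write a haiku about llamas", {
    onTextChunk(chunk) {
        process.stdout.write(chunk);
    }
});
process.stdout.write("\n");
```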
> [!TIP]
> Read the [getting started guide](https://node-llama-cpp.withcat.ai/guide/) to quickly scaffold a new `node-llama-cpp` project.

## Use with [llama.cpp](https://github.com/ggml-org/llama.cpp)
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```

### CLI
```bash
llama-cli -hf giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M -p "The meaning of life and the universe is"
```

### Server
Start an OpenAI-compatible HTTP server with a 2048-token context:
```bash
llama-server -hf giladgd/Seed-OSS-36B-Instruct-GGUF:Q4_K_M -c 2048
```
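Once the server is running, you can query it over its OpenAI-compatible API. A minimal sketch using Node.js's built-in `fetch` (Node 18+), assuming llama-server's default host and port (`http://localhost:8080`; adjust if you pass `--host`/`--port`):
```typescript
// Send a chat completion request to the local llama-server instance
const response = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({
        messages: [
            {role: "user", content: "Hi there, how are you?"}
        ],
        max_tokens: 256
    })
});

const completion = await response.json();
console.log(completion.choices[0].message.content);
```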