Llamas: update urls

Akemi Izuko 2023-12-31 18:43:19 -07:00
parent cc01691a33
commit c26ad07c69
Signed by: akemi
GPG key ID: 8DE0764E1809E9FC


@@ -102,7 +102,7 @@ VRAM, which meant many home computers could now run 4-bit quantized 7B models!
 Previously, most enthusiasts would have to rent cloud GPUs to run their "local"
 llamas. Quantizing into GGUF is a very expensive process, so
 [TheBloke](https://huggingface.co/TheBloke) on Huggingface emerges the defacto
-source for pre-quantized llamas.
+source for [pre-quantized llamas](../quantization).
 Based on LLaMa, the open source
 [llama.cpp](https://github.com/ggerganov/llama.cpp) becomes the leader of local