Llama: add local llama links
This commit is contained in:
parent dcd2fffd42
commit 38675c3871

src/content/llama/localllama_links.md (new file, 73 lines)

@@ -0,0 +1,73 @@
---
title: 'Local Llama Quickstart'
description: 'A collection of guides to get started with local llamas'
updateDate: 'Dec 31 2023'
heroImage: '/images/llama/llama-cool.avif'
---

<p style="font-size: max(2vh, 10px); margin-top: 0; text-align: right">
Midjourney-generated llama from <a href="https://pub.towardsai.net/meet-vicuna-the-latest-metas-llama-model-that-matches-chatgpt-performance-e23b2fc67e6b">Medium</a>
</p>

"Llama" refers to a Large Language Model (LLM). "Local llama" refers to a
|
||||
locally-hosted (typically open source) llama, in contrast to commercially hosted
|
||||
ones.
|
||||
|
||||
# Local LLaMa Quickstart

I've recently become aware of the open source LLM ("local llama") movement.
Unlike traditional open source software, this field moves at an unprecedented
speed: several breakthroughs come out every week, and information more than a
month old is often functionally obsolete.

This collection was gathered in late December 2023, with the intent of helping
anyone looking to get caught up with the field.

## Models

#### Model Sources
- [Pre-quantized models](https://huggingface.co/TheBloke) (download sketch below)

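For a concrete starting point, here is a minimal sketch of pulling one of TheBloke's pre-quantized GGUF files with the `huggingface_hub` Python client. The repo and filename below are illustrative only; check the repo's file list for the quantization variants actually offered.

```python
# Minimal sketch: download a pre-quantized GGUF model from Hugging Face.
# The repo_id and filename are examples; pick the quant level (Q4_K_M,
# Q5_K_M, ...) that fits your VRAM from the repo's file list.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
)
print(model_path)  # cached local path, ready to hand to llama.cpp or a web UI
```
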
## Practical use

#### Interfaces
- [List of web UIs](https://www.reddit.com/r/LocalLLaMA/comments/1847qt6/llm_webui_recommendations)
- [SillyTavern web UI installation tutorial](https://www.reddit.com/r/CharacterAi_NSFW/comments/13yy8m9/indepth_explanation_on_how_to_install_silly/)
- [Jailbreaking GPT when prompting for characters](https://rentry.org/GPTJailbreakPrompting)
- [Guidelines for prompting for characters](https://rentry.org/NG_CharCard)
- [ChatML from OpenAI is quickly becoming the standard for prompting](https://news.ycombinator.com/item?id=34988748) (format sketched below)

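For reference, this is roughly what a ChatML prompt looks like when assembled by hand. The `chatml_prompt` helper below is an illustrative sketch, not part of any library; servers that expose an OpenAI-style chat endpoint usually apply this template for you.

```python
# Sketch of a hand-assembled ChatML prompt: each turn is delimited by the
# <|im_start|> / <|im_end|> special tokens plus a role (system/user/assistant).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # the model's reply is generated from here
    )

print(chatml_prompt("You are a concise assistant.", "Summarize ChatML in one sentence."))
```
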
#### Training
- [Teaching llama a new language through tuning](https://www.reddit.com/r/LocalLLaMA/comments/18oc1yc/i_tried_to_teach_mistral_7b_a_new_language)
- [Mergekit - model merging and MoE assembly toolkit](https://github.com/cg123/mergekit)
- [Axolotl - Fine tuning framework](https://github.com/OpenAccess-AI-Collective/axolotl)
- [Unsloth - Fine tuning accelerator](https://github.com/unslothai/unsloth)
- llama.cpp vs. Transformers vs. LangChain vs. PyTorch: how the main libraries, frameworks, and tooling standards relate

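Axolotl and Unsloth wrap most of this up for you, but underneath, the common path today is LoRA on top of Hugging Face Transformers and PEFT. A minimal, framework-agnostic sketch (the base model and hyperparameters are illustrative, not recommendations):

```python
# Minimal LoRA fine-tuning setup sketch using transformers + peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # example base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters train; the base stays frozen
# From here, training proceeds with a regular transformers Trainer / TRL SFTTrainer loop.
```
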
#### Tutorials
- [Karpathy builds and trains a llama](https://www.youtube.com/watch?v=kCc8FmEb1nY)
- [Build a llama DIY by freeCodeCamp](https://www.youtube.com/watch?v=UU1WVnMk4E8)
- [Understanding different quantization methods](https://www.youtube.com/watch?v=mNE_d-C82lI)

#### Servers
- [Ollama.ai](https://ollama.ai/)
- [llama.cpp](https://github.com/ggerganov/llama.cpp)

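Both expose a simple local HTTP API once a model is available. A small sketch against Ollama (assuming `ollama serve` is running on its default port and a model named `mistral` has already been pulled with `ollama pull mistral`):

```python
# Sketch: query a locally running Ollama server over its HTTP API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "prompt": "Why host an LLM locally?",
        "stream": False,  # return a single JSON object instead of a token stream
    },
)
print(resp.json()["response"])
```
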
## Hardware

#### GPU stuff
- [Choosing a GPU for deep learning](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/)

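When sizing a card, a useful rule of thumb is that the weights alone take roughly the parameter count times the bits per weight, divided by 8, in bytes; the KV cache and activations add more on top. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope VRAM estimate for the model weights only.
# Real usage is higher (KV cache, activations, framework overhead).
def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{weight_vram_gb(7, bits):.1f} GB")
# A 7B model at 4-bit works out to roughly 3-4 GB of weights, which is why
# 7B models fit on modest and free-tier GPUs.
```
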
#### Cloud renting
- [Kaggle](https://kaggle.com) - 30h/week free, enough VRAM for 7B models
- [Lambda Labs](https://lambdalabs.com) - Huge instances, competitive pricing
- [Runpod](https://runpod.io) - Bad pricing, community cloud option
- [Paperspace](https://paperspace.com) - Requires subscription, terrible pricing
- [Genesis Cloud](https://genesiscloud.com) - Reddit says it's affordable... I can't verify
- [Vast.ai](https://vast.ai) - Very affordable, especially the "interruptible" ones

## Research and Other Blogs
- [Mixture of Experts explained](https://goddard.blog/posts/clown-moe/)
- [A summary of the local llama scene in December 2023](https://www.reddit.com/r/LocalLLaMA/comments/18mwd6j/how_is_the_scene_currently/)