From 410573e03d52f480bea6ec897f0ec7bd8bca9a29 Mon Sep 17 00:00:00 2001
From: Akemi Izuko
Date: Sun, 11 Feb 2024 21:30:59 -0700
Subject: [PATCH] Llama: update history

---
 src/content/llama/a-history-of-llamas.md | 17 ++++++++++++++++-
 src/content/llama/localllama_links.md    |  5 ++++-
 src/pages/unix/index.astro               |  2 +-
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/src/content/llama/a-history-of-llamas.md b/src/content/llama/a-history-of-llamas.md
index e9dd40b..bb887de 100644
--- a/src/content/llama/a-history-of-llamas.md
+++ b/src/content/llama/a-history-of-llamas.md
@@ -1,7 +1,7 @@
 ---
 title: 'A Brief History of Local Llamas'
 description: 'A Brief History of Local Llamas'
-updateDate: 'Jan 01 2024'
+updateDate: 'Feb 11 2024'
 heroImage: '/images/llama/tiny-llama-logo.avif'
 ---
 
@@ -161,4 +161,19 @@ the turning point where open source models finally break ahead of commercial
 models. However as of writing, it's very unclear how the community will break
 through GPT4, the llama that remains uncontested in practice.
 
+#### Early 2024
+This is where we currently are! Hence, things are just dates for now. We'll see
+how much impact they have in a retrospective:
+
+ - 2024-01-22: Bard with Gemini-Pro defeats all models except GPT4-Turbo in
+   chatbot arena. This is seen as questionably fair, since Bard has internet
+   access.
+ - 2024-01-29: miqu gets released. This is a suspected Mistral-Medium leak.
+   Despite only having a 4bit-quantized version, it's ahead of all current
+   local llamas.
+ - 2024-01-30: Yi-34B is the largest local llama for language-vision. LLaVA
+   1.6, built on top of it, sets new records in vision performance.
+ - 2024-02-08: Google releases Gemini Advanced, a GPT4 competitor with similar
+   pricing. Public opinion seems to be that it's quite a bit worse than GPT4,
+   except that it's less censored and much better at creative writing.
 
diff --git a/src/content/llama/localllama_links.md b/src/content/llama/localllama_links.md
index 85ebe57..b24d55d 100644
--- a/src/content/llama/localllama_links.md
+++ b/src/content/llama/localllama_links.md
@@ -1,7 +1,7 @@
 ---
 title: 'Local Llama Quickstart'
 description: 'A collection of guides to get started with local llamas'
-updateDate: 'Dec 31 2023'
+updateDate: 'Jan 28 2024'
 heroImage: '/images/llama/llama-cool.avif'
 ---
 
@@ -49,6 +49,7 @@ anyone looking to get caught up with the field.
  - [Karpathy builds and trains a llama](https://www.youtube.com/watch?v=kCc8FmEb1nY)
  - [Build a llama DIY by freecodecamp](https://www.youtube.com/watch?v=UU1WVnMk4E8)
  - [Understanding different quantization methods](https://www.youtube.com/watch?v=mNE_d-C82lI)
+ - [LLM workshop](https://github.com/mlabonne/llm-course)
 
 #### Servers
  - [Ollama.ai](https://ollama.ai/)
@@ -58,6 +59,8 @@ anyone looking to get caught up with the field.
 
 #### GPU stuff
  - [Choosing a GPU for deeplearning](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/)
+ - [Last gen servers for
+   AI](https://www.kyleboddy.com/2024/01/28/building-deep-learning-machines-unorthodox-gpus/)
 
 #### Cloud renting
  - [Kaggle](https://kaggle.com) - 30h/week free, enough VRAM for 7B models

diff --git a/src/pages/unix/index.astro b/src/pages/unix/index.astro
index afb6372..629f0fb 100644
--- a/src/pages/unix/index.astro
+++ b/src/pages/unix/index.astro
@@ -29,7 +29,7 @@ const posts = (await getCollection('unix')).sort(

-        Over the past few years, I've extenstively configured at
+        Over the past few years, I've extensively configured and
         learned about using MacOS, Linux, and even some BSD systems. Along
         the way I documented some of the fun aspects of learning Unix. The
         code for most guides is part of my