From 16c94815904fec697bd6ca9e6dbe6e2ba8242531 Mon Sep 17 00:00:00 2001
From: Akemi Izuko
Date: Mon, 1 Jan 2024 02:38:28 -0700
Subject: [PATCH] Llama: update history grammar

---
 src/content/llama/a-history-of-llamas.md | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/src/content/llama/a-history-of-llamas.md b/src/content/llama/a-history-of-llamas.md
index 9d66818..9721cb7 100644
--- a/src/content/llama/a-history-of-llamas.md
+++ b/src/content/llama/a-history-of-llamas.md
@@ -12,21 +12,18 @@ heroImage: '/images/llama/tiny-llama-logo.avif'
 
 ## My Background
 
-I've been taking machine learning courses throughout the "modern history" of
-llamas. When ChatGPT was first released, we bought in a guest lecturer on NLP
-methods of the time. Since then, I've also taken an NLP course, though not one
-focused on deep learning.
-
 Most my knowledge of this field comes from a few guest lectures, and the
 indispensable [r/localllama](https://www.reddit.com/r/LocalLLaMA/) community,
 which always has the latest news about local llamas. I've become a fan of the
 local llama movement in December 2023, so the "important points" covered here
 are coming from a retrospective.
 
-I use the terms "large language model" and "llama" interchangeably, throughout
-this piece. I write "open source and locally hosted llama" as "local llama".
-Whenever you see numbers 7B, that means the llama has 7 billion parameters. More
-parameters means the model is smarter but bigger.
+Throughout this piece, the terms "large language model" and "llama" are used
+interchangeably. Same goes for the terms "open source and locally hosted llama"
+and "local llama".
+
+Whenever you see numbers like 7B, that means the llama has 7 billion parameters.
+More parameters means the model is smarter but bigger.
 
 ## Modern History
 