Llama: update history grammar

This commit is contained in:
Akemi Izuko 2024-01-01 02:38:28 -07:00
parent 6f3c89a2fa
commit 16c9481590
Signed by: akemi
GPG key ID: 8DE0764E1809E9FC


@ -12,21 +12,18 @@ heroImage: '/images/llama/tiny-llama-logo.avif'
## My Background
I've been taking machine learning courses throughout the "modern history" of
llamas. When ChatGPT was first released, we brought in a guest lecturer on NLP
methods of the time. Since then, I've also taken an NLP course, though not one
focused on deep learning.

Most of my knowledge of this field comes from a few guest lectures and the
indispensable [r/localllama](https://www.reddit.com/r/LocalLLaMA/) community,
which always has the latest news about local llamas. I became a fan of the
local llama movement in December 2023, so the "important points" covered here
are presented in retrospect.
Throughout this piece, the terms "large language model" and "llama" are used
interchangeably. The same goes for the terms "open source and locally hosted
llama" and "local llama".

Whenever you see a number like 7B, it means the llama has 7 billion parameters.
More parameters make the model smarter, but also bigger.
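That last point lends itself to a quick back-of-the-envelope calculation. As a rough sketch of my own (the function name and the byte-per-parameter figures are illustrative assumptions, not from any particular library), parameter count translates into weight memory like this:

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float = 2) -> float:
    """Approximate memory needed just to hold a model's weights.

    bytes_per_param: 2 for fp16/bf16, 4 for fp32,
    and roughly 0.5-1 for 4- to 8-bit quantized weights.
    """
    return params_billions * 1e9 * bytes_per_param / 1024**3

# A 7B llama at fp16 needs on the order of 13 GB for weights alone,
# before accounting for activations or the KV cache.
print(f"7B at fp16:  ~{weight_memory_gb(7):.1f} GB")
print(f"70B at fp16: ~{weight_memory_gb(70):.1f} GB")
```

This is why quantization matters so much to the local llama crowd: halving the bytes per parameter roughly halves the memory a given llama needs.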
## Modern History