Llama: update history grammar
parent 6f3c89a2fa
commit 16c9481590

1 changed file with 6 additions and 9 deletions
@@ -12,21 +12,18 @@ heroImage: '/images/llama/tiny-llama-logo.avif'

## My Background

I've been taking machine learning courses throughout the "modern history" of
llamas. When ChatGPT was first released, we brought in a guest lecturer on NLP
methods of the time. Since then, I've also taken an NLP course, though not one
focused on deep learning.

-Most my knowledge of this field comes from a few guest lectures, and the
-indispensable [r/localllama](https://www.reddit.com/r/LocalLLaMA/) community,
-which always has the latest news about local llamas. I've become a fan of the
-local llama movement in December 2023, so the "important points" covered here
-are coming from a retrospective.

-I use the terms "large language model" and "llama" interchangeably, throughout
-this piece. I write "open source and locally hosted llama" as "local llama".
-Whenever you see numbers 7B, that means the llama has 7 billion parameters. More
-parameters means the model is smarter but bigger.
+Throughout this piece, the terms "large language model" and "llama" are used
+interchangeably. Same goes for the terms "open source and locally hosted llama"
+and "local llama".
+
+Whenever you see numbers like 7B, that means the llama has 7 billion parameters.
+More parameters means the model is smarter but bigger.

## Modern History