Wednesday, back from South Lake
LLM
Some more reading. Still behind the curve, but starting to see how the pieces fit together to run these models locally on the hardware we have today:
- Vicuna is a LLaMA derivative fine-tuned as a chatbot on user-shared ChatGPT conversations, with GPT-4 used as the judge to score output quality – it reportedly reaches ~90% of ChatGPT quality (whatever that means). The 13B model runs on local commodity hardware, e.g. an Apple M1 machine with 32GB of RAM has no problem with it, and the fine-tuning of the 13B model only cost ~$300.
- Alpaca-LoRA opens the door to instruction-tuning on local hardware. The secret sauce is Low-Rank Adaptation (LoRA), which freezes the pre-trained weights and trains small low-rank update matrices injected alongside them – a much cheaper process than fine-tuning the full-rank weight matrices (see the sketch after this list).
- Found my way to the Wikipedia article on large language models – always good orientation reading.
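
To make the LoRA idea concrete, here's a minimal sketch assuming PyTorch: a frozen linear layer wrapped with a trainable rank-r update. The class name, rank, and dimensions are illustrative, not taken from the Alpaca-LoRA repo.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: y = W x + (B A x) * scale."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights (and bias)
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Toy usage: a 4096-wide projection (roughly LLaMA-7B attention width) adapted at rank 8.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} of {total:,}")  # 65,536 of ~16.8M
```

The arithmetic is the whole point: the full matrix has in×out parameters, while the adapter has only r×(in + out), so at rank 8 the trainable fraction of this layer is well under 1%, which is why the fine-tuning fits on local hardware.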
Exercise
Slight regression from last week: the ribs still hurt from Friday and I don't want to push it. Similar story with grip strength – the tendons in my left middle finger have been strained for about the last two weeks.
| Exercise | Set | Weight (lb) | Reps |
|---|---|---|---|
| DBell Overhead Press | 1 | 90 | 11 |
| DBell Overhead Press | 2 | 90 | 10 |
| DBell Overhead Press | 3 | 90 | 8 |
| Lateral Raise | 1 | 50 | 9 |
| Lateral Raise | 2 | 50 | 7 |
| Lateral Raise | 3 | 50 | 7 |
| Front Raise | 1 | 50 | 10 |
| Front Raise | 2 | 50 | 10 |
| Front Raise | 3 | 50 | 9 |