jtmoulia’s intermittent journal, guides, and recipes.
First look at chatgpt-shell history
I need some sort of switchable “memory” using chatgpt-shell, for both the shell and org-mode. Memory, here, refers to a conversation like the one we have in the ChatGPT web interface. The Emacs chatgpt-shell and its org-mode integration both have the ability to hold a single conversation of chat messages (with the current caveat that the org-babel integration has a bug and the shell can’t be restored).
The Shell
chatgpt-shell does indeed support saving and loading session transcripts. Let’s try it out… alright, looks like this works well enough as-is. The key functions are:
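From memory, the save/restore pair looks like this (command names as of mid-2023 chatgpt-shell; verify against your install with M-x chatgpt-shell- TAB):

;; From a chatgpt-shell buffer: write the conversation out to a file.
M-x chatgpt-shell-save-session-transcript
;; Later, load a saved transcript back into a fresh shell.
M-x chatgpt-shell-restore-session-from-transcript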
…June Twenty-Fifth
Emacs + LLMs
I’ve been continuing to integrate chatgpt-shell with Emacs and my workflows, specifically org-mode. The config below 1) changes the results formatting to be org + markdown friendly, 2) drops the temperature to zero to make it “deterministic”, and 3) takes advantage of the built-in programming prompt.
(require 'a)  ; alist helpers; provides `a-get'

(setq org-babel-default-header-args:chatgpt-shell
      `((:results . "code")
        (:version . "gpt-4-0613")
        (:system . ,(a-get chatgpt-shell-system-prompts "Programming"))
        (:temperature . 0)
        (:wrap . "EXPORT markdown")))
The prompt and exporting config target markdown. Let’s try reconfiguring it to use org, because when in Rome. First, let’s rework the existing “Programming” system prompt to ask the LLM for org formatting.
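The stock prompt text shifts between versions, so treat this as a sketch: assuming the “Programming” prompt asks for markdown, we can register an org-flavored variant and point the org-babel defaults at it (the “Programming (org)” name and the appended instruction are my own placeholders):

(require 'a)

;; Hypothetical: derive an org-flavored prompt from the stock "Programming" one.
(let ((base (a-get chatgpt-shell-system-prompts "Programming")))
  (add-to-list 'chatgpt-shell-system-prompts
               (cons "Programming (org)"
                     (concat base " Always reply with org-mode markup instead of markdown."))))

;; Point the org-babel defaults at the new prompt and wrap results as org.
(setq org-babel-default-header-args:chatgpt-shell
      `((:results . "code")
        (:version . "gpt-4-0613")
        (:system . ,(a-get chatgpt-shell-system-prompts "Programming (org)"))
        (:temperature . 0)
        (:wrap . "EXPORT org")))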
…June Twentieth
GT7 racing with SJO, sixteen laps at Spa. We’re not quite equal on pace but damn it’s a fun track.
LLMs continue to rock the world:
- Prompting Is Programming: A Query Language for Large Language Models – “we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaption to many tasks while abstracting language model internals and providing high-level semantics.”
- Sourcegraph – Code Intelligence Platform. This is the layer that brings in coding context. I was unimpressed by Sourcegraph’s new client, Cody, which was buggy when I tried it (libraries not indexing, black screens, etc.). Still, I’ve heard good things, so I have a feeling that when it clicks it’s good. Uses Anthropic LLMs.
- CollabGPT – Business oriented AI companion that integrates with the various systems of record. This is the layer that brings in business context. Signed up for the waitlist to learn more 🤨
- vLLM – a new high-performance inference library on the block. Competes with HuggingFace Transformers (and llama.cpp?). Uses PagedAttention to achieve its throughput.
F1 Montreal Monday
I was visiting my family in Marin for Father’s Day, so I couldn’t see the race till today. Thrilling to watch Alonso and Hamilton clash initially, but Alonso had pace in hand when it came down to it. Consistent with the season, Max had by far the most pace to spare.
Went for a bike ride after watching the race. Looped Calero Park from the South parking lot. I practiced carrying momentum and flow through the turns – keeping off the brakes while hitting the apex made a big difference. Strikingly unintuitive.
…F1 Barcelona Sunday
SUPERMAX! More surprising than Max taking the top of the podium was Lewis and Russell giving Merc the second and third places. They had shipped some upgrades in Monaco, but Barcelona is a classic circuit that seemed to showcase their potential. Second-place drama as Aston Martin is now twenty points off Merc in the constructors and Lewis is chasing third. Also, dropping the chicane into the final straight was fantastic – T1 was a helluva dueling ground.
…