The In-Laws and NH and chatgpt-shell context


Headed to the White Mountains with CEO to hang with the family for a few days.

org-babel chatgpt-shell multiple src block contexts!

I’m on the plane, my laptop shaking with turbulence, and I’m going to try to put together multiple org-babel chatgpt-shell contexts. Okie dokie.

Notes

The chatgpt-shell context is produced by the --context function. Currently this function takes no arguments, but if it were modified to 1) accept the name of the context it can a
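One hedged sketch of that idea — every name below (`my/chatgpt-contexts` and friends) is hypothetical, not part of chatgpt-shell’s actual API:

```elisp
;; Hypothetical sketch: keep several named contexts in a hash table and
;; dispatch on a name argument. None of these names exist in chatgpt-shell.
(defvar my/chatgpt-contexts (make-hash-table :test #'equal)
  "Map of context name (string) to a list of chat messages.")

(defun my/chatgpt-context-messages (name)
  "Return the message list stored under NAME, or nil if it has none yet."
  (gethash name my/chatgpt-contexts))

(defun my/chatgpt-context-append (name message)
  "Append MESSAGE to the context named NAME."
  (puthash name
           (append (gethash name my/chatgpt-contexts) (list message))
           my/chatgpt-contexts))
```

An org-babel header arg like a hypothetical `:context "my-context"` could then pick which message list gets sent with each request.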

Read more ⟶

A short todo


todo

  • pack up for NH with Curie
  • checkout latest chatgpt-shell
  • 1h bike ride at 18mph
  • GT7 Laguna Seca Gr.4?
Read more ⟶

First look at chatgpt-shell history


I need some sort of switchable “memory” using chatgpt-shell, for both the shell and org-mode. Memory, here, refers to a conversation like the one we have in the ChatGPT web interface. The Emacs chatgpt-shell and its org-mode integration each have the ability to hold a single conversation of chat messages (with the current caveat that the org-babel integration has a bug and the shell can’t be restored).

The Shell

chatgpt-shell does indeed support saving and loading session transcripts. Let’s try it out… alright, looks like this works well enough as-is. The key functions are:

Read more ⟶

June Twenty-Fifth


Emacs + LLMs

I’ve been continuing to integrate chatgpt-shell with Emacs and my workflows, specifically org-mode. The config below 1) changes the results formatting to be org- and markdown-friendly, 2) drops the temperature to zero to make it “deterministic”, and 3) takes advantage of the built-in programming prompt.

;; `a' is the a.el associative-data library; `a-get' looks up a key in an alist.
(require 'a)
(setq org-babel-default-header-args:chatgpt-shell
      `((:results . "code")                  ; format results as a code block
        (:version . "gpt-4-0613")            ; pin the model version
        (:system . ,(a-get chatgpt-shell-system-prompts "Programming"))
        (:temperature . 0)                   ; "deterministic" output
        (:wrap . "EXPORT markdown")))        ; wrap results for markdown export
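With those defaults in place, a chatgpt-shell block in an org buffer inherits all of them, so the block itself only needs the prompt text. A minimal example (the prompt wording is just an illustration):

```org
#+begin_src chatgpt-shell
Write an Emacs Lisp function that reverses a string.
#+end_src
```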

The prompt and export config target markdown. Let’s try reconfiguring them to use org, because when in Rome. First, let’s update the system prompt to ask the LLM for org formatting. Here’s the existing prompt:

Read more ⟶

June Twentieth


GT7 racing with SJO, sixteen laps at Spa. We’re not quite equal on pace but damn it’s a fun track.

LLMs continue to rock the world:

  • Prompting Is Programming: A Query Language for Large Language Models – “we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaption to many tasks while abstracting language model internals and providing high-level semantics.”
  • Sourcegraph – Code Intelligence Platform. This is the layer that brings in coding context. I was unimpressed by Sourcegraph’s new client, Cody, which was buggy when I tried it (libraries not indexing, black screens, etc.). Still, I’ve heard good things, so I have a feeling that when it clicks, it’s good. Uses Anthropic LLMs.
  • CollabGPT – Business oriented AI companion that integrates with the various systems of record. This is the layer that brings in business context. Signed up for the waitlist to learn more 🤨
  • vLLM – a new high-performance inference library on the block. Competes with HuggingFace Transformers (and llama.cpp?). Uses PagedAttention to achieve its throughput.
Read more ⟶