Scheduling alerts with GPT-4o

Posted on Jan 15, 2025

OpenAI quietly released the ability to schedule tasks from a GPT-4o conversation. Alerts have long been table stakes for a digital assistant, e.g. the ubiquitous “Hey Siri, set an alarm for 3 minutes”, so it tracks to see alert support land in aspiring-to-be-agent LLMs.

Compared to, say, Siri, the GPT scheduling flow is super flexible: a simple system prompt gets us natural-language recurring scheduling, dynamic / optional notifications generated at send time, notification editing, etc.¹ A notification can be sent by email and/or push, but the task is otherwise contained within a Tasks-specific model conversation.
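To make the idea concrete, here’s a minimal sketch of how a scheduling flow like this could be wired up with plain function calling. The `schedule_task` tool, its fields, and the RRULE-style recurrence are my own assumptions, not OpenAI’s internal Tasks implementation:

```python
# Hypothetical sketch: expose a "schedule_task" tool to GPT-4o so that a
# natural-language request becomes a structured, persistable schedule.
from openai import OpenAI

client = OpenAI()

schedule_tool = {
    "type": "function",
    "function": {
        "name": "schedule_task",  # assumed name, not OpenAI's
        "description": "Create a one-off or recurring scheduled task.",
        "parameters": {
            "type": "object",
            "properties": {
                "prompt": {
                    "type": "string",
                    "description": "Instruction to run when the task fires.",
                },
                "start": {
                    "type": "string",
                    "description": "ISO-8601 start time, e.g. 2025-01-16T08:00:00-05:00",
                },
                "rrule": {
                    "type": "string",
                    "description": "Optional iCal recurrence rule, e.g. FREQ=DAILY",
                },
            },
            "required": ["prompt", "start"],
        },
    },
}

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You can schedule tasks for the user."},
        {"role": "user", "content": "Every weekday at 8am, send me a local events digest."},
    ],
    tools=[schedule_tool],
)

# The model replies with a structured tool call that a scheduler can persist.
print(resp.choices[0].message.tool_calls)
```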

What’s it good for?

Tasks seem best for scheduling tool use to run in the future. I’m not particularly interested in simple notifications, e.g. ping me in 5 minutes, because those are already solved at the OS and calendar levels. Tool use, on the other hand, is interesting because it lets the model handle new information, e.g. a breaking news story.

Given that, here’s a quick list of what I’ll try using it for:

  • Recurring reports, e.g. local events, custom news / stock updates
  • Learning through spaced repetition
  • Regularly checking whether something occurred
  • Flexibly chaining tasks together?

The last bullet, chaining tasks together, is the most profound from an agent perspective but also the most half-baked given current tool-use capability. For example: check for a deposit to clear and, once it does, pay my bill. If you can reliably schedule that chain of tasks across common needs, you effectively have an agent. It just takes a few more nines of reliability and a well-developed tool-use infrastructure to get there.
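Stripped of the model entirely, the chain is just a recurring check gating a one-shot action. Here’s a minimal sketch of that pattern; `check_deposit_cleared` and `pay_bill` are placeholders for whatever tool calls the agent would actually make, not any real API:

```python
# Minimal sketch of the "chain" idea: poll a condition on a schedule and,
# once it holds, fire the dependent action exactly once.
import time


def check_deposit_cleared() -> bool:
    # Placeholder: in practice, a bank-API tool call made by the model.
    return False


def pay_bill() -> None:
    # Placeholder: a bill-pay tool call, gated on the check above.
    pass


def run_chained_task(poll_seconds: int = 3600, max_polls: int = 72) -> bool:
    """Poll the condition; trigger the follow-up action at most once."""
    for _ in range(max_polls):
        if check_deposit_cleared():
            pay_bill()
            return True
        time.sleep(poll_seconds)
    return False  # give up and notify the user instead
```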

GPTs don’t compose

Tasks follow OpenAI’s standard MO of packaging features in single-purpose GPTs / models. For example, there’s no way to search the web, use Canvas, or access memories from o1, and now I can’t use task scheduling from the o1 model either.

With the way things have been going, though, an o1-style reasoning model will presumably support tool use down the line, maybe even from within its own chain of thought. In the meantime, model features are fragmented and confusing.

Feel the lock-in

The downside of features like Tasks is increasing lock-in to OpenAI’s ecosystem. Open models and tool-use platforms are competitive with GPT-4o and keep the setup hackable. So, by the end of the year I’d like to move to a more open tool-use platform, perhaps one built on the Model Context Protocol. Thoughts for a future post.
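For a taste of what the open version might look like, here’s a sketch of the same scheduling tool exposed through an MCP server. It assumes the MCP Python SDK’s FastMCP helper; the tool name and behavior are placeholders of mine, not a finished design:

```python
# Hypothetical sketch: the schedule_task tool served over MCP, so any
# MCP-capable client (not just ChatGPT) could call it.
# Assumes the MCP Python SDK's FastMCP helper; details may differ.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("scheduler")


@mcp.tool()
def schedule_task(prompt: str, start: str, rrule: str = "") -> str:
    """Persist a one-off or recurring task and return a confirmation."""
    # Placeholder: write to whatever job store / cron-like runner you use.
    return f"scheduled {prompt!r} at {start} ({rrule or 'once'})"


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default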


  1. Back around 2012 I built some simple NLP to try and pull scheduling information out of text – a hard NLP problem that AFAIK was effectively unsolved relative to humans until LLMs.