News

Codex Spark: OpenAI's Fastest Model at 1,000+ Tokens Per Second

OpenAI is rolling out GPT-5.3-Codex-Spark to engaged Codex subscribers on ChatGPT Plus, promising speeds over 1,000 tokens per second. For developers, this could mean near-instant code generation and iteration.

What It Is

GPT-5.3-Codex-Spark is a new model variant from OpenAI optimized for speed. Available as a temporary preview through March 20 for active Codex users on ChatGPT Plus at no extra cost, it's positioned as the fastest model in OpenAI's lineup. Its 1,000+ tokens-per-second throughput dramatically reduces latency for coding tasks.

How This Helps Today

Fast token generation changes the interaction model—instead of waiting seconds for responses, you get near-instant feedback, making AI-assisted coding feel more like pair programming. For rapid prototyping, you can iterate through multiple approaches quickly without losing flow. The preview availability through March 20 lets teams evaluate whether the speed improvement justifies their Plus subscription or future API costs. For workflows requiring real-time suggestions—like live coding interviews or teaching scenarios—low latency is essential.

The Context

Speed has become a key battleground in AI coding tools. Cursor, GitHub Copilot, and various startups compete on latency as much as output quality. Anthropic recently released Claude Code with voice mode; OpenAI's response emphasizes raw throughput. The 'temporary preview' framing suggests OpenAI is testing price-performance positioning—speed may become a premium tier feature rather than standard.

What to Watch

The preview ends March 20; watch for pricing announcements if you become dependent on the speed. A rate of 1,000 tokens per second is impressive but may come with quality trade-offs, so benchmark the model on your own codebases. Check whether the speed gains hold for longer context windows or degrade with complex prompts. Also monitor API availability: this preview appears limited to ChatGPT Plus, not the API, which affects how you could integrate it into production workflows.
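If you do benchmark, measure throughput yourself rather than trusting headline numbers. A minimal sketch of how you might time a streamed response, assuming you feed it the text chunks from whatever streaming client you use (the whitespace-based token count is a rough stand-in; a real tokenizer such as tiktoken would give accurate figures):

```python
import time

def tokens_per_second(token_count: int, elapsed_s: float) -> float:
    """Throughput in tokens/second; guards against a zero elapsed time."""
    return token_count / elapsed_s if elapsed_s > 0 else 0.0

def benchmark_stream(chunks, clock=time.perf_counter):
    """Consume an iterable of text chunks (e.g. pieces of a streamed
    model response) and return (token_count, elapsed_s, tokens/sec).

    Token counting here is a rough whitespace split, not the model's
    real tokenizer -- swap in an actual tokenizer for precise numbers.
    """
    start = clock()
    count = 0
    for chunk in chunks:
        count += len(chunk.split())
    elapsed = clock() - start
    return count, elapsed, tokens_per_second(count, elapsed)
```

In practice you would pass the streaming iterator from your API client directly to `benchmark_stream`, so the timer covers the full network-plus-generation wall time, and repeat the run across several prompts of varying length to see whether throughput holds up.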

Stay ahead with the latest news in AI

You will not get replaced by AI, but by someone using AI - Sam Altman