OpenAI just launched a new research preview called GPT-5.3 Codex-Spark. This model is built for one thing: extreme speed. While the standard GPT-5.3 Codex focuses on deep reasoning, Spark is designed for near-instant response times. It is the result of a deep hardware-software integration between OpenAI and Cerebras.
The results are striking. Spark is 15x faster than the flagship GPT-5.3 Codex, consistently delivering over 1,000 tokens per second. This speed effectively removes the delay between a developer's thought and the model's code output.
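To put the headline numbers in perspective, here is a back-of-the-envelope sketch (plain Python, no API calls) comparing how long a completion takes at the two reported throughputs. The 500-token snippet size is an arbitrary assumption for illustration.

```python
# Illustrative arithmetic only: the throughput figures (1,000+ tok/s for
# Spark, ~70 tok/s for the flagship) are the announcement's claims, and
# the 500-token snippet size is an assumption for this sketch.

def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds to stream num_tokens at a sustained throughput."""
    return num_tokens / tokens_per_second

snippet = 500  # tokens in a medium-sized code completion (assumed)

spark_s = generation_time(snippet, 1000)   # 0.5 s
flagship_s = generation_time(snippet, 70)  # ~7.1 s

# At exactly 1,000 tok/s the raw ratio is ~14x; sustained rates above
# that floor push it toward the quoted 15x.
print(f"Spark:    {spark_s:.1f} s")
print(f"Flagship: {flagship_s:.1f} s")
```

The difference is what turns code generation from "submit and wait" into something that feels interactive.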
The Hardware: Wafer-Scale Engineering
The massive performance leap is powered by the Cerebras Wafer-Scale Engine 3 (WSE-3). Traditional AI models run on clusters of small GPUs that must communicate with each other over interconnects, creating a bottleneck that slows the model down.
The WSE-3 is different. It is a single, giant chip the size of an entire silicon wafer. Because the whole model lives on one piece of silicon, there are no off-chip links to slow it down. This architecture provides:
- Massive on-chip memory.
- Ultra-high bandwidth.
- Low-latency compute.
By using the Cerebras CS-3 system, OpenAI can run inference at speeds that traditional GPU clusters cannot reach.
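The effect of removing inter-device hops can be sketched with a toy latency model. Every number below is a hypothetical placeholder, not a Cerebras or NVIDIA specification; the point is only that per-step interconnect traffic adds up when a model is sharded across devices.

```python
# Toy latency model (hypothetical numbers, for intuition only): one
# generation step costs its compute time plus any inter-device hops
# needed to pass activations between shards of the model.

def step_time_us(compute_us: float, hops: int, hop_latency_us: float) -> float:
    """Microseconds for one generation step: compute plus interconnect hops."""
    return compute_us + hops * hop_latency_us

# Model split across 8 GPUs: 7 inter-device hops per step (assumed).
gpu_cluster = step_time_us(compute_us=200, hops=7, hop_latency_us=150)

# Whole model on a single wafer: zero off-chip hops per step.
wafer_scale = step_time_us(compute_us=200, hops=0, hop_latency_us=150)

print(f"GPU cluster step: {gpu_cluster} us")  # 1250.0 us
print(f"Wafer-scale step: {wafer_scale} us")  # 200.0 us
```

Even with identical compute time per step, the sharded configuration spends most of its time moving data rather than computing.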
Software Optimizations and Low Latency
Speed is not just about the chip. OpenAI also re-engineered the way the model communicates with your computer, moving away from traditional request/response methods to a persistent WebSocket connection.
This change leads to several technical improvements:
- Round-Trip Time (RTT): Client-server overhead is reduced by 80%.
- Time-to-First-Token (TTFT): Improved by 50%, meaning code starts appearing almost the moment you hit enter.
- Per-Token Overhead: Internal processing time per token is cut by 30%.
These optimizations enable 'Real-Time Steering': you can interrupt the model while it is typing and redirect its logic without waiting for the full block to finish.
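The percentages above compound into a noticeably snappier request. The sketch below applies them to a hypothetical baseline; the baseline milliseconds are made up for illustration, and only the reduction percentages come from the announcement.

```python
# Apply the announced reductions to a hypothetical baseline request.
# Baseline values are invented for illustration; only the percentages
# (80% RTT, 50% TTFT, 30% per-token overhead) are from the announcement.

baseline = {"rtt_ms": 100.0, "ttft_ms": 800.0, "per_token_ms": 2.0}
reductions = {"rtt_ms": 0.80, "ttft_ms": 0.50, "per_token_ms": 0.30}

# Each optimized value is the baseline scaled by (1 - reduction).
optimized = {k: v * (1 - reductions[k]) for k, v in baseline.items()}

for k in baseline:
    print(f"{k}: {baseline[k]:g} ms -> {optimized[k]:g} ms")
```

On this assumed baseline, RTT drops from 100 ms to 20 ms and TTFT from 800 ms to 400 ms, which is the difference between a perceptible pause and an effectively instant first character.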
The Trade-offs: Speed vs. Reasoning
GPT-5.3 Codex-Spark is optimized for throughput, not deep complexity. It is a smaller model than the flagship GPT-5.3 Codex and, as a result, has lower reasoning depth.
Developers should be aware of these performance differences:
- Benchmarks: Spark scores lower on SWE-Bench Pro and Terminal-Bench 2.0 than the flagship model, and it may struggle with very complex, multi-file architecture changes.
- Security: Under OpenAI's Preparedness Framework, the flagship GPT-5.3 Codex is rated as 'High' capability for cybersecurity. Spark does not meet this threshold and should not be used for sensitive security logic or autonomous authentication tasks.
Quick Specs and Access
Spark is available now for ChatGPT Pro users and developers. You can access it through the following tools:
- Codex App: Use the model picker to select 'Spark.'
- VS Code Extension: Integrated directly into the composer.
- CLI: Run `codex --model gpt-5.3-codex-spark`.
| Feature | GPT-5.3 Codex-Spark | GPT-5.3 Codex (Flagship) |
| --- | --- | --- |
| Tokens per Second | 1000+ | ~70 |
| Context Window | 128k | 128k |
| Hardware | Cerebras WSE-3 | NVIDIA GPU Clusters |
| Best For | Fast Iteration | Deep Reasoning / Security |
Key Takeaways
- Extreme Speed: Spark is 15x faster than the flagship GPT-5.3 Codex, delivering throughput of over 1,000 tokens per second for near-instant code generation.
- Custom Silicon Infrastructure: This is OpenAI's first model to run on Cerebras Wafer-Scale Engine 3 (WSE-3) hardware rather than traditional NVIDIA GPUs, using wafer-scale memory to eliminate data-movement bottlenecks.
- Drastic Latency Reduction: The persistent WebSocket connection reduces client-server round-trip overhead by 80% and improves time-to-first-token by 50%.
- Real-Time Steering: Designed for micro-iterations, the model's speed lets developers interrupt and redirect logic in real time, shifting the workflow from batch processing to live pair programming.
- Targeted Capability Trade-offs: While faster, Spark has lower reasoning depth than the flagship model and does not meet the 'High capability' threshold for cybersecurity in OpenAI's Preparedness Framework, making it unsuitable for sensitive auth or security tasks.