AI Infrastructure · 2026-02-13 · VentureBeat

OpenAI Uses Cerebras Chips for 15x Faster Code Generation

OpenAI's quest for lightning-fast code generation has led to a major hardware partnership. The company's new GPT-5.3-Codex-Spark model achieves its "near-instant" performance by running on specialized hardware from Cerebras Systems rather than traditional NVIDIA GPUs. The collaboration has delivered a 15x speedup in code generation, marking a strategic and technical shift for OpenAI.

The move to Cerebras's wafer-scale engines is OpenAI's first significant inference deployment beyond its long-standing reliance on NVIDIA's ecosystem, and it highlights a growing industry focus on purpose-built chips that optimize specific AI workloads for maximum efficiency and performance. For code generation, where developer productivity hinges on latency, a 15x speed improvement is transformative, making the AI feel like a real-time collaborator.

The diversification strategy serves multiple goals: it potentially lowers operational costs and reduces dependence on a single infrastructure supplier.
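To make the latency claim concrete: the article reports only a 15x multiplier, not absolute response times, so the baseline figure below is a hypothetical illustration of what such a speedup means for a developer waiting on a completion.

```python
# Illustrative arithmetic only: the 15x factor comes from the article,
# but the 12-second baseline is a hypothetical example, not a reported figure.
def sped_up_latency(baseline_seconds: float, speedup: float = 15.0) -> float:
    """Latency after applying a multiplicative speedup factor."""
    return baseline_seconds / speedup

# A hypothetical 12-second code completion would drop to 0.8 seconds,
# crossing from "wait for the tool" into interactive, real-time territory.
print(sped_up_latency(12.0))
```

The point of the arithmetic is qualitative: at these factors, a completion that once interrupted a developer's flow returns before their attention shifts.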

