
Cerebras Triples its Industry-Leading Inference Performance, Setting New All Time Record


Business Wire

Cerebras Inference delivers 2,100 tokens/second for Llama 3.2 70B -- 16x the performance of the fastest GPUs and 68x faster than hyperscale clouds

SUNNYVALE, Calif.: Today, Cerebras Systems, the pioneer in high performance AI compute, smashed its previous industry record for inference, delivering 2,100 tokens/second performance on Llama 3.2 70B. This is 16x faster than any known GPU solution and 68x faster than hyperscale clouds as measured by Artificial Analysis, a third-party benchmarking organization. Moreover, Cerebras Inference serves Llama 70B more than 8x faster than GPUs serve Llama 3B, delivering an aggregate 184x advantage (8x faster on models 23x larger). By providing Instant Inference for large models, Cerebras is unlocking new AI use cases powered by real-time, higher-quality responses, chain-of-thought reasoning, more interactions and higher user engagement.

“The world’s fastest AI inference just got faster. It takes graphics processing units an entirely new hardware generation -- two to three years -- to triple their performance. We just did it in a single software release,” said Andrew Feldman, CEO and co-founder, Cerebras. “Early adopters and AI developers are creating powerful AI use cases that were impossible to build on GPU-based solutions. Cerebras Inference is providing a new compute foundation for the next era of AI innovation.”

From global pharmaceutical giants like GlaxoSmithKline (GSK), to pioneering startups like Audivi, Tavus, Vellum and LiveKit, Cerebras is eliminating AI application latency with 60x speed-ups:

  • GSK: “With Cerebras’ inference speed, GSK is developing innovative AI applications, such as intelligent research agents, that will fundamentally improve the productivity of our researchers and drug discovery process,” said Kim Branson, SVP of AI and ML, GSK.
  • LiveKit: “When building voice AI, inference is the slowest stage in your pipeline. With Cerebras Inference, it’s now the fastest. A full pass through a pipeline consisting of cloud-based speech-to-text, 70B-parameter inference using Cerebras Inference, and text-to-speech, runs faster than just inference alone on other providers. This is a game changer for developers building voice AI that can respond with human-level speed and accuracy,” said Russ d’Sa, CEO of LiveKit.
  • Audivi AI: “For real-time voice interactions, every millisecond counts in creating a seamless, human-like experience. Cerebras’ fast inference capabilities empower us to deliver instant voice interactions to our customers, driving higher engagement and expected ROI,” said Seth Siegel, CEO of Audivi AI.
  • Tavus: “We migrated from a leading GPU solution to Cerebras and reduced our end-user latency by 75%,” said Hassan Raza, CEO of Tavus.
  • Vellum: “Our customers are blown away with the results! Time to completion on Cerebras is hands down faster than any other inference provider, and I’m excited to see the production applications we’ll power via the Cerebras inference platform,” said Akash Sharma, CEO of Vellum.

Cerebras is gathering the Llama community at llamapalooza NYC, a developer event that will feature talks from Meta, Hugging Face, LiveKit, Vellum, LaunchDarkly, Val.town, Haize Labs, Crew AI, Cloudflare, South Park Commons, and Slingshot.

Cerebras Inference is powered by the Cerebras CS-3 system and its industry-leading AI processor, the Wafer Scale Engine 3 (WSE-3). Unlike graphics processing units that force customers to make trade-offs between speed and capacity, the CS-3 delivers best-in-class per-user performance while sustaining high throughput. The massive size of the WSE-3 enables many concurrent users to benefit from blistering speed. With 7,000x more memory bandwidth than the Nvidia H100, the WSE-3 solves generative AI’s fundamental technical challenge: memory bandwidth. Developers can easily access the Cerebras Inference API, which is fully compatible with the OpenAI Chat Completions API, making migration seamless with just a few lines of code.
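Because the API follows the OpenAI Chat Completions format, migrating typically means changing only the base URL, API key, and model name. The sketch below illustrates the idea using only the Python standard library; the base URL and model identifier shown are assumptions for illustration, so check the Cerebras documentation for the actual values before use.

```python
import json
import os
import urllib.request

# Assumed values for illustration -- verify against the Cerebras docs.
BASE_URL = "https://api.cerebras.ai/v1"  # assumed OpenAI-compatible endpoint
MODEL = "llama-3.2-70b"                  # assumed model identifier

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI Chat Completions-style request for Cerebras Inference."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('CEREBRAS_API_KEY', '')}",
        },
        method="POST",
    )

req = build_chat_request("Explain chain-of-thought reasoning in one sentence.")
# To actually send the request (requires a valid API key):
#     with urllib.request.urlopen(req) as resp:
#         print(json.load(resp)["choices"][0]["message"]["content"])
```

An existing OpenAI SDK client can be pointed at the same endpoint by overriding its `base_url` and `api_key` settings, which is what "a few lines of code" refers to in practice.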

Cerebras Inference is available now, at a fraction of the cost of hyperscale and GPU clouds. Try Cerebras Inference today: www.cerebras.ai.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference, powered by Wafer-Scale Engine 3, delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premise. For further information, visit www.cerebras.ai or follow us on LinkedIn or X.

Source: Business Wire
