New serverless Inference-as-a-Service offering available from Vultr across six continents and 32 locations worldwide
WEST PALM BEACH, Fla.: Vultr, the world’s largest privately-held cloud computing platform, today announced the launch of Vultr Cloud Inference. This new serverless platform revolutionizes AI scalability and reach by offering global AI model deployment and AI inference capabilities. Leveraging Vultr’s global infrastructure spanning six continents and 32 locations, Vultr Cloud Inference provides customers with seamless scalability, reduced latency, and enhanced cost efficiency for their AI deployments.
Today's rapidly evolving digital landscape has challenged businesses across sectors to deploy and manage AI models efficiently and effectively. This has created a growing need for more inference-optimized cloud infrastructure platforms with both global reach and scalability, to ensure consistent high performance. This is driving a shift in priorities as organizations increasingly focus on inference spending as they move their models into production. But with bigger models comes increased complexity. Developers are being challenged to optimize AI models for different regions, manage distributed server infrastructure, and ensure high availability and low latency.
With that in mind, Vultr created Cloud Inference. Vultr Cloud Inference will accelerate the time-to-market of AI-driven features, such as predictive and real-time decision-making, while delivering a compelling user experience across diverse regions. Users can simply bring their own model, trained on any platform, cloud, or on-premises, and it can be seamlessly integrated and deployed on Vultr’s global NVIDIA GPU-powered infrastructure. With dedicated compute clusters available on six continents, Vultr Cloud Inference ensures that businesses can comply with local data sovereignty, data residency, and privacy regulations by deploying their AI applications in regions that align with legal requirements and business objectives.
“Training provides the foundation for AI to be effective, but it's inference that converts AI’s potential into impact. As an increasing number of AI models move from training into production, the volume of inference workloads is exploding, but the majority of AI infrastructure is not optimized to meet the world’s inference needs,” said J.J. Kardwell, CEO of Vultr’s parent company, Constant. “The launch of Vultr Cloud Inference enables AI innovations to have maximum impact by simplifying AI deployment and delivering low-latency inference around the world through a platform designed for scalability, efficiency, and global reach.”
With the capability to self-optimize and auto-scale globally in real time, Vultr Cloud Inference ensures AI applications provide consistent, cost-effective, low-latency experiences to users worldwide. Moreover, its serverless architecture eliminates the complexities of managing and scaling infrastructure, delivering unparalleled impact.
“Demand is rapidly increasing for cutting-edge AI technologies that can power AI workloads worldwide,” said Matt McGrigg, director of global business development, cloud partners at NVIDIA. “The introduction of Vultr Cloud Inference will empower businesses to seamlessly integrate and deploy AI models trained on NVIDIA GPU infrastructure, helping them scale their AI applications globally.”
As AI continues to push the limits of what’s possible and change the way organizations think about cloud and edge computing, the scale of infrastructure needed to train large AI models and to support globally distributed inference needs has never been greater. Following the recent launch of Vultr CDN to scale media and content delivery worldwide, Vultr Cloud Inference will provide the technological foundation to enable innovation, increase cost efficiency, and expand global reach for organizations around the world, across industries, making the power of AI accessible to all.
Vultr Cloud Inference is now available for early access via registration here. Learn more about Vultr Cloud Inference at NVIDIA GTC and contact sales to get started.
About Constant and Vultr
Constant, the creator and parent company of Vultr, is on a mission to make high-performance cloud computing easy to use, affordable, and locally accessible for businesses and developers worldwide. Vultr has served over 1.5 million customers across 185 countries with flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Founded by David Aninowsky and completely bootstrapped, Vultr has become the world’s largest privately-held cloud computing company without ever raising equity financing. Learn more at: www.vultr.com.
Source: Business Wire