MLCommons Introduces MLPerf Client v0.5

A New Benchmark for Consumer AI Performance

SAN FRANCISCO: MLCommons®, the leading open engineering consortium dedicated to advancing machine learning (ML), is excited to announce the public release of the MLPerf® Client v0.5 benchmark. This benchmark sets a new standard for evaluating consumer AI performance, enabling users, press, and the industry to measure how effectively laptops, desktops, and workstations can run cutting-edge large language models (LLMs).

A Collaborative Effort by Industry Leaders

MLPerf Client represents a collaboration among technology leaders, including AMD, Intel, Microsoft, NVIDIA, Qualcomm Technologies, Inc., and top PC OEMs. These stakeholders have pooled resources and expertise to create a standardized benchmark, offering new insight into performance on key consumer AI workloads.

“MLPerf Client is a pivotal step forward in measuring consumer AI PC performance, bringing together industry heavyweights to set a new standard for evaluating generative AI applications on personal computers,” said David Kanter, Head of MLPerf at MLCommons.

Key Features of the MLPerf Client v0.5 benchmark:

  • AI model: The benchmark’s tests are based on Meta's Llama 2 7B large language model, optimized for reduced memory and computational requirements via 4-bit integer quantization.
  • Tests and metrics: Includes four AI tasks (content generation, creative writing, and text summarization at two different document lengths), evaluated using familiar metrics such as time-to-first-token (TTFT) and tokens-per-second (TPS); an illustrative sketch of these two metrics follows this list.
  • Hardware optimization: Supports hardware-accelerated execution on integrated and discrete GPUs via two distinct paths: ONNX Runtime GenAI and Intel OpenVINO.
  • Platform support: This initial release supports Windows 11 on x86-64 systems, with future updates planned for Windows on Arm and macOS.
  • Freely accessible: The benchmark is freely downloadable from MLCommons.org, empowering anyone to measure AI performance on supported systems.
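
For readers less familiar with the two metrics: time-to-first-token captures how long a user waits before the first token of a response appears, while tokens-per-second captures generation throughput over the full response. The sketch below is a minimal illustration of how such metrics can be derived from a streaming response; it is not code from the MLPerf Client benchmark, and the function and generator names are hypothetical.

    # Illustrative sketch only: deriving TTFT and TPS from a streaming LLM response.
    # Not taken from MLPerf Client; the names used here are hypothetical.
    import time
    from typing import Iterable, Tuple

    def measure_ttft_and_tps(token_stream: Iterable[str]) -> Tuple[float, float]:
        """Consume a streaming response and return (TTFT in seconds, tokens per second)."""
        start = time.perf_counter()
        first_token_at = None
        token_count = 0
        for _ in token_stream:                        # tokens arrive as the model emits them
            if first_token_at is None:
                first_token_at = time.perf_counter()  # latency until the first token
            token_count += 1
        end = time.perf_counter()
        ttft = (first_token_at - start) if first_token_at is not None else float("inf")
        elapsed = end - start
        tps = token_count / elapsed if elapsed > 0 else 0.0   # throughput over the whole response
        return ttft, tps

    # Quick check with a stand-in generator that fakes a model emitting tokens:
    def fake_model():
        for word in "The quick brown fox jumps over the lazy dog".split():
            time.sleep(0.05)                          # simulate per-token latency
            yield word

    print(measure_ttft_and_tps(fake_model()))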

Future Development

While version 0.5 marks the benchmark's debut, MLCommons plans to expand its capabilities in future releases, including support for additional hardware acceleration paths and a broader set of test scenarios incorporating a range of AI models.

Availability

The MLPerf Client v0.5 benchmark is available for download now from MLCommons. See the website for additional details on the benchmark’s hardware and software support requirements.

About MLCommons

MLCommons is the world leader in building benchmarks for AI. It is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members, global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire AI industry through benchmarks and metrics, public datasets, and measurements for AI risk and reliability.

For more information and details on becoming a member, please visit MLCommons.org or contact participation@mlcommons.org.

Source: Business Wire
