Author: Business Wire
A New Benchmark for Consumer AI Performance
SAN FRANCISCO: MLCommons®, the leading open engineering consortium dedicated to advancing machine learning (ML), is excited to announce the public release of the MLPerf® Client v0.5 benchmark. This benchmark sets a new standard for evaluating consumer AI performance, enabling users, press, and the industry to measure how effectively laptops, desktops, and workstations can run cutting-edge large language models (LLMs).
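For readers unfamiliar with client LLM benchmarking, tools of this kind typically report metrics such as time to first token (how quickly the system processes the prompt) and token generation rate. The sketch below illustrates that kind of measurement in Python; it is not MLPerf Client's implementation, and the `generate_tokens` callable and stub generator are purely hypothetical stand-ins for a real on-device LLM runtime.

```python
import time
from typing import Callable, Iterable, Tuple


def measure_llm_run(generate_tokens: Callable[[str], Iterable[str]],
                    prompt: str) -> Tuple[float, float]:
    """Time a single LLM inference pass.

    `generate_tokens` is a hypothetical callable that streams tokens for a
    prompt. Returns (time to first token in seconds, decode rate in tokens/s).
    """
    start = time.perf_counter()
    first_token_time = None
    count = 0
    for _ in generate_tokens(prompt):
        now = time.perf_counter()
        if first_token_time is None:
            first_token_time = now - start  # prompt-processing latency
        count += 1
    total = time.perf_counter() - start
    decode_time = total - (first_token_time or 0.0)
    # Rate over the tokens produced after the first one, the usual convention.
    tokens_per_second = (count - 1) / decode_time if decode_time > 0 else float("nan")
    return first_token_time or 0.0, tokens_per_second


if __name__ == "__main__":
    # Stub generator simulating prompt processing and per-token decode delays.
    def fake_generator(prompt: str):
        time.sleep(0.05)            # simulated prompt processing
        for _ in range(32):
            time.sleep(0.01)        # simulated per-token decode
            yield "tok"

    ttft, tps = measure_llm_run(fake_generator, "Summarize this document.")
    print(f"time to first token: {ttft:.3f}s, decode rate: {tps:.1f} tok/s")
```

This is only a conceptual illustration of the quantities such benchmarks report; the actual MLPerf Client workloads, models, and measurement methodology are defined by MLCommons and documented on its website.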
A Collaborative Effort by Industry Leaders
MLPerf Client represents a collaboration among technology leaders, including AMD, Intel, Microsoft, NVIDIA, Qualcomm Technologies, Inc., and top PC OEMs. These stakeholders have pooled resources and expertise to create a standardized benchmark, offering new insight into performance on key consumer AI workloads.
“MLPerf Client is a pivotal step forward in measuring consumer AI PC performance, bringing together industry heavyweights to set a new standard for evaluating generative AI applications on personal computers,” said David Kanter, Head of MLPerf at MLCommons.
Key Features of the MLPerf Client v0.5 Benchmark
Future Development
While version 0.5 marks the benchmark's debut, MLCommons plans to expand its capabilities in future releases, including support for additional hardware acceleration paths and a broader set of test scenarios incorporating a range of AI models.
Availability
The MLPerf Client v0.5 benchmark is available for download now from MLCommons. See the website for additional details on the benchmark’s hardware and software support requirements.
About MLCommons
MLCommons is the world leader in building benchmarks for AI. It is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its more than 125 members, including global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire AI industry through benchmarks and metrics, public datasets, and measurements for AI risk and reliability.
For more information and details on becoming a member, please visit MLCommons.org or contact participation@mlcommons.org.
Source: Business Wire