A New Benchmark for Consumer AI Performance
SAN FRANCISCO--MLCommons®, the leading open engineering consortium dedicated to advancing machine learning (ML), has announced the public release of the MLPerf® Client v0.5 benchmark. The benchmark sets a new standard for evaluating consumer AI performance, enabling users, press, and the industry to measure how effectively laptops, desktops, and workstations run cutting-edge large language models (LLMs).
A Collaborative Effort by Industry Leaders
MLPerf Client represents a collaboration among technology leaders, including AMD, Intel, Microsoft, NVIDIA, Qualcomm Technologies, Inc., and top PC OEMs. These stakeholders have pooled resources and expertise to create a standardized benchmark, offering new insight into performance on key consumer AI workloads.
“MLPerf Client is a pivotal step forward in measuring consumer AI PC performance, bringing together industry heavyweights to set a new standard for evaluating generative AI applications on personal computers,” said David Kanter, Head of MLPerf at MLCommons.
Key Features of the MLPerf Client v0.5 Benchmark
Future Development
While version 0.5 marks the benchmark's debut, MLCommons plans to expand its capabilities in future releases, including support for additional hardware acceleration paths and a broader set of test scenarios incorporating a range of AI models.
Availability
The MLPerf Client v0.5 benchmark is available for download now from MLCommons. See the website for additional details on the benchmark’s hardware and software support requirements.
About MLCommons
MLCommons is the world leader in building benchmarks for AI. It is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled into a set of industry-standard metrics for measuring machine learning performance and promoting transparency in machine learning techniques. In collaboration with its 125+ members (global technology providers, academics, and researchers), MLCommons focuses on collaborative engineering work that builds tools for the entire AI industry: benchmarks and metrics, public datasets, and measurements for AI risk and reliability.
For more information and details on becoming a member, please visit MLCommons.org or contact participation@mlcommons.org.
Source: Business Wire