What is MLPerf?

MLPerf (short for Machine Learning Performance) is a benchmark suite designed to measure the performance of machine-learning (ML) hardware, software and cloud platforms. Created by a consortium of AI industry leaders, researchers and academics, and now stewarded by MLCommons, it provides a standardized set of benchmarks and metrics that enable fair comparisons across different ML workloads and systems.

The MLPerf benchmarks cover a range of ML tasks and scenarios, including:

  • MLPerf Training: This measures how quickly a system can train a model to a target quality level (for example, a reference accuracy) on tasks such as image classification, object detection and translation; a toy time-to-train sketch follows this list.

  • MLPerf Inference: This evaluates how well systems run trained models to make predictions on new data. It spans deployment categories such as data center, edge and mobile, each exercised under load scenarios like single-stream (one query at a time) and offline (batch throughput); see the sketch after this list.
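
Conceptually, a Training result is a "time to train": the wall-clock time from the start of training until the model first reaches the target quality. The sketch below is a toy illustration of that measurement loop, not MLPerf code; train_one_epoch and evaluate are hypothetical stand-ins for the real reference model and dataset, and the 75.9% target mirrors the published ResNet-50 top-1 accuracy target.

```python
import random
import time

# Hypothetical stand-ins: a real MLPerf Training run uses a reference model
# (e.g., ResNet-50) and dataset, not these toy functions.
def train_one_epoch(state):
    time.sleep(0.05)                    # pretend to do one epoch of training
    state["quality"] += random.uniform(0.02, 0.06)
    return state

def evaluate(state):
    return state["quality"]             # e.g., top-1 validation accuracy

TARGET_QUALITY = 0.759                  # ResNet-50's 75.9% top-1 target

state = {"quality": 0.0}
start = time.perf_counter()
epochs = 0
while evaluate(state) < TARGET_QUALITY:
    state = train_one_epoch(state)
    epochs += 1

elapsed = time.perf_counter() - start
print(f"reached {evaluate(state):.3f} quality in {epochs} epochs, "
      f"{elapsed:.2f} s wall clock")
```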
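
Inference results, by contrast, are latency and throughput figures gathered under a prescribed load pattern (the official suite drives the system under test with a load generator called LoadGen). Below is a minimal sketch of two of those patterns, assuming a hypothetical infer function in place of a real trained model:

```python
import statistics
import time

def infer(sample):
    """Hypothetical stand-in for one forward pass of a trained model."""
    time.sleep(0.002)                   # pretend each inference takes ~2 ms
    return sample

samples = list(range(200))

# Single-stream style: issue one query at a time, report tail latency.
latencies = []
for s in samples:
    t0 = time.perf_counter()
    infer(s)
    latencies.append(time.perf_counter() - t0)
p90 = statistics.quantiles(latencies, n=10)[-1]   # 90th-percentile latency
print(f"single-stream p90 latency: {p90 * 1e3:.2f} ms")

# Offline style: process the whole batch at once, report throughput.
t0 = time.perf_counter()
for s in samples:
    infer(s)
throughput = len(samples) / (time.perf_counter() - t0)
print(f"offline throughput: {throughput:.1f} samples/s")
```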

Each benchmark within MLPerf is designed to represent a particular class of ML applications, providing insight into how well a system performs under that specific workload. The benchmarks are also versioned, with periodic updates to keep pace with the rapidly advancing ML field.

By providing a common set of benchmarks, MLPerf aims to drive innovation in the ML field by enabling fair comparisons between different ML products and technologies, and helping customers make informed decisions based on reliable performance data.