Computer Vision Model Leaderboard

Leaderboard FAQ

Powered by supervision | model-leaderboard

Benchmarked on: COCO 2017 validation set (5,000 images)

Methodology

The Roboflow computer vision model leaderboard benchmarks popular object detection models against the Microsoft COCO dataset, a standard benchmark for evaluating and comparing the performance of object detection models.

Benchmark data in the table was computed independently by the Roboflow team, following the public inference instructions published by each model vendor. By following those instructions, we aim to reproduce each vendor's original benchmark results as closely as possible.
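As an illustration of this kind of evaluation, here is a minimal sketch that computes COCO-style mAP for bounding boxes with pycocotools. The annotation path and predictions file are placeholders, and this is not necessarily the exact script used for the leaderboard.

```python
# Minimal sketch: COCO-style bounding-box evaluation with pycocotools.
# Paths and the predictions file are placeholders, not the leaderboard's
# actual scripts or data.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations for the COCO 2017 validation set.
coco_gt = COCO("annotations/instances_val2017.json")

# Model predictions in COCO results format:
# [{"image_id": int, "category_id": int, "bbox": [x, y, w, h], "score": float}, ...]
coco_dt = coco_gt.loadRes("predictions.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP@[.50:.95], AP50, AP75, and AR metrics
```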

This project is open source, and the code we use for benchmarking is publicly available, so you can verify the results shown in the leaderboard table yourself.

We used the validation set of the COCO dataset to evaluate model performance on common objects. This means the benchmark is less useful for evaluating domain adaptiveness: how well an architecture performs on a specific domain.
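Since the page is powered by the supervision library, here is a hedged sketch of loading the COCO 2017 validation set with it. The directory layout is an assumption, and API details may vary between supervision versions.

```python
# Hedged sketch: loading COCO 2017 val with the supervision library.
# The directory layout below is an assumption, not the leaderboard's setup.
import supervision as sv

dataset = sv.DetectionDataset.from_coco(
    images_directory_path="coco/val2017",
    annotations_path="coco/annotations/instances_val2017.json",
)

print(len(dataset))         # 5,000 images in the validation split
print(dataset.classes[:5])  # first few of the 80 COCO class names
```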

The Roboflow 100 benchmark was designed to measure model performance across domains. If you are interested in learning more about domain-specific model benchmarking, refer to the Roboflow 100 website.