From Leaderboard to Deployment: Code Quality Challenges in AV Perception Repositories
2026-03-02 • Computer Vision and Pattern Recognition
Computer Vision and Pattern Recognition • Machine Learning • Robotics • Software Engineering
AI summary
The authors studied the code of 178 autonomous vehicle perception models and found that most are not ready for real-world use because of code errors and security problems. Using static analysis tools to assess code quality, they found that only a small portion of repositories met basic safety standards. They also observed that adopting automated testing and deployment pipelines correlates with better code quality. The authors conclude that good benchmark scores do not guarantee safe or maintainable software, and they provide recommendations to address the most common issues.
autonomous vehicles • perception models • code quality • production readiness • static analysis • security vulnerabilities • continuous integration • continuous deployment • software maintainability • 3D object detection
Authors
Mateus Karvat, Bram Adams, Sidney Givigi
Abstract
Autonomous vehicle (AV) perception models are typically evaluated solely on benchmark performance metrics, with limited attention to code quality, production readiness and long-term maintainability. This creates a significant gap between research excellence and real-world deployment in safety-critical systems subject to international safety standards. To address this gap, we present the first large-scale empirical study of software quality in AV perception repositories, systematically analyzing 178 unique models from the KITTI and NuScenes 3D Object Detection leaderboards. Using static analysis tools (Pylint, Bandit, and Radon), we evaluated code errors, security vulnerabilities, maintainability, and development practices. Our findings revealed that only 7.3% of the studied repositories meet basic production-readiness criteria, defined as having zero critical errors and no high-severity security vulnerabilities. Security issues are highly concentrated, with the top five issues responsible for almost 80% of occurrences, which prompted us to develop a set of actionable guidelines to prevent them. Additionally, the adoption of Continuous Integration/Continuous Deployment pipelines was correlated with better code maintainability. Our findings highlight that leaderboard performance does not reflect production readiness and that targeted interventions could substantially improve the quality and safety of AV perception code.
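The production-readiness criterion described in the abstract (zero critical errors and no high-severity security vulnerabilities) can be sketched as a simple check over the JSON outputs of Pylint and Bandit. This is a minimal illustration, not the authors' released tooling: the field names (`type` for Pylint messages, `issue_severity` for Bandit findings) follow the tools' documented JSON report formats, while the threshold logic is our reading of the criterion stated in the abstract.

```python
def is_production_ready(pylint_messages, bandit_results):
    """Apply the abstract's criterion: a repository passes only if it has
    zero Pylint errors/fatals and zero HIGH-severity Bandit findings.

    pylint_messages: list of dicts as produced by `pylint --output-format=json`
    bandit_results:  the "results" list from `bandit -f json`
    """
    critical_errors = sum(
        1 for m in pylint_messages if m.get("type") in ("error", "fatal")
    )
    high_severity = sum(
        1 for r in bandit_results if r.get("issue_severity") == "HIGH"
    )
    return critical_errors == 0 and high_severity == 0


# Hand-written records shaped like the tools' JSON output (hypothetical):
pylint_msgs = [
    {"type": "convention", "symbol": "missing-docstring"},
    {"type": "error", "symbol": "undefined-variable"},
]
bandit_res = [
    {"issue_severity": "MEDIUM", "test_id": "B301"},
]

print(is_production_ready(pylint_msgs, bandit_res))  # False: one critical error
print(is_production_ready([], bandit_res))           # True: medium severity only
```

A stricter variant could also gate on Radon maintainability scores, which the study measured; the abstract defines readiness only in terms of the two conditions checked above.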