nuScenes
nuScenes: A Multimodal Dataset for Autonomous Driving. Robust detection and tracking of objects are crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking, and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine-learning-based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360-degree field of view. nuScenes comprises 1000 scenes, each 20 s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar- and image-based detection and tracking. Data, development kit and further information are available online.
References in zbMATH (referenced in 5 articles, 1 standard article)
Showing results 1 to 5 of 5, sorted by year.
- Wang, Yanbing; Monache, Maria Laura Delle; Work, Daniel B.: Identifiability of car-following dynamics (2022)
- Marcos Nieto, Orti Senderos, Oihana Otaegui: Boosting AI applications: Labeling format for complex datasets (2021) not zbMATH
- Rajeswar, Sai; Mannan, Fahim; Golemo, Florian; Parent-Lévesque, Jérôme; Vazquez, David; Nowrouzezahrai, Derek; Courville, Aaron: Pix2Shape: towards unsupervised learning of 3D scenes from images using a view-based representation (2020)
- Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, Oscar Beijbom: nuScenes: A multimodal dataset for autonomous driving (2019) arXiv
- Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, Vijay Vasudevan, Wei Han, Jiquan Ngiam, Hang Zhao, Aleksei Timofeev, Scott Ettinger, Maxim Krivokon, Amy Gao, Aditya Joshi, Sheng Zhao, Shuyang Cheng, Yu Zhang, Jonathon Shlens, Zhifeng Chen, Dragomir Anguelov: Scalability in Perception for Autonomous Driving: Waymo Open Dataset (2019) arXiv