ONCE-3DLanes

Introduction

ONCE-3DLanes is a real-world autonomous driving dataset with lane layout annotations in 3D space. It is a new benchmark constructed to stimulate the development of monocular 3D lane detection methods.

This dataset is constructed from ONCE; please refer to our paper, ONCE-3DLanes: Building Monocular 3D Lane Detection, for the dataset construction details. It is collected in various geographical locations in China, including highways, bridges, tunnels, suburbs and downtown areas, under different weather conditions (sunny/rainy) and lighting conditions (day/night).

The whole dataset contains 211K images with their corresponding 3D lane annotations in camera coordinates.

Dataset split: the data is divided into train, val and test sets (detailed in the Annotations section below).

The resolution of images in this dataset is 1920×1020.

Some examples from our dataset are shown below.

Descriptive statistical analysis

Our dataset covers a sufficient number of sloped scenes and different lighting conditions, and the distribution of lane counts per image reflects the complexity of its driving scenes.


Annotations

The dataset contains three parts (a sketch of the assumed directory layout follows the list):

  1. train: labels and train.txt
  2. val: labels and val.txt
  3. test: test.txt
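For reference, the resulting directory layout might look as follows; the exact folder arrangement is an assumption, so follow the extracted archive:

    ONCE_3DLanes/
      train/
        labels/      # one JSON label file per image
        train.txt    # list of training image paths
      val/
        labels/
        val.txt
      test/
        test.txt     # image list only; no labels are provided for test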


Data Format:

Each frame has one image, and for each image there is a frame.json file in which the number of lanes is stored as lane_num. Each lane is represented as a series of key points with x, y, z coordinates in the camera coordinate system.

Coordinates: in the camera coordinate system, the x-axis points to the right of the vehicle, the y-axis points to the front, and the z-axis points to the bottom.

We also provide code to read the labels; you can directly read the 2D and 3D lane labels of an image from its frame.json file.
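As an illustration, here is a minimal Python sketch for reading one label file. The JSON layout {"lane_num": ..., "lanes": [...]} and the file path are assumptions based on the description above; check the released reading code for the exact schema.

    import json

    def load_frame(json_path):
        """Load one frame.json and return (lane_num, lanes)."""
        with open(json_path, "r") as f:
            frame = json.load(f)
        lane_num = frame["lane_num"]  # number of lanes in this image
        lanes = frame["lanes"]        # assumed key: one list of key points per lane
        # Each lane is a list of [x, y, z] key points in camera coordinates
        # (x right, y front, z bottom, as described above).
        return lane_num, lanes

    lane_num, lanes = load_frame("train/labels/000000/frame.json")  # hypothetical path
    print(lane_num, [len(lane) for lane in lanes])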


Downloads

We do not provide the images; you can download them from ONCE_Download and easily index the required images using the image paths in our txt files.
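A minimal sketch of such indexing, assuming each line of a split txt file holds an image path relative to the ONCE image root; ONCE_ROOT and the line format are assumptions:

    import os

    ONCE_ROOT = "/data/ONCE"  # hypothetical location of the downloaded ONCE images

    def image_paths(list_file):
        """Yield absolute image paths for every entry in a split list file."""
        with open(list_file) as f:
            for line in f:
                rel = line.strip()
                if rel:
                    yield os.path.join(ONCE_ROOT, rel)

    for path in image_paths("train.txt"):
        assert os.path.exists(path), f"missing image: {path}"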

The 3D lane labels of our ONCE-3DLanes dataset are about 141.5 MB after compression.

We provide download links from Google Drive and Baidu Yunpan to facilitate users from all over the world.

Extraction code: 1234


Evaluation Tools

We also provide the evaluation tools here.
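For illustration only, here is a simplified chamfer-style distance between a predicted and a ground-truth lane, each given as an (N, 3) array of camera-coordinate key points. This is a sketch, not the official evaluation protocol; use the released tools to reproduce the reported numbers.

    import numpy as np

    def chamfer_distance(lane_a, lane_b):
        """Symmetric mean nearest-neighbor distance between two 3D point sets."""
        a = np.asarray(lane_a, dtype=float)  # shape (N, 3)
        b = np.asarray(lane_b, dtype=float)  # shape (M, 3)
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise (N, M)
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

    pred = [[0.1, 5.0, -1.5], [0.2, 10.0, -1.5]]
    gt   = [[0.0, 5.0, -1.5], [0.0, 10.0, -1.5]]
    print(chamfer_distance(pred, gt))  # small value for well-aligned lanes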


Baseline Results

  1. Comparison of monocular 3D lane detection methods

    We show the evaluation results of our method SALAD as well as of two other monocular 3D lane detection methods.

Method        F1(%)   Precision(%)   Recall(%)   CD error(m)
3D-LaneNet    44.73   61.46          35.16       0.127
Gen-LaneNet   45.59   63.95          35.42       0.121
SALAD         64.07   75.90          55.42       0.098


  2. Comparison of extended 2D lane detection methods

    We also extend several 2D lane detection methods with depth estimation to perform the 3D lane detection task; the comparison results are shown in the table below.

Method                F1(%)   Precision(%)   Recall(%)   CD error(m)
PointLaneNet (R101)   54.99   64.50          47.93       0.115
UltraFast (R101)      54.18   63.68          47.14       0.128
RESA (R101)           55.53   65.08          48.43       0.112
LaneAF (DLA34)        56.39   66.07          49.18       0.109
LaneATT (R122)        56.57   66.75          49.07       0.101
SALAD                 64.07   75.90          55.42       0.098


Visualization

We show several examples from the test set under different lighting conditions.

The ground-truth lanes are colored in red while the predicted lanes are colored in blue.

2D projections are shown on the left and 3D visualizations on the right.
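A minimal matplotlib sketch of such a 3D visualization, assuming each lane is an (N, 3) array of camera-coordinate key points; ground truth is drawn in red and predictions in blue, as in the figures:

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_lanes_3d(gt_lanes, pred_lanes):
        """Draw ground-truth lanes in red and predicted lanes in blue."""
        ax = plt.figure().add_subplot(projection="3d")
        for lane in gt_lanes:
            pts = np.asarray(lane)
            ax.plot(pts[:, 0], pts[:, 1], pts[:, 2], color="red")
        for lane in pred_lanes:
            pts = np.asarray(lane)
            ax.plot(pts[:, 0], pts[:, 1], pts[:, 2], color="blue")
        ax.set_xlabel("x (right)")
        ax.set_ylabel("y (front)")
        ax.set_zlabel("z (down)")
        plt.show()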


Citation

Please cite this paper in your publications if it helps your research: