The dataset contains pixel- and instance-wise segmentation masks for each object in the scene. Each object
has a unique ID.
Since these annotations were generated in simulation, they are perfect.
scenario/
    rgb-front/
        000000.jpg
        ...
    segmentation-front/
        000000.png
        ...
    depth-front/
        000000.png
        ...
from PIL import Image
import numpy as np

img = Image.open("segmentation-front/000099.png")
# instance ids are encoded in the G and B channels
id_fields = np.array(img)[:,:,1:3]
instance_ids = np.zeros(shape=(id_fields.shape[0], id_fields.shape[1]), dtype=np.int32)
instance_ids += id_fields[:,:,0]
instance_ids += id_fields[:,:,1].astype(np.int32) << 8
# per-pixel classes are encoded in the R channel
segmentation_mask = np.array(img)[:,:,0]
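Since each object has a unique ID, per-object binary masks can be derived directly from the instance_ids and segmentation_mask arrays computed above. A minimal sketch, assuming that ID 0 marks unlabeled background (this may differ in the actual data):

# one boolean mask per object visible in the frame
instance_masks = {}
for obj_id in np.unique(instance_ids):
    if obj_id == 0:
        continue  # assumption: ID 0 is unlabeled background
    instance_masks[obj_id] = instance_ids == obj_id
# semantic class of each object, looked up via the R channel
object_classes = {
    obj_id: int(np.bincount(segmentation_mask[mask]).argmax())
    for obj_id, mask in instance_masks.items()
}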
Example frames: camera image, instance segmentation mask, and depth map.
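The depth maps in depth-front/ are stored as PNGs. Their exact encoding is not described here; the following sketch assumes CARLA's standard 24-bit RGB depth encoding with a 1000 m far plane and may need to be adapted:

import numpy as np
from PIL import Image

# assumption: depth is packed into the R, G and B channels as in CARLA's documentation
rgb = np.array(Image.open("depth-front/000099.png"), dtype=np.float64)
normalized = (rgb[:, :, 0] + rgb[:, :, 1] * 256 + rgb[:, :, 2] * 256 ** 2) / (256 ** 3 - 1)
depth_in_meters = 1000.0 * normalized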
Semantic LIDAR Point Clouds
Point-wise annotated LiDAR point clouds for each frame, recorded with realistic sensor settings.
scenario/
    pointclouds/
        000000.feather
        ...
These files can be loaded with pandas:
import pandas as pd
# Columns: x, y, z, angle, object_id, class_id
data = pd.read_feather("000000.feather")
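The columns can be used directly to filter or regroup the cloud, for example to extract the raw coordinates or to collect all points reflected by a single object:

import pandas as pd

data = pd.read_feather("pointclouds/000000.feather")
# (N, 3) array of raw coordinates
points = data[["x", "y", "z"]].to_numpy()
# group the cloud by the object each point belongs to
points_per_object = {
    obj_id: group[["x", "y", "z"]].to_numpy()
    for obj_id, group in data.groupby("object_id")
}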
Example point clouds: LIDAR Example 1, LIDAR Example 2.
KITTI Annotations
KITTI annotations contain 3D bounding boxes and relate them to the camera via per-frame calibration files; a small parsing sketch follows the listing below.
scenario/
    kitti-front/
        complete_data/
            000000_extended.json
            ...
        label_2/
            000000.txt
            ...
        calib/
            000000.txt
            ...
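The files in label_2/ and calib/ follow KITTI naming. Assuming the labels use the standard 15-field KITTI object format (the _extended.json files are not covered here), a minimal parser could look like this:

# assumption: label_2/*.txt uses the standard KITTI object layout
def parse_kitti_labels(path):
    objects = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            objects.append({
                "type": fields[0],
                "truncated": float(fields[1]),
                "occluded": int(fields[2]),
                "alpha": float(fields[3]),
                "bbox_2d": [float(v) for v in fields[4:8]],      # left, top, right, bottom
                "dimensions": [float(v) for v in fields[8:11]],  # height, width, length
                "location": [float(v) for v in fields[11:14]],   # x, y, z in camera coordinates
                "rotation_y": float(fields[14]),
            })
    return objects

labels = parse_kitti_labels("kitti-front/label_2/000000.txt")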
Anomaly Annotations
Sample- and Sensor-Level
For cameras, the per-pixel anomaly labels are available in a separate directory.
Labels are stored as 1-channel PNGs in which 0 means normal and every other value means anomaly.
Sensor-level anomaly labels are given in a CSV with an anomaly column; a loading sketch for both follows the listing below.
scenario/
    anomaly-front/
        000000.png
        ...
    sensor.csv
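A minimal loading sketch for both label types; the column name anomaly and the exact location of sensor.csv inside the scenario directory are assumptions based on the description above:

import numpy as np
import pandas as pd
from PIL import Image

# per-pixel mask: 0 = normal, every other value = anomaly
anomaly_mask = np.array(Image.open("anomaly-front/000000.png")) != 0
# per-frame (sensor-level) labels, assuming a column named "anomaly"
sensor_labels = pd.read_csv("sensor.csv")
anomalous_frames = sensor_labels[sensor_labels["anomaly"] != 0]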
For LiDAR the anomaly labels are similarly available in a separate directory:
scenario/
    anomaly-pcl/
        000000.feather
        ...
    sensor.csv
The .feather files are serialized dataframes with a column for the anomaly label.
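They can again be loaded with pandas; the column name anomaly below is an assumption based on the description above:

import pandas as pd

# per-point anomaly labels for one LiDAR frame
pcl_labels = pd.read_feather("anomaly-pcl/000000.feather")
anomalous_points = pcl_labels[pcl_labels["anomaly"] != 0]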
Observation-Level
In CarlAnomaly, an observation is labeled anomalous if any of its sensors contains an anomaly.
For convenience, these are also stored in feather format:
scenario/
    anomaly-observation.feather
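A short sketch for reading the observation-level labels; the column name anomaly is again an assumption:

import pandas as pd

observations = pd.read_feather("anomaly-observation.feather")
# an observation counts as anomalous if any sensor contained an anomaly
anomalous_observations = observations[observations["anomaly"] != 0]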
Scenario-Level
These labels are given by the directory in which a scenario is stored.
Additional Data
The dataset additionally contains readings for the following sensors in Feather format:
IMU: Measuring acceleration and orientation of the ego vehicle
GNSS: Measuring position of the vehicle
Weather: Exact weather conditions
Actions: Actions executed by the auto-pilot (note: these are the actions that are executed by CARLA's traffic manager after the last frame)
scenario/
    gnss.feather
    imu.feather
    weather.feather
    actions.feather
You can simply load these as pandas dataframes:
import pandas as pd
weather = pd.read_feather("weather.feather")
Example: IMU
The per-step IMU readings look as follows: