EFFOcc: A Minimal Baseline for EFficient Fusion-based 3D Occupancy Network

EFFOcc

EFFOcc: Learning Efficient Occupancy Networks from Minimal Labels for Autonomous Driving (Old title: EFFOcc: A Minimal Baseline for EFficient Fusion-based 3D Occupancy Network)

Demo videos

Project demo video:

EFFOcc_project_demo_video.mp4

LiDAR-camera occupancy prediction on the Occ3D-nuScenes dataset:

lc_occnet.00_00_00-00_01_30.mp4

Abstract

3D occupancy prediction (3DOcc) is a rapidly rising and challenging perception task in the field of autonomous driving. Existing 3D occupancy networks (OccNets) are both computationally heavy and label-hungry. In terms of model complexity, OccNets are commonly composed of heavy Conv3D modules or transformers at the voxel level. Moreover, OccNets are supervised with expensive large-scale dense voxel labels. Model and data inefficiencies, caused by excessive network parameters and label annotation requirements, severely hinder the onboard deployment of OccNets. This paper proposes an EFFicient Occupancy learning framework, EFFOcc, that targets minimal network complexity and label requirements while achieving state-of-the-art accuracy. We first propose an efficient fusion-based OccNet that uses only simple 2D operators and achieves state-of-the-art accuracy on three large-scale benchmarks: Occ3D-nuScenes, Occ3D-Waymo, and OpenOccupancy-nuScenes. On the Occ3D-nuScenes benchmark, the fusion-based model with a ResNet-18 image backbone has 21.35M parameters and reaches 51.49 mean Intersection over Union (mIoU). Furthermore, we propose a multi-stage occupancy-oriented distillation that efficiently transfers knowledge to a vision-only OccNet. Extensive experiments on occupancy benchmarks show state-of-the-art accuracy for both fusion-based and vision-based OccNets. To demonstrate learning with limited labels, we train the same vision OccNet with only 40% of labeled sequences plus distillation from the fusion-based OccNet, reaching 94.38% of the fully labeled model's performance (mIoU 28.38 vs. 30.07).
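As a rough illustration of the soft-label component of occupancy distillation described above, the following numpy sketch applies a temperature-scaled KL divergence between teacher (fusion) and student (vision) per-voxel class logits. All function names, the temperature value, and the class count are illustrative, not taken from this repository; the actual multi-stage losses are defined in the paper and configs.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over per-voxel class distributions,
    averaged across voxels (generic soft-label distillation)."""
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    kl = (t * (np.log(t + 1e-8) - np.log(s + 1e-8))).sum(axis=-1)
    return kl.mean() * temperature ** 2

# Toy example: 4 voxels, 18 categories (Occ3D-nuScenes labels
# 17 semantic classes plus free space).
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 18))
student = rng.normal(size=(4, 18))
print(distill_loss(student, teacher))  # non-negative scalar
print(distill_loss(teacher, teacher))  # ~0 when student matches teacher
```

In practice such a term would be combined with feature-level distillation and the standard occupancy loss, with the teacher's logits detached from the gradient.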

Data Setup

We follow the data preprocessing setup of BEVDet for the nuScenes dataset.
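For reference, a BEVDet-style nuScenes layout typically looks like the sketch below; directory and info-file names follow BEVDet's conventions and may differ slightly in this repository, so check the configs for the exact paths.

```
data/nuscenes/
├── maps/
├── samples/          # keyframe sensor data
├── sweeps/           # intermediate LiDAR sweeps
├── v1.0-trainval/    # nuScenes metadata
├── *_infos_train.pkl # generated info files (via BEVDet's data-creation script)
└── *_infos_val.pkl
```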

Models

Exps on Occ3D-nuScenes:

| Settings | Fusion Model | FlashOcc | Distilled Model |
| --- | --- | --- | --- |
| 100% pretrained | Fusion-R18 CFG CKPT | FlashOcc-R50 CFG CKPT | DistillOcc-R50 CFG CKPT |
| 100% from scratch | Fusion-R18 CFG CKPT | FlashOcc-R50 CFG CKPT | DistillOcc-R50 CFG CKPT |
| 5% | Fusion-R18 CFG CKPT | FlashOcc-R50 CFG CKPT | DistillOcc-R50 CFG CKPT |
| 10% | Fusion-R18 CFG CKPT | FlashOcc-R50 CFG CKPT | DistillOcc-R50 CFG CKPT |
| 20% | Fusion-R18 CFG CKPT | FlashOcc-R50 CFG CKPT | DistillOcc-R50 CFG CKPT |
| 40% | Fusion-R18 CFG CKPT | FlashOcc-R50 CFG CKPT | DistillOcc-R50 CFG CKPT |
| 60% | Fusion-R18 CFG CKPT | FlashOcc-R50 CFG CKPT | DistillOcc-R50 CFG CKPT |
| 80% | Fusion-R18 CFG CKPT | FlashOcc-R50 CFG CKPT | DistillOcc-R50 CFG CKPT |
| 100% pretrained | Fusion-R50 CFG CKPT | | |
| 100% pretrained | Fusion-SwinB CFG CKPT | | |

Exps on OpenOccupancy-nuScenes:

| Settings | Fusion Model |
| --- | --- |
| 100% | Fusion-R18 CFG CKPT |

Exps on OpenOccFlow-nuScenes:

| Settings | Fusion Model |
| --- | --- |
| 100% | Fusion-R18 CFG CKPT |

Exps on Occ3D-Waymo (checkpoints cannot be shared under the Waymo dataset terms of use):

| Settings | Model |
| --- | --- |
| 20% labels, 8 epochs | Fusion-R18 CFG |
| 100% labels, 24 epochs | Fusion-R18 CFG |
| 20% labels, 8 epochs | LiDAR CFG |
| 100% labels, 24 epochs | LiDAR CFG |

Acknowledgements

Thanks to these excellent prior open-source projects:
