DMH-Net

Code for 3D Room Layout Estimation from a Cubemap of Panorama Image via Deep Manhattan Hough Transform (ECCV 2022)

If you have any questions or difficulties running the code, please open an issue.

[2022.11.30] Pretrained models are available! See the Pretrained Models section.

Installation

This project requires Python 3.7 (other versions may also work, but are not tested).

We recommend using Anaconda to create a virtual environment:

conda create -n dmhnet python=3.7
conda activate dmhnet

Then install PyTorch. If the CUDA version supported by your NVIDIA GPU driver (check it with nvidia-smi) is at least 11.1, you can use the following command directly:

conda install pytorch==1.9.1 torchvision==0.10.1 torchaudio==0.9.1 cudatoolkit=11.1 -c pytorch -c conda-forge

Otherwise, see this link for PyTorch installation instructions. The code is tested with PyTorch 1.9.1 and CUDA 11.1, but other versions should also work.
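
To verify the installation, this one-liner (a quick sanity check, not part of the repo) prints the PyTorch version and whether CUDA is available:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"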

Then clone this project, cd into the project directory, and run the following to install the pip dependencies:

pip install -r requirements.txt

Preparing data

PanoContext and Stanford 2D-3D

For these two datasets, you can use the preprocessed data generated by HorizonNet; see this link for download instructions. After downloading, arrange the data exactly as shown in HorizonNet. Alternatively, you can download the original datasets from their official release websites and preprocess them with preprocess.py; see the next section for details.
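
For reference, HorizonNet arranges the preprocessed data roughly as below (a sketch based on the HorizonNet README; the root folder name may differ, so treat their repository as authoritative):

data
├── train
│   ├── img/*.png
│   └── label_cor/*.txt
├── valid
│   ├── img/*.png
│   └── label_cor/*.txt
└── test
    ├── img/*.png
    └── label_cor/*.txt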

Matterport 3D

For copyright reasons, we cannot provide this dataset directly. Visit this link to download the Matterport 3D dataset, then preprocess it with:

python preprocess.py --img_glob origin_dataset/*.png --output_dir data/matterport3d_layout/img/
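
Note that depending on your shell, the glob may be expanded into a file list before it reaches the script; if preprocess.py expects the pattern itself (an assumption worth confirming with python preprocess.py -h), quote it:

python preprocess.py --img_glob "origin_dataset/*.png" --output_dir data/matterport3d_layout/img/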

Pretrained Models

You can browse and download the pretrained models from here.

Put the downloaded .pth files in the ckpt folder. You can then evaluate the models following the instructions in the Evaluate section.
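
To check what a downloaded checkpoint contains before evaluating, a standard PyTorch inspection works (this assumes the .pth file is an ordinary PyTorch checkpoint dict; the filename below is a placeholder for whichever file you downloaded):

python -c "import torch; print(list(torch.load('ckpt/best_valid.pth', map_location='cpu').keys()))"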

Train

We provide 3 configs in the cfgs directory, corresponding to the 3 datasets. For example, to train on the PanoContext dataset:

python train.py --cfg_file cfgs/panocontext.yaml --id dev -b 8

--cfg_file is the config file name. --id specifies the name used for saving checkpoints and tensorboard logs: checkpoints are saved to ckpt/{id} and tensorboard logs to logs/{id}. -b is the batch size. Run python train.py -h for more options.
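
Since tensorboard logs are written to logs/{id}, you can watch the training curves with the standard TensorBoard CLI (shown here for the --id dev example above):

tensorboard --logdir logs/dev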

Evaluate

To evaluate a model:

python eval.py --cfg_file cfgs/panocontext.yaml --ckpt ckpt/dev/best_valid.pth --print_detail --visu_all --visu_path result_visu/dev

Run python eval.py -h for more options. The quantitative results are printed at the end of the stdout output, and the qualitative results are written to the directory specified by the --visu_path option (result_visu/dev in the example above).
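
Because the quantitative results go to stdout, a plain shell redirect (generic shell usage, not a feature of this repo; the output filename is arbitrary) can keep a copy alongside the visualizations:

python eval.py --cfg_file cfgs/panocontext.yaml --ckpt ckpt/dev/best_valid.pth --print_detail --visu_all --visu_path result_visu/dev | tee eval_result.txt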
