nnUNetV2 with Bounding Box Prediction for 2D Data
In this post, we’ll walk you through the installation and setup process for nnUNetV2 with bounding box prediction, specifically focusing on 2D data. nnUNetV2 is a powerful framework for biomedical image segmentation, and this guide will help you get started with installing, organizing your data, and training the model.
Installation
- Create and activate a virtual environment:
conda create -n SRP
conda activate SRP
- Install compatible versions of PyTorch, CUDA, and cuDNN:
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=11.8 -c pytorch -c nvidia
- Install nnUNet:
git clone https://github.com/MIC-DKFZ/nnUNet.git
cd nnUNet
pip install -e .
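To confirm that PyTorch was installed with working CUDA support, you can run a quick sanity check; on a GPU machine it should print the installed version and True:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"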
Setup Folder Structure
Next, you’ll need to organize the folder structure for nnUNet to handle your data. Follow the steps below:
- Create a directory named nnUNetFrame inside the nnUNet folder.
- Inside nnUNetFrame, create a directory named DATASET.
- Create three subfolders within DATASET: nnUNet_raw, nnUNet_preprocessed, and nnUNet_trained_models.
Run the following commands:
mkdir nnUNetFrame
cd nnUNetFrame
mkdir DATASET
cd DATASET
mkdir nnUNet_raw
mkdir nnUNet_preprocessed
mkdir nnUNet_trained_models
Import Data
For each dataset, follow this structure:
- Create a folder for each dataset, named Dataset00x_Name (where x is the task number).
- Inside each dataset folder, create three subfolders:
  - imagesTr: for training images
  - labelsTr: for training labels
  - imagesTs: for testing images
- Create a dataset.json file that includes metadata about the dataset. The format should look like this:
{
    "channel_names": {
        "0": "FLAIR",
        "1": "T1w",
        "2": "T1gd",
        "3": "T2w"
    },
    "labels": {
        "background": 0,
        "tumor": 1
    },
    "numTraining": 32,
    "file_ending": ".nii.gz",
    "overwrite_image_reader_writer": "SimpleITKIO"
}
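The example above is for a multi-channel 3D dataset stored as .nii.gz. Since this guide targets 2D data, here is a minimal sketch of a dataset.json for single-channel PNG slices; the channel name, label names, and case count are placeholders to adapt to your data, and it assumes nnU-Net v2's NaturalImage2DIO reader for PNG input:
{
    "channel_names": {
        "0": "grayscale"
    },
    "labels": {
        "background": 0,
        "object": 1
    },
    "numTraining": 50,
    "file_ending": ".png",
    "overwrite_image_reader_writer": "NaturalImage2DIO"
}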
Folder Structure Example
nnUNet/
├── nnUNetFrame/
│ ├── DATASET/
│ │ ├── nnUNet_raw/
│ │ │ ├── Dataset001_MEN/
│ │ │ │ ├── imagesTr/
│ │ │ │ │ ├── BRATS_001_0000.nii.gz
│ │ │ │ │ ├── BRATS_001_0001.nii.gz
│ │ │ │ │ ├── BRATS_001_0002.nii.gz
│ │ │ │ │ ├── BRATS_001_0003.nii.gz
│ │ │ │ ├── labelsTr/
│ │ │ │ │ ├── BRATS_001.nii.gz
│ │ │ ├── Dataset002_MET/
│ │ │ ├── Dataset003_GLI/
│ │ ├── nnUNet_preprocessed/
│ │ └── nnUNet_trained_models/
Setting Up the Environment Variables
- Edit your bash configuration:
vim ~/.bashrc
- Add the following lines at the end:
export nnUNet_raw="/path/to/nnUNetFrame/DATASET/nnUNet_raw"
export nnUNet_preprocessed="/path/to/nnUNetFrame/DATASET/nnUNet_preprocessed"
export nnUNet_results="/path/to/nnUNetFrame/DATASET/nnUNet_trained_models"
- Source the updated .bashrc:
source ~/.bashrc
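To confirm the variables are set, echo them; each should print the path you configured:
echo $nnUNet_raw
echo $nnUNet_preprocessed
echo $nnUNet_results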
Verify Data Integrity
You can verify the data integrity with the following command:
nnUNetv2_plan_and_preprocess -d [DATASET_ID] --verify_dataset_integrity
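For example, with dataset ID 4 (the dataset used in the training commands below):
nnUNetv2_plan_and_preprocess -d 4 --verify_dataset_integrity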
Training the Model
To train the model, run:
nnUNetv2_train [DATASET_ID] [UNET_CONFIGURATION] [FOLD]
Where:
- DATASET_ID: identifier for the dataset (e.g., 4).
- UNET_CONFIGURATION: configuration of the U-Net model (e.g., 2d or 3d_fullres). Use 2d for the 2D data covered in this guide.
- FOLD: fold number for cross-validation (from 0 to 4).
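For example, to train fold 0 of dataset 4 with the 2d configuration:
nnUNetv2_train 4 2d 0
If training is interrupted, it can be resumed from the latest checkpoint with the --c flag:
nnUNetv2_train 4 2d 0 --c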
Multi-GPU Training
If you have multiple GPUs, you can run multiple folds in parallel:
CUDA_VISIBLE_DEVICES=0 nnUNetv2_train 4 2d 0 --npz &
CUDA_VISIBLE_DEVICES=1 nnUNetv2_train 4 2d 1 --npz &
CUDA_VISIBLE_DEVICES=2 nnUNetv2_train 4 2d 2 --npz &
CUDA_VISIBLE_DEVICES=3 nnUNetv2_train 4 2d 3 --npz &
wait
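The --npz flag saves the softmax outputs of the validation cases, which nnU-Net uses to compare configurations. Once all folds have finished, you can identify the best-performing configuration (shown here for dataset 4 and the 2d configuration):
nnUNetv2_find_best_configuration 4 -c 2d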
Running Inference on Test Data
Once the training is complete, you can run inference on the test data:
nnUNetv2_predict -i /path/to/test_data -o /path/to/output -d [DATASET_ID] -c 2d -f [FOLD]
Example
nnUNetv2_predict -i /home/Intern1/Yun_SRP/nnUNet/nnUNetFrame/DATASET/nnUNet_raw/Dataset004_ALL/imagesTs -o /home/Intern1/Yun_SRP/nnUNet/nnUNetFrame/DATASET/nnUNet_raw/nnUNet_raw_data/inferTs -d 4 -c 2d -f 0
Checking GPU Memory
To check the available GPU memory, use:
nvidia-smi
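To monitor GPU usage continuously during training, refresh the readout every second:
watch -n 1 nvidia-smi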
This guide should help you get started with nnUNetV2 for 2D data segmentation. Make sure to follow the steps carefully, from setting up the environment to training and inference.