# UnPIE: Unsupervised pedestrian intention estimation through deep neural embeddings and spatio-temporal graph convolutional networks
*Unsupervised pedestrian intention estimation through deep neural embeddings and spatio-temporal graph convolutional networks*, Simone Scaccia, Francesco Pro, Irene Amerini, 2025
| Instance Recognition method | Local Aggregation method |
|---|---|
| ![]() | ![]() |
Download annotations and video clips from the PIE webpage and place them in the `PIE_dataset` directory.
You can run this command to get the videos:
```shell
wget -r -np -c -nH -R index.html https://data.nvision2.eecs.yorku.ca/PIE_dataset/PIE_clips/
```
Note: download all the sets to run training and cross-validation, or only `set03` to run testing:
```shell
wget -r -np -c -nH -R index.html https://data.nvision2.eecs.yorku.ca/PIE_dataset/PIE_clips/set03/
```
Annotation zip files should be copied to the main dataset folder and unzipped. The PIE dataset provides three types of annotations: spatial annotations with text labels, object attributes, and ego-vehicle information.
You can run these commands to get the annotations:
```shell
cd PIE_dataset
wget -O annotations.zip https://github.com/aras62/PIE/blob/master/annotations/annotations.zip?raw=true
wget -O annotations_attributes.zip https://github.com/aras62/PIE/blob/master/annotations/annotations_attributes.zip?raw=true
wget -O annotations_vehicle.zip https://github.com/aras62/PIE/blob/master/annotations/annotations_vehicle.zip?raw=true
unzip 'annotations*.zip' && rm annotations*.zip
```
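If you prefer doing the unzip-and-cleanup step from Python, it can be sketched with only the standard library (the archive names mirror the `wget` commands above; the function name is illustrative):

```python
import glob
import os
import zipfile

def extract_annotation_zips(dataset_dir: str) -> list[str]:
    """Unzip every annotations*.zip in dataset_dir, then delete the archives."""
    extracted = []
    for zip_path in sorted(glob.glob(os.path.join(dataset_dir, "annotations*.zip"))):
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(dataset_dir)       # mirrors `unzip 'annotations*.zip'`
            extracted.extend(zf.namelist())
        os.remove(zip_path)                  # mirrors `rm annotations*.zip`
    return extracted
```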
The folder structure should look like this:
```
PIE_dataset
    annotations
        set01
        set02
        ...
    PIE_clips
        set01
        set02
        ...
```
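To catch path mistakes early, the layout above can be sanity-checked with a small helper (a sketch; the expected directories are taken from the tree above, and the function name is illustrative):

```python
import os

def check_pie_layout(root: str, sets=("set01", "set02")) -> list[str]:
    """Return the expected subdirectories that are missing under root."""
    missing = []
    for top in ("annotations", "PIE_clips"):
        for s in sets:
            path = os.path.join(root, top, s)
            if not os.path.isdir(path):
                missing.append(path)
    return missing
```

An empty list means the layout matches; anything returned is a directory that still needs to be created or downloaded.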
Miniconda Linux installation:
```shell
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
```
Check `$PATH`:
```shell
echo $PATH
```
If the Miniconda directory is not present, add `export PATH="/home/username/miniconda3/bin:$PATH"` at the end of the `~/.profile` file (substituting `username`):
```shell
echo "export PATH=\"/home/username/miniconda3/bin:\$PATH\"" >> ~/.profile
```
Reboot the system. Create the environment:
```shell
conda env create -f environment_tf2.yaml
```
Init conda:
```shell
conda init
```
Activate the environment:
```shell
conda activate unpie-tf2-env
```
Build the Dockerfile:
```shell
sudo docker build -t unpie:v1 .
```
Run the Docker container:
```shell
sudo docker run --rm -it -v ../PIE_dataset:/PIE_dataset --name unpie-v1-c1 --gpus all unpie:v1
```
Create `config.yml` in the `settings` folder:
```yaml
PIE_PATH: 'path/to/PIE_dataset'
PRETRAINED_MODEL_PATH: 'path/to/pretrained/model'
IS_GPU: True  # or False
PIE_SPLITS_TO_EXTRACT: 'all'  # train, val, test, or all
```
Run the following command to extract and save all the image features needed by the GNN without saving each frame:
```shell
python extract_features.py pie
```
Training:
```shell
sh run_training_x.sh
```
Testing:
```shell
sh run_testing_x.sh
```
where `x` can be `SUP` for supervised learning, or `IR` or `IR_LA` for unsupervised learning.
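For convenience, the phase/mode switch can be wrapped in a small helper that builds the command string (a sketch; the `run_<phase>_<mode>.sh` naming follows the scripts above, everything else is an assumption):

```python
import subprocess

MODES = {"SUP", "IR", "IR_LA"}  # SUP = supervised; IR / IR_LA = unsupervised

def run_script(phase: str, mode: str, dry_run: bool = False) -> str:
    """Build (and optionally run) the training/testing command for a mode."""
    if phase not in ("training", "testing"):
        raise ValueError(f"unknown phase: {phase}")
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    cmd = f"sh run_{phase}_{mode}.sh"
    if not dry_run:
        subprocess.run(cmd, shell=True, check=True)
    return cmd
```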
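Returning to `config.yml`: before a long extraction run it can be worth checking that the file parses and contains the expected keys. A stdlib-only sketch that handles just the flat `key: value` form shown earlier (a real setup would more likely use PyYAML's `yaml.safe_load`; the key names are copied from the example config):

```python
# Expected keys, taken from the example config.yml above.
REQUIRED_KEYS = {"PIE_PATH", "PRETRAINED_MODEL_PATH", "IS_GPU", "PIE_SPLITS_TO_EXTRACT"}

def load_config(path: str) -> dict:
    """Parse a flat key: value config file and check the expected keys."""
    cfg = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop comments and whitespace
            if not line:
                continue
            key, _, value = line.partition(":")
            cfg[key.strip()] = value.strip().strip("'\"")  # values stay strings
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise KeyError(f"config.yml is missing keys: {sorted(missing)}")
    return cfg
```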
If you find our work useful in your research, please consider citing our publication:
```bibtex
@Article{Scaccia2025,
  author  = {Scaccia, Simone and Pro, Francesco and Amerini, Irene},
  title   = {Unsupervised pedestrian intention estimation through deep neural embeddings and spatio-temporal graph convolutional networks},
  journal = {Pattern Analysis and Applications},
  year    = {2025},
  month   = {May},
  issn    = {1433-755X},
  doi     = {10.1007/s10044-025-01483-0},
  url     = {https://doi.org/10.1007/s10044-025-01483-0}
}
```
Some modules are taken and modified from the following repositories: