simonescaccia/UnPIE
UnPIE: Unsupervised pedestrian intention estimation through deep neural embeddings and spatio-temporal graph convolutional networks

Unsupervised pedestrian intention estimation through deep neural embeddings and spatio-temporal graph convolutional networks, Simone Scaccia, Francesco Pro, Irene Amerini, 2025

[Figure: graph representation]

UnPIE network

[Figure: UnPIE network]

Unsupervised training visualization

[Figure: Instance Recognition method] [Figure: Local Aggregation method]

Training/Testing Dataset

PIE dataset

Download the annotations and video clips from the PIE webpage and place them in the PIE_dataset directory.

You can run this command to get the videos:

wget -r -np -c -nH -R index.html https://data.nvision2.eecs.yorku.ca/PIE_dataset/PIE_clips/

Note: download all the sets to run training and cross-validation, or only set03 to run testing:

wget -r -np -c -nH -R index.html https://data.nvision2.eecs.yorku.ca/PIE_dataset/PIE_clips/set03/
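After the download finishes, you can sanity-check that the clips landed where the wget flags put them (`-nH` drops the hostname, so files land under PIE_dataset/ relative to the current directory). The `.mp4` extension here is an assumption about the clip format:

```shell
# Illustrative sanity check: count the video clips downloaded for set03
# (.mp4 is an assumption about the clip format)
clip_count=$(ls PIE_dataset/PIE_clips/set03/*.mp4 2>/dev/null | wc -l)
echo "set03 clips found: $clip_count"
```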

Annotation zip files should be copied to the main dataset folder and unzipped. The PIE dataset provides three types of annotations: spatial annotations with text labels, object attributes, and ego-vehicle information.

You can run this command to get the annotations:

cd PIE_dataset
wget -O annotations.zip https://github.com/aras62/PIE/blob/master/annotations/annotations.zip?raw=true
wget -O annotations_attributes.zip https://github.com/aras62/PIE/blob/master/annotations/annotations_attributes.zip?raw=true
wget -O annotations_vehicle.zip https://github.com/aras62/PIE/blob/master/annotations/annotations_vehicle.zip?raw=true
unzip 'annotations*.zip' && rm annotations*.zip
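A quick check after unzipping confirms each annotation type unpacked; the folder names below are an assumption based on the zip file names:

```shell
# Illustrative check, run inside PIE_dataset: each annotation zip should have
# unpacked into a folder (folder names assumed from the zip names)
for d in annotations annotations_attributes annotations_vehicle; do
    [ -d "$d" ] && echo "found: $d" || echo "missing: $d"
done
```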

The folder structure should look like this:

PIE_dataset
    annotations
        set01
        set02
        ...
    PIE_clips
        set01
        set02
        ...
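Before moving on, the layout above can be verified with a small script; a sketch assuming the structure shown:

```shell
# Illustrative: verify the expected PIE_dataset layout before training
missing=0
for d in PIE_dataset/annotations PIE_dataset/PIE_clips; do
    [ -d "$d" ] || { echo "missing: $d"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "PIE_dataset layout looks complete" \
                     || echo "fix the layout before continuing"
```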

Setup

Conda environment

Miniconda Linux installation:

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh

Check $PATH:

echo $PATH

If the Miniconda directory is not already on your PATH, add export PATH="/home/username/miniconda3/bin:$PATH" at the end of the ~/.profile file (substituting your username):

echo "export PATH=\"/home/username/miniconda3/bin:\$PATH\"" >> ~/.profile
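Once the shell has re-read ~/.profile (for example with `. ~/.profile`, or after the reboot below), you can confirm the conda binary is reachable:

```shell
# Illustrative: check whether conda is now on PATH
conda_path=$(command -v conda || echo "not found")
echo "conda: $conda_path"
```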

Reboot the system (or log out and back in) so the updated PATH takes effect. Create the environment:

conda env create -f environment_tf2.yaml

Init conda:

conda init

Activate environment:

conda activate unpie-tf2-env
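To confirm activation succeeded, you can inspect the environment name that `conda activate` exports:

```shell
# Illustrative: CONDA_DEFAULT_ENV is set by conda activate
active_env=${CONDA_DEFAULT_ENV:-none}
echo "active env: $active_env (expected: unpie-tf2-env)"
```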

Docker environment

Build Dockerfile

sudo docker build -t unpie:v1 .

Run docker container

sudo docker run --rm -it -v "$(pwd)/../PIE_dataset:/PIE_dataset" --name unpie-v1-c1 --gpus all unpie:v1

Note: docker -v requires an absolute host path, so the relative path is resolved with $(pwd).

UnPIE setup

Create config.yml in settings folder:

PIE_PATH: 'path/to/PIE_dataset'
PRETRAINED_MODEL_PATH: 'path/to/pretrained/model'
IS_GPU: True  # or False
PIE_SPLITS_TO_EXTRACT: 'all'  # train, val, test, or all
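The file can also be written in one step; a sketch with placeholder values (substitute your own paths):

```shell
# Illustrative: create settings/config.yml with placeholder values
mkdir -p settings
cat > settings/config.yml <<'EOF'
PIE_PATH: 'path/to/PIE_dataset'
PRETRAINED_MODEL_PATH: 'path/to/pretrained/model'
IS_GPU: True
PIE_SPLITS_TO_EXTRACT: 'all'  # train, val, test, or all
EOF
echo "wrote settings/config.yml"
```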

Preprocessing

Run the following command to extract and save all the image features needed by the GNN without saving each frame:

python extract_features.py pie

[Figure: feature extraction]

Training and testing

Training:

sh run_training_x.sh

Testing:

sh run_testing_x.sh

where x can be SUP for supervised learning, or IR or IR_LA for unsupervised learning (Instance Recognition, or Instance Recognition followed by Local Aggregation).
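To run every training variant in sequence, the scripts can be looped over; a sketch assuming the three script names above:

```shell
# Illustrative: run each training variant in turn, skipping missing scripts
for x in SUP IR IR_LA; do
    script="run_training_${x}.sh"
    if [ -f "$script" ]; then
        sh "$script"
    else
        echo "skipping $script (not found)"
    fi
done
```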

Citation

If you find our work useful in your research, please consider citing our publications:

@Article{Scaccia2025,
    author  = {Scaccia, Simone and Pro, Francesco and Amerini, Irene},
    title   = {Unsupervised pedestrian intention estimation through deep neural embeddings and spatio-temporal graph convolutional networks},
    journal = {Pattern Analysis and Applications},
    year    = {2025},
    month   = {May},
    issn    = {1433-755X},
    doi     = {10.1007/s10044-025-01483-0},
    url     = {https://doi.org/10.1007/s10044-025-01483-0}
}

Credits

Some modules are taken and modified from other open-source repositories.
