Title (Master's thesis)
Alejandro Cortijo, ....
In recent years, point cloud perception tasks have gained increasing attention due to their relevance in several computer vision applications, such as 3D reconstruction, autonomous navigation, and human-machine interaction. This master's thesis aims to push the state of the art (SOTA) in estimating 3D human body meshes from sparse LiDAR point clouds by proposing new algorithmic approaches that seek to improve model accuracy and robustness.
In addition, it presents a critical review of the current state of the technology, examining its limitations and comparing it with alternative strategies, including the use of other sensing modalities and sensor fusion. The goal is to contribute to a deeper understanding of the problem space and to offer insights that may guide future developments in 3D human body reconstruction.
.......
To build and run the Docker container, follow these steps:
- Build the image and start the container:
docker compose up
Note: After running docker compose up, the container's environment is set up. Since the Pointops library requires CUDA for compilation, this step cannot be performed earlier, so you may see build logs for about 3-4 minutes. Thank you for your patience!
- Enjoy:
docker exec -it <CONTAINER_ID> /bin/bash
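Tip: you can find the <CONTAINER_ID> of the running container with docker ps.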
To be continued....
TODO: Improve it (add links)
Download the SMPL-X model weights from this website into the 'smplx_models' folder.
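The weights can then be loaded with the smplx Python package (pip install smplx). Below is a minimal sketch, assuming the downloaded files end up under smplx_models/smplx/ (e.g. SMPLX_NEUTRAL.npz); the folder layout and the neutral-gender choice are assumptions, not something this repo prescribes:

```python
# Minimal sketch: load SMPL-X weights with the 'smplx' package.
# Assumption: weights live at smplx_models/smplx/SMPLX_NEUTRAL.npz.
import torch
import smplx

model = smplx.create(
    model_path="smplx_models",  # root folder containing the 'smplx' subfolder
    model_type="smplx",
    gender="neutral",
    num_betas=10,
)

# Forward pass with zero shape parameters (and default zero pose) -> T-pose mesh.
output = model(betas=torch.zeros(1, 10), return_verts=True)
print(output.vertices.shape)  # expected: torch.Size([1, 10475, 3])
```

Note that smplx.create resolves the model file from model_path/model_type, which is why the root folder (not the file itself) is passed.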
Several 3D HPE (3D human pose estimation) methods:
To be continued ......
The corresponding training and testing code is in the 'scripts' folder.
Training: edit the corresponding paths and variables in the training files.
PRN training:
python scripts/pct/train_pct.py --dataset sloper4d --cfg configs/pose/pose_15.yaml
LiDAR_HMR training:
python scripts/lidar_hmr/train_lidarhmr.py --dataset sloper4d --cfg configs/mesh/sloper4d.yaml --prn_state_dict /path/to/your/file
LiDAR_HMR testing:
python scripts/lidar_hmr/test_lidarhmr.py --dataset sloper4d --cfg configs/mesh/sloper4d.yaml --state_dict weights/sloper4d/lidar_hmr_mesh.pth
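To inspect a trained checkpoint outside of the test script, a plain PyTorch load works. The 'state_dict' wrapper key in the sketch below is only a common convention, not something confirmed for this repo:

```python
# Inspection sketch: peek inside a trained checkpoint.
import torch

ckpt = torch.load("weights/sloper4d/lidar_hmr_mesh.pth", map_location="cpu")

# A .pth file is usually either a raw state_dict or a dict wrapping one
# under a key such as 'state_dict' (assumption, not confirmed for this repo).
state_dict = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt
print(f"{len(state_dict)} entries, e.g. {next(iter(state_dict))}")
```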
The mesh ground truths of the Waymo-v2 dataset are generated using human pose annotations and point clouds. For training and testing on Waymo-v2, download the saved pkl files and move them into the ./save_data folder (create it if it does not exist). Download link
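To sanity-check the downloaded files, a plain pickle load is enough. This is only an inspection sketch; the file names and internal schema are not documented here, so it just prints whatever top-level structure each file contains:

```python
# Inspection sketch: print the top-level structure of each saved .pkl file.
# Assumption: the downloaded files sit in ./save_data with a .pkl extension.
import pickle
from pathlib import Path

for pkl_path in sorted(Path("save_data").glob("*.pkl")):
    with open(pkl_path, "rb") as f:
        data = pickle.load(f)
    summary = list(data.keys()) if isinstance(data, dict) else type(data).__name__
    print(pkl_path.name, summary)
```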
Our code is based on Mesh Graphormer, Point Transformer-V2, and HybrIK.
If you find this project helpful, please consider citing the following paper:
@article{xxxxx,
title={TBD},
author={xxxxx},
journal={xxxxx},
year={TBD}
}