Node/nodelet to merge depth images from two cameras into one

Hi everyone,

During my work I've developed a node/nodelet that merges the depth images of two depth cameras (RGBD, ToF, or anything else that produces depth images) into a single depth image as it would be seen from an (almost) arbitrary perspective. It transforms and combines the point clouds from two depth cameras with a known transformation between them into one cloud, and then uses a pinhole camera model to render the depth image from a third, freely chosen perspective.
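For illustration, here is a minimal NumPy sketch of the general idea, not the actual code from the repository: both clouds are transformed into the virtual camera's frame and re-projected through a pinhole model, keeping the nearest point per pixel. All function names, frame conventions, and parameters below are made up for the example.

```python
import numpy as np

def project_to_depth_image(points, K, width, height):
    """Project 3D points (N x 3, already in the virtual camera frame)
    into a depth image using pinhole intrinsics K (illustrative only)."""
    depth = np.full((height, width), np.inf, dtype=np.float32)
    X, Y, Z = points.T
    valid = Z > 0                                   # keep only points in front of the camera
    u = (K[0, 0] * X[valid] / Z[valid] + K[0, 2]).astype(int)
    v = (K[1, 1] * Y[valid] / Z[valid] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], Z[valid][inside]
    # Z-buffer: where several points land on the same pixel, keep the nearest one.
    np.minimum.at(depth, (v, u), z)
    depth[np.isinf(depth)] = 0.0                    # no measurement -> 0, as in typical depth images
    return depth

def merge_clouds(cloud_a, T_virtual_from_a, cloud_b, T_virtual_from_b, K, width, height):
    """Transform both clouds (N x 3) into the virtual camera frame and project them."""
    def transform(cloud, T):
        homogeneous = np.hstack([cloud, np.ones((cloud.shape[0], 1))])
        return (T @ homogeneous.T).T[:, :3]
    merged = np.vstack([transform(cloud_a, T_virtual_from_a),
                        transform(cloud_b, T_virtual_from_b)])
    return project_to_depth_image(merged, K, width, height)
```

In the actual node the transforms would come from tf and the intrinsics from the virtual camera's camera_info; the sketch only shows the geometric core.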

Here's the repository on GitHub: https://github.com/HWiese1980/merge_depth_cam (merges the depth images from two depth cameras, e.g. RGBD, ToF, structured light, into one depth image as it would be seen from an "arbitrarily" positioned virtual depth camera).

Feel free to comment, criticize, and contribute! But be aware that this grew out of the simple need to enlarge a depth camera's field of view by adding a second one; it is not optimized in any way. I'd therefore be especially grateful for contributions regarding performance.

Cheers!

Hendrik
