I’m working with a FLIR A400 thermal camera as part of a sensor-fusion pipeline
(thermal + radar / LiDAR).
I've just found that, unlike RGB cameras, FLIR cameras do not expose factory intrinsics, and traditional OpenCV checkerboard calibration has proven unreliable due to limited thermal contrast.
I wanted to start a discussion on what practitioners typically do in this case:
- Using FOV-derived pinhole intrinsics (fx, fy from the datasheet FOV)
- Optimizing intrinsics during downstream tasks (SLAM / NeRF / reconstruction)
- Avoiding explicit intrinsics and relying on extrinsics only
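For the first option, here is a minimal sketch of how FOV-derived intrinsics are usually computed, assuming a plain pinhole model with no distortion and the principal point at the image center (the example FOV and resolution values are illustrative, not taken from any particular datasheet):

```python
import math

def intrinsics_from_fov(width_px, height_px, hfov_deg, vfov_deg):
    """Approximate pinhole intrinsics from a datasheet field-of-view.

    Assumes an undistorted pinhole model with the principal point at
    the image center -- a rough starting guess, not a calibration.
    """
    fx = (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    fy = (height_px / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    cx, cy = width_px / 2.0, height_px / 2.0
    return fx, fy, cx, cy

# Illustrative example: a 320x240 thermal stream with a 24 x 18 degree lens.
fx, fy, cx, cy = intrinsics_from_fov(320, 240, 24.0, 18.0)
```

In my experience this gets you close enough to bootstrap, but it ignores distortion entirely, so it is best treated as an initialization to be refined downstream rather than a final answer.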
I’m especially interested in what has worked in real robotic systems rather than
textbook calibration.
Looking forward to hearing how others approach this.
I had to deal with a similar cross-calibration problem with thermal cameras about a decade ago on a project I was working on. We were using ROS to build a mapping vehicle for city-scale energy audits of northern cities (think Boston or Minneapolis in the dead of winter).
I seem to recall that we had to fabricate our own custom calibration targets to achieve sufficient contrast. Once we had a solution, we proceeded as if the FLIR camera were a conventional RGB camera, and everything worked reasonably well.

I don't recall exactly what we used for the final solution, but it involved two materials with different thermal properties to achieve reasonable contrast. We would heat the target with a hair dryer, and one of the materials would cool much faster than the other. Once that happened, we had a minute or two to frantically collect our cross-calibration data. I think we ended up using a laser cutter to cut a grid out of two different materials and then assembling a final grid (perhaps aluminum or steel and ABS / acrylic). I believe we also experimented with spraying a solvent like isopropanol on the target to enhance contrast.
I’ll talk to my old colleagues to see if they remember what materials we used for the final calibration targets.
The poor man's solution that worked well for me was waiting for a sunny day, letting a standard paper checkerboard heat up in the sun, and capturing the calibration bag. The contrast was too poor for OpenCV, so I manually selected a few images from the bag, loaded them into CVAT, and marked the corners by hand (which was quite an okay task for human eyes). I then used a slightly modified Kalibr that could load the CVAT annotations as input.
The laser-cut calibration board is shown in Figure 3. We used a hair dryer to generate a temperature difference between the board and the cut holes. Calibration was performed using the camera calibration package found at https://github.com/ros-perception/image_pipeline. The calibrated YAML file for the Boson 320 camera is listed in Appendix D.