I'm trying to extract models from Insta360 One X images. Specifically, I'm using frames from videos, but the principle should apply equally to photographs.
The One X has two fisheye lenses in a tight back-to-back configuration. The lenses have > 180 degree FoV, so they overlap enough for a fully panoramic picture. Insta360 provides software to stitch the images together, but the raw files are two synchronized H264 files (the file extension is insv, but it's just a normal MP4 container), and these raw files are what I am using.
My current workflow is as follows:
1. Record a video in the room I would like to scan, making sure not to move the camera too fast.
2. Take 1-second samples from the video (see attachment fisheye.jpeg). I actually use the keyframes because they come roughly every second and should have the fewest compression artefacts (a rough sketch of this sampling step follows the list).
3. Convert each fisheye frame to a flat view using an OpenCV Python script (see attachment undistorted.jpeg) based on this tutorial: https://medium.com/@kennethjiang/calibrate-fisheye-lens-using-opencv-333b05afa0b0
4. Load the undistorted images into Metashape and process as normal, assuming a flat camera.
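For reference, the sampling in step 2 is roughly equivalent to the sketch below. It simply grabs one frame per second with OpenCV rather than pulling true keyframes, and the file name is a placeholder:

import cv2

# Open one of the two raw streams; the .insv file is an ordinary MP4
# container, so OpenCV's FFmpeg backend should read it (rename to .mp4
# if your build refuses the extension).
cap = cv2.VideoCapture("VID_00_001.insv")  # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back if metadata is missing

frame_idx = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Keep roughly one frame per second of video.
    if frame_idx % int(round(fps)) == 0:
        cv2.imwrite("frame_%04d.png" % saved, frame)
        saved += 1
    frame_idx += 1
cap.release()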
The workflow gives surprisingly good results, and Metashape certainly does better than the other software I tried. However, I wanted to see whether I could import the distorted keyframes directly and skip my Python script. At first I thought I could just use my calibrated OpenCV parameters, which look like this:
fx = 857, fy = 857
cx = 1440, cy = 1440
k1 = 0.042449
k2 = 0.004571
k3 = -0.008562
k4 = 0.000604
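For completeness, my conversion script applies these numbers roughly as sketched below (simplified; file names are placeholders):

import cv2
import numpy as np

# Intrinsics and distortion from cv2.fisheye.calibrate (the values above).
K = np.array([[857.0,   0.0, 1440.0],
              [  0.0, 857.0, 1440.0],
              [  0.0,   0.0,    1.0]])
D = np.array([[0.042449], [0.004571], [-0.008562], [0.000604]])

img = cv2.imread("frame_0000.png")  # placeholder file name
h, w = img.shape[:2]

# Build the rectification maps once, then remap every keyframe with them.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2,
                        interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("undistorted_0000.png", undistorted)

Building the maps once and reusing them for every keyframe keeps the batch conversion quick.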
I plugged these into the "Fisheye" camera calibration model, but it failed to align any photos, presumably because the transform messed up all the key points.
The second thing I tried was running your lens calibration system. I fixed everything except the parameters listed above and ran the calibration. It came up with very similar values except for cx and cy, which were 5.99 and -1.72. Given that the units are pixels, this seemed like an error; I would have expected cx and cy to be roughly in the middle of the lens. I couldn't really understand the Distortion Plot dialog either: it only has an entry in the Residuals section, and that didn't make much sense to me. Ideally I would like to see the undistorted image and confirm it is flat, but maybe I'm missing some subtlety of the process.
Do you think it is going to be possible to use the fisheye images directly like this, or should I stick to my Python script? I've uploaded my calibration set in case it's any use:
http://avn.pub/q0nU5W (43 PNG images, 529MB archive file). I can provide more data if required.