I am currently working on a computer vision problem that involves 8 convergent cameras. I am trying to recreate a 2D image (I don't actually care about the shape of the image) that represents my scene as seen from the 8 convergent cameras (something like a top-down view?).
I am currently working from Real-time Stereo-Image Stitching using GPU-based Belief Propagation by Adam, Jung, Roth and Brunnett, which was designed for divergent, aligned cameras. I am in the middle of adapting its formulas to non-aligned cameras, but I have no idea how this will carry over to convergent cameras.
I did some research but didn't find anything on how to do exactly what I want, even though it seems possible.
If you have any paths I could explore or any clues, I would love to hear them!
I found Computer Vision: Algorithms and Applications by Richard Szeliski, and it actually gives some clues.
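One direction the book suggests (its chapters on planar mappings and image stitching): if each camera is calibrated and the scene is roughly planar, every view can be warped into a shared top-down frame with a ground-plane homography, and the warped views then blended into one mosaic. Here is a minimal numpy sketch of just that mapping, with made-up intrinsics and pose (all the numbers below are hypothetical, not from my rig):

```python
import numpy as np

# For a calibrated camera with intrinsics K and pose (R, t), points on the
# world ground plane Z = 0 project to image pixels via x ~ K [r1 r2 t] [X Y 1]^T,
# which is a 3x3 homography between ground coordinates and pixel coordinates.

def ground_plane_homography(K, R, t):
    """Homography mapping ground-plane coords (X, Y) to image pixels."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def ground_to_pixel(H, X, Y):
    """Apply the homography and dehomogenize."""
    p = H @ np.array([X, Y, 1.0])
    return p[:2] / p[2]

# Hypothetical example: a camera 10 m above the world origin, looking straight down.
f, cx, cy, h = 100.0, 200.0, 200.0, 10.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])   # camera z-axis points at the ground
t = np.array([0.0, 0.0, h])      # t = -R @ C for camera center C = (0, 0, h)

H = ground_plane_homography(K, R, t)
print(ground_to_pixel(H, 1.0, 0.0))  # ground point 1 m along X -> [210. 200.]
```

Inverting H per camera gives the pixel-to-ground mapping needed to resample each of the 8 views into a common top-down image; the overlapping regions would still need the stitching/blending step (which is presumably where the belief-propagation part of the paper comes back in).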