I was also trying to do it with LTX 2.3; initially, I created a base video of a character rotating 360 degrees that I would later use with ControlNet. But I still need to better understand this part of RealityScan.
Also, some 360 video generations are better than others. My newer process is using LTX 2.3 to generate the 360 video. I must have used all my Grok free credits.
I use RealityScan and export the COLMAP data. RealityScan does its best to get the camera locations. It does not really look good until Brush (the trainer on GitHub) does the training.
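Before handing the export to the trainer, it can help to sanity-check how many frames actually got registered; a low count usually means the alignment failed. This is just a small sketch based on COLMAP's documented text format for images.txt (two lines per image: a pose line, then a 2D-points line); the file path and sample data are made up for illustration.

```python
from pathlib import Path

def count_registered_images(images_txt: Path) -> int:
    """Count registered images in a COLMAP text-format images.txt.

    Format: '#' comment lines at the top, then two lines per image:
      IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
      X Y POINT3D_ID  (repeated triples for the 2D observations)
    """
    data_lines = [
        line for line in images_txt.read_text().splitlines()
        if line.strip() and not line.startswith("#")
    ]
    return len(data_lines) // 2

# Hypothetical two-frame export, just to show the expected shape.
sample = """# Image list with two lines of data per image:
1 0.707 0.0 0.707 0.0 0.1 0.0 2.0 1 frame_0001.png
100.0 200.0 -1
2 0.6 0.0 0.8 0.0 0.2 0.0 2.1 1 frame_0002.png
110.0 210.0 -1
"""
path = Path("images.txt")
path.write_text(sample)
print(count_registered_images(path))  # → 2
```

If the count is far below the number of frames you extracted from the 360 video, the problem is in RealityScan's alignment step, not in the training.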
The result turned out really well, considering it was generated synthetically. So with a 360 video, do you need to configure anything specific in RealityScan? I tried to do something similar, but it couldn't align the images.