Author Topic: Let's talk about transforms (problems translating / rotating / scaling)  (Read 2336 times)

julyfortoday

TL;DR
Metashape is great, but is a bit lacking when it comes to easily transforming (translate/rotate/scale) a reconstruction prior to export to another application.

Hello!

I just started using Metashape recently. I've been using the spherical camera calibration feature to reconstruct scenes from panoramas. So far Metashape has performed very admirably for what I've been trying to do, at least in terms of figuring out the relative camera positions and creating reconstructions of the various scenes I'm working with. Nothing else I've tried has been able to even handle panoramas properly, so this has been incredibly useful for this particular project. So kudos on creating such a useful tool.  8)

Unfortunately I'm hitting a rather annoying wall at the moment. I'm having substantial problems manipulating and managing the transformation of the reconstruction. Metashape does a great job of finding the camera positions and points relative to each other, but relative to worldspace the result is just random and willy-nilly. That's not unexpected. What I did find unexpected was how much difficulty I would encounter trying to adjust or correct it in Metashape.

I am aware I could easily make the necessary adjustments elsewhere (Blender in this case). The problem is I need both the dense point cloud and camera positions, and I have many chunks. If they do not have the proper translation/rotation/scale prior to export from Metashape, then that work will all need to be done manually at the other end (in Blender). Then if something needed to be adjusted in Metashape, those adjustments in Blender would possibly need to be redone. It would be greatly preferable to just have everything properly adjusted and in the correct position in Metashape prior to export.

But I'm finding it nearly impossible to make the adjustments I need. At this point I'm just horribly confused and frustrated with Metashape's way of working with coordinate systems. I've looked at other forum posts, blog posts, documentation. I'm more confused than ever.

Worth mentioning that the dataset is old: I can't just introduce a scale bar and take a few new photos of the scene. I simply need to make do with what I've got.

Here is what I'm trying to do:

TRANSLATION
I want a specific camera to be equivalent to the origin (0,0,0) of the scene/worldspace, everything relative to that point.

ROTATION
Since this camera is spherical (and has a definite vertical axis) I would like this axis to be properly oriented vertically in the scene/worldspace (Z axis).

SCALE
Metashape generates a point cloud that clearly contains a well-defined ground plane from the panoramas. I want to place a marker that sits directly below the main camera (same X & Y) and lies in this flat group of points. The distance between this marker and the camera is the distance between the ground and eye level (approx 1.5 meters).
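For what it's worth, all three goals together amount to a single similarity transform. Here's a rough numpy sketch of just the math (my own illustrative code, not Metashape API — `look_rotation` and `similarity` are made-up names, and the inputs are assumed to be in whatever coordinate system the matrix will eventually be applied in):

```python
import numpy as np

def look_rotation(up):
    """Rotation matrix sending unit vector `up` to +Z (Rodrigues formula)."""
    up = up / np.linalg.norm(up)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(up, z)
    c = float(np.dot(up, z))
    if np.linalg.norm(v) < 1e-9:               # already parallel or antiparallel
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def similarity(cam_center, cam_up, ground_point, eye_height=1.5):
    """4x4 matrix that sends cam_center to the origin, cam_up to +Z,
    and scales so |cam_center - ground_point| becomes eye_height."""
    s = eye_height / np.linalg.norm(cam_center - ground_point)
    R = look_rotation(cam_up)
    M = np.eye(4)
    M[:3, :3] = s * R
    M[:3, 3] = -s * R @ cam_center             # translate camera to origin
    return M
```

In principle you could left-multiply chunk.transform.matrix by something like this, but I haven't verified Metashape's exact convention, so treat it as the math only.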

I can very easily do all three in Blender. But as I've said, it's desirable to have this fixed in Metashape BEFORE export.

IN BLENDER
(after importing the .ply and .fbx generated by Metashape for the point cloud and cameras)
  • make the point cloud and cameras children of the camera I wished to designate as the center
  • move that parent camera to the worldspace/scene origin (everything else moves in a relative fashion with it)
  • set the point cloud's object origin to the worldspace origin
  • use orthographic views to rotate everything (relative to the main camera) until everything seems flat (precision rotation snap rather useful here)
  • to scale, I would create a box of the right size (in this case 1.5 meters) and place the top edge exactly at the origin, so the box extends -1.5 meters along the Z axis. Then scale everything (again relative to that main camera) so that the group of points that constitutes the ground plane is level with the bottom of the box. (I'm eyeballing this; the more precise way would be to take an average of those points and use a scale factor that places that average exactly at the desired distance of 1.5.)
  • Optionally move everything again so the ground plane is now at the origin, and the main camera sits 1.5 meters ABOVE the origin.

Then the model/reconstruction is properly transformed. Phew! Now do this with 10-15 chunks, each with wildly variable rotations.  ;)
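The precise version of that scale step (averaging the ground points instead of eyeballing) is basically a one-liner. A sketch with made-up point values, assuming the parent camera has already been moved to the origin:

```python
import numpy as np

# Ground-plane points after the parent camera is at the origin
# (illustrative values; in practice these come from the imported .ply)
ground_pts = np.array([[0.2, -0.1, -2.9],
                       [-0.4, 0.3, -3.1],
                       [0.1, 0.5, -3.0]])

# Average Z of the ground cluster; the camera sits at z = 0
mean_ground_z = ground_pts[:, 2].mean()

# Scale factor that puts the ground exactly 1.5 m below the camera
scale = -1.5 / mean_ground_z
print(scale)  # 0.5
```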

IN METASHAPE
Method A) create a scalebar from elements in the picture...
Oops. Fixed data set that has no measurements or proper size reference in images. Cannot do this.

Method B) Use Move/Rotate/Scale tool from Metashape interface.
Oops. Metashape doesn't display any indication of where a chunk is relative to worldspace. It doesn't even show where reconstruction elements (cameras, points) are relative to the chunk's own coordinate system. It only shows the Region/BoundingBox (the purpose of which still eludes me, but it's entirely separate from the chunk and its coordinate system).

Also, Metashape provides no manual input for these tools (need to exactly double the size? Good luck eyeballing that with the scale tool). And there is no snapping, so if the rotation is off by exactly 90 degrees, good luck fixing that by eyeballing it with the rotation tool.

Method C) create marker points in several images and specify their absolute locations in the Reference pane (using Source). Use enough of these in enough images/views/frames, hit Update, and Metashape will automatically figure everything out.

Oops. Fixed data set that has no measurements or proper size reference. Also cannot do this.
The only possibility here is using the only known measurement for this data set: the panoramas are at eye level. So if you had a marker at the camera's position, and placed one precisely on the ground, you know that distance is about 1.5 meters. So if you work backward and trick Metashape by adding those markers, and then confirm their estimated positions in nearby images/cameras (that are already aligned), you could use the update button to get everything transformed correctly.

Oh wait. You need more than just those 2 markers.

Oh, so I could just add a marker a meter in each direction X&Y on the ground from the first ground marker, and trick Metashape by also confirming the estimated positions of those new markers in nearby aligned cameras. Right?

Oh wait. Metashape doesn't allow me to move markers easily in the 3d scene. It also has no manual input to place anything precisely. Foiled again.

Method D) Use Python scripts to work around Metashape's input limitations, and implement Method C in an incredibly cumbersome manner.

Place marker precisely on desired primary camera:  Check
Place marker directly below camera at desired offset: NOT CHECK -- WHY IS IT ALSO OFFSET IN THE OTHER 2 AXES!? IS THE CHUNK ROTATED TOO?! BUT IT'S PERFECTLY LEVEL IN THE ORTHOVIEWS!? WHAT IS HAPPENING!?... ARGHHG!HH!H

Both applications have essentially the same data. In one I can do exactly what I want (even if it's a little involved). But in Metashape I just cannot get where I want to go.

In my opinion Metashape is definitely lacking in some Quality of Life tools regarding model/reconstruction transformation.

I realize that maybe I'm coming at this from a different perspective than other users of Metashape (aerial/GIS types). But I think these kinds of manipulations would be useful in just about any Metashape project, certainly a much wider range of applications/projects than just my particular use case.

The number of other threads and posts about transforming reconstructions seems to imply this process is unintuitive and a weak point in Metashape's workflow.


SUGGESTIONS
  • Add visual indicators of both the worldspace origin and the chunk origin and orientation (y'know, grids). If there are other coordinate systems at play, make that more obvious, because I'm really not sure whether there are at this point. I'm also not 100% sure whether Metashape is simply Z-up, or whether it can handle both Y-up and Z-up, so an unambiguous visual representation would help.
  • Allow manipulation of elements in the 3d viewport. I want to be able to move markers (and even cameras) easily and directly around the 3d scene (locked by default, but unlockable and manipulable).
  • Have manual inputs for these manipulations for precise control (also have snaps)
  • Fix the drawing tools. If I want to draw a point, or a line made of points, it stands to reason I may want to manipulate where they are in the 3d scene. Right now if I plop down a point, I can't figure out how to move it. More lack of manipulation.
  • Why does Undo seem utterly useless in Metashape? I remove an image from the chunk, go to use undo. Grayed out. Many other actions, same problem. Unless this is some kind of restriction of the trial, undo is practically useless.
  • Maybe dock the Reference pane on the right side of the interface by default, instead of hiding it behind the Workspace pane. Or at the very least put the tab on the TOP instead of the bottom, where it easily goes unnoticed. It took detective work to figure out where the fabled Reference pane was hiding.
I apologize for whatever I may have missed. Maybe Metashape already has several of these features, or I'm just a dumb blockhead who missed something obvious. But it hasn't seemed particularly obvious to me  :-\

***

P.S.
This is the script I was using to add a marker at a selected camera's location.
Code:
import Metashape

def get_selected_cams(cameras):
    # Keep only the cameras currently selected in the workspace
    return [c for c in cameras if c.selected]

def marker_to_cam():
    doc = Metashape.app.document
    chunk = doc.chunk
    selected_cams = get_selected_cams(chunk.cameras)

    for c in selected_cams:
        # c.center is the camera position in the chunk's coordinate system
        m = chunk.addMarker(c.center, True)
        m.label = c.label + "_CENTER"

label = "SCRIPTS/Add Marker At Camera"
Metashape.app.addMenuItem(label, marker_to_cam)

I tried making another script where I took the camera.center vector, added an offset to the Y component, and used the resultant vector as the position for a new marker. Here is the relevant line / change:
Code:
chunk.addMarker([c.center[0], c.center[1] - offset, c.center[2]], True)
That marker should be directly under the camera (or off to the side, if Z is up). The point is, the result was offset in the direction I expected, but it also had some additional offset along the X and Z axes. That was unexpected, and it ruins the point of what I was trying to do.

I'm not sure whether camera.center and the marker positions refer to different coordinate systems. If the camera is in worldspace coordinates, the marker position is in the chunk's coordinate system, and the chunk is slightly rotated, then you could end up with offsets in all three directions in worldspace. But then the marker should still be fine relative to the reconstruction, which is also in the chunk's coordinate system, so it would actually get me the result I want. And in that case I would expect the chunk NOT to be nicely aligned in the orthographic top/left/front views. But the chunk IS aligned with the orthoviews, and when I look at the marker it's clearly in the wrong position, undesirably off to the side.
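To illustrate my suspicion: if camera.center and the marker live in a rotated internal coordinate system, a pure Y offset there leaks into all three world axes. A quick numpy sketch (the rotation and scale values here are made up; in Metashape the real mapping would come from chunk.transform.matrix):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical internal -> world mapping: scale 0.8, tilted about X and Z
T = 0.8 * rot_z(0.3) @ rot_x(0.2)

cam_internal = np.array([1.0, 2.0, 3.0])
marker_internal = cam_internal + np.array([0.0, -0.5, 0.0])  # pure -Y offset

# The same offset seen in world coordinates
delta_world = T @ marker_internal - T @ cam_internal
print(delta_world)  # nonzero in X, Y and Z
```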

Blah blah blah. I'm just not sure why it is happening.
« Last Edit: August 16, 2019, 09:57:36 PM by julyfortoday »