Messages - bhogan

General / Re: Changing normal map coordinate system (export depth/normal)
« on: January 09, 2021, 01:19:18 AM »
Hiya,

Since I got no responses here, I asked support and got this useful reply. Pasting it here for anyone else's reference:

Thank you for contacting us.
You can use the console to change the position of the model and cameras relative to the chunk. Sample script:
ck = Metashape.app.document.chunk
m_main = Metashape.Matrix([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
m_cam = ck.cameras[0].transform
m_t = m_cam.inv() * m_main
ck.cameras[0].transform = m_t * m_cam
ck.model.transform(m_t)
# export depth map here

# Then return the model and the camera to their original places
ck.model.transform(m_t.inv())
ck.cameras[0].transform = m_cam
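
For anyone wanting to run this for every camera, the same snippet can be wrapped in a loop. This is only a rough sketch along the lines of the support script above: the export step is left as a placeholder (use whatever export call or dialog you normally do), and the skip for unaligned cameras is my own assumption.

import Metashape

chunk = Metashape.app.document.chunk

# Same matrix as m_main in the support script: it negates the Y and Z axes
m_main = Metashape.Matrix([[1, 0, 0, 0],
                           [0, -1, 0, 0],
                           [0, 0, -1, 0],
                           [0, 0, 0, 1]])

for camera in chunk.cameras:
    if camera.transform is None:  # skip cameras that were not aligned
        continue
    m_cam = camera.transform
    m_t = m_cam.inv() * m_main

    # Re-pose this camera and the model, as in the support script above
    camera.transform = m_t * m_cam
    chunk.model.transform(m_t)

    # ... export the depth/normal map for this camera here ...

    # Put the model and the camera back where they were
    chunk.model.transform(m_t.inv())
    camera.transform = m_cam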

General / Changing normal map coordinate system (export depth/normal)
« on: December 01, 2020, 11:48:11 PM »
Hi,

Sometime user, first-time poster!

I am looking for some help with normal map generation, by which I mean the normal maps that you can export for each camera (through the depth map generation interface, or with the API).
What I would like is for the normal map coordinates to be relative to the camera, so that regardless of camera position each channel is relative to the original image (the X channel running left to right relative to the camera/image, the Y channel top to bottom relative to the camera/image; Z doesn't really make sense in this context). The idea is that if I overlay the generated normal map over the original image, I can tell at what angle the surfaces of the object sit relative to the camera.

As things stand, I have a scaled model of a small object, but any normal maps I generate are relative to the model/chunk. Hoping that some transformation of the chunk coordinates would help (if the normal maps were based on world coordinates), I tried a few things: moving the region/model to the origin, moving the camera to the origin, and placing reference markers at the camera positions with scale bars to try to make the camera the origin (and then doing this programmatically for each camera in turn). Nothing I've done affects the outcome; the normal map seems to be relative to the chunk coordinates regardless, so that approach is no good.

I imagine that one approach would be to render the normal maps as per usual, and then subtract the transform of the camera from the relevant channels – or some other fancy maths that I can’t get my head around. Any help appreciated!
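
To make that idea concrete, here is the sort of post-processing I have in mind, written as a rough numpy sketch (the helper name is just for illustration, not part of any API). It assumes, and these are my assumptions rather than anything confirmed about the export format, that the exported normal map stores chunk-space XYZ in the RGB channels mapped from [-1, 1] to [0, 255], and that the rotation part of camera.transform maps camera coordinates to chunk coordinates, so its transpose takes chunk-space normals back into camera space (a rotation rather than a subtraction):

import numpy as np

# Illustrative helper, not part of the Metashape API
def normals_to_camera_space(normal_map_rgb, camera_to_chunk):
    """Rotate a chunk-space normal map into a camera's local frame.

    normal_map_rgb: H x W x 3 uint8 array, assumed to encode XYZ in [-1, 1] as 0..255.
    camera_to_chunk: 4x4 camera-to-chunk matrix as a nested list or array
                     (e.g. copied out of camera.transform).
    """
    # Decode 0..255 -> [-1, 1]
    n = normal_map_rgb.astype(np.float32) / 255.0 * 2.0 - 1.0

    # Rotation block of the camera-to-chunk transform; its transpose (the inverse,
    # for a pure rotation) maps chunk-space vectors into camera space.
    R = np.asarray(camera_to_chunk, dtype=np.float32)[:3, :3]
    n_cam = n @ R  # (H, W, 3) @ (3, 3) applies R^T to every pixel's normal

    # Re-encode to 0..255 so the result can be overlaid on the original image
    return np.clip((n_cam + 1.0) * 0.5 * 255.0, 0, 255).astype(np.uint8)

If the camera transform included any scaling rather than a pure rotation, the transpose trick would stop being exact and a proper inverse would be needed, but as far as I can tell that shouldn't be the case here.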

A couple of previous questions about normal map coordinates that I found in a forum search:
https://www.agisoft.com/forum/index.php?topic=10710.0
https://www.agisoft.com/forum/index.php?topic=6816.msg32908#msg32908

Thanks!

