Agisoft Metashape
Agisoft Metashape => Python and Java API => Topic started by: Safa.Naraghi on November 01, 2016, 09:25:02 AM
-
Hi there, I have been using Agisoft PhotoScan Pro for the past few months to reconstruct a close-range object and analyse the depth map of the object from a certain camera view. I have been able to do this by following the workflow process (without building the DEM, as it takes 15 hours to complete). Once I completely reconstruct the model I can right-click on the desired camera (the first one) in the Workspace pane and export the depth map. This works for me and produces a depth map, a diffuse map and a normal map that I then save to a file location. However, when I try to do this using Python scripting it does not give me the same depth map. Instead it gives me a map that shows the object as a uniform depth and the background as black (I have attached pictures of the desired depth map and the Python-produced depth map to this post).
I have also attached my python script so far. The exportDepthRender function is the one I am having trouble with but maybe I am doing something wrong in the previous steps. I am using the transform matrix of the camera view that I want to produce the render but this isn't working. If anyone could help with this it would be greatly appreciated.
Regards,
Safa
-
Dear Safa,
To reproduce the behavior of the Export Depth option that is available in the GUI you need to use the chunk.model.renderDepth(transform, calibration) function, where transform is the position of the viewpoint (you can use camera.transform) and for the calibration you need to use camera.sensor.calibration.
The depth maps that you are trying to save are not scaled due to their internal representation.
-
Dear Alexey,
Thank you for your response. I have followed your instructions to a T. Below I have copied the code that I used, which is in the Python file I attached. I have added some comments where I applied your steps (which I had previously done).
import PhotoScan

def exportDepthRender(savetoPath, savedModel):
    doc = PhotoScan.app.document
    doc.open(savedModel)
    chunk = doc.chunk
    camera = chunk.cameras[0]  # first camera in the chunk
    cameraTrans = camera.transform  # the camera transform matrix you suggested
    cali = camera.sensor.calibration  # the calibration you suggested
    chunkModel = chunk.model  # getting the model
    doc.save()
    image = chunkModel.renderDepth(transform=cameraTrans, calibration=cali)  # applying the depth render as you suggested
    image.save(savetoPath + "/depth.png")
Using this method produces a depth map like the "undesired" one that I attached to my post. However, when I use the GUI to export the depth of the first camera I get a map like the "desired" one that I uploaded. So I fear that this step may not be the problem?
I am also curious to know what you mean by "internal representation" in the last sentence of your response. I would also like to know what you meant when you said the depth map that I am trying to save is not scaled. Does this mean that there is a fault in my steps prior to rendering the depth map?
Thank you for helping with this. Your assistance is much appreciated.
Regards,
Safa
-
Hello Safa,
Looks like it's my mistake: renderDepth() generates the depth in floating point format, so to get greyscale values in the 0 - 255 range it is necessary to transform the data "manually" by finding the minimal and maximal floating point values and then scaling the pixel values accordingly.
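That transformation can be sketched in plain Python (the helper name normalize_depth is hypothetical, and treating zero depth as background is an assumption about what renderDepth() returns for pixels with no geometry):

```python
def normalize_depth(values):
    """Map floating-point depth values to the 0-255 greyscale range.

    Zero values are assumed to be background (no surface hit) and are
    excluded when searching for the minimum, then kept black in the
    output.
    """
    nonzero = [v for v in values if v]  # background pixels are 0.0
    v_min = min(nonzero)
    v_max = max(values)
    crange = v_max - v_min
    # Invert so the nearest surface point (v_min) maps to 255 (white)
    # and the farthest (v_max) maps to 0 (black).
    return [0 if v == 0 else int((v_max - v) / crange * 255)
            for v in values]
```

The same min/max search and scaling is done per pixel over a PhotoScan.Image in the full script further down the thread.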
-
Hi Alexey,
Thank you for your reply.
I am not sure how to find the minimal and maximal floating point values. I also don't think I understand how to scale the pixel values once I have this information. Could you provide some detail on these processes?
Thank you for taking the time to look into this.
Safa
-
Hello Safa,
You can use the following code to generate the depth image (single channel) in the real world dimensions and also to create a greyscale image.
import PhotoScan

chunk = PhotoScan.app.document.chunk  # active chunk
scale = chunk.transform.scale
camera = chunk.cameras[0]  # first camera in the chunk
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration)  # unscaled depth
depth_scaled = PhotoScan.Image(depth.width, depth.height, " ", "F32")
depth_grey = PhotoScan.Image(depth.width, depth.height, "RGB", "U8")
v_min = 10E10
v_max = -10E10
for y in range(depth.height):
    for x in range(depth.width):
        depth_scaled[x, y] = (depth[x, y][0] * scale, )
        v_max = max(v_max, depth_scaled[x, y][0])
        if depth_scaled[x, y][0]:
            v_min = min(v_min, depth_scaled[x, y][0])
crange = v_max - v_min
for y in range(depth.height):
    for x in range(depth.width):
        color = int((v_max - depth_scaled[x, y][0]) / crange * 255)
        depth_grey[x, y] = (color, color, color)
depth_scaled.save(r"D:\depth.tif")
depth_grey.save(r"D:\grey.tif")
However, it works quite slowly, due to the two nested loops.
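If the two loops are too slow, the same normalization can be done with vectorized NumPy operations, assuming the scaled depth values can first be copied into a 2-D NumPy array (the helper name depth_to_grey and that extraction step are assumptions, not part of the PhotoScan API; note this variant also keeps background pixels black rather than letting them overflow past 255):

```python
import numpy as np

def depth_to_grey(depth_array):
    """Vectorized greyscale conversion of a scaled depth map.

    depth_array: 2-D float array of depth values in real-world units,
    with 0.0 marking background pixels. Returns a uint8 array where
    the nearest surface point is white (255) and the farthest is dark.
    """
    nonzero = depth_array[depth_array > 0]
    v_min = nonzero.min()          # nearest surface point
    v_max = depth_array.max()      # farthest surface point
    crange = v_max - v_min
    grey = (v_max - depth_array) / crange * 255
    grey[depth_array == 0] = 0     # keep background black
    return grey.astype(np.uint8)
```

The per-pixel min/max search and scaling from the loops above then collapse into a handful of whole-array operations.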
-
Dear Alexey,
It worked! Thank you so much for helping out.
Safa
-
I have the same problem, but when I run Alexey's code I get an error starting at the "for y in range(depth.height)" line.
Can you teach me how to solve it?
Could you provide some detail on these processes?
Thank you.
-
Looks like you are missing the following line, where the depth variable is assigned, as well as the line with chunk.transform.scale:
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration) #unscaled depth
scale = chunk.transform.scale
-
Hello Alexey,
Thank you for answering my question so quickly.
I added the suggested code, but I still cannot run it.
Can you guide me through the trouble?
Thank you.
-
Hello 1104312139@gm.kuas.edu.tw,
Can you post the error message that you have in the Console pane?
-
Hello Alexey,
I am very sorry to keep asking you,
but I still cannot find the problem.
Thank you!
-
Hello 1104312139@gm.kuas.edu.tw,
Since you are saying that the script doesn't work, I believe that you have some error messages indicating the problem. Can you open the Console pane and copy any error messages that are related to the script run?
Also please check that the camera alignment was successful and that both the mesh and the dense cloud have been reconstructed.