

Messages - lyhour

16
Dear Alex,

Thank you very much. I have solved it already. I wrote this script to rename the markers. Do you have an alternative approach?
Code: [Select]
#---- rename the markers to match their coordinates using a sort
xy_coord = list()
for marker in chunk.markers:
    x_tmp, y_tmp, z_tmp = chunk.crs.project(chunk.transform.matrix.mulp(marker.position))
    xy_coord.append([x_tmp, y_tmp])
tmp_coord = sorted(xy_coord, key=lambda k: [k[0], k[1]])  # sort by x first, then by y
k = 0
for i in range(len(tmp_coord)):
    k += 1
    for marker in chunk.markers:
        x, y, z = chunk.crs.project(chunk.transform.matrix.mulp(marker.position))
        if x == tmp_coord[i][0] and y == tmp_coord[i][1]:
            marker.label = "gcp" + str(k)
            break
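As an aside on the script above, the same sort-then-relabel idea can be written more compactly, without the nested loop and without comparing floats for equality. This is only a sketch with plain (label, x, y) tuples standing in for markers; in Metashape the equivalent would be to enumerate `sorted(chunk.markers, key=...)` and assign `marker.label` directly.

```python
# Sketch of a compact sort-then-relabel pass. Plain (label, x, y) tuples
# stand in for Metashape markers here; the projected coordinates would
# come from chunk.crs.project(...) as in the script above.
markers = [("point 3", 2.0, 1.0), ("point 1", 0.0, 5.0), ("point 2", 2.0, 0.0)]

# sort once by (x, y), then relabel in order -- no nested loop and no
# float equality test needed
ordered = sorted(markers, key=lambda m: (m[1], m[2]))
new_labels = ["gcp" + str(i + 1) for i in range(len(ordered))]

for (old_label, x, y), new_label in zip(ordered, new_labels):
    print(old_label, "->", new_label)
```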

17
Dear Alex,

Thank you very much for your kind response. Please forgive the late reply; the notification went to my junk mail and I did not notice it. I was able to solve it, thank you very much.

Best Regards,

LYHOUR

18
Excuse me everyone, I use the automatic detection function in the Agisoft API, so I get the label of each marker. However, I want to rename the labels in the order shown in the attached figure. How can I do this in a Python script?

19
Excuse me everyone. I want to write a Python script to input the x, y, z coordinates into the markers as in the attached figure. I will show you the code I have so far (not fully 100%). Note that the markers in the figure are just an example detection from a chessboard. The main purpose is to learn how to input the coordinates in a script.
Code: [Select]
import Metashape
import glob
import os

doc = Metashape.app.document
print("Script started")

# creating a new chunk
doc.addChunk()
chunk = doc.chunk
chunk.label = "New Chunk"

# enable every available GPU
Metashape.app.gpu_mask = 2 ** len(Metashape.app.enumGPUDevices()) - 1
if Metashape.app.gpu_mask:
    print("using GPU")
    Metashape.app.cpu_enable = False

photo_list = list()
image_dir = 'GCPcode'
path_img = os.path.join(os.getcwd(), image_dir)
for img_file in glob.glob(path_img + '/*.jpg'):
    photo_list.append(img_file)

chunk.addPhotos(photo_list)
keypoints = 40000  # align photos key point limit
tiepoints = 4000   # align photos tie point limit
chunk.matchPhotos(downscale=1, generic_preselection=True, filter_mask=False,
                  keypoint_limit=keypoints, tiepoint_limit=tiepoints)
chunk.alignCameras()
print("\n************ start detect marker ***************\n")
chunk.detectMarkers(target_type=Metashape.CrossTarget, tolerance=30,
                    filter_mask=False, maximum_residual=50)
cam = chunk.cameras[1]
# print the coordinates
for marker in chunk.markers:
    x, y = marker.projections[cam].coord
    m, n = cam.project(marker.position)
    print(marker.label, x, y, m, n)
    marker_coord = chunk.crs.project(chunk.transform.matrix.mulp(marker.position))
    print("---- marker 3d coord:", marker_coord)
#------- script for inputting coordinates ---------
# I want to know how to do this part
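For the missing part, one common pattern is to read a label → (x, y, z) table from a text file and then assign each marker's reference coordinate. The sketch below shows only the parsing half, with a hypothetical inline table (the file contents and the "label x y z" format are assumptions); in Metashape you would then set `marker.reference.location = Metashape.Vector(coords[marker.label])` for each detected marker and update the transform.

```python
# Sketch: parse a 'label x y z' table into {label: (x, y, z)}. The table
# contents below are hypothetical example values. In Metashape, after
# detectMarkers(), each marker's reference would be set from this dict:
#   marker.reference.location = Metashape.Vector(coords[marker.label])
def parse_gcp_table(text):
    coords = {}
    for line in text.strip().splitlines():
        label, x, y, z = line.split()
        coords[label] = (float(x), float(y), float(z))
    return coords

table = """gcp1 0.0 0.0 0.0
gcp2 0.1 0.0 0.0
gcp3 0.1 0.1 0.0
gcp4 0.0 0.1 0.0"""
coords = parse_gcp_table(table)
print(coords["gcp2"])
```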

20
Python and Java API / Re: How to get Marker coordinates
« on: January 13, 2022, 04:09:18 PM »
Dear Alexey,

This post is from a long time ago, but I hope you can still reply to me about this issue. I need both the coordinates of the marker projections on the individual images and the position of the marker in 3D space. How can I get them in Metashape 1.7? Thank you very much.

Best Regards,

CHHAY LYHOUR

21
Dear Alexey,

Thank you very much for your reply. Right now I am working on the optimum size of the marker (GCP). I am not sure which marker type I should make for my conditions, because the distance from my camera to the object is only around 20 cm. I tried the automatic marker detection but it failed, so I am working on the optimum marker scale. I think the coordinate input can be done inside the Python script, because I use only four markers.

Best Regards,
LYHOUR

22
Excuse me everyone! I am writing a Python script to create a 3D model and generate the point cloud and depth image. However, I want to define the coordinate system as local coordinates, all in meter units. I also want to automatically detect the markers in the images and then input the marker coordinates for the camera alignment. As the final step, I want to export the point cloud in the same local coordinate system in meter units. How can I write this script? I will show you the code and figure.
Code: [Select]
import Metashape
import glob
import os

doc = Metashape.app.document
print("Script started")

# creating a new chunk
doc.addChunk()
chunk = doc.chunk
chunk.label = "New Chunk"

# enable every available GPU
Metashape.app.gpu_mask = 2 ** len(Metashape.app.enumGPUDevices()) - 1
if Metashape.app.gpu_mask:
    Metashape.app.cpu_enable = False

photo_list = list()
image_dir = 'RawPicture'
path_img = os.path.join(os.getcwd(), image_dir)
for img_file in glob.glob(path_img + '/*.jpg'):
    photo_list.append(img_file)

keypoints = 40000   # align photos key point limit
tiepoints = 10000   # align photos tie point limit
chunk.addPhotos(photo_list)
# align photos
chunk.matchPhotos(downscale=2, generic_preselection=True, filter_mask=False,
                  keypoint_limit=keypoints, tiepoint_limit=tiepoints)
chunk.alignCameras()

chunk.optimizeCameras()

chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.AggressiveFiltering)
chunk.buildDenseCloud()

# building mesh
chunk.buildModel(surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation)

if chunk.transform.scale:
    scale = chunk.transform.scale
else:
    scale = 1

cam = chunk.cameras[4]
depth = chunk.model.renderDepth(cam.transform, cam.sensor.calibration)
depth_scaled = Metashape.Image(depth.width, depth.height, " ", "F32")
depth_grey = Metashape.Image(depth.width, depth.height, "RGB", "U8")
v_min = 10E10
v_max = -10E10

print(" ***started export depth image*********")
for y in range(depth.height):
    for x in range(depth.width):
        depth_scaled[x, y] = (depth[x, y][0] * scale, )
        v_max = max(v_max, depth_scaled[x, y][0])
        if depth_scaled[x, y][0]:
            v_min = min(v_min, depth_scaled[x, y][0])

crange = v_max - v_min
for y in range(depth.height):
    for x in range(depth.width):
        color = int((v_max - depth_scaled[x, y][0]) / crange * 255)
        depth_grey[x, y] = (color, color, color)

# export
output_dir = 'Depth'
path_out = os.path.join(os.getcwd(), output_dir)
depth_grey.save(path_out + "/testPython.png")

chunk.exportPoints(path=path_out + "/model.ply", format=Metashape.PointsFormatPLY,
                   source_data=Metashape.DenseCloudData, save_colors=False)
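The two pixel loops near the end implement a min–max normalization of the depth values to an 8-bit grayscale range (nearer points brighter). The mapping itself is independent of Metashape and can be sketched on plain numbers:

```python
# Standalone sketch of the min-max grayscale mapping used by the loops
# above: the largest depth maps to 0 (black), the smallest to 255 (white).
def depth_to_grey(values):
    v_min, v_max = min(values), max(values)
    crange = v_max - v_min
    # note: a real script should guard against crange == 0
    return [int((v_max - v) / crange * 255) for v in values]

greys = depth_to_grey([1.0, 1.5, 2.0])
print(greys)  # nearest pixel -> 255, farthest -> 0
```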

23
Excuse me everyone. To do measurement in Agisoft, a marker or scale bar must be present in the images. My measurement object is the aggregate height on a pavement surface, so I have to move the measurement point to many locations on the pavement, and installing GCPs takes much time. I do not measure in Agisoft; I use Agisoft only to generate the dense point cloud, and then I do the computation in Matlab. I noticed that the pre-calibrated parameters in the camera alignment step affect the generated point cloud. Therefore, I came up with this idea: if I fix the acquisition distance and illumination, I can obtain the pre-calibrated camera parameters by using the markers only once, and then reuse those parameters for later acquisitions to generate the dense point cloud. Is it possible to generate the point cloud and measure my object using pre-calibrated parameters obtained this way? I have already done a trial test of this methodology, but it gives me different results from the same batch of images. Could anybody give me some suggestions regarding this issue? I am sorry for my English. Thank you very much.

24
Thank you, Alexey, for your kind response. First, I want to get a result similar to the Export Depth command. Second, I want to get the original depth. Recently I found your suggestion from a few years ago (https://www.agisoft.com/forum/index.php?topic=6074.0). I will show you the code; I ran it and it works well. However, I am not sure whether the new Metashape version has been updated to run faster; it is quite slow due to the two loops, as you mentioned. My main objective is to save the depth data as a Matlab (.mat) file, but I cannot do it. It would be better if you could show me the way to save it.
Code: [Select]
import Metashape
from scipy.io import savemat

def save_depth_maps():
    chunk = Metashape.app.document.chunk  # active chunk
    if not chunk.depth_maps:
        message = "No depth maps in the active chunk. Script aborted."
        print(message)
        Metashape.app.messageBox(message)
        return 0

    print("Script started...")
    if chunk.transform.scale:
        scale = chunk.transform.scale
    else:
        scale = 1

    cam = chunk.cameras[4]
    depth = chunk.model.renderDepth(cam.transform, cam.sensor.calibration)
    depth_scaled = Metashape.Image(depth.width, depth.height, " ", "F32")
    depth_grey = Metashape.Image(depth.width, depth.height, "RGB", "U8")
    v_min = 10E10
    v_max = -10E10

    for y in range(depth.height):
        for x in range(depth.width):
            depth_scaled[x, y] = (depth[x, y][0] * scale, )
            v_max = max(v_max, depth_scaled[x, y][0])
            if depth_scaled[x, y][0]:
                v_min = min(v_min, depth_scaled[x, y][0])

    crange = v_max - v_min
    for y in range(depth.height):
        for x in range(depth.width):
            color = int((v_max - depth_scaled[x, y][0]) / crange * 255)
            depth_grey[x, y] = (color, color, color)

    # this call fails: savemat() expects a dict of named arrays,
    # not a raw byte string
    savemat("D:/DepthMapGeneration/Depth/test.mat", depth_grey.tostring())
    depth_grey.save("D:/DepthMapGeneration/Depth/greyT.tif")

    message = "Script finished"
    print(message)
    Metashape.app.messageBox(message)
    return 1

if __name__ == "__main__":
    save_depth_maps()
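On saving the depth as a .mat file: `scipy.io.savemat` expects a dict mapping MATLAB variable names to arrays, not a raw byte string, which is why the `tostring()` call above fails. Assuming `Image.tostring()` returns the raw F32 pixel buffer, the conversion pattern can be sketched with plain numpy (the buffer here is fabricated for illustration):

```python
import numpy as np

# Sketch: turn a raw float32 pixel buffer into a 2-D array that savemat
# can store. A fabricated 4x2 buffer stands in for depth_scaled.tostring().
width, height = 4, 2
raw = np.arange(width * height, dtype=np.float32).tobytes()

depth_array = np.frombuffer(raw, dtype=np.float32).reshape(height, width)

# savemat needs named variables, so wrap the array in a dict:
#   from scipy.io import savemat
#   savemat("depth.mat", {"depth": depth_array})  # load('depth.mat') in MATLAB
print(depth_array.shape)
```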

25
Python and Java API / How to export depth map in python the same as GUI ?
« on: December 21, 2021, 05:37:26 PM »
I export the depth from the GUI and get an image with the same resolution as the RGB image and a good result, as shown in figure 1. However, when I try to export the depth with Python to obtain the same result as the GUI, the output is not the same, as shown in figure 2. How can I solve this problem in Python code? I will show you the code in this post.
Code: [Select]
import Metashape

def save_depth_maps():
    chunk = Metashape.app.document.chunk  # active chunk
    if not chunk.depth_maps:
        message = "No depth maps in the active chunk. Script aborted."
        print(message)
        Metashape.app.messageBox(message)
        return 0

    print("Script started...")
    # raw depth map of the fifth camera, before any scaling
    camera = chunk.cameras[4]
    depth = chunk.depth_maps[camera].image()
    img = depth * 1
    img.save('C:/Users/DepthMapGeneration/Depth/test7.jpg')
    message = "Script finished"
    print(message)
    Metashape.app.messageBox(message)
    return 1

#Metashape.app.addMenuItem("Custom menu/Save Depth Maps", save_depth_maps)

if __name__ == "__main__":
    save_depth_maps()

26
Thank you very much for your reply. I have solved the problem already. I am using Windows 10, so I used this command to install numpy: "%programfiles%\Agisoft\Metashape Pro\python\python.exe" -m pip install python_module_name. It works. Note that you may have to remove conflicting entries from the environment variables. In my case I had created multiple path and PYTHONPATH entries, which caused a dependency conflict; after I removed them, the problem was solved.

27
Hello ScubaDiving,

Did you solve the problem? I am facing the same problem as you.

28
General / Camera Calibration Parameter Agisoft Different from Matlab
« on: December 17, 2021, 09:34:39 AM »
Hello everyone! I calibrated my camera using Matlab. After that, I entered the calibrated parameters from Matlab into Camera Calibration in Agisoft. When I align the photos, it gives me an inaccurate result. I noticed that Agisoft has its own camera calibration function, so I ran it, and it gives an acceptable result after the photo alignment finishes. However, the resulting calibration parameters are different. I want to know why the calibrations differ, and how Agisoft determines the calibration parameters. My camera is an IDS industrial camera with a 4 mm focal length. Thank you very much.

29
General / Extract depth map as Matlab file (*.mat)
« on: November 03, 2021, 08:07:36 AM »
Excuse me everyone, I want to know how to extract a depth map from Agisoft and store it in a Matlab (.mat) file. As far as I know, when we extract the depth map we only get a raw file. So how can I extract it and save it to another format? Also, how can I confirm that the depth map pixels correspond correctly to the real distance? Thank you very much.

30
General / Re: How to align the surface model into horizontal plan?
« on: October 31, 2021, 09:25:48 AM »
Dear Steve,
I appreciate your kind response, thank you very much. I know about the shortcut numbers for viewing the facade. When I compare against the XYZ planes, I find that one plane is sloped and does not correspond to the plane shown in the picture attachment. My purpose is to rotate the model automatically to match the plane (I do not want the slope), because I am using the Agisoft Python API to generate the dense point cloud and depth map, so it is necessary to get real distances for the point cloud computation. Thank you very much.

Best Regards,
LYHOUR
