Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - lyhour

1
Excuse me! I would like to extract the depth data for a selected camera. I have a Python script that I found on the forum, and I also know how to export the depth map from the GUI. However, both of those methods save the depth data to an image file. Is there any way to obtain the depth data and save it to another format, such as a MATLAB file? Thank you very much.
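Here is a rough sketch of what I imagine could work, assuming numpy and scipy are available in Metashape's Python environment and that the depth map can be read as a single-channel F32 image (the camera index and the file name are just placeholders). Is this the right direction?
Code:
import Metashape
import numpy as np
from scipy.io import savemat

chunk = Metashape.app.document.chunk
camera = chunk.cameras[0]                 # placeholder: the selected camera
depth = chunk.depth_maps[camera].image()  # single-channel F32 depth map

scale = chunk.transform.scale if chunk.transform.scale else 1  # to real-world units

# Image.tostring() gives the raw pixel buffer; wrap it in a numpy array
arr = np.frombuffer(depth.tostring(), dtype=np.float32).reshape(depth.height, depth.width)
arr = arr * scale

savemat("depth.mat", {"depth": arr})      # .mat is MATLAB's data container format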

2
Python and Java API / How to remove gray flags from marker detection
« on: February 09, 2022, 10:03:48 AM »
Excuse me, everyone! I am using marker auto-detection to detect GCPs with cross (non-coded) targets, as shown in the figure. It works well for detecting the GCPs, but it also produces extra detections (gray flags) that do not correspond to any GCP. How can I remove them using Python code? Thank you very much.

Best Regards,
LYHOUR
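Here is a rough sketch of what I am thinking, assuming the spurious detections can be separated because they appear in fewer images than the real GCPs (the threshold is only an example value). Would something like this be the right approach?
Code:
import Metashape

chunk = Metashape.app.document.chunk
min_projections = 3   # example threshold: real GCPs are usually detected in many photos

# Markers detected in too few images are assumed to be false detections (gray flags)
spurious = [m for m in chunk.markers if len(m.projections.keys()) < min_projections]
chunk.remove(spurious)  # Chunk.remove() deletes the listed markers from the chunk
print("Removed", len(spurious), "markers")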

3
Excuse me, everyone. I use the automatic marker detection in the Agisoft API, so each detected marker gets a label. However, I want to rename the labels so that they follow the order shown in the attached figure. How can I do this in a Python script?
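Here is a rough sketch of what I have in mind, assuming the order can be obtained by sorting the markers (here by their estimated X coordinate, only as an example); the label pattern is a placeholder. Is this the correct way?
Code:
import Metashape

chunk = Metashape.app.document.chunk

# Sort the detected markers by the X component of their estimated position (example criterion)
ordered = sorted([m for m in chunk.markers if m.position is not None],
                 key=lambda m: m.position.x)

for i, marker in enumerate(ordered, start=1):
    marker.label = "point {}".format(i)   # Marker.label is writable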

4
Excuse me, everyone. I want to write a Python script that assigns x, y, z coordinates to the markers, as in the attached figure. Below is the code I have so far (not 100% complete). Note that the markers in the figure are just an example detection from a chessboard; the main point is that I want to know how to script the coordinate input.
Code:
import Metashape
import textwrap
import glob
import os
global doc

doc = Metashape.app.document
print("Script started")

#creating new chunk
doc.addChunk()
chunk = doc.chunk
chunk.label = "New Chunk"

Metashape.app.gpu_mask = 2 ** len(Metashape.app.enumGPUDevices()) - 1
if Metashape.app.gpu_mask:
    print("the GPU is using")
    Metashape.app.cpu_enable = False

photo_list = list()
image_dir='GCPcode'
path_img = os.path.join(os.getcwd(),image_dir)
for j,img_file in enumerate(glob.glob(path_img+'/*.jpg')):
    img_name = os.path.splitext(os.path.basename(img_file))[0]
    photo_list.append(img_file)

chunk.addPhotos(photo_list)
keypoints = 40000 #align photos key point limit
tiepoints = 4000
chunk.matchPhotos(downscale=1, generic_preselection=True, filter_mask = False, keypoint_limit = keypoints, tiepoint_limit = tiepoints)
chunk.alignCameras()
print("\n************ start detect marker ***************\n")
chunk.detectMarkers(target_type=Metashape.CrossTarget, tolerance=30, filter_mask=False, maximum_residual=50)
cam = chunk.cameras[1]
# print the coordinate
for marker in chunk.markers:
    if not marker.projections[cam]:
        continue  # marker not visible in this camera
    x,y = marker.projections[cam].coord
    m,n = cam.project(marker.position)
    print(marker.label, x, y, m,n)
    marker_coord = chunk.crs.project(chunk.transform.matrix.mulp(marker.position))
    print("---- marker 3d coord:", marker_coord)
#------- script for input coordinate ---------
# I want to know how to do here
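What I imagine the missing part could look like is something like the sketch below, assuming the x, y, z values for each marker label are known (the coords dictionary is just a placeholder). Is this correct?
Code:
# Hypothetical lookup of known coordinates per marker label (placeholder values)
coords = {"target 1": (0.0, 0.0, 0.0),
          "target 2": (0.1, 0.0, 0.0)}

for marker in chunk.markers:
    if marker.label in coords:
        marker.reference.location = Metashape.Vector(coords[marker.label])
        marker.reference.enabled = True   # use this marker in referencing

chunk.updateTransform()   # re-estimate the chunk transform from the marker references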

5
Excuse me, everyone! I am writing a Python script that builds a 3D model and generates a point cloud and depth images. I want to define the coordinate system as local coordinates with all units in meters. I also want to auto-detect the markers in the images and then assign coordinates to those markers for the camera alignment. As a final step, I want to export the point cloud in the same local, metric coordinate system. How can I write this script? My current code and a figure are shown below.
Code:
import Metashape
import textwrap
import glob
import os
global doc

doc = Metashape.app.document
print("Script started")

#creating new chunk
doc.addChunk()
chunk = doc.chunk
chunk.label = "New Chunk"

Metashape.app.gpu_mask = 2 ** len(Metashape.app.enumGPUDevices()) - 1
if Metashape.app.gpu_mask:
    Metashape.app.cpu_enable = False

photo_list = list()
image_dir='RawPicture'
path_img = os.path.join(os.getcwd(),image_dir)
for j,img_file in enumerate(glob.glob(path_img+'/*.jpg')):
    img_name = os.path.splitext(os.path.basename(img_file))[0]
    photo_list.append(img_file)

#accuracy = Metashape.Accuracy.HighAccuracy  #align photos accuracy
#preselection = Metashape.Preselection.GenericPreselection
keypoints = 40000 #align photos key point limit
tiepoints = 10000 #align photos tie point limit
chunk.addPhotos(photo_list)
#align photos
chunk.matchPhotos(downscale=2, generic_preselection=True, filter_mask = False, keypoint_limit = keypoints, tiepoint_limit = tiepoints)
chunk.alignCameras()


chunk.optimizeCameras()

chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.AggressiveFiltering)
chunk.buildDenseCloud()

#building mesh
#chunk.buildModel(surface = surface, source = source, interpolation = interpolation, face_count = face_num)
chunk.buildModel(surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation)

if chunk.transform.scale:
    scale = chunk.transform.scale
else:
    scale = 1

camera_list = list()
for camera in chunk.cameras:
    camera_list.append(camera)

cam = chunk.cameras[4]
depth = chunk.model.renderDepth(cam.transform, cam.sensor.calibration)
depth_scaled = Metashape.Image(depth.width, depth.height, " ", "F32")
depth_grey = Metashape.Image(depth.width, depth.height, "RGB", "U8")
v_min = 10E10
v_max = -10E10

print(" ***started export depth image*********")
for y in range(depth.height):
    for x in range(depth.width):
        depth_scaled[x,y] = (depth[x,y][0] * scale, )
        v_max = max(v_max, depth_scaled[x,y][0])
        if depth_scaled[x,y][0]:
            v_min = min(v_min, depth_scaled[x,y][0])

crange = v_max - v_min
for y in range(depth.height):
    for x in range(depth.width):
        color = int((v_max - depth_scaled[x,y][0]) / crange * 255)
        color = max(0, min(255, color))  # clamp pixels with no depth value
        depth_grey[x,y] = (color, color, color)
#export
output_dir='Depth'
path_out = os.path.join(os.getcwd(),output_dir)
depth_grey.save(path_out + "/testPython.png")

chunk.exportPoints(path = path_out + "/model.ply", format = Metashape.PointsFormatPLY, source_data = Metashape.DenseCloudData, save_colors = False)
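The parts I think are missing are sketched below, reusing the chunk and path_out variables from the script above and assuming a local coordinate system in metres with placeholder marker coordinates. Is this the right direction?
Code:
# Local coordinate system with metre units
chunk.crs = Metashape.CoordinateSystem(
    'LOCAL_CS["Local CS (m)",LOCAL_DATUM["Local Datum",0],'
    'UNIT["metre",1,AUTHORITY["EPSG","9001"]]]')

# Auto-detect the cross targets and assign known coordinates (placeholder values)
chunk.detectMarkers(target_type=Metashape.CrossTarget, tolerance=30)
coords = {"target 1": (0.0, 0.0, 0.0)}
for marker in chunk.markers:
    if marker.label in coords:
        marker.reference.location = Metashape.Vector(coords[marker.label])
        marker.reference.enabled = True
chunk.updateTransform()

# Export the dense cloud in the same local, metric coordinate system
chunk.exportPoints(path=path_out + "/model_local.ply",
                   source_data=Metashape.DenseCloudData,
                   format=Metashape.PointsFormatPLY,
                   crs=chunk.crs)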

6
Excuse me, everyone. To take measurements in Agisoft, markers or a scale bar must be set up and visible in the images. My measurement target is the aggregate height of a pavement surface, so I have to move the measurement setup to many locations on the pavement, and installing GCPs each time takes a lot of time. I do not actually measure inside Agisoft; I only use Agisoft to generate the dense point cloud and then do the computation in Matlab. I have noticed that the pre-calibrated parameters used in the camera alignment step affect the generated point cloud. So my idea is this: if I keep the image acquisition at a fixed distance and illumination, I can obtain the pre-calibrated camera parameters by using the markers only once, and then reuse those calibrated parameters for later acquisitions to generate the dense point clouds. Is it possible to generate the point cloud and measure my object with pre-calibrated parameters obtained this way? I have already run a trial of this methodology, but it gives me different results from the same batch of images. Could anybody give me some suggestions on this issue? I am sorry for my English. Thank you very much.
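For reference, this is roughly how I imagine applying a fixed pre-calibration before alignment, assuming the calibration was exported once to an XML file (the path is a placeholder, and I am not sure whether sensor.fixed behaves the same in every Metashape version):
Code:
import Metashape

chunk = Metashape.app.document.chunk

calib = Metashape.Calibration()
calib.load("precalibrated.xml", format=Metashape.CalibrationFormatXML)  # placeholder path

for sensor in chunk.sensors:
    sensor.user_calib = calib   # start from the pre-calibrated parameters
    sensor.fixed = True         # keep them fixed during alignment / optimization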

7
Python and Java API / How to export depth maps in Python the same as the GUI?
« on: December 21, 2021, 05:37:26 PM »
I export the depth map from the GUI and get an image with the same resolution as the RGB image, with a good result, as shown in figure 1. However, when I try to export the depth map with Python, the result is not the same as from the GUI, as shown in figure 2. How can I solve this problem in the Python code? The code is shown in this post.
Code:
import Metashape
import numpy
from PySide2 import QtWidgets

def save_depth_maps():
    chunk = Metashape.app.document.chunk #active chunk
    #app = QtWidgets.QApplication.instance()
    if not chunk.depth_maps:
        message = "No depth maps in the active chunk. Script Aborted."
        print(message)
        Metashape.app.messageBox(message)
        return 0

    print("Script started...")
    #app.processEvents() # this is the click even when we create the menu bar
    if chunk.transform.scale:
        scale = chunk.transform.scale
    else:
        scale = 1
    count = 0
    camera_list = list()

    for camera in chunk.cameras:
        camera_list.append(camera)

    depth = chunk.depth_maps[camera_list[4]].image()
    img = depth * 1
    img.save('C:/Users/DepthMapGeneration/Depth/test7.jpg')
    message = "Script finished "
    print(message)
    #print("Depth maps exported to:\n " + output_folder)
    Metashape.app.messageBox(message)
    return 1


#Metashape.app.addMenuItem("Custom menu/Save Depth Maps", save_depth_maps)

if __name__ == "__main__":
    save_depth_maps()
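From other forum posts I think the difference may come from the depth values not being scaled to real units, from saving to 8-bit JPEG instead of a floating-point format, and from the downscale used in buildDepthMaps. A rough sketch of what I will try next (the path is a placeholder):
Code:
import Metashape

chunk = Metashape.app.document.chunk
camera = chunk.cameras[4]                  # same camera index as in the script above

scale = chunk.transform.scale if chunk.transform.scale else 1

depth = chunk.depth_maps[camera].image()   # single-channel F32 depth map
depth = depth * scale                      # scale the depth values to real-world units
depth.save("D:/Depth/test7_float.tif")     # TIFF keeps the floating-point values (placeholder path)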

8
General / Camera Calibration Parameters in Agisoft Different from Matlab
« on: December 17, 2021, 09:34:39 AM »
Hello everyone! I calibrated my camera in Matlab and then entered the calibrated parameters into the camera calibration dialog in Agisoft. When I align the photos, this gives an inaccurate result. I noticed that Agisoft has its own camera calibration, so I used that instead, and it gives an acceptable result after the photo alignment finishes. However, the resulting calibration parameters are different. Why are the calibrations different, and how does Agisoft determine the calibration parameters? My camera is an IDS industrial camera with a 4 mm focal length. Thank you very much.
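To understand why the numbers cannot be compared one-to-one, I tried to write down the conversion between the Matlab intrinsics and Metashape's convention (Metashape stores cx/cy as offsets from the image centre, while Matlab gives the principal point from the top-left corner with 1-based indexing; the exact half-pixel shift and the order of the tangential coefficients still need to be checked against both manuals). Is this sketch roughly correct?
Code:
# Hypothetical Matlab cameraParameters values (placeholders only)
fx, fy = 3200.0, 3210.0        # Matlab focal lengths in pixels
ppx, ppy = 1024.5, 768.5       # Matlab principal point, pixels from the top-left corner
width, height = 2048, 1536     # image size in pixels

# Approximate mapping to Metashape's calibration parameters
f = fy                         # Metashape uses a single focal length f (in pixels)
b1 = fx - fy                   # the affinity coefficient B1 absorbs the fx/fy difference
cx = ppx - width / 2           # Metashape cx/cy are offsets from the image centre,
cy = ppy - height / 2          # so subtract half the image size (plus any half-pixel shift)

print("f =", f, "B1 =", b1, "cx =", cx, "cy =", cy)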

9
General / Extract depth map as Matlab file (*.m)
« on: November 03, 2021, 08:07:36 AM »
Excuse me, everyone. I want to know how to extract a depth map from Agisoft and store it in a Matlab file. As far as I know, when we export a depth map we only get a raw image file, so how can I extract it and save it to another file type? Also, how can I confirm that the depth-map pixels correspond correctly to real distances? Thank you very much.
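As a sanity check on the units, I am thinking of comparing the depth value at a marker's pixel with the geometric camera-to-marker distance, assuming the depth map was built at full resolution (downscale=1; otherwise the pixel coordinates need to be divided by the downscale factor) and that the marker is visible in the chosen camera. Does this make sense?
Code:
import Metashape

chunk = Metashape.app.document.chunk
camera = chunk.cameras[0]        # placeholder camera
marker = chunk.markers[0]        # placeholder marker visible in that camera

scale = chunk.transform.scale if chunk.transform.scale else 1
depth = chunk.depth_maps[camera].image()

# Pixel where the marker was detected in this camera
pt = marker.projections[camera].coord
d_map = depth[int(pt.x), int(pt.y)][0] * scale        # depth map value in real-world units

# Z coordinate of the marker in the camera frame, converted to real-world units
d_geom = (camera.transform.inv().mulp(marker.position)).z * scale

print("depth map:", d_map, "geometric:", d_geom)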

10
Python and Java API / How can I install numpy in Agisoft?
« on: October 25, 2021, 02:56:07 PM »
Excuse me, everyone! I want to use numpy in my Python script in Agisoft. When I run the script, it says that numpy is not installed. I have installed numpy on my computer and have used it in other environments. I tried copying it from site-packages into the Agisoft directory (C:\Program Files\Agisoft\Metashape Pro\python\Lib), but it still gives an error. I am stuck on this problem and would appreciate any suggestions. Thank you very much.
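From what I have read, external modules should be installed into Metashape's own bundled Python rather than copied from another environment, for example by running the following from an administrator command prompt (the path may differ between versions). Is that the right way?
Code:
"C:\Program Files\Agisoft\Metashape Pro\python\python.exe" -m pip install numpy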

11
General / How to align the surface model to a horizontal plane?
« on: October 25, 2021, 02:06:23 PM »
Excuse me, everyone. I am using Agisoft to reconstruct 3D pavement surface texture. The reconstructed surface model is not aligned with the horizontal plane, as shown in the attached picture. How can I solve this problem?
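What I am considering is to place at least three markers on the pavement surface, give them reference coordinates with the same Z value in a local metric coordinate system, and let Metashape level the model to that plane (the coordinate values below are placeholders). Would a script like this be the right approach?
Code:
import Metashape

chunk = Metashape.app.document.chunk
chunk.crs = Metashape.CoordinateSystem(
    'LOCAL_CS["Local CS (m)",LOCAL_DATUM["Local Datum",0],'
    'UNIT["metre",1,AUTHORITY["EPSG","9001"]]]')

# Three points on the pavement plane, all with Z = 0 (placeholder values)
coords = {"point 1": (0.0, 0.0, 0.0),
          "point 2": (0.5, 0.0, 0.0),
          "point 3": (0.0, 0.5, 0.0)}

for marker in chunk.markers:
    if marker.label in coords:
        marker.reference.location = Metashape.Vector(coords[marker.label])
        marker.reference.enabled = True

chunk.updateTransform()   # the model is levelled to the Z = 0 plane of the markers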

12
I am new to Python in Agisoft. I tried to export the points following the same process as in the GUI, but it raised the error: ValueError: Invalid argument value: format. The code is shown below. Also, how can I choose between the dense point cloud and the sparse point cloud in the code? Moreover, I noticed that the computation time is very different: processing is very fast in the GUI, but it takes much longer through the Python API. Does anyone know how to fix this? Thank you very much.
Code:
import Metashape
import textwrap
import glob
import os
global doc
doc = Metashape.app.document
print("Script started")

#creating new chunk
doc.addChunk()
chunk = doc.chunk
chunk.label = "New Chunk"

photo_list = list()
image_dir='Picture'
path_img = os.path.join(os.getcwd(),image_dir)
for j,img_file in enumerate(glob.glob(path_img+'/*.jpg')):
    img_name = os.path.splitext(os.path.basename(img_file))[0]
    photo_list.append(img_file)


keypoints = 40000 #align photos key point limit
tiepoints = 10000 #align photos tie point limit
chunk.addPhotos(photo_list)
#align photos
chunk.matchPhotos(downscale=1, generic_preselection=True, filter_mask = False, keypoint_limit = keypoints, tiepoint_limit = tiepoints)
chunk.alignCameras()

chunk.optimizeCameras()

chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.AggressiveFiltering)
chunk.buildDenseCloud()
#building mesh
chunk.buildModel(surface_type=Metashape.Arbitrary, interpolation=Metashape.EnabledInterpolation)

#build texture

chunk.buildUV(mapping_mode=Metashape.GenericMapping)

chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=4096)
Metashape.app.update()

#export
output_dir='Picture'
path_out = os.path.join(os.getcwd(),output_dir)

chunk.exportPoints(path = path_out + "/model.ply", format = "ply", colors = True)
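From the API reference I suspect the export call needs an enum instead of a string for format, and that the point cloud is chosen with source_data (PointCloudData for the sparse cloud, DenseCloudData for the dense cloud). Is something like this the correct form (parameter names as in the 1.7/1.8 API)?
Code:
# Dense point cloud
chunk.exportPoints(path=path_out + "/model_dense.ply",
                   source_data=Metashape.DenseCloudData,
                   format=Metashape.PointsFormatPLY,
                   save_colors=True)

# Sparse (tie point) cloud instead
chunk.exportPoints(path=path_out + "/model_sparse.ply",
                   source_data=Metashape.PointCloudData,
                   format=Metashape.PointsFormatPLY,
                   save_colors=True)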

13
General / How to obtain the matching pixel coordinates of each image?
« on: September 21, 2021, 12:46:14 PM »
Hello everyone. After the images are aligned, the sparse point cloud and the dense point cloud can be obtained. As far as I know, the matching points are found with a SIFT-like algorithm. How can I get the matching point coordinates between each pair of images? Also, how can we get a depth image that corresponds to each RGB image? I have tried to export the depth image from the API, but it does not correspond well to the RGB image: the depth image is generated from the matching tie points, so it is not the same size as the RGB image. How can I solve this problem? Thank you very much.
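For the matching coordinates, I am thinking of reading them from the sparse (tie point) cloud, where projections of the same 3D point share a track_id (a rough sketch with a placeholder camera pair, using the 1.x API where the sparse cloud is chunk.point_cloud). Is this the intended way?
Code:
import Metashape

chunk = Metashape.app.document.chunk
cam1, cam2 = chunk.cameras[0], chunk.cameras[1]   # placeholder camera pair

proj = chunk.point_cloud.projections
# Map track_id -> pixel coordinate for each camera
p1 = {p.track_id: p.coord for p in proj[cam1]}
p2 = {p.track_id: p.coord for p in proj[cam2]}

# Tie points seen in both cameras give corresponding pixel coordinates
matches = [(p1[t], p2[t]) for t in p1.keys() & p2.keys()]
print(len(matches), "matching points between", cam1.label, "and", cam2.label)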

14
After we combine the images from different views, we can obtain the point cloud and generate the 3D model of the object in Agisoft. However, I have a few questions:
1. Where is the reference of this point cloud, i.e. the coordinate (0, 0, 0)?
2. How do I generate a depth image corresponding to each RGB image, and which RGB image corresponds to which depth image? I ask because I need the depth image resolution (W×H) to be the same as that of the RGB image (see the sketch below).
3. Do we need to project the point cloud onto the images again?
Thank you very much.
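For question 2, what I have in mind is to render one depth image per aligned camera from the mesh, since Model.renderDepth uses the sensor calibration and therefore the same width and height as that camera's RGB image (the output folder is a placeholder). Is this correct?
Code:
import Metashape, os

chunk = Metashape.app.document.chunk
scale = chunk.transform.scale if chunk.transform.scale else 1
out_dir = "depth_out"                    # placeholder output folder
os.makedirs(out_dir, exist_ok=True)

for camera in chunk.cameras:
    if camera.transform is None:         # skip cameras that were not aligned
        continue
    depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration)
    depth = depth * scale                # depth values in real-world units
    depth.save(os.path.join(out_dir, camera.label + ".tif"))   # one depth map per RGB image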
