Recent Posts

Pages: 1 ... 6 7 [8] 9 10
Is it possible to export images with scaling and overlap view mode (attachments) automatically via the Python API?
I also want to know why such operations occur at the build texture step and how they can be removed. Thank you.
Python and Java API / Re: Question about sensor and camera coordinates from api
« Last post by Paulo on June 10, 2021, 04:16:15 PM »
Hello Phogi,

I think the 2 methods are equivalent. Given a marker (<Marker 'PC01'>) with internal 3D coordinates (marker.position), its 2D coordinates on a given camera (<Camera 'IMG_5813.JPG'>) can be determined either by:

1. combining camera.sensor.calibration.project with camera.transform.inv(), as in:
Code:
Out[12]: 2021-06-10 07:53:29 Vector([1998.4666958189314, 2522.822857879895])
2. just using camera.project, as in:
Code:
Out[13]: 2021-06-10 07:54:11 Vector([1998.4666958189316, 2522.822857879895])

As you can see, the result is the same. I would use camera.project, as it is more elegant and more recent. In particular, it treats rolling-shutter sensor projection correctly, while the old method (sensor.calibration.project + camera.transform.inv()) does not. See the following and the attached screen capture:
Code:
Out[23]: 2021-06-10 08:10:29 <Marker '5'>

Out[24]: 2021-06-10 08:10:35 <Camera 'DJI_0011_S'>

Out[25]: 2021-06-10 08:10:44 True

Out[26]: 2021-06-10 08:10:59 Vector([2618.710760365361, 1049.2876409972341])

Out[27]: 2021-06-10 08:11:15 Vector([2618.1837353917476, 1046.1846513344458])
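Conceptually (ignoring lens distortion and rolling shutter), the chained method is just a rigid-transform inverse followed by an ideal pinhole projection. Here is a toy pure-Python sketch of that chain; the focal length, principal point, pose, and world point are all made-up numbers, not values from a real project:

```python
# Toy pinhole model illustrating why the two methods described above agree
# in the simple case (no distortion, no rolling shutter). All numbers are
# hypothetical.
def mat_inv_rigid(R, t):
    # Inverse of a rigid transform x' = R @ x + t is x = R^T @ (x' - t).
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    t_inv = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return Rt, t_inv

def apply(R, t, p):
    # Apply the rigid transform x -> R @ x + t to a 3D point p.
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def calibration_project(p_cam, f=1000.0, cx=2000.0, cy=1500.0):
    # Ideal pinhole: perspective divide, then focal length and principal point.
    return (f * p_cam[0] / p_cam[2] + cx, f * p_cam[1] / p_cam[2] + cy)

# camera.transform maps camera coordinates to world coordinates, so
# projecting a world point needs the inverse transform first.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # camera aligned with world axes
t = [0.0, 0.0, -5.0]                     # hypothetical camera position
world_point = [0.4, -0.2, 2.0]

R_inv, t_inv = mat_inv_rigid(R, t)
p_cam = apply(R_inv, t_inv, world_point)  # the "camera.transform.inv()" step
print(calibration_project(p_cam))         # the "calibration.project" step
```

camera.project performs the same chain internally, plus the rolling-shutter correction that the manual chaining misses.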

PS. The 2D pixel image coordinates given by these formulas are relative to the top-left corner, as specified in Appendix C of the User Manual:
The image coordinate system has origin in the middle of the top-left pixel (with coordinates (0.5, 0.5)). The X axis in the image coordinate system points to the right, Y axis points down. Image coordinates are measured in pixels.
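Under this convention a point reported at (0.5, 0.5) lies at the centre of the pixel in row 0, column 0, so converting image coordinates to integer pixel indices is a plain floor. A small illustrative helper (not part of the Metashape API):

```python
def image_coords_to_pixel_index(x, y):
    # The top-left pixel spans [0, 1) x [0, 1) with its centre at (0.5, 0.5),
    # so flooring the coordinates yields the (row, column) pixel indices.
    # Coordinates are non-negative, so int() truncation equals floor here.
    return int(y), int(x)

print(image_coords_to_pixel_index(0.5, 0.5))         # → (0, 0)
print(image_coords_to_pixel_index(1998.47, 2522.82))  # → (2522, 1998)
```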
Python and Java API / Question about sensor and camera coordinates from api
« Last post by Phogi on June 10, 2021, 03:17:52 PM »
Hi everyone,

I searched the forum and found there are two ways to reproject 3D points into 2D image pixels: one is from sensor.calibration, the other is camera.project. What is the difference between them? Furthermore, how are the pixels counted: from the top-left corner, or are they shifted from the center by cx/cy?

Could anyone help with how camera.project works? The reason is I want to check why some markers have higher pixel errors even though they align fine, and whether there is anything abnormal in the camera matrix or projection matrix, but how can I retrieve this info?

Thanks in advance!
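One way to sanity-check the projection yourself is to assemble the 3x4 pinhole matrix P = K [R | t] from the kinds of values a calibration exposes (f, cx, cy) and a world-to-camera rigid transform, then compare its output against the measured marker coordinates. A toy sketch, with all numbers made up for illustration and lens distortion ignored:

```python
# Hypothetical intrinsic matrix K built from f, cx, cy.
K = [[1000.0, 0.0, 2000.0],
     [0.0, 1000.0, 1500.0],
     [0.0, 0.0, 1.0]]
# Hypothetical world -> camera rigid transform [R | t].
Rt = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 5.0]]

# 3x4 projection matrix P = K [R | t].
P = [[sum(K[i][k] * Rt[k][j] for k in range(3)) for j in range(4)]
     for i in range(3)]

def project(P, X):
    # Project a homogeneous world point X = (x, y, z, 1) to pixel coords.
    xh, yh, w = (sum(P[i][j] * X[j] for j in range(4)) for i in range(3))
    return (xh / w, yh / w)

print(project(P, [0.4, -0.2, 2.0, 1.0]))
```

The gap between a marker's measured 2D position and this reprojection is its pixel error; a consistently large gap on one camera usually points at that camera's calibration or pose rather than the marker.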

General / multicamera system DJI H20T
« Last post by Magda on June 10, 2021, 03:11:34 PM »
Hi, I’m trying to process a multi-camera system, the DJI H20T, which contains 3 sensors: a wide RGB camera, a zoom RGB camera, and a thermal camera. I would like to process these data and create separate orthophotos for each camera.
So far, I’ve created a main folder with 3 subfolders, each containing images from one camera. When importing images with the 'Add Folder' function, Agisoft gives me the option to import them as a multi-camera system. Images from the wide lens are treated as master; thermal and zoom images are slaves. All three sensors have the same layer index, 0. Unfortunately, the alignment results are wrong (see attachment). Can anyone advise on what the processing workflow for a multi-camera system looks like?
General / Re: build mesh from depth maps vs dense cloud
« Last post by JJ on June 10, 2021, 02:24:19 PM »
AFAICT you can only filter the Dense Cloud by confidence, right? In which case the speed advantage of generating the mesh without a dense cloud is lost.
Here's a screenshot of where I am now. I can view the model confidence, but I'm not able to select by confidence like I can with the dense cloud.

General / Re: build mesh from depth maps vs dense cloud
« Last post by JJ on June 10, 2021, 02:18:17 PM »
I think that the extrapolated parts could be filtered out and removed by the confidence filter for the depth maps based mesh generation approach.
I found the Model Confidence view, but I still can't find how to filter the depth maps.

AFAICT you can only filter the Dense Cloud by confidence, right? In which case the speed advantage of generating the mesh without a dense cloud is lost.

Python and Java API / Re: exportPoints() deletes cloud
« Last post by Alexey Pasumansky on June 10, 2021, 02:05:12 PM »
Hello forumname,

I am not observing such behavior on a random project with a dense point cloud.

Please specify which Metashape version you are using, and whether you observe similar behavior if you try to export points using the application GUI.
This is still an issue in 1.7.2.

Guys, can't you just pause the process when disk space is low?

How much disk space do you have?
Do you have another disk?
How many photos are in the project's chunk?

My project currently has 20,000 photos. We use a 500 GB M.2 SSD as the project's storage. How do I solve the free-space problem? I use the region to build one part of the model, and when I need to build another part, I just offset the region position by half its size along an axis (by script).
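The half-size region offset described above is just center += size/2 along the chosen axis (applied in the region's own axes if it is rotated). A minimal plain-Python sketch of the arithmetic, with hypothetical values and a hypothetical helper name:

```python
def offset_region_center(center, size, axis):
    # Shift the region centre by half the region size along one axis,
    # as described in the post, so the next processing pass covers the
    # adjacent part of the model.
    new_center = list(center)
    new_center[axis] += size[axis] / 2
    return new_center

center = [100.0, 200.0, 50.0]   # hypothetical region centre
size = [40.0, 40.0, 20.0]       # hypothetical region size
print(offset_region_center(center, size, axis=0))  # → [120.0, 200.0, 50.0]
```

In a real script the same arithmetic would be applied to the chunk's region object before rebuilding.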
General / Re: Metashape Professional and Coral Reef Measurement
« Last post by xabierr on June 10, 2021, 10:17:09 AM »
Hi Simon,

Amazing work! I am sure using the UWIS GPS is improving processing performance for large projects, but are you also finding significant accuracy improvements?


Python and Java API / exportPoints() deletes cloud
« Last post by forumname on June 10, 2021, 05:00:54 AM »
I have a cloud that I would like to export:

<DenseCloud '3354 points'>

When I call exportPoints(), it outputs an empty file and then deletes the points from the frame.

frame.exportPoints('', binary=False, format=Ms.PointsFormatXYZ)
ExportPoints: binary = off, format = PointsFormatXYZ, path =
point cloud size: 0 points

When I check the dense cloud again, the points are now deleted.

<DenseCloud '0 points'>

How do I export a cloud?
