Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - dobedobedo

Pages: [1]
I noticed that in the official Python scripts collection, footprints-to-shapes and automatic_masking use concurrent.futures.ThreadPoolExecutor to achieve multi-threaded processing.
However, I use the same method in my script and, no matter whether I run it from Tools > Run Script or from the Python console directly, it seems to always use only one core for processing. Could you please enlighten me as to what might be wrong?
Below is my code
Code: [Select]
def GetPointMatchList(chunk, *band, collate=True):
    point_cloud = chunk.point_cloud
    points = point_cloud.points
    point_proj = point_cloud.projections
    npoints = len(points)
    camera_matches = dict()
    point_matches = dict()
    for camera in chunk.cameras:
        total = dict()
        point_index = 0
        # If no band number is given, only process the master channel
        try:
            proj = point_proj[camera.planes[band[0]]]
        except IndexError:
            proj = point_proj[camera]
        except KeyError:
            # Skip the camera if there is no projection information
            continue

        for cur_point in proj:
            track_id = cur_point.track_id
            # Match the point track ID
            while point_index < npoints and points[point_index].track_id < track_id:
                point_index += 1
            if point_index < npoints and points[point_index].track_id == track_id:
                # Add point matches and save their pixel coordinates
                total[point_index] = cur_point.coord
                try:
                    point_matches[point_index][camera.planes[band[0]]] = cur_point.coord
                except KeyError:
                    point_matches[point_index] = dict()
                    try:
                        point_matches[point_index][camera.planes[band[0]]] = cur_point.coord
                    except IndexError:
                        point_matches[point_index][camera] = cur_point.coord
                except IndexError:
                    point_matches[point_index][camera] = cur_point.coord
        try:
            camera_matches[camera.planes[band[0]]] = total
        except IndexError:
            camera_matches[camera] = total

    if collate:
        # If collate is True, keep only tie points which have at least
        # 3 observations in the same band
        point_to_keep = set()
        new_camera_matches = dict()
        new_point_matches = dict()
        for point_index, value in point_matches.items():
            if len(value) >= 3:
                new_point_matches[point_index] = value
                point_to_keep.add(point_index)
        for camera, cam_points in camera_matches.items():
            new_camera_matches[camera] = {point: coord for point, coord in cam_points.items()
                                          if point in point_to_keep}
        camera_matches, point_matches = new_camera_matches, new_point_matches

    # camera_matches maps each camera to point indices and their projected pixel coordinates
    # point_matches maps each point to its pixel coordinates in the different cameras
    return camera_matches, point_matches

camera_matches, point_matches = dict(), dict()
with concurrent.futures.ThreadPoolExecutor() as exe:
    results = exe.map(lambda _index: GetPointMatchList(chunk, _index),
                      [_index for _index, _band_name in enumerate(bands)])
    for result in results:
        cameras, points = result
        camera_matches = {**camera_matches, **cameras}
        point_matches = {**point_matches, **points}
I also tried passing multiprocessing.cpu_count() as the max_workers argument of the executor, but that didn't work either.
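For what it's worth, CPython's GIL means a ThreadPoolExecutor only gains real parallelism when the workers release the GIL (I/O, NumPy calls, etc.); a pure-Python loop like the matching above stays on one core no matter how many workers are configured. Below is a minimal sketch of the same fan-out/merge pattern with a stand-in worker (fake_match and the band list are hypothetical, not part of the Metashape API):

```python
import concurrent.futures

def fake_match(band_index):
    # stand-in for GetPointMatchList: pure-Python work like this holds
    # the GIL, so threads give concurrency but not multi-core speedup
    return {band_index: band_index * 2}

bands = ["Green", "Red", "RedEdge", "NIR"]
merged = {}
with concurrent.futures.ThreadPoolExecutor() as exe:
    # exe.map preserves input order, so the merge is deterministic
    for result in exe.map(fake_match, range(len(bands))):
        merged.update(result)

print(merged)  # {0: 0, 1: 2, 2: 4, 3: 6}
```

A ProcessPoolExecutor would sidestep the GIL, but Metashape objects such as chunks generally can't be pickled across process boundaries, which is why the thread pool is the pattern the official scripts use.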

I would like to report that Metashape 1.8.2 crashes whenever I try to call linear algebra methods from numpy.
I tried

Code: [Select]

And they all crash the program.
I tried to use them with the Python that comes with Metashape from a terminal outside Metashape, and it didn't cause any problems.
I guess it might be something related to Qt.
The numpy version is 1.22.3, and scipy is 1.8.0.
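The exact snippet is elided above; a minimal reproducer along the same lines (the matrix is my own illustrative example) would be:

```python
import numpy as np

# trivial 2x2 system: plain numpy.linalg entry points like these crash
# inside Metashape 1.8.2 for me, but run fine in a standalone interpreter
a = np.array([[2.0, 0.0], [0.0, 3.0]])
print(np.linalg.det(a))   # 6.0
print(np.linalg.inv(a))
x = np.linalg.solve(a, np.array([4.0, 9.0]))
print(x)                  # [2. 3.]
```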

General / Fit additional correction: what coefficients it fit?
« on: April 08, 2020, 09:08:01 AM »
I noticed that since version 1.6, p3 and p4 have been removed from camera optimization, and a new option "Fit additional corrections" has been added. However, I can't find much information about which coefficients this option fits. In the 1.6.0 pre-release thread, I read that someone said it is intended especially for wide-angle lenses.
May I ask which coefficients "Fit additional corrections" enables? Is it just p3 and p4, or a totally different correction model?
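For context, in the pre-1.6 frame camera model (as I read the older manual, so this is from memory rather than authoritative) p3 and p4 scaled the tangential distortion term, e.g. for the x residual:

```latex
x' = x\left(1 + K_1 r^2 + K_2 r^4 + K_3 r^6 + K_4 r^8\right)
   + \left(P_1\left(r^2 + 2x^2\right) + 2\,P_2\,x\,y\right)\left(1 + P_3 r^2 + P_4 r^4\right)
```

with the symmetric expression for y'. So the question is whether "Fit additional corrections" restores that P3/P4 scaling or fits something else entirely.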


General / Error: Can't read file: input/output error (5)
« on: February 23, 2020, 01:27:40 PM »
Recently, when I try to build a mesh, after generating the depth maps and estimating the surface, it very often fails with the error below:
Generating mesh...
      44084677 vertices total
      88128127 faces total
    done in 98.2883 sec
  Calculating colors...
Error: Can't read file: Input/output error (5): /PATH/TO/AN/IMAGE
The indicated image seems to be different at random every time. I tried to access the indicated image and it opens normally, so it shouldn't be a file corruption issue. Could someone enlighten me as to what the problem is?
The parameters for building the mesh are:
source=depth map
surface type=arbitrary
face count=high
calculate colour=yes
reuse depth map=yes/no (tried both)

General / AUR for MetaShape Pro is available now
« on: October 01, 2019, 12:59:40 AM »
Hi all,
I just published the PKGBUILD for this package to the AUR. You can find it here:

Installing from the AUR makes it easier for users to manage this application via pacman. It's available for all Arch-based Linux distributions.
The Linux package available from the official website doesn't include icons, so I extracted them from our Windows installation and uploaded them as encoded text, since we can't upload binaries to the AUR.
I also created a MIME type association so that you can double-click PSX or other MetaShape file types to open MetaShape directly.

I noticed that MetaShape creates .agisoft and .config/agisoft folders under $HOME for saving user configurations. Are there any other files created by MetaShape? I can modify the PKGBUILD to clean up those files on removal.

Let me know if you think there's anything that can be improved.

General / Any chance to publish PhotoScan as AUR packages
« on: September 18, 2018, 11:34:25 AM »
Just wondering whether we can get AUR packages for PhotoScan (Pro) for convenient installation/upgrade on Arch-based Linux systems?
I'd be glad to make one, but I don't know the install process for PhotoScan (Pro). Is it just (1) download the tar.gz file, (2) unzip it to a specific folder (e.g. /opt), then (3) sudo ./ ? When does activation take place?

I tried the calibrate reflectance function on my old data, which was taken by a Sequoia on 2 Feb 2017. We didn't have a MicaSense panel at that time, so we deployed multiple customized panels of known reflectance on the ground. I used the black panel (4.2% reflectance) as the reference in the calibrate reflectance dialogue. However, no matter whether I select panels only or both panels and the sun sensor check, I always get a very weird result in the green band, as attached. I can send you the panel images I used, though the usable area is only several pixels.
Another question is how to convert the calibrated image to reflectance (0 to 1) for Sequoia images. I've heard two suggestions: 1) divide the DN value by 32767, and 2) divide the DN value by the maximum DN (65535?). Which is the correct answer?
My PhotoScan Pro version is 1.4.1 build 5929 64-bit if it helps.
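Just to make the two conventions concrete (to_reflectance is a hypothetical helper for illustration, not an API call):

```python
def to_reflectance(dn, divisor=32767.0):
    # hypothetical helper: scale a calibrated DN into [0, 1] reflectance,
    # clamping anything above the chosen divisor
    return min(dn / divisor, 1.0)

# the same DN gives very different reflectance under the two conventions
print(round(to_reflectance(16384), 3))           # 0.5  (divide-by-32767)
print(round(to_reflectance(16384, 65535.0), 3))  # 0.25 (divide-by-65535)
```

The factor-of-two gap between the two candidate divisors is large enough to matter for any downstream vegetation index, hence the question.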

General / Questions about new calibrate color feature in 1.4
« on: January 22, 2018, 09:30:56 AM »

I just noticed that in version 1.4 a new feature, 'calibrate colors', was added. According to the changelog:

" Added Calibrate Colors command for vignetting and brightness calibration."

Meanwhile, the original 'color correction' option in 'build Orthomosaic' is removed.

Therefore, two questions are raised:
1. Is the 'calibrate colors' feature equivalent to the 'color correction' option in the previous version (1.3.4)?
2. When performing 'calibrate colors', we can select 'data source' and 'calibrate white balance'. Is it correct that the 'data source' argument controls the vignetting correction while 'calibrate white balance' handles the brightness calibration?

I'd appreciate your help!

Python and Java API / Is it possible to modify loaded image?
« on: December 05, 2017, 09:01:09 AM »

When I tried to modify the loaded image with:
Code: [Select] = New_Image
it always shows the error:
'PhotoScan.Photo' object attribute 'image' is read-only

Although I am able to manage modifying it with:
Code: [Select]
width = camera.sensor.width
height = camera.sensor.height
for u, v in [(u, v) for u in range(width) for v in range(height)]:[u, v] = New_Image()[u, v]
The efficiency is terrible in this case: it takes around one hour to process a 1280*960 image.

The reason I do this is that I want to apply two different linear regression models to the image based on its DN values, and New_Image is the image I calculated and wish to use for further processing. The concept is like this:
Code: [Select]
if DN >= criteria:
    DN = linear_model1(DN)
else:
    DN = linear_model2(DN)
Then the modified images can be used to build the orthomosaic directly, so that I don't need to run photo alignment and the rest of the workflow again.
Any idea about how to achieve this would be very appreciated.  :)
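Leaving aside the read-only attribute, the per-pixel double loop is what kills the speed; the piecewise mapping itself vectorizes with NumPy (the slopes and intercepts below are illustrative stand-ins, not the fitted models):

```python
import numpy as np

def apply_piecewise(dn, criteria=1000.0):
    # illustrative coefficients; the real slopes/intercepts would come
    # from the two fitted regressions
    dn = np.asarray(dn, dtype=float)
    high = 1.1 * dn + 5.0   # stands in for linear_model1
    low = 0.9 * dn - 2.0    # stands in for linear_model2
    # pick per pixel: model1 where DN >= criteria, model2 elsewhere
    return np.where(dn >= criteria, high, low)

out = apply_piecewise([500.0, 2000.0])
print(out)
```

Applied to a whole 1280*960 array at once, this replaces roughly a million Python-level iterations with a handful of array operations; the remaining question is only how to write the result back into the project.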

Python and Java API / How to get pixel value for different band?
« on: November 23, 2017, 10:20:16 AM »

I just can't figure out how to get the pixel values of the different bands through the Python API.
My photos are Parrot Sequoia multi-spectral images, and I added them via the multispectral layout. I can also set the master channel to review the different bands in the photo window. However, when I access the image by
Code: [Select]
it appears to have only one channel. And if I access the pixel value by
Code: [Select][600, 400]
it gets me something like (27680, ), which seems to contain only the pixel value of the master channel.
I tried setting master_channel from 1 to 4 and accessing the pixel value like[600, 400], but the values are always the same. Moreover, camera.sensor.bands always reports the first master channel I set up (which is Green in this case) no matter how I change the master channel later.
I'm just curious how PhotoScan stores multi-spectral images and how I can access the pixel values of the different bands via the Python API.
Your reply would be much appreciated!

Python and Java API / Calculating the camera normal vector
« on: November 16, 2017, 09:47:22 AM »

I want to make sure I'm not making any mistakes in the camera vector calculation.
As far as I know, the normal vector of a camera in chunk coordinate system can be calculated by:

Code: [Select]
principal_pixel = PhotoScan.Vector(camera.sensor.width / 2, camera.sensor.height / 2)
normal_vector = camera.transform.mulv(camera.sensor.calibration.unproject(principal_pixel))
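As a sanity check on the snippet above: unprojecting the principal point yields the camera's +z axis in camera coordinates, so (up to normalization) the chunk-frame normal reduces to the third column of the rotation part of camera.transform:

```latex
\mathbf{n}_{\text{chunk}} = R_{\text{cam}} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
```

where $R_{\text{cam}}$ is the upper-left 3x3 block of camera.transform.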

Then it should work to transform the vector to world coordinate system by

Code: [Select]
world_vector = chunk.transform.matrix.mulv(normal_vector)
and I can also get the camera rotation matrix in the world by
Code: [Select]
T = chunk.transform.matrix
m = chunk.crs.localframe(T.mulp(
R = m * T * camera.transform * PhotoScan.Matrix().Diag([1, -1, -1, 1])
R = R.rotation()

I would like to know whether the vector's direction is defined with the y axis pointing north, the x axis pointing east, and the xy plane parallel to the horizon, or whether it is based on the camera orientation. If it is based on the camera orientation, can I transform it by multiplying by the transpose of the rotation matrix?
I'd appreciate your help in making this clear!

Python and Java API / Two Python scripting questions
« on: November 14, 2017, 06:44:19 AM »

I encountered some minor problems to which I haven't yet found a solution on the forum.

1. After running my script from Tools > Run Script, my project file somehow gets locked by Python or something. After I close the project, I'm prompted that the file is read-only the next time I open it. Although clicking "Yes" to continue editing solves the problem, it is still annoying.
2. From the Python API manual, I can't find a way to reorder my chunks. However, this can be done by dragging a chunk with the mouse. Is there any way I can achieve the same with Python?

If you're interested in the code I use, here's the link:

The PhotoScan version I used is 1.3.4

Looking forward to hearing some comments from you!
