Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - ManyPixels

Pages: [1] 2 3
General / Can't upload masks from script
« on: June 06, 2024, 09:17:19 AM »

I generated a Python script to manage repetitive tasks on many chunks: doc creation, images import, calibration settings and masks import.

Everything is working except the masks import. I've got 4 sensors and one mask per sensor, so I'm trying to import these masks with the generateMasks function.

The function executes with no error and the progress bar indicates that the masks are imported, but is there one more step of assigning the masks or what's going on?

Here is the script I use. I first build a dict to look up cameras by label instead of by index (which I don't know):

# Build a label -> camera key lookup so cameras can be addressed by label
camdict = {}
for cam in chunk.cameras:
    camdict[cam.label] = cam.key

chunk.generateMasks(path=r"C:\maks_1.png",
                    cameras=[camdict[label] for label in ['DSC_0001', 'DSC_0002', 'DSC_0003']])
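As a side note, the label-to-key mapping can be built in one pass with a dict comprehension. The sketch below substitutes a stand-in camera class so it runs outside Metashape (in the real script, iterate over `chunk.cameras`); the labels and keys here are made up for illustration:

```python
# Stand-in for a Metashape camera object so this snippet runs standalone.
# In the real script, iterate over chunk.cameras instead.
class FakeCamera:
    def __init__(self, label, key):
        self.label = label
        self.key = key

cameras = [FakeCamera(f"DSC_{i:04d}", i) for i in range(1, 5)]

# Map each label to its camera key in one pass
camdict = {cam.label: cam.key for cam in cameras}

wanted = ["DSC_0001", "DSC_0002", "DSC_0003"]
keys = [camdict[label] for label in wanted]
print(keys)  # [1, 2, 3]
```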

As said, I don't get an error and the function seems to execute, but I don't see the result.

When executing in GUI, I see in the log "GenerateMasks: path = C:\maks_1.png"
But when I import from the GUI, I see "ImportMasks: path = C:\maks_1.png cameras = 0,1,2"

Which would suggest that GenerateMasks ignores the camera list? I saw in the Python reference that ImportMasks was renamed to GenerateMasks, but it seems that the original function still exists while being inaccessible... How can I access the original ImportMasks function from Python, or at least the one the GUI calls? Explicitly setting masking_mode=MaskingModeFile and mask_operation=MaskOperationReplacement does not change anything; the process looks like it's doing something, but there is no result...

By the way, the number of renamed functions in the Python reference is astonishing  :o  what's the reason for this? On the user end, it's just awful to have an incomplete doc containing partial information and a forum where almost everything is outdated...

Thanks in advance!

Metashape 2.1

General / Re: Laser scanner + photos
« on: January 03, 2024, 08:23:13 PM »

Try unchecking "Intensity" in the "Show Depth Map" panel; if your points have no intensity, you'll see a black picture.

Dear Agisoft Community,

I am currently working on a project involving the calibration of a multi-camera system integrated with a laser scanner. While I have successfully completed the calibration process, I am encountering uncertainties regarding the interpretation of the calibration results, specifically in the camera calibration pane.

My primary concern revolves around the representation of angles. The system, comprising four cameras and a laser scanner, appears visually correct (see attachment, 0 is the master). However, the lack of reference materials on how angles are displayed in the calibration pane is perplexing. In the Slave Offsets, we see XYZ and OPK. When examining the XYZ translation values, they suggest an unusual orientation of the cameras, seemingly facing towards -Y. This orientation is atypical and raises questions about the accuracy of these readings.

The confusion escalates when delving into the OPK values. It is unclear how these values are applied: whether in the sequence O->P->K or P->O->K, and whether they refer to the fixed XYZ axes or to successively transformed axes (X, Y', Z''). Although a conversion function exists in the Python API, it provides little clarity without a foundational understanding of the initial parameters. Even knowing the initial orientation (where does OPK = 0,0,0 face?) does not help...
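For illustration only, here is a minimal sketch of one common photogrammetric convention, R = R_X(omega) · R_Y(phi) · R_Z(kappa), together with a matrix-to-quaternion conversion. Metashape's actual OPK convention may differ, so treat the rotation order here as an assumption to verify, not a statement of how the software works:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def opk_to_matrix(omega, phi, kappa):
    """Assumed order: kappa about Z first, then phi about Y, then omega
    about X, i.e. R = Rx @ Ry @ Rz. Verify against the actual convention."""
    return matmul(rot_x(omega), matmul(rot_y(phi), rot_z(kappa)))

def matrix_to_quaternion(R):
    """Rotation matrix -> (w, x, y, z); assumes trace > -1 for brevity."""
    w = math.sqrt(max(0.0, 1.0 + R[0][0] + R[1][1] + R[2][2])) / 2.0
    x = (R[2][1] - R[1][2]) / (4.0 * w)
    y = (R[0][2] - R[2][0]) / (4.0 * w)
    z = (R[1][0] - R[0][1]) / (4.0 * w)
    return (w, x, y, z)

# OPK = (0, 0, 0) gives the identity rotation and the unit quaternion
R = opk_to_matrix(0.0, 0.0, 0.0)
print(matrix_to_quaternion(R))  # (1.0, 0.0, 0.0, 0.0)
```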

My ultimate goal is to determine accurate lever arms in XYZ and orientations in quaternion format. This data is essential for correctly positioning the cameras and scans within an E57 file. A significant challenge I face is establishing the laser scan as the reference point. I have aligned it by converting point coordinates into spherical coordinates and interpreting these as a depth map in JPEG format with a visible color scale. This method enables alignment using Ground Control Points (GCPs). However, this approach seems effective only when the laser scan is not set as the master camera. It appears that the alignment process in a multi-camera setup disregards matches in slave cameras.
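The Cartesian-to-spherical step described above can be sketched as follows; the axis convention (azimuth measured from +X in the XY plane, elevation from the XY plane toward +Z) is an arbitrary choice for illustration, not necessarily the one actually used:

```python
import math

def cartesian_to_spherical(x, y, z):
    """Return (range, azimuth, elevation) for a scan point.
    Azimuth from +X in the XY plane, elevation from the XY plane
    toward +Z; the axis convention here is an illustrative choice."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    elevation = math.asin(z / r) if r > 0 else 0.0
    return r, azimuth, elevation

# A point one unit along +Z: range 1, azimuth 0, elevation pi/2
print(cartesian_to_spherical(0.0, 0.0, 1.0))
```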

As a workaround, I am considering translating and rotating the point cloud during the final E57 file construction. Since the orientation of the point cloud is not a concern for my purposes, this seems viable. However, this again hinges on understanding how OPK angles translate into XYZ angles and how to apply a correct translation.

I would greatly appreciate any insights or references that could help clarify these issues. Understanding these details is crucial for the success of my project.

Thank you for your assistance.

Bug Reports / Orthorectification multithreading hardcoded?
« on: August 29, 2023, 02:37:01 PM »

When generating orthomosaics, Metashape appears to orthorectify images four at a time, and the CPU load stays well below 100% (at most 30% of 32 threads while "loading" images and 16% during orthorectification itself). RAM is not the limitation (>230 GB free); in any case, Metashape never seems to take RAM status into account.

Is there a way to optimise that?


Feature Requests / Rebuild tie points without redetecting tie points
« on: July 25, 2023, 03:22:14 PM »
On some projects, surfaces are doubled and you have to rematch them by adding control points. Then I use Tools - Build Tie Points to rematch the corresponding surfaces. It would be great to be able to do this without having to recalculate all tie points, since this operation is otherwise pretty fast.


Dear Agisoft team,

I would like to discuss a challenge I've been facing while working with Metashape, especially when dealing with panoramic imaging. Currently, the initial orientation of chunks appears to be random, which often requires significant manual adjustment to correct, particularly for panoramic imagery, where you only realise that the orientation isn't perfect after generating the panoramic view (no, the preview isn't good enough for that).

Generally, photos are captured with a level horizon, or the camera corrects the orientation automatically. It would therefore be logical and beneficial for the software to consider this inherent orientation data during the initial setup of chunks (no orientation given = roll set to 0 with an accuracy of 180°; but then you have to give yaw and pitch too, which breaks this option). Even when this assumption is wrong, a random orientation can't do better, and treating the roll angle as approximately 0 won't perturb the process, so there is no downside to this.

This adjustment could significantly streamline the workflow for many users, and specifically for panoramic imaging. Panoramic images often represent a somewhat "rectangular" area, where the minimal overlap inherently indicates the horizon. Leveraging this information to initially align the panoramic chunks would save users from the often tedious task of manually adjusting the height of the panorama or trying to align the horizon based on tie points represented on a sphere.

While I understand that each image set can have its own unique challenges and may require some level of manual intervention, using the image's initial orientation as a starting point could greatly simplify the alignment process and increase the efficiency of the workflow.

I hope this suggestion can be taken into consideration for future updates. Thank you for your ongoing efforts to improve this essential tool.

100% related to this thread:

The instability of depth map calculation on AMD GPUs is horrible, and it's the only problematic step with these GPUs, which strongly suggests the problem comes from Agisoft. The only time I got something relevant, I got a message saying "Assertion "23915205205203748 (value=7.11311e+31/61.3534 encountered, computation device is unstable)" failed at line 3771!"

While we understand that all computational devices can inherently exhibit some level of instability, it is crucial to remember that robustness and fault tolerance should be an essential consideration in professional-grade software, especially one as vital to our work as Metashape. If you're not considering that, you can remove a zero from the price of the software.

In this context, an error-handling mechanism designed to catch failures during computation could significantly improve the software's reliability. By developing Metashape to handle such computational errors, failed computations could be retried from their last successful state, effectively making the main thread 'incorruptible'.

This would necessitate the creation of 'checkpoints' at various stages of computation to allow for a reliable state to revert to when errors occur. While such an architectural change could present its own challenges and potential performance trade-offs due to the overhead of maintaining these checkpoints, the enhancement in the software's robustness could justify the trade-off.
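To make the idea concrete, here is a generic sketch (not Metashape code) of a step runner that checkpoints state before each step and retries a failed step from the last checkpoint; all names are hypothetical:

```python
import copy

def run_with_checkpoints(steps, state, max_retries=3):
    """Run a list of step functions over a state dict, checkpointing the
    state before each step and retrying a failed step from that checkpoint."""
    for step in steps:
        for attempt in range(max_retries):
            checkpoint = copy.deepcopy(state)  # reliable state to revert to
            try:
                state = step(state)
                break
            except RuntimeError:
                state = checkpoint  # revert and retry the step
        else:
            raise RuntimeError(f"step {step.__name__} failed {max_retries} times")
    return state

# Demo: a step that fails once (simulating an unstable device) then succeeds
flaky_calls = {"n": 0}

def flaky_step(state):
    flaky_calls["n"] += 1
    if flaky_calls["n"] == 1:
        raise RuntimeError("computation device is unstable")
    state["depth_maps"] = "done"
    return state

print(run_with_checkpoints([flaky_step], {}))  # {'depth_maps': 'done'}
```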

As customers investing in a premium software suite like Metashape, we look for a certain level of reliability and resilience to hardware-related issues. By implementing these measures, I believe Agisoft could further strengthen its reputation and provide users with a more consistent and reliable tool for our professional needs.

I hope these thoughts can be taken into consideration for future development and updates to the software. Thank you for your time and for your ongoing work on this essential tool.

General / Re: Rolling shutter compensation
« on: July 25, 2023, 10:10:15 AM »
Okay, I'll explain the problem clearly.

Rolling shutter is a method of image capture in which a picture or frame is captured not by taking a snapshot of the entire scene at once, but rather by scanning across the scene rapidly, either vertically or horizontally. This can introduce distortions to the captured image, especially when capturing fast-moving objects or when the camera itself is moving rapidly.

During camera calibration, distortion parameters including tangential distortions (P1, P2) are typically estimated to correct for lens-induced distortions. However, these corrections are calculated under the assumption that the image is captured all at once, i.e., under a global shutter mechanism.
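For reference, the tangential (decentering) term of the Brown-Conrady model looks like this, written here in the OpenCV p1/p2 ordering; note that Metashape's manual defines its own P1/P2 convention, so the coefficient roles may be swapped between tools:

```python
def tangential_distortion(x, y, p1, p2):
    """Brown-Conrady decentering term, OpenCV p1/p2 ordering.
    (x, y) are normalized image coordinates; returns (dx, dy) offsets.
    Conventions differ between tools, so verify before comparing values."""
    r2 = x * x + y * y
    dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return dx, dy

# With p1 = p2 = 0 the term vanishes everywhere
print(tangential_distortion(0.3, -0.2, 0.0, 0.0))  # (0.0, 0.0)
```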

When a rolling shutter mechanism is used, this assumption no longer holds true. The different parts of the image are captured at slightly different times, and thus the distortions in different parts of the image can be slightly different.

As such, when rolling shutter compensation is applied, it might impact the precalibration done on tangential distortions. It effectively compensates for the time delay between the capturing of different parts of the image. Therefore, it might modify the effects of the precalibration, leading to further adjustments being required for an accurate representation of the scene.

In essence, even though precalibration helps in correcting tangential distortions, the presence of a rolling shutter effect may require additional corrections to achieve an accurate and undistorted image representation.

Given these considerations, it becomes beneficial to provide known parameters that can further assist with the rolling shutter compensation process. Notably, if the sensor readout time is known, it can be provided to help reduce the uncertainty in the correction. The sensor readout time, or the time taken by the rolling shutter to scan from one side of the image to the other, is a crucial piece of information in determining the nature and degree of distortion present.

Additionally, information on the trajectory or speed of acquisition can also be invaluable. This information can help model the relative motion of the camera and the scene during the time it takes for the rolling shutter to capture an image. This, in turn, can provide insights into the spatial variance of the distortions across the image.
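As a toy illustration of what a known readout time and speed provide (made-up numbers, not Metashape's internal model): each row's capture time, and hence the camera displacement when that row was exposed, follows directly from the readout time:

```python
def row_capture_offsets(height, readout_time, speed):
    """For a rolling shutter scanning top to bottom, return per-row
    (time_offset, displacement) relative to the first row, assuming the
    camera moves at constant `speed` (units/s). Illustrative model only."""
    offsets = []
    for row in range(height):
        # first row at t = 0, last row at t = readout_time
        t = readout_time * row / max(height - 1, 1)
        offsets.append((t, speed * t))
    return offsets

# 5-row toy sensor, 30 ms readout, camera moving at 2 m/s
for t, d in row_capture_offsets(5, 0.030, 2.0):
    print(f"t = {t * 1000:5.1f} ms, displacement = {d * 100:.2f} cm")
```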

By providing these parameters, we can minimize the unknowns in the rolling shutter compensation process, thereby reducing its potential impact on the precalibration. This ensures that the benefits of precalibration, in terms of correcting for tangential distortions, are not unduly compromised by the necessary adjustments made to compensate for the rolling shutter effect.

General / Re: Rolling shutter compensation
« on: July 24, 2023, 09:06:39 PM »
Sure, but it does not include the parameters estimated in the rolling shutter calibration. Take, for example, the trajectory and speed: they could be given by the user, but they can't be saved from a previous project. This information would help when activating rolling shutter compensation, because the distortions added by the rolling shutter can make your alignment go totally wrong. That's what I'm looking for...

General / Re: Rolling shutter compensation
« on: July 24, 2023, 04:11:31 PM »
Thanks, but I know that. The question is whether it is possible to fix some of the parameters estimated in this compensation (which is precalibration).


You just have to add points on the edges of this object (which will become GCPs). Then, if you want to set the orientation of your project using this reference object, you can input its edges as local coordinates. If you just want to scale your project, select two points, right-click on them and choose Add Scalebar. Then you'll be able to set the scale bar length.


And the Agisoft color scheme for DEMs is awesome; it would be great to be able to export those DEMs with Agisoft's color scheme (in every available format), together with the color scale bar.

Feature Requests / Re: Copy-paste mask
« on: July 24, 2023, 12:02:20 PM »

And improve consistency of file extensions when importing/exporting masks between the Agisoft interface and the OS file dialogs.

Feature Requests / Re: Request ability to change default settings
« on: July 24, 2023, 11:59:18 AM »

In fact, we don't have different cameras for each project. It would be nice to automatically save the cameras used and ask the user whether to reimport the parameters saved for a camera with the same aspect ratio and resolution (frame type, pixel pitch if not present in the EXIF data, focal length if not in the EXIF data, rolling shutter parameters, ...). Calibration is another matter, but we could save it manually and have the choice to import a calibration matching the detected setup.


Generally, all functions should be available in the batch dialog with the SAME interface as when using them outside the batch dialog. Some functions can't be used in batch, but most of them can. On complex workflows, batch is only good for automatically saving the project after executing a function; the rest of the time you always have to do manual steps.
