Show Posts


Topics - ManyPixels

Pages: [1] 2
General / Can't upload masks from script
« on: June 06, 2024, 09:17:19 AM »

I generated a Python script to manage repetitive tasks on many chunks: doc creation, images import, calibration settings and masks import.

Everything is working except the masks import. I've got 4 sensors and one mask per sensor, so I'm trying to import these masks with the generateMasks function.

The function executes with no error and the progress bar indicates that the masks are imported, but is there one more step to assign the masks, or what's going on?

Here is the script I use. I first build a dict to look up cameras by their label instead of their index, which I don't know in advance:

# Map camera labels to camera keys, since only the labels are known in advance
camdict = {cam.label: cam.key for cam in chunk.cameras}
chunk.generateMasks(path=r"C:\maks_1.png",
                    cameras=[camdict[label] for label in ['DSC_0001', 'DSC_0002', 'DSC_0003']])

As said, I don't get an error and the function seems to execute, but I don't see the result.

When executing the script in the GUI, I see in the log "GenerateMasks: path = C:\maks_1.png"
But when I import from the GUI, I see "ImportMasks: path = C:\maks_1.png cameras = 0,1,2"

This suggests that GenerateMasks ignores the camera list. I saw in the Python reference that ImportMasks was renamed to GenerateMasks, but the old function seems to still exist while being inaccessible... How can I call the original ImportMasks function from Python, since it still seems to exist? Or at least the one the GUI uses... Explicitly setting masking_mode=MaskingModeFile and mask_operation=MaskOperationReplacement does not change anything: the process looks like it's doing something, but there is no result... By the way, the amount of renamed functions in the Python reference is astonishing :o what's the reason for this? On the user end, it's just awful to have incomplete documentation with partial information and a forum where almost everything is outdated...

Thanks in advance!

Metashape 2.1

Dear Agisoft Community,

I am currently working on a project involving the calibration of a multi-camera system integrated with a laser scanner. While I have successfully completed the calibration process, I am encountering uncertainties in interpreting the calibration results, specifically in the camera calibration pane.

My primary concern revolves around the representation of angles. The system, comprising four cameras and a laser scanner, appears visually correct (see attachment, 0 is the master). However, the lack of reference materials on how angles are displayed in the calibration pane is perplexing. In the Slave Offsets, we see XYZ and OPK. When examining the XYZ translation values, they suggest an unusual orientation of the cameras, seemingly facing towards -Y. This orientation is atypical and raises questions about the accuracy of these readings.

The confusion escalates when delving into the OPK values. It is unclear how these values are applied: whether in the sequence O->P->K or P->O->K, and whether the rotations are about the fixed XYZ axes or the transformed axes (X, Y', Z''). Although a conversion function exists in the Python API, it provides little clarity without a foundational understanding of the initial parameters. Not knowing the reference orientation (where does OPK = 0,0,0 face?) does not help either...

My ultimate goal is to determine accurate lever arms in XYZ and orientations in quaternion format. This data is essential for correctly positioning the cameras and scans within an E57 file. A significant challenge is establishing the laser scan as the reference. I have aligned it by converting the point coordinates into spherical coordinates and interpreting these as a depth map in JPEG format with a visible color scale. This method enables alignment using Ground Control Points (GCPs). However, this approach only seems effective when the laser scan is not set as the master camera: the alignment process in a multi-camera setup appears to disregard matches in slave cameras.
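For reference, the XYZ-to-spherical conversion behind this depth-map trick boils down to a few lines. This is a minimal numpy sketch under assumed conventions (scanner at the origin, Z up), not the exact pipeline:

```python
import numpy as np

def xyz_to_spherical(points):
    """Convert Nx3 XYZ points (scanner at the origin, Z up) into
    (azimuth, elevation, range). Azimuth is measured in the XY plane,
    elevation from the horizon; both in radians, range in input units."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)        # -pi .. pi around the Z axis
    elevation = np.arcsin(z / rng)    # -pi/2 .. pi/2, 0 at the horizon
    return azimuth, elevation, rng
```

Quantizing (azimuth, elevation) onto a 2:1 grid and mapping range to a color scale then gives the pseudo depth map used for GCP-based alignment.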

As a workaround, I am considering translating and rotating the point cloud during the final E57 file construction. Since the orientation of the point cloud is not a concern for my purposes, this seems viable. However, this again hinges on understanding how the OPK angles translate into rotations about the XYZ axes and how to apply the correct translation.
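To make the rotation-order question concrete, here is a sketch of one common photogrammetric convention, R = Rx(ω)·Ry(φ)·Rz(κ), chained into the quaternion I ultimately need. Whether Metashape actually applies the angles in this order and around these axes is precisely the open question, so treat this as an assumption to verify against a known pose, not as documented behaviour:

```python
import numpy as np

def opk_to_matrix(omega, phi, kappa):
    """Rotation matrix from omega/phi/kappa (degrees), ASSUMING the
    common photogrammetric order R = Rx(omega) @ Ry(phi) @ Rz(kappa)."""
    w, p, k = np.radians([omega, phi, kappa])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(w), -np.sin(w)],
                   [0, np.sin(w), np.cos(w)]])
    ry = np.array([[np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    rz = np.array([[np.cos(k), -np.sin(k), 0],
                   [np.sin(k), np.cos(k), 0],
                   [0, 0, 1]])
    return rx @ ry @ rz

def matrix_to_quaternion(R):
    """Unit quaternion (w, x, y, z) from a rotation matrix.
    Simple branch only: assumes trace(R) > -1 (w not near zero)."""
    w = np.sqrt(1.0 + R[0, 0] + R[1, 1] + R[2, 2]) / 2.0
    return np.array([w,
                     (R[2, 1] - R[1, 2]) / (4.0 * w),
                     (R[0, 2] - R[2, 0]) / (4.0 * w),
                     (R[1, 0] - R[0, 1]) / (4.0 * w)])
```

Swapping the multiplication order (or transposing the result) and comparing against a pose you trust is the quickest way to pin down the actual convention.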

I would greatly appreciate any insights or references that could help clarify these issues. Understanding these details is crucial for the success of my project.

Thank you for your assistance.

Bug Reports / Orthorectification multithreading hardcoded?
« on: August 29, 2023, 02:37:01 PM »

When generating orthomosaics, it appears that Metashape orthorectifies images four at a time, and the CPU load stays far below 100% (at most 30% across 32 threads while "loading" images, and 16% during orthorectification itself). RAM is not the limitation (>230 GB free); in any case, Metashape never seems to take RAM status into account.

Is there a way to optimise that?


Feature Requests / Rebuild tie points without redetecting tie points
« on: July 25, 2023, 03:22:14 PM »
On some projects, surfaces end up doubled and you have to rematch them by adding control points. Then I use Tools - Build Tie Points to rematch the corresponding surfaces. It would be great to be able to do this without having to redetect all tie points, since the operation is otherwise pretty fast.


Dear Agisoft team,

I would like to discuss a challenge I've been facing while working with Metashape, especially with panoramic imaging. Currently, the initial orientation of chunks appears to be random, which often requires significant manual adjustment, particularly for panoramic imagery, where you only realise that the orientation isn't perfect after generating the panoramic view (no, the preview isn't good enough for that).

Generally, photos are captured with a level horizon, or the camera corrects the orientation automatically. Thus, it would be logical and beneficial to have the software consider this inherent orientation data during the initial setup of chunks (no orientation given = roll set to 0 with an accuracy of 180°; but you then have to give yaw and pitch too, which breaks this option). Anyway, when this assumption is wrong, a random orientation can't do any better, and trying a roll angle of approximately 0 won't perturb the process, so there is no drawback to this.

This adjustment could significantly streamline the workflow for many users, and specifically for panoramic imaging. Panoramic images often represent a somewhat "rectangular" area, where the minimal overlap inherently indicates the horizon. Leveraging this information to initially align the panoramic chunks would save users from the often tedious task of manually adjusting the height of the panorama or trying to align the horizon based on tie points represented on a sphere.

While I understand that each image set can have its own unique challenges and may require some level of manual intervention, using the image's initial orientation as a starting point could greatly simplify the alignment process and increase the efficiency of the workflow.

I hope this suggestion can be taken into consideration for future updates. Thank you for your ongoing efforts to improve this essential tool.

Feature Requests / Clarify/complete reference preselection
« on: July 24, 2023, 11:46:08 AM »

Alignment sometimes gives us a pretty hard time, especially when a short deadline is given for the project (almost every time...). It would be nice to add "advanced settings" to the image preselection, like:
- Generic: group size
- Reference / Estimated: neighbor distance, neighbor count and/or max height offset (would be pretty useful for interiors with multiple floors)
- Groups and links (like a folder per room, plus folders with images linking the different spaces)

In fact, those processes can't always be automatic, and usually the user knows more than the software. Hence, it's absolutely necessary to give users access to certain parameters they know exist. Grouping them in "advanced settings" would warn users that they should know exactly what to expect, or else leave the default settings. If some of those functions already exist as tweaks, then I ask you one more time to publish a list of the available tweaks! Agisoft is supposed to be professional software, so please stop assuming that users will break the software every time you give them some room on parameters. Those advanced settings could be part of the professional version only, for example.



It would be awesome to generate an index file within the Tile Map Service orthomosaic exports that can handle data in a CRS other than WGS84. Leaflet and other open-source XYZ tile APIs could easily handle that.


General / Rolling shutter compensation
« on: June 29, 2023, 04:20:36 PM »

Using a precalibrated system, I realised that the rolling shutter has a huge impact on the final camera positions. Knowing the accurate speed, heading, orientation and position of each frame, as well as the sensor readout time, is there a way to help the software become more accurate? Position already helps a lot, but sometimes the alignment still goes wrong.
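For scale, the first-order effect is easy to quantify: each sensor row is exposed at a slightly different time, so with a known speed the displacement of a row relative to the first one follows directly. This is my own back-of-the-envelope model, not how Metashape's compensation works:

```python
def row_offset(row, n_rows, readout_time, speed):
    """Displacement (m) of sensor `row` relative to row 0, assuming a
    linear top-to-bottom readout lasting `readout_time` seconds and a
    constant platform speed (m/s) during that readout."""
    dt = readout_time * row / n_rows   # delay since row 0 was read
    return speed * dt
```

At 20 m/s with a 30 ms readout, the last row of a 4000-row sensor is displaced by roughly 0.6 m, which is far beyond the expected accuracy, hence the impact on alignment.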


Bug Reports / Lot of mismatches with 14bit marker n° 63
« on: December 16, 2022, 05:14:37 PM »
I'm using 14-bit markers, and there are a lot of mismatches with some markers below number 100. By chance I'm not using those, but I have to delete all the mismatches every time I use markers.

The most problematic marker is number 63, but there are problems with markers 16, 19 and others too, for example.

A workaround to fix this issue would be appreciated. Thanks!

Bug Reports / Panoramas are cropped around 45 degrees down
« on: December 14, 2022, 07:06:02 PM »
When I generate panoramic images, although the preview shows that my images cover the ground down to roughly 10 degrees above nadir, the generated image is cut off at around 45 degrees above nadir (the missing part is indeed filled with nothing; the export is in 2:1 format).
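For reference, in a 2:1 equirectangular export each pixel row maps linearly to an elevation angle, so the crop is easy to quantify (a small helper of my own, not a Metashape function):

```python
def row_to_elevation(row, height):
    """Elevation (degrees) of pixel row `row` (0 = top / zenith) in an
    equirectangular panorama with `height` rows (2:1 aspect ratio):
    +90 deg at the top edge, -90 deg (nadir) at the bottom edge."""
    return 90.0 - 180.0 * (row + 0.5) / height
```

A cut at 45 degrees above nadir (elevation -45°) therefore means the bottom quarter of the image rows is left empty.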

This is very annoying, because the images are correctly aligned and this area also contains valid tie points. The tripod and other technical elements are visible and are not part of the scene, but that does not matter: for analysis purposes I need to see this area.

Is there a solution to this problem? Note that I cannot use the "setup boundaries" function, because it will not solve my problem in any case (it outputs the same cropped image). I have to generate a lot of panoramas, and this option is not available in the batch process tool. I take this opportunity to also say that the layout, naming and availability of settings between batch and standard use should be harmonized! Another example: it is sometimes possible to export data in a specific CRS via the batch tool but not via the standard functions... It is always guesswork...

Anyway, I have to export images in 2:1 format, so cropped output is not an option; I need the full area.
Thanks in advance !

General / Pricing in 2.0
« on: December 05, 2022, 12:55:04 PM »
With the arrival of LIDAR and laser data integration, wouldn't it be time to split the sale into 3 versions?

1. Standard – $180
2. Pro – $900, functions identical to 1.8 (without the bugs)
3. Ultra – $3500, functions identical to 2.0 (without the bugs)

This way, no one would have paid too much for the features they need. Pros would become Ultra, and a new category would appear. For sure, some customers will say "I bought Agisoft 6 months ago and don't need to import laser scans, so I've paid an extra $2600 for nothing", but there will surely be solutions to that type of problem. I see no other issue, and it's a win-win operation.

General / Manual image matching
« on: December 05, 2022, 12:52:35 PM »
One more feature that has already been requested: the ability to manually tell the software which images match. It is not uncommon for two images not to match when they clearly should. You would need to be able to tell the software to "match these two images together" and find the matches by any means possible. The only way to do this currently is via markers, but that is very time-consuming, and manual operation is expensive compared to automatic operations.

General / Can’t set mesh CRS in empty project
« on: December 05, 2022, 12:51:54 PM »
When opening an empty project, setting the CRS wherever possible, and trying to import a mesh with a well-defined coordinate system, it is only possible to import it in local coordinates. On the other hand, if you first import a dense cloud, regardless of its CRS, it is then possible to import the mesh and choose its CRS (even one different from the dense cloud's).

General / Better overall RAM management
« on: December 05, 2022, 12:50:45 PM »
General memory management is simply abysmal. How can you run out of memory with 256 GB available? It may be understandable to need 64 GB for large projects, but it should never be more, since no step requires processing more than 200 images simultaneously (even at very high resolution, and counting the depth maps, points and all related data, that is not 64 GB).

There is nothing worse than launching a process and watching the RAM fill up endlessly until the software crashes. This should never happen. Many programs use all this RAM when it is available, but when it is not, they also do very well! The ideal is to always be able to set a usable maximum. It's the same with processor cores: it's useful to be able to free up one or two in order to keep working alongside. You can look at astrophotography processing software for programs that are good at managing resources; Siril and Astro Pixel Processor are very good examples.

General / Very poor and bad documentation
« on: December 05, 2022, 12:49:38 PM »
Alexey Pasumansky, after writing 14,000 forum posts, have you ever considered rewriting the documentation properly rather than repeating the same things over and over on the forum? I have never found anything useful in the documentation. You may find this extreme, but it is the truth. The functions and their options are explained so briefly that the only way to know which option to choose is to blindly try all the possible combinations and compare the results. And as far as the Python API goes, it's guesswork and prayer.

The log spits out dozens of lines, but we don't really understand what's going on. Complete documentation, with an exhaustive description of the functions as well as of the lines that appear in the log, would be welcome and would free up a lot of your time to deal with the real problems!

Nobody will blame you if you focus your resources for 6 months on fixing everything that is problematic rather than adding new features with their new bugs. On the other hand, paying $3,500 and never getting to the end of a project without hitting problems where there shouldn't be any gives a very bad image.
