Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Gall

Pages: [1] 2 3 ... 6
1
General / Re: Agisoft Metashape 1.5.0 pre-release
« on: October 09, 2018, 01:07:02 PM »
Can we have more info about the Metashape Python 3 module? What is it for, and how can we install it given that it must check the license?

I tried to install it in a virtualenv (Python 3.6.6) under Ubuntu 16.04 (without Metashape) but maybe dependencies and/or environment variables are missing:
Code: [Select]
In [1]: import Metashape
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-ec7fe42b313a> in <module>()
----> 1 import Metashape

~/.virtualenvs/metashape/lib/python3.6/site-packages/Metashape/__init__.py in <module>()
      1 import importlib
      2
----> 3 from .Metashape import *
      4
      5 # copy private variables

ImportError: ~/.virtualenvs/metashape/lib/python3.6/site-packages/Metashape/Metashape.so: undefined symbol: glColor3dv
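Since the missing symbol is an OpenGL function, a possible workaround (untested against this exact build, and assuming a system libGL is installed) is to preload libGL with global symbol visibility before importing the module:

```python
import ctypes
import ctypes.util

# Hypothetical workaround sketch: glColor3dv lives in the system OpenGL
# library, so loading libGL with RTLD_GLOBAL first may let Metashape.so
# resolve the symbol at import time.
libgl = ctypes.util.find_library("GL")
if libgl is not None:
    ctypes.CDLL(libgl, mode=ctypes.RTLD_GLOBAL)
# import Metashape  # try the import again after preloading
```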

2
Feature Requests / Re: Improvement of the Python API documentation
« on: June 07, 2018, 10:28:45 AM »
Well, I understand this is a matter of where photogrammetry sits in your workflow. In mine, it is just a small brick that has to be used by many people working on interpretation or downstream methods (and they can't afford to spend much time on pre-processing because there are a lot of projects), so this exaggerated case could actually happen, precisely because of the automation.

What you call a lack of credibility probably comes from the fact that you may not have a big pipeline around Photoscan. Spending a lot of time making sure a system will not hit a worst-case scenario doesn't mean that scenario cannot happen.

Now, if this is not a problem in your case, good for you.

3
Feature Requests / Re: Improvement of the Python API documentation
« on: June 06, 2018, 05:28:47 PM »
I think you don't understand how API documentation is supposed to be used. You're not supposed to just list all the signatures and expect the user to study thousands of lines of logs to match procedures to specific functions for every release. That is, of course, if you don't want to waste that user's time, and the time of every other person (support included) who will have to help them.

I don't complain about missing functions/classes but about wanting a more complete documentation.
Also, studying the basic workflow is irrelevant here. I took this example because the change log is missing some important information, namely that the function was split into two distinct functions. That is a key piece of information that Python API users need to notice right away, not after N system failures, and not after 48 hours of tests and digging to see whether any part has been altered in any way.

That is something every developer learns the hard way once they have to rely on more than one piece of software or library.

To actually add to this, the problem goes well beyond the scope of your answer, because some scripted procedures may not be inferable just from knowing the complete GUI workflow and looking at the log. It is always frustrating to waste days trying to script a basic behavior available through a single menu, only to realize that it is either not possible or crashes the software, then file a bug report and wait months, all while this bad behavior had been present for several releases because nobody had the courage to report it before, not being Python experts able to penetrate a very obscure documentation.

Please don't take it personally if I sound harsh.

4
Feature Requests / Re: Improvement of the Python API documentation
« on: April 11, 2018, 11:07:10 AM »
While I second this request, I would add that improving the content of the documentation is also mandatory.

At the moment, function and attribute descriptions are nearly nonexistent, and most of the time you have to ask how something should be used, how it has changed, or how it should behave. That is a lot of wasted time for everyone.

For example:
Quote
  • Added Chunk.buildDepthMaps(), Chunk.importPoints(), Chunk.refineModel() and Chunk.removeLighting()
    methods
buildDenseCloud: Generate dense cloud for the chunk.
buildDepthMaps: Generate depth maps for the chunk.
I would never have noticed that the dense cloud step had been split into two parts if I had not randomly seen a post about "why the dense cloud step doesn't work anymore". No mention, no description of what is required before using each method, nothing (the quote above is the complete documentation for the two methods).
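For anyone landing here with the same breakage, here is a minimal sketch of the split workflow (guarded so the snippet only runs inside the PhotoScan Python environment; all parameters are left at their defaults):

```python
# Minimal sketch, assuming a PhotoScan 1.4 environment with an open project;
# the guard makes the snippet harmless elsewhere.
try:
    import PhotoScan
except ImportError:
    PhotoScan = None  # not running inside PhotoScan

if PhotoScan is not None:
    chunk = PhotoScan.app.document.chunk
    chunk.buildDepthMaps()   # new prerequisite step in 1.4
    chunk.buildDenseCloud()  # now only builds the cloud from the depth maps
```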

It is a pity that the Python API does not receive more attention, while it is, in my opinion, one of the greatest strengths of Photoscan.

5
Bug Reports / Re: Python API inconsistent behavior of chunk attributes
« on: April 11, 2018, 10:36:46 AM »
This seems great. You should add a few words to the Python documentation for the respective "active attributes" to explain this behavior.

6
Bug Reports / Re: Python API inconsistent behavior of chunk attributes
« on: April 09, 2018, 04:27:51 PM »
Hello Alexey,

that's what I now understand from playing with it, but (it might be a semantic issue) using clear() is not sufficient, as it clears the instance but does not remove it (and the cleared instance will never be used again).
In my code sample I used chunk.model = None (for the optimize issue mentioned) and chunk.orthomosaic.clear(), and the result is almost the same.

Besides the "no way to check if a step was done" issue, I guess the correct way to clear such an instance is to use Chunk.remove(). I don't see the benefit of the clear() method if the instance is not reused, not removed, and cannot be checked for being empty.

7
Bug Reports / Python API inconsistent behavior of chunk attributes
« on: April 09, 2018, 12:44:05 PM »
In Photoscan 1.4.1, there is no longer any way to reliably check whether a given step has been done.
When you create a new project, point_cloud, dense_cloud, model, etc., are set to None. After you run the alignment, the other attributes are still None, but once you run the camera optimization, dense_cloud and model are set to a null object of their respective type.

The issue is that it is now really cumbersome to check whether the steps have been done, as the attributes are not guaranteed to be None when a step hasn't run, and there is no attribute/method to check that condition. I guess this is due to the fact that chunks now hold a list of all those products, which also implies that we can no longer remove a product by setting its attribute to None.

Another weird behavior is that the list of products keeps null instances around once they are cleared and new ones are added. In this project I have only one model, and the orthomosaic was cleared.
Code: [Select]
chunk.models
Out[10]: 2018-04-09 11:24:55 [<Model 'empty'>, <Model '332650 faces, 167072 vertices'>]

chunk.elevations
Out[11]: 2018-04-09 11:25:06 [<Elevation '2977x1689'>]

chunk.orthomosaics
Out[12]: 2018-04-09 11:25:14 [<Orthomosaic ''>, <Orthomosaic ''>]
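A crude way to ignore those leftover null instances (the helper name and the repr heuristic are mine, based only on the outputs above):

```python
def non_empty(products):
    """Keep only products that hold real content.

    Hypothetical heuristic: cleared instances print as <Model 'empty'> or
    <Orthomosaic ''> in the console, so filter on the repr. Fragile, but
    there is currently no attribute to test instead.
    """
    return [p for p in products
            if "'empty'" not in repr(p) and "''" not in repr(p)]
```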


8
    In Photoscan 1.4.1, updating attributes of a photo instance does not trigger a "project change". This results in the modifications being lost when saving the project.
    I haven't tested a lot of things, but I can see it affects these attributes:
    • Photo.path
    • Photo.meta

    This was not working in earlier versions either.

9
General / Re: how to create tiff input from hyperspectral array
« on: October 04, 2017, 04:45:33 PM »
I had a similar issue in one of my Python processes (though not Photoscan related), and the only way I found to create a multi-channel image (more than 4 channels) was to use the tifffile module by Christoph Gohlke.
http://www.lfd.uci.edu/~gohlke/pythonlibs/#tifffile

Code: [Select]
import tifffile

# Here data shape is (height, width, depth)
tifffile.imsave(filename, data, compress=5, planarconfig='contig')

In this example, the compress argument enables lossless zlib compression (remove it for no compression at all). The planarconfig value is required to save a (height, width, depth) image, as tifffile usually expects (sample, height, width).
However, tifffile does not handle saving SubIFD, ExifIFD, or GPSIFD tags, so if you need to write them you will have to use pyexiv2 or py3exiv2 (depending on your Python version).

This module does not have online documentation, but you can check the docstrings of tifffile.imsave and tifffile.TiffWriter.save, in IPython or similar, for more information and/or to tweak the above example.

10
General / Re: "error reading project"
« on: September 27, 2017, 03:54:57 PM »
It might be possible to save the project if it is indeed a matter of mismatched file references.

What you can do is verify one of the files in the project directory (the directory whose name ends in ".files"):
  • open the directory PROJECT_NAME.files/0/0/
  • unzip frame.zip (you obtain doc.xml; make a backup just in case)
  • open doc.xml in Notepad++ (or similar)
  • at the bottom of the file you will find the path references (masks, thumbnails, etc.). Check that those references exist in the directory and fix the paths if needed (or remove a line if the reference does not exist at all)
  • once you're done, save, then delete frame.zip and zip your updated doc.xml as frame.zip
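The unzip/rezip steps can also be scripted; here is a sketch using only the standard library (the helper names are mine, and the chunk directory path is an assumption following the layout above):

```python
import os
import zipfile

def extract_doc(chunk_dir):
    # chunk_dir is e.g. "PROJECT_NAME.files/0/0"; pulls doc.xml out of
    # frame.zip so it can be edited in place.
    with zipfile.ZipFile(os.path.join(chunk_dir, "frame.zip")) as zf:
        zf.extract("doc.xml", chunk_dir)

def repack_doc(chunk_dir):
    # Overwrites frame.zip with the (edited) doc.xml as its only member.
    with zipfile.ZipFile(os.path.join(chunk_dir, "frame.zip"), "w",
                         zipfile.ZIP_DEFLATED) as zf:
        zf.write(os.path.join(chunk_dir, "doc.xml"), arcname="doc.xml")
```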

If you have multiple chunks (PROJECT_NAME.files/0/0/, PROJECT_NAME.files/0/1/, etc.), make sure to check all of them.

If that does not fix the problem (or you're not sure what to change), don't hesitate to post the frame.zip and chunk.zip files (from the parent directory) and describe the content of the directories.

11
Python and Java API / Re: Sparse Cloud
« on: September 14, 2017, 10:51:33 AM »
The source argument of the buildModel() method can take the following values:
Code: [Select]
PhotoScan.PointCloudData # Sparse cloud, default if there is no dense cloud
PhotoScan.DenseCloudData # Dense cloud, default if the dense cloud is available

So in your case, simply use PointCloudData instead of DenseCloudData.
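A minimal guarded sketch of the call (harmless outside PhotoScan; other parameters are left at their defaults):

```python
# Sketch, assuming an open PhotoScan project with an aligned chunk.
try:
    import PhotoScan
except ImportError:
    PhotoScan = None  # not running inside PhotoScan

if PhotoScan is not None:
    chunk = PhotoScan.app.document.chunk
    # Build the mesh from the sparse cloud instead of the dense cloud.
    chunk.buildModel(source=PhotoScan.PointCloudData)
```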

12
Python and Java API / Re: How to use optimizeCamera
« on: September 14, 2017, 10:41:44 AM »
It sounds like you are not using the 1.3 branch of Photoscan.
If you are still on version 1.2.x, the method signature is
Code: [Select]
optimizeCameras(fit_f=True, fit_cxcy=True, fit_b1=True, fit_b2=True, fit_k1k2k3=True, fit_p1p2=True, fit_k4=False, fit_p3=False, fit_p4=False)
You can refer to the documentation for the version 1.2 or, better, upgrade your installation to the latest version.

13
Python and Java API / Re: addPhotos Error
« on: September 14, 2017, 10:34:21 AM »
When you start a new project there is no chunk by default.
In the API, doc.chunk references the active chunk or None if there is no chunk in the project.
You can use
Code: [Select]
chunk = doc.addChunk()
# Or this form if you may have already created a chunk
chunk = doc.chunk or doc.addChunk()

to create a new chunk to work with (it will also be set automatically as the active chunk).

14
Bug Reports / Image quality estimation for multi plane layout
« on: September 12, 2017, 03:42:24 PM »
The image quality estimation feature does not work as intended on multi-plane projects. The index seems to be calculated from the correct plane (according to the master channel), but it is stored in the wrong place.

I'll illustrate with a simple 2-plane project.

From the GUI
Launching the estimation for all cameras works as "intended" (meaning it only computes the quality for the plane set as the master channel; I'm not sure whether it should compute all the planes at once or not). Now, after estimating the quality for the second plane, the results are only visible (and overwrite the previous ones) when the first plane is set as the master channel.
This is clearly not intended.

From Python
From Python we can see the same behavior:
Code: [Select]
>> chunk.estimateImageQuality()
>> chunk.cameras[0].planes[0].photo.meta['Image/Quality'], chunk.cameras[0].planes[1].photo.meta['Image/Quality']
('0.9175', None)
The default parameter, which should represent all cameras, only processes the first plane.

Code: [Select]
>> chunk.estimateImageQuality([cam.planes[1] for cam in chunk.cameras])
>> chunk.cameras[0].planes[0].photo.meta['Image/Quality'], chunk.cameras[0].planes[1].photo.meta['Image/Quality']
('0.9175', None)
Passing the second plane as the parameter also only computes the first plane (although, for unknown reasons, I got the second plane's results a couple of times, but stored in the first one; I can't reproduce it, though).

But, as a workaround, we can at least use the method from PhotoScan.utils:
Code: [Select]
>> for cam in chunk.cameras:
>>     cam = cam.planes[1]
>>     cam.photo.meta['Image/Quality'] = str(PhotoScan.utils.estimateImageQuality(cam.photo.image()))
>> chunk.cameras[0].planes[0].photo.meta['Image/Quality'], chunk.cameras[0].planes[1].photo.meta['Image/Quality']
('0.9175', '0.8142930269241333')
This way, the indices are correctly visible in the photo panel for each plane (when switching the master channel).

15
exportCameras() does not seem to default to the chunk crs when exporting in the OPK format (I haven't tested the other formats, though).

Here's a small example on a project in WGS84:
Code: [Select]
# Default projection
chunk.exportCameras(path, format=PhotoScan.CamerasFormatOPK)
produces
Code: [Select]
# Cameras (788)
# PhotoID, X, Y, Z, Omega, Phi, Kappa, r11, r12, r13, r21, r22, r23, r31, r32, r33
0150_530nm.tif 67.7274358037419120 -48.1576494834498305 174.9012951258622763 0.4840078980142850 -9.2832160972426454 -100.7925499560861482 -0.1848011306143397 -0.9820213886105380 -0.0385036937850455 0.9694462852253063 -0.1885854982745571 0.1568738662218514 -0.1613147302217500 -0.0083367950657243 0.9868678004988898
0151_530nm.tif 62.3380782906425210 -46.9575464451915039 175.2130322240972475 0.6908263376255074 -8.3691433348896744 -101.1224507389953686 -0.1908521733696799 -0.9808073005336287 -0.0399059787501298 0.9707680346967246 -0.1946143621493194 0.1404801511113671 -0.1455502343904970 -0.0119285064090072 0.9892789495403823
0152_530nm.tif 56.6615589174082501 -45.8102605583421294 175.6731439080446364 1.8430815242583256 -4.6073431256402895 -99.3065186929488277 -0.1611935265354484 -0.9859090012004662 -0.0447223473776283 0.9836484497408970 -0.1641819193647491 0.0740271887622177 -0.0803266725651923 -0.0320583640506908 0.9962529231921973
0153_530nm.tif 51.3537190435335162 -44.5989861775561707 176.1177954663481842 3.1369944021207945 -0.8980827195184733 -98.7285768762521059 -0.1517351813837043 -0.9868070568120064 -0.0564647443674392 0.9882969011829651 -0.1523742218509173 0.0071646093736062 -0.0156738585741755 -0.0547168085815809 0.9983788865035377

while specifying the projection:
Code: [Select]
chunk.exportCameras(path, format=PhotoScan.CamerasFormatOPK, projection=chunk.crs)
produces
Code: [Select]
# Cameras (788)
# PhotoID, X, Y, Z, Omega, Phi, Kappa, r11, r12, r13, r21, r22, r23, r31, r32, r33
0150_530nm.tif 0.1240930371033702 44.3175085612113122 174.9018362440360477 0.4834784101344093 -9.2838284752460627 -100.7931457061247755 -0.1848108878342219 -0.9820197668131644 -0.0384982249739585 0.9694426699590626 -0.1885943468338339 0.1568855697792619 -0.1613252782442067 -0.0083276605681559 0.9868661533708059
0151_530nm.tif 0.1240254839732798 44.3175193616320655 175.2135095180995847 0.6903244396727838 -8.3697089332840076 -101.1229956658497713 -0.1908611289183387 -0.9808057763987279 -0.0399006071495888 0.9707648096377509 -0.1946225752497196 0.1404910586911091 -0.1455600008140926 -0.0119198232392837 0.9892776172424735
0152_530nm.tif 0.1239543313536925 44.3175296867339483 175.6735599778699566 1.8426306170565241 -4.6078669709574429 -99.3069998413609483 -0.1612016680603882 -0.9859079269681957 -0.0447166831934479 0.9836463712775572 -0.1641899052443549 0.0740370940025618 -0.0803357858431893 -0.0320505001072466 0.9962524413801114
0153_530nm.tif 0.1238877999648306 44.3175405876056061 176.1181580660826569 3.1365864012938212 -0.8985671873743225 -98.7290007634679938 -0.1517424729064216 -0.9868062613393860 -0.0564590515119400 0.9882956475530297 -0.1523819286930088 0.0071736208107785 -0.0156823130941179 -0.0547096919127021 0.9983791437459195

Aren't all methods taking a projection parameter supposed to default to the chunk crs?
