Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - bmartin

Pages: [1]
General / Can Photoscan improve alignment by using drone inertial data?
« on: February 28, 2018, 07:06:20 PM »

I have a very difficult image sequence acquired underground by a drone, presenting various challenges (non-uniform illumination, some fast movement, dust, etc.).  I worked very hard to get the alignment to work, but it succeeded in aligning only 75% of the images, and even within that 75% there were many problems, as the computed camera path is nothing like the real flight path!

I thought that a great way to help the alignment would be for Photoscan to also use the inertial data, either to better constrain the alignment solutions or to reject them completely when deemed impossible.  In doing so, you would essentially get a Visual-Inertial SLAM algorithm like the ones used in autonomous vehicles, instead of relying on photogrammetry alone.

Is this something that Photoscan could do?  If so, what format should this inertial data be in for Photoscan to use it?
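For the format question: one plausible route is to dump the drone's IMU attitude into a simple delimited text file keyed by image label, which is the kind of input a reference import can consume.  A minimal parsing sketch; the column layout and the `parse_imu_csv` helper are hypothetical, not part of the PhotoScan API:

```python
import csv
import io

def parse_imu_csv(text):
    """Parse a per-image attitude log into {label: (yaw, pitch, roll)}.

    Assumes a hypothetical layout: label,yaw,pitch,roll in degrees,
    with '#' comment lines.
    """
    out = {}
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].startswith('#'):
            continue  # skip blank and comment lines
        label = row[0]
        yaw, pitch, roll = (float(v) for v in row[1:4])
        out[label] = (yaw, pitch, roll)
    return out

log = """# label,yaw,pitch,roll
IMG_0001.jpg,12.5,-3.0,0.8
IMG_0002.jpg,14.1,-2.7,0.9
"""
refs = parse_imu_csv(log)
```

The per-image dictionary could then be matched to camera labels before handing the angles to whatever reference-import mechanism the software provides.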



Python and Java API / Re: How to show Python script progression to user?
« on: February 16, 2018, 10:56:35 PM »
Thanks!  That's exactly what I needed!


Python and Java API / How to show Python script progression to user?
« on: February 09, 2018, 12:15:55 AM »

I have a script which automatically masks all the images in a chunk.  On my computer it runs at about 3 images per second, so masking all the images in a project with 2315 images takes more than 12 minutes. If I can't inform the user of the masking progress, there is a very good chance that he will think the program is stuck and will simply kill it...

I tried adding a print statement to my mask loop, but ALL the printing is done AFTER the function finishes, defeating the purpose of printing to inform the user of the processing progress. See below:

def CreateMasks(chunk):
    imagesMaskedCount = 0
    for cam in chunk.cameras:
        print("Image(" + str(imagesMaskedCount) + ") masked...")  # Only shows up after the function returns!!!
        src =

        openCVImg = AgisoftOpenCVUtils.AgisoftToOpenCVImage(src)
        gray = cv2.cvtColor(openCVImg, cv2.COLOR_BGR2GRAY)

        whiteColor = (255, 255, 255)

        # Create a mask supposed to cover everything that is white.
        mask = PhotoScan.Utils.createDifferenceMask(src, whiteColor, tolerance=5, fit_colors=False)
        m = PhotoScan.Mask()
        m.setImage(mask)

        # Assign this mask to our camera.
        cam.mask = m
        imagesMaskedCount = imagesMaskedCount + 1

Is there a way to make the printing work in "real time" instead of in a "batch" at the end of my long function?
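For reference, the usual culprit is output buffering: in plain Python, `print(..., flush=True)` forces each line out immediately. A minimal sketch of the loop structure; the `create_masks_with_progress` helper and its callback are illustrative, not PhotoScan API, and whether the GUI console also needs an explicit event-loop kick such as `PhotoScan.app.update()` is an assumption to verify:

```python
def create_masks_with_progress(items, process_one):
    """Process items one by one, reporting progress after each."""
    total = len(items)
    done = 0
    for item in items:
        process_one(item)
        done += 1
        # flush=True pushes the line out now instead of when the buffer fills.
        print("Masked %d / %d images..." % (done, total), flush=True)
        # In the PhotoScan GUI console, letting the event loop run may also
        # be needed before the text appears (assumption):
        # PhotoScan.app.update()
    return done

n = create_masks_with_progress(["IMG_0001.jpg", "IMG_0002.jpg"], lambda name: None)
```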



Bug Reports / Re: Function createDifferenceMask not working correctly?
« on: January 19, 2018, 09:35:18 PM »
Thanks PolarNick!

That was exactly what I needed!  No need to use the disk as a scratchpad to get the representation right between Photoscan and OpenCV.

This trick, and its inverse, should be part of the Python API documentation!
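The in-memory trick being praised here is presumably a raw byte-buffer round trip between the two libraries. A minimal numpy sketch of the idea, with PhotoScan itself omitted; the `fromstring` rebuild step mentioned in the comments is an assumption to check against the API reference:

```python
import numpy as np

w, h, channels = 4, 3, 3

# Stand-in for the byte string returns via tostring():
# row-major interleaved RGB bytes.
raw = bytes(range(w * h * channels))

# PhotoScan -> OpenCV: reinterpret the buffer as an H x W x C uint8 array,
# with no file on disk involved.
img = np.frombuffer(raw, dtype=np.uint8).reshape(h, w, channels)

# OpenCV -> PhotoScan: serialize back to the same byte layout; something
# like PhotoScan.Image.fromstring(...) would then rebuild the image
# (assumption, not a confirmed signature).
round_trip = img.tobytes()
```

Note that `np.frombuffer` gives a read-only view; copy it first if OpenCV needs to write into the array.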

Bug Reports / Re: Function createDifferenceMask not working correctly?
« on: January 16, 2018, 11:51:05 PM »
Hello Alexey,

I looked at your example:

Code:
import PhotoScan, numpy

camera = chunk.cameras[0]
mask = camera.mask.image()
bytes = mask.tostring()
pixels = numpy.fromstring(bytes, dtype=numpy.uint8)
(unmasked,) = numpy.nonzero(pixels == 255)

In my case, I want to pass the original image to OpenCV, and it is RGB, not simply one unsigned char per pixel.  How can I get a numpy array of RGB pixels to pass to OpenCV?

Code:
cam = chunk.cameras[0]
src =
bytes = src.tostring()
pixelsRGB = numpy.fromstring(bytes, dtype=numpy.uint8)  # WHAT dtype TO USE TO GET AN RGB PIXEL ARRAY?
gray = cv2.cvtColor(pixelsRGB, cv2.COLOR_BGR2GRAY)

Finally, once I'm finished with the image processing, how do I put back my own mask from the numpy array of 0/255 values I will get?
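A sketch of the write-back direction using only numpy; the commented PhotoScan calls (`Image.fromstring`, `Mask().setImage`) are assumptions to verify against the API reference, not confirmed signatures:

```python
import numpy as np

h, w = 3, 4

# Suppose image processing produced a boolean "keep this pixel" array.
keep = np.zeros((h, w), dtype=bool)
keep[1, 1:3] = True

# Convert to the single-channel 0/255 uint8 layout a mask image expects.
mask_pixels = np.where(keep, 255, 0).astype(np.uint8)

# Serialize to raw bytes; in PhotoScan this buffer could then be wrapped
# roughly like this (assumption, check the API reference):
#   img = PhotoScan.Image.fromstring(mask_bytes, w, h, ...)
#   m = PhotoScan.Mask()
#   m.setImage(img)
#   cam.mask = m
mask_bytes = mask_pixels.tobytes()
```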



Bug Reports / Re: Function createDifferenceMask not working correctly?
« on: January 12, 2018, 07:43:42 PM »
"So in some cases small unmasked areas may appear."

In that case, I will probably need to code my own masking with OpenCV.  Could someone tell me the right way to pass the image data contained in a PhotoScan.Photo object to OpenCV?  From my last post, it seems OpenCV expects a numpy array, but I don't know how to access it.



Bug Reports / Re: Function createDifferenceMask not working correctly?
« on: January 11, 2018, 11:44:20 PM »

I did another test using the same script, but with concentric black and white ellipses.  I again got very strange (and wrong) results, where the two innermost white ellipses are NOT masked...  See the attached images.

I don't understand, as the createDifferenceMask function does not use a seed and should not rely on the connectivity of the pixel components, as long as they meet the expected color within tolerance.

It seems a simple linear image traversal should not miss any pixel meeting the criteria.  Could it be caused by some post-processing used to get a polygonal representation of the mask border instead of a more memory-hungry binary image?

I tried using OpenCV to get the equivalent result (see the attached script), but the Python interpreter kept telling me the image I gave to the OpenCV function was not a numpy array:

   2018-01-11 10:54:16 Traceback (most recent call last):
   2018-01-11 10:54:16 File "D:/Corriveau/Python/", line 14, in <module>
   2018-01-11 10:54:16 gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
   2018-01-11 10:54:16 TypeError: src is not a numpy array, neither a scalar

What am I doing wrong?  How can I access the real pixel array, so I can do some processing with OpenCV to bypass the problem with the createDifferenceMask function?


Bruno Martin

Bug Reports / Function createDifferenceMask not working correctly?
« on: January 09, 2018, 09:54:49 PM »

I tried using the createDifferenceMask function to mask the WHITE parts of the attached test image named TestImage.png.  I get weird results where the mask also covers blue regions and doesn't mask all the white regions!

To reproduce this bug, just edit the attached script to change the location of TestImage.png to where you saved it on your system (line 13) and run the script.  You should get the same results I got (see MaskPartiallyWorkingBug.png).

Can somebody confirm whether this is a known bug, something new, or expected (strange) behavior?  Is there a workaround?  I really need to automatically generate good masks for lots of images with white mobile structures, and I'm afraid this could be a problem...
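As a point of comparison, the expected behavior can be re-implemented in a few lines of numpy. The `white_mask` helper below is a sketch of what a seed-less, connectivity-free tolerance test should produce, not PhotoScan's actual implementation:

```python
import numpy as np

def white_mask(rgb, tolerance=5):
    """Return a 0/255 uint8 mask of pixels within `tolerance` of pure white.

    Every pixel is tested independently, so connectivity of the white
    regions cannot matter here, unlike the observed createDifferenceMask
    behavior.
    """
    diff = 255 - rgb.astype(np.int16)            # per-channel distance to white
    near_white = (diff <= tolerance).all(axis=2)  # all channels within tolerance
    return np.where(near_white, 255, 0).astype(np.uint8)

# Tiny 1x3 test image: pure white, near-white, pure blue.
img = np.array([[[255, 255, 255], [250, 252, 251], [0, 0, 255]]], dtype=np.uint8)
m = white_mask(img)
```

Diffing this reference result against the createDifferenceMask output on TestImage.png would show exactly which pixels the built-in function misses.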



Feature Requests / Improvement of the Python API documentation
« on: December 22, 2017, 10:38:48 PM »

I just started writing my first Python scripts for Photoscan, and I find searching through the PDF API documentation very difficult.

If the PhotoScan Python Reference PDF were structured just like the Agisoft PhotoScan User Manual, with a hierarchical menu linked to the different document sections, it would be very easy to use (see attached snapshot).  Its present lack of structure makes it very frustrating to use, if not almost useless.

Is this something that you could improve easily?

Python and Java API / Re:
« on: December 20, 2017, 07:49:54 PM »
The possibility to make the mask creation additive would indeed be a very useful feature!

That's one of the ingredients that would be needed to use the Magic Wand feature from Python scripts, as I asked in this earlier request:



I would like to use the Magic Wand tool from within Python scripts. Here is my dream scenario:

  • By using OpenCV, detect the big blobs of a specific color in the image
  • Use the center of these blobs as the seeds for the Magic Wand tool. (In effect, it would be like doing a manual CTRL-click on each of these blobs' centers with the Magic Wand tool)
  • Save these masks for each image in my project
Is the Magic Wand functionality exposed in the Python API? 
If not, is it something that could be done fast in a future release? 
If not, any suggestion on how to achieve some equivalent results?
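In the absence of an exposed Magic Wand, the seed-plus-tolerance selection itself is easy to approximate. The `magic_wand` helper below is a hypothetical pure-Python flood fill illustrating the idea; a real version would run cv2.floodFill on the numpy image for speed:

```python
from collections import deque

def magic_wand(img, seed, tolerance):
    """Flood-fill from `seed`, selecting 4-connected pixels whose value is
    within `tolerance` of the seed value, roughly what a Magic Wand
    CTRL-click does.

    `img` is a list of rows of grayscale ints; returns a set of (row, col).
    """
    rows, cols = len(img), len(img[0])
    r0, c0 = seed
    target = img[r0][c0]
    selected = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in selected
                    and abs(img[nr][nc] - target) <= tolerance):
                selected.add((nr, nc))
                queue.append((nr, nc))
    return selected

# Tiny example: a dark blob in the upper-left corner of a bright background.
img = [
    [10, 12, 200],
    [11, 13, 210],
    [90, 95, 205],
]
region = magic_wand(img, (0, 0), tolerance=5)
```

Converting each selected region to a 0/255 mask image would then cover the last step of the scenario above.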


Bruno Martin

General / Re: Best strategy to scan thin objects on turn table
« on: January 29, 2016, 06:20:38 PM »
Thanks for your suggestions.
  • I tried the glasses again with a very small angular displacement, but the result was not better.
  • I also tried with brown plastic coffee stir sticks (one plain, another with spots of liquid paper to add features to it; see attachment). These sticks are a little wider than my glasses branches and much less specular. I get some mesh out of it, but the result is still bad.

Maybe with a macro lens, taking pictures of something like 20mm x 20mm patches all around the glasses, I could get something, but the acquisition process would not be suitable for mass usage. As Arie said, I think I have reached the end of the photogrammetric approach.

General / Best strategy to scan thin objects on turn table
« on: January 27, 2016, 10:20:33 PM »

I want to assess whether Agisoft is able to scan and reconstruct small, thin objects on a turntable.  The kind of object I have in mind is glasses (see the attached pictures for an example).  I have down-sampled the posted image due to this forum's size limitation, but the real pictures are RAW at 5184x3456 pixels.  Also attached is a screenshot of the textured reconstructed (bad) model.

Details and questions:
1) I used my own pair of glasses for this quick test.  The real ones will NOT have their lenses mounted yet, which will prevent some deformation near the part holding the lenses.  Knowing this, the reconstruction quality of the branches is what really interests me. If I can get it to work on this kind of structure, I'm pretty sure it will almost always work.
2) I put a piece of paper with text and graphics under the glasses, hoping to help the software compute the alignment.  Is it a good idea?  Would it still work on a uniform white background, given that the structure of interest shows close to no texture?
3) The angular rotation between the two attached images is representative of my image sequence (35 images in all).  Is it too much rotation?  Must the thin parts of the model overlap between images, or is my textured background enough for alignment?  What about the depth reconstruction? I seem to be getting very few "Tie points" along the glasses branches...

Thanks in advance for any advice...

Hello Alexey,

Thanks for the tip about the groups.  See GoodPanoramicAlignment.JPG. 

Question: could you explain a little more why I needed to make a group (I thought that by default all the images were part of a default group)?

Also, when I try to build a dense point cloud, I get a "Zero resolution error", as shown in the other screenshot.  Is it because of the impossibility of building depth information, as you said?  Is there a way to fix an arbitrary resolution anyway, so I can later get the texture of the images projected on a cylinder (or sphere)?



I'm trying to reconstruct an indoor scene by taking a video with a rotating camera placed on a tripod at the center of the room.  From the video, I regularly sampled 142 TIF images (640x480) to import into Agisoft Photoscan Pro.  The problem is that I can't find a way to get them aligned properly (see snapshot).

Is there something special to do for this kind of tripod-centered acquisition?

