Show Posts


Topics - bmartin

General / Can Photoscan improve alignment by using drone inertial data?
« on: February 28, 2018, 07:06:20 PM »

I have a very difficult image sequence acquired underground by a drone, presenting various challenges (non-uniform illumination, some fast movement, dust, etc.).  I worked very hard to get the alignment to work, but it succeeded in aligning only 75% of the images, and even within that 75% there were many problems: the estimated camera path is nothing like the real flight path!

I thought that a great way to help the alignment would be for PhotoScan to also use the inertial data to better constrain the alignment solutions, or even to reject them completely if deemed impossible.  In doing so, you would essentially get a visual-inertial SLAM algorithm like the ones used in autonomous vehicles, instead of relying on photogrammetry alone.

Is this something that PhotoScan could do?  If so, what format should this inertial data be in for PhotoScan to use it?
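For illustration, here is a minimal sketch of how such per-image inertial data could be laid out as a reference CSV, the kind of tabular file PhotoScan's Reference pane can import for camera orientations. The column layout (Label, Yaw, Pitch, Roll) and the idea of feeding IMU angles this way are assumptions for discussion, not a confirmed PhotoScan import format.

```python
import csv
import io

def write_orientation_reference(records, stream):
    """Write per-image orientation samples as a reference CSV.

    `records` is a list of (label, yaw, pitch, roll) tuples.
    The column layout is an assumption about what an orientation
    import would need, not a documented PhotoScan format.
    """
    writer = csv.writer(stream, lineterminator="\n")
    writer.writerow(["Label", "Yaw", "Pitch", "Roll"])
    for label, yaw, pitch, roll in records:
        writer.writerow([label, f"{yaw:.3f}", f"{pitch:.3f}", f"{roll:.3f}"])

# Example: two frames with hypothetical drone IMU samples.
buf = io.StringIO()
write_orientation_reference(
    [("IMG_0001.jpg", 12.5, -3.2, 0.8), ("IMG_0002.jpg", 13.1, -3.0, 0.9)],
    buf,
)
print(buf.getvalue())
```

A file in this shape could then be pointed at by the Reference pane's import dialog (or a scripted equivalent), one row per image, matched by label.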



Python and Java API / How to show Python script progression to user?
« on: February 09, 2018, 12:15:55 AM »

I have a script which automatically masks all the images in a chunk.  On my computer, it runs at about 3 images per second, so masking all the images in a project with 2315 images takes more than 12 minutes. If I can't inform the user of the masking progress, there is a very good chance they will think the program is stuck and simply kill it...

I tried adding a print statement inside my mask loop, but what happens is that ALL the printing is done AFTER the function finishes, defeating the purpose of printing to inform the user of the processing progress. See below:

import cv2
import PhotoScan
import AgisoftOpenCVUtils  # local helper module converting PhotoScan images to OpenCV

def CreateMasks(chunk):
    imagesMaskedCount = 0
    for cam in chunk.cameras:
        print("Image(" + str(imagesMaskedCount) + ") masked...")  # Just printing after returning!!!
        src = cam.photo.image()

        openCVImg = AgisoftOpenCVUtils.AgisoftToOpenCVImage(src)
        gray = cv2.cvtColor(openCVImg, cv2.COLOR_BGR2GRAY)

        whiteColor = (255, 255, 255)

        # Create a mask supposed to cover everything that is white.
        mask = PhotoScan.Utils.createDifferenceMask(src, whiteColor, tolerance=5, fit_colors=False)
        m = PhotoScan.Mask()
        m.setImage(mask)

        # Assign this mask to our camera.
        cam.mask = m
        imagesMaskedCount = imagesMaskedCount + 1

Is there a way to make the printing work in "real time" instead of only in "batch" at the end of my long function?
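One common cause of this behavior is output buffering: the interpreter holds printed text in a buffer and only releases it later. A minimal sketch of forcing each progress line out immediately, using plain Python (`chunk.cameras` is stood in for by a generic iterable, and the masking work is omitted; the note about `PhotoScan.app.update()` is an assumption about the GUI console, not a verified requirement):

```python
def create_masks_with_progress(items):
    """Process items, emitting one progress line per item as it happens.

    `items` stands in for chunk.cameras; the per-image masking work is
    omitted.  flush=True forces each line to stdout immediately instead
    of letting the buffer hold everything until the function returns.
    """
    done = 0
    for item in items:
        # ... mask one image here ...
        done += 1
        print("Image(" + str(done) + ") masked...", flush=True)
        # In the PhotoScan GUI console, a call such as
        # PhotoScan.app.update() may additionally be needed so the
        # interface repaints between iterations (assumption).
    return done

create_masks_with_progress(range(3))
```

If `flush=True` alone is not enough inside the GUI, the issue is likely the event loop not getting a chance to run, which is what an explicit update call between iterations would address.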



Bug Reports / Function createDifferenceMask not working correctly?
« on: January 09, 2018, 09:54:49 PM »

I tried using the function createDifferenceMask to mask the WHITE parts of the attached test image named TestImage.png.  I get weird results where the mask also covers blue regions and doesn't mask all the white regions! 

To reproduce this bug, just edit the script to change the location of TestImage.png to wherever you saved it on your system (line 13) and run it.  You should get the same results I got (see MaskPartiallyWorkingBug.png).

Can somebody confirm whether this is a known bug, something new, or expected (if strange) behavior?  Is there a workaround?  I really need to generate good masks automatically for lots of images with white mobile structures, and I'm afraid this could be a problem...
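As a point of comparison, here is a tiny pure-Python sketch of what a per-channel tolerance mask against white is expected to compute: a pixel is masked only when every channel is within the tolerance of 255. This is an assumption about the intended semantics of createDifferenceMask with fit_colors=False, not its actual implementation, but running a reference like this on the same test image can help confirm whether the observed behavior (blue regions masked, white regions missed) is a bug.

```python
def white_mask(pixels, tolerance=5):
    """Return a boolean mask, True where a pixel is within `tolerance`
    of pure white on every channel.

    `pixels` is a list of rows of (R, G, B) tuples.  This mirrors what
    a difference mask against (255, 255, 255) with fit_colors=False is
    expected to compute (assumed semantics).
    """
    def is_white(rgb):
        return all(255 - c <= tolerance for c in rgb)
    return [[is_white(px) for px in row] for row in pixels]

# Tiny test image: white, near-white, blue, mid-grey.
image = [[(255, 255, 255), (252, 251, 253)],
         [(30, 60, 220), (128, 128, 128)]]
print(white_mask(image))
# → [[True, True], [False, False]]
```

Under these semantics a saturated blue pixel can never be masked, so a mask covering blue regions would indeed point to a bug or to a different (e.g. luminance-based) comparison inside the function.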



Feature Requests / Improvement of the Python API documentation
« on: December 22, 2017, 10:38:48 PM »

I just started writing my first Python scripts for PhotoScan, and I find searching through the PDF API documentation very difficult.

If the PhotoScan Python Reference PDF were structured just like the Agisoft PhotoScan User Manual, with a hierarchical menu linked to the different document sections, it would be very easy to use (see attached snapshot).  Its present lack of structure makes it very frustrating to use, if not almost useless.

Is it something that you could improve easily?


I would like to use the Magic Wand tool from within Python scripts. Here is my dream scenario:

  • By using OpenCV, detect the big blobs of a specific color in the image
  • Use the center of these blobs as the seeds for the Magic Wand tool. (In effect, it would be like doing a manual CTRL-click on each of these blobs' centers with the Magic Wand tool)
  • Save these masks for each image in my project
Is the Magic Wand functionality exposed in the Python API? 
If not, is it something that could be added quickly in a future release? 
If not, any suggestion on how to achieve some equivalent results?
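The blob-detection step above can be sketched without OpenCV: find connected regions of a target color and take each region's centroid as a Magic Wand seed. This is a stand-in for the real cv2-based pipeline, working on a small grid of color labels rather than pixel data; the function name and the 4-connectivity choice are illustrative assumptions.

```python
from collections import deque

def blob_centers(grid, target, min_size=2):
    """Find centroids of connected regions of `target` cells in a 2-D grid.

    Stand-in for the OpenCV blob-detection step: `grid` is a list of
    rows of color labels, regions use 4-connectivity, and each blob of
    at least `min_size` cells yields a (row, col) centroid to use as a
    Magic Wand seed point.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    centers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != target or seen[r][c]:
                continue
            # Breadth-first flood fill to collect one blob.
            blob, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                blob.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and grid[ny][nx] == target and not seen[ny][nx]):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(blob) >= min_size:
                cy = sum(p[0] for p in blob) // len(blob)
                cx = sum(p[1] for p in blob) // len(blob)
                centers.append((cy, cx))
    return centers

# A 4x5 grid with one big 'W' blob; its centroid would be the seed.
grid = ["..WW.",
        "..WW.",
        ".....",
        ".W..."]
print(blob_centers(grid, "W"))
# → [(0, 2)]  (the lone 'W' is below min_size and is ignored)
```

The `min_size` filter plays the role of "big blobs" in the scenario above: isolated specks never become seeds.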


Bruno Martin

General / Best strategy to scan thin objects on turn table
« on: January 27, 2016, 10:20:33 PM »

I want to assess whether Agisoft is able to scan and reconstruct small, thin objects on a turntable.  The kind of object I have in mind is eyeglasses (see the attached pictures for an example).  I have down-sampled the posted image due to this forum's size limitation, but the real pictures are RAW at 5184x3456 pixels.  Also attached is a screenshot of the textured reconstructed (bad) model.

Details and questions:
1) I used my own pair of glasses for this quick test.  The real ones will NOT have their lenses mounted yet.  This will prevent some deformation near the part holding the lenses.  Knowing this, the reconstruction quality of the branches is what really interests me. If I can get it to work on this kind of structure, I'm pretty sure it will almost always work.
2) I put a piece of paper with text and graphics under the glasses, hoping to help the software compute the alignment.  Is this a good idea?  Would it still work on a uniform white background, given that the structure of interest shows close to no texture?
3) The angular rotation between the two attached images is representative of my image sequence (35 images in all).  Is it too much rotation?  Must the thin parts of the model overlap between images, or is my textured background enough for alignment?  What about the depth reconstruction? I seem to be getting very few tie points along the glasses' branches...

Thanks in advance for any advice...


I'm trying to reconstruct an indoor scene by taking a video with a rotating camera placed on a tripod at the center of the room.  From the video, I regularly sampled 142 TIFF images (640x480) to import into Agisoft PhotoScan Pro.  The problem is that I can't find a way to get them aligned properly (see snapshot).

Is there something special to do for this kind of tripod centered acquisition?

