Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - an198317

Pages: 1 2 [3]
General / Re: Algorithms used in Photoscan
« on: May 30, 2013, 02:09:00 AM »
Hi Dmitry,

We are a research team at Kansas State University. We have used PhotoScan Pro heavily for various projects. Do you think it would be possible to share the literature you mentioned in your post? We also need to write a project report based on the algorithms PhotoScan uses.

Thanks a lot!!

Hello Arko,

We are happy to get positive feedback regarding PhotoScan software.

PhotoScan's workflow is similar to the one you have presented, with the exception that our implementation is not based on the popular Bundler+PMVS2+CMVS assembly.

Here is a more detailed explanation of individual processing steps:

  • Feature matching across the photos.
    At the first stage PhotoScan detects points in the source photos which are stable under viewpoint and lighting variations, and generates a descriptor for each point based on its local neighborhood. These descriptors are used later to detect correspondences across the photos. This is similar to the well-known SIFT approach, but uses different algorithms for slightly higher alignment quality.
  • Solving for camera intrinsic and extrinsic orientation parameters.
    PhotoScan uses a greedy algorithm to find approximate camera locations and refines them later using a bundle-adjustment algorithm. This should have many things in common with Bundler, although we didn't compare our algorithm with Bundler thoroughly.
  • Dense surface reconstruction.
    At this step several processing algorithms are available. The Exact, Smooth and Height-field methods are based on pair-wise depth map computation, while the Fast method utilizes a multi-view approach.
  • Texture mapping.
    At this stage PhotoScan parametrizes the surface, possibly cutting it into smaller pieces, and then blends the source photos to form a texture atlas.
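As a rough illustration of the descriptor-matching idea in the first step, here is a minimal nearest-neighbour matcher with a Lowe-style ratio test (as popularized by SIFT) in plain NumPy. This is a sketch only; the function name and the threshold value are mine, not PhotoScan's actual implementation:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour descriptor matching with a Lowe-style ratio test.

    desc_a, desc_b: (N, D) arrays of feature descriptors from two photos
    (desc_b needs at least two rows for the ratio test).
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b.
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        # Accept only if the best match is clearly better than the runner-up,
        # which discards ambiguous correspondences.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

A real system would use a k-d tree or similar index instead of the brute-force loop, and would follow matching with geometric verification.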

Many of the PhotoScan algorithms are based in part on previously published papers, but are implemented from scratch and are thoroughly optimized for faster processing speeds. It is worth noting that we have favored algorithms with higher accuracy output over faster approaches with less accurate output.

With best regards,
Dmitry Semyonov
AgiSoft LLC

General / Does Photoscan Pro minimize shadows?
« on: May 20, 2013, 08:23:09 PM »
Hi everyone,

We are taking photos of our research plant plots with cameras and trying to use PhotoScan Pro to build 3D models of the plants. Since the cameras are mounted on a camera frame, on sunny days the frame casts shadows in some of the photos.

But after building the model and exporting the orthophoto, the shadows are either gone or minimized in the orthophoto. So I am curious whether PhotoScan Pro removes the shadows by accident or on purpose.

I should also mention that we have a lot of overlap among the photos, so the same spot can be shadowed in one photo and unshadowed in another photo of the same spot.

I used the "mosaic" option when I exported the orthophoto.
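For intuition only (I don't know PhotoScan's internals), here is a toy Python example of why combining many overlapping views can suppress a shadow that appears in only a minority of photos. The real "mosaic" blending is presumably smarter than a per-pixel median, selecting and weighting source photos per region:

```python
import numpy as np

def median_blend(stack):
    """Blend aligned overlapping photos stacked along the last axis (H, W, N).

    A per-pixel median: a shadow that darkens a pixel in only a minority
    of the overlapping photos is simply voted out. Toy illustration only.
    """
    return np.median(stack, axis=2)
```

With three aligned views where only one is shadowed at some pixel, the median keeps the unshadowed value at that pixel.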


Hi There,

Since my lab bought Agisoft PhotoScan Pro, our research and projects rely heavily on it. The more I use PhotoScan Pro for my projects, the more I realize I lack some fundamental knowledge about 3D photogrammetry and DEM modeling.

So I am wondering whether there are any books I can buy on 3D photogrammetry and the DEM modeling techniques that PhotoScan Pro might use, as a reference?

Thanks in advance!!


Thanks RalfH for your quick response.

I tried what you suggested in your post: export the texture as an orthophoto, run image segmentation on the orthophoto to extract the plants, and then use the function under Tools -> Import -> Import Texture. But PhotoScan Pro could not lay the segmented orthophoto correctly on top of the mesh.

That's why I posted the question: I don't know whether I can use "Import Texture" for this task.

I don't think I followed your last sentence though: how do I "delete" the soil around the plant from the model?


I am wondering whether there is any function in PhotoScan Pro that lets me segment/classify the texture, bring it back into PhotoScan Pro, and lay the segmented texture on the 3D mesh?

Basically, we used the images to build a 3D model, and now we want to segment/classify the images (texture) to get rid of everything except the plants. Then we want to see the segmented plants on the 3D mesh.

Thanks a bunch for any input!

General / Re: camera positions after alignment
« on: February 24, 2013, 12:54:30 AM »
You can also go to Tools -> Export -> Export Cameras. In the pop-up window, you can choose the Omega Phi Kappa output from the dropdown.

Hi Alexey and RalfH,

Thanks again for your detailed information. It's hugely helpful, and I really appreciate your fast responses. And thanks RalfH for the rotation matrix info; it makes sense to me now.

As Alexey mentioned, yes, once the cameras are mounted above the table, all of them will be controlled by a computer to shoot photos hourly. No position, aperture, focal length, or shutter speed changes at all. We made this plan from day one.

The reason I am not sure about the ground control method you keep mentioning is that this study is different from the typical remote sensing studies I did in the past: we have to make full use of each camera's FOV, so no empty table space can be seen from any of the cameras. Also, the plants will be moved around randomly so that every single plant has an equal chance to get light, air flow, etc. So if I place any markers on the plant trays, the marker positions will change from time to time. That is why I am not sure the ground control method works. Please enlighten me if my understanding of ground control is not right.

After thinking it through, we came up with a plan to test camera position import. Right now the two cameras at both ends of one row are not perfectly leveled: their Phi values are 5.065235 and -4.045621 (Omega and Kappa values are very close to zero). Since there is no way to input Omega, Phi, Kappa directly into the XML file, I will set those two Phi values close to zero and then back-calculate the rotation matrices using the original Omega and Kappa values and the modified Phi values. Then I will put the matrices along with XYZ into the XML and import it back into PhotoScan using Import Cameras.
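For anyone else back-calculating a rotation matrix from Omega, Phi, Kappa, here is a Python sketch. Note that R = Rx(omega) · Ry(phi) · Rz(kappa) with angles in degrees is just one common photogrammetric convention; the rotation order and signs PhotoScan actually uses should be verified against its own export before trusting the result:

```python
import numpy as np

def rotation_from_opk(omega, phi, kappa):
    """Build a rotation matrix from Omega, Phi, Kappa angles (degrees).

    Assumes the common photogrammetric convention R = Rx @ Ry @ Rz;
    verify against PhotoScan's own exported matrices before relying on it.
    """
    o, p, k = np.radians([omega, phi, kappa])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(o), -np.sin(o)],
                   [0, np.sin(o),  np.cos(o)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(k), -np.sin(k), 0],
                   [np.sin(k),  np.cos(k), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz
```

A quick way to validate the convention: feed in the Omega, Phi, Kappa values PhotoScan exports for one camera and check that the result matches the matrix in the exported XML.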

So my question is: will PhotoScan use this new camera position XML file to correct those two cameras at both ends? And after importing this XML file, should I redo the whole workflow to make a new 3D model?

Right now the 3D model with cameras shows that those two cameras are not perfectly flat. So I am curious: after redoing this, will those two cameras' positions change according to the Phi values I modified?

Sorry about my long reply! Any input will be hugely helpful. If this setup can be made to work, a huge part of this project will be handled by PhotoScan.


Thanks for your detailed information, RalfH. And sorry for getting back to you late.

After studying the XML file format you mentioned, the only XML format I can find is the output from Tools > Export > Export Cameras... Is this the XML format you meant?

If so, I checked the contents of this XML file. The calibration part contains the internal camera calibration parameters like focal length, principal point, etc. So is the "Transform" section the external calibration?

Comparing the values with the Omega, Phi, Kappa output values, I assume that in the Transform section the value order is r11, r12, r13, x, r21, r22, r23, y, r31, r32, r33, z.

Do you think this is the right assumption just from looking at the values? I can't see any values for the rotation angles Omega, Phi, Kappa.
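Under that assumption (12 values, row-major, with the translation interleaved after each rotation row), a small Python helper can split the Transform block and sanity-check the rotation part. The layout here is the poster's guess about the XML, not a documented spec:

```python
import numpy as np

def parse_transform(values):
    """Split 12 transform values, assumed to be ordered
    r11 r12 r13 x  r21 r22 r23 y  r31 r32 r33 z,
    into a 3x3 rotation matrix R and a translation vector t.
    """
    m = np.asarray(values, dtype=float).reshape(3, 4)
    R, t = m[:, :3], m[:, 3]
    # Sanity check: a genuine rotation matrix is orthonormal. If this fails
    # on real exported values, the assumed ordering is wrong.
    if not np.allclose(R @ R.T, np.eye(3), atol=1e-4):
        raise ValueError("rotation part is not orthonormal; layout guess is wrong")
    return R, t
```

Running this on values copied from a real export is a cheap way to confirm or refute the assumed ordering.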

P.S. RalfH, we are taking photos of different known objects using this 18-camera setup, and we have written a program to do the external measurement. This is still an ongoing process. Once we get this step done, I can share what I have.

Thanks for helping!



I haven't finished reading the post you attached. The reason we want to use our own coordinates is that we need to make some biological measurements like plant leaf length and width. The camera positions in Agisoft are relative. I assume that if we can import our own coordinates, I can make these measurements from the 3D model using reasonable units (cm, inches, etc.). Right now, we can't make measurements based on the original camera coordinates.
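As a simplest-case sketch of how known camera positions can put a relative model into real units: if the same points (e.g. camera centers, hypothetical values below) are known in both the model coordinates and the lab coordinate system, a uniform scale factor can be estimated by comparing spreads around the centroids. This ignores rotation and translation, which a full similarity-transform fit (e.g. Horn's or Umeyama's method) would also recover:

```python
import numpy as np

def uniform_scale(model_pts, world_pts):
    """Estimate the uniform scale mapping model units to world units.

    model_pts, world_pts: (N, 3) arrays of the SAME points in both
    systems (e.g. camera centers). Scale is invariant to rotation and
    translation, so comparing RMS spread around each centroid suffices.
    """
    m = np.asarray(model_pts, dtype=float)
    w = np.asarray(world_pts, dtype=float)
    # Frobenius norm of the centered point clouds = RMS spread (up to 1/sqrt(N)).
    m_spread = np.linalg.norm(m - m.mean(axis=0))
    w_spread = np.linalg.norm(w - w.mean(axis=0))
    return w_spread / m_spread
```

Multiplying model distances by this factor converts leaf-length measurements into the lab's units, assuming the reconstruction is distortion-free.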

I've seen people talking about using GCPs or GPS for aerial photography. But our setup is different: there is no GPS information we can use. If we use GCPs, I assume I would have to place them over and over again for different models. A 3D model will be built hourly based on 18 photos per hour.

So I thought, since the cameras never move, I could do the external calibration once at the beginning, and then we could use the same camera positions over and over again.

Does this make sense to you? I am looking forward to hearing from Alexey. Thank you guys a lot!!



Thanks for your detailed information. We are using the PhotoScan Pro version. It seems there is a way to import modified camera position and rotation parameters. We do want to import X, Y, Z, Omega, Phi, and Kappa into PhotoScan. So can you let me know where I can find the function to do this?

And yes, our setup never changes, so I assume I only need to import the modified parameters once. But I read through the manual and just couldn't find it. Maybe I was looking in the wrong place. I would really appreciate it if you could enlighten me on this.

I will be very glad to share our approach to measuring those parameters once I have tested my theory.


Face and Body Scanning / Re: Interesting article - scanning people
« on: January 23, 2013, 03:16:41 AM »
This is so COOL! Can't wait to see PhotoScan get more and more powerful.

Hello there,

Our lab bought PhotoScan Pro for our project and we love this software a lot!

For our project, we mounted 18 cameras (9 cameras per row, 2 rows) on a flat panel above a table in the growth chamber. We have plants on the table, so the 18 cameras can be controlled to take photos of the plants at the same time every couple of hours. Eventually, we want to make 3D models of the plants on the table from each set of 18 photos, and then make measurements like plant leaf length and width, etc., from the 3D models.

We have our own coordinate system built for the entire setup, so we know the exact location of each camera. So I am wondering whether there is any way to input camera position information like X, Y, Z, Omega, Phi, Kappa back into PhotoScan? That way we can use this information to make a 3D model and extract more information from it.

I do see that PhotoScan Pro automatically produces camera position output containing this information. But we want to use our own camera position parameters to get more out of the model.

Thanks for your help! I am looking forward to hearing about the solution.
