Community Forum

Recent Posts

Pages: [1] 2 3 ... 10
Hello Boony,

It would be helpful if you could provide the processing log from version 1.4.0 related to the camera alignment, and the source images too, so that we can test the processing on our side.
Hi there!

I am trying to recreate a scan that was tested on set with an older version of PhotoScan, but I get vastly different results with newer versions of the software.

With seemingly the same basic settings, they got a decent head scan from a set of photos using version 1.1.6 Build 2038, while I can only align half of the images with version 1.4.0. Playing with different settings yields similar results: only one side of the face is matched, and the other half of the images fails to align.

Are there any basic settings I'm overlooking that could help with this?
Feature Requests / Re: Alignment step add-on
« Last post by Yoann Courtois on Today at 06:46:43 PM »
Ok, perfect! I just noticed that the previous point cloud doesn't disappear when starting the subset alignment!
We will now be able to optimize afterwards!

The next step would be the possibility to lock one or several camera positions during model optimization. Since we don't use georeferenced images (or the georeferencing is not accurate enough), we cannot fix camera positions via the camera accuracy setting.
One workaround would be to copy/paste image coordinates from the Estimated values (where copying is currently not possible) to the Source values, and set a tiny camera accuracy so that those pictures won't move. It would be easier to have a "padlock" tool for one picture or a set of pictures. Do you see what I mean?

I understand that the chunk coordinate space is local to the chunk, but I'm not clear on how to convert and position everything within real world coordinate space.

I'm confused, since the measurement tool gives me accurate measurements, everything is located around the 0,0,0 point of the 3D grid, and everything is aligned as expected along the XYZ view axes as well.
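For what it's worth, the relation between chunk-local and world coordinates is just a 4x4 similarity transform; in the Python API this is `chunk.transform.matrix` (optionally followed by a CRS projection for georeferenced projects). A minimal stdlib-only sketch of that math, using a hypothetical transform (uniform scale 2, translation (10, 0, 5)) rather than a real chunk:

```python
import math

def mulp(T, p):
    """Apply a 4x4 transform T (row-major nested lists) to a 3D point p."""
    return [sum(T[i][j] * p[j] for j in range(3)) + T[i][3] for i in range(3)]

# hypothetical internal->world transform: uniform scale 2, translation (10, 0, 5)
T = [[2.0, 0.0, 0.0, 10.0],
     [0.0, 2.0, 0.0,  0.0],
     [0.0, 0.0, 2.0,  5.0],
     [0.0, 0.0, 0.0,  1.0]]

# a point at the chunk origin lands at the translation part in world space
print(mulp(T, [0.0, 0.0, 0.0]))  # [10.0, 0.0, 5.0]

# the uniform scale factor can be read off the first row of the matrix
scale = math.sqrt(T[0][0] ** 2 + T[0][1] ** 2 + T[0][2] ** 2)
print(scale)  # 2.0
```

So measurements can look perfectly consistent inside the chunk while the chunk's own frame is scaled/rotated/shifted relative to world space by this matrix.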
Feature Requests / Re: Alignment step add-on
« Last post by Alexey Pasumansky on Today at 06:32:04 PM »
Hello Yoann,

If you do not have key points kept from the initial alignment, then you'll lose the tie points from that alignment: key points will be re-detected for all the images, but the resulting tie points will only be those with at least one projection on the newly added photos.

In case you have key points kept from the initial alignment, the tie points wouldn't be lost.
When I reconstruct, I also import a camera position CSV file as a reference. This reliably scales and orients the scene to the Agisoft grid axes.

My issue is that I want to programmatically scale and orient the build region after this step. My script below runs after alignment and after importing the CSV file (which does correctly scale and orient the entire camera rig).
It appears that the region parameters are always relative to one of the aligned cameras, and it doesn't seem to be the same camera every time, so I'm having trouble reliably placing the build region box at the center of the rig on the ground plane where I need it. I can offset and rotate the box, but since it's relative to some camera on each build, it's not reliable.

How can I best place the region in my chunk at an absolute position in world space (for instance, at 0.0,0.0,2.0 and scaled to 3.5,3.5,4.0)?

Thank you!

My script (apologies, the code tag button is not working in the Edge browser, so the formatting got mangled):
import math
import PhotoScan

chunk = PhotoScan.app.document.chunk  # assumes the active chunk

#load CSV reference file
#import csv reference for scale and orientation
# 'path' is defined earlier in the full script
csvFilePath = path + "/CameraData/CamData.csv"
print("Loading references from " + csvFilePath)
try:
    # the actual load call was lost when the forum stripped the formatting;
    # reconstructed here - adjust columns/delimiter to match the CSV layout
    chunk.loadReference(csvFilePath, PhotoScan.ReferenceFormatCSV)
    print("loaded references!")
except Exception:
    print("couldn't load references")

#set volume bounding box
print("===Setting Bounding Box Volume===")
new_region = PhotoScan.Region()

new_center = PhotoScan.Vector([0.0, 0.0, 2.0])  # this offsets relative to some camera's Z axis, undesired
#new_center = offset + new_rot * new_center

x_scale = 3.5
y_scale = 3.5
z_scale = 4.0

new_size = PhotoScan.Vector([x_scale, y_scale, z_scale])

T = chunk.transform.matrix  # was blank in the original paste; the chunk transform is what the rest of the math assumes
v_t = T * PhotoScan.Vector([0, 0, 0, 1])
m = PhotoScan.Matrix.Diag((1, 1, 1, 1))

m = m * T
s = math.sqrt(m[0, 0] ** 2 + m[0, 1] ** 2 + m[0, 2] ** 2)  # scale factor

R = PhotoScan.Matrix([[m[0, 0], m[0, 1], m[0, 2]],
                      [m[1, 0], m[1, 1], m[1, 2]],
                      [m[2, 0], m[2, 1], m[2, 2]]])
R = R * (1.0 / s)

new_region.size = new_size
new_region.center = new_center
new_region.rot = R.t()
chunk.region = new_region
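Regarding the absolute-placement question above: the approach usually seen on this forum (an assumption on my part, not an official recipe) is to map the desired world-space center back into chunk-internal coordinates with the inverse of the chunk transform, and to divide the world-space size by the transform's scale factor; in PhotoScan terms that would be roughly `new_region.center = chunk.transform.matrix.inv().mulp(world_center)` and `new_region.size = world_size / s`. The underlying math, stdlib-only with a hypothetical transform:

```python
import math

def mulp(T, p):
    # apply a 4x4 transform (row-major nested lists) to a 3D point
    return [sum(T[i][j] * p[j] for j in range(3)) + T[i][3] for i in range(3)]

def inv_similarity(T):
    # invert a similarity transform T = [s*R | t]: inverse rotation/scale part
    # is (s*R)^T / s^2, inverse translation is -(that) * t
    s2 = T[0][0] ** 2 + T[0][1] ** 2 + T[0][2] ** 2  # squared scale
    Ri = [[T[j][i] / s2 for j in range(3)] for i in range(3)]
    t = [T[i][3] for i in range(3)]
    ti = [-sum(Ri[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Ri[0] + [ti[0]], Ri[1] + [ti[1]], Ri[2] + [ti[2]], [0, 0, 0, 1]]

# hypothetical chunk transform: uniform scale 2, translation (10, 0, 5)
T = [[2.0, 0.0, 0.0, 10.0],
     [0.0, 2.0, 0.0,  0.0],
     [0.0, 0.0, 2.0,  5.0],
     [0.0, 0.0, 0.0,  1.0]]
s = math.sqrt(T[0][0] ** 2 + T[0][1] ** 2 + T[0][2] ** 2)

world_center = [0.0, 0.0, 2.0]
world_size = [3.5, 3.5, 4.0]

center_internal = mulp(inv_similarity(T), world_center)  # region center in chunk coordinates
size_internal = [d / s for d in world_size]              # region size in chunk coordinates

# round-trip check: the internal center maps back to the requested world position
print(mulp(T, center_internal))  # [0.0, 0.0, 2.0]
```

Since the center is computed from the chunk transform rather than from any camera, the region should land at the same world position on every build.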
Feature Requests / Re: Alignment step add-on
« Last post by Yoann Courtois on Today at 06:06:29 PM »
Hmm, has something changed in the last release?

I will test that again in my next project and let you know!

General / Re: processing report - pointcloud point count
« Last post by Alexey Pasumansky on Today at 05:28:14 PM »
Hello flt,

The numbers are the used (valid) tie points and the total number of tie points (used + unused) - basically, the number of points in the sparse cloud versus the total number of detected matching points.
Feature Requests / Re: Alignment step add-on
« Last post by Alexey Pasumansky on Today at 05:26:41 PM »
Hello Yoann,

Incremental image alignment shouldn't remove existing matching points.

Are you sure that you checked the Keep Key Points option in the Advanced preferences window before the initial alignment was started?
Feature Requests / Re: Alignment step add-on
« Last post by Yoann Courtois on Today at 04:43:15 PM »
Hello Alexey !

Yes, I have turned on this option, which does accelerate the process when adding a few pictures to a big project. But that isn't what I was talking about.

I meant that it would be good if the first main tie point cloud were stored somehow, so that we could use the "Optimize" tool after other pictures have been aligned on the main model.
Explained with an example:
- A first main model (1 000 pictures) is composed of 1 000 000 tie points, which link those 1 000 pictures together.
- A subset of pictures (50 pictures) is added afterwards and aligned on the main model. The cloud is then composed of, let's say, 50 000 tie points, which link those 50 pictures together and with some of the 1 000 first pictures. BUT we have lost the 1 000 000 tie points which linked the 1 000 pictures together! So it's no longer possible to use "Optimize Camera"!

It would be so nice if those 1 000 000 tie points were kept, so that after the subset alignment we would get 1 040 000 tie points (supposing 10 000 of the 50 000 tie points already existed beforehand).
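The bookkeeping in the example above, just to make the expected count explicit:

```python
main_tie_points = 1_000_000   # linking the first 1 000 pictures
subset_tie_points = 50_000    # created when aligning the 50-picture subset
overlap = 10_000              # assumed to already exist in the main cloud

# if the main cloud were kept, the merged cloud would contain:
total = main_tie_points + subset_tie_points - overlap
print(total)  # 1040000
```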

Is it sufficiently clear ?  :-[
