Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - tkwasnitschka

Pages: [1] 2 3 ... 5
Feature Requests / Re: Import Video - Drone location metadata
« on: January 19, 2023, 06:15:26 PM »
Could Agisoft support please give a comprehensive reference as to which geospatial video metadata standards are currently supported?

There is no such reference in the manual and I read in the forum about SRT and some other standard tags that may be supported, but which are not named.

STANAG KLV metadata support would be a great feature and would work well alongside the adaptive frame extraction feature.
Many thanks!

General / Components and parts of them
« on: March 17, 2021, 06:55:12 PM »
I have a big 10k-image project that falls apart into one component of two parts.
Part 2 is clearly disconnected from part 1 and has 22 images; part 1 contains all the others, which are clearly connected internally by matches. Yet part 1 shows me a complex pyramid of smaller parts.
Why is that?
What is the difference between a part and a component in the first place?
Are only "level one" parts disconnected?

I should add that I processed the data set on a cluster of 7 machines. Is this the reason?

Please explain here or expand the reference!

Python and Java API / Re: Merge identical cameras
« on: December 13, 2019, 02:07:29 PM »
Allow me to simplify my question:
How can I merge identical cameras of two chunks and their projections the same way I can merge markers?

Python and Java API / KeyError for cams in thinned sparse point cloud
« on: December 13, 2019, 02:04:37 PM »
I want to export the per-camera UV coordinates of a heavily thinned sparse point cloud with the script provided in this thread.

As soon as the script hits a camera that does not contain any projections due to cloud thinning, I get a key error:

Code: [Select]
KeyError                                  Traceback (most recent call last)
<ipython-input-31-d4c1239ec97f> in <module>()
----> 1 projections[chunk.cameras[2]]

KeyError: <Camera '20160325_154525_IMG_102768.JPG'>

How can I let the loop ignore those cameras? I don't understand how to grab the cameras with anything other than "projections".

Two more observations:
1. The GUI Reference pane still lists projections even if there are no points at all left on the image.
2. Decimating by quality makes sense, but it creates the situation where some images have no projections left at all. How do I decimate the sparse point cloud by spatial subsampling? I actually just want to subsample the cloud to one point per area.
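A minimal way to skip such cameras is to catch the KeyError. The sketch below uses a plain dict standing in for `chunk.point_cloud.projections` and strings standing in for `Camera` objects, since the real mapping raises the same KeyError on lookup:

```python
# Plain-dict stand-in for chunk.point_cloud.projections; camera labels
# stand in for Camera objects. Cameras missing from the mapping raise
# KeyError on lookup, just like cameras that lost all projections.
projections = {"cam1": [(0.5, 0.5)], "cam3": [(0.2, 0.8)]}
cameras = ["cam1", "cam2", "cam3"]

exported = {}
for camera in cameras:
    try:
        exported[camera] = projections[camera]
    except KeyError:
        continue  # camera has no projections left after thinning: skip it
```

The try/except form avoids depending on whether the projections container supports an `in` test in your API version.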

Python and Java API / Merge identical cameras
« on: December 03, 2019, 05:45:27 PM »
I run very large projects with bad camera calibration that I split into overlapping chunks, align separately, and then merge back. They then need to be optimized to improve on my calibration and vague referencing, so I need matches shared among the components of the former chunks. This leaves me with the following simplified situation:

Chunk A (Cam1, Cam2, Cam3, Cam4) + Chunk B (Cam3, Cam4, Cam5, Cam6) =

Merged Chunk (Cam1, Cam2, Cam3, Cam4, Cam3, Cam4, Cam5, Cam6)

For reasons I don't understand, Metashape does not match (i.e., re-align after reset) the identical cameras unless I do a full alignment from scratch, which is not an option.

Linking them with control points would mean many hundreds of points, slowing down the GUI considerably. To be fair, this is what Alexey recommended in the past:

But as he points out this is not a perfect merging solution. I know the cameras are identical, so alignment isn't actually necessary.

I want to be able to merge those identical cameras the same way I can merge markers!

I thought this could be done in python, and yes you can transfer all projections from one camera to another with the following code:

Code: [Select]
projections = doc.chunk.point_cloud.projections
camera_3A = doc.chunks[0].cameras[2]
camera_3B = doc.chunks[1].cameras[0]
projections[camera_3B] = projections[camera_3A] # replaces projections even if target is empty

But I want to append, not replace the projections! How can this be done? Apparently there are no operands or write functions for projections or their dependencies:
Code: [Select]
projections[camera_3B] = projections[camera_3A] + projections[camera_3B]   # doesn't work, but this is what I want!
# then reduce the number of tie points
# then delete duplicates
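In plain Python, the intended append-then-deduplicate step could look like the sketch below, with `(track_id, x, y)` tuples standing in for real projection objects. The actual Projections container is not a plain list, so this only illustrates the logic, not the API call:

```python
# Stand-in projections: (track_id, x, y) tuples instead of real objects.
proj_a = [(10, 0.1, 0.2), (11, 0.3, 0.4)]
proj_b = [(11, 0.31, 0.41), (12, 0.5, 0.6)]

# Append both lists, then keep only the first observation per track id,
# which removes the duplicates introduced by the overlap.
merged = {}
for track_id, x, y in proj_a + proj_b:
    merged.setdefault(track_id, (x, y))
merged_list = [(t,) + xy for t, xy in sorted(merged.items())]
```

Keying the dedupe on the track id is the point: two projections of the same physical feature share a track id even when their pixel coordinates differ slightly between chunks.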

By the way, could someone once and for all clarify the relationship of
  • cameras
  • camera keys
  • keypoints
  • tiepoints
  • matches (deprecated??)
  • projections
  • tracks
  • track ids
  • points
  • sparse cloud
This is so central that there should be a document, preferably with Python code showing the relationships.
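Pending an official document, here is my current understanding as a toy sketch in plain Python. The dicts and tuples are stand-ins, not the real API objects, so treat this as a mental model only:

```python
# Toy model of the pipeline: keypoints -> matches -> tracks -> tie points.
# 2D features detected per image:
keypoints = {"camA": [(10, 20), (30, 40)], "camB": [(11, 21)]}
# matches: pairwise correspondences between (camera, keypoint index) pairs:
matches = [(("camA", 0), ("camB", 0))]
# tracks: chains of matched keypoints, one per physical surface feature;
# the dict key is the track id:
tracks = {0: [("camA", 0), ("camB", 0)],  # matched across both cameras
          1: [("camA", 1)]}               # seen in one camera only
# tie points (the sparse cloud): tracks that received a 3D position:
points = {0: (1.0, 2.0, 3.0)}
# projections: the tracks regrouped per camera, pairing each track id
# with the 2D coordinate observed in that camera:
projections = {cam: [(tid, keypoints[cam][idx])
                     for tid, obs in tracks.items()
                     for c, idx in obs if c == cam]
               for cam in keypoints}
```

In this model every tie point carries a track id and every projection refers back to one, while camera keys simply index the cameras. Not every track becomes a 3D point, which is exactly how a camera can keep projections yet lose all of its tie points after thinning.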


General / Re: Optimize overlapping chunks
« on: December 17, 2018, 02:06:24 PM »
Am I really the only one who needs to optimize several very large chunks relative to each other?

General / Optimize overlapping chunks
« on: December 13, 2018, 07:28:02 PM »
I have 25 chunks in a grid that overlap with each of their neighbors, i.e. they partly contain the same cameras. Even though I ran them all with the same intrinsic parameters, the overlapping areas don't perfectly match, since the calibration is imperfect and cannot be improved any further. Thus, the residual misfit was pushed into the extrinsics.

So, how do I
- optimize chunks relative to each other so that overlapping areas actually overlap
- merge chunks so that there are no duplicate images (or do I really have to pick the cameras manually or find them with a script?)

-> I cannot just run all chunks in one optimization step as each chunk already has 10k images, and there are 25 chunks.
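For the duplicate-camera part, matching on labels is enough for a first pass. A sketch with plain label lists standing in for each chunk's `cameras` collection (the labels here are made up):

```python
# Camera labels standing in for two neighbouring chunks' camera lists.
chunk_a = ["IMG_0001", "IMG_0002", "IMG_0003", "IMG_0004"]
chunk_b = ["IMG_0003", "IMG_0004", "IMG_0005"]

# Cameras present in both chunks are the ones to drop from one side
# after merging, so no image appears twice in the merged chunk.
shared = sorted(set(chunk_a) & set(chunk_b))
```

A set intersection scales fine even at 10k images per chunk, so the lookup itself is never the bottleneck.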

Python and Java API / Re: Numpy array to mask/image?
« on: September 14, 2018, 03:16:30 PM »
This is my updated script. It creates a mask contained in the alpha channel, but I fail to load it back into PhotoScan.
The mask always returns the full image area and disregards what I saved in the A channel. Saving the PhotoScan image, I can see the mask is contained in the image. What am I doing wrong?
Code: [Select]
import PhotoScan
import numpy as np

chunk = PhotoScan.app.document.chunk  # active chunk
scale = chunk.transform.scale
camera = chunk.cameras[0]
image = camera.photo.image()
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration)  # unscaled depth
depth_map = np.frombuffer(depth.tostring(), dtype=np.float32)

# scale array:
map_scaled = depth_map * scale

# apply threshold:
threshold = 4
mask = ((map_scaled > threshold) * 255).astype("uint8")

# write back:
mask_img = PhotoScan.Image.fromstring(mask.tobytes(), image.width, image.height, 'K', datatype='U8')
mask_obj = PhotoScan.Mask()
mask_obj.setImage(mask_img)
camera.mask = mask_obj
UPDATE: Found the error. The channel must be 'K', not 'A'. The code above has been updated accordingly.

Python and Java API / Re: Numpy array to mask/image?
« on: September 13, 2018, 03:11:26 PM »
Alexey, I have seen that post, but I don't get it.
Please, how do I convert a numpy.ndarray back to a PhotoScan image?
Thanks so much

Bug Reports / Re: Function createDifferenceMask not working correctly?
« on: September 13, 2018, 03:04:45 PM »
Maybe this is a really stupid question (though not to me):
Could you please show "the other way around", i.e. how to write a numpy array back to a PhotoScan image? I just don't get it.
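For what it's worth, the numpy side of the round trip is just a dtype cast plus a raw-bytes dump; a single-channel ('K') mask needs exactly one byte per pixel. A numpy-only sketch (the array values are made up for illustration):

```python
import numpy as np

# Fake depth map, just to have something to threshold.
w, h = 4, 3
depth_map = np.linspace(0.0, 10.0, w * h, dtype=np.float32).reshape(h, w)

# Binary mask: 0 or 255 per pixel, stored as unsigned bytes.
mask = ((depth_map > 4.0) * 255).astype("uint8")

# Raw byte string a fromstring-style image constructor consumes:
raw = mask.tobytes()
```

The PhotoScan-specific half of the conversion (an `Image.fromstring`-style constructor followed by `Mask.setImage`) then takes these bytes together with the width and height.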

Python and Java API / Re: Numpy array to mask/image?
« on: August 29, 2018, 05:40:03 PM »
Thanks Alexey,
yes, I have been staring at that post all afternoon, but I need the other way around: writing the contents of an array back into the PhotoScan image. That post only talks about extracting an image as a numpy array, right?
If I follow your instructions, I get:

Code: [Select]
PhotoScan.Image = mask

Out[60]: 2018-08-29 16:32:52 array([0, 0, 0, ..., 0, 0, 0], dtype=uint8)

Out[61]: 2018-08-29 16:33:01 array([0, 0, 0, ..., 0, 0, 0], dtype=uint8)

mask = PhotoScan.Mask()

Out[63]: 2018-08-29 16:33:43 <PhotoScan.Mask at 0x1dfc7af8>

TypeError                                 Traceback (most recent call last)
<ipython-input-65-8fdb8ae6c111> in <module>()
----> 1 mask.setImage(PhotoScan.Image)

TypeError: argument 1 must be PhotoScan.Image, not numpy.ndarray


Python and Java API / Numpy array to mask/image?
« on: August 28, 2018, 08:13:09 PM »
I want to create masks from depth maps using numpy. I haven't checked all my code yet, but first of all: how do I convert my numpy array back to a PhotoScan mask or image?

Code: [Select]
import PhotoScan
import numpy

chunk = PhotoScan.app.document.chunk  # active chunk
scale = chunk.transform.scale
camera = chunk.cameras[0]
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration)  # unscaled depth
threshold = 4

# convert to numpy array:
depth_map = numpy.frombuffer(depth.tostring(), dtype=numpy.float32)

# scale array:
map_scaled = depth_map * scale

# apply threshold (to the scaled map, not the raw one):
mask = ((map_scaled > threshold) * 255).astype("uint8")

# write back to image:
# camera.mask.image() = mask    # this is not right...

sorry for the basic question, many thanks!

Python and Java API / remove shape layers
« on: August 24, 2018, 04:50:28 PM »
My shapes are organized in layers (or groups). Right-clicking lets you remove a layer and its shapes, which is what I want.

How do you do that in python?

or, looking at my code:
Code: [Select]
for i in camgroups:
    newchunk = chunk.copy()
    newchunk.label = chunk.label + "_group_" + str(i)
    # delete the other camera groups
    # delete the other shape groups (this is the part that does not work)

And please: why are they called layers in the GUI and in Alexey's scripts, but groups in the Python API?
Many thanks!
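For the shape-group part, the filtering itself is simple once each shape's group can be tested. A stand-in sketch with dicts playing the role of Shape objects (the actual removal call differs between API versions, so check the reference for your build):

```python
# Dicts stand in for Shape objects that carry a group/layer attribute.
shapes = [{"label": "s1", "group": "layerA"},
          {"label": "s2", "group": "layerB"},
          {"label": "s3", "group": "layerA"}]
keep = "layerA"

# Split into the shapes to remove and the shapes to keep:
to_delete = [s for s in shapes if s["group"] != keep]
shapes = [s for s in shapes if s["group"] == keep]
```

The `to_delete` list is what would then be handed to whatever removal call your API version exposes.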

General / Re: Starting client from PowerShell
« on: September 05, 2017, 04:04:38 PM »
You could just throw a batch file that starts the nodes into each machine's autostart folder. Of course this means they run all the time...
Works well for us...

General / Less than 4 common tie points = still aligned?
« on: July 11, 2017, 03:38:58 PM »
If two calibrated images only have one valid tie point in common, what is that good for? Does it mean they are actually aligned to each other, or only perhaps indirectly, through a loop closure with other pictures? As I am providing camera poses, it is hard to tell.

Inferring from the requirements for manual tie point placement, one needs at least four tie points to align two images, correct?

