

Topics - tkwasnitschka

1
General / Alignment Components inaccessible to Automation
« on: November 24, 2023, 05:03:10 PM »
I constantly encounter data sets that, after alignment without georeferencing, produce up to 50 alignment components. This makes sense: the images are a sequence along a single track with some interruptions.

BUT:
- There is no way to view alignment components side by side to set corresponding points efficiently
- I see no reference to, or possibility of, accessing an alignment component through Python or any batch function
- There is no way to separate alignment components into image groups or better into chunks, making them accessible to established workflows
- There is no way to merge these components to get rid of them - they do not overlap!
- You may want to clarify the terminology here: these are alignment components, not mesh components.

My only option is to duplicate the chunk and incrementally erase all but one component, repeated as many times as there are components - manually!

PLEASE! Clarify and/or suggest a workaround!
The one thing I notice is that including camera poses helps to produce fewer components, but to what extent?
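
In the meantime, the only automatable route I see is to rebuild the components myself from the tie-point tracks and split the chunk accordingly. Below is a sketch of that idea, assuming the Metashape 2.x Python API (chunk.tie_points.projections, Projection.track_id); it uses no official component API, because I cannot find one:

Code: [Select]
import Metashape

doc = Metashape.app.document
chunk = doc.chunk
projections = chunk.tie_points.projections

# map every track id to the set of cameras that observe it
track_cams = {}
for camera in chunk.cameras:
    if not camera.transform:
        continue  # unaligned cameras belong to no component
    try:
        cam_projs = projections[camera]
    except KeyError:
        continue  # no projections stored for this camera
    for proj in cam_projs:
        track_cams.setdefault(proj.track_id, set()).add(camera)

# union-find over cameras: two cameras sharing a track are in one component
parent = {}

def find(c):
    parent.setdefault(c, c)
    while parent[c] is not c:
        parent[c] = parent[parent[c]]
        c = parent[c]
    return c

for cams in track_cams.values():
    cams = list(cams)
    root = find(cams[0])
    for other in cams[1:]:
        parent[find(other)] = root

components = {}
for camera in list(parent):
    components.setdefault(find(camera), []).append(camera)
print("found {} components".format(len(components)))

# one duplicated chunk per component, keeping only that component's cameras
for n, cams in enumerate(components.values()):
    keep = {c.label for c in cams}
    new_chunk = chunk.copy()
    new_chunk.label = "{}_component_{}".format(chunk.label, n)
    new_chunk.remove([c for c in new_chunk.cameras if c.label not in keep])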

Many thanks
Tom

2
General / Components and parts of them
« on: March 17, 2021, 06:55:12 PM »
Hi,
I have a big 10k-image project that falls apart into one component with two parts.
Part 2 is clearly disconnected from part 1 and has 22 images; part 1 contains all the others, which are clearly internally connected by matches - yet part 1 shows me a complex pyramid of smaller parts.
Why is that?
What is the difference between a part and a component in the first place?
Are only "level one parts" disconnected?

I should add that I processed the data set on a cluster of 7 machines. Is this the reason?

Please explain here, or expand the reference manual!
thanks
Tom

3
Python and Java API / KeyError for cams in thinned sparse point cloud
« on: December 13, 2019, 02:04:37 PM »
Hi,
I want to export the per-camera uv coordinates of a heavily thinned sparse point cloud with the script provided in this thread https://www.agisoft.com/forum/index.php?topic=10730.0.

As soon as the script hits a camera that does not contain any projections due to cloud thinning, I get a key error:

Code: [Select]
projections[chunk.cameras[2]]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-31-d4c1239ec97f> in <module>()
----> 1 projections[chunk.cameras[2]]

KeyError: <Camera '20160325_154525_IMG_102768.JPG'>

How can I make the loop skip those cameras? I don't understand how to access a camera's tie points through anything but "projections".
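
The only guard I can think of is catching the KeyError and moving on; a minimal sketch, assuming the same PhotoScan API as the script in the linked thread:

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk
projections = chunk.point_cloud.projections

for camera in chunk.cameras:
    try:
        cam_projections = projections[camera]
    except KeyError:
        continue  # nothing left on this camera after thinning - skip it
    for proj in cam_projections:
        print(camera.label, proj.coord)  # per-camera uv coordinates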

EDIT:
two more observations:
1. The GUI Reference pane still lists projections even if there are no points at all left on the image.
2. Decimating by quality makes sense, but it creates a situation where some images have no projections left at all. How do I decimate the sparse point cloud by spatial subsampling? I actually just want to keep "a point per area" of the cloud.
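
Regarding observation 2, the closest thing to "a point per area" I can come up with is voxel subsampling of the sparse cloud: keep the first point in each grid cell, select the rest, and remove them. This is only a sketch, assuming point.coord holds the position and that PointCloud.removeSelectedPoints() behaves as in other forum scripts:

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk
point_cloud = chunk.point_cloud
cell = 0.1  # grid cell size in chunk units - tune to taste

seen = set()
for point in point_cloud.points:
    if not point.valid:
        continue
    c = point.coord  # homogeneous vector: x, y, z, w
    key = (int(c.x // cell), int(c.y // cell), int(c.z // cell))
    if key in seen:
        point.selected = True  # another point already occupies this cell
    else:
        seen.add(key)

point_cloud.removeSelectedPoints()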

4
Python and Java API / Merge identical cameras
« on: December 03, 2019, 05:45:27 PM »
I run very large projects with bad camera calibration. I split them into overlapping chunks, align those separately, and then merge them back. The merged result then needs to be optimized to improve on my calibration and vague referencing, so I need matches shared among the components of the former chunks. This leaves me with the following simplified situation:

Chunk A (Cam1, Cam2, Cam3, Cam4) + Chunk B (Cam3, Cam4, Cam5, Cam6) =

Merged Chunk (Cam1, Cam2, Cam3, Cam4,
              Cam3, Cam4, Cam5, Cam6)

For reasons I don't understand, Metashape does not match (i.e., re-align after reset) the identical cameras unless I do a full alignment from scratch, which is not an option.

Linking them with control points would mean many hundreds of points, slowing down the GUI considerably. To be fair, this is what Alexey recommended in the past: https://www.agisoft.com/forum/index.php?topic=10097.msg46129#msg46129

But as he points out this is not a perfect merging solution. I know the cameras are identical, so alignment isn't actually necessary.

I want to be able to merge those identical cameras the same way I can merge markers!

I thought this could be done in Python, and indeed you can transfer all projections from one camera to another with the following code:

Code: [Select]
import PhotoScan

doc = PhotoScan.app.document
projections = doc.chunk.point_cloud.projections
camera_3A = doc.chunks[0].cameras[2]
camera_3B = doc.chunks[1].cameras[0]
projections[camera_3B] = projections[camera_3A]  # replaces the projections, even if the target is empty

But I want to append the projections, not replace them! How can this be done? Apparently there are no operators or write functions for projections or their dependencies:
Code: [Select]
projections[camera_3B] = projections[camera_3A].append(projections[camera_3B])  # doesn't work, but this is what I want!
# then reduce the number of tie points
# then delete the duplicates
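
One untested idea: materialize both projection lists as plain Python lists and assign the concatenation back. This assumes the Projections mapping accepts a sequence on assignment, which I have not verified:

Code: [Select]
combined = list(projections[camera_3A]) + list(projections[camera_3B])
projections[camera_3B] = combined  # assumption: assignment accepts any sequence of Projection objects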

By the way, could someone once and for all clarify the relationship of
  • cameras
  • camera keys
  • keypoints
  • tiepoints
  • matches (deprecated??)
  • projections
  • tracks
  • track ids
  • points
  • sparse cloud
This is so central that there should be a document, preferably with Python code showing the relationships.
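
To show what I mean, here is my current - possibly wrong - mental model as annotated code (PhotoScan 1.x names). In short: keypoints are per-image detections, matches join keypoints into tracks, a tie point / sparse-cloud point is a successfully triangulated track, and projections are the per-camera observations of those tracks:

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk
point_cloud = chunk.point_cloud          # the sparse cloud container

camera = chunk.cameras[0]                # a photo; camera.key is its persistent id
projections = point_cloud.projections    # per-camera lists of 2D measurements
for proj in projections[camera]:
    print(proj.coord)                    # keypoint position on this image (pixels)
    print(proj.track_id)                 # the track this measurement belongs to

# tracks that triangulate successfully become points of the sparse cloud:
for point in point_cloud.points:
    print(point.coord)                   # 3D position
    print(point.track_id)                # links the point back to its track
    print(point.valid)                   # False once filtered out
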
Thanks!

Tom

5
General / Optimize overlapping chunks
« on: December 13, 2018, 07:28:02 PM »
I have 25 chunks in a grid, each overlapping its neighbors, i.e. they partly contain the same cameras. Even though I ran them all with the same intrinsic parameters, the overlapping areas don't match perfectly, since the calibration is imperfect and cannot be improved any further; the residual misfit was pushed into the extrinsics.

So, how do I
- optimize the chunks relative to each other so that overlapping areas actually overlap
- merge the chunks so that there are no duplicate images (do I really have to pick the cameras manually, or find them with a script? see the sketch below)

-> I cannot just run all chunks in one optimization step, as each chunk already has 10k images and there are 25 chunks.
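
For the duplicate-image part, the best I can come up with is a label sweep after merging; a sketch, assuming duplicated cameras keep identical labels in the merged chunk:

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk  # the merged chunk

seen = set()
duplicates = []
for camera in chunk.cameras:
    if camera.label in seen:
        duplicates.append(camera)  # keep the first copy, drop the rest
    else:
        seen.add(camera.label)

chunk.remove(duplicates)
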
Thanks!
Tom

6
Python and Java API / Numpy array to mask/image?
« on: August 28, 2018, 08:13:09 PM »
I want to create masks from depth maps using numpy. I haven't checked all my code yet, but first of all: how do I convert my numpy array back to a PhotoScan mask or image?

Code: [Select]
import PhotoScan, numpy

chunk = PhotoScan.app.document.chunk
scale = chunk.transform.scale
camera = chunk.cameras[0]
depth = chunk.model.renderDepth(camera.transform, camera.sensor.calibration)  # unscaled depth
threshold = 4

# convert to a numpy array:
depth_arr = numpy.frombuffer(depth.tostring(), dtype=numpy.float32)

# scale the array:
depth_scaled = depth_arr * scale

# apply the threshold to the scaled depths:
mask = ((depth_scaled > threshold) * 255).astype("uint8")

# write back to the camera mask:
# camera.mask.image() = mask   # <- not valid Python; this is the part I am missing
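
For future readers, this is the direction I intend to try for the write-back, assuming Image.fromstring() and Mask.setImage() work the way they appear to in other forum scripts (untested):

Code: [Select]
# back from numpy to a single-channel 8-bit PhotoScan image:
mask_img = PhotoScan.Image.fromstring(mask.tobytes(), depth.width, depth.height, channels=' ', datatype='U8')

mask_obj = PhotoScan.Mask()
mask_obj.setImage(mask_img)
camera.mask = mask_obj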


sorry for the basic question, many thanks!
Tom

7
Python and Java API / remove shape layers
« on: August 24, 2018, 04:50:28 PM »
My shapes are organized in layers (or groups). Right-clicking a layer lets you remove it together with its shapes, which is what I want.

How do you do that in python?

or, looking at my code:
Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk
camgroups = range(len(chunk.camera_groups))

for i in camgroups:
    newchunk = chunk.copy()
    newchunk.label = chunk.label + "_group_" + str(i)
    # delete the other camera groups
    newchunk.remove(newchunk.camera_groups[i+1:])
    newchunk.remove(newchunk.camera_groups[:i])
    # delete the other shape groups (this is the part that does not work)
    newchunk.remove(newchunk.shapes.groups[i+1:])
    newchunk.remove(newchunk.shapes.groups[:i])
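
The only workaround I can think of is removing the shapes themselves instead of their groups; a sketch, assuming shape.group identifies the layer and that Shapes.remove() accepts a list of shapes - I have not verified either:

Code: [Select]
# inside the loop above, instead of removing the shape groups directly:
keep = newchunk.shapes.groups[i]
doomed = [shape for shape in newchunk.shapes if shape.group != keep]
newchunk.shapes.remove(doomed)  # assumption: Shapes.remove() takes a list of shapes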

And, please: why are they called layers in the GUI and in Alexey's scripts, but groups in the Python API?
Many thanks!
Tom

8
General / Less than 4 common tie points = still aligned?
« on: July 11, 2017, 03:38:58 PM »
If two calibrated images have only one valid tie point in common, what is that good for? Does it mean they are actually aligned to each other, or possibly just indirectly through a loop closure with other pictures? As I am providing camera poses, it is hard to tell.

Inferring from the requirements of manual tie point placement, one needs at least four tie points to align two images, correct?

Thanks
Tom

9
Feature Requests / Shape transparency
« on: July 11, 2017, 03:05:08 PM »
Would it be possible to add shape transparency in addition to the color properties?
I am plotting tie lines between matches as lines in a shape layer and would like to give each line 50% transparency.

10
Python and Java API / visualize success of matching process
« on: May 03, 2017, 03:21:28 PM »
I often end up with subsets of my chunk that appear to be matched, but on closer inspection they are merely well positioned by ground control, not actually linked to each other. So I am looking for a way to visualize matches.

My goal: Save to file the labels and XYZ coordinates for every two cameras that have at least three valid matches.
Code: [Select]
CAM1.jpg, X_coord, Y_coord, Z_coord, CAM2.jpg, X_coord, Y_coord, Z_coord
...

I intend to plot lines in between each of those two coordinates either in 2D or 3D.

I have found a snippet of code in a previous post but never got it to work: it always returns the total number of matches rather than the valid ones, and it has other issues too.

Quote
Hello James,

Currently these numbers are not accessible through Python, but if you wish I can post a sample script to do the calculations (unfortunately, for large datasets it's quite slow).

Could someone please have a look at this?
It is really important and I have not solved this after several attempts...
Here is my attempt at getting the script to run:

Code: [Select]
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

point_cloud = chunk.point_cloud
point_proj = point_cloud.projections

photo_matches = list()
total_matches = list()  # here the result will be stored

for photo in chunk.cameras:
    try:
        photo_proj = point_proj[photo]
        total = set()
        for proj in photo_proj:
            total.add(proj.index)
    except KeyError:
        total = set()  # camera without projections
    photo_matches.append(total)

for i in range(0, len(chunk.cameras) - 1):
    for j in range(i + 1, len(chunk.cameras)):
        match = photo_matches[i] & photo_matches[j]

        valid = 0
        for p_index in match:
            if point_cloud.points[p_index].valid:
                valid += 1
        total = len(match)
        invalid = total - valid

        pos_i = chunk.crs.project(chunk.transform.matrix.mulp(chunk.cameras[i].center))
        pos_j = chunk.crs.project(chunk.transform.matrix.mulp(chunk.cameras[j].center))

        if valid > 3:
            # the result: photo1, pos1, photo2, pos2, total, valid, invalid
            total_matches.append((chunk.cameras[i].label, pos_i, chunk.cameras[j].label, pos_j, total, valid, invalid))

print(total_matches)
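
To actually get the result into the file format from my goal above, I would append something like this (plain Python; the file name is made up):

Code: [Select]
with open("matches.csv", "w") as f:
    for label_i, pos_i, label_j, pos_j, total, valid, invalid in total_matches:
        f.write("{},{:.3f},{:.3f},{:.3f},{},{:.3f},{:.3f},{:.3f}\n".format(
            label_i, pos_i.x, pos_i.y, pos_i.z,
            label_j, pos_j.x, pos_j.y, pos_j.z))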

Thanks so much!
Tom

11
General / Effect of cluster mode on hardware choice
« on: March 24, 2017, 07:07:56 PM »
Much has been said about hardware and benchmarks, but how does cluster mode influence these choices?

Some questions:

1. How does cluster mode affect operations in the individual steps? What is actually split between the nodes?
2. In which steps does cluster mode result in less RAM requirements, and where is there no effect?
3. In which cases is it wise not to choose fine-level segmentation, and why?

We are going to invest in a large cluster, but I wonder whether it makes sense to have many low-spec machines for the parallel steps and one or two powerful, very large-RAM machines for the tasks that cannot be split. Comments?

12
General / NVIDIA DGX-1
« on: February 10, 2017, 03:31:48 PM »
Hi All,
we are considering acquiring an NVIDIA DGX-1 computer for our work with PhotoScan, among other things.
http://images.nvidia.com/content/technologies/deep-learning/pdf/Datasheet-DGX1.pdf

Some specs:
8x Tesla P100
512 GB RAM
dual 20-core Xeon E5-2698

Am I failing to identify some compelling reason why this is a bad idea? We would rather not deal with a multi-machine cluster instead. Yes, I am aware it costs 130k€.

Opinions welcome!
Tom

13
Python and Java API / Getting PySide/PS120 to work
« on: June 16, 2016, 05:22:17 PM »
Hello,
I tried the PS120 split-into-chunks script from the website, as well as a modified version, on both 1.2.5.2594 and 1.2.3.2331. I read that it requires PySide, which I see is part of the PhotoScan install. I have not installed anything but PhotoScan.

Nothing happens - no error message, nothing in the console. HELP!
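
For diagnosis, this is the minimal check I would run in the Console pane (assuming the PySide/Qt4 build that ships with 1.2.x):

Code: [Select]
import PySide
from PySide import QtGui

print(PySide.__version__)             # does the bundled PySide import at all?
print(QtGui.QApplication.instance())  # PhotoScan's own Qt application object, if any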

best greetings
Tom

14
General / How to detect "island alignments" in the set of photos
« on: June 10, 2016, 10:11:18 AM »
I am reconstructing generic complex objects from >10,000 images at a time. These are deep-sea hydrothermal vents; the approach is similar to body scanning, but of course far more cumbersome and technically restricted. The source imagery is interlaced video stills - don't get me started.

Photoscan aligns most of the images, but it builds "island" solutions that are positioned by the camera poses I provide yet have no tie-point connection to the rest of the model - if they had one, Photoscan would recognize and properly align them. In fact, I end up with multiple, slightly shifted "ghosts" of the same facade from multiple passes. The surveys are so complicated that it is extremely hard to work out which batch of cameras is responsible for which ghost facade.

So I usually isolate these island solutions into separate chunks, create a sparse model, set points, align the chunks, merge them, and realign the entire set.

Questions:

1. Is there a way in Photoscan to highlight and select such disconnected island solutions? Does someone have an idea for a Python script that checks for tie points between images and sorts them into image groups? (See the sketch after these questions.)

2. Is there a better way up front to make sure the passes are recognized against each other? Please bear in mind: no GPS, no man-made objects, thousands of images, so manual control point placement is not really an option.
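
On question 1, the rough idea I would try: treat cameras as nodes, connect two cameras whenever they share a tie point, and put each connected "island" into its own camera group. A sketch, assuming the PhotoScan 1.x API (point_cloud.projections, proj.index, chunk.addCameraGroup()):

Code: [Select]
import PhotoScan

chunk = PhotoScan.app.document.chunk
projections = chunk.point_cloud.projections

# which point indices does each camera see?
cam_points = {}
for camera in chunk.cameras:
    try:
        cam_points[camera] = {proj.index for proj in projections[camera]}
    except KeyError:
        cam_points[camera] = set()

# greedy flood fill: grow an island by pulling in every camera that shares a point with it
unassigned = set(chunk.cameras)
islands = []
while unassigned:
    seed = unassigned.pop()
    island, tracks = {seed}, set(cam_points[seed])
    changed = True
    while changed:
        changed = False
        for camera in list(unassigned):
            if cam_points[camera] & tracks:
                island.add(camera)
                tracks |= cam_points[camera]
                unassigned.discard(camera)
                changed = True
    islands.append(island)

# tag each island as a camera group so it can be selected in the GUI
for n, island in enumerate(islands):
    group = chunk.addCameraGroup()
    group.label = "island_{}".format(n)
    for camera in island:
        camera.group = group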

In the attached image, you can see the actual facade in the center and a darker, sparser ghost to the left of it.

Thanks!
Tom

15
General / Memory when aligning in cluster mode
« on: May 31, 2016, 07:28:12 PM »
I am using three machines, each with 64 GB of RAM and NVIDIA GTX 690 or Titan X cards. I have just matched and aligned several chunks/projects of 11k-13k images (a mix of 20 MP and 16 MP) and never got anywhere close to hitting a RAM barrier.

I would be happy to chop the chunks up for further processing using the respective Python scripts from the website.

I would like advice on what the maximum image count might be, just for the alignment step.

My issue is that these chunks do not align well using camera-based alignment, even though I precalibrated and froze the calibration during alignment, chose High accuracy and an XYZ reference, but no heading/pitch/roll. Key point limit 40,000, tie point limit 4,000.

Alexey, do you have an estimate for me? Any special circumstances in cluster mode?

Thanks very much!
